03671370 | en | ["qfin"] | 2024/03/04 16:41:22 | 2021 | https://hal.science/hal-03671370/file/S0301420721001549.pdf

Marwa Talbi
email: [email protected]
Rihab Bedoui
Christian De Peretti
Lotfi Belkacem
Is the role of precious metals as precious as they are? A vine copula and BiVaR approaches
Keywords: Precious metals, G-7 stock markets, hedge, safe haven, diversification, vine copula, BiVaR. JEL classification: C02, C58, G1
This paper revisits the international evidence on hedge, safe haven, and diversification properties of precious metals-namely gold, silver, and platinum-for the G-7 stock markets.
Therefore, this study proposes a multivariate vine copula-based GARCH model to assess the hedge and safe haven properties of precious metals and a bivariate Value at Risk-based copula (BiVaR) measure to analyse their diversification potential. Our empirical results suggest that: (1) gold is the strongest hedge and safe haven asset in almost all the G-7 stock markets; (2) silver and platinum may act as weak hedge assets; (3) silver bears the potential of a strong safe haven role only for Germany's and Italy's stock markets, whereas platinum provides a weak safe haven role for most developed stock markets; (4) precious metals appear as interesting assets for diversifying a portfolio for G-7 stock market investors. Overall, our findings provide noteworthy practical implications for investors.
Introduction
In the aftermath of the Global Financial Crisis, the interest in holding precious metals has increased, triggering a rise in their prices. This interest is due to the role of precious metals, particularly gold, as important stores of value and in diversifying risk [START_REF] Adrangi | Economic activity, inflation, and hedging: the case of gold and silver investments[END_REF][START_REF] Lucey | Seasonality, risk and return in daily COMEX gold and silver data 1982-2002[END_REF]. The literature on the hedging and safe haven potentials of precious metals, especially gold, has been among the fastest growing fields of the financial literature since the turmoil period of the last decade. Therefore, the aim of this study is to move forward the academic debate on the precious metals-stock markets nexus by examining the hedge, safe haven, and diversification properties of precious metals, namely gold, silver, and platinum, for the G-7 countries1 using a more flexible copula-based model, the vine copula, which allows a finer analysis. We also deepen the analysis of our results using the BiVaR method.
The existing literature provides evidence that gold may serve as a hedge asset against stock markets in normal periods and as a safe haven during turmoil periods (e.g., Baur and Lucey, 2010;Baur and McDermott, 2010;[START_REF] Ciner | Hedges and safe havens: An examination of stocks, bonds, gold, oil and exchange rates[END_REF][START_REF] Reboredo | Is gold a safe haven or a hedge for the US dollar? Implications for risk management[END_REF]Beckmann et al., 2015, among others). Recently, attention has shifted from gold towards other precious metals, which often have similar properties to gold. Precious metals may serve as hedge assets against various market risks such as exchange rates (e.g., [START_REF] Ciner | Hedges and safe havens: An examination of stocks, bonds, gold, oil and exchange rates[END_REF]Reboredo, 2013b;[START_REF] Bedoui | Diamonds versus precious metals: What gleams most against USD exchange rates?[END_REF][START_REF] Nguyen | Hedging and safehaven characteristics of Gold against currencies: An investigation based on multivariate dynamic copula theory[END_REF]), inflation (e.g., [START_REF] Hoang | Is Gold a Hedge Against Inflation? New Evidence from a Nonlinear ARDL Approach[END_REF][START_REF] Salisu | Assessing the inflation hedging of gold and palladium in OECD countries[END_REF]), oil prices (e.g., [START_REF] Rehman | Precious metal returns and oil shocks: A time varying connectedness approach[END_REF]) and stock market indices (e.g., [START_REF] Hood | Is gold the best hedge and a safe haven under changing stock market volatility[END_REF][START_REF] Mensi | Are Sharia stocks, gold and US Treasury hedges and/or safe havens for the oil-based GCC markets?[END_REF][START_REF] Ali | Revisiting the valuable roles of commodities for international stock markets[END_REF]). Some other studies have documented the stochastic properties of precious metals, their dynamic interlinkages, and their volatility spillovers (e.g., [START_REF] Lucey | What precious metals act as safe havens, and when? Some US evidence[END_REF][START_REF] Balcilar | The volatility effect on precious metals price returns in a stochastic volatility in mean model with time-varying parameters[END_REF][START_REF] Talbi | Dynamics and causality in distribution between spot and future precious metals: A copula approach[END_REF]). These studies are of key importance regarding information about hedging strategies for investors.
Prior studies on the hedge and safe-haven properties of precious metals for stock markets highlight a heterogeneous role of precious metals against equity movements. A number of papers, including Baur and Lucey (2010), Baur and McDermott (2010), [START_REF] Hood | Is gold the best hedge and a safe haven under changing stock market volatility[END_REF], [START_REF] Ciner | Hedges and safe havens: An examination of stocks, bonds, gold, oil and exchange rates[END_REF], [START_REF] Bredin | Does gold glitter in the long-run? Gold as a hedge and safe haven across time and investment horizon[END_REF] and [START_REF] Shahzad | Dependence of stock markets with gold and bonds under bullish and bearish market states[END_REF], have particularly focused on gold, which is traditionally accepted as a hedge and safe haven asset. More recent studies have expanded the set of potential hedges and safe havens to other precious metals (e.g., [START_REF] Lucey | What precious metals act as safe havens, and when? Some US evidence[END_REF][START_REF] Low | Diamonds vs. precious metals: What shines brightest in your investment portfolio[END_REF][START_REF] Li | Reassessing the role of precious metals as safe havens-What colour is your haven and why[END_REF][START_REF] Ali | Revisiting the valuable roles of commodities for international stock markets[END_REF]). On the whole, although the literature has documented the role of precious metals, the results are quite mixed across markets. This is somewhat to be expected, given the use of different market variables, different countries, different time periods and different methods. Therefore, this incites us to further explore this topic.
To examine the interactions between precious metals and stock markets, various econometric methods have been used, which can be divided into generalized autoregressive conditional heteroskedasticity (GARCH)-type models (e.g., [START_REF] Hood | Is gold the best hedge and a safe haven under changing stock market volatility[END_REF][START_REF] Low | Diamonds vs. precious metals: What shines brightest in your investment portfolio[END_REF]Li and[START_REF] Li | Reassessing the role of precious metals as safe havens-What colour is your haven and why[END_REF][START_REF] Wu | Economic evaluation of asymmetric and price range information in gold and general financial markets[END_REF][START_REF] Wu | Economic evaluation of asymmetric and price range information in gold and general financial markets[END_REF], vector autoregression (VAR)-type models (e.g., [START_REF] Wan | Interactions between oil and financial markets-do conditions of financial stress matter?[END_REF], wavelet models (e.g., [START_REF] Bredin | Does gold glitter in the long-run? Gold as a hedge and safe haven across time and investment horizon[END_REF], bivariate copulas ( e.g., [START_REF] Nguyen | Gold price and stock markets nexus under mixed-copulas[END_REF]) and quantile regression methods ( e.g., [START_REF] Shahzad | Dependence of stock markets with gold and bonds under bullish and bearish market states[END_REF][START_REF] Ali | Revisiting the valuable roles of commodities for international stock markets[END_REF].
In particular, the contribution of the present research consists of three main aspects. Firstly, we extend the analysis of precious metals' hedging and safe haven properties by modeling the multivariate dependence using a vine copula-based GARCH model to study whether precious metals are strong or weak safe havens and/or hedges for the stock markets. The use of copulas in higher dimensions is challenging: standard multivariate copulas, such as the multivariate Gaussian or Student-t copulas, suffer from inflexibility in modelling the dependence structure among larger numbers of variables and from parameter restrictions, so they do not allow for different dependency structures between pairs of variables.
Hence, the use of vine copulas overcomes the restrictive characteristics of bivariate copulas by providing a flexible, conditional dependence structure between the variables [START_REF] Brechmann | Risk management with high-dimensional vine copulas: an analysis of the Euro Stoxx 50[END_REF][START_REF] Ji | Portfolio diversification strategy via tail-dependence clustering and arma-garch vine copula approach[END_REF]. Therefore, the vine copula approach has been widely used in the context of time series models, risk management and so on (see the review of the financial applications of vine copulas provided by [START_REF] Aas | Pair-Copula Constructions for Financial Applications: A Review[END_REF]). In this paper, we use the information on the average dependence and on the dependence in times of extreme market conditions provided by the vine copula to assess, respectively, the hedge and safe haven properties of precious metals. Secondly, we propose a new definition of the "strong safe haven" property. In fact, the copula tail dependence coefficient can only test for a weak safe haven, since it provides a conditional probability of joint extreme price movements rather than a test for uncorrelated series. To overcome this limitation, we propose to use simulated data from the best-fitting copula model to compute the tail correlation and rigorously test for the strong safe haven property of precious metals. Finally, the BiVaR, a novel method proposed by [START_REF] Bedoui | On the study of conditional dependence structure between oil, gold and USD exchange rates[END_REF], is employed to check the diversification potential of precious metals for G-7 investors. As far as we know, we are the first to apply the copula-based BiVaR method. The importance of this measure lies in combining the copula and VaR techniques. In fact, the use of copulas allows us to construct the level curves of a two-dimensional Value at Risk and to examine, for a given risk level, the marginal rate of substitution (TMS) between the Value at Risk of the precious metal and that of the stock index. Hence, this method represents the level curves of a two-dimensional Value at Risk graphically, which gives a clearer view of the dependence structure between the variables and of their positioning with respect to the independence, comonotonicity, and anti-monotonicity cases, and thus enables us to analyse the diversification property.
The remainder of this paper is structured as follows: Section 2 presents the state of the art. Section 3 develops the data and the methodology. Section 4 reports and discusses the empirical results of our analysis. Finally, Section 5 concludes.
Literature review
In the recent literature, a stream of research emerges focusing on the role of gold and other precious metals as hedges, safe haven and diversifier assets for stock markets.
Notwithstanding the vast existing literature on precious metal markets, the gold market has received the most extensive study.
As defined in Baur and Lucey (2010), who provide the first operational definition of hedge and safe haven, an asset is considered to be a strong (weak) hedge instrument when it is negatively correlated (uncorrelated) on average with another asset. For the safe haven property, an asset is considered to be a strong (weak) safe haven instrument when it is negatively correlated (uncorrelated) during extreme market conditions. However, these hypotheses are based on linear models, which generally look at linear correlation and cannot capture rare events in the tails of distributions.
In their paper, they examine the static and time-varying relations between United States (US), United Kingdom (UK), and German stock returns and gold returns using daily data from 1995 to 2005 to evaluate gold as a hedge and a safe haven. They find that gold serves as an effective hedging tool for stocks and has a role as a safe haven in extreme stock market conditions. As an extension of Baur and Lucey (2010)'s work, Baur and McDermott (2010) study the relationship between gold and stock markets in developed and developing countries using multiple-frequency data from 1979 until 2009. They apply rolling window regression to analyse the time-varying relationship between gold returns and the world portfolio index. They find that gold serves as a hedge and safe haven only in European and US markets, but not in Australia, Canada, Japan, or the large emerging (BRIC) markets. Adopting the same methodology, [START_REF] Hood | Is gold the best hedge and a safe haven under changing stock market volatility[END_REF] study the role of precious metals (gold, silver, and platinum) relative to the Volatility Index (VIX) as a hedge and safe haven against the US stock market; they find that, unlike gold, platinum and silver serve neither as a hedge nor as a safe haven for the US stock market. Similarly, [START_REF] Arouri | World gold prices and stock returns in China: insights for hedging and diversification strategies[END_REF] use a bivariate VAR-GARCH model to study return and volatility spillovers between world gold prices and the Chinese stock market over the period from 22 March 2004 through 31 March 2011. They find significant return and volatility cross effects between the variables, and gold may serve as a safe haven for the Chinese stock market. [START_REF] Shen | The dependence structure analysis among gold price, stock price index of gold mining companies, and Shanghai composite index[END_REF] investigate the dependence structure among the gold price, the stock price index of gold mining companies, and the Shanghai Composite Index in China using bivariate copula-based GARCH models and find that gold returns have a positive correlation with stock market returns, which differs from the findings of other research (gold prices typically have a negative correlation with stock market returns). [START_REF] Kumar | Return and volatility transmission between gold and stock sectors: Application of portfolio management and hedging effectiveness[END_REF] investigates the mean and volatility transmission between gold and Indian industrial sectors. Using a generalised VAR-ADCC-BVGARCH model, he finds unidirectional and significant return spillover from gold to stock sectors and claims that gold can be considered a valuable asset class that can improve the risk-adjusted performance of a well-diversified portfolio of stocks and act as a hedge against different markets.
Applying the Dynamic Conditional Correlation (DCC) multivariate GARCH model, [START_REF] Lucey | What precious metals act as safe havens, and when? Some US evidence[END_REF] study the safe haven properties of four precious metals (gold, silver, platinum, and palladium) using daily data from 1989 to 2013. They find evidence that in some periods of time, silver and platinum can act as safe havens when gold does not, and the effect can sometimes be stronger. [START_REF] Nguyen | Gold price and stock markets nexus under mixed-copulas[END_REF] investigate the role of gold as a safe haven in eight countries: the UK, US, Indonesia, Japan, Malaysia, the Philippines, Singapore, and Thailand. Using bivariate copulas, they find that gold may be a safe haven asset during a market crash for the Malaysian, Singaporean, Thai, UK, and US markets, but not for the Indonesian, Japanese, and Philippine markets.
Expanding the work of [START_REF] Lucey | What precious metals act as safe havens, and when? Some US evidence[END_REF], [START_REF] Li | Reassessing the role of precious metals as safe havens-What colour is your haven and why[END_REF] examine the safe haven properties of precious metals versus equity market movements across a wide variety of countries using daily data from January 1994 to July 2016. Applying the standard approach as outlined in Baur and Lucey (2010) and Baur and McDermott (2010), they find that each metal may play a safe haven role against the stock market during tail events. [START_REF] Klein | Dynamic correlation of precious metals and flight-to-quality in developed markets[END_REF] examines the connection of developed markets and precious metal prices using daily data from Jan 2000 to Dec 2016. Applying DCC-GARCH model, he finds that gold and silver act as safe haven assets while platinum serves as a temporal surrogate safe haven in extreme market conditions. [START_REF] He | Is Gold a sometime Safe Haven or an Always Hedge for Equity Investor ? A Markov Swithching CAPM Approach for US and UK stock indices[END_REF] re-examines whether gold is a safe haven asset for UK and US investors.
Applying a Markov-switching CAPM (Capital asset pricing model) approach, they find that gold is consistently a hedge but that no distinct safe haven state exists between gold and UK or US stock markets. [START_REF] Junttila | Commodity marketbased hedging against stock market risk in times offinancial crisis: The case of crude oil and gold[END_REF] study the hedging property of gold and oil against stock market risk in times of financial crisis. They find that stock and gold markets become negatively correlated during times of financial crises. Thus, the gold market provides a better hedge than the oil market against stock market risks.
Recently, [START_REF] Ali | Revisiting the valuable roles of commodities for international stock markets[END_REF] re-examine the safe haven, hedge, and diversification potentials of 21 commodities (including precious metals) for 49 international stock markets. Using the cross-quantilogram approach (the quantile dependence across the whole range of quantiles), they find that precious metals in general and gold in particular provide strong safe havens for developed and frontier stock markets.
Table 1 summarises leading works, covering the period 2010 to 2020, that dealt with the hedge, safe haven, and diversification properties of precious metals. In the existing literature, linear models, GARCH-type models, and quantile regressions are frequently used methods.
Our contribution to the literature is two-fold. First, we apply a vine copula to analyse the hedge and safe haven properties. Then, we propose a novel method, the BiVaR measure, to examine the diversification potential of precious metals.
Data and methodology
Data description
This study considers daily price data for precious metals and stock indices.
Vine copula model
Copulas have found many successful applications in various empirical works to model joint distributions of random variables. 4 In his theorem, [START_REF] Sklar | Fonctions de Répartition à n Dimensions et Leurs Marges[END_REF] states that any n-dimensional multivariate distribution can be decomposed into n marginal distributions and a unique copula. More formally:

$$F(x_1, x_2, \ldots, x_n) = C\big(F_1(x_1), F_2(x_2), \ldots, F_n(x_n)\big) \qquad (1)$$

where F is the joint distribution of X_1, X_2, ..., X_n with marginal distributions F_i(x_i) for i = 1, 2, ..., n, and C : [0,1]^n → [0,1] is a copula function. Suppose that F and the F_i are differentiable. Then, the joint density function is defined as:

$$f(x_1, x_2, \ldots, x_n) = c\big(F_1(x_1), F_2(x_2), \ldots, F_n(x_n)\big) \prod_{i=1}^{n} f_i(x_i) \qquad (2)$$

where f_i(x_i) is the (unconditional) density of X_i and c is the density of the copula.

Hence, the copula function separates the joint distribution into two contributions: the marginal distributions of each variable and the copula that combines these marginal distributions into a joint distribution.
Given the rich variety of bivariate copulas, they are limited to only one or two parameters to describe the dependence structure among variables. Hence, even though it is simple to generate multivariate Elliptical or Archimedean copulas, they cannot adequately capture the dependence at the multivariate scale.5 Therefore, we may go beyond these standard multivariate copulas by using the vine copula approach, which is a more flexible alternative to capture the dependence structure among assets ([START_REF] Joe | Multivariate models and dependence concepts[END_REF]Bedford and Cooke, 2001, 2002;[START_REF] Kurowicka | Uncertainty Analysis with High Dimensional Dependence Modelling[END_REF][START_REF] Aas | Pair-Copula Constructions of Multiple Dependence[END_REF]). Technically, a vine copula consists of building a multivariate joint distribution from a cascade of unconditional and conditional bivariate copulas. It is well known that any multivariate density function can be represented as a product of unconditional and conditional densities:

$$f(x_1, \ldots, x_n) = f(x_n) \cdot f(x_{n-1} \mid x_n) \cdot f(x_{n-2} \mid x_{n-1}, x_n) \cdots f(x_1 \mid x_2, \ldots, x_n) \qquad (3)$$
Bedford and Cooke (2002) introduced two types of vine copulas: canonical vine copulas (C-vine) and drawable vine copulas (D-vine).

In the C-vine copula, one variable plays a pivotal role. The general n-dimensional C-vine copula density can be written as:

$$c(u_1, \ldots, u_n) = \prod_{j=1}^{n-1} \prod_{i=1}^{n-j} c_{j,\,j+i \mid 1,\ldots,j-1}\big(F(u_j \mid u_1, \ldots, u_{j-1}),\; F(u_{j+i} \mid u_1, \ldots, u_{j-1})\big) \qquad (4)$$

For instance, a 4-dimensional C-vine density decomposition and its hierarchical tree structure are represented as follows:

$$f(x_1, x_2, x_3, x_4) = f_1 f_2 f_3 f_4 \cdot c_{12}(F_1, F_2)\, c_{13}(F_1, F_3)\, c_{14}(F_1, F_4) \cdot c_{23|1}(F_{2|1}, F_{3|1})\, c_{24|1}(F_{2|1}, F_{4|1}) \cdot c_{34|12}(F_{3|12}, F_{4|12}) \qquad (5)$$

where F_i denotes F_i(x_i) and F_{i|j} denotes the conditional distribution F(x_i | x_j).
Similarly, D-vine copulas are constructed by choosing a specific order for the variables, in which the variables are connected in a symmetric way. The general n-dimensional D-vine copula density can be written as:

$$c(u_1, \ldots, u_n) = \prod_{j=1}^{n-1} \prod_{i=1}^{n-j} c_{i,\,i+j \mid i+1,\ldots,i+j-1}\big(F(u_i \mid u_{i+1}, \ldots, u_{i+j-1}),\; F(u_{i+j} \mid u_{i+1}, \ldots, u_{i+j-1})\big) \qquad (6)$$

We illustrate an example of a 4-dimensional D-vine density decomposition and its hierarchical tree structure as follows:

$$f(x_1, x_2, x_3, x_4) = f_1 f_2 f_3 f_4 \cdot c_{12}(F_1, F_2)\, c_{23}(F_2, F_3)\, c_{34}(F_3, F_4) \cdot c_{13|2}(F_{1|2}, F_{3|2})\, c_{24|3}(F_{2|3}, F_{4|3}) \cdot c_{14|23}(F_{1|23}, F_{4|23}) \qquad (7)$$
For the selection of the appropriate vine tree structure, pair-copula families and their parameter values, we follow the sequential procedure proposed by [START_REF] Dissmann | Selecting andestimating regular vine copulae and application to financial returns[END_REF], which is summarised in Table 2.
Table 2. Sequential method to select a vine copula model

Algorithm. Sequential method to select a vine copula model
1. Calculate the empirical Kendall's τ for all possible variable pairs.
2. Select the tree that maximises the sum of absolute values of Kendall's τ.
3. Select a copula for each pair and fit the corresponding parameters.
4. Transform the observations using the copula and parameters from Step 3 to obtain the transformed values.
5. Use the transformed observations to calculate empirical Kendall's τ for all possible pairs.
6. Proceed with Step 2. Repeat until the vine copula is fully specified.
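As an illustration, steps 1 and 2 of this sequential procedure can be sketched in a few lines of Python; the function and variable names below, and the use of copula pseudo-observations as input, are our own illustrative choices rather than the code used in the paper.

```python
import numpy as np
from scipy.stats import kendalltau

def select_cvine_root(u):
    """Steps 1-2 of the sequential selection: compute the empirical
    Kendall's tau for every pair of columns of the pseudo-observation
    matrix u (n_obs x n_vars) and return the tau matrix together with
    the index of the variable maximising the sum of absolute tau values,
    i.e. the root node of the first tree."""
    n_vars = u.shape[1]
    tau = np.eye(n_vars)
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            tau[i, j] = tau[j, i] = kendalltau(u[:, i], u[:, j])[0]
    strength = np.abs(tau).sum(axis=1)  # row sums, as in Table A3
    return tau, int(np.argmax(strength))

# Hypothetical usage with pseudo-observations whose columns are
# platinum, gold, silver and a stock index:
# tau_matrix, root = select_cvine_root(u_hat)   # root == 0 would pick platinum
```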
The estimation of vine copulas is a two-step separation procedure called the inference functions for margins (IFM) method. It implies that the joint log-likelihood is simply the sum of the univariate log-likelihoods and the copula log-likelihood, which is given as follows:

$$\ell = \sum_{i=1}^{n} \ell_i + \ell_C\big(F_1(x_1), \ldots, F_n(x_n)\big) \qquad (8)$$
To fulfil the aim of our study, we investigate the conditional dependence structure. As a first step, we filter the returns using AR(1)-GARCH(1,1) processes. 6 Then, we apply the empirical cumulative distribution function (ECDF) to the standardized residuals. As a final step, we estimate the parameters of the vine copulas, mainly the C-vine and D-vine copulas, using the sequential maximum likelihood estimation procedure described in Table 2, in order to examine the multivariate dependence and analyse the hedge and safe haven properties of precious metals against stock markets.
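A minimal sketch of this filtering step is given below, assuming the `arch` package for the AR(1)-GARCH(1,1) estimation with Student-t innovations; the package choice and the rank-based ECDF transform are our own assumptions, and any equivalent GARCH filter would do.

```python
import numpy as np
from scipy.stats import rankdata
from arch import arch_model  # assumed dependency; any AR(1)-GARCH(1,1)-t filter works

def to_pseudo_observations(returns):
    """Fit an AR(1)-GARCH(1,1) model with Student-t innovations to one
    return series and map the standardized residuals to (0,1) through
    their empirical CDF (rescaled ranks), as required before copula
    estimation.  `returns` is a pandas Series of log-returns."""
    am = arch_model(100 * returns,            # rescaled to percent, a common numerical convenience
                    mean="AR", lags=1,
                    vol="Garch", p=1, q=1, dist="t")
    res = am.fit(disp="off")
    z = np.asarray(res.std_resid, dtype=float)
    z = z[~np.isnan(z)]                        # drop the value lost to the AR(1) lag
    u = rankdata(z) / (len(z) + 1.0)           # empirical CDF transform
    return u
```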
Testing for hedge and safe haven properties
The use of copulas is crucial since it gives us information about both the average dependence and the dependence in times of extreme market movements (independent from how the marginal distributions are modelled). On one hand, the average dependence is given by correlation measures (Kendall's tau or Spearman's rho) which are obtained from the dependence parameter of the copula. On the other hand, the dependence in terms of extreme market movements is obtained from the copula tail dependence coefficients.
On the basis of copula dependence information, we can formulate the conditions under which a precious metal is characterised as a strong (weak) hedge and/or strong (weak) safe haven for the stock indexes under study as in Table 3.
Table 3. Hypothesis testing

Conditions and conclusions:
ρ_{PM,I} < 0: the precious metal is a strong hedge.
ρ_{PM,I} = 0 or strongly near to 0: the precious metal is a weak hedge.
λ_L = 0: the precious metal is a weak safe haven.
λ_L = 0 and ρ_{PM, I|I<I_q} < 0: the precious metal is a strong safe haven.

Notes: ρ_{PM,I} is the average dependence measure (given by the copula parameter or the Kendall's tau) between precious metals (PM) and stock market indexes (I), and λ_L is the lower tail dependence coefficient for the joint distribution of precious metals and stock market indexes. We define ρ_{PM, I|I<I_q} as the correlation between precious metals (PM) and stock market indexes (I) at the lower tail of the stock returns distribution, where I_q denotes the q-th percentile of I.
Based on the values of Kendall's tau, the precious metal serves as a strong hedge when Kendall's tau is negative. Meanwhile, if Kendall's tau is equal to zero or positive but near to zero, the precious metal is considered a weak hedge. Then, based on the lower tail dependence coefficient λ_L, the precious metal is a weak safe haven asset when it shows zero lower tail dependence (λ_L = 0) with the stock market index. This means that in this case investors will not experience a loss in their precious metal holdings when the stock market crashes. If λ_L > 0, this implies that there is a positive probability of concurrent losses in precious metal and stock market indexes during periods of turmoil. However, positive lower tail dependence does not strictly imply a null probability of positive precious metal returns during times of significant equity downturns, since λ_L is only a conditional probability measure rather than a tail correlation. For this reason, it is more informative to compute the correlation between stock market (I) and precious metal (PM) returns at the lower tail of the stock returns distribution. Hence, the precious metal is a strong safe haven asset for the stock market if λ_L = 0 and ρ_{PM, I|I<I_q} < 0 at the same time.
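This strong safe haven check can be illustrated with the sketch below, which simulates from a Clayton copula (one of the fitted families reported later) via the conditional inversion method and computes the correlation of the two simulated return series conditional on the stock return lying below its 1% quantile. The Student-t marginals and the parameter values are hypothetical placeholders, not the estimates of the paper.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(0)

def clayton_sample(theta, n):
    """Draw n pairs (u1, u2) from a Clayton copula with parameter theta > 0
    using the conditional inversion method."""
    u1 = rng.uniform(size=n)
    w = rng.uniform(size=n)
    u2 = ((w ** (-theta / (1.0 + theta)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
    return u1, u2

def tail_correlation(theta, nu_stock=5.0, nu_metal=5.0, q=0.01, n=10**5):
    """Simulate from the fitted copula, map the uniforms through
    (hypothetical) Student-t marginals, and compute the correlation of the
    two simulated returns conditional on the stock return lying below its
    q-th quantile -- the strong safe haven check of Table 3."""
    u_stock, u_metal = clayton_sample(theta, n)
    r_stock = student_t.ppf(u_stock, df=nu_stock)
    r_metal = student_t.ppf(u_metal, df=nu_metal)
    mask = r_stock <= np.quantile(r_stock, q)
    return float(np.corrcoef(r_stock[mask], r_metal[mask])[0, 1])

# e.g. tail_correlation(theta=0.05) for a weakly dependent stock-metal pair
```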
BiVaR based copula model
Based on the dependence structure computed using copulas, the bivariate Value at Risk (BiVaR) risk measure is implemented to examine the diversification benefits of precious metals. Copulas allow us to draw the level curves of the two-dimensional Value at Risk and to examine, for a given threshold level (α = 5%), the marginal rate of substitution (TMS) between the VaR of the two univariate risks. Hence, for given marginal distributions of precious metal and stock returns, it is possible to draw the contours corresponding to the minimum copula (anti-monotonicity case), the maximum copula (comonotonicity case), and the independence copula.

Let X_PM and X_I be the precious metal and stock market index return series with univariate distribution functions F_PM and F_I, respectively. Hence, for any threshold α in [0,1], we have:

$$\{(x_{PM}, x_{I}) :\; F_{PM}(x_{PM}) \cdot F_{I}(x_{I}) = \alpha\}, \quad \text{independence case} \qquad (9)$$
$$\{(x_{PM}, x_{I}) :\; \max\big(F_{PM}(x_{PM}) + F_{I}(x_{I}) - 1,\, 0\big) = \alpha\}, \quad \text{anti-monotonicity case} \qquad (10)$$
$$\{(x_{PM}, x_{I}) :\; \min\big(F_{PM}(x_{PM}),\, F_{I}(x_{I})\big) = \alpha\}, \quad \text{comonotonicity case} \qquad (11)$$

The level curves from the empirical copula are given by:

$$\{(x_{PM}, x_{I}) :\; C\big(F_{PM}(x_{PM}),\, F_{I}(x_{I})\big) = \alpha\}. \qquad (12)$$
The level curves are used to determine the TMS between the two univariate VaRs. The higher the level of the empirical curves (approaching the anti-monotonicity case), the more negative the dependence between precious metal and stock market index returns; hence, there is a compensation effect. Conversely, the closer the curves are to their lower limit, corresponding to the comonotonicity case (positive dependence) where the returns tend to move in the same direction, the higher the correlation between losses. The curves of the product (independence) copula correspond to the diversification case (see [START_REF] Cherubini | Value-at-risk Trade-off and Capital Allocation with Copulas[END_REF][START_REF] Bedoui | Copulas and bivariate risk measures: An application to hedge funds[END_REF]).
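For intuition, the three benchmark level curves can be plotted directly on the uniform scale (the paper draws them on the return scale through the marginal quantiles); the following matplotlib sketch is purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

alpha = 0.05                       # threshold level used in the paper
u = np.linspace(0.001, 0.999, 400)
U, V = np.meshgrid(u, u)

curves = {
    "independence: uv = alpha": U * V,
    "comonotonicity: min(u,v) = alpha": np.minimum(U, V),
    "anti-monotonicity: max(u+v-1,0) = alpha": np.maximum(U + V - 1.0, 0.0),
}

fig, ax = plt.subplots()
for label, C in curves.items():
    cs = ax.contour(U, V, C, levels=[alpha])   # one level curve per benchmark copula
    ax.clabel(cs, fmt={alpha: label}, fontsize=7)
ax.set_xlabel("F(precious metal return)")
ax.set_ylabel("F(stock index return)")
ax.set_title("BiVaR level curves at alpha = 5% (uniform scale)")
plt.show()
```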
Empirical results and discussions
This section discusses the empirical results of this study. The preliminary analyses results are firstly presented. Then, we discuss our main results about the hedge, safe haven and diversification potentials of precious metals for the G-7 stock markets.
Preliminary analyses results
The descriptive statistics for the daily returns of precious metals and equity indexes are reported in Table 4. Results show that all the means are close to zero and the standard deviations are small, indicating that the series fluctuate closely around their means. Among the precious metals, silver has the highest standard deviation while gold has the lowest, which implies that silver is the most volatile. Moreover, asymmetry and fat tails are evident for all return series; the skewness and excess kurtosis values thus reinforce the rejection of normality. The Jarque-Bera (JB) test rejects the normality hypothesis, while the Ljung-Box (LB) statistic indicates significant serial correlation in all series except the Nikkei 225, gold, and platinum. Likewise, the ARCH effect test indicates the presence of ARCH effects in all series.
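The diagnostics reported in Table 4 can be reproduced for a single return series along the following lines; the sketch assumes scipy and statsmodels and is not the code used for the paper.

```python
import numpy as np
from scipy.stats import jarque_bera, skew, kurtosis
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

def table4_diagnostics(r):
    """Compute, for one log-return series r, the Table 4 diagnostics:
    moments, the Jarque-Bera normality test, the Ljung-Box Q(20)
    autocorrelation test, and Engle's LM test for ARCH effects."""
    jb_stat, jb_pval = jarque_bera(r)
    lb = acorr_ljungbox(r, lags=[20], return_df=True)
    lm_stat, lm_pval, _, _ = het_arch(r)
    return {
        "mean": float(np.mean(r)),
        "std": float(np.std(r, ddof=1)),
        "skewness": float(skew(r)),
        "kurtosis": float(kurtosis(r, fisher=False)),   # raw (non-excess) kurtosis
        "JB": (float(jb_stat), float(jb_pval)),
        "Q(20)": (float(lb["lb_stat"].iloc[0]), float(lb["lb_pvalue"].iloc[0])),
        "ARCH LM": (float(lm_stat), float(lm_pval)),
    }
```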
Main results
In the first step, we filter the returns using an AR(1)-GARCH(1,1) model in order to obtain the residuals, since copula models require independent and identically distributed (i.i.d.) uniform data (the results of the AR-GARCH model, i.e. the parameter estimates for the marginal models of precious metals and stock market returns, are presented in Appendix A, Table A2). The standardized innovations are assumed to follow a Student-t distribution. Thereafter, we apply the vine copulas, mainly the C-vine and D-vine copulas, to the transformed standardised residuals of each asset's returns. 7
Table 5 presents the best vine copula model that fits our data using the Log Likelihood (LL) values, AIC, and BIC criteria. The results show that the D-vine structure for Canada, Japan, UK, and US markets is more appropriate than the C-vine, whereas the latter is more suitable for France, Germany, and Italy than the former.
Table 5. Vine copula model selection
7 Before estimating the copula models, we calculate the empirical Kendall's τ for all possible pairs of variables to select the tree that maximises the sum of absolute values of Kendall's τ (Appendix A, Table A3). The results show that platinum has the strongest dependency in terms of the empirical value of pairwise Kendall's tau; hence, we consider platinum as the first root node. Thereby, the order of variables is as follows. For France, Canada, and the US, the order of variables is: Platinum (order 1), Gold (order 2), Silver (order 3), and the Stock Index (order 4). For the rest of the countries, the order is: Platinum (order 1), Silver (order 2), Gold (order 3), and the Stock Index (order 4). Tables 6-7 present the results of the parameter estimation of the selected C-vine and D-vine copulas for each of the G-7 stock markets.
For France, Germany, and Italy, the first tree of Table 6 shows symmetric upper and lower tail dependence between all pairs, indicating similar dependence during upward and downward periods and an average dependence near to zero, particularly between platinum and the stock market indexes. This result indicates that platinum may serve as a weak hedge for French, German, and Italian investors. Taking the second tree, results show that gold may serve as both a hedge and a safe haven in the stock market of France, and silver may have the role of a hedge and a safe haven asset in the stock markets of Germany and Italy. Finally, the third tree shows zero upper tail dependence for France, Germany, and Italy, a weak dependence on average for France, and negative dependence on average for Germany and Italy. This result reveals that silver is a weak hedge and safe haven asset for French investors, and gold serves as a strong hedge and safe haven asset for German and Italian investors. In the first tree of Table 7, we observe symmetric upper and lower tail dependence given by the t-copula. We also observe a dependence on average strongly near to zero between silver and stock market indexes for Canada and the US and between gold and stock market indexes for the UK and Japan, indicating that silver (gold) is only a weak hedge asset for Canadian and American (UK and Japanese) investors. In the second tree, results indicate that gold may serve as a strong hedge and a safe haven asset (λ_L = 0) against the stock markets of Canada and the US. However, silver is only a weak hedge for Japan and the UK. Finally, the third tree shows weak dependence on average between platinum and stock market indexes, confirming the role of platinum as a weak hedge for the Canada, US, UK, and Japan stock markets. Regarding lower tail dependence, results indicate zero tail dependence for Canada and the US, indicating that platinum is a weak safe haven for Canadian and American investors.
Table 7. Results of estimated parameters for D-vine copulas
Notes: P = platinum, G = gold, S = silver and I = stock index for each country. C_{P,G} denotes the copula between platinum and gold; C_{G,S|P} denotes the copula between gold and silver given platinum. τ is the Kendall's tau of the specified copula, θ is the copula parameter, and ν is the degree of freedom.
Canada
Japan
In the next step, we estimate the tail correlation between precious metals and stock market index returns to find out whether precious metals are strong safe haven assets or just weak safe haven assets for the stock markets under study. To that end, we performed a Monte Carlo simulation with N = 10^5 draws from the joint distributions characterised by the best-fitting copula functions for each country where precious metals are shown to be weak safe haven assets. Then, we compute the correlation between precious metals and stock market index returns at the 1% tail of stock market returns. Notes (Table 6): P = platinum, G = gold, S = silver and I = stock index for each country. C_{P,G} denotes the copula between platinum and gold; C_{P,S|G} denotes the copula between platinum and silver given gold. τ is the Kendall's tau of the specified copula, θ is the copula parameter, and ν is the degree of freedom.
Table 9 summarises our empirical findings regarding hedge and safe haven properties of precious metals.
Table 9. Hedge and safe haven analysis results summary
In Fig 2 we present the 95% level curves of the BiVaR between precious metals and the S&P500 index. As we can see, the 95% level curve of the empirical copula is close to the level curve of the product (independence) copula, which means that the losses for precious metals and the stock market are uncorrelated.
As expected, the low correlations between the precious metals and equity market indices support the diversification properties of metals. Thus, in order to benefit from diversification, it is preferable to hold precious metals and equity indexes in the same portfolio. This result is similar for the other stock markets. 8
8 The other BiVaR level curves are contained in Appendix A.
Discussion
This study provides additional insights into the literature debate on the interaction between precious metals and stock markets, with a special concern for the valuable roles of precious metals, namely gold, silver, and platinum, as hedge, safe haven, and diversification assets. Indeed, this study adopts a combination of copula and VaR techniques. On one hand, we apply vine copulas to assess the hedge and safe haven properties of precious metals for the G-7 stock markets and propose a new definition of a "strong safe haven" asset by computing the tail correlation using simulated data from the best-fitting copula model. Our estimation results show that gold serves as a strong hedge and safe haven for European countries as well as the US and Canada, but only a weak hedge for the UK and Japan. These findings are in line with previous studies (Baur and [START_REF] Baur | Is gold a safe haven? International evidence[END_REF][START_REF] Bredin | Does gold glitter in the long-run? Gold as a hedge and safe haven across time and investment horizon[END_REF]). Gold is considered a safe haven asset for several reasons. To start with, gold served historically as a currency and still remains a monetary asset. Second, it is the most liquid precious metal and the easiest to trade. Also, our results confirm that gold does not comove with stocks during extreme market conditions. Last but not least, it is an international asset, and its value is independent of the decisions of any particular State.
Regarding silver, our results reveal that it may act as a strong hedge and safe haven asset in the German and Italian stock markets, which means that it may be a suitable and affordable alternative safe haven since it is cheaper than gold. However, silver is not considered a safe haven asset for the rest of the G-7 stock markets since it is more thinly traded, making it more volatile and illiquid. For the case of platinum, it is a weak hedge for all G-7 stock markets and only a weak safe haven for the American, Canadian, and Japanese stock markets. As we know, an asset is considered to be a safe haven when its market is extremely liquid, and platinum is relatively illiquid. This can explain the fact that platinum cannot be a safe haven for most of the G-7 stock markets. However, it is a cheap alternative to gold and may be profitable for investors if the price of gold continues to rise. On the other hand, we apply a novel method, the BiVaR based copula method, to analyse the diversification potential of precious metals.

Fig 2. BiVaR level curves between S&P 500 and Precious metals
Our findings confirm that precious metals provide a useful means of diversification for G-7 investors.
Overall, our results suggest that there is some degree of heterogeneity regarding the role of precious metals between the G7 countries, which is due to the fact that each country has its own financial risk exposure.
To sum up, even though gold acts as a better hedge and safe haven for the G-7 stock markets, investors can find a valuable investment benefit in silver and platinum with different degrees (weak or strong).
Conclusions
In this study, we analyse the hedge, safe haven, and diversification potential of precious metals-namely gold, silver, and platinum-for the G-7 stock markets. Indeed, the vine copula method is used to test the hedge and safe haven hypotheses and the BiVaR is applied to assess the diversification benefits of precious metals. Our empirical results show that precious metal hedge and safe haven behaviors vary by country.
First, gold provides the strongest safe haven property for all G-7 countries, which is consistent with previous literature. This result means that gold may be used to offset losses in equity markets during turmoil periods. Also, silver bears the potential of a strong safe haven role for German and Italian stock markets. However, platinum provides a weak safe haven role for most developed markets. Furthermore, in line with existing studies, our study suggests that gold has a strong hedging property in developed stock markets. For silver and platinum, results show that they may act as weak hedge assets. Finally, the results of the BiVaR analysis argue that all precious metals exhibit diversification benefits for G-7 stock markets investors.
Our findings provide a noteworthy practical implication for investors in the G-7 countries in building their investment strategies. We suggest that investors may hedge their equity investments in normal times by investing in these precious metals and ensure their portfolios from losses during periods of turbulence by investing in gold.
Further research might analyze the out-of-sample forecasting of expected returns in precious metals and how precious metals investment should fit into a diversified portfolio.
Survival version of copula

$$\hat{C}(u, v) = u + v - 1 + C(1 - u,\, 1 - v)$$
Dependence measures

The Kendall's tau can be written as a function of the copula as follows:

$$\tau = 4 \int_{0}^{1}\!\!\int_{0}^{1} C(u, v)\, dC(u, v) - 1$$

Notes: λ_L and λ_U denote the lower and upper tail dependence, respectively. For the Gaussian copula, Φ^{-1}(u) and Φ^{-1}(v) are the standard normal quantile functions and Φ_ρ is the bivariate standard normal cumulative distribution function with correlation parameter ρ. For the t-copula, t_ν^{-1}(u) and t_ν^{-1}(v) are the quantile functions of the univariate Student-t distribution and T is the bivariate Student-t cumulative distribution function with ν degrees of freedom and correlation parameter ρ. For the SJC copula, κ = 1/log₂(2 − τ_U) and γ = −1/log₂(τ_L), where τ_U and τ_L denote the upper and lower tail dependence of the SJC and Joe-Clayton copulas.
Table A 2. Parameter estimates for marginal models of precious metals and stock market returns
The AR(1)-GARCH(1,1) model can be written as:

$$r_t = \mu + \phi\, r_{t-1} + \varepsilon_t, \qquad \varepsilon_t = z_t \sigma_t, \qquad z_t \sim \text{Student-}t(\nu), \qquad \sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2,$$

where ω > 0, α, β ≥ 0, r_t are the logarithmic returns at time t, μ is a constant term of the return equation, ε_t represents the return residuals, and z_t are the standardised residuals, which follow a Student-t distribution with ν degrees of freedom.
CAC40
Fig 1 displays the dynamics between precious metals and the S&P 500 index prices during the sampling period. 3 It reflects that precious metals and the US stock market are poorly correlated. The same is true for the other markets shown in Fig. A 1 in the Appendix. For almost all countries, a falling stock market results in a rise in precious metals demand.
Fig 1. Precious metals and S&P500 dynamics
The upper (right) and lower (left) tail dependence measures are, respectively,
Fig. A1. Precious metals and stock market indexes dynamics
Table 1. Literature review summary

| Authors | Precious metals | Stock markets data | Examined property | Model | Findings |
|---|---|---|---|---|---|
| Baur and Lucey (2010) | Gold | Germany, US and UK | Hedge and safe haven | Asymmetric GARCH | Gold is a hedge and a safe haven |
| Baur and McDermott (2010) | Gold | Developed and emerging markets | Hedge and safe haven | Quantile regression approach | Gold is a hedge and a safe haven in Europe and the US and a strong safe haven in developed countries |
| Ibrahim and Baharom (2011) | Gold | Malaysia | Diversifier, hedge and safe haven | EGARCH model | Gold is a diversifier |
| Ciner et al. (2013) | Gold | US and UK | Safe haven | DCC model | Gold is a safe haven |
| Hood and Malik (2013) | Gold, silver and platinum | US | Hedge and safe haven | GARCH model | Gold serves as hedge and weak safe haven |
| Flavin et al. (2014) | Gold | US | Safe haven | Regime-switching model | Gold is a safe haven |
| Bredin et al. (2015) | Gold | Germany, US and UK | Hedge and safe haven | Wavelet analysis | Gold is a hedge and a safe haven |
| Arouri et al. (2015) | Gold | China | Diversifier, hedge and safe haven | VAR-GARCH | Gold is a hedge and a safe haven |
| Lucey and Li (2015) | Precious metals | US | Safe haven | DCC model | Silver, platinum and palladium serve as safe haven assets |
| Chkili (2016) | Gold | BRICS | Hedge and safe haven | Asymmetric DCC model | Gold is a hedge and a safe haven |
| Mensi et al. (2016) | Gold | Gulf | Hedge and safe haven | Quantile regression and wavelet decomposition | Gold is a strong hedge and safe haven at various investment horizons |
| Low et al. (2016) | Gold, silver, platinum and palladium | Brazil, Australia, China, Germany, France, UK and US | Hedge and safe haven | GJR-GARCH model | Gold, silver, platinum and palladium serve as hedges and safe haven assets |
| Nguyen et al. (2016) | Gold | Japan, Singapore, Malaysia, Thailand, Philippines, US and UK | Hedge and safe haven | Copula | Gold serves as a safe haven asset for US, UK, Singapore and Thailand |
| Li and Lucey (2017) | Gold, silver, platinum and palladium | Developed and emerging markets | Safe haven | Asymmetric GARCH | Precious metals serve as safe haven assets for all countries under study, clustered during some periods |
| Shahzad et al. (2017) | Gold | Developed and Eurozone peripheral | Diversifier, hedge and safe haven | Quantile-on-quantile method | Gold is a strong hedge and a diversifier asset |
| Bekiros et al. (2017) | Gold | BRICS | Diversifier, hedge and safe haven | Multi-scale wavelet copula-GARCH | Gold is a diversifier asset |
| Wu and Chiu (2017) | Gold | US | Diversifier and safe haven | Asymmetric GARCH model | Gold is a safe haven |
| Wen and Cheng (2018) | Gold | BRICS, Chechnya, Malaysia, and Thailand | Safe haven | Copula | Gold is a safe haven for emerging markets |
| Raza et al. (2019) | Gold | Developed and emerging markets, Europe, Asia Pacific and Islamic stock markets | Hedge and diversifier | DCC model | Gold serves as a hedge and diversifier asset |
| Ali et al. (2020) | Gold, silver, platinum and palladium | 49 international stock markets | Safe haven, hedge, and diversification | Cross-quantilogram approach | Precious metals and gold in particular provide strong safe havens for developed and frontier stock markets |
Table 4. Descriptive statistics for log-returns

| | CAC40 | DAX | FTSE100 | FTSE MIB | NIKKEI225 | S&P/TSX | S&P500 | Gold | Silver | Platinum |
|---|---|---|---|---|---|---|---|---|---|---|
| Mean (10^-2) | 0.0102 | 0.0255 | 0.0054 | -0.0016 | 0.0211 | -0.1195 | 0.0232 | 0.0370 | 0.0270 | 0.0178 |
| Std. Dev | 0.0155 | 0.0156 | 0.0134 | 0.0167 | 0.0140 | 0.0983 | 0.0115 | 0.0110 | 0.0200 | 0.0148 |
| Max | 0.1214 | 0.1236 | 0.1221 | 0.1238 | 0.1164 | 0.0992 | 0.1095 | 0.0684 | 0.1828 | 0.0843 |
| Min | -0.117 | -0.0960 | -0.1150 | -0.1542 | -0.1211 | -6.6025 | -0.0947 | -0.0960 | -0.3535 | -0.1728 |
| Skewness | 0.0926 | 0.0573 | -0.0203 | -0.1592 | -0.4742 | -0.5417 | -0.2356 | -0.3902 | -1.7524 | -0.9114 |
| Kurtosis | 8.8223 | 8.2855 | 10.7229 | 9.1053 | 9.6310 | 11.2394 | 11.3305 | 7.5531 | 31.2669 | 12.1964 |
| JB stat | 4757.6* | 3918.7* | 8362.7* | 5240.3* | 6291.1* | 9682.9* | 9761.1* | 2992* | 113751* | 12323.7* |
| Q(20) | 63.41* | 40.09* | 81.20* | 46.17* | 22.31 | 68.80* | 73.77* | 26.53 | 84.10* | 20.89 |
| LM stat | 198.79* | 171.45* | 354.85* | 114.19* | 243.51* | 345.60* | 365.35* | 40.81* | 117.90* | 155.82* |

Notes: Std. Dev denotes the standard deviation, JB denotes the Jarque-Bera statistic for normality testing, Q(20) denotes the Ljung-Box statistic for autocorrelation testing, and LM stat denotes Engle's LM test statistic for heteroskedasticity testing. (*) indicates rejection of the null hypothesis at the 5% level.
Table 6. Results of estimated parameters for C-vine copulas

France
Table 8. Correlation between stock and precious metals returns at 1% tail of stock returns

| Country | Precious metal | Copula | Tail correlation | Conclusion |
|---|---|---|---|---|
| France | Gold | Clayton | -0.0005 | Strong safe haven |
| Germany | Gold | Rotated Gumbel | -0.0001 | Strong safe haven |
| Germany | Silver | Rotated Joe | -0.0003 | Strong safe haven |
| Italy | Gold | Rotated Gumbel | -0.0021 | Strong safe haven |
| Italy | Silver | Rotated Joe | -0.0014 | Strong safe haven |
| Italy | Platinum | Student-t | 0.0021 | Weak safe haven |
| Canada | Gold | Rotated Joe | -0.0010 | Strong safe haven |
| Canada | Platinum | Survival Clayton | 0.0002 | Weak safe haven |
| US | Gold | Gaussian | -0.0001 | Strong safe haven |
| US | Platinum | Rotated Joe | 0.0042 | Weak safe haven |

As shown in Table 8, the tail correlation is negative only for gold and silver. Hence, based on our definition of a strong safe haven asset, we conclude that gold may act as a strong safe haven instrument against extreme losses in the France, Germany, Italy, Canada, and US stock markets. Silver is a strong safe haven asset in Germany and Italy. However, platinum is a weak safe haven asset against extreme losses in the Canada and US stock markets.

Table 7 (fragment):
Tree 1: C_{P,S}, t-Student, τ = 0.39, θ = 0.5805, ν = 8.1081, λ_L = λ_U = 0.1540
Tree 1: C_{S,G}, t-Student, τ = 0.40, θ = 0.5842, ν = 10.601, λ_L = λ_U = 0.1074
Tree 1: C_{G,I}, t-Student, τ = 0.02, θ = 0.0379, ν = 8.7998, λ_L = λ_U = 0.0133
Tree 2: C_{P,G|S}, t-Student, τ = 0.25, θ = 0.3883, ν = 11.645, λ_L = λ_U = 0.0350
Tree 2: C_{S,I|G}, Survival Gumbel, τ = 0.06, θ = 1.0617, λ_L = 0.0789, λ_U = 0
Tree 3: C_{P,I|S,G}, t-Student, τ = 0.09, θ = 0.1433, ν = 19.984, λ_L = λ_U = 0.0007
Table A3. Empirical Kendall's tau matrices for G-7 countries
France
CAC 40 Gold Silver Platinum
CAC 40 1 -0.02249074 0.03375731 0.08429327
Gold -0.02249074 1 0.39800740 0.41186940
Silver 0.03375731 0.39800740 1 0.39854344
Platinum 0.08429327 0.41186940 0.39854344 1
Sum 1,14054132 1,83236754 1,830308 1,89470611
Germany
DAX Gold Silver Platinum
DAX 1 -0.02068612 0.04223904 0.08117698
Gold -0.02068612 1 0.39800740 0.41186940
Silver 0.04223904 0.39800740 1 0.39854344
Platinum 0.08117698 0.41186940 0.39854344 1
Sum 1,14410214 1,8305629 1,83878988 1,89158982
UK
FTSE100 Gold Silver Platinum
FTSE100 1 0.01770941 0.06112196 0.1128933
Gold 0.01770941 1 0.39800740 0.4118694
Silver 0.06112196 0.39800740 1 0.3985434
Platinum 0.11289327 0.41186940 0.39854344 1
Sum 1,19172464 1,827586 1,8576728 1,923306
Italy
FTSE MIB Gold Silver Platinum
FTSE MIB 1 -0.01972463 0.03392975 0.08201692
Gold -0.01972463 1 0.39800740 0.41186940
Silver 0.03392975 0.39800740 1 0.39854344
Platinum 0.08201692 0.41186940 0.39854344 1
Sum 1,135671 1,829601 1,83048059 1,89242976
Japan
NIKKEI225 Gold Silver Platinum
NIKKEI225 1 0.02487169 0.05709541 0.09598458
Gold 0.02487169 1 0.39800740 0.41186940
Silver 0.05709541 0.39800740 1 0.39854344
Platinum 0.09598458 0.41186940 0.39854344 1
Sum 1,17795168 1,83474849 1,85364625 1,90639742
Canada
S&P/TSX Gold Silver Platinum
S&P/TSX 1 -0.002292608 0.005024797 0.008544629
Gold -0.002292608 1 0.398007396 0.411869405
Silver 0.005024797 0.398007396 1 0.398543445
Platinum 0.008544629 0.411869405 0.398543445 1
Sum 1,015862034 1,812169409 1,801575638 1,818957479
US
S&P500 Gold Silver Platinum
S&P500 1 -0.006665807 0.01759668 0.06968461
Gold -0.006665807 1 0.39800740 0.41186940
Silver 0.017596684 0.398007396 1 0.39854344
Platinum 0.069684607 0.411869405 0.39854344 1
Sum 1,093947098 1,816542608 1,81414752 1,88009745
The G-7 countries are Canada, France, Germany, Italy, Japan, U.K and U.S
The fixing occurs twice a day, except for silver, which is fixed at noon each day. The price fixing done in the morning is called the AM Fix, while the afternoon fixing is called the PM Fix.
Plots of precious metals and the other stock market indexes are reported in Appendix A, Fig. A1.
We refer the reader to [START_REF] Joe | Multivariate models and dependence concepts[END_REF] and Nelson (2006) for more details about copulas.
The main characteristics of the copula functions used in this study are summarised in Appendix A, Table A1.
We refer readers to Table A2 in Appendix A.
Fig. A1 (Continued)
Acknowledgments
This work was financially supported by the "PHC Utique" programme of the French Ministry of Foreign Affairs and Ministry of Higher Education and Research and the Tunisian Ministry of Higher Education and Scientific Research in the CMCU project number 18G0411.
Rotated version of copula
There are three rotated forms, with angles 90 degrees, 180 degrees, and 270 degrees, defined as follows:
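A common way to state these rotated forms, given here as a reference since conventions for the 90- and 270-degree rotations vary across references (the 180-degree rotation coincides with the survival copula above), is:

$$C_{90}(u, v) = v - C(1 - u,\; v), \qquad C_{180}(u, v) = u + v - 1 + C(1 - u,\; 1 - v), \qquad C_{270}(u, v) = u - C(u,\; 1 - v)$$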
The results in Table A2 show that the α coefficients, which measure the adjustment to past shocks, and the β coefficients, which measure the volatility persistence of the process, are significant for all series, which indicates that the conditional volatility is persistent over time and is past-dependent. As usual, all series are described by significant GARCH effects. The Ljung-Box (Q statistic) and ARCH (LM statistic) statistics indicate that neither autocorrelation nor ARCH effects remained in the residuals of the marginal models. We also checked the adequacy of the Student-t distribution model, testing the null hypothesis that the standardised model residuals were uniform (0,1) by comparing the empirical and theoretical distribution functions using the Kolmogorov-Smirnov test. The p-values for those tests indicate that the null hypothesis could not be rejected at the 5% significance level for any of the marginal models. Hence, the marginal models are correctly specified, and the copula model can correctly capture dependence between precious metals and equity markets.
03208691 | en | ["shs.eco"] | 2024/03/04 16:41:22 | 2021 | https://hal.science/hal-03208691/file/S0144818821000144.pdf

Maxime Charreire
email: [email protected]
Eric Langlais
email: [email protected].
Economix
Should environment be a concern for competition policy when firms face environmental liability?
Keywords: Strict liability, negligence, damages apportionment rules, market share liability, environmental liability, Cournot oligopoly, competition policy K13, K32, L13, L49
In the recent period, more and more voices have called for unconventional competition policies as a way to achieve higher environmental investments from firms. This paper shows that this objective may come into conflict with those of environmental liability laws. We introduce a basic oligopoly model where firms produce a joint and indivisible environmental harm as a by-product of their output.
We first analyze the effects on the equilibrium of alternative designs in environmental liability law; secondly, we discuss the rationale for "non-conventional" competition policies, i.e. policies more concerned with public interest such as the preservation of the environment (as well as human health and so on). We study firms' decisions of care and output under various liability regimes (strict liability vs negligence) associated with alternative damages apportionment rules (per capita vs market share rule), and in some cases with damages multipliers. We find that basing an environmental liability law on the combination of strict liability, the per capita rule, and an "optimal" damages multiplier is consistent with a conservative competition policy focused on consumer surplus, since weakening firms' market power also increases aggregate expenditures in environment preservation and social welfare. In contrast, a shift to the market share rule, or to a negligence regime, may be consistent with a restriction of competition, since firms' entry may instead lead to a decrease in aggregate environmental expenditures and losses of social welfare. Nevertheless, the fine tuning of the policy requires specific information from a Competition Authority, which we discuss as well.
Introduction
In the very recent period, competition policy has been advocated as a means to achieve environmental preservation. Before Ursula von der Leyen, the new President of the European Commission, announced the issue of environmental sustainability as a priority (European Green Deal), the European Parliament in February 2019 asked for an adaptation of competition policies to take into account issues of public interest such as environmental sustainability, corporate social responsibility and so on. 1 The argument is that "competition law may be an obstacle to competition restraints that would nonetheless promote welfare, by ensuring more sustainable consumption and production for instance [...] When production involves a negative externality, such as a harmful by-product of output (e.g. pollution) or the decentralized use of a common resource, then a restraint of trade among competitors and the ensuing drop in output will necessarily limit the negative externality (see Cosnita-Langlais (2020) page 1)". In this paper, we contribute to this debate, analyzing possible designs for environmental liability laws, considering how firms' decisions are constrained by these alternative environmental liability laws, and finally discussing the consequences for (non-conventional) competition policies.
We consider the realistic situation where firms in an oligopoly produce a joint and indivisible environmental harm as the by-product of their output. Aside from expenditures in productive inputs and the precaution cost, the existence of environmental liability law compels firms to bear supplementary costs that reflect their liability burden. The characteristics of this liability cost for environmental harm reflect the specific design of the law: the latter may be described as the combination of a liability regime (negligence vs strict liability) that requires or not a targeted precautionary activity to avoid any liability burden, a damages apportionment rule (per capita vs market share rule) that defines the way total damages for harm done to the environment are allocated among responsible firms, and a damages multiplier (loosely speaking, punitive damages) that expands the effective damages paid above the observed value of the environmental damages suffered. Here, however, we consider a basic competition policy focused on consumer surplus and the expansion of aggregate output thanks to firms' entry on the market. 2 Our concern is twofold: On the one hand, do environmental liability laws give efficient incentives to firms for the protection of the environment, or at least, is there a specific legal design that dominates the others according to the Social Welfare criterion? On the other hand, to what extent are environmental liability laws constraining competition policy, justifying non-conventional, more lenient actions with the purpose of increasing firms' expenditures in environment preservation? The analytical framework we use to tackle these issues is based on a symmetric oligopoly à la Cournot, where firms produce a homogeneous good at a constant marginal cost. The cost of precautionary measures is modeled as a fixed cost with respect to production, and the liability cost as a function of the expected environmental harm specific to the design of the liability law. The expected environmental harm is related both to expenditures in precaution and production, and increases with the industry output at a more-than-proportional rate.
This "cumulative effect" [START_REF] Daughety | Cumulative Harm and Resilient Liability Rules for Product Markets[END_REF] is relevant in contexts where environmental harms are the result of nonlinear effects, i.e. responses more than proportional to the exposure to dangerous/toxic substances, and/or because harms can only be observed after lag times and long latency periods, once a serious environmental deterioration is achieved. 3 Regarding our first research question, we show that environmental liability laws generally fail in achieving efficient care expenditures. Firms follow suboptimal decision rules in the setting of precautionary measures as well as in the choice of output, and thus reach equilibrium levels that may be excessive as well as insufficient with regard to socially optimal ones. The introduction of optimal damages multipliers under strict liability allows firms to adopt efficient rules in care activity, but the outcome at equilibrium is nevertheless that care as well as output are lower than what is required for Pareto efficiency. Under negligence (with a standard of care), firms face efficient incentives for care activities, but choose excessive levels of output and care. The main implication of this part of the paper is that environmental liability laws, alone, cannot attain the social optimum in a context where care and output decisions are interrelated.
With regards to our second research question, we analyze how the speci…c design of environmental liability laws is constraining the objective of competition policy. A law based on strict liability associated with the per capita rule, and an optimal damages multiplier, has an appealing feature for a Competition Authority. In implementing the law, Court actions do not impinge on its domain: a standard/conservative competition policy will succeed in increasing consumer surplus thanks to the expansion of aggregate output, and this will be accompanied by an increase in aggregare care expenditures. Moreover, such a policy is welfare improving, and should market structure be closer to perfect competition then the equilibrium would coincide with the social optimum. In contrast, we show that an environmental liability law based on the market share rule under strict liability, or negligence with a standard of care, does not verify this property. Such designs translate to …rms a structure of liability cost that develops anti-competitive e¤ects, as it increases with the number of …rms on the market. Thus, …rms' entry may lead to a cut in individual output and care decisions large enough to produce a decrease of the aggregate output and care levels, despite more …rms compete on the market. Nonetheless, Social Welfare under such laws is maximized for a …nite number of …rms; in contrast, as the market structure becomes closer to perfect competition, the aggregate care and output levels fall short of their optimal values. In all, strict liability with the per capita rule and optimal damages multipliers appears as an "ideal" environmental liability law, a kind of …rst best design, and is consistent with a standard competition policy dedicated to the improvement of competition on the market. This ideal policy mix requires no speci…c coordination between Courts and the Competition Authority. In turn, alternative designs for environmental liability law put stronger constraints on …rms as well as a Competition Authority, that may justify non-conventional competition policies, limiting …rms'entry, as a kind of second best solution. However, to reach the …ne tuning of such orientation, a Competition Authority needs speci…c information on Courts' behavior as well as care technologies. To sum up, the key point for a Competition Authority is the knowledge of the impact of liability law on …rms production costs, since this cost depends on the liability regime as well as the sharing rule of damages between …rms.
The former results are obtained under the implicit assumption that firms make positive profits at equilibrium. In this perspective we also show that under the market share rule firms make a higher profit than under the per capita rule (for any given number of firms, and the optimal damages multiplier). This means that regarding the issue of firms' entry, both rules exert different private incentives on firms, which may potentially translate into rule-specific consequences at the long run equilibrium: the zero profit condition is met with more firms on the market under the market share rule than under the per capita rule.

However, we show that in such a long run equilibrium the aggregate output, the aggregate care expenditures as well as the level of Social Welfare obtained under the per capita rule are always larger than under the market share rule.

Section 2 reviews the literature. Section 3 introduces the model and solves the social optimum. Section 4 analyzes the equilibrium of the industry under Cournot competition, when strict liability is associated with either the per capita or the market share rule. We study whether the combination of a damages multiplier and firms' entry has the potential to recover the social optimum. Section 5 considers the implications of a shift from strict liability to the negligence rule. Section 6 affords some robustness checks and discusses the implications for competition policy. Section 7 concludes.
2 Literature review

The central issue of the paper being the design of environmental liability laws, it is worth starting with some institutional and legal considerations, and existing environmental laws. The Comprehensive Environmental Response, Compensation, and Liability Act -CERCLA -in the USA was adopted by Congress in 1980 (and amended by the Superfund Amendments and Reauthorization Act of 1986) as a tool allowing to clean up uncontrolled or abandoned hazardous-waste sites existing throughout the United States, as well as accidents, spills, and other emergency releases of pollutants and contaminants into the environment. All activities targeted by the Act are subject to strict liability, and in cases involving multiple responsible parties, liability is joint and several.4 But CERCLA is silent on issues in relation with damages apportionment. The Environmental Liability Directive of the European Union was passed for similar reasons, stating in its preamble that: "There are currently many contaminated sites in the Community, posing significant health risks, and the loss of biodiversity has dramatically accelerated over the last decades. Failure to act could result in increased site contamination and greater loss of biodiversity in the future" (Environmental Liability Directive 2004/35/CE of the European Parliament and the Council, alinea (1) page 2). Like CERCLA, the European Directive introduces a distinction between operations and activities that are subject to strict liability, all being listed in its Annex III,5 and those (not listed) that are subject to negligence. However, it does not make a definite choice regarding the rule governing damages sharing among multiple party causation; instead, it states that liability apportionment should be determined in accordance with national law. Indeed, statute law (both in common law countries and civil law countries) does not provide such provision for apportioning damages among multiple tortfeasors, but case law provides traditional solutions. Courts' decisions are founded on the seriousness of each defendant's misconduct to establish how the damages compensating the victims will be shared among the different injurers. Accordingly, two polar rules have emerged and are specifically of interest here, e.g. the no liability rule and the per capita rule.6 Less often, Courts use the solution called the market share rule, according to which total damages are shared between tortfeasors competing in the same industry, in proportion to their market share. It first appeared in 1980 in the Californian case "Sindell v. Abbott Laboratories", and US Courts have limited its use up to now to toxic torts (exposure to a chemical) such as the asbestos litigation (Becker v. Baron Bros, 649 A.2d 613 (N.J. 1994)) or the MTBE (a gasoline additive) litigation (In re Methyl Tertiary Butyl Ether, 175 F. Supp. 2d 593; S.D.N.Y. 2001). In France, two cases come to mind where the market share rule has been applied: the Orly Airport litigation for noise pollution (1988, Cass. 2e civ, No 86-12.543), and more recently the Distilbène litigation,7 a case close to Sindell's.

4 According to CERCLA, the liable parties are the owners or operators of sites where a hazardous substance has been released, as well as the generators and transporters of hazardous substances which have been released. In cases where the Environmental Protection Agency is forced to use the Superfund, punitive damages may be imposed up to three times the cleanup costs incurred from the owner or operator of the property or from the generator of the hazardous materials. When activating the Superfund, the Environmental Protection Agency designates a Potentially Responsible Party (ideally, the deep pocket) to implement or finance the cleanup of a site on which hazardous materials have been found. Because liability is joint and several, the PRP is sent scrambling to identify other PRPs to whom it can look for contribution (Smith 2012). Later on, the 1990 Oil Pollution Act was prompted after the 1988 Exxon Valdez disaster (major oil spills in US waters).

5 Indeed, Annex III lists operations and activities that are otherwise covered by a regulation of the European Parliament or a directive of the European Council.

6 Basic causation requirements imply that the contribution of each defendant, among the pool of identified tortfeasors, should be proportional to its contribution to the harm of the victims. Thus, if no fault is established for any defendant, then no one is liable (no liability rule); if all the defendants committed a fault with the same intensity, then the damages are equally shared between them (per capita rule). In France, for example, Article L. 162-18 of the Code de l'environnement states that: "When environmental damage has several causes, the cost of the preventive or remedial measures shall be apportioned by the authority referred to in 2° of Article L. 165-2 among the operators, in proportion to the contribution of their activity to the damage or to the imminent threat of damage."

7 See Tribunal de Grande Instance of Nanterre, April 10, 2014, no. 12/12349 and no. 12/13064. Both cases concern diethylstilbestrol (DES), a product delivered to pregnant women which caused, years later, injuries to the children exposed in utero.

Some scholars advocate for an extensive use of the market share rule in environmental liability or competition law (Ferey, G'sell 2013, G'sell 2010).
As a paper focused on environmental liability, our work of course has connections with the vivid literature of the 1990s focused on extended liability as a solution to injurers' insolvency and the judgment-proof problem (see for example Beard, Boyd 1997, Boyer, Innes, Pitchford, Ringleb and Wiggins 1990, Shavell). Van 't Veld considers the effects of liability on firms' size, and discusses the welfare impacts following the restructuring of judgment-proof firms. In the present set up with a symmetric oligopoly, judgment proofness is of limited interest since either all firms are judgment-proof, or no firm is. In contrast, our paper considers the interplay between liability and market mechanisms, taking into account the strategic interactions between firms.

The market share rule, mainly used in personal injury cases, has motivated a debate among US scholars focusing on its consistency with basic causation requirements, as well as its moral/ethical roots (see Dillbary, Priest). The debate has known a revival in France with the 2014 Distilbène litigation. Several French scholars argue in favor of traditional solutions adopted for damages apportionment (Molfessis, Quézel-Ambrunaz 2010). Others have defended the market share rule on the grounds that market shares may be a proxy for the likelihood of individual liability at the stage of damages apportionment, in contexts of joint liability characterized by hard uncertainty and ambiguous causation (Ferey, G'sell 2013, G'sell 2010), such as when the set of all potential offenders is identified without any doubt, but it is impossible to establish the origin of the harmful product (evidence is missing, or destroyed). In contrast, our paper studies the case for joint liability from an ex ante perspective, affording a comparative analysis of different apportionment rules (per capita vs market share rules) and their impact on the incentives to undertake precaution in richer strategic environments (oligopoly market). Moreover, we discuss different policy implications, including the choice of a liability regime (strict liability vs negligence).

The interaction between liability laws and other kinds of public intervention has motivated an important literature in Law & Economics, to begin with the mix between ex ante regulation and ex post liability in different informational contexts (Bhole and Wagner (2008), Friehe, Innes, Kolstad, Schmitz, Shavell (1984a,b)). Studies focused on the mix between competition policy and liability law are scarce. Marino studies product liability for joint but divisible harms, and analyses the effects of firms' entry in an oligopoly under strict liability and the market share rule. Daughety and Reinganum (2014) consider the case of product liability law for divisible but cumulative harms, and analyze firms' entry under strict liability with a "modified" market share rule vs negligence, while Friehe addresses the issue of tacit collusion under liability laws.

Baumann, Cosnita-Langlais and Charreire (2020) also study the stability of cartels under liability laws in case of indivisible environmental harms, but do not introduce precautionary expenditures. Our paper considers instead the case for environmental liability law, and discusses the interplay between competition policy and different designs for environmental liability laws.

Our work is a contribution to the debate regarding the definition of new areas for competition policy, i.e. whether competition authorities should take into account public interests above competition objectives. Existing works focused on the issue of environmental protection assume that firms' contributions to environmental protection are voluntary (Hashimzade, Schinkel, Treuren). In contrast, our analysis states that firms' decisions are constrained by environmental liability, and suggests that this raises coordination issues between Courts (focused on the incentives to precautionary expenditures) and Competition Authorities (focused on consumers' surplus), in order to improve environment protection measures.

Lastly, our paper is obviously connected to the vast literature on product liability law. Tracing back to the pioneering works by Polinsky (1980) and Polinsky and Rogerson (1983), a recent stream of the Law & Economics literature has analyzed the performance of product liability in alternative competitive environments.8 To sum up, this literature arrives at a quite optimistic conclusion regarding product liability in oligopolistic markets when consumers' expected harm is modeled as a linear function of firms' individual output9: under any liability regime (strict liability, negligence, as well as no liability) firms are induced to choose the first best optimal level of care. Hence, there would be "no role for the influence of market structure or strategic interaction on liability policy" (Daughety and Reinganum) in this case. This equivalence of liability rules and the efficiency result fall down when the expected harm to victims depends on the level of output in a more complex way (Marino 1988, Daughety and Reinganum 2014).10 Daughety and Reinganum (2014) show that for cumulative harms to consumers, strict liability associated with a "modified" market share rule (see below) and an optimal damages multiplier dominates the negligence rule (which degenerates into a no liability rule). In contrast, our paper shows that for a joint, indivisible and cumulative environmental harm, strict liability with the per capita rule and an optimal damages multiplier dominates strict liability with the market share rule, as well as the negligence rule.

8 To have a focus on oligopoly markets: quantity competition (Baumann, Daughety and Reinganum, Friehe, Leshem), or price competition (Cournot with product differentiation: Baumann, Friehe and Rasch 2018; spatial competition: Baumann, Friehe and Rasch 2016, Chen and Hua 2017).

9 Polinsky and Rogerson (1983) assume that the harm conditional to an accident is constant, and that the probability of accident is proportional to care. Marino and Daughety and Reinganum (2014) reach similar conclusions when the probability of harm is linear in the output.

10 Marino (1988) considers "scale effects" on the probability of accident, when it depends on the level of output, and shows that the comparative performance of the liability regimes depends on the one hand on the nature of these scale effects (the probability of harm may increase or decrease with the output), and on the other, on the intensity of competition (pure competition vs Cournot oligopoly). Daughety and Reinganum (2014) deal with a case of a cumulative harm, e.g. when the expected harm is proportional to the square of the output; they find that for both a monopoly and an oligopoly (assuming that firms' specific risks are independent) strict liability maintains efficient incentives to take precaution, in contrast to no liability and negligence; however, strict liability induces an underprovision both of care and output at equilibrium, as a result of the distortions due to imperfect competition. Daughety and Reinganum show that the superiority of strict liability holds when firms' specific risks are interdependent, to the extent that strict liability is associated with the use of an optimal damages multiplier in order to maintain efficient incentives to take precaution.
3 The framework

3.1 Assumptions

We consider a symmetric oligopoly à la Cournot where n > 2 firms compete for a homogeneous product.11 Both consumers and firms are risk neutral. The quantity of goods produced by firm i is denoted q_i (i = 1, ..., n), and Q = Σ_{i=1}^n q_i represents the industry output. The market demand is P(Q) = a − bQ, (a > 0, b > 0). We assume that a firm i (∀i = 1, ..., n) operates at a cost given by C(q_i, x_i) = x_i, where x_i represents the level of care of firm i; this means that the marginal cost corresponding to the use of all productive inputs (aside from care expenditures) is constant and null, and that care expenditures do not depend on the firm's activity level. Finally, following Tietenberg (1989), the product may cause a joint and indivisible harm to society (third-party victims, not consuming the good), such that the expected harm is defined as H(Q, X) = h(X)·Q², where h(X), such that h(0) > 0 and h(X) < 1 for any X ≥ 0, is the joint probability of harm and X = Σ_{i=1}^n x_i is the aggregate care expenditure.
Finally, most of the central results of the paper are obtained thanks to a set of very simple assumptions, namely:

H1a: b > 2·h(0); H1b: h'(X) < 0 < h''(X) for any X ≥ 0; H1c: H(Q, X) is convex in (Q, X).
H1a is a very basic assumption regarding two parameters of the paper, requiring that the price sensitivity of market demand is not too small compared to the baseline probability of accident. H1b is usual in the literature on liability rules, saying that the probability function is decreasing and convex in care expenditures. H1c has sound economic motivations, since it establishes that the expected external harm to society is a convex function, which is a very natural and reasonable assumption for a cost function. Remark that H1a and H1b together imply that the next condition is satisfied:

For any X > 0 and λ ≤ 2:   b > λ(h(0) − h(X)) > −λh'(X)·X   (C1)
Several basic comparative statics results require (C1) to hold. H1c in turn implies:

For any (Q, X) > 0:   [h(X)·h''(X) − 2(h'(X))²] > 0   (C2)

Second order conditions are satisfied (most of the time) under (C2), as well as the stability of Nash equilibria (see below). When necessary, we will introduce additional restrictions in order to qualify our results.
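To fix ideas, conditions H1b and (C2) are easy to check for explicit functional forms. The snippet below is a purely illustrative check, not part of the model: it assumes the hypothetical specification h(X) = h0/√(1+X) with h0 ∈ (0,1), which is one simple form satisfying the assumptions above.

```python
import sympy as sp

# Illustrative (assumed) probability-of-harm function: h(X) = h0 / sqrt(1 + X).
X, h0 = sp.symbols('X h0', positive=True)
h = h0 / sp.sqrt(1 + X)

h1 = sp.diff(h, X)      # should be negative (H1b)
h2 = sp.diff(h, X, 2)   # should be positive (H1b)
C2 = sp.simplify(h * h2 - 2 * h1**2)   # bracket appearing in (C2)

print(sp.simplify(h1))  # -h0/(2*(X + 1)**(3/2)) < 0
print(sp.simplify(h2))  # 3*h0/(4*(X + 1)**(5/2)) > 0
print(C2)               # h0**2/(4*(X + 1)**3) > 0, so (C2) holds for any X >= 0
```

With this assumed form, H1a additionally requires b > 2h0; the same specification is reused in the purely numerical sketches further below.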
Remark 1: Care expenditures are captured here as a fixed cost, independent of the units of output produced -care is durable, in the terminology of Nussim and Tabbach. This makes sense when such expenditures correspond to the acquisition of safety technologies which are very specific assets adapted to a firm's business, and as such represent inputs that are essential for the firm's activity but that cannot be relocated to other businesses. This encompasses a large set of safety devices: it can be (smoke) alarms and detectors (for the leakage of toxic substances), a video-surveillance device, or the installation of a sufficient number of circuit-breakers. In the same vein, it can also be the investment in a containment system (to prevent the leakage of dangerous products outside of the plant, and limit the release in the environment), the capacity of which does not reflect the normal activity at the plant on a day-to-day basis, but an abnormal event, such as the biggest accident at the plant. Alternatively, care has a durable nature when it corresponds to R&D investments dedicated to the design of "green products" and entails a substitution between polluting goods and products with lower environmental impacts. In contrast, precaution is said to be non-durable when it is related to the usual activity at the plant and the quantity supplied. It can be establishing check-lists and adopting regular check-up procedures for equipment, together with their regular cleaning to avoid and detect potential failures and repair in time; as well as the observance of resting periods for the employees in order to limit their fatigue and the occurrence of human failures. These kinds of safety measures imply expenditures that are roughly speaking proportional to the number of working hours for employees as well as the intensity with which equipment is used, and thus proportional to the final output of the plant. Care can also be understood as having a non-durable nature when we consider various activities dedicated to the collection of end-of-life products, and/or efforts focused on recycling industrial wastes and components of end-of-life products.12 In a companion paper (Charreire and Langlais 2020), we compare Joint Liability vs Joint and Several Liability for non-durable care, under strict liability; we show that the main results of the current paper (Propositions 1 to 5) still hold.

Remark 2: Indivisible harms are usually associated either with situations where several tortfeasors act in concert to produce the harm, or when the independent but successive acts of several tortfeasors have conducted to produce a single result/harm (Kleffner). In both cases, the point is that it is by no means possible to disentangle the influence of each single firm, and thus it is impossible to assess either individual harms or individual probabilities. For practical purposes, an indivisible harm may result, for example, from the simultaneous actions of firms whose activities are concentrated in the same geographical area. We can think of noise pollution (in the suburban neighborhood of Orly airport), or water pollution (rivers polluted with acids from tannery activities and dyeing plants in Indian cities; or Great Lakes pollution by mercury from the automobile industry in North America). However, indivisibility does not require the contemporaneous actions of firms. Whenever pollutions are diffuse, the harm becomes observable only after latency periods or lag times that are quite long, such that it may be impossible to trace back the origins of the pollution and the identity of the polluters, as for soil pollution (in former mining areas in Eastern France). In famous litigation cases (see "Sindell v. Abbott Laboratories", or the MTBE litigation in the USA; in France, the Distilbène litigation), the existence of indivisibilities has been motivated de facto because it was impossible, several years after the accidental event, for Courts to disentangle the individual responsibilities of each tortfeasor (evidence is missing, or destroyed), or at least too costly to reach this goal (given the number of injurers and the complexity of the phenomenon) and to separate individual responsibilities at a reasonable economic and social cost. By the same token, the accumulation of toxic substances produced by different polluters in a given area may also entail non-linear reactions above some thresholds (due to the physical or chemical properties of the pollutant, at any horizon), that would not manifest should the polluter be unique. For example, in the case of the Orly Airport litigation for noise pollution (1988, Cass. 2e civ, No 86-12.543), the individual contribution of each plane and/or company cannot be easily assessed, at least at a reasonable economic or social cost -neither instantaneously, nor across periods.

Our paper is focused on indivisible harms, in contrast with the literature that deals with liability rules and market interactions pioneered by Polinsky (1980). Indivisibility has usually been captured in analyses of multi-tortfeasor cases (see Kornhauser and Revesz, Tietenberg 1989) through a total expected harm function denoted as h(x_1, ..., x_n)·D, where h(x_1, ..., x_n) is the overall probability of accident and D is the exogenous harm conditional to the accident; a basic property is that the individual care expenditures of each tortfeasor decrease the overall probability of accident/harm (∂h/∂x_i < 0 for any i). Our paper extends this set up taking into account care decisions together with output decisions, under symmetric Cournot competition. The functional form we introduce here has two advantages: i) it maintains the separation between the probability of harm h(X) and the harm realized post-accident D = Q², and ii) it enables to capture firms' symmetry with a parsimonious assumption -indeed, h(X) avoids the recourse to unavoidable technical restrictions on derivatives13 that would be necessary in order to reach firms' symmetry under the alternative function h(x_1, ..., x_n) -thus saving on notations without loss of generality as far as symmetry is a concern. As we explained above, keeping with the assumption of firms' symmetry is useful here for the purpose of comparing our results with those issued by other papers.
3.2 The benchmark: Social Welfare maximization

Social Welfare is the sum of consumers' total utility minus the sum of firms' operating costs (cost of care), minus the expected harm:

SW = ∫_0^Q P(z)dz − Σ_{i=1}^n c(x_i) − H(Q, X).

We will directly use firms' symmetry in order to reduce the dimension of the optimization problem, allowing the benevolent planner to focus on a symmetric optimum where q_1 = ... = q_n = q and x_1 = ... = x_n = x, with the aggregate output Q = nq and aggregate care X = nx, so that Social Welfare is a function of (q, x):

SW = anq − (b/2)(nq)² − n·x − h(nx)·(nq)²   (1)

The first-order conditions for an interior solution (q^sw, x^sw) are written:

a − bnq = 2h(nx)·nq   (2)
−h'(nx)·(nq)² = 1   (3)
meaning that the optimal output level (see condition (2)) must be pushed up to the point where the average market proceeds (the inverse market demand, LHS) are equal to the marginal cost associated with the expected harm (RHS); similarly (condition (3)), optimal care expenditures are such that the marginal cost of care (RHS) is equal to the social marginal benefit associated with the decrease in the expected harm (LHS).

Second order conditions are verified by convexity of the expected harm function (see Appendix 1); they also imply that output and care are strategic complements, and that the solution satisfying (2)-(3) is unique and stable.

The convexity of the expected harm function also implies that (q^sw, x^sw) decreases with the number of firms (obvious, the proof is omitted). In contrast, the aggregate output and care at the social optimum do not depend on the number of firms: substituting with Q = nq and X = nx, it is easy to see that (2)-(3) give Q^sw = a/(b + 2h(X^sw)) and −h'(X^sw)·(Q^sw)² = 1, which do not depend on n. As a consequence, the optimal expected harm h(X^sw)·(Q^sw)² and Social Welfare at the optimum SW(Q^sw, X^sw) do not depend on n. Hence, an increase in n has no effect on Social Welfare (as long as the cost of care is the unique source of fixed costs for firms, which is consistent with our interpretation that care is durable).
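As an illustration only, the planner's conditions (2)-(3) can be solved numerically once explicit primitives are chosen. The sketch below assumes the hypothetical parameterization P(Q) = a − bQ with a = 20, b = 1 and h(X) = h0/√(1+X) with h0 = 0.4 (so that H1a-H1c hold); none of these numbers come from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed illustrative primitives (not from the paper).
a, b, h0 = 20.0, 1.0, 0.4            # H1a: b > 2*h(0) = 0.8 holds

def h(X):  return h0 / np.sqrt(1.0 + X)
def hp(X): return -0.5 * h0 * (1.0 + X) ** (-1.5)    # h'(X) < 0

def planner_foc(z):
    Q, X = z
    return [a - b * Q - 2.0 * h(X) * Q,   # condition (2): demand price = marginal expected harm
            -hp(X) * Q ** 2 - 1.0]        # condition (3): marginal benefit of care = 1

Q_sw, X_sw = fsolve(planner_foc, x0=[15.0, 10.0])
print(f"Q_sw = {Q_sw:.2f}, X_sw = {X_sw:.2f}")   # roughly Q_sw ~ 16.5, X_sw ~ 13.4
```

Consistent with the text, these aggregate values are invariant to n under the assumed primitives: only the symmetric per-firm quantities q^sw = Q^sw/n and x^sw = X^sw/n shrink as n grows.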
4 Oligopoly equilibrium under strict liability

We assume in this section that the liability regime is based on strict liability, according to which Courts do not rely on a negligence test, and only need to establish causation to prove the responsibility of firms. We discuss the case for negligence in section 5. In our context of joint and indivisible harms to the environment, the legal doctrine that allows the Plaintiff to sue several defendants in a single trial is Joint Liability. Given Joint Liability, the main issue when Defendants are strictly liable is the way total damages will be apportioned between them.

Remark 3: Other legal doctrines of large application exist, for cases where multiple parties are involved in the harm a victim is suffering from (Kornhauser and Revesz). One is Several Liability, according to which the Plaintiff is entitled with the right to sue each one of the firms in separate trials. Several Liability is not relevant here because in cases with indivisibility, establishing facts on an individual basis and separate causations for each defendant is not possible -or will leave the Plaintiff with the impossibility to obtain the payment of damages and be compensated for the harms borne. An alternative doctrine is Joint and Several Liability, which makes each defendant responsible in solidum, and provides the Plaintiff with the opportunity to obtain the payment of total damages from any one of the defendants. We discuss the case with Joint and Several Liability in a companion paper.

Remark 4: We do not provide specific details regarding procedural rules, nor with settlement issues (existence of pre-trial negotiations), and generally speaking we ignore the existence of litigation costs, as well as strategic aspects of settlements and litigations and the associated impacts on litigation costs. Similarly, we are not specific with the Plaintiff's type/identity. It may be a (public) environmental agency -in this case, the trial will be heard by an administrative jurisdiction/Court; or it may be a private association involved in the protection of the environment, or generally speaking a third-party victim (the Plaintiff is an individual who has neither economic nor contractual relationships with the Defendants) -hence, the case will be settled in front of a civil Court. Finally, we assume here that in the implementation of the law, Courts choose the way total damages are shared between multiple defendants. This corresponds to reality for some national jurisdictions (see the Distilbène litigation in France), whereas for other jurisdictions or under alternative doctrines, the sharing of total damages is a decision of the Plaintiff (see the implementation of CERCLA by Environmental Agencies in the USA), who may strategically consider the solvability of the different defendants to secure the recovery of damages. Indeed, this is neutral here since we assume complete information, we rule out the insolvency problem, and do not consider strategic aspects of litigations. The rationale is that the paper is rather focused on liability costs and the way some specific designs of liability rules may shape this liability cost, with the ensuing consequences for firms.

4.1 Joint liability and damages sharing
Let us denote L_i^JL(q_i, x_i) = s_i·λ·H(Q, X) the amount of compensation accruing to firm i; s_i is firm i's liability share, and λ > 0 is a damages multiplier, exogenously set until subsection 4.2, that Courts may use to inflate total damages. We will investigate the effects of two different damages sharing rules.
The per capita rule. Courts may divide total expected damages equally between all firms pertaining to the industry, i.e. s_i = 1/n, ∀i = 1, ..., n. (C2) guarantees the convexity of the individual liability cost function L_i^pc(q_i, x_i) = (λ/n)·H(Q, X) with respect to (q_i, x_i). In this case, each firm i (∀i = 1, ..., n) chooses a level of output and a level of care in order to maximize its profit:

π^pc(q_i, x_i) = (a − bQ)q_i − x_i − (λ/n)h(X)Q²   (4)

Using the first order conditions (see Appendix 1; second order conditions are met under (C2)), the symmetric Cournot-Nash equilibrium where q_1 = ... = q_n = q^pc and x_1 = ... = x_n = x^pc solves the system:

a − b(1 + n)q = 2λh(nx)q   (5)
−λh'(nx)·nq² = 1   (6)

meaning that the output level (see condition (5)) must be pushed up to the point where marginal market proceeds are equal to the marginal cost of liability (RHS); similarly (condition (6)), care expenditures are such that the marginal cost of care (RHS) is equal to their marginal benefit, defined as the decrease in the expected liability cost (LHS). Remark more specifically that the LHS in (5) exhibits the standard distortion due to imperfect competition, according to which the marginal market proceeds are smaller than the market demand price. In Appendix 1, we verify that output and care levels defined by (5)-(6) are strategic complements, and that the associated Nash equilibrium is stable and unique (under (C2)).

The market share rule. Courts may assess individual contributions according to individual market shares, i.e. s_i = q_i/Q, ∀i = 1, ..., n. Let us assume that the individual liability cost function L_i^ms(q_i, x_i) = (q_i/Q)·λ·H(Q, X) is convex in (q_i, x_i), which implies that 2q_iQ·h(X)·h''(X) − (Q + q_i)²·(h'(X))² > 0.

This requirement is stronger than (C2), but as shown in Appendix 1, it guarantees that second order conditions for profit maximization under the market share rule are satisfied. Firm i now chooses a level of output and care that maximize the profit:

π^ms(q_i, x_i) = (a − bQ)q_i − x_i − λh(X)q_iQ   (7)

Once more, using the first order conditions and focusing on the symmetric Cournot-Nash equilibrium, the solution q_1 = ... = q_n = q^ms and x_1 = ... = x_n = x^ms solves the system:

a − b(1 + n)q = (1 + n)λh(nx)q   (8)
−λh'(nx)·nq² = 1   (9)

with a meaning equivalent to (5)-(6). The LHS in (8) also exhibits the standard distortion due to imperfect competition according to which the marginal market proceeds are smaller than the market demand price.

In Appendix 1, we verify that output and care levels defined by (8)-(9) are strategic complements, and that the associated Nash equilibrium is stable and unique (under (C2)).
Before going into the analysis of the performances of both rules and the comparison with the social optimum, remark how the different sharing rules shape the relationship between output decisions and the individual cost of liability, and the way it translates into the marginal cost of production. More precisely, it turns out that the marginal cost of production/liability under the market share rule is proportional to the number of Defendants (firms). Throughout the paper, this difference with the per capita rule will drive the results. Remark also that although the damages multiplier is exogenously set, we assume until the next section that it cannot take values that are too large (see below). This qualification will be relaxed later on, when we discuss the case for optimal damages multipliers. We have the following result:
Proposition 1 Assume 1 ≤ λ < n. i) Strict liability under the per capita rule yields a level of output and a level of care larger than under the market share rule (x^pc > x^ms and q^pc > q^ms). ii) Under strict liability with the per capita rule, the equilibrium output and care levels may be larger as well as smaller than their socially optimal levels. iii) Under strict liability with the market share rule, the equilibrium output and care levels may be larger as well as smaller than their socially optimal levels if λ ≤ 2n/(1+n); if λ > 2n/(1+n), the equilibrium output and care levels are smaller than their socially optimal levels.

Proof: Let us denote q^sw(x) the output level that solves (2), for any given value of care, while x^sw(q) is the care level that solves (3), for any given value of the output. Similarly, let us denote q^pc(x) the output level that solves (5), for any given value of care, while x^pc(q) is the care level that solves (6), for any given value of the output; and let q^ms(x) be the output level that solves (8), for any given value of care, while x^ms(q) is the care level that solves (9), for any given value of the output. i) According to conditions (6) and (9): ∀q > 0, x^pc(q) = x^ms(q). In contrast, according to (5) and (8):

2λq·h(nx) < (1 + n)λq·h(nx)

implying that: ∀x > 0, q^pc(x) > q^ms(x). Hence: x^pc > x^ms and q^pc > q^ms since care and output are strategic complements under both rules.

ii) and iii) From (3)-(6)-(9), it comes that: −λh'(nx)·nq² < −h'(nx)·(nq)²; hence: ∀q > 0, x^sw(q) > x^pc(q) = x^ms(q). In turn, (2)-(5)-(8) show there are two opposite effects on output. On the one hand (LHS): a − (1 + n)bq < a − nbq; this reduces the incentives to produce under strict liability at any level of care, compared to the social optimum. On the other hand (RHS): if 1 ≤ λ ≤ 2n/(1+n), then 2nh(nx)q ≥ (1 + n)λh(nx)q > 2λh(nx)q; this increases now the incentives to produce at any level of care, compared with the social optimum. Thus, q^sw(x) is not generally comparable with q^pc(x) or q^ms(x). But if λ > 2n/(1+n), then 2nh(nx)q < (1 + n)λh(nx)q and thus q^sw(x) > q^ms(x), implying that x^sw > x^ms and q^sw > q^ms. Hence the result.
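Part i) of Proposition 1 can be illustrated numerically under the same assumed primitives as above (a = 20, b = 1, h(X) = 0.4/√(1+X)) and the hypothetical values n = 5, λ = 1, by solving (5)-(6) and (8)-(9) directly; all numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed illustrative primitives and policy parameters (not from the paper).
a, b, h0, n, lam = 20.0, 1.0, 0.4, 5, 1.0

def h(X):  return h0 / np.sqrt(1.0 + X)
def hp(X): return -0.5 * h0 * (1.0 + X) ** (-1.5)

def symmetric_eq(rule):
    """Solve the symmetric FOCs: (5)-(6) for 'pc', (8)-(9) for 'ms'."""
    def foc(z):
        q, x = z
        X = n * x
        mliab = 2.0 * lam if rule == 'pc' else (1.0 + n) * lam   # marginal liability term
        return [a - b * (1.0 + n) * q - mliab * h(X) * q,
                -lam * n * hp(X) * q ** 2 - 1.0]
    return fsolve(foc, x0=[3.0, 0.5])

q_pc, x_pc = symmetric_eq('pc')
q_ms, x_ms = symmetric_eq('ms')
print(f"per capita:   q = {q_pc:.2f}, x = {x_pc:.2f}")
print(f"market share: q = {q_ms:.2f}, x = {x_ms:.2f}")
# With these assumed numbers q_pc > q_ms and x_pc > x_ms, as stated in Proposition 1 i).
```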
For both damages apportionment rules, firms have the same best response in terms of care to any feasible activity level. However, firms' private marginal benefits associated with care under strict liability are smaller than their socially optimal value, implying that firms have inefficient incentives to take care at any output level under strict liability.

In contrast, for any given feasible level of care, firms face a marginal cost of liability under the per capita rule that is smaller than under the market share rule (for the same marginal market proceeds); thus at any level of care, firms' best response function in terms of output is larger under the per capita rule than under the market share rule. The intuition is simple to get: under both liability sharing rules, firms obviously obtain the same market share at equilibrium, 1/n. However, output decisions are driven by the ex ante individual liability cost, which is rule-specific. Under the market share rule, the cost of liability for a firm is related to its individual market power: the larger its market share, the larger its liability burden. Thus, aside from the standard negative competition externality, the market share rule adds a liability cost externality that drives down the output decisions of firms at any level of care. As a result, the market share rule provides firms with more incentives to reduce output and care than the per capita rule.
This said, strict liability (whatever the damages sharing rule) leads to an inefficient rule for output choice, since the supply of output is driven by two opposite effects. On the one hand, firms' marginal market proceeds under strict liability (for both rules) fall short of their socially optimal value: this is nothing else but the standard output distortion that reflects firms' market power under imperfect competition. On the other hand, the marginal cost of liability accruing to a firm under the per capita rule falls short of its optimal value; this is also true under the market share rule if the damages multiplier is small enough (λ ≤ 2n/(1+n)). As a result, strict liability (whatever the damages sharing rule) also exerts inefficient incentives to produce at any level of care, but this is the result of two countervailing influences and the net effect is ambiguous: firms may produce either too much or not enough at any care level. Hence the comparison between the equilibrium values for output and care under strict liability (for any damages sharing rule) and their optimal values is also generally ambiguous.14 When the damages multiplier trespasses a threshold (λ > 2n/(1+n)), the marginal cost of liability under the market share rule increases above its social value, thus shifting the equilibrium below the social optimum.

14 It can be understood as the result of the comparison between the slopes of two curves: the slope of the marginal market proceeds (mainly driven by the value of b) and the slope of the marginal cost of liability; the proof is omitted.
In the next section, we will discuss the combination of optimal damages multipliers (used by Courts) and firms' entry (fostered by a Competition Authority). As a matter of comparison, we briefly review here the impact of firms' entry on the equilibrium under strict liability for exogenous values of the damages multiplier.
Proposition 2 (firms' entry under strict liability with an exogenous damages multiplier) i) Under both rules, firms' entry yields a decrease in individual output and care levels. ii) Under the per capita rule, the aggregate output increases with the number of firms, while the aggregate care decreases with the number of firms. iii) Under the market share rule, the aggregate output increases (decreases) with the number of firms when b is large (small) enough, while the aggregate care always decreases with the number of firms.
Proof. See Appendix 2. i) We show that under the per capita rule, the result holds for λ ≤ n − 1, although the market share rule does not require such a restriction. ii) In contrast to the social optimum, it is obvious that the equilibrium levels of aggregate output and care under both rules depend on the number of firms. Using (5)-(6), the aggregate output and care levels under the per capita rule satisfy:

a − b(1 + 1/n)Q = (2λ/n)h(X)Q   (5bis)
−h'(X)·(λ/n)·Q² = 1   (6bis)

It is easy to see that firms' entry entails opposite effects on aggregate output and care expenditures. The LHS in (5bis) is increasing with n, while the RHS is decreasing with n: hence the larger n, the larger the aggregate output Q^pc(X) at any level of X. In turn, the LHS in (6bis) is decreasing with n, while the RHS does not depend on n: hence the larger n, the smaller the aggregate care X^pc(Q) at any level of Q. However, the conditions b > 2h(0) and 1 ≤ λ ≤ n − 1 are sufficient to sign the effect at equilibrium.

iii) Similar effects arise under the market share rule. Using (8)-(9), the aggregate output and care levels now satisfy:

a − b(1 + 1/n)Q = λ(1 + 1/n)h(X)Q   (8bis)
−h'(X)·(λ/n)·Q² = 1   (9bis)

The LHS in (8bis) is increasing with n, while the RHS is decreasing with n: hence the larger n, the larger the aggregate output Q^ms(X) at any level of X. In turn, since (9bis) is identical to (6bis), the larger n, the smaller the aggregate care X^ms(Q) at any level of Q. Once again, the net effect at equilibrium is ambiguous. Explicit comparative statics show that H1a is not enough, and a sufficient condition for the aggregate output to increase (decrease) is that b be large (small) enough with respect to a new threshold value defined in Appendix 2, larger than 2h(0).
Proposition 2 establishes that following firms' entry in the oligopoly, the effect on output and care at the firm level under both rules is the usual/intuitive one. Less market power yields less production at the firm level, and given the strategic complementarity between care and output, this also implies less individual care.

The impact on the aggregate care level, identical under both rules, is particularly interesting. The intuition of the result is related to the situation where firms share a joint harm they have produced, and to the public good characteristics of care activity: as the number of firms increases, the equilibrium individual share (1/n) in total harm also decreases under both rules, hence diminishing the expected benefits of care more than what would be justified by the decrease in the output level. As a result, firms' entry has such a large negative impact on individual care levels that the aggregate care level also decreases: the additional number of firms that invest in care does not compensate the decrease in individual care (there are more firms, but poorly investing in care).

In contrast, the impact on the aggregate output is rule-specific. Under the per capita rule, we obtain the usual effect according to which the aggregate supply expands following firms' entry (the individual output decreases, but this effect is compensated by more firms producing the output). Inspection of the RHS of (5bis) shows that the feedback influence of care on output decisions becomes negligible as n becomes large under the per capita rule. Thus the effect on the LHS is dominating. In contrast, the effect is ambiguous under the market share rule: inspection of the RHS in (8bis) suggests that the explanation lies in the importance of the feedback influence of care on output (the RHS is proportional to λ(1 + 1/n)h(X), a factor which tends to λh(X) as n becomes large) compared to the impact on marginal market proceeds (the LHS is also proportional to (1 + 1/n)·b), the net effect depending on the price sensitivity of the market demand (see Appendix 2 for an explicit proof). Depending on the relative size of the marginal cost of production and marginal market proceeds, the decrease in the individual output may be so large that it results in a decrease of the output at the industry level -the entry of new firms does not compensate the large cut in individual production levels.
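The comparative statics of Proposition 2 can also be traced numerically. The sketch below keeps the same assumed primitives and an exogenous multiplier λ = 1, and reports the aggregate levels implied by (5bis)-(6bis) and (8bis)-(9bis) for a few market sizes; the chosen numbers are illustrative and, in particular, b = 1 falls in the "large b" branch of part iii).

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed illustrative primitives and an exogenous multiplier (not from the paper).
a, b, h0, lam = 20.0, 1.0, 0.4, 1.0

def h(X):  return h0 / np.sqrt(1.0 + X)
def hp(X): return -0.5 * h0 * (1.0 + X) ** (-1.5)

def aggregate_eq(rule, n, guess):
    """Aggregate (Q, X) from (5bis)-(6bis) ['pc'] or (8bis)-(9bis) ['ms']."""
    def foc(z):
        Q, X = z
        mliab = 2.0 * lam / n if rule == 'pc' else lam * (1.0 + 1.0 / n)
        return [a - b * (1.0 + 1.0 / n) * Q - mliab * h(X) * Q,
                -(lam / n) * hp(X) * Q ** 2 - 1.0]
    return fsolve(foc, x0=guess)

for rule in ('pc', 'ms'):
    guess = [12.0, 5.0]
    for n in (2, 3, 5, 10, 20):
        Q, X = aggregate_eq(rule, n, guess)
        guess = [Q, X]                       # warm start the next market size
        print(f"{rule}  n={n:>2}:  Q = {Q:6.2f}   X = {X:6.2f}")
# Under both rules aggregate care X falls as n grows, while aggregate output Q rises here;
# with a smaller b, the market share rule can instead deliver a falling aggregate output.
```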
Obviously, these distortions in care and output decisions imply a loss of welfare. Specifically, with regard to the discussion below, the differentiated responses of aggregate output and care suggest that there may exist a trade-off between consumers' surplus and environmental harm. Let us assess how Social Welfare evolves as the number of firms increases, starting with the per capita rule. Evaluating (1) at (Q^pc, X^pc) and differentiating in n, we have (using (5bis)-(6bis)):

dSW/dn (Q^pc, X^pc) = [b − 2h(X^pc)(n − λ)]·(Q^pc/n)·dQ^pc/dn − ((n − λ)(Q^pc)²/n)·h'(X^pc)·dX^pc/dn

where h'(X^pc)·dX^pc/dn > 0. Still considering a damages multiplier small enough (1 ≤ λ ≤ n), it comes that the impact of firms' entry on Social Welfare is generally ambiguous. When the slope of the (inverse) market demand is small enough (b < 2h(X^pc)(n − λ)), Social Welfare indeed decreases -the intuition is that aggregate output is too large and aggregate care expenditures are too low, implying losses of welfare related to an excessive expected environmental harm that is not compensated by consumers' surplus. However, as the damages multiplier increases (such that b > 2h(X^pc)(n − λ)), there is a balance between the welfare gains provided by aggregate output expansion and the losses that result from the increase in expected harm (decrease in aggregate care expenditures).
Turning now to the market share rule, evaluating (1) at (Q^ms, X^ms) and differentiating in n yields (using (8bis)-(9bis)):

dSW/dn (Q^ms, X^ms) = [b − 2h(X^ms)(n − λ(1 + n)/2)]·(Q^ms/n)·dQ^ms/dn − ((n − λ)(Q^ms)²/n)·h'(X^ms)·dX^ms/dn

with once more h'(X^ms)·dX^ms/dn > 0. The same comments are relevant here, with one noticeable difference. Although the impact on Social Welfare resulting from the decrease in aggregate care expenditures is still conditioned by the sign of n − λ, the impact of the increase in the aggregate output now depends on the sign of 2n/(1+n) − λ. As seen before (in the proof of Proposition 1), when λ > 2n/(1+n) the marginal cost of liability is above its socially optimal value; thus the equilibrium corresponds to output and care levels that are below their optimal values. Hence in this situation, increasing n provides welfare gains thanks to the increase in aggregate output, despite the decrease in aggregate care expenditures -the intuition is that in this situation the market equilibrium is associated with output distortions so large that an increase in the number of firms will provide a gain for consumers, whatever the price sensitivity of market demand, which will more than compensate the loss associated with environment deprivation. This effect disappears when λ < 2n/(1+n), and we obtain effects on Social Welfare similar to the per capita rule. These considerations suggest that there may exist a finite number of firms that maximizes Social Welfare, at least as long as the damages multiplier is low enough, under strict liability. However, in any case, the distortion in care decisions is an issue. We now discuss the potential for improving firms' care decisions, and the consequences that arise.
4.2 Optimal damages multipliers

We have found for both rules that firms choose an inefficient rule of care (it is not an efficient response at any level of output). Thus, firms' incentives in care activities may be corrected by means of a specific value of the damages multiplier λ > 1, at the disposal of Courts. We focus here on this optimal value of the damages multiplier.

Proposition 3 (impact of an optimal damages multiplier) i) For both rules, the optimal damages multiplier is λ* = n. ii) In a regime of strict liability (the damages sharing rule being either the per capita or the market share rule) with an optimal damages multiplier, the industry provides insufficient levels of output and care, compared with the social optimum (q^sw > q^pc > q^ms and x^sw > x^pc > x^ms).

Proof: i) Comparing the LHS in (6) or (9) and (3), it comes that λ = n ⇒ −λh'(nx)·nq² = −h'(nx)·(nq)², in which case x^sw(q) = x^pc(q) = x^ms(q), ∀q > 0. ii) According to the RHS of (2) and (5), when λ = n ⇒ 2λh(nx)q = 2nh(nx)q, implying that q^sw(x) > q^pc(x), ∀x > 0. Given i), we obtain that: q^sw > q^pc and x^sw > x^pc. In turn, considering the RHS in (8) and (2), λ = n ⇒ (1 + n)λh(nx)q > 2nh(nx)q, implying that q^sw(x) > q^ms(x), ∀x > 0; as a result, given i): q^sw > q^ms and x^sw > x^ms.
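Proposition 3 can be illustrated with the same assumed primitives. Setting the multiplier at its optimal value λ* = n and solving (5)-(6), (8)-(9) and (2)-(3) for, say, n = 5 gives per-firm levels ordered exactly as in part ii); again, all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed illustrative primitives (not from the paper); optimal multiplier lam = n.
a, b, h0, n = 20.0, 1.0, 0.4, 5
lam = float(n)

def h(X):  return h0 / np.sqrt(1.0 + X)
def hp(X): return -0.5 * h0 * (1.0 + X) ** (-1.5)

def per_firm_eq(rule):
    def foc(z):
        q, x = z
        X = n * x
        mliab = 2.0 * lam if rule == 'pc' else (1.0 + n) * lam
        return [a - b * (1.0 + n) * q - mliab * h(X) * q,
                -lam * n * hp(X) * q ** 2 - 1.0]
    return fsolve(foc, x0=[2.0, 1.0])

def planner():
    def foc(z):
        Q, X = z
        return [a - b * Q - 2.0 * h(X) * Q, -hp(X) * Q ** 2 - 1.0]
    Q, X = fsolve(foc, x0=[15.0, 10.0])
    return Q / n, X / n                     # symmetric per-firm optimum

q_sw, x_sw = planner()
q_pc, x_pc = per_firm_eq('pc')
q_ms, x_ms = per_firm_eq('ms')
print(f"social optimum: q = {q_sw:.2f}, x = {x_sw:.2f}")
print(f"per capita:     q = {q_pc:.2f}, x = {x_pc:.2f}")
print(f"market share:   q = {q_ms:.2f}, x = {x_ms:.2f}")
# Expected ordering with these numbers: q_sw > q_pc > q_ms and x_sw > x_pc > x_ms.
```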
Part i) of Proposition 3 suggests that an optimal damages multiplier may be quite easy to assess for Courts, since it corresponds to the industry size, λ* = n. Part ii) illustrates that when the per capita rule is combined with a damages multiplier optimally chosen, firms face a marginal cost of liability equal to its social value. Hence an optimal damages multiplier solves the issue of care incentives (firms face efficient incentives to take care at any level of output), and also part of the distortion on production. Of course, this is not sufficient to reach optimal levels of care and output at equilibrium, since the distortion coming from imperfect competition still holds. A single instrument such as a damages multiplier is obviously not enough to solve two distortions; nevertheless, under the per capita rule, the optimal damages multiplier allows to improve both the incentives to take care and the incentives to produce. This property does not hold under the market share rule. Under the market share rule, the optimal damages multiplier now exerts perverse incentives on output choice, since it puts an excessive liability cost on firms. This aggravates the problem of output underprovision created by the distortion due to imperfect competition.15 The issue we consider now is whether fostering firms' entry will allow the equilibrium to be closer to the social optimum given the constraint of the environmental liability law considered here. We proceed in two stages. First, we focus on a situation where, as the Competition Authority fosters firms' entry, firms still make a positive profit (short run equilibrium). In this case, we study the (potential) convergence of the equilibrium towards the optimum as the number of firms increases, and compare the equilibrium profits and social welfare between both rules. Second, we consider the situation where the zero profit condition is met, and compare the properties of the long run equilibrium and the social welfare level obtained under both rules.
As it will appear, we may need a very simple and intuitive assumption in order to qualify some effects:

H2: −h'(nx)·(nq)² is decreasing with n.

H2 means that the marginal benefit of care must be decreasing with the number of firms; it implies that the next condition holds in this case:

For any x, n:   2h'(nx) + n·x·h''(nx) > 0   (C3)

15 One may argue that under the market share rule, the multiplier could be designed according to output distortions (equation (8)), i.e. in order to correct the marginal liability cost associated with the output decision. It can be shown that such a multiplier has the same effects as those described in Proposition 5 below. The results are available on request. In the text, we consider the usual definition of the optimal damages multiplier, relying on care incentives.
Let us …rst consider equilibria where …rms make a positive pro…t. The next proposition collects the results for the per capita rule.
Proposition 4 Assume environmental liability law relies on the combination (strict liability, the per capita rule, an optimal damages multiplier); then: i) The individual output decreases with the number of …rms; individual care decreases with the number of …rms under (C3). ii) The aggregate output and care levels increase with the number of …rms. iii) As n ! 1, the equilibrium industry (Q pc 1 ; X pc 1 ) converges to the social optimum (Q sw ; X sw ). iv) fostering …rms' entry is always welfare improving.
Proof. i) See Appendix 2. Regarding the ambiguous impact on care, it is shown that (C3) is a sufficient condition for x^pc to decrease with n; thus, should the inequality (C3) not hold, it may be that x^pc still decreases with n. ii) To illustrate, let us write (5bis)-(6bis) substituting with λ = n; it comes:
a - b(1 + 1/n)Q = 2h(X)Q (5bis*)
-h'(X)Q² = 1 (6bis*)
Condition (6bis*) does not depend on n. In turn, (5bis*) shows that an increase in n shifts upward the marginal market proceeds (LHS). Thus, an increase in n yields an increase in Q^pc and X^pc. iii) As n → ∞, the LHS in (5bis*) is equal to the demand price; thus as n → ∞, the equilibrium industry (Q^pc_∞, X^pc_∞) converges to the social optimum (Q^sw, X^sw). iv) Evaluating (1) at (Q^pc, X^pc) and differentiating in n, we have (using (5bis*)-(6bis*) now):
dSW/dn(Q^pc, X^pc) = (b/n)·Q^pc·dQ^pc/dn > 0 since dQ^pc/dn > 0.
The optimal damages multiplier corrects firms' incentives regarding care decisions; as the public good effect is removed, firms are bound to follow an efficient rule of care at any level of output, on the pathway that leads to the social optimum. Parts i) and ii) of Proposition 4 illustrate a nice property of an optimal damages multiplier under the per capita rule. Specifically, the aggregate output and the aggregate care both increase now as more firms enter the market. Parts iii) and iv) of Proposition 4 show that under the per capita rule, fostering firms' entry on the market increases Social Welfare, and when competition between firms turns out to be perfect, the first best optimum is recovered. In this perspective the per capita rule appears as a quite flexible damages apportioning arrangement: once the incentives to invest in care and the incentives to produce are aligned with the socially optimal ones, there is a clear-cut separation between Courts' action and Competition Authorities' policies. The decisions of the former do not impinge on the domain of the latter, and as long as Courts commit to use optimal damages multipliers, Competition Authorities have room to pursue their traditional objectives, moving the equilibrium closer to the social optimum.
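The convergence result in Proposition 4 can be checked numerically. The short sketch below is an editor's illustration and not part of the original analysis: the demand parameters a and b and the harm function h(X) = (1+X)^(-1/2) are arbitrary choices compatible with the convexity condition (C2), the unit cost of care is normalized to 1 as in the text, and the equilibrium conditions used are (5bis*)-(6bis*) with λ = n.

import numpy as np
from scipy.optimize import brentq

a, b = 30.0, 4.0                                   # inverse demand P(Q) = a - b*Q

def h(X):                                          # illustrative expected-harm intensity
    return (1.0 + X) ** -0.5

def X_from_Q(Q):
    # Care condition (6bis*): -h'(X)*Q^2 = 1; for this h, X = (Q^2/2)^(2/3) - 1
    return max((Q ** 2 / 2.0) ** (2.0 / 3.0) - 1.0, 0.0)

def Q_per_capita(n):
    # Output condition (5bis*): a - b*(1 + 1/n)*Q = 2*h(X)*Q
    f = lambda Q: a - b * (1.0 + 1.0 / n) * Q - 2.0 * h(X_from_Q(Q)) * Q
    return brentq(f, 1.5, a / b)

# Social optimum: demand price equals the marginal external cost, a - b*Q = 2*h(X)*Q
Q_sw = brentq(lambda Q: a - b * Q - 2.0 * h(X_from_Q(Q)) * Q, 1.5, a / b)

for n in (1, 2, 5, 20, 100, 1000):
    Q = Q_per_capita(n)
    print(f"n={n:5d}  Q_pc={Q:6.3f}  X_pc={X_from_Q(Q):6.3f}")
print(f"optimum  Q_sw={Q_sw:6.3f}  X_sw={X_from_Q(Q_sw):6.3f}")

With these (arbitrary) primitives, aggregate output and care rise monotonically with n and approach (Q^sw, X^sw), as stated in parts ii) and iii) of Proposition 4.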
In turn, the next proposition collects the results for the market share rule:
Proposition 5 Assume environmental liability law is designed according to the combination (strict liability, the market share rule, an optimal damages multiplier); then: i) The individual output decreases with the number of firms; (C3) is sufficient for the individual care to decrease with the number of firms. ii) If b is large (small) enough, then the aggregate output and care levels increase (decrease) with the number of firms. iii) As n → ∞, the industry vanishes, i.e. (Q^ms_∞ = 0, X^ms_∞ = 0). iv) If dQ^ms/dn > 0, fostering firms' entry is Social Welfare improving. If dQ^ms/dn < 0, fostering firms' entry reduces Social Welfare. v) Social Welfare is maximized for a finite number of firms, n^ms = √(b/h(X^ms)).
Proof. i) See Appendix 2. Once more we find that (C3) is a sufficient condition for x^ms to decrease with n. ii) To illustrate, let us substitute λ = n in conditions (8bis)-(9bis):
a - b(1 + 1/n)Q = (1 + n)h(X)Q (8bis*)
-h'(X)·Q² = 1 (9bis*)
Condition (9bis*) is similar to (6bis*). According to (8bis*), the marginal cost of liability increases with n, which drives the output downward; to the opposite, the marginal market proceeds also increase with n, driving the output upward. Thus, Q^ms(X) may increase or decrease at any level of X, depending on whether the impact of n on the market proceeds is larger or smaller than on the cost of liability. In Appendix 2, we show that b > n²h(X) implies dQ^ms/dn > 0 and dX^ms/dn > 0; to the converse, b < n²h(X) implies dQ^ms/dn < 0 and dX^ms/dn < 0. iii) According to the RHS in (8bis*), the marginal cost of liability goes to infinity as n → ∞. Thus the aggregate output and care levels become smaller and smaller with n (Q^ms_∞ → 0 and X^ms_∞ → 0). iv) Evaluating (1) at (Q^ms, X^ms) and differentiating in n (using (8bis*)-(9bis*)), we obtain:
dSW/dn(Q^ms, X^ms) = [b/n + (n - 1)h(X^ms)]·Q^ms·dQ^ms/dn
The result iv) is straightforward. v) In Appendix 2, we show that Social Welfare is maximized neither with perfect competition (n → ∞) nor with a monopoly (n = 1); there exists a finite number of firms n^ms > 1, for which SW(Q^ms, X^ms) is maximized, satisfying the condition:
dSW/dn(Q^ms, X^ms) = 0 ⇒ dQ^ms/dn = 0 (10)
Given that b/n + (n - 1)h(X^ms) > 0, it must be that dQ^ms/dn = 0 ⇔ b - n²h(X) = 0; solving yields:
n^ms = √(b/h(X^ms)).
Parts i) and ii) of Proposition 5 show that while, at firm level, the behavior of the individual output and care levels under the market share rule is qualitatively very similar to that obtained for the per capita rule, at the industry/aggregate level the implied adjustments following firms' entry are very different (rule-specific). In short, optimal damages multipliers do not remove the public good effect for sure.
Part ii) highlights that under the market share rule, the net effect at the aggregate level is driven in a complex way by the relative size of the slopes of market demand and the marginal cost of liability.
When n increases (see (8bis*)), both the aggregate marginal proceeds (LHS) and the aggregate marginal cost of liability (RHS) increase. If the price-elasticity of market demand is low (b is large) enough, the first effect (LHS) dominates, entailing an increase in the aggregate output and care levels - the rationale is that despite the increase in the individual marginal liability cost at firm level, the cut in the individual output level is of limited scale, such that the aggregate output increases with firms' entry. In contrast, as the price-elasticity of market demand becomes high (b is small) enough, the second effect (RHS) is large compared to the first one - the associated cut in the output level at firm level is so large that it is not compensated at the aggregate level by the entry of new firms; as a result, there is a contraction in the market supply, associated with a decrease in aggregate care expenditures. On the other hand, Parts iii) to v) of Proposition 5 illustrate that the implications for Social Welfare analysis are also very contrasted between both rules; they may also be understood as follows. Starting with a given number of firms, fostering firms' entry may allow in a first stage to improve both the output and care levels - this is the "traditional" effect: as n increases, both individual output and care expenditures decrease at firm level, but given that more firms operate on the market, the aggregate output and care expenditures both increase with n. However, as n becomes great enough, fostering firms' entry a little further may have countervailing effects, in the sense that it will induce a decrease of individual output and care expenditures at firm level so large that the aggregate output and Social Welfare also decrease. This explains Part v): restricting firms' entry is socially worth in this case.
More generally, the implication of Proposition 5 is that the market share rule appears as less flexible, in the sense that Courts' action impinges on the domain of Competition policy. The use of the market share rule yields an additional distortion in output decisions at firm level (the higher the market share, the higher the (marginal) cost of liability), and thus it is not granted that fostering firms' entry under the market share rule be socially welfare improving. Moreover, recovering perfect competition under the market share rule would entail a severe contraction in the aggregate output supply and aggregate care expenditures, and Social Welfare would be reduced compared to the oligopoly.
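Part v) of Proposition 5 also lends itself to a quick numerical illustration. The sketch below is again an editor's illustration with the same arbitrary primitives as above; the welfare expression SW(Q, X) = aQ - (b/2)Q² - X - h(X)Q² used here is the reduced form consistent with the zero-profit expression appearing later in the proof of Proposition 7, and n is treated as a continuous variable.

import numpy as np
from scipy.optimize import brentq

a, b = 30.0, 4.0
h = lambda X: (1.0 + X) ** -0.5

def X_from_Q(Q):                                   # care condition (9bis*) for this h
    return max((Q ** 2 / 2.0) ** (2.0 / 3.0) - 1.0, 0.0)

def Q_market_share(n):
    # Output condition (8bis*): a - b*(1 + 1/n)*Q = (1 + n)*h(X)*Q
    f = lambda Q: a - b * (1.0 + 1.0 / n) * Q - (1.0 + n) * h(X_from_Q(Q)) * Q
    return brentq(f, 1e-3, a / b)

def welfare(Q):
    X = X_from_Q(Q)
    return a * Q - 0.5 * b * Q ** 2 - X - h(X) * Q ** 2

grid = np.linspace(1.0, 12.0, 221)                 # treat n as continuous
sw = np.array([welfare(Q_market_share(n)) for n in grid])
n_star = grid[sw.argmax()]
h_star = h(X_from_Q(Q_market_share(n_star)))
print(f"welfare-maximising n ~ {n_star:.2f};  sqrt(b/h(X^ms)) = {(b / h_star) ** 0.5:.2f}")

With these numbers, welfare under the market share rule peaks at a small, finite number of firms, and the peak coincides (up to grid resolution) with the closed-form condition b = n²h(X^ms) used in the proof.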
Note once more that this analysis relies on the implicit assumption that firms always make a positive profit at equilibrium. In this perspective, it appears that firms' interest is opposite to society's interest, as we show now:
Proposition 6 (short run equilibrium, firms' entry and damages sharing) Assume Courts use an optimal damages multiplier. For any n exogenously given, firms' equilibrium profit under the market share rule is larger than under the per capita rule; in contrast, social welfare under the market share rule is smaller than under the per capita rule.
Proof: For more details, see Appendix 3. The comparison of profits and social welfare levels under both rules is easier using a parametrization of the damages sharing rule, such that the individual liability cost to any firm is defined as:
L^k_i(q_i, x_i) = λh(X)[k(1/n)Q² + (1 - k)q_iQ],
i.e. a mix of the per capita and market share rules, with respective proportions k, 1 - k. The symmetric Cournot-Nash equilibrium is thus parametrized by k ∈ [0, 1], such that the equilibrium levels of the aggregate output and care expenditures, Q(k), X(k), are two increasing functions of k, with Q(0) = Q^ms, X(0) = X^ms and Q(1) = Q^pc, X(1) = X^pc.
As a result, the equilibrium profit may be written as (with λ = n): π^k(Q(k), X(k)) = (a - bQ(k))Q(k)/n - X(k)/n - h(X(k))(Q(k))², meaning that it depends on k only through Q(k), X(k), with:
dπ^k/dk(Q(k), X(k)) = -(1/n)·(Γ_qk/Δ)·(Q(k))³·[(b/n)·h''(X(k)) + (1 + k)h(X(k))·h''(X(k)) - 2(h'(X(k)))²]
where Δ > 0 by second order conditions, Γ_qk > 0 (see Appendix 3), and using (C2):
(1 + k)h(X)·h''(X) - 2(h'(X))² > 0 for any k ≥ 0. Thus dπ^k/dk(Q(k), X(k)) < 0: increasing k (moving from the market share rule, k = 0, to the per capita rule, k = 1) yields a decrease in firms' individual profit for any given n (i.e. π(Q^ms, X^ms) > π(Q^pc, X^pc)).
In turn, evaluating (1) at Q(k), X(k), and differentiating in k gives (after substituting with the first order conditions):
dSW/dk(Q(k), X(k)) = [b/n + (n - 1)(1 - k)h(X(k))]·Q(k)·dQ(k)/dk
with dQ(k)/dk > 0. Hence, we obtain dSW/dk(Q(k), X(k)) > 0 for any k ≥ 0: increasing k (moving from the market share rule, k = 0, to the per capita rule, k = 1) increases social welfare for any given n (i.e. SW(Q^ms, X^ms) < SW(Q^pc, X^pc)).
The finding in Proposition 6 calls for the analysis of the long run equilibrium reached under each rule. 16 As more firms enter the market, the equilibrium profit decreases, and it becomes harder for a Competition Authority to ignore firms' private incentives to enter the market. The rationale is that since the profit is higher under the market share rule, all else equal (for any n given), the zero profit condition will be met under the per capita rule for a number of firms smaller than under the market share rule. Thus in the long run, more firms are in the industry under the market share rule; hence it could happen that the latter is associated with higher levels of aggregate output and care expenditures and, finally, a higher level of social welfare, than the per capita rule. Indeed, the next proposition shows that this is not the case.
Proposition 7 (long run equilibrium, firms' entry, and damages sharing) Assume Courts use an optimal damages multiplier, and that firms make zero profit; then, the levels of aggregate output, aggregate care expenditures, and social welfare reached under the per capita rule are larger than under the market share rule.
Proof: For more details, see Appendix 3. Using L^k_i(q_i, x_i) = λh(X)[k(1/n)Q² + (1 - k)q_iQ], we find that the associated symmetric long run equilibrium Q(k), X(k), n(k) is such that dQ(k)/dk > 0, dX(k)/dk > 0 and dn(k)/dk < 0 (hence Q(0) = Q^ms < Q(1) = Q^pc, X(0) = X^ms < X(1) = X^pc but n(0) = n^ms > n(1) = n^pc). The zero profit condition gives (with λ = n(k)): π^k(Q(k), X(k), n(k)) = 0 ⇔ (a - bQ(k))Q(k) - X(k) = n(k)·h(X(k))(Q(k))²
Now, substituting in (1), we obtain the next expression for social welfare:
SW(Q(k), X(k), n(k)) = [b/2 + (n(k) - 1)·h(X(k))]·(Q(k))²
Differentiating in k, it comes that:
dSW/dk(Q(k), X(k), n(k)) = -(Γ^k_qk/Δ^k)·h(X(k))(Q(k))⁵·h''(X(k))·[b/(n(k))² + (1 - k)·((n(k) - 1)/n(k))·h(X(k))]
where Δ^k < 0 by second order conditions, Γ^k_qk > 0 (see Appendix 3). Thus dSW/dk(Q(k), X(k), n(k)) > 0 for any k ≥ 0: with the zero profit condition, increasing k (moving from the market share rule, k = 0, to the per capita rule, k = 1) leads to a higher social welfare level (i.e. SW(Q^ms, X^ms, n^ms) < SW(Q^pc, X^pc, n^pc)).
16 We are thankful to the referee for having suggested a detailed analysis of this issue.
Proposition 7 implies that the advantages (in terms of aggregate output, care expenditures, and social welfare level) of the per capita rule over the market share rule still survive in a long run equilibrium (zero profit), even though the latter rule allows more firms to enter the market than the former.
At that point, two issues are worthy of consideration before turning to the analysis of environmental liability regimes based on a negligence test.
Remark 5 (No liability regime): Let us consider the impact of a no liability regime. In this case, each firm i chooses a level of output and a level of care in order to maximize its profit: π^nl(q_i, x_i) = P(Q)·q_i - x_i. Thus, firms do not invest in care, x^nl = 0, and we obtain the standard symmetric Cournot-Nash equilibrium where firms choose a level of output equal to q^nl = a/((1 + n)b), associated with an aggregate output Q^nl = na/((1 + n)b). It is straightforward that compared with the equilibrium with strict liability (whatever the damages rule), the no liability regime yields the highest level of output, q^nl > q^pc > q^ms, and the lowest level of care expenditures, x^nl = 0 < x^pc < x^ms. However, the comparison with the social optimum yields an ambiguous result; it can be verified that b > (<) 2nh(x^sw) ⇒ q^sw > (<) q^nl. Moreover, we obtain dSW/dn(Q^nl, 0) = [b/n - 2h(0)]·Q^nl·dQ^nl/dn where dQ^nl/dn > 0. On the one hand, lim_{n→∞} Q^nl = a/b and lim_{n→∞} dQ^nl/dn = 0, implying that lim_{n→∞} dSW/dn(Q^nl, 0) → 0; on the other hand, Q^nl(n = 1) = a/(2b) and dQ^nl/dn|_{n=1} = a/(4b), implying that dSW/dn(Q^nl, 0)|_{n=1} > 0 under H1a. As a result, the number of firms that maximizes Social Welfare under no liability is n^nl = b/(2h(0)).
Remark 6: Our analysis of the market share rule (Propositions 2, 5) reaches conclusions that are close to Marino (1989), although within a very different framework. Marino uses the standard "proportional harm model" where the expected harm is divisible (the equivalent with our notations being H = Σ_{i=1}^n h(x_i)q_i), and finds that under the market share rule (without an optimal damages multiplier) firms undertake inefficient decisions of care (both in terms of response to any output level, and in terms of equilibrium level); moreover, he shows that the equilibrium care level decreases with firms' entry, and that Social Welfare is maximized for a finite number of firms. In contrast, recent papers afford formal arguments according to which the market share rule allows to reach an efficient goal. The papers by [START_REF] Dehez | How to share joint liability: a cooperative game approach[END_REF] and [START_REF] Ferey | Multiple causation, apportionment and the Shapley value[END_REF] rely on cooperative game-theoretic frameworks to analyze apportionment rules that satisfy both efficiency and fairness criteria; however, they consider the case of an exogenous harm, and this way eliminate the issue of care incentives and the interdependence with endogenous decisions of production, as well as the backward influence of market interactions. [START_REF] Daughety | Cumulative Harm and Resilient Liability Rules for Product Markets[END_REF] 17 discuss a case with correlated and cumulative individual harms (the equivalent with our notations is H = (Σ_{i=1}^n h(x_i)q_i)²); the resulting aggregate expected harm being divisible, this enables them to introduce a "modified market share" rule whose equivalent with our notations here is h(x_i)q_i / Σ_{j=1}^n h(x_j)q_j. Daughety and Reinganum show, under this modified market share rule and an optimal damages multiplier, that the oligopoly equilibrium has several properties that are close to those described for the per capita rule in our framework (see Proposition 4), with the noticeable exception that the equilibrium level of individual care in their set up increases with the number of firms (whereas it is decreasing in our set up). Remark that this modified market share rule is not relevant for our case since the environmental harm is indivisible, 18 and going back to the "simple" market share rule q_i/Q makes sense in this context. However, we find instead that the use of the basic per capita rule for apportioning indivisible environmental damages between firms has nice properties for care incentives and production decisions, since it introduces no distortion in competition above those due to strategic market interactions. In contrast, the market share rule will exert perverse incentives on firms regarding their choice of output, since the liability cost increases with their market power, thus aggravating the issue of output underprovision due to imperfect competition. Interestingly enough, this result mirrors an old debate about the comparison between the per capita rule and the market share rule in personal injury cases (see [START_REF] Kornhauser | Sharing Damages among Multiple Tortfeasors[END_REF]), although we reach the exact opposite conclusion, as far as imperfect quantity competition is concerned.
Oligopoly equilibrium under negligence
CERCLA as well as the Environmental Liability Directive of the European Union are focused on specific polluting or dangerous activities and/or operations, these being explicitly recognized as subject to strict liability. By default, those not listed are subject to negligence. Having provided the analysis of strict liability, we now investigate the case of the negligence regime.
Assume Courts rely on a negligence test considering a flexible standard of care: 19 a firm will be considered as not negligent to the extent that its care expenditures prove to be an efficient behavioral response, in the sense of the best care expenditure the firm may have chosen considering any relevant foreseeable instances that Courts consider as being adapted to the situation - in which case the firm will again avoid any liability cost (i.e. s_i = 0). It is natural to consider in that case that Courts promote a standard of care x̄ = x̄_i(q_i, q_{-i}, x_{-i}) defined as:
18 Remark also, to have a complete picture, that in the situation investigated by Marino (1989), the "modified market share" rule indeed provides firms with efficient incentives in care decisions, without requiring any damages multiplier to achieve this result. The proof is obvious: with joint but divisible and non cumulative harms, the individual liability cost borne by each firm under strict liability with this modified market share is equal to the individual harm:
L_i(q_i, x_i) = [h(x_i)q_i / Σ_{j=1}^n h(x_j)q_j]·Σ_{j=1}^n h(x_j)q_j = h(x_i)q_i
Thus, each firm faces the social cost it imposes to the society, and the efficient goal is attained. 19 A classical distinction in the Law & Economics literature [START_REF] Kaplow | Rules versus standards: An economic analysis[END_REF][START_REF] Sullivan | The justices of rules and standards[END_REF] is made between a rule and a standard of care. When Courts use a rule of care, a firm will be considered as not negligent once it provides at least a predetermined fixed level of care (x̄, i.e. the due care level). A natural candidate for such a due care level, usually considered in the literature, is the socially optimal level of care x̄ = x^sw (see Charreire and Langlais 2020).
-h'(x̄_i + x_{-i})·(q_i + q_{-i})² = 1 (11)
for any positive (q_i, q_{-i}, x_{-i}). The rationale for Courts is that an individual firm will not be seen as liable as long as its contribution to safety has been designed in order to minimize the expected cost of the accident, given its own output and its competitors' decisions (including output and care), i.e.
x̄_i(q_i, q_{-i}, x_{-i}) = arg min_{x_i} (h(x_i + x_{-i})·(q_i + q_{-i})² + x_i), whatever (q_i, q_{-i}, x_{-i}).
Let us show that in this regime of negligence, there exists a Nash equilibrium where all firms comply with the standard of care.
Assume that one firm is negligent, while its n - 1 competitors do comply with the standard required.
The profit of the non-compliant firm is: π̂^neg(q_i, x_i) = (a - bQ)q_i - x_i - Q²h(X), since it bears the full external cost. As a result, it faces the liability cost of strict liability with the per capita rule when λ = n, implying that the firm chooses a level of care that satisfies (11) - this amounts to saying that the deviant firm holds inconsistent beliefs: it should not have expected to bear any liability cost, since Courts would not have concluded that it was negligent.
As a result, the equilibrium now is such that any firm abides by the standard of care x̄_i(q_i, q_{-i}, x_{-i}), and the individual output maximizes the profit:
π̂^nl(q_i, x̄_i(q_i, q_{-i}, x_{-i})) = (a - bQ)q_i - x̄_i(q_i, q_{-i}, x_{-i}) (12)
under x̄_i(q_i, q_{-i}, x_{-i}) defined by the constraint (11). The first and second order conditions are given in Appendix 4, such that the symmetric Cournot-Nash equilibrium where q_1 = ... = q_n = q^neg and x_1 = ... = x_n = x^neg solves the system:
a - b(1 + n)q = 2[(h'(nx))²/h''(nx)]·nq (13)
-h'(nx)·(nq)² = 1 (3)
The RHS in condition (13) shows that the marginal cost of liability accruing to each firm has now a more complex expression compared with (2); it still depends on the expected harm, although in a more elaborate way (captured through the first two derivatives of h(X)). This implies that negligence associated with a standard of care outperforms strict liability, as we show now:
Proposition 8 (negligence with a standard of care vs strict liability, and social optimum) i) Negligence with a standard of care leads to a level of output and a level of care larger than under strict liability associated with an optimal multiplier λ = n (either with the per capita or the market share rule: q^neg > q^pc > q^ms and x^neg > x^pc > x^ms). ii) If b is large (small) enough, then negligence with a standard of care yields equilibrium levels of output and care smaller (respectively, larger) than their optimal values (b large enough ⇒ q^neg < q^sw and x^neg < x^sw; b small enough ⇒ q^neg > q^sw and x^neg > x^sw).
Proof. i) Under (C2) we have h(nx) > 2(h'(nx))²/h''(nx) and thus h(nx) > (h'(nx))²/h''(nx); as a result (comparing the RHS in (5), (8) when λ = n with (13)): q^ms(x) < q^pc(x) < q^neg(x) ∀x. On the other hand, x^ms(q) = x^pc(q) = x^neg(q) ∀q if λ = n. Hence: q^ms < q^pc < q^neg and x^ms < x^pc < x^neg.
ii) See also Appendix 4. By construction, x^sw(q) = x^neg(q) for any feasible q > 0. From the comparison of the LHS in (2) and (13), it comes that a - (1 + n)bq < a - nbq; in contrast, the comparison of the RHS shows that 2nqh(nx) > 2nq(h'(nx))²/h''(nx). Hence the comparison of q^sw(x) and q^neg(x) at any x > 0 is ambiguous. So it is at equilibrium. In Appendix 5, it is shown that: i) if b > 2n[h(nx) - (h'(nx))²/h''(nx)], it comes that q^sw(x) > q^neg(x) for any feasible x > 0; thus we obtain x^sw > x^neg and q^sw > q^neg; but ii) if b < 2n[h(nx) - (h'(nx))²/h''(nx)], then q^sw(x) < q^neg(x) for any feasible x > 0; and thus we have x^sw < x^neg and q^sw < q^neg.
Part i) of Proposition 8 reflects that the marginal cost of liability under negligence with a standard of care is smaller than under strict liability associated with the per capita rule, at any level of care. The intuition is that the incentives constraint (11) affords firms a strategic advantage: anticipating that its care activity will be an efficient response to the output decision, each firm faces a smaller (marginal) cost of liability at any level of output. As a consequence, the equilibrium output and care levels under negligence are larger than under strict liability (whatever the damages rule).
Part ii) shows that under negligence with a standard of care, firms face once more two opposite incentives regarding the provision of output. On the one hand, firms' marginal market benefits under negligence with a standard of care are smaller than their socially optimal value (this is the standard distortion due to imperfect competition) - this reduces the incentives to produce compared to the social optimum, at any level of care. On the other hand, the marginal cost of liability accruing to firms under negligence with a standard of care is lower than its socially optimal value (according to the convexity of the expected harm function) - this now increases the incentives to produce under negligence, compared to the optimum, at any level of care. The net effect at equilibrium is ambiguous, and depends on the relative size of the slopes of the two marginal market proceeds (mainly, the value of b), and the slopes of the two marginal costs of liability (convexity of the expected harm function).
We show now that the equilibrium attained in this regime is inefficient from a social point of view, with either too much or, in contrast, not enough of both output and care.
Regarding the impact of firms' entry, the consequences are described in the next proposition.
Proposition 9 Assume environmental liability law is designed according to negligence with a standard of care; then: i) The individual output and care may increase as well as decrease with the number of firms. ii) The aggregate output and care levels increase with the number of firms. iii) As n → ∞, the equilibrium industry converges to levels (Q^neg_∞, X^neg_∞) above the social optimum (Q^sw, X^sw). iv) If b is large (small) enough, fostering firms' entry increases (decreases) Social Welfare. v) Social Welfare is maximized for a finite number of firms, n^neg = b·h''(X^neg)/[2(h(X^neg)·h''(X^neg) - (h'(X^neg))²)].
Proof. i) See Appendix 6. Specifically, we show that if b is large (small) enough, then the individual output decreases (increases) with n. The effect on individual care requires more qualifications: we show that if b is large enough and (C3) holds, then the individual care decreases with n; in contrast, if b is small enough and (C3) does not hold (2h'(nx) + nxh''(nx) < 0 for any n, x), then the individual care increases with n. ii) Using (13)-(3), the aggregate output and aggregate care levels satisfy:
a - b(1 + 1/n)Q = 2[(h'(X))²/h''(X)]Q (13bis*)
-h'(X)·Q² = 1 (3bis*)
We will denote as Q^neg(X) the aggregate output level that solves (13bis*), for any given value of aggregate care, while X^neg(Q) is the aggregate care level that solves (3bis*), for any given value of the aggregate output. According to (3bis*), the aggregate care expenditures X^neg(Q) do not depend on n, but increase with Q. According to (13bis*), the marginal market proceeds increase with n, hence Q^neg(X) increases with n at any X. Hence ii). iii) In the limit case where n → ∞, then according to the LHS in (13bis*), the marginal market proceeds tend to the market price; given that the marginal cost of liability still satisfies (h'(X))²/h''(X) < h(X) ∀X > 0, the aggregate output and care levels do not reach their socially optimal values (Q^sw, X^sw) as n → ∞, but in contrast take some positive values larger than (Q^sw, X^sw) that solve (13bis*)-(3bis*) in the limit. iv) Evaluating (1) at (Q^neg, X^neg) and differentiating in n (using (13bis*)-(3bis*)), we obtain, denoting V^neg ≡ (h'(X^neg))²/h''(X^neg):
dSW/dn(Q^neg, X^neg) = [b/n - 2(h(X^neg) - V^neg)]·Q^neg·dQ^neg/dn
The result iv) follows since dQ^neg/dn > 0. v) In Appendix 6, we show that Social Welfare is maximized neither with perfect competition (n → ∞) nor with a monopoly (n = 1); there exists a finite number of firms, n^neg > 1, that maximizes SW(Q^neg, X^neg), which is the solution to dSW/dn(Q^neg, X^neg) = 0, and solving yields
n^neg = b/[2(h(X^neg) - V^neg)] = b·h''(X^neg)/[2(h(X^neg)·h''(X^neg) - (h'(X^neg))²)].
Part i) of Proposition 9 reflects that negligence with a standard of care affords firms a strategic advantage with regard to their output and care decisions, and this leaves them with a degree of freedom that materializes through the adaptation to market conditions and changes in their market power. On the one hand, depending on the price sensibility of market demand, firms may choose to increase or decrease their output following firms' entry; on the other hand, depending on whether the marginal benefits of care activities decrease or increase, firms may decrease or increase their investments in care. Although the public good effect is removed by the incentives constraint (11), this liability regime introduces a direct link between equilibrium care expenditures at firm level and the variation of the marginal benefits of care with n. Part ii) illustrates that once the public good effect is neutralized thanks to the incentives constraint (11), firms' entry yields the normal, intuitive effect on the aggregate output and aggregate care. Part iii) may be understood as follows: as n becomes great enough, the marginal market proceeds increase, while the average marginal liability cost for the industry (RHS in (13bis*)) is constant: thus the aggregate output and aggregate care expenditures reach levels above their optimal values. Parts iv) and v) follow: an excessive aggregate output may entail a loss of welfare; thus restricting firms' entry may be socially worth in this case.
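As a complement, the ranking in Proposition 8 and the overshooting result in Proposition 9 iii) can be visualised numerically. The sketch below is an editor's illustration only; it reuses the arbitrary primitives of the earlier sketches, for which 2(h')²/h'' equals (2/3)h, so the negligence marginal liability cost lies strictly below the per capita one at every care level.

from scipy.optimize import brentq

a, b = 30.0, 4.0
h = lambda X: (1.0 + X) ** -0.5

def X_from_Q(Q):                                   # care condition -h'(X)Q^2 = 1 for this h
    return max((Q ** 2 / 2.0) ** (2.0 / 3.0) - 1.0, 0.0)

def Q_eq(n, marginal_liability):
    # a - b(1 + 1/n)Q = marginal_liability(X)*Q, with X from the care condition
    f = lambda Q: a - b * (1.0 + 1.0 / n) * Q - marginal_liability(X_from_Q(Q)) * Q
    return brentq(f, 1e-3, a / b)

per_capita = lambda X: 2.0 * h(X)                  # RHS of (5bis*), lambda = n
negligence = lambda X: 2.0 * h(X) / 3.0            # 2*(h')^2/h'' for this particular h
Q_sw = brentq(lambda Q: a - b * Q - 2.0 * h(X_from_Q(Q)) * Q, 1e-3, a / b)

for n in (2, 10, 100):
    print(f"n={n:3d}  Q_pc={Q_eq(n, per_capita):.3f}  "
          f"Q_neg={Q_eq(n, negligence):.3f}  Q_sw={Q_sw:.3f}")

At every n the negligence output exceeds the per capita output, and for n large enough it also exceeds the social optimum, in line with Propositions 8 i) and 9 iii).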
6 Environmental law and competition policy: a tentative interpretation
Our results illustrate that when environmental harms are characterized by indivisibility, strict liability fails to induce an efficient rule of care from firms, unless Courts consider optimal damages multipliers.
When Courts do so, then the specific sharing rule they use for apportioning total harms is not innocuous:
while the per capita rule does not distort firms' output decisions, the market share rule adds to the standard competitive externality (which drives the marginal market proceeds downwards), because it makes firms bear an excessive marginal cost of liability, which aggravates the underprovision of output.
In turn, negligence associated with a standard of care proves to be an effective rule in obtaining that firms abide by an efficient rule of care, but provides firms with a strategic advantage, which they use in setting their output and care, according to market conditions, above or below the socially optimal levels. In all, any environmental liability law alone fails in reaching the social optimum: strict liability has a tendency to reach an equilibrium with too little care and output, while negligence may deliver both outcomes at equilibrium, i.e. too much or not enough of both care and output. Thus, there is room for other kinds of public interventions. The point we now turn to is to discuss the constraints environmental laws put on a Competition Authority, and when competition policy may be oriented with respect to environmental goals.
Without going into details (see above), let us simply remind here that the results of the paper reflect nothing but the way the specific design of the environmental law shapes the liability cost borne by a firm, and the way this individual liability cost varies with the number of firms (because of damages sharing rules, and/or because of returns to scale in care activity). The debate in the public arena, although not explicitly articulated, seems to focus on protective measures at the firm/individual level. The paper shows it is worth distinguishing between behaviors at the firm/individual level, behaviors at the industry/aggregate level, and finally the general move towards the social optimum.
In the different scenarios we have analyzed, the individual output and individual care levels are both decreasing with the number of firms on the market, these results being obtained under very simple assumptions (H1a, b and c). The additional assumption H2 is important to consider only when optimal damages multipliers are introduced; moreover, relaxing H2 in this case does not necessarily reverse the comparative statics for the individual care level. 20 Given the strategic complementarity between output and care in our model, this is hardly a surprise: both individual output and care levels decrease with firms' entry. Does it support the view according to which less competition (the preservation of their market power) allows insider firms, at the individual level, to invest more in the preservation of the environment? Indeed, in the present set up, this reflects nothing other than the fact that firms adapt their care expenditures to their size and to the cost of liability - given that less competition would increase market power and imply a larger individual output, individual care expenditures would also be larger to avoid an excessive cost of liability.
Most important, our analysis suggests at the same time that the aggregate level of care may have erratic reactions following (restrictions to) firms' entry. It depends on the specific design of the environmental liability law, but also on the properties of the technology of care (including the cost of care, and the environmental impact of care activities and productions). Said differently, the environmental law puts some constraints on a Competition Authority, and for achieving better decisions the latter generally needs two different sets of information. On the one hand, it requires an understanding of the legal and institutional design regarding the way Courts implement the environmental liability law, which is a practical issue; as we remind before, CERCLA and the European Directive operate at first glance a clear separation between activities that are subject to strict liability and those driven by negligence.
However, the interventions of a Competition Authority are designed to improve competition on markets in the sense of business. Thus, identifying whether the product on the market is subject to strict liability rather than negligence may be uneasy (when it combines activities that are targeted by strict liability and others by negligence). Moreover, although the distinction between strict liability and negligence is a matter of statute law, it is not the case with damages sharing rules; it may be a matter of precedents (Courts may apportion damages between multidefendants according to one rule or the other, following some general guidelines, or conventional use, or the evolution of the doctrinal debate), or it may be a right entitled to Plaintiffs. On the other hand, the Competition Authority needs to know the cost structures and benefit curves associated with safety activities in the different businesses and markets at scrutiny, and the assessment of existing returns to scale - this is an empirical issue, but also a technical point, since a high level of engineering expertise is required.
20 To be a little more specific, relaxing H2: 1) may not necessarily change dx^pc/dn < 0 or dx^ms/dn < 0 into a positive sign, as long as the convexity of the expected harm is large enough (i.e. the term in (C2) is large enough); 2) and changes dx^neg/dn < 0 into dx^neg/dn > 0 only when b is low enough.
For the purpose of this discussion, Tables 1 to 3 summarize our central results for a constant marginal cost of care, and add those for a more general cost of care (assuming that c(0) > 0, c'(x) > 0, c''(x) > 0, and c'''(x) ≥ 0 ∀x; cf. the Appendix). Skipping to the most clear-cut result, we find that when the marginal cost of care is increasing (and associated with a large degree of convexity), it is very likely that aggregate care expenditures increase with firms' entry (thus, decrease with restrictions to firms' entry). This holds under strict liability - whatever the damages sharing rule, and whether an optimal damages multiplier is used or not - as well as under negligence (with a standard of care). In turn, when the marginal cost of care is constant (or with low returns to scale, i.e. c''(x) → 0), the impact of (restrictions to) firms' entry on aggregate care expenditures may depend in a complex manner on the specific design of the liability law, as well as on the elasticity of market demand. Once again going to the essence, aggregate care increases for sure with firms' entry: 1) under strict liability associated with the per capita rule and an optimal damages multiplier, or 2) under the negligence rule with a standard of care. Apart from such designs/cases, things are less predictable with strict liability, and as a consequence, the fine tuning of competition policy requires more information, some of it quite commonplace and standard for a Competition Authority (value of the elasticity of demand, number of firms), but some of it less usual, such as information regarding the way Courts implement the law (use of sharing rules). Our results suggest that the opportunity to use damages multipliers is of great importance here; on the one hand, damages multipliers shape the incentives to take care at the individual level; on the other hand, their impact on aggregate care expenditures may dramatically change with the liability regime to which they are associated. Under strict liability, aggregate care expenditures always increase with entry restrictions when suboptimal (no) damages multipliers are used; but they decrease (may increase or decrease) with entry restrictions if an optimal damages multiplier is combined with the per capita rule (market share rule). It is worth recalling that while CERCLA considers the use of "punitive damages", their use is forbidden up to now in the European context outside of competition law. The point is that suboptimal (no) damages multipliers lead to distortions in care and output decisions at the individual level, and restrictions to firms' entry produce rule-specific distortions in aggregate care levels, and thus welfare losses.
                 λ = 1                                        λ = n
c''(x) = 0       dQ^pc/dn > 0; dX^pc/dn < 0                   dQ^pc/dn > 0; dX^pc/dn > 0
c''(x) > 0       dQ^pc/dn > 0; dX^pc/dn ≷ 0                   dQ^pc/dn > 0; dX^pc/dn > 0
                 (c''(x) large ⇒ dX^pc/dn > 0)
TABLE 1 - Aggregate equilibrium, firms' entry, and the per capita rule
The final challenge is whether there exist liability rules flexible enough, in combination with competition policy, to improve Social Welfare. Our results suggest as a first best policy mix, achieving higher aggregate output and care expenditures as well as higher Social Welfare, the association of an ("ideal") environmental liability law based on three pillars - a regime of strict liability, a damages sharing arrangement given by the per capita rule, an optimal damages multiplier - on the one hand, and on the other hand, any tools at the disposal of a competition policy focused on the maximization of consumers' surplus (here, an increase in the aggregate output thanks to firms' entry). Each of these tools is in the hands of an independent body (Courts enforce environmental liability law; the Competition Authority implements the competition law), but this policy mix requires a low degree of coordination: as long as each body commits to its targeted objective, firms undertake efficient decisions at the individual level (both in terms of care and in terms of output), aggregate expenditures for environment preservation increase with the degree of competition on the market (towards the socially optimal level), and firms' entry is Social Welfare improving.
Conversely, an unusually lenient competition policy (preserving firms' market power, through restrictions to firms' entry) may be justified on the grounds that the existing design of the environmental liability law departs from the ideal one, and is constraining the Competition Authority's action. The rationale is that any design of environmental liability law (other than our "ideal" law) will fail to increase aggregate care expenditures and improve Social Welfare when there exists an excessive number of firms on the market - thus restricting market access may be a kind of second best policy for a Competition Authority. But as explained above, this needs a close scrutiny of the situation, case by case, in order to reach a fine tuning - otherwise, the aggregate care expenditures may decrease with restrictions to firms' entry. When Courts depart from the per capita rule to adopt the market share rule, knowledge of the price sensibility of market demand, as well as of the specific characteristics of the technology of safety (cost structure for care, up to c''; and productivity of care, h''), is determinant to establish that restricting firms' entry should improve aggregate care and Social Welfare. Instead, when the assessment of liability is set according to a negligence test, firms will always adhere to a (flexible) standard of care, but reaching the fine tuning of firms' entry is even more demanding for a Competition Authority: this means collecting even more specific information on the technology of safety (up to c'''), not only on the price-elasticity of market demand.
Last but not least, our analysis shows that our so-called "ideal" environmental liability law enables a traditional competition policy (focused on the improvement of competition in the industry) to move the equilibrium closer to the social optimum (as n → ∞); in contrast, for any other liability regime, there exists a finite number of firms that maximizes Social Welfare. The consequence is that adopting restrictions to firms' entry cannot be seen as a second best solution given the institutional design of the environmental liability law, unless the current number of firms is above this threshold. This is an empirical/practical issue, but may be an uneasy job to establish. 21
Concluding remarks
Our analysis rests on a simple framework (linear demand, durable care), but relaxing such assumptions will not change the general story (see for example [START_REF] Charreire | Apportioning indivisible environmental harms and the Joint and Several Liability Doctrine[END_REF]). We are also very focused here on a rough instrument (firms' entry) for a Competition Authority; additional work in different competitive environments will be useful to complete the picture. The case of cartels and various forms of collusion will be of specific interest. In this perspective, Baumann, Charreire and Cosnita-Langlais (2020) develop the analysis of cartel stability under an environmental liability law based on the market share rule.
Remark also that the different points addressed here, as well as the conclusions of the paper focused on the environmental liability law, extend very generally to any situation with "third-party" victims (that is, victims having no economic nor contractual relationships with the injurers), e.g. injurers competing on the same legal market, producing a good with potentially harmful consequences for human health, society and so on (where the victims are not the consumers of the good produced by firms). 22 Most of the conclusions will also easily extend to consumers' harms as the result of competition distortions. Nonetheless, this needs more scrutiny. [START_REF] Friehe | Product liability when harm is incurred by both consumers and third parties[END_REF] provide the analysis of harms to both consumers and third-party victims for a monopolistic market, and find significant departures related to the comparison of strict liability and negligence, for example.
Finally, remark that (optimal) damages multipliers appear as an important pillar of the first best as well as second best policies in our paper. These so-called "punitive damages" are still a matter of debate in Europe. With punitive damages, effective damages paid by each firm represent a multiple of effective harms to the environment, falling into public pockets or providing funding for a Compensation Fund for victims of environmental harms. Such high levels of effective damages raise, and at the same time may help in solving, the judgment-proofness problem or the disappearing defendants issue. These two important problems are beyond the scope of the present paper, which considers symmetric firms. The judgment-proofness issue requires a more detailed analysis of liability sharing in an asymmetric environment, whether this is understood as considering an asymmetric oligopoly 23 or vertically differentiated markets; the disappearing defendant problem calls for a dynamic approach. This will be the topic of future research.
                 λ = 1                                                      λ = n
c''(x) = 0       b high (low) ⇒ dQ^ms/dn > (<) 0; dX^ms/dn < 0              b high (low) ⇒ dQ^ms/dn > (<) 0; b high (low) ⇒ dX^ms/dn > (<) 0
c''(x) > 0       dQ^ms/dn ≷ 0 (c''(x) large ⇒ dQ^ms/dn > 0);                dQ^ms/dn ≷ 0 (b or c''(x) large ⇒ dQ^ms/dn > 0);
                 dX^ms/dn ≷ 0 (c''(x) large ⇒ dX^ms/dn > 0)                 dX^ms/dn ≷ 0 (b or c''(x) large ⇒ dX^ms/dn > 0)
TABLE 2 - Aggregate equilibrium, firms' entry, and the market share rule

c''(x) = 0                     dQ^neg/dn > 0                                dX^neg/dn > 0
c''(x) > 0, c'''(x) > 0        dQ^neg/dn ≷ 0 (b or c''(x) large ⇒ > 0)      dX^neg/dn ≷ 0 (b or c''(x) large ⇒ > 0)
TABLE 3 - Aggregate equilibrium, firms' entry and negligence with a standard of care
https://www.competitionpolicyinternational.com/eu-parliament-demands-fundamental-overhaul-of-competitionpolicy/.
2 One of us has analyzed the stability of cartels under the market share rule [START_REF] Baumann | Market collusion with joint harm and liability sharing[END_REF].
[START_REF] Daughety | Cumulative Harm and Resilient Liability Rules for Product Markets[END_REF] give several examples in the area of environment, food, and health. [START_REF] Friehe | Prevention and cleanup of dynamic harm under environmental liability[END_REF] deal explicitly with cases where the cumulative effect arises because of repeated accidents, and they analyze the issue of dynamic incentives under tort law.
1 Firms' symmetry is also considered in the literature dedicated to (product) liability and market structures, as well as the one about unconventional competition policies. We deal with the asymmetric case in Charreire and Langlais (2017).
2 Considering the different precautionary measures undertaken at a plant, one may argue obviously that some will have a durable nature and the others will be of a non-durable nature. We contrast the two "pure" cases for pedagogical reasons, whereas the literature about product liability usually assumes that care is non-durable.
3 Such restrictions are just a pure technical point, adding no intuition.
7 Although the [START_REF] Daughety | Cumulative Harm and Resilient Liability Rules for Product Markets[END_REF] paper addresses the issue of product liability, their results regarding strict liability are still relevant for environmental liability. In turn, what is specific to a product liability context is their comparative analysis of no liability, negligence vs strict liability - just as our comparative analysis of these different liability regimes is specific to the case of indivisible environmental harms we focus on; see above Remark 1.
1 As a matter of comparison, see the analysis of the consequences of mergers in European markets for mobile communications [START_REF] Genakos | Evaluating market consolidation in mobile communications[END_REF], or the debate about the introduction of a fourth operator on the French market [START_REF] Thesmar | L'impact macro-économique de l'attribution de la quatrième licence mobile[END_REF].
2 Authorized by Title III of the Superfund Amendments and Reauthorization Act (SARA), the Emergency Planning & Community Right-to-Know Act (EPCRA) was enacted by Congress as the national legislation on community safety. This law is designed to help local communities protect public health, safety, and the environment from chemical hazards.
23 In [START_REF] Charreire | Compensation of third party victims, and liability sharing rules in oligopolistic markets[END_REF] we provide the comparative analysis of the market share and the per capita rules for an asymmetric oligopoly. We show that the market share rule induces more distortions in output decisions than the per capita rule. On the one hand, high-cost firms obtain a strategic competitive advantage when aggregate damages are shared according to the market share rule compared with the per capita rule, allowing them to reduce their output less than low-cost firms. On the other hand, the market share rule yields a distribution of equilibrium market shares that is less spread out than under the per capita rule, away from the socially optimal one.
email: [email protected]
Jérôme Fortineau
Nicolas Porcher
Simon Malandain
Séverine Boucaud-Gauchet
Effect of thermo-mechanical ageing on materials and interface properties in flexible microelectronic devices
Keywords: Flexible microelectronics, thermo-mechanical ageing, epoxy resin, durability, encapsulation
Introduction
The development of flexible microelectronic devices has led to a blossoming of new applications. Flexible microelectronics allows lightweight and conformable devices to be developed and used as biomedical implants, smart textile, or sensors for the internet of things. Münzenrieder et al developed flexible and stretchable oxide-based electronic sensors for skin and smart-implant uses [START_REF] Münzenrieder | Stretchable and Conformable Oxide Thin-Film Electronics[END_REF]. Carta et al. designed flexible sensors for monitoring biomedical parameters such as heartbeat, temperature, humidity, and respiratory rate. Most commonly, such sensors were designed and embedded in smart textiles [START_REF] Carta | Design and implementation of advanced systems in a flexible-stretchable technology for biomedical applications[END_REF].
To achieve such flexibility, however, micro-electronic components have to be interconnected to conformable flexible substrates. Thin, light, conformable, and flexible substrates offer a reliable alternative to rigid printed circuit boards (PCB). Several studies have presented polymers such as polyethylene terephthalate (PET), polyethylene naphthalate (PEN), and polyimide (PI) as suitable candidates to use as flexible substrates [START_REF] Rim | Recent Progress in Materials and Devices toward Printable and Flexible Sensors[END_REF][START_REF] Zardetto | Substrates for flexible electronics: A practical investigation on the electrical, film flexibility, optical, temperature, and solvent resistance properties[END_REF][START_REF] Macdonald | Latest advances in substrates for flexible electronics[END_REF].
Today, the development of flexible, stretchable, and reliable electronic circuits is an important research area. Gonzalez et al. used finite element modeling to determine the geometrical stresses in the metallic connection [START_REF] Gonzalez | Design and implementation of flexible and stretchable systems[END_REF] and arrived at the horseshoe as the optimal geometry. In their review, Rim et al. demonstrated that conductive nanoparticles, deposited via inkjet printing, can offer a suitable solution for processing flexible electronics [START_REF] Rim | Recent Progress in Materials and Devices toward Printable and Flexible Sensors[END_REF].
Like classical rigid electronic systems, flexible microelectronic devices are susceptible to physical conditions in their working environment. Both electrical circuits and silicon chips are subject to attack by humidity and oxygen from the air. To better prolong the lifetime of these devices, their microelectronic systems require protection by a conformable encapsulant. Here, the encapsulant protects against physical and chemical attacks from the external environment and mechanical impacts. In particular, it ensures water-tightness of the system [START_REF] Salem | Thin-film flexible barriers for PV applications and OLED lighting[END_REF]. Lewis has outlined the permeability requirement for flexible electronics [START_REF] Lewis | Material challenge for flexible organic devices[END_REF]. Although inorganic encapsulants such as glass readily meet these requirements, they are particularly impractical as flexible devices.
Several studies have dealt with the development of flexible encapsulation systems, especially those in the field of organic photovoltaics. For example, Gaume et al. developed an ultra-multilayer barrier based on polyvinyl alcohol (PVA) and clay for such applications [START_REF] Gaume | Optimization of PVA clay nanocomposite for ultra-barrier multilayer encapsulation of organic solar cells, 9[END_REF]. However, the durability of this multi-layer depends on the long-term stability of their composite materials and interfaces. Topolniak et al. studied the ageing of ethylene vinyl alcohol (EVOH)/zeolite composites for the encapsulation of flexible photovoltaics [START_REF] Topolniak | Applications of polymer nanocomposites as encapsulants for solar cells and LEDs: Impact of photodegradation on barrier and optical properties[END_REF] and it was later shown that the photo-oxidation of such composites could lead to a weakening of the barrier properties.
Bossuyt et al. studied the stability of flexible microelectronic devices under cyclic mechanical stretching [START_REF] Bossuyt | Cyclic endurance reliability of stretchable electronic substrates[END_REF]. They focused on the mechanical cycling effects on the electrical properties of microelectronic devices.
Fatigue rupture in electrical copper traces was observable after several hundred thousand cycles.
Although the combined effects of thermal ageing of materials and mechanical stress can occur in many flexible systems and lead to early failure, to our knowledge, only a few papers have dealt with this phenomenon.
In this paper, a flexible epoxy formulation based on ECC and EA is studied as a potential candidate for encapsulation of microelectronic devices. Despite the large number of papers dealing with epoxy resin durability, most have focused on bisphenol A diglycidyl ether (DGEBA) and epoxy amine-based resins [START_REF] Ernault | Origin of epoxies embrittlement during oxidative ageing[END_REF][START_REF] Ernault | Thermal-oxidation of epoxy/amine followed by glass transition temperature changes[END_REF][START_REF] Delor-Jestin | Thermal and photochemical ageing of epoxy resin -Influence of curing agents[END_REF][START_REF] Bondzic | Chemistry of thermal ageing in aerospace epoxy composites[END_REF]. [START_REF] Delor-Jestin | Thermal and photochemical ageing of epoxy resin -Influence of curing agents[END_REF] showed that the thermal and photochemical stability of DGEBA-based epoxy were both affected by the chemical nature of the curing agent used. Ernault et al. linked the embrittlement of such epoxies upon thermal ageing to changes in their glass transition temperature [START_REF] Ernault | Thermal-oxidation of epoxy/amine followed by glass transition temperature changes[END_REF] and β-transitions [START_REF] Ernault | Origin of epoxies embrittlement during oxidative ageing[END_REF]. The β-transitions are attributable to local motions of hydroxyether groups present in DGEBA [START_REF] Heux | Dynamic mechanical and 13C n.m.r. investigations of molecular motions involved in the β relaxation of epoxy networks based on DGEBA and aliphatic amines[END_REF]. Embrittlement of DGEBA-based epoxy resin is associated with changes in the β-transition temperature and intensity upon oxidative ageing [START_REF] Ernault | Origin of epoxies embrittlement during oxidative ageing[END_REF].
To be flexible and conformable, the chemical structure of the epoxy resins must be linear and saturated. Few articles have dealt with saturated epoxy ageing or used mechanical criteria to characterize ageing. Dupuis et al. [START_REF] Dupuis | Photo-oxidative degradation behavior of linseed oil based epoxy resin[END_REF] presented the photo-oxidative degradation mechanism of linseed oil based epoxy resin and its effects on the surface and mechanical properties. Their results showed that for shorter irradiation times (<150 h), crosslinking reactions induced an increase in nano-hardness, while scission reactions became more competitive at longer irradiations, resulting in a decrease in hardness. The chemical structure of the resin studied mainly consisted of aliphatic chains. In our study, the chemical structure is quite different, with the monomers used based on cyclohexene oxide derivatives. Thus, the main chain segment consisted of a cyclohexane ring linked to an oxygen bridge. The degradation mechanisms also differ from the ones reported in previous publications [START_REF] Delor-Jestin | Thermal and photochemical ageing of epoxy resin -Influence of curing agents[END_REF][START_REF] Bondzic | Chemistry of thermal ageing in aerospace epoxy composites[END_REF]. This paper aimed to identify and understand the failure mechanisms induced in flexible smartcards by environmental conditions, namely temperature and mechanical fatigue, during their use. The effects of the encapsulant and substrate ageing on the electrical performance and durability of flexible microelectronic devices are presented. This work primarily focused on the behavior of the polymer materials used as encapsulant and substrate, and on the interfacial integrity of the encapsulant/substrate bilayer systems.
Materials and Methods
Materials
The epoxy resin formulation used as encapsulant (named PNE) was developed and supplied by Protavic International.
The formulation comprised a mixture of two monomers:
3,4-epoxycyclohexylmethyl-3',4'-epoxy-cyclohexene carboxylate (ECC) and 3,4-epoxycyclohexyl adipate (EA). Their chemical structures are shown in Figure 1. A weight ratio of 1.5 parts of the ECC monomer to 1 part of the EA monomer was employed. Bis[4-(diphenylsulfonio)phenyl]sulfide bis(hexafluoroantimonate) (0.25-0.50 wt%), also presented in Figure 1, was used as a photo-initiator. The resin was filled with amorphous silicon oxide (30-40 wt%) and cured under UV radiation.
Sample preparation
Commercial PET films with a thickness of 120 µm were selected for their flexibility and used as the substrate.
A silver electrical circuit was deposited on the substrate and a silicon chip glued to it (Figure 2 a). These systems, called Test Vehicles (TV), were supplied by STMicroelectronics. TVs were encapsulated using a glob top of the PNE resin (Figure 2 b). The glob top architecture, chosen because it advantageously minimizes the amount of protective polymer, also reduced delamination damage by ensuring a lower interfacial surface contact with the substrate. Resin deposition was conducted in a clean room and employed an industrial Asymtek C-341 automatic dispenser. Curing used a 365 nm UV LED for 30 seconds at room temperature under an air atmosphere. Multilayer samples were prepared and used to study the interfacial quality of encapsulant/substrate bilayers using the peeling test (Figure 2 c). Epoxy resin films (17 mm width, 100 µm thick) were deposited on PET substrate film (120 µm thick) and cured under the same conditions as the free-standing films.
DSC analysis was employed to verify that the epoxy resin was 100% cured. No residual precursor was detected.
Ageing conditions
A total of six ageing conditions were performed based on industrial requirements for smartcard applications.
Thermal ageing was carried out in a ventilated oven at 120°C for up to 500 hours. Since there is no ageing temperature definition in the ISO10373 standard, the project partner STMicroelectronics proposed conducting thermal ageing at 120°C as recommended for microelectronics devices. This temperature also allowed the simulation of localized heating points that often appear in materials and interfaces during long-term mechanical fatigue tests.
Mechanical fatigue involved the imposition of bending cycles on the test samples by winding them around a roller at a constant frequency of 24 cycles per min, at room temperature. Ageing involved using a homemade bending bench (Figure 3 a). The principle of the mechanical tests is shown in Figure 3 b. Samples were rolled (5 000 cycles) around a 50 mm diameter roller (F1) and then subjected to optical microscopy or electrical testing. If no degradation was noticeable, the samples were again rolled (5 000 cycles) around a 25 mm diameter roller (F2). All ageing conditions performed on the samples are reported in Table 1. The results presented in this paper were obtained by analyzing up to 7 samples and 4 TVs per condition.

Characterization methods

DSC measurements were performed on a Mettler Toledo DSC 1 apparatus under an air atmosphere. Sample masses of 10 mg were placed in aluminum DSC crucibles and subjected to two heating-cooling cycles from 30°C to 250°C at a rate of 10°C/min. DSC analyses allowed us to ascertain curing completion and to monitor changes in the microstructure after ageing. The crystallinity rate (χc) for the PET substrate was calculated using a melting enthalpy for 100% crystalline PET of 140 J/g, in line with the literature [START_REF] Brandrup | Polymer handbook[END_REF].
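For clarity, the crystallinity calculation described above can be written explicitly; this is the standard relation, assuming no cold-crystallization correction, with the 140 J/g reference enthalpy taken from the literature cited above:

$$\chi_c\,(\%) = \frac{\Delta H_m}{\Delta H_m^{100\%}} \times 100, \qquad \Delta H_m^{100\%} = 140~\mathrm{J\,g^{-1}}$$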
Dynamic mechanical thermal analyses were done on a TA Instruments DMA Q800 under air. Rectangular shaped (18 mm x 9 mm) and 100 µm thick samples were analyzed. A tensile dynamic strain of 0.1% was imposed on the samples at a frequency of 1Hz. The samples were heated from -100°C to +100°C at a rate of 3°C/min. The glass transition temperature (Tg) and the β-transition were determined using the loss modulus thermograms. The full width at half maximum (FWHM) of the peaks was also determined.
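As an illustration of the data reduction described above, the following sketch extracts a loss-modulus peak temperature (Tg or Tβ) and its FWHM from a thermogram. It is only a minimal example: the function name, the linear-interpolation approach and the assumption of a single, well-separated peak are ours, not those of the instrument software.

```python
import numpy as np

def peak_and_fwhm(temperature, loss_modulus):
    """Return the peak temperature and full width at half maximum of a
    single, well-separated loss-modulus peak (e.g. the glass-transition
    region of a DMTA thermogram).  Assumes the curve rises monotonically
    up to the peak and decreases monotonically after it."""
    t = np.asarray(temperature, dtype=float)
    e2 = np.asarray(loss_modulus, dtype=float)
    i = np.argmax(e2)
    half = e2[i] / 2.0
    # Half-maximum crossings found by linear interpolation on each side
    t_left = np.interp(half, e2[: i + 1], t[: i + 1])
    t_right = np.interp(half, e2[i:][::-1], t[i:][::-1])
    return t[i], t_right - t_left
```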
Tensile tests were carried out on an Instron Microtester 5948 equipped with a 2 kN force cell. Rectangular samples (1.7 cm x 10 cm, 100 µm thick) cut from free-standing films were analyzed at room temperature, according to the NF ISO 527-3 standard. The preload was 0.5 N and the tensile speed 5 mm/min. Three specimens were analyzed for each ageing condition.
Thermo-gravimetric analyses (TGA) were performed on a Setaram Setsys 16/18 apparatus under an air atmosphere. Sample masses of 5 mg were heated from 25°C to 700°C at a rate of 10°C/min. TGA analyses were employed to determine and monitor the degradation temperature corresponding to a mass loss of 5%.
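The determination of the degradation temperature can be illustrated by the short sketch below; it assumes a monotonically decreasing TGA curve and hypothetical array inputs, and is not the Setaram software procedure.

```python
import numpy as np

def degradation_temperature(temperature, mass, loss_fraction=0.05):
    """Temperature at which the sample has lost `loss_fraction` of its
    initial mass (5% by default), interpolated from a TGA curve.
    Assumes the residual mass decreases monotonically with temperature."""
    t = np.asarray(temperature, dtype=float)
    residual = np.asarray(mass, dtype=float) / float(mass[0])
    target = 1.0 - loss_fraction
    # np.interp needs increasing x-values, so the curve is reversed
    return float(np.interp(target, residual[::-1], t[::-1]))
```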
Fourier transform infrared analyses (FTIR) were performed on 20 µm thick films in transmission mode with a JASCO FT/IR-6800 spectrophotometer. FTIR spectra used 32 scan summations at a 4 cm -1 resolution.
Determination of epoxy resin adhesion strengths on the PET substrate involved peeling tests performed at a peeling angle of 90° at room temperature. The peeling strengths were directly measured using an Instron
Microtester 5948 mechanical testing machine equipped with a 2 kN force cell and a specifically designed peeling bench. The applied load was monitored continuously during epoxy film peeling, at a constant displacement rate of 5 mm/s. The peeling force was determined using:
G = F × (1 − cos θ) / w
where G is the peeling force (in N/cm), F is the applied load in the stationary domain (in N), θ is the peeling angle (here 90°), and w is the epoxy resin band width (in cm). Three samples for each ageing condition were measured.
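As a worked illustration of this relation, the sketch below evaluates G for a 90° peel; the 3 N plateau load used in the example is arbitrary and not a measurement from this study.

```python
import math

def peel_strength(applied_load_N, angle_deg, width_cm):
    """Peeling force G (N/cm) from the plateau load F, the peeling angle
    and the width of the epoxy resin band."""
    return applied_load_N * (1 - math.cos(math.radians(angle_deg))) / width_cm

# Arbitrary illustration for a 90-degree peel on a 1.7 cm wide band
print(round(peel_strength(3.0, 90, 1.7), 2))  # 1.76 N/cm
```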
Optical microscopy was performed on a Keyence VHX-1000 microscope at 100X optical magnification. This magnification enabled delamination or crack formation to be detected.
Confocal Raman spectroscopy was used to study in-depth chemical structural changes during ageing.
Depth profiles of the TVs were recorded starting from the substrate down to the encapsulating resin. Raman spectra were recorded with an InVia Renishaw microscope using the 633 nm laser line. To allow analysis of the substrate and then of the epoxy resin, samples were placed with the substrate on top. The spectra were recorded at room temperature at various focal depths, from 10 µm at the surface to 200 µm for deep inner visualization of the sample, at 10 µm intervals.
To detect TV functional failures, electrical tests were performed by measuring the electrical resistance between the terminals of the chip (Figure 2 d) with a Fluke 175 true RMS multimeter. An increase in the measured value corresponded to device failure and was thus defined as the end-of-life criterion for TVs in our study.
Results and discussion
Epoxy resin characterization
The thermal and mechanical properties of the PNE epoxy resin were determined using DMTA, tensile tests and TGA in order to follow ageing-induced changes. DMTA thermograms of the epoxy resin are presented in Figure 4. The main peak of the loss modulus thermogram (Figure 4 a) is centered at +13°C. This peak is attributable to the glass transition of the PNE resin.
A secondary maximum at -67°C is attributable to the β-transition of the resin. Udagawa et al. studied the dynamic mechanical properties of UV-cured ECC and EA [START_REF] Udagawa | Dynamic mechanical properties of cycloaliphatic epoxy resins cured by ultra-violet-and heat-initiated cationic polymerizations[END_REF] and attributed the presence of β transition in both polymers to molecular motions of the cyclohexyl rings. As the same chemical structures are present in PNE resin, its β-transition is attributable to the same molecular motions.
Mechanical properties were determined at room temperature using tensile test measurements. Figure 5 a presents a stress-strain curve for the PNE epoxy resin. The epoxy resin films exhibited a brittle break. Both the Young's modulus and the strain at break were determined: the resin films presented a Young's modulus (E) of 33 ± 1 MPa and a strain at break (εB) of 22.9 ± 1.1 %. The TGA thermogram (Figure 5 b) showed that thermal degradation occurred in two stages. At the end of the experiment, 42.5 % of the original weight remained, corresponding to the silicon oxide fillers, which do not degrade at these temperatures. This suggests that the filler content of the studied samples was about 42.5 wt%.
Thermal ageing
Below, the effects of thermal ageing on the encapsulant and the substrate are discussed. Chemical and mechanical analyses for each specimen type are presented.
Epoxy resin ageing
It is important to note that the resin films presented yellowing upon ageing, indicating an alteration of their chemical structure. The evolution of the FTIR spectra with thermal ageing conditions is presented in Figure 6.
The spectral differences (x hours -reference) in the carbonyl domain (1850-1650 cm -1 ) are also presented. A white box hides a saturated part of the spectra to facilitate reading. Spectral band formation (upwards) and disappearance (downwards) were detectable on difference spectra.
The main change visible in the original spectra was the decrease in the broad band at wavenumbers above 3 000 cm-1. This band is attributed to O-H bonds from adsorbed water or alcohols present in the film. Within the carbonyl domain, the formation of bands at 1814, 1780 and 1764 cm-1 was observed. Whereas the vibration band at 1780 cm-1 corresponds to lactone formation [START_REF] Morlat-Therias | Polymer/carbon nanotube nanocomposites: Influence of carbon nanotubes on EVA photodegradation[END_REF], the bands at 1814 and 1764 cm-1 were attributed to acid anhydride formation.
A decrease in intensity of the band at 1710 cm -1 was noticeable. This band signaled the presence of ketones that degrade during thermal ageing.
Although these attributions relied on a reference table and previous literature findings, they should be confirmed by complementary analyses such as chemical derivatization treatments with NH3, SF4 or 2,4-dinitrophenylhydrazine (2,4-DNPH) [START_REF] Carlsson | Identification of Products from Polyolefin Oxidation by Derivatization Reactions[END_REF][START_REF] Wilhelm | Infrared identification of carboxylic acids formed in polymer photooxidation[END_REF]. Based on their specific reactivity with the different carbonyl functional groups, these treatments, used together with FTIR analysis, can identify the contributions to the carbonyl band. NH3 is reactive with esters, carboxylic acids and anhydrides; SF4 with carboxylic acids; and 2,4-DNPH with aldehydes and ketones. Table 2 summarizes the changes in the thermo-mechanical properties of the resin upon thermal ageing. The Tg and the β-transition were determined using DMTA. The FWHM of the peaks was also calculated. Whereas the Tg increased following thermal ageing, its FWHM remained unchanged. The β-transitions were unaffected by thermal ageing.
The Young's modulus was four times higher after thermal ageing while the strain at break decreased by a third. This change was well reflected through observed stiffening and embrittlement of the resin films.
The decomposition temperature of the resin increased from 294°C to 307°C. Polymer degradation mechanisms in the presence of oxygen involve both crosslinking and chain scission reactions. These reactions are antagonistic; their kinetics are not identical and can vary according to the degree of degradation, the ageing conditions and the nature of the polymer. Crosslinking reactions are usually linked to increases in both Young's modulus and Tg [START_REF] Sindt | Molecular architecture-mechanical behaviour relationships in epoxy networks[END_REF][START_REF] Rasoldier | Model systems for thermo-oxidised epoxy composite matrices[END_REF], whereas chain scission reactions have the opposite effect.
The marked stiffening of the studied resin, coupled with the increase in its glass transition temperature during thermal ageing, could be explained by the dominance of crosslinking reactions over chain scissions in the thermal oxidation mechanism. This interpretation is supported by the increase in decomposition temperature observed by TGA. Crosslinking reactions during ageing have been shown to significantly increase the polymer degradation temperature [START_REF] Patel | Durability of hygrothermally aged graphite/epoxy woven composite under combined hygrothermal conditions[END_REF]. Dupuis et al. reported a similar result in a study on the photo-degradation mechanism of a linseed-based epoxy resin [START_REF] Dupuis | Photo-oxidative degradation behavior of linseed oil based epoxy resin[END_REF]. They showed that crosslinking reactions control the degradation for shorter ageing times (200 h) and are associated with marked increases in the modulus.
Ernault et al. linked the embrittlement of DGEBA/TTDA and DGEBA/IPDA during thermal-oxidative degradation to crosslinking reactions and to a decrease in β-transition intensity [START_REF] Ernault | Origin of epoxies embrittlement during oxidative ageing[END_REF]. They attributed the intensity decrease of the β-transition to the oxidation of the hydroxypropylether groups of DGEBA, since this transition corresponds to their local motions. In the present study, the β-transition of the PNE resin was not affected by thermal ageing at 120°C for 500h. As this transition corresponds to the cyclohexyl ring motions, the molecular mobility of this moiety appears unaffected by the thermal-oxidative degradation of the epoxy resin. To better decipher the events that underpin the ageing of the bilayer assembly, PET substrate films were aged under the same conditions.
PET substrate ageing
The mechanical and thermal properties of PET films both before and after 500h at 120°C are summarized in Table 3. The Young's modulus, the strain at break and the melting temperature (Tm) remained unchanged after thermal ageing. PET crystallinity increased slightly, presumably as a result of thermal annealing. The Raman spectra of the PET substrate before and after 500h at 120°C are presented in Figure 7. The initial spectrum exhibits the conventional vibration bands of PET [START_REF] Lin | Chemical depth profiling of photovoltaic backsheets after accelerated laboratory weathering[END_REF][START_REF] Rebollar | Physicochemical modifications accompanying UV laser induced surface structures on poly(ethylene terephthalate) and their effect on adhesion of mesenchymal cells[END_REF]. The PET vibration bands themselves were not modified by thermal ageing; however, a broad peak covering the entire scan range appeared. Since the chemical structure of the PET substrate film should not be modified under these thermal ageing conditions, the observed peak broadening is probably due to fluorescence emanating from the adjacent PNE layer (see description in the next part).
After thermal ageing at 120°C for 500h, the chemical structure and mechanical properties of the PET substrate films were conserved. The parameter change of note was an increase in PET crystallinity, possibly due to the annealing process.
Bilayer and test vehicle ageing
PET/PNE bilayer systems and TV were all thermally aged under the same conditions. The bilayer systems were employed to characterize thermal ageing on the interface quality between encapsulant and substrate. TVs were used to determine the effects of thermal ageing on their functional properties (i.e. electrical properties).
The results of the peeling tests performed on bilayer systems are presented in Table 4. An adhesive rupture was observed both before and after ageing. The value of the peeling force G increased after thermal ageing at 120°C for 500h. The changes observed followed the same trend as both the Tg and the mechanical properties of the epoxy resin, while the substrate properties remained unchanged. The crosslinking reactions occurring in the epoxy resin during thermal ageing led to an increased peeling strength, although the exact mechanism underlying this phenomenon remains unclear. Accordingly, complementary experiments are needed to decipher this mechanism and will be the subject of a separate study. Both unaged and thermally aged TVs successfully passed the electrical test (Table 5).
After thermal ageing, the epoxy resin deposited on TV presented marked yellowing (Figure 8), whereas the PET substrate was unchanged. Similar results were obtained after free-film ageing. Two areas are distinguishable in the Raman depth profile of the reference TV (Figure 9). Intense vibration bands attributed to the PET substrate were detected for depths from 0 to approximately 100 µm. There were no identifiable vibration bands for depths over 100 µm; the Raman signal of the studied epoxy resin was very low compared to that of PET.
A broader vibration band for depths greater than 100 µm (i.e. in the epoxy resin area) was evident for the thermally aged TV. This band presumably arises from fluorescent groups derived from the chemical degradation of the epoxy resin. Indeed, the samples undergoing yellowing were also the ones that exhibited Raman spectra modified by fluorescence due to the thermal oxidation of the polymer. The yellowing of polymers during oxidative ageing has already been described [START_REF] Yang | Thermally resistant UV-curable epoxysiloxane hybrid materials for light emitting diode (LED) encapsulation[END_REF][START_REF] Allen | Aspects of the thermal oxidation, yellowing and stabilisation of ethylene vinyl acetate copolymer[END_REF][START_REF] Gardette | Fundamental and technical aspects of the photooxidation of polymers[END_REF]. Moreover, fluorescent oxidation products can form in polymer materials during oxidative ageing [START_REF] Peike | Indoor vs. outdoor aging: polymer degradation in PV modules investigated by Raman spectroscopy[END_REF][START_REF] Allen | Thermal and photo-chemical degradation of nylon 6,6 polymer: Part 1-Influence of amine-carboxyl end group balance on luminescent species[END_REF]. Accordingly, fluorescence serves as an indicator of the formation of oxidation products during thermo-oxidative ageing. These results are consistent with the FTIR results obtained on aged PNE films. The force needed to peel the epoxy resin from the substrate increased due to the crosslinking reactions occurring during thermal ageing. TV functional properties were not affected by this reinforcement, as the increase was not large enough to alter them.
Mechanical fatigue
Epoxy resin fatigue
The thermal and mechanical properties of the epoxy resins, following fatigue ageing under F2 conditions were determined using DMTA, tensile tests and TGA. The corresponding results are presented in Table 2.
The Young's modulus increased from a reference value 33 ± 1 MPa to 50 ± 6 MPa after fatigue ageing under F1 conditions, indicating a stiffening of the films. However, the stiffening recorded under F1 condition remained modest upon comparison, with the 133 ± 32 MPa recorded for thermal ageing at 120°C for 500h.
Between F1 and F2 fatigue ageing conditions, PNE films stiffening increased further. Fatigue ageing under F2 conditions did not affect the glass transition (Tg), β-transition (Tβ) or decomposition (Td) temperatures.
Additionally, the thermo-mechanical properties of PNE resin films were not significantly affected by fatigue at room temperature. In order to confirm these trends, fatigue experiments should be continued under the same or under harsher conditions.
PET substrate fatigue
The evolution of the PET substrate films properties subjected to mechanical fatigue appears in Table 3.
After fatigue ageing, the PET Young's modulus increased reflecting a stiffening of the substrate. The crystallinity ratio was modified, whereas the melting temperature was unchanged.
Micro-Raman spectra, obtained for PET substrates before and after mechanical fatigue under F1 and F2 conditions, are presented in Figure 10. Raman spectra after fatigue testing at ambient temperature remained unchanged, suggesting that the PET chemical structure was stable under these conditions. The observed stiffening of PET after F1 is well explained by the increase in crystallinity in the polymer.
However, the additional increase of stiffening, indicated by an increase in Young's modulus after F2, is nonetheless accompanied by a marked decrease in crystallinity back to its initial value. Another stiffening mechanism is involved in this case. The additional stiffening mechanism is presumably due to the presence of crosslinking reactions. For both F1 and F2 fatigue conditions, there was no sign of a modification of the chemical structure of PET. However, the chemical bond formed by the crosslinking reaction might not be active in Raman spectroscopy. Accordingly, complementary analyses are required to confirm the presence of crosslinking reactions.
Bilayer and test vehicle fatigue
The change in peeling force G between the PET substrate and PNE resin with fatigue ageing is reported in Table 4. The peeling force G remained stable after mechanical ageing, and adhesive ruptures were observed for each condition and each sample tested. The interface quality was unaffected under the fatigue conditions of the study.
TV electrical tests after fatigue under F1 conditions were positive. However, the tests were negative after condition F2 (Table 5), indicating a discontinuity in the electrical circuit. Under these harsher conditions, TV failure was observed even earlier, after 5 000 cycles on the 50 mm diameter roll followed by 3 000 additional cycles on the 25 mm diameter roll. Optical microscopy was performed on TVs to determine the origin of these failures. Figure 11 presents a microscopic image of a chip on a TV that failed the electrical test. This micrograph was captured through the transparent PET substrate to examine the electrical circuit under the chip.
In this micrograph, cracks (indicated by red lines) are visible within the glue layer at the interface between the substrate and the chip, formed under the harshest condition during mechanical ageing. These cracks are probably due to the marked difference in stiffness between the silicon chip and the PET film. Under these conditions, stress accumulates in the glue layer, leading to cracking. The chemical structure of both the substrate and the encapsulant layer presented no detectable chemical modification.
After mechanical fatigue, the studied epoxy resin presented slight stiffening while its other properties remained unchanged. A stiffening of the PET substrate was similarly accompanied by increases in its crystallinity but with no change in its chemical structure. These microstructure modifications are probably due to mechanical effects only. The frequency of mechanical cycles used was probably too low to induce a substantial rise in the internal temperature by the friction of polymer macromolecules. The chemical structure of both PET and epoxy resin was also stable under these ageing conditions.
TV succumbed to electrical failure under the harshest ageing condition following the formation of cracks in the glue layer.
Thermo-mechanical coupling
All sample types were thermo-mechanical aged. Samples were first aged for 500h at 120°C under air, followed by mechanical bending for 5 000 cycles on a 50 mm diameter roll (TM1) and finally to 5 000 cycles on a 25 mm diameter roll (TM2). Mechanical bending cycle experiments were performed at ambient temperature. Samples characterization used the same techniques as previously described.
Epoxy resin TM coupling
Analytical characterization of PNE resin was by DMTA, tensile tests and TGA. The results are summarized in Table 2.
After ageing under condition TM1, Tg, Td and Young's modulus increased, while Tβ was unchanged. Visual inspection of the samples revealed a yellowing of the resin. The effect of thermo-mechanical ageing under TM1 conditions was the combination of thermal ageing at 120°C for 500h and mechanical fatigue under F1 conditions. As previously explained for thermal ageing, the simultaneous increase in these parameters reveals that crosslinking reactions were predominant. Based on the difference in intensity between the effects of thermal ageing and mechanical fatigue described previously, most of the changes observed under the combined regimen were due to thermal ageing. The yellowing of the resin observed after 500h at 120°C did not change during the subsequent fatigue.
Both Tg and Young's modulus decreased under the harshest (TM2) ageing conditions. An increase in the effectiveness of chain scission reactions explains this effect.
As for thermal ageing and fatigue, the β-transition was not affected by thermo-mechanical ageing under either condition.
A two-step mechanism is identified as underpinning the thermo-mechanical ageing of the epoxy resin. The first step (TM1) is the direct addition of thermal ageing and fatigue F1. During this step, crosslinking reactions proved dominant, as evidenced by the stiffening of the films and the increase in their glass transition temperature.
Yellowing of the resins due to thermal ageing was similarly recorded.
The second step TM2 presented different mechanisms, evidenced by a softening of the films and a decrease in Tg. During this step, chain scission reactions became more competitive.
PET substrate TM coupling
The changes in thermal and mechanical properties of PET substrates with thermo-mechanical ageing are presented in Table 3. The Young's modulus, strain at break and melting temperature for the PET substrate, after thermo-mechanical ageing, were not significantly modified under either condition. However, the PET substrate crystallinity ratio increased slightly. As for the recorded increase in crystallinity after thermal ageing, this is probably the result of a thermal annealing effect on the film.
The Raman spectra of PET films after thermo-mechanical ageing are presented in Figure 12. Both the peak positions and their width were unchanged after ageing, suggesting that the chemical structure of PET was stable during ageing under these conditions.
Bilayer and test vehicle TM coupling
The PET/PNE bilayer and TV were all subjected to thermo-mechanical ageing. The values of the peeling force G recorded after ageing are reported in Table 4. After ageing under TM1 conditions, the peeling force increased from 1.74 ± 0.15 N/cm to 2.61 ± 0.19 N/cm. However, this value is of the same magnitude as that recorded after thermal ageing at 120°C for 500h (2.70 ± 0.14 N/cm), suggesting that the increase recorded during thermo-mechanical ageing TM1 was mainly due to the thermal ageing step. After ageing under condition TM2, a slight decrease in G was detected. This observation is consistent with the increase in chain-scission efficiency evidenced for the epoxy resin ageing under this condition.
The results of the electrical tests performed on TV are presented in Table 5. These tests were positive for all the chips tested after thermo-mechanical ageing under both conditions. However, the TV aged under the harshest condition (TM2) presented some delamination between the PET substrate and the encapsulating resin.
Illustrated by the micrograph shown in Figure 13, delamination occurred on each side of the silicon chip and parallel to the bending direction. This is also explained by the stiffening of the resin measured after thermo-mechanical ageing. Thus conformability of the resin is reduced by this significant stiffening. Stresses then accumulate at the interface during the bending function of the TV. These observations are different from those observed for mechanical fatigue under F2 conditions. Here stresses are released when cracks form in the glue layer. Changes of both polymer materials during the thermo-mechanical ageing lead to modifications in the distribution of internal stresses that, in turn, results in different release modes.
Delamination between resin and substrate enabled water and oxygen to diffuse into the system.
Accordingly, the encapsulation layer ceases to play its protective role. Functional properties would be degraded with longer ageing periods. Although yellowing of the epoxy resin developed during the thermal ageing stage, there was no recorded change during the subsequent mechanical ageing stage.
Conclusion
The durability and functional performance of microelectronic test vehicles comprising an epoxy resin encapsulant and a PET substrate have been thoroughly assessed after thermal ageing, mechanical fatigue and thermo-mechanical ageing. Under moderate (TM1) ageing conditions, the changes observed resulted directly from the combination of thermal ageing and mechanical fatigue. Under these conditions, oxidation of the epoxy resin resulting from thermal ageing, coupled with a stiffening of the films, induced an increase in the peeling force and a reinforcement of the encapsulant/substrate interface. Whereas crosslinking reactions dominated under moderate TM1 ageing conditions, no further oxidation was detected under the harsher TM2 conditions.
Instead, chain scission reactions induced by fatigue stress following thermal ageing (TM2) were dominant. A softening of the epoxy resin coupled with a decrease in Tg was evidenced. The peeling force required to decouple the encapsulant from the substrate decreased, and delamination between the encapsulant and the substrate was observed in TV. While delamination does not always necessarily cause electrical failure, it probably induces damage to the barrier properties.
The peeling tests show that adhesion between the encapsulant and substrate is governed by degradation mechanisms occurring within the epoxy resin. The failure mechanism of TV as delamination or cracks in the glue layer is influenced by mechanical changes of all the constituting materials.
Consequently, if thermo-mechanical ageing is not adequately controlled, a variety of reliability issues, such as circuit failure and performance degradation, may result, ultimately leading to device damage.
Figure 1. Chemical structures of the two monomers (ECC and EA) constituting the PNE encapsulant and the photo-initiator
Figure 2. (a) Schematic side view of the glob top encapsulated TV showing the architectural ensemble, (b) TV encapsulated within the PNE resin, (c) multilayered PET and PNE ensemble for the peeling test, and (d) schematic top view illustrating the experimental assembly of the resistance measurements
Figure 3. (a) Photograph and (b) schematic illustration of the functional principle of the bending bench
Figure 4. DMTA thermograms of unaged PNE epoxy resin showing (a) storage and loss modulus and (b) tan δ
Figure 5. (a) Stress-strain curve and (b) TGA thermogram of unaged PNE resin film
Figure 6. Changes in the epoxy resin films with thermal ageing time: (a) full FTIR spectra and (b) FTIR difference (x hours - reference) spectra for the carbonyl domain (1650-1850 cm-1)
Figure 7. Raman spectra of PET substrate TV before (Reference) and after thermal ageing at 120°C for 500h
Figure 8. (a) Photographic image of a TV and (b) optical micrograph of an encapsulated chip before (Reference) and after thermal ageing (500h at 120°C)
Figure 9. Raman depth profiles of TVs (a) before and after ageing under (b) 500h, (c) F1, (d) F2, (e) TM1 and (f) TM2 conditions
Figure 10. Changes in micro-Raman spectra of PET substrate in TVs before and after mechanical ageing under F1 and F2 conditions
Figure 11. Optical micrograph of a TV after fatigue ageing under F2 conditions
Figure 12. Changes in micro-Raman spectra of PET substrate in TVs before and after thermo-mechanical ageing under TM1 and TM2 conditions
Figure 13. Optical micrograph of a TV after thermo-mechanical ageing under TM2 conditions
Table 1. Ageing conditions and the number of samples used in this study
Ageing type | Condition | Notation | Samples
Reference | Pristine | Ref | 4 TV, 7 film samples
Thermal | Up to 500 h at 120°C | Xh* | 4 TV, 7 film samples
Fatigue | 5000 cycles ⌀50 | F1 | 4 TV, 6 film samples
Fatigue | 5000 cycles ⌀50 + 5000 cycles ⌀25 | F2 | 4 TV, 7 film samples
Thermo-mechanical | 500 h 120°C + 5000 cycles ⌀50 | TM1 | 4 TV, 7 film samples
Thermo-mechanical | 500 h 120°C + 5000 cycles ⌀50 + 5000 cycles ⌀25 | TM2 | 4 TV, 7 film samples
* X is the ageing time in hours
Table 2. Changes in PNE resin characteristics (Tg, Tβ, E, εB, Td) induced by the different ageing conditions
Ageing | Tg | FWHM of Tg | Tβ | FWHM of Tβ | E (MPa) | εB (%) | Td
Reference | 13°C | 23.1°C | -67°C | 36°C | 33 ± 1 | 22.9 ± 1.1 | 294°C
500h 120°C | 26°C | 24.2°C | -67°C | 38°C | 133 ± 32 | 15.1 ± 1.2 | 307°C
F1 | - | - | - | - | 50 ± 6 | - | -
F2 | 14°C | 23.1°C | -71°C | 49°C | 56 ± 11 | 15.7 ± 2.2 | 295°C
TM1 | 28°C | 31.4°C | -67°C | 42°C | 153 ± 21 | 17.8 ± 3.7 | 301°C
TM2 | 20°C | 21.5°C | -69°C | 50°C | 126 ± 20 | 20.2 ± 1.0 | 306°C
Table 3. Changes in mechanical and thermal properties of PET films induced by the different ageing conditions
Ageing | E (MPa) | εB (%) | Tm | χc (%)
Reference | 5049 ± 42 | 72.7 ± 5.7 | 258°C | 28%
500h 120°C | 5116 ± 289 | 71.0 ± 17.6 | 256°C | 32%
F1 | 5212 ± 177 | 71.8 ± 2.1 | 257°C | 35%
F2 | 5538 ± 42 | 74.4 ± 8.6 | 257°C | 30%
TM1 | 4932 ± 250 | 61.1 ± 20.2 | 256°C | 33%
TM2 | 4828 ± 312 | 72.0 ± 11.3 | 256°C | 34%
Table 4. Changes in the peeling force between PET and PNE films induced by the different ageing conditions
Ageing | G (N/cm) | ΔG (N/cm) | Rupture
Reference | 1.74 | 0.15 | Adhesive
500h 120°C | 2.70 | 0.14 | Adhesive
F1 | 1.63 | 0.11 | Adhesive
F2 | 1.72 | 0.05 | Adhesive
TM1 | 2.61 | 0.19 | Adhesive
TM2 | 2.50 | 0.16 | Adhesive
Table 5. Results of electrical tests performed on TV devices after each ageing condition
Ageing condition | Reference | Thermal 500h | Fatigue F1 | Fatigue F2 | Thermo-mechanical TM1 | Thermo-mechanical TM2
Electrical test | Positive | Positive | Positive | Negative | Positive | Positive
Funding: This work was supported by Région Centre-Val de Loire through Regional project APR2015.
Acknowledgement:
The authors would like to thank STMicroelectronics and Protavic International, the technical platforms of CERTeM located in Tours and of the IUT located in Blois.
Conflicts of Interest:
The authors declare no conflict of interest. |
Thibaut Caltabellotta
Julien Magne
Baptiste Salerno
Valerie Pradel
Pierre-Bernard Petitcolin
Gilles Auzemery
Patrice Virot
Victor Aboyans
email: [email protected]
Characteristics associated with patient delay during the management of ST-segment elevated myocardial infarction, and the influence of awareness campaigns
Objectives. - We assessed the factors associated with patient delay and the effects of a regional awareness campaign.
Methods. - Data were obtained from the regional STEMI registry of Limousin (France); in addition, medical history, clinical signs, context, socioeconomic conditions, and patient perceptions and behaviour were collected by questionnaire. "Early caller" (1st and 2nd tertiles, < 154 minutes) and "late caller" (3rd tertile, > 154 minutes) groups were compared by univariate and then multivariable analysis. The influence of awareness campaigns was studied by comparing delays before and after these regional campaigns.
Results. - Among 481 patients, the median call delay was 87 minutes. Factors independently associated with late calling were: age (OR 1.02, 95% CI 1.00-1.03), symptom onset between midnight and 6 a.m. (OR 1.86, 95% CI 1.10-3.12), and initial recourse to a general practitioner (OR 2.58, 95% CI 1.66-4.04) or to the emergency department (OR 4.10, 95% CI 2.04-8.32). The symptom "sweating" and the feeling that the situation was severe were associated with an early call. After the awareness campaigns, patient delay was unchanged, but the proportion of calls to the emergency number rose from 55% to 62% (P < 0.001).
Conclusions. - Prolonged patient delay is multifactorial. The impact of awareness campaigns is mixed. Psychological and behavioural aspects appear decisive and should be taken into account to address targeted messages to specific audiences.
KEYWORDS
Background
Ischaemic heart disease is the main cause of death worldwide [1]; its most serious form is ST-segment elevation myocardial infarction (STEMI), which requires urgent intervention to restore myocardial perfusion. The organization of care is essential for timely management, because reperfusion delay is prognostic [START_REF] Ibanez | ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation: The Task Force for the management of acute myocardial infarction in patients presenting with ST-segment elevation of the European Society of Cardiology (ESC)[END_REF]. The initial care provider (Emergency Medical Services [EMS]/emergency telephone number, general practitioner, emergency service, etc.) largely determines the patient's path. Indeed, in case of severe chest pain or suspicion of myocardial infarction, the current recommendation is to call the national emergency number ("112", "911" or similar numbers in different countries).
To minimize patient delay, two public campaigns were conducted in our region in 2013 and 2015 to increase public awareness of the common symptoms of STEMI and to encourage the public to call 112 quickly in case of chest pain.
The purpose of this study was to highlight the characteristics (epidemiological, socioeconomic and clinical) of patients who do not request assistance quickly when experiencing a STEMI, in order to better target the populations requiring the most suitable education to reduce this delay. A second objective was to assess the efficacy of a regional campaign encouraging the public to call the EMS rapidly in case of chest pain.
Methods
The SCALIM registry
Our first data source was SCALIM, which contains data on all STEMI cases occurring in Limousin < 24 hours after the onset of symptoms. Initiated in June 2011, this registry has as its main objective the assessment of the care paths and delays of all patients presenting with STEMI in the Limousin region, France. The "patient delay" was calculated from the onset of symptoms to the time of first contact seeking assistance. The distance (in minutes) between the patient's home and our centre was calculated using Google Maps for the fastest route.
For the present analysis, patients included were adults from the SCALIM registry who had initially been treated at Limoges University Hospital for STEMI onset between 01 June 2011 and 31 December 2012 (Period A) or between 01 January 2016 and 31 December 2017 (Period B). These periods were used to assess the evolution of patient delays and paths before and after a public campaign advertised in the same area between the two periods.
Telephone survey
The telephone survey aimed to obtain additional information on socioeconomic status, as well as the circumstances in which the STEMI had started. A questionnaire was submitted by telephone.
Public awareness campaigns
Two public campaigns were conducted in our region in November 2013 and November 2015, to raise public awareness about the necessity to call the EMS rapidly when experiencing chest pain. The campaigns comprised newspaper, radio and television advertisements and billboard posters in the main cities in the region, as well as newsletters to general practitioners and other healthcare providers, including pharmacists.
Statistical analysis
The "patient delay", expressed in minutes, was defined according to the recommendations of the European Society of Cardiology as the delay that elapsed between the onset of symptoms and, depending on the patient's path, the call to the EMS or admission to a hospital centre (Fig. 1) [START_REF] Ibanez | ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation: The Task Force for the management of acute myocardial infarction in patients presenting with ST-segment elevation of the European Society of Cardiology (ESC)[END_REF]. The patient delay and the medical delay (related to health system efficiency) are the two components of the overall time elapsing between the first onset of symptoms and the recanalization of the culprit coronary artery. Patient characteristics were compared between the two periods of study (2011-2012 versus 2016-2017). Based on the distribution of patient delay, all cases were categorized as "early callers" or "late callers". The "early callers" group corresponded to the first and second tertiles of the distribution of patient delay, and the "late callers" group included those in the third tertile of patient delay.
Continuous variables are expressed as means ± standard deviations, and qualitative variables are expressed as numbers and percentages. All time periods are reported as means with their corresponding 95% confidence intervals (CIs). The normality of distributions of quantitative variables was tested using Shapiro's test. The comparison between "early callers" and "late callers" was performed using the unpaired Student's t test, Fisher's test or the χ2 test, as appropriate. For quantitative variables that did not have a normal distribution, the Wilcoxon-Mann-Whitney test was used. The level of significance was P < 0.05. To compare a continuous variable with a skewed distribution across a categorical variable of more than two levels, we applied a Kruskal-Wallis test followed by multiple paired tests; in this case, the P values were adjusted using the Bonferroni correction. A multivariable analysis of the variables of the entire study population was performed by logistic regression, and odds ratios (ORs) and 95% CIs were reported. We selected for the initial model variables with a P value < 0.25 in univariate analysis; then, we established the final model by backward stepwise selection. In addition, another model that included the variables collected through the survey was built by the same method. Changes between the two periods (2011-2012 versus 2016-2017) were assessed using the same statistical tests.
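A minimal sketch of the grouping and modelling steps described above is given below; the dataframe and column names are hypothetical, and the backward elimination shown is a simplified p-value-based loop rather than the exact procedure used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def analyse(df: pd.DataFrame):
    """df: one row per patient; 'patient_delay' in minutes plus candidate
    explanatory columns (the column names below are hypothetical)."""
    # "Late callers" = third tertile of the patient-delay distribution
    # (in the present study this threshold was 154 minutes).
    threshold = df["patient_delay"].quantile(2 / 3)
    y = (df["patient_delay"] > threshold).astype(int)

    # Hypothetical candidate variables, e.g. those with univariate P < 0.25
    candidates = ["age", "female", "onset_0_6am", "gp_first_contact", "ed_first_contact"]
    remaining = list(candidates)

    # Simplified backward stepwise elimination on the logistic model
    while True:
        X = sm.add_constant(df[remaining])
        model = sm.Logit(y, X).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= 0.05 or len(remaining) == 1:
            break
        remaining.remove(worst)

    odds_ratios = np.exp(model.params.drop("const"))
    conf_int = np.exp(model.conf_int().drop("const"))  # 95% CIs for the ORs
    return odds_ratios, conf_int
```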
Results
During the two study periods, 545 patients were enrolled in SCALIM, of whom 481 actually had a confirmed diagnosis of STEMI without initial cardiopulmonary arrest, and were therefore included in the analysis. Of these, 208 (43.2%) were from period A, 273 (56.8%) were from period B, 37 had died at the time of administration of the questionnaire, 32 were unable to answer reliably and 141 refused to participate or could not be reached by telephone or post (Fig. 2). A subset of 271 (56.3%) patients took part in the survey. There was no significant difference in sex, age, medical history, cardiovascular risk factors, dates, times or locations between patients in period A compared with those in period B.
The comparison of patients participating in the survey versus those who did not answer showed significant differences in terms of age (62.9 ± 24.5 vs 65.9 ± 33.5 years, respectively; P = 0.02), male sex (77.9% vs 66.7%; P < 0.01), proportion of "late callers" (28.8% vs 40.0%; P = 0.01) and family history of cardiovascular disease (25.1% vs 11.4%; P < 0.001).
Patient delay
The patient delay ranged from 0 to 1397 minutes, with a global median value of 87 minutes (Q1 36; Q3 216). Median (Q1; Q3) delays for patients calling and not calling the EMS were 62 minutes (25; 149) and 138.5 minutes (68.75; 328.5), respectively. Fig. 3 shows the distribution of patient delay. The "early callers" and "late callers" were separated at the threshold of 154 minutes (2 hours 34 minutes, i.e. the third tertile). In the "early callers" group, the median patient delay was 50 (24.5; 86.5) minutes, whereas it was 309 (215.25; 493.25) minutes in the "late callers" group.
Univariate analysis
Table 1 compares "early callers" and "late callers" for each of the variables collected via the SCALIM registry, and Table 2 compares the groups for additional variables collected from the survey. The proportion of women and mean age were significantly higher in the "late callers" group.
Regarding the time of onset of symptoms, the patient delay was on average shorter when this occurred between noon and 6 p.m. The highest rates of "late callers" were found when symptoms occurred between midnight and 6 a.m.
In the "late callers" group, initial care by the emergency department or a general practitioner was more frequent than initial care by the EMS.
Rates of "no" or "mild" pain and of "progressive-onset" pain were significantly higher in the "late callers" group. Similarly, retired or non-active patients were more frequent among the "late callers". The comparison with other socioprofessional categories did not show any significant difference. The proportions of patients who self-medicated and of patients who were alone during symptom onset were higher in the "late callers" group. Conversely, considering the situation to be severe, attributing it to a heart issue or to a heart attack, anxiety and sweating were more frequent among "early callers".
The study of the other variables did not show a statistically significant difference between the two groups.
Multivariable analysis
A first multivariable analysis included all the data in the registry available for the whole study group.
The final model (Fig. 4) highlighted older age, time of symptom onset between 0:00 and 5:59 a.m. and the initial care provider as factors significantly associated with a longer patient delay.
In a second model, including data from the survey (Fig. 4), the significant factor associated with "late callers" was initial caregiver other than the EMS, with the presence of sweats and the patient feeling that the situation was severe being associated with a shorter delay.
Effects of the regional campaign
Table 3 presents the comparison between patients in periods A (2011-2012) and B (2016-2017); between these periods, the two awareness campaigns were undertaken in our region.
There was no difference between the two periods regarding patient delay and proportion of "late callers". The initial care provider was more often the EMS and the emergency department, and less often the general practitioner during period B (P < 0.001). The proportion of patients planning to call the EMS (emergency telephone number 112) in the event of a recurrence did not change significantly (from 86% to 87%; P = 0.97).
Discussion
The present study shows that in patients with STEMI, patient delay was > 50 minutes for two-thirds of cases, almost > 90 minutes for half of the cases and 2.5 hours for the last third of the cases (the "late callers" group). This is a long period compared with that optimally recommended for the management of STEMI [START_REF] Ibanez | ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation: The Task Force for the management of acute myocardial infarction in patients presenting with ST-segment elevation of the European Society of Cardiology (ESC)[END_REF]. The distribution of this variable was similar to that in other studies of prehospital delays [START_REF] Benamer | Longer pre-hospital delays and higher mortality in women with STEMI: the e-MUST Registry[END_REF][START_REF] Lucas | Factors associated with delay in calling Emergency Medical Services ("15") for patients with ST-elevation myocardial infarction in southern Isere][END_REF].
Older patients are more likely to have an increased patient delay. Several studies have also highlighted the influence of age on patient delay [START_REF] Benamer | Longer pre-hospital delays and higher mortality in women with STEMI: the e-MUST Registry[END_REF][START_REF] Lucas | Factors associated with delay in calling Emergency Medical Services ("15") for patients with ST-elevation myocardial infarction in southern Isere][END_REF][START_REF] Callahan | Facteurs influençant le délai avant la prise en charge médicale dans les syndromes coronariens aigus ST+[END_REF][START_REF] Goldberg | Duration of, and temporal trends (1994-1997) in, prehospital delay in patients with acute myocardial infarction: the second National Registry of Myocardial Infarction[END_REF][START_REF] Goldberg | Prehospital Delay in Patients With Acute Coronary Syndromes (from the Global Registry of Acute Coronary Events [GRACE])[END_REF]. Clinical manifestations are less typical in the older population [START_REF] Bayer | Changing presentation of myocardial infarction with increasing old age[END_REF][START_REF] Solomon | Comparison of clinical presentation of acute myocardial infarction in patients older than 65 years of age to younger patients: the Multicenter Chest Pain Study experience[END_REF]. Care seeking may be delayed in the elderly because of cognitive impairment, motor and sensory disorders, frailty, social isolation or frequency of chronic pain that minimizes the event.
Our univariate analysis showed higher rates of females in the "late callers" group, but this association was no longer significant after further adjustments. Others studies have shown that women are more likely to have prolonged prehospitalization delay [START_REF] Benamer | Longer pre-hospital delays and higher mortality in women with STEMI: the e-MUST Registry[END_REF][START_REF] Lucas | Factors associated with delay in calling Emergency Medical Services ("15") for patients with ST-elevation myocardial infarction in southern Isere][END_REF][START_REF] Goldberg | Duration of, and temporal trends (1994-1997) in, prehospital delay in patients with acute myocardial infarction: the second National Registry of Myocardial Infarction[END_REF][START_REF] Goldberg | Prehospital Delay in Patients With Acute Coronary Syndromes (from the Global Registry of Acute Coronary Events [GRACE])[END_REF]. In 2015, a review [START_REF] Chen | Gender differences in symptoms associated with acute myocardial infarction: a review of the research[END_REF] showed that women were more likely to experience atypical symptoms during the onset of STEMI.
Data from the French FAST-MI programme also showed that women and patients admitted to hospital through an emergency department were associated with higher case-fatality rates [START_REF] Gabet | Early and late case fatality after hospitalization for acute coronary syndrome in France, 2010-2015[END_REF] (especially in the early stage of hospitalization), which could be consistent with our results. The prolonged delay between onset of pain and first medical contact could explain this observation, besides a particular risk profile in women [START_REF] Isorni | Temporal trends in clinical characteristics and management according to sex in patients with cardiogenic shock after acute myocardial infarction: The FAST-MI programme[END_REF].
Socioeconomic factors do not appear to be independently associated with patient delay, whereas retirement was trending towards being more frequent among "late callers", but was confounded by the age factor. This result is consistent with another French study that focused on the delay between pain onset and electrocardiography [START_REF] Callahan | Facteurs influençant le délai avant la prise en charge médicale dans les syndromes coronariens aigus ST+[END_REF]. Ageing is known to be associated with delay, and is a confounding factor, explaining this result in retired patients. Furthermore, another study has shown an association between a low level of education and prehospital delay [START_REF] Fournier | Influence of socioeconomic factors on delays, management and outcome amongst patients with acute myocardial infarction undergoing primary percutaneous coronary intervention[END_REF].
We did not show any statistical relationship between patient delay and the variable "living alone", which is not consistent with some previous papers [START_REF] Callahan | Facteurs influençant le délai avant la prise en charge médicale dans les syndromes coronariens aigus ST+[END_REF][START_REF] Perkins-Porras | Pre-hospital delay in patients with acute coronary syndrome: factors associated with patient decision time and home-to-hospital delay[END_REF]. One study [START_REF] Atzema | Effect of marriage on duration of chest pain associated with acute myocardial infarction before seeking care[END_REF] suggested that the favourable association of marriage with prehospital delay is more complex, and would not apply to women.
The associated symptom "sweating" is another determinant acting independently to reduce the care-seeking delay. This factor has been found in other studies [START_REF] Herlitz | Factors of importance for patients' decision time in acute coronary syndrome[END_REF][START_REF] Johansson | Factors related to delay times in patients with suspected acute myocardial infarction[END_REF], but its association independent of other factors should be confirmed.
The feeling that the situation was severe was independently associated with the "early callers" group, and is an important determinant of prehospital delay [START_REF] Burnett | Distinguishing between early and late responders to symptoms of acute myocardial infarction[END_REF], but other psychological mechanisms may exist [START_REF] O'carroll | Psychological factors associated with delay in attending hospital following a myocardial infarction[END_REF]. The psychological factors influencing patient delay in STEMI have also been studied according to various theoretical frameworks [START_REF] Dracup | Causes of delay in seeking treatment for heart attack symptoms[END_REF]: the Health Belief model, models inspired by symbolic interactionism and the model of self-regulation of behaviour [START_REF] Dracup | Causes of delay in seeking treatment for heart attack symptoms[END_REF][START_REF] Walsh | Factors influencing the decision to seek treatment for symptoms of acute myocardial infarction: an evaluation of the Self-Regulatory Model of illness behaviour[END_REF]. These models are conceptual frames to help us to understand a patient's reactions in case of health problems. For example, a qualitative study [START_REF] Dempsey | Women's decision to seek care for symptoms of acute myocardial infarction[END_REF] presented a model of reaction to an acute phase of a myocardial infarction in women, highlighting "maintaining control" factors that can lengthen the delay (attribution to benign cause; beliefs about acute myocardial infarction in general and acute myocardial infarction in women in particular; denial; concern for others ["fear of disturbing"]) and "relinquishing control" factors pushing women to seek help (persistence of symptoms and anxiety). More generally, among the strategies initially adopted by those who experience a myocardial infarction, waiting or ignorance of the symptoms are found in > 75% of them [START_REF] Meischke | Utilization of emergency medical services for symptoms of acute myocardial infarction[END_REF].
Major cardiovascular risk factors did not appear to be significantly influential in our study, whereas another study reported hypertension and diabetes as factors associated with prolonged patient delay [START_REF] Solomon | Comparison of clinical presentation of acute myocardial infarction in patients older than 65 years of age to younger patients: the Multicenter Chest Pain Study experience[END_REF].
Surprisingly, patients with a history of cardiovascular disease did not call for help more quickly than others. Some studies provided similar results [START_REF] Lucas | Factors associated with delay in calling Emergency Medical Services ("15") for patients with ST-elevation myocardial infarction in southern Isere][END_REF][START_REF] Johansson | Factors related to delay times in patients with suspected acute myocardial infarction[END_REF][START_REF] Puymirat | Patient education after acute myocardial infarction: cardiologists should adapt their message--French registry of acute ST-elevation or non-STelevation myocardial infarction 2010 registry[END_REF], although others found shorter delays in patients with a history of myocardial infarction or angioplasty [START_REF] Goldberg | Duration of, and temporal trends (1994-1997) in, prehospital delay in patients with acute myocardial infarction: the second National Registry of Myocardial Infarction[END_REF][START_REF] Goldberg | Prehospital Delay in Patients With Acute Coronary Syndromes (from the Global Registry of Acute Coronary Events [GRACE])[END_REF]. One of the hypotheses is that patients who have already experienced a myocardial infarction expect to have similar symptoms in case of recurrence, whereas this is not always the case.
We found that the period between midnight and 6 a.m. was a significant and independent factor associated with delayed call; this result is supported by other studies [START_REF] Goldberg | Prehospital Delay in Patients With Acute Coronary Syndromes (from the Global Registry of Acute Coronary Events [GRACE])[END_REF][START_REF] Pereira | Factors influencing patient delay before primary percutaneous coronary intervention in ST-segment elevation myocardial infarction: The Stent for life initiative in Portugal[END_REF].
Experimental studies have evaluated the impact of personalized prevention measures, in case of recurrence, in patients with a history of myocardial infarction. In 2012, a study including 1944 patients randomized between a control group and an experimental group showed a significant reduction in prehospital delay in the experimental group [START_REF] Mooney | A randomized controlled trial to reduce prehospital delay time in patients with acute coronary syndrome (ACS)[END_REF]. By comparison, in 2009, a study with a similar design including 3522 patients did not show a significant difference [START_REF] Dracup | A randomized clinical trial to reduce patient prehospital delay to treatment in acute coronary syndrome[END_REF].
Choosing an initial care provider other than the EMS lengthens patient delay. This result has been found in other French [START_REF] Lucas | Factors associated with delay in calling Emergency Medical Services ("15") for patients with ST-elevation myocardial infarction in southern Isere][END_REF] and European [START_REF] Johansson | Factors related to delay times in patients with suspected acute myocardial infarction[END_REF][START_REF] Pereira | Factors influencing patient delay before primary percutaneous coronary intervention in ST-segment elevation myocardial infarction: The Stent for life initiative in Portugal[END_REF] studies. Also, a study of factors associated with delay in transfer of patients with STEMI showed that, even after the first medical contact, patients who did not call the EMS (15/112) and those with prolonged patient delay also have an increased transfer delay to the catheterization laboratory [START_REF] Range | Factors associated with delay in transfer of patients with ST-segment elevation myocardial infarction from first medical contact to catheterization laboratory: Lessons from CRAC, a French prospective multicentre registry[END_REF]. In other words, patients who call late suffer a "double penalty": they are the ones seeking suboptimal assistance, so both delays, before and after the call until angioplasty, may be increased. This supports the importance of the initial care provider in the patient pathway. The probability of being a "late caller" is higher in patients who go directly to the emergency room than in those who call their general practitioner because, in the former case, the time taken to reach the emergency room is added to the delay in deciding to seek medical assistance. Another reason could be that the longer patients wait, the more severe the symptoms become, and the more likely patients are to choose to go directly to the emergency room rather than to a general practitioner. Furthermore, this result should be interpreted with caution because of the small sample size, resulting in obvious 95% CI overlap.
Self medication during the first symptoms of STEMI was more common in the "late callers" group, but this association did not persist in the multivariable analysis.
Our study did not show any significant reduction in the delay after the regional campaign. However, we found an increase in first contact with the EMS. The lack of efficacy of the campaign in decreasing patient delay is probably multifactorial, including insufficient coverage to sensitize the whole population, the need for more repetitive advertising to change patient behaviour, and the fact that the campaign carried two messages ("call fast" and "call 112"); it is plausible that the second message was better retained than the first. New campaigns focusing on the necessity to call quickly should take our results into consideration. The evaluation of a public campaign carried out in Spain in 2012-2013 identified increased use of the EMS number and an increase in treated patients, but did not observe any significant effect on time delays [START_REF] Regueiro | Impact of the "ACT NOW. SAVE A LIFE" public awareness campaign on the performance of a European STEMI network[END_REF]. By comparison, a campaign carried out in Switzerland over a long period, between 2005 and 2008, significantly reduced prehospital time in STEMI and non-STEMI. However, this decrease was not significant in the subgroups of women, those aged > 75 years and patients with atypical symptoms [START_REF] Naegeli | Impact of a nationwide public campaign on delays and outcome in Swiss patients with acute coronary syndrome[END_REF].
Practical implications
The awareness campaigns could be improved by targeting specific patient profiles at high risk of STEMI and making late calls. According to our data, the campaigns could mostly focus on elderly patients, those who already have experienced a myocardial infarction and those with cardiovascular risk factors. It could be useful to refine the prevention messages: reiterate the value of a call to the EMS at any time, including at night, especially for the older patients; and inform about typical and atypical symptoms, especially in women. The patient and their relatives should be encouraged to call the EMS if the symptoms persist beyond 15 minutes.
Study limitations
Our study has limitations. First, it was a single-centre study, and our findings need further confirmation in other geographical areas, as cultural dimensions may also affect results. We were unable to determine whether the patients studied had been directly exposed to the campaigns, so the effect of the campaigns on patient delay is difficult to address directly. The intermediate (56%) response rate to our survey is also a limitation, as it may generate a selection bias. Some characteristics of "late callers" match those of "non-responders", mostly demographic and socioeconomic variables. In particular, the probability of being a "late caller" was higher in elderly patients. Elderly patients are also likely to be the most difficult to reach (by telephone or mail) because of social isolation, cognitive and sensory impairment or physical disability. Hence, the results of our survey should be interpreted with caution.
Conclusions
When signs of STEMI occur, several factors might influence the delay in seeking assistance. Psychological and behavioural factors appear to be decisive and complex. Other main factors are age, the time of symptom onset and the choice of initial care provider. The impact of previous public awareness campaigns is mixed. We propose targeting awareness messages to specific groups of patients, such as elderly people, with a special educational emphasis on those who already have risk factors or a cardiovascular history. The awareness campaigns should also highlight atypical presentations and stress that the best option is to call the emergency telephone number (112/911).
Sources of funding
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. The SCALIM registry was funded by the Agence Régionale de Santé du Limousin during the period of this study.
Disclosure of interest
Table footnotes. a: In France, in more remote/rural areas, firefighters can be called for health issues; they can make the connection with the EMS to reduce transport delays. b: Ambulance used to get to hospital or patient already hospitalized because of an issue other than STEMI. Abbreviations: CI, confidence interval; EMS, Emergency Medical Services; OR, odds ratio; STEMI, ST-segment elevation myocardial infarction.
Questions submitted in 2012-2013 to patients from period A (2011-2012) were included in the questionnaire submitted in 2018 to patients from period B (2016-2017), but the latter also contained questions about the awareness campaigns. The questionnaire covered various topics (medical history, clinical data, context, socioeconomic factors and perception and behaviour of the patient). In May 2018, the questionnaire was also sent via post with a reply envelope to patients inaccessible by telephone. Socioprofessional categories were based on French National Institute for Statistical and Economic Studies (INSEE) nomenclature of occupations and socioprofessional categories [3]. The following cases from the SCALIM registry were excluded from the survey: deceased patients; those who might have difficulty answering the telephone questionnaire (patients aged > 90 years, those with cognitive impairment and those who did not speak French fluently); and those who declined to participate. Data collection was done after informing the patient of the study objectives, and as part of the Cardiology Department of the University Hospital of Limoges, which declares its research activities to the French National Commission for Data Protection and Liberties (CNIL). Data were anonymized after collection.
The impact of the awareness campaigns between the two periods (2011-12 and 2016-17) was assessed by comparing patient delay and choice of initial caregiver. Statistical analyses and diagrams were performed using R software, version 3.4.4 (R Foundation for Statistical Computing, Vienna, Austria).
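As an illustration of the kind of multivariable model described above (the original analyses were performed in R; the sketch below is a Python/statsmodels equivalent using hypothetical column and file names, not the actual SCALIM data set):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level table: one row per patient, with 'late_caller' coded 1
# for the "late callers" group and 0 otherwise (all column names are illustrative only).
df = pd.read_csv("scalim_survey.csv")

model = smf.logit(
    "late_caller ~ sex + age + hypertension + C(onset_time_slot) + C(initial_care_provider)",
    data=df,
).fit()
print(model.summary())   # odds ratios are obtained as exp(coefficients), with their 95% CIs
```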
Figure 1. Definition of patient delay and study goal. EMS: Emergency Medical Services; STEMI: ST-segment elevation myocardial infarction.
Figure 2. Flow diagram of inclusion and exclusion of patients. STEMI: ST-segment elevation myocardial infarction.
Figure 3. A. Patient distribution. B. Box plot of patient delay.
Figure 4. Results of multivariable logistic regression models of the factors associated with patient delay. A. Whole population; variables included sex, age, history of hypertension, symptom onset time slot and initial care provider. B. Subgroup responding to the survey; variables included sex, age, history of hypertension, symptom onset time slot, initial care provider, history of depression, psychotropic treatment, no or mild pain, progressive-onset pain, retrosternal chest pain, epigastric pain, upper limb pain, associated symptoms (nausea/vomiting, sweats, anxiety), alone during symptom onset, retirement, interpretation of symptoms as severe, cardiac or MI, and self medication. CI: confidence interval; EMS: Emergency Medical Services; OR: odds ratio.
Table 1. Comparison of "early callers" and "late callers". Column headings: No. with available variable; All patients (n = 481); Early callers (n = 319); Late callers (n = 162); P. Data are expressed as % or mean ± standard deviation. ACS: acute coronary syndrome; CABG: coronary artery bypass grafting; CV: cardiovascular; CVD: cardiovascular disease; EMS: Emergency Medical Services; No.: number of patients; PCI: percutaneous coronary intervention; PAD: peripheral arterial disease; STEMI: ST-segment elevation myocardial infarction.
Table 2. Comparison of additional characteristics from the survey. Column headings: No. with available variable; All patients (n = 271); Early callers (n = 193); Late callers (n = 78); P. Data are expressed as %. MI: myocardial infarction; No.: number of patients; NRS: numeric rating scale.
03590033 | en | [
"sdu"
] | 2024/03/04 16:41:22 | 2021 | https://insu.hal.science/insu-03590033/file/S0031920121000510.pdf | ) Lopes
) Le Mouël
) Courtillot
) Gibert
On the shoulders of Laplace
Figure 3 can be considered as the central result of the paper. It shows that the sum of forces of the four Jovian planets matches in a striking way the polar motion reconstructed with SSA components (the Markowitz trend removed). All our results argue that significant parts of Earth's polar motion are a consequence (rather than a cause) of the evolution of planetary ephemerids. The Sun's activity and many geophysical indices show the same signatures, including many climate indices. Two different mechanisms (causal chains) are likely at work: a direct one from the Jovian Planets to Earth, another from planetary motions to the solar dynamo; variations in solar activity would in turn influence meteorological and climatic phenomena. Given the remarkable coincidence between the quasi-periods of many of these phenomena, it is reasonable to assume that both causal chains are simultaneously at work. In that sense, it is not surprising to find the signatures of the Schwabe, Hale and Gleissberg cycles in many terrestrial phenomena, reflecting the characteristic periods of the combined motions of the Jovian planets.
1 - Introduction
On July 5 1687, Isaac Newton published the three volumes of his Principia Mathematica, in which he put on a firm ground the law of universal attraction and the general laws of mutual attraction of masses. In the following two centuries, a corpus of laws that explained the motions of celestial bodies was established and vindicated by observations. Foremost among these works, Laplace published his Traité de Mécanique Céleste (Treatise of Celestial Mechanics) in 1799.
Based on Newton's law and the fundamental principle of dynamics, he established the general equations that govern the motions of material bodies (Laplace, 1799, book 1, chapter 7, page 74, system (D)). This system of differential equations of first order was later given the names of Liouville and Euler. It establishes both the rotation and translation of the rotation axis of any celestial body, and in particular the Earth. These same equations can be found in Guinot (in Coulomb and Jobert, 1977, p. 530) and more recently in the reference book of Lambeck (2005, p. 31). They are recalled in Appendices 1 and 2 in their most general form. When the forces and the moments that act on Earth are taken to be zero (i.e. the right hand side of the equations is zero), the
solution for the axis is a free oscillation with an Euler period 1/σ of 306 days (using the known values of the mean angular velocity and of the axial and equatorial moments of inertia, with σ = ((C - A)/A) Ω).
Based on observations made between June 1884 and November 1885, Chandler (1891a,b) obtained a value of 427 days for 1/σ. Data provided by the International Earth Rotation and Reference System Service (IERS) yield a 1/σ that has varied between 431 days in 1846 and 434 days in 2020.
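As a simple numerical check (an illustration using commonly quoted present-day values of the moments of inertia, not figures taken from the paper), the free Euler period follows directly from σ = ((C - A)/A) Ω:

```python
import math

C = 8.0365e37                       # polar moment of inertia of the Earth (kg m^2)
A = 8.0101e37                       # mean equatorial moment of inertia (kg m^2)
Omega = 2 * math.pi / 86164.0       # sidereal rotation rate (rad/s)

sigma = (C - A) / A * Omega         # Euler frequency (rad/s)
print(2 * math.pi / sigma / 86400)  # free period in days, ~303-306 depending on the adopted moments
```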
Newcomb (1892) verified Chandler's observations and concluded that the Earth should be viewed as an elastic body submitted to oceanic stresses. For this, Love numbers were introduced [START_REF] Love | The yielding of the Earth to disturbing forces[END_REF]. As a result, the Liouville-Euler system (D) of Laplace was made less general. Hough (1895) reinforced the idea that what made the Chandler period 121 days longer than the "theoretical" value was the fact that the Earth behaved as an elastic body. Based on Poincaré's (1885) work on the stability of rotating fluids with a free surface, Hough showed that the period should decrease rather than increase if one did not take elasticity into account. Works in the following decades strengthened the notion that the fluid envelopes of Earth (ocean, atmosphere and mantle) acted on Earth's rotation axis. An increasingly precise theory was thus proposed, whereas observations seemed to be increasingly remote from predictions.
"Laplace" -11/3/21 -2 nd Revision 4/45 Two papers [START_REF] Peltier | Glacial-isostatic adjustment-I", The forward problem[END_REF][START_REF] Nakiboglu | Deglaciation effects on the rotation of the Earth[END_REF] further strengthened the theory of an elastic Earth whose rotation axis was influenced by both its internal and external fluid envelopes. An important concept was that of Global Isostatic Adjustment (GIA), in which the Earth has a visco-elastic response to stress (load) variations, that originated at the onset of the last ice age. Melting ice would lead to sea level rise and a reorganization of surface masses that eventually modified the inclination of the rotation pole. Rather than writing in a physically explicit way the forces implied in system (D), as done by Laplace and Poincaré, more or less complex "excitation functions" were introduced (Appendix 1).
We return to the founding work of [START_REF] Laplace | Traité de mécanique céleste[END_REF] to see how these problems can be tackled further. In what follows, we refer to volumes, chapters, pages and equation numbers in the original edition of the Traité de Mécanique Céleste. Throughout the Treatise, Laplace (1799) rigorously shows that, whatever the nature of the oceans and atmosphere, the only thing that influences the rotation of celestial bodies is the action of other celestial bodies. On page 347 (chapter 1, volume 5) [START_REF] Laplace | Traité de mécanique céleste[END_REF] writes (this quotation is given in the original French in Appendix 4): "We have shown that the mean rotation movement of Earth is uniform, assuming that the planet is entirely solid and we have just seen that the fluidity of the sea and of the atmosphere should not alter this result. It would seem that the motions that are excited by the Sun's heat, and from which the easterly winds are born should diminish the Earth's rotation: these winds blow between the tropics from west to east and their continued action on the sea, on the continents and on the mountains they encounter, should seem to weaken imperceptibly that rotation movement. But the principle of conservation of areas, shows to us that the total effect of the atmosphere on this movement must be insensible; for the solar heat in dilating equally the air in all directions, should not alter the sum of areas covered by the vector radii of each molecule of the Earth and of the atmosphere, and when multiplied respectively by the corresponding molecules; which requires that the rotation motion be not diminished. We are therefore assured that as the winds analyzed diminish this motion, the other movements of the atmosphere that occur beyond the tropics, accelerate it by the same amount. One can apply the same reasoning to earthquakes, and in general, to all that can shake the Earth in its interior and at its surface. Only the displacement of these parts can alter this motion; if, for instance a body placed at the pole, was transported to the equator; since the sum of areas must always remain the same, the earth's motion would be slightly diminished; but for it to be noticeable, one should suppose the occurrence of great changes in the Earth's constitution."
These views are also shared by [START_REF] Poincaré | Les méthodes nouvelles de la mécanique céleste[END_REF]. They seem to be different from modern views as synthesized for instance by [START_REF] Lambeck | The Earth's variable rotation: geophysical causes and consequences[END_REF]. These authors agree on the Liouville-Euler system (D for Laplace) of differential equations, but the forces that act on the Earth are different (and interpreted in a different way, as shown below). In the present paper, we attempt to check Laplace's full system using the observations that have accumulated and improved since Laplace's time (time series starting in 1750 for the oldest and no later than 1850 for the shortest ones).
We first discuss some of the core ideas of the paper, based on Laplace's original developments (section 2). We then recall some concepts and tools that we use in the paper and introduce the data, i.e. the coordinates of the Earth's rotation pole from 1846 to 2020 (section 3). In section 4, we establish a striking result that is central to the paper: the detrended polar motion is highly correlated with the sum of the forces exerted by the four Jovian planets. We next submit the data to Singular Spectral Analysis (SSA) and discuss the first SSA components (section 4): the Markowitz, annual and Chandler rotations. Then, in section 5, we discuss the SSA components of the derivatives of the three components above. In section 6, we give several other examples, such as the excellent correlation of the 40yr SSA component of the derivative of the envelope of the Chandler oscillation with the 40yr SSA component of the combined forces of Uranus and Neptune.
We end with a discussion and concluding remarks (section 7).
2 - Forces, Moments and the Liouville-Euler System of Equations
In most classical applications of the mechanics of planetary rotation, one uses only the first two components of the trio of Euler angles, i.e. the coordinates of the rotation pole at the Earth's surface (Figure A1, Appendix 1). The Earth rotates about the Sun (and so do the other 7 planets) in the ecliptic plane that is almost perpendicular to the rotation axis. The Sun carries more than 99% of the mass of the solar system, and can be considered rather motionless (its center of gravity actually travels along a "small" variable "ellipse"). In addition to the gravitational attractions, one must consider the orbital kinetic moments of all planets (in other words the moment of the momentum, see eq. B1, Appendix 2), as emphasized by [START_REF] Laplace | Traité de mécanique céleste[END_REF]. Planets carry more than 99% of the total angular momentum of the system (19.3, 7.8, 1.7 and 1.7 x 10^42 kg m^2 s^-1 respectively for Jupiter, Saturn, Uranus and Neptune). This can be compared to the Sun's attraction at the Earth's orbit, 3.5 x 10^22 kg m s^-2, which can be transformed to the dimension of a kinetic moment by multiplying it by the Sun-Earth distance and the orbital revolution period of Earth, yielding 1.7 x 10^41 kg m^2 s^-1: that is not negligible compared to the order of magnitude of the kinetic moments of the Jovian planets (to 1 or 2 orders of magnitude).
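These orders of magnitude can be verified with a back-of-the-envelope calculation (a sketch using rounded planetary fact-sheet values and a circular-orbit approximation L ≈ M v a, not the IMCCE ephemerids):

```python
# Orbital kinetic moments L ~ M * v * a (circular-orbit approximation), in kg m^2 s^-1
planets = {                    # mass (kg), mean orbital speed (m/s), semi-major axis (m)
    "Jupiter": (1.90e27, 1.31e4, 7.79e11),
    "Saturn":  (5.68e26, 9.7e3,  1.43e12),
    "Uranus":  (8.68e25, 6.8e3,  2.87e12),
    "Neptune": (1.02e26, 5.4e3,  4.50e12),
}
for name, (m, v, a) in planets.items():
    print(f"{name}: {m * v * a:.2e}")       # each of order 10^42 to 10^43 kg m^2 s^-1

# Sun's attraction at the Earth's orbit, converted to the same dimension
G, M_sun, M_earth = 6.674e-11, 1.989e30, 5.972e24
r, T = 1.496e11, 3.156e7                    # Sun-Earth distance (m), orbital period (s)
F = G * M_sun * M_earth / r**2              # ~3.5e22 kg m s^-2
print(f"Sun equivalent: {F * r * T:.2e}")   # ~1.7e41 kg m^2 s^-1
```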
"Laplace" -11/3/21 -2 nd Revision 6/45
The central idea of this paper is to analyze variations in the Earth's rotation axis under the influence not only, as in many classic treatments (e.g. Dehant and Mathiews, 2015, ch. 2), of gravitational potentials, but also of kinetic moments. The classical system of differential equations that describe the pole's motion (Liouville-Euler) links the sum of simple physical entities with their time derivatives, hence a first order linear system (Appendix 1, part 1). See [START_REF] Bode | Network analysis and feedback amplifier design[END_REF] for more on the definition and consequences that can be drawn from such linear systems. One is that causes and consequences are similar, up to a constant factor, if the system is not too dissipative and is maintained: this implies that gravitational potentials, kinetic moments (of Jovian planets) and polar motions should share characteristic features.
Polar motion is described by three coordinates, usually labeled m1, m2 and m3 (Appendix 1). If one only wants to study the perturbations due to the gravitational potential of a planet in rotation about itself, two coordinates, m1 and m2, are sufficient to describe the motion. In the case of our Solar system, planets revolve about the Sun in (or close to) the ecliptic plane; the moments they generate are perpendicular to that plane (Appendix 2). They act on the inclinations of the rotation axes of all planets, including Earth's. This is the well-known phenomenon of interaction of spinning tops and is adequately described by the Liouville-Euler equations. This was known to Laplace who chose not to use the three Euler angles, but gave all the analytic formulas that allow one to compute the inclination θ of the rotation axis (Figure A1 and Appendix 3) as a function of time, under the influence of the Moon and Sun [START_REF] Laplace | Traité de mécanique céleste[END_REF]book 5, page 317, number 5), and the time derivative of the declination ψ of the rotation axis [START_REF] Laplace | Traité de mécanique céleste[END_REF]book 5, page 318, number 6). θ and ψ are defined in Laplace (1799; book 1, page 73, number 26). [START_REF] Laplace | Traité de mécanique céleste[END_REF]book 5, pages 352-355, number 14) deduces that, when neither the Moon nor the Sun act on Earth (conjunction nodes), the time derivative of the declination (which in modern terms is the Euler period 1/σ) has a value of 306 days (Appendix 3). This value is fully determined by the Earth's moments of inertia When the Moon and Sun act with maximum effect (conjunction bellies) 1/σ reaches a value of 578 days. 1/σ therefore oscillates between 306 and 578 days; Chandler (1891) found a value of 427 days and today one observes values of 432-434 days. Both inclination θ and declination ψ drift.
3 - The Toolbox: Rotation Pole Data, Ephemerids, Commensurability and Singular Spectrum Analysis
Some of the tools and data needed to pursue our goal are listed in this section. Of course, we require knowledge of planetary ephemerids, that are given by the IMCCE. Then we need/use:
3-1 Rotation pole data: Laplace did not have sufficient observations to demonstrate the influence of planets, though he certainly did not deny their possible role. We now have sufficiently long series of observations to test his full theory.
The rotation pole is defined by its components m1 and m2, respectively on the Greenwich (0°) and 90°E meridians (Figure A1). Two series of measurements of (m1, m2) are provided by IERS 1 under the codes EOP-C01-IAU1980 and EOP-14-C04. The first one runs from 1846 to July 1 st 2020 with a sampling rate of 18.26 days, and the second runs from 1962 to July 1 st 2020 with daily sampling (also giving access to the length of day). Figure 1 shows the components m1 and m2 of the longer series (data are given in milli arc second -mas -and converted here in radians per secondrad.s -1 ). Figure 2 integer numerator and denominator less than 9 [START_REF] Mörth | Planetary motion, sunspots and climate[END_REF][START_REF] Okhlopkov | The gravitational influence of Venus, the Earth, and Jupiter on the 11-year cycle "Laplace" -11/3/21 -2 nd Revision 43/45 of solar activity[END_REF][START_REF] Scafetta | Solar Oscillations and the Orbital Invariant Inequalities of the Solar System[END_REF]. Planets encounter a resonance and can be paired, and each pair can be considered as a single object (an egregor or aggregate). Jupiter/Saturn and Uranus/Neptune form two pairs.
Pairs of pairs can also be considered, thus the set (Jupiter/Saturn)/(Uranus/Neptune). Many analyses of sunspot series [START_REF] Lassen | Variability of the solar cycle length during the past five centuries and the apparent association with terrestrial climate[END_REF]Hataway, 2015;[START_REF] Usoskin | Solar activity during the Holocene: the Hallstatt cycle and its consequence for grand minima and maxima[END_REF][START_REF] Le Mouël | Identification of Gleissberg cycles and a rising trend in a 315-year-long series of sunspot numbers[END_REF][START_REF] Stefani | A Model of a Tidally Synchronized Solar Dynamo[END_REF][START_REF] Courtillot | On the prediction of solar cycles[END_REF]Le Mouël et al., 2020a;Stefani et al., 2020) and of a number of geophysical phenomena [START_REF] Courtillot | Multi-Decadal Trends of Global Surface Temperature: A Broken Line with Alternating~ 30 yr Linear Segments?[END_REF][START_REF] Scafetta | High resolution coherence analysis between planetary and climate oscillations[END_REF]Lopes et al., 2017;[START_REF] Scafetta | Multiscale Analysis of the Instantaneous Eccentricity Oscillations of the Planets of the Solar System from 13 000 BC to 17 000 AD[END_REF][START_REF] Bignami | Are normal fault earthquakes due to elastic rebound or gravitational collapse?[END_REF]Le Mouël et al., 2019a;Le Mouël et al., 2019b;[START_REF] Hilgen | Paleoclimate records reveal elusive~ 200-kyr eccentricity cycle for the first time[END_REF]Le Mouël et al., 2020b;[START_REF] Zaccagnino | Tidal modulation of plate motions[END_REF][START_REF] Courtillot | On the prediction of solar cycles[END_REF] contain components with periods that can be attributed to Jovian planets to first order, and all planets including the telluric ones to second order [START_REF] Courtillot | On the prediction of solar cycles[END_REF]. Table 1 lists planetary commensurabilities following [START_REF] Mörth | Planetary motion, sunspots and climate[END_REF]. The periods found in our analysis of the SSA components of polar motion (section 4) and of the derivatives of their envelopes (section 5) are labeled in red (there are 8, ranging from 1.2 to 165 years).
Note: Inspection of Table 1 may give the impression that there is a risk of "cherry picking".
But certain periods that could have been reconstructed are not present, such as 103 yr that could have been obtained with Neptune. Commensurabilities are built from two consecutive planets and once their effect has been aggregated, they can be used in the next step of aggregation/commensurablity. The concept of commensurability is used by astronomers in order to discriminate between planets and other objects. The corresponding periods are not random: they are directly related to the revolutions of these bodies, and result from calculating means or subtracting periods two by two. Thus what can be obtained is not random. Moreover, as already pointed out by [START_REF] Mörth | Planetary motion, sunspots and climate[END_REF] or more recently [START_REF] Scafetta | High resolution coherence analysis between planetary and climate oscillations[END_REF], uncovering a limited number of common periods in a number of geophysical observables including sunspots cannot be due to chance. The action of kinetic moments of Jovian planets on the Sun's surface is what has allowed us to predict the next solar cycle from the ephemerids in a previous paper [START_REF] Courtillot | On the prediction of solar cycles[END_REF].
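The commensurable periods referred to here can be illustrated directly from the sidereal revolution periods of the Jovian planets (a sketch with rounded periods; the pairing follows the Jupiter/Saturn and Uranus/Neptune aggregation described above):

```python
T = {"Jupiter": 11.86, "Saturn": 29.46, "Uranus": 84.01, "Neptune": 164.8}   # years

def beat(t1, t2):
    """Conjunction (beat) period of two bodies with revolution periods t1 < t2."""
    return 1.0 / (1.0 / t1 - 1.0 / t2)

js = beat(T["Jupiter"], T["Saturn"])   # ~19.9 yr; three such beats give ~60 yr
un = beat(T["Uranus"], T["Neptune"])   # ~171 yr, close to the Jose cycle
print(round(js, 1), round(3 * js, 1), round(un, 1))
```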
3-3 Singular spectral analysis:
Finally we extract the relevant components of polar motion and ephemerids, and other long time series, with the help of Singular spectral analysis (SSA; [START_REF] Vautard | Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series[END_REF][START_REF] Vautard | Singular-spectrum analysis: A toolkit for short, noisy chaotic signals[END_REF]; for more up to date versions of the technique by the St Petersburg school of mathematics, see [START_REF] Golyandina | Singular Spectrum Analysis for time series[END_REF]. We have described and used our own version of SSA in a number of previous papers (e.g. [START_REF] Lopes | The mantle rotation pole position. A solar component[END_REF][START_REF] Courtillot | On the prediction of solar cycles[END_REF][START_REF] Courtillot | On the prediction of solar cycles[END_REF].
"Laplace" -11/3/21 -2 nd Revision 10/45
We discuss here a point that often comes up. An important factor in any time series analysis is the size of the window used in classical (Fourier) filters, to avoid erroneous interpretations [START_REF] Kay | Spectrum analysis-a modern perspective[END_REF]. In SSA, the lagged-vector analysis window L should be sufficiently large so that each eigen vector carries a large part of the information contained in the original time series. In more mathematical words, one should work in the frame of Structural Total Least Squares (STLS)
for a Hankel Matrix [START_REF] Lemmerling | Analysis of the structured total least squares problem for Hankel/Toeplitz matrices[END_REF]. A second issue is the separability of components. Many solutions are available, an exhaustive list being given by Golyandina and Zhigljavsky (2013, chap. 2.5.3, page 75). In this paper, we have used the sequential SSA. The window width L is variable, but remains close to 145 years.
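For readers who wish to reproduce this kind of decomposition, a minimal generic SSA sketch is given below (embedding into a Hankel trajectory matrix, singular value decomposition, diagonal averaging). It is only an illustration on a synthetic series and does not reproduce the authors' sequential SSA or their choice of window:

```python
import numpy as np

def ssa_components(x, L, k=3):
    """Return the first k elementary SSA reconstructions of the 1-D series x (window length L)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])   # Hankel (trajectory) matrix, L x K
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(k):
        Xi = s[i] * np.outer(U[:, i], Vt[i])              # rank-one elementary matrix
        # anti-diagonal averaging back to a series of length N
        comps.append(np.array([Xi[::-1].diagonal(n - (L - 1)).mean() for n in range(N)]))
    return comps

# Toy usage: annual plus Chandler-like (1.19 yr) oscillations sampled every 18.26 days
dt = 18.26 / 365.25
t = np.arange(0.0, 60.0, dt)
x = np.sin(2 * np.pi * t) + 2.0 * np.sin(2 * np.pi * t / 1.19)
comps = ssa_components(x, L=len(t) // 3, k=4)   # each oscillation appears as a pair of components
```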
4 - First Results
4.1 A striking reconstruction - Planets' action on Earth's rotation:
We start to test the ideas and use the tools summarized above by comparing the sum of the forces exerted by the four Jovian planets (using the IMCCE ephemerids) and the m1 component of polar motion (1846 to 2020) as reconstructed from its SSA components, with the trend removed.
The polar coordinates m1 and m2 are related to the forces acting on Earth (Appendix 1). To first order, we can consider that the total "force" is simply proportional to the sum of individual (Jovian) planetary kinetic moments, plus the Solar kinetic moment. We have computed these moments from the planetary ephemerids, revolution periods and masses; their sum is given as the top black curve in Figure 3a. The red curve below is the reconstructed m1 polar coordinate from
Figure 1, after it has been decomposed in its SSA components, then reconstructed from them, but with the first component (the trend, see sub-section 4.2) removed. Figure 3b shows an enlargement of the 1980-2019 part of Figure 3a. The correlation is quite striking. It is indeed expected, as already pointed out by Laplace (1799, book 5 in whole), that the Earth's rotation axis should undergo motions with components that carry the periods (and combinations of periods) of the Moon, Sun, and planets, particularly the Jovian planets as far as their kinetic moments are concerned (see also [START_REF] Mörth | Planetary motion, sunspots and climate[END_REF][START_REF] Mörth | Planetary motion, sunspots and climate[END_REF][START_REF] Courtillot | On the prediction of solar cycles[END_REF]. This first exercise demonstrates that one should indeed consider planetary kinetic moments when describing the motions of the Earth's rotation axis. Based on this remarkable result, the aim of the rest of this paper is to see whether characteristic components of the ephemerids are also found in Earth's polar motion and other related (or not obviously related) phenomena. We next analyze one by one the leading SSA components of the Earth's rotation pole coordinates.
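The construction behind Figure 3 can be sketched numerically. The paper uses the IMCCE ephemerids; purely as an illustration, the snippet below uses astropy's built-in approximate ephemeris (an assumption, not the authors' tool) to sum the orbital kinetic moments m r × v of the four Jovian planets on the 18.26-day sampling grid; the small offset of the Sun from the solar system barycenter is neglected here.

```python
import numpy as np
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import get_body_barycentric_posvel

masses = {"jupiter": 1.898e27, "saturn": 5.683e26,
          "uranus": 8.681e25, "neptune": 1.024e26}        # kg

# 18.26-day sampling from 1846 onwards, as in the EOP-C01-IAU1980 series
times = Time("1846-01-01") + np.arange(0.0, 174 * 365.25, 18.26) * u.day

total_Lz = np.zeros(len(times))
for body, m in masses.items():
    pos, vel = get_body_barycentric_posvel(body, times)
    r = pos.xyz.to_value(u.m)                             # shape (3, N)
    v = vel.xyz.to_value(u.m / u.s)
    L = m * np.cross(r, v, axis=0)                        # orbital kinetic moment, kg m^2 s^-1
    total_Lz += L[2]                                      # z-component (ICRS frame, within ~23 deg of the ecliptic pole)
# total_Lz can then be detrended and compared with the SSA-reconstructed m1, as in Figure 3.
```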
4.2 First SSA Component (Markowitz)
The first SSA components, shown in Figure 4, correspond to the mean trend of polar motion called the Markowitz drift [START_REF] Markowitz | Concurrent astronomical observations for studying continental drift, polar motion, and the rotation of the Earth[END_REF]. The drift velocity is on the order of 13 cm/yr and is principally carried by the E-W component m2. As noted by a reviewer, these curves show changes in slope and inflection points that are reminiscent of the recent evolution of the Earth's global surface temperature (Le Mouël et al., 2019b, Figure 20). This important point is not discussed further in the present paper.
4.3 Second SSA Component (Annual)
The second SSA component is the forced annual oscillation (Figure 5). On that annual oscillation, Lambeck (2005, chapter 7, page 146) writes "The seasonal oscillation in the wobble is the annual term which has generally been attributed to a geographical distribution of mass associated with meteorological causes. Jeffreys in 1916 first attempted a detailed quantitative evaluation of this excitation function by considering the contributions from atmospheric and oceanic motion, of precipitation, of vegetation, and of a polar ice. Jeffreys concluded that these factors explain the observed annual polar motion, a conclusion that is still valid today".
Figure 5 shows that the annual components of m1 and m2 are significantly modulated, and in different ways (recall that the excitation functions are sums of sines and cosines with constant weights; Lambeck, 2005, page 153, equations 7.1.9). In the generally accepted theory, modulation is thought to be a response to reorganization of oceanic and atmospheric masses. We note in the modulation of m1 the suggestion of a periodicity on the order of 150 years or more that could correspond to the [START_REF] Jose | Sun's motion and sunspots[END_REF] 171.5 yr cycle. Note that, given uncertainties, the Jose cycle could actually be the Suess-de Vries ~200 yr cycle (Stefani et al., 2020).
4.4 Third SSA Component (Chandler)
Figure 6 shows the third SSA component, that is the Chandler component. Its amplitude is twice that of the annual component and its behavior is very different. The modulations are very large, similar for m1 and m2, and undergo a sharp and simultaneous change in phase and amplitude in 1930. Many scientists have studied this phase change [START_REF] Hinderer | Geomagnetic secular variation, core motions and implications for the Earth's wobbles[END_REF][START_REF] Runcorn | The excitation of the Chandler wobble[END_REF][START_REF] Gibert | Wavelet analysis of the Chandler wobble[END_REF]; Bellanger et al., 2001; [START_REF] Bellanger | A geomagnetic triggering of Chandler wobble phase jumps?[END_REF][START_REF] Gibert | Inversion of polar motion data: Chandler wobble, phase jumps, and geomagnetic jerks[END_REF]. The Chandler oscillation extracted by SSA is similar to that obtained with wavelets by [START_REF] Gibert | Wavelet analysis of the Chandler wobble[END_REF]. It is also as regular as that obtained with SSA by [START_REF] Gorshkov | Manifestation of solar and geodynamic activity in the dynamics of the Earth's rotation[END_REF].
When the first three SSA components of m1 and m2 are added, they account for 73% of the original variance. The quality of that incomplete reconstruction is shown in Figure 7.
Pushing the SSA analysis further reveals an oscillation with period 1.22 yr with an 18.6 yr modulation (the nutation), one with period 1.15 yr with a symmetrical modulation as in the case of the Chandler term, and one with period 1.10 yr. Some of these (quasi-) periods have already been found using SSA on time series of sunspots (Le Mouël et al., 2020a). These periods seem to be linked to the ephemerids of solar system planets, which has been used by [START_REF] Courtillot | On the prediction of solar cycles[END_REF] to predict the next solar cycle [START_REF] Usoskin | A history of solar activity over millennia[END_REF], but not all authors agree. Moreover, these components are found only in m2 and are much smaller in amplitude, on the order of 10^-13 to 10^-14 rad s^-1 vs 10^-10 to 10^-11 rad s^-1 for the first three [START_REF] Lopes | The mantle rotation pole position. A solar component[END_REF][START_REF] Japaridze | Study of the Periodicities of the Solar Differential Rotation[END_REF]. When all components from the trend to the Hale cycle are added, they account for 95% of the total variance of the original series. Except for the 1.10 and 1.15 yr components, all others are found in the table of planetary interactions (Table 1).
5 - On Some Derivatives of SSA Components of Polar Motion
System (D) expresses that there is a link between a force and the derivative of the resulting polar motion (Appendix 1, equation 2). In other words Earth acts as a natural integrator (Appendix 1, equation 2 implies that m is an integral of ξ; see Le Mouël et al, 2010). This leads us to analyze the derivatives of the first three (largest) SSA components identified in the previous section.
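The "derivative of the envelope" used in this section can be obtained, for instance, from the analytic-signal amplitude. A minimal sketch on a synthetic, amplitude-modulated Chandler-like component is given below (an illustration of the generic operation, not the authors' exact processing):

```python
import numpy as np
from scipy.signal import hilbert

dt = 18.26 / 365.25                                  # sampling step in years
t = np.arange(0.0, 170.0, dt)
comp = (1.0 + 0.5 * np.sin(2 * np.pi * t / 40.0)) * np.sin(2 * np.pi * t / 1.19)  # toy Chandler term
envelope = np.abs(hilbert(comp))                     # instantaneous amplitude (envelope)
d_envelope = np.gradient(envelope, dt)               # its time derivative, which can be fed to a further SSA pass
```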
Markowitz Drift
We first calculate the derivative of the Markowitz drift (Figure 4), and analyze its major SSA components. They are a trend (Figure 8a), a 90 yr pseudo-cycle (Figure 8b), a 40 yr pseudo-cycle (Figure 8c), a 22 yr period and an 11 yr period (Figure 8d). Le Mouël et al. (2017) have obtained a period of 90 ± 3 yr from the sunspot series SN_m_tot_V2.0. It corresponds to a characteristic period in the ephemerids of Uranus (Table 1). There is also a close correspondence of periods for the 11 yr oscillation (Figure 8d). The drift could be linked to the modulation and varying "periodicity" of sunspots (8 to 13 yr). This is close to a characteristic period of Jupiter's ephemeris. The trend (Figure 8a) could be linked to the [START_REF] Jose | Sun's motion and sunspots[END_REF] 171.5 yr cycle, attributed to Neptune (Table 1) or to the Suess-de Vries ~200 yr cycle (Stefani et al., 2020).
Envelope of the Forced Annual Oscillation
We next turn to the derivative of the envelope of the forced annual oscillation (Figure 5). Its first SSA component, the trend, is shown in Figure 9a. For m1 this trend is compatible with a little more than one period of a sine curve with a period close to 170 yr, that is the Jose (1965) solar cycle, corresponding to the ephemeris of Neptune (or again given uncertainties to the Suess-de Vries ~200 yr cycle). The next SSA component is a 70 yr cycle for m1 and a 60 yr cycle for m2 (Figure 9b). These periods, or pseudo-periods, are among those resulting from combinations of ephemerids of the Jovian planets [START_REF] Mörth | Planetary motion, sunspots and climate[END_REF][START_REF] Scafetta | Solar Oscillations and the Orbital Invariant Inequalities of the Solar System[END_REF]; Table 1). The 60 yr cycle had already been found in sunspot series by [START_REF] Scafetta | Empirical evidence for a celestial origin of the climate oscillations and its implications[END_REF] and Le Mouël et al (2020a).
We had also seen it as an important component of series of global temperature and PDO and AMO oceanic indices [START_REF] Courtillot | Multi-Decadal Trends of Global Surface Temperature: A Broken Line with Alternating~ 30 yr Linear Segments?[END_REF].
Free Chandler Oscillation
We now undertake the SSA analysis of the derivative of the envelope of the Chandler oscillation (Figure 6). We find components with periods 70, 40, 30 and 22 yr (Figures 10a to 10d).
It is remarkable that the components for m1 and m2 are quasi-identical and have a very regular behavior, close to sine functions but with some slower modulation: they could be described as "astronomical" (as opposed to "astrophysical", as defined by [START_REF] Mayaud | Derivation, meaning, and use of geomagnetic indices[END_REF].
6 - Further Examples
We can illustrate further how Jovian planets influence polar motion with the combined effects of the pair Uranus (84 yr) - Neptune (165 yr): this pair has revolution periods compatible with the envelopes in Figures 5, 6 and 8b. Figure 11a shows the sum of the kinetic moments of these two planets, which matches the 40 yr SSA component of the derivative of the envelope of the Chandler oscillation (Figures 11b and 11d). Finally, in Figures 12a to 12c, we superimpose the signatures (components) of the ephemeris of Jovian planets on the components of polar motion. In Figure 12a, the 90 yr component of the envelope of m2 matches the ephemeris of Uranus offset by 32 years. In Figure 12b, the 165 yr component of the envelope of m1 matches the ephemeris of Neptune, also offset by 32 years. In Figure 12c, the 30 yr component of the envelope of m1 of the Chandler oscillation matches the ephemeris of Saturn offset by 15 years. The 11 yr component detected in the m2 component of the derivative of the Markowitz drift (Figure 8d) has a variable phase drift with respect to the ephemeris. But, whereas "solar" components (periodicities) do appear at 22 and 11 yr (and 5.5 yr?) in polar motions, they are 3 to 4 orders of magnitude smaller than the leading components we discuss here.
We have seen that the sum of the Markowitz drift, annual oscillation and Chandler oscillation explain some 70% of polar motion. The same is true for the leading components of sunspots, i.e. the sum of the trend (Jose ~171.5 yr cycle), Schwabe cycle (~11 yr) and Gleissberg cycle (~90 yr) (on the same time range). These periods correspond to those of Neptune (~165 yr), Uranus (84 yr ) and Jupiter (11.8 yr ).
Many if not most of the (quasi-)periods found in the SSA components of polar motion, of their modulations, of their derivatives can be associated with the Jovian planets. Only one, the 432-434 day period is due to the Earth's mass and moments of inertia and not to the Jovian planets, as predicted by Laplace (1799).
7 - Summary, Discussion and Conclusion
The general laws that govern the motions of celestial bodies have been derived and discussed by [START_REF] Laplace | Traité de mécanique céleste[END_REF] in his remarkable Traité de Mécanique Céleste. Laplace established the system of linear differential equations now known as the Liouville-Euler equations. He provided the full set of equations for the three Euler angles that specify the motions of a body's axis of rotation. Laplace differs from most later authors in the way he uses the Liouville-Euler system. Laplace makes full use of the system (D) for a rotating body that undergoes both rotation and translation, and solves the algebraic transcendant equations of Appendix 3, given all astronomical parameters. Most others use a simplified version with the formalism of excitation functions (Appendix 1, equation 2; a second order system) in which the possibility of a translation of the body's rotation axis is denied.
When Laplace obtains system (D) on page 74 of Chapter 7 of Book I, after 7 chapters that led him to these equations, he recognizes the fact that the system accounts for rotation as well as translation of a rotating body's polar axis. When Lambeck (for instance) follows the same route, his Chapter 3 (entitled «Rotational Dynamic») on page 30 begins with the following sentence : «The fundamental equations governing the rotation of a body are Euler's dynamical equations».
Lambeck links the angular momentum to the torque that generated it. One means only rotation: that would be valid if the Earth's inclination were zero or a constant. The equations are the same, but one soon forgets that the momentum that is the source of the torque (Lambeck's system (3.1), page 30) is a 3D vector (with no reason to be restricted to 2D, since the Earth is neither flat, nor is its inclination constant; its rotation axis revolves about the Sun and is therefore subjected at least to our star's kinetic momentum). This oversight has some consequences. Since one only considers rotations, not translations, then the (Chandler) free rotation is obtained by zero-ing all torques and disregarding the third equation for the m3 polar coordinate (that is assumed constant). Then, the forced annual oscillation cannot be due to the revolution of Earth about the Sun and one must find causes for these forced oscillations (the excitation functions). Laplace of course knew that polar coordinates m1 and m2 were connected to m3. Therefore, Laplace did not constrain polar motion to the two surface components (m1, m2) but represented it by two meaningful components, the axis' inclination θ and the time derivative of its declination ψ, which depends on the inclination
(previously calculated as a solution of the first Liouville-Euler equation). Laplace showed that there existed a free oscillation that would drift with a period between 306 (conjunction nodes) and 578 days (conjunction bellies), fully determined by the Earth's moments of inertia. This free oscillation, the Chandler oscillation, has a current value of 432-434 days. We now have long time series, up to a couple of centuries long, available and we use series of coordinates of the rotation pole m1 and m2
(Figure 1) to extend some of Laplace's (1799) results. A simple Fourier transform (Figure 2) shows the dominant spectral lines at 1 yr (forced annual oscillation) and 1.19 yr (free Chandler oscillation).
Singular spectral analysis (SSA) allows to better characterize the three leading components, the trend (~13cm/yr) called the (free) Markowitz drift (Figure 4), then the (forced) annual oscillation (showing different modulations for m1 and m2, Figure 5) and the Chandler oscillation (with a very large modulation and a phase change in 1930, similar for m1 and m2, Figure 6). Under the current theory, modulation is thought to be a response to reorganization of oceanic and atmospheric masses (e.g. Lambeck, 2005, chapter 7). Taken together, the first three SSA components explain 73% of the signal's total variance (Figure 7). The smaller components that follow have (pseudo-) periods of 1.22 (with an 18.6 yr modulation), 1.15 and 1.10 yr. Some of these periods have been encountered in sunspot series and in the ephemerids of Jovian planets (Le Mouël et al., 2020a;[START_REF] Courtillot | On the prediction of solar cycles[END_REF].
We have next analyzed in the same way the envelopes of the derivatives of the first three SSA components of polar motion (Figure 8). We find a trend in the derivative of the Markowitz drift, that could also correspond to the 171.5 yr Jose cycle or to the ~200 yr Suess-de Vries cycle.
We have shown (Figure 3) that the m1 component of polar motion reconstructed with SSA, with the Markowitz trend removed, matches remarkably well the sum of kinetic moments of the four Jovian planets. We have also computed these kinetic moments from the planetary ephemerids of Uranus and Neptune (Figure 11a). They "predict" remarkably well (Figure 11b) the 40 yr SSA component of the derivative of the envelope of the Chandler oscillation (Figure 11d).
We have previously determined the characteristic SSA components of solar activity, using sunspot numbers as a proxy (Le Mouël et al., 2019b). The sum of the Markowitz drift, annual oscillation and Chandler oscillation explain over 70% of polar motion. The same is true for sunspots, on the same time range, regarding the sum of the trend (Jose ~171.5 yr), Schwabe (~11 yr) and Gleissberg (~90 yr) cycles. These periods correspond to those of Neptune (~165 yr), Uranus (~90 yr) and Jupiter (~11 yr). We have superimposed the signatures (components) of the ephemerids of Jovian planets on the components of polar motion. The 90 yr component of the envelope of m2 matches the ephemerids of Uranus, offset by 32 years (Figure 12a). The 165 yr component of the envelope of m1 matches the ephemerids of Neptune, also offset by 32 years (Figure 12b). And the 30 yr component of the envelope of m1 of the Chandler oscillation matches the ephemerids of Saturn, offset by 15 years (Figure 12c).
We have followed [START_REF] Mörth | Planetary motion, sunspots and climate[END_REF], who determined the commensurable periods of pairs and pairs of pairs of Jovian planets ( 2019d). This has led to attempts to increase the complexity of the model, such as the forcing by climate or the visco-elastic response to glacial isostatic rebound. We have seen that this theory uses only 2 of the 3 Euler angles. By using the full system of equations in the Liouville-Euler system (D for Laplace), Laplace (1799) was able to go beyond the synthetic treatments of (for instance) Guinot (1977) or [START_REF] Lambeck | The Earth's variable rotation: geophysical causes and consequences[END_REF]. We have seen in this paper numerous applications of this theory that explain many pseudo-periodic components of a number of geophysical (and solar) phenomena, making the leading role of planetary ephemeris clear.
The shorter periods (months to a few decades) often show as modulations of even shorter variations. And trends, with about 200 years of data, are possibly due to periods in the ephemeris comparable to or longer than the range of available observations. Still, these 200 years allow us to test Laplace's work further than he himself could. We have for instance been able to use this formalism to predict the future evolution of solar Cycle 25 [START_REF] Courtillot | On the prediction of solar cycles[END_REF].
It is widely assumed that both forced and free oscillations of Earth can, at least in part, be associated with climate forcings. Such has been the case from Jeffreys (1916) to [START_REF] Lambeck | The Earth's variable rotation: geophysical causes and consequences[END_REF], and recently to [START_REF] Zotov | On modulations of the Chandler wobble excitation[END_REF] and [START_REF] Zotov | A possible interrelation between Earth rotation and climatic variability at decadal time-scale[END_REF]. In all these works, causality is absent, be it from a time perspective or based on the orders of magnitude of the forces required to perturb the Earth's rotation. The periods that for instance [START_REF] Zotov | A possible interrelation between Earth rotation and climatic variability at decadal time-scale[END_REF] associate with an interaction between Earth's fluid and rigid envelopes are found in other geophysical phenomena such as the Earth's magnetic field or sunspots (Le Mouël et al., 2019a,b,c,d ;Le Mouël et al., 2020a,b ;[START_REF] Courtillot | On the prediction of solar cycles[END_REF][START_REF] Courtillot | On the prediction of solar cycles[END_REF]and references therein). We have come to the same conclusion regarding many climatic indices (Le Mouël et al., 2019d). If there is a good correlation of many characteristic periods, pseudo-periods and components extracted with SSA, for instance between Earth's rotation and many features of climate, it is reasonable to assume that this "Laplace" -11/3/21 -2 nd Revision 31/45 is because they are subject to some common forcings. This is not an overly speculative hypothesis:
with the views of Laplace on tides, we know that the fluid envelopes react on short time scales (to changes in the Moon's declination for 2/3rds and the Sun for 1/3 rd ). On longer time scales, the whole (including solid) Earth responds (e.g. Dehant et Mathiews, 2015), all being governed by the Liouville-Euler equations.
In the present study, we have been able to find planetary signatures in polar motions, strictly based on observational data and using only classical mechanics. A possible causal chain thus emerges that has gravity potential and kinetic moments of planets acting directly or modulating motions of the fluid parts of celestial bodies, i.e. the Sun's outer layers (sunspots) and the Earth's atmosphere and ocean. These effects are in general not yet modeled: this is a domain where climate modeling warrants significant research advances.
In summary and conclusion of this work, two different mechanisms (causal chains) are likely at work. One is illustrated by the spectacular and direct effect of the kinetic moments of the (Jovian) planets on the Chandler wobble, whose intrinsic period (somewhere between 306 and 578 days) is synchronized to 433 days (a value that depends on Earth properties). The causal chain is directly from the Jovian Planets to Earth. Another causal chain would be an effect of planetary motions on the solar dynamo; variations in solar activity would in turn influence meteorological and climatic phenomena, such as mass transport between the equator and the poles, length of day, sea-level,...
Given the remarkable coincidence between the quasi-periods of many of these phenomena, it is reasonable to assume that both causal chains are simultaneously at work. In that sense, it is not surprising to find the signatures of the Schwabe, Hale and Gleissberg cycles in many terrestrial phenomena, reflecting the characteristic periods of the combined motions of the Jovian planets.
(i.e. the internal distribution of masses). The equations derived by Laplace give the time evolution of the inclination θ and of the declination ψ of the rotation axis. All celestial and terrestrial parameters in these equations are defined in Appendix 2. The time variation of declination of the Earth's rotation pole is a function of inclination. Since the (θ, ψ) and (m1, m2) couples represent the same physics, the pattern of the sum of planetary kinetic moments that "force" part of the Earth's polar motions should be found in m1 and m2 (see below). Laplace obtained these equations taking into account "only" the Moon and Sun.
Figure 1: Components (m1, m2) of polar motion since 1846 (time series EOP-C01-IAU1980).
Figure 2: Amplitude spectrum of the polar motion series, dominated by lines at 1 yr (forced annual oscillation) and 1.19 yr (free Chandler oscillation).
Figure 3a: Upper curve (in black): the sum of the forces of the four Jovian planets affecting Earth (ephemerids from the IMCCE). Lower curve (in red): the m1 component of polar motion (1846-2020) reconstructed with SSA and with the trend (Markowitz) removed.
Figure 5: Second SSA component of polar motion (annual oscillation) since 1846 (m1 in red, m2 in blue).
Figure 6: Third SSA component of polar motion (Chandler oscillation) since 1846. Component m1 is in red and m2 in blue.
Figure 7: Reconstruction of polar motion since 1846 using only its first three SSA components. Top: observed component m1 in black and reconstructed in red; bottom: observed component m2 in black and reconstructed in blue.
Figure 8a: First SSA component (trend) of the derivative of the Markowitz drift (first SSA component of polar motion). Component m1 in red and m2 in blue.
Finally, the 40 yr component has been shown by Mörth and Shlamminger (1979; see also [START_REF] Courtillot | On the prediction of solar cycles[END_REF]) to correspond to a commensurable revolution period of the four Jovian planets. It is interesting to point out that in both terrestrial polar motion and solar activity (as studied through the proxy of sunspots) the first 3 components that emerge from SSA are a trend, then the Gleissberg and Schwabe quasi-cycles.
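The SSA decompositions referred to throughout this section (trend, Gleissberg, Schwabe, annual and Chandler components) can be reproduced, at least schematically, with a basic singular spectrum analysis. The sketch below is our own illustration, not the authors' code; it assumes a plain numpy implementation, an arbitrary window length L, and a hypothetical input series m1_series, and the grouping of eigentriples into "trend" or "quasi-cycle" components is left to the user.

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic singular spectrum analysis of a 1-D series x with window length L.
    Returns the elementary reconstructed components, ordered by singular value."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # trajectory (Hankel) matrix: columns are lagged windows of x
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])      # rank-one piece of X
        # diagonal averaging (Hankelization) back to a series of length N
        comp = np.zeros(N)
        counts = np.zeros(N)
        for i in range(L):
            for j in range(K):
                comp[i + j] += Xk[i, j]
                counts[i + j] += 1
        components.append(comp / counts)
    return components

# usage sketch: first component ~ trend, next pairs ~ quasi-periodic oscillations
# comps = ssa_decompose(m1_series, L=600)   # m1_series: hypothetical monthly polar-motion data
```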
Figure 8b: Second SSA component (90 yr period) of the derivative of the Markowitz drift (first SSA component of polar motion). Component m1 in red and m2 in blue. In black: Gleissberg cycle extracted from sunspots (sign reversed).
Figure 8c: Third SSA component (40 yr period) of the derivative of the Markowitz drift (first SSA component of polar motion). Component m1 in red and m2 in blue.
Figure 8d: 22 yr SSA component (top, component m2 in blue) and 11 yr SSA component (bottom, component m2 in blue) of the derivative of the Markowitz drift (first SSA component of polar motion). Bottom, black curve: the 11 yr Schwabe cycle extracted by SSA from the sunspot series (sign reversed).
"
Figure 9a: First SSA component (trend) of the derivative of the envelope of SSA component 2 496 (annual oscillation) of polar motion. 497 498
Figure 10a: First SSA component (70 yr quasi-period) of the derivative of the envelope of the Chandler oscillation.
Figure 10c: Third SSA component (30 yr quasi-period) of the derivative of the envelope of the Chandler oscillation.
Figure 10d: Fourth SSA component (22 yr quasi-period) of the derivative of the envelope of the Chandler oscillation.
Figure 11c: Second component of the Uranus-Neptune pair (top) and forced annual oscillation of the polar motion m1 (bottom).
In Figure 12c, the 30 yr component of the envelope of m1 of the Chandler oscillation matches the ephemeris of Saturn offset by 15 years. The 11 yr component detected in the m2 component of the derivative of the Markowitz drift (Figure 8d) has a variable phase drift with respect to the
"
Figure 12a: Superimposition of the ~90 yr SSA component of the envelope of m2 (blue curve; see
5 yr Jose cycle (associated with the ephemeris of Neptune) or the ~200 yr Suess-de Vries cycle). Next, a 90 yr component, reminiscent of the Gleissberg solar cycle (associated with the ephemeris of Uranus), a 40 yr component, corresponding to a commensurable revolution period of the four Jovian planets, a 22 yr and an 11 yr component, that can be associated with Jupiter and/or the Sun. For the modulation of the annual component of polar motion, SSA finds periods of 165, 70 and 60 years (Figure 9). The 60 yr component has been found in sunspots, global temperature of Earth's surface, and the oceanic oscillation patterns PDO and AMO (and Saturn). Finally, for the Chandler component, excellent matches are found for m1 and m2 with periods of 70, 40, 30 and 22 years (Figure 10).
Table 1
that is to calculate the inclination θ and declination ψ following Laplace's (1799) full treatment of the equations (Appendix 2). Since the Liouville-Euler equations are linear differential equations of first order, we have been able to use the frame of small perturbations and we have considered that the influence of planets can be taken as the sum of individual influences. When one works within this theoretical frame, there remain unexplained observations such as the 434 day value of the current period of the Chandler wobble or the 6 month component of oceanic indices (Le Mouël et al.,
): we find that 8 of them, ranging from
that starts with the Sun and Moon, and continues with the Jovian planets. It would be satisfying to undertake a rigorous demonstration of the influence of all planets on the Sun and on the Earth's
Observatoire royal de Belgique, http://www.sidc.be/silso/datafiles
Acknowledgements: We thank two anonymous reviewers for very useful comments on the original draft of this paper. V.C. acknowledges input from Georges Consolo. IPGP Contribution no
Appendix 1: Polar Coordinates and Excitation Functions
Figure A1 The reference system for the pole (m1 and m2).
Figure A1 gives the notations for the reference system that we use. The rotation pole is defined by its components m1 and m2, respectively on the Greenwich (0°) and 90°E meridians. We follow Lambeck's (2005, chapter 3) formalism. The rotation of the pole ω can be decomposed into three Euler angles (ω1, ω2, ω3) associated with the axes (X1, X2, X3) of the fixed reference frame.
These Euler angles are a function of the Earth's mean angular velocity Ω and of the apparent position of the pole at the Earth's surface m1, m2, m3 :
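A standard form of this relation, following the usual polar-motion convention (e.g. Munk and MacDonald; Lambeck), is recalled below as a reading aid; it is given as an assumed reference form and may differ in detail from the authors' exact expression:

```latex
% Rotation vector expressed through the small dimensionless quantities m1, m2, m3
% (standard polar-motion convention, e.g. Lambeck 2005, ch. 3):
\omega_1 = \Omega\, m_1, \qquad
\omega_2 = \Omega\, m_2, \qquad
\omega_3 = \Omega\,(1 + m_3),
% with |m_i| << 1, so that m1 and m2 describe the wobble of the rotation pole
% and m3 the relative change in the length of day.
```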
Noting that the Earth rotates about its axis and that its radius is constant, the Liouville-Euler system of equations (D for Laplace, 1799) becomes (1/σ is the Euler period). One sees the rotation as the sum of two oscillations, one intrinsic to the Earth linked to the constant 3m/4n, that varies like (1+λ).m.cos θ for all the nodes of luni-solar orbits, and is therefore a function of inclination θ, and another one forced by the Moon and Sun, linked to longitudes (ft+ζ) and (f't+ζ') and to right ascensions ν and ν' of these two orbs. Laplace can therefore estimate that the rotation period varies between 306 and 578 days. |
02926923 | en | [
"shs.psy"
] | 2024/03/04 16:41:22 | 2020 | https://shs.hal.science/halshs-02926923/file/S1269176320300419.pdf | Jean-Charles David¹
email: [email protected].
Arnaud Buchet²
Jean-Noël Sialelli³
Sylvain Delouvée
email: [email protected].
The use of antibiotics in veterinary medicine: Representations of antibiotics and biosecurity by pig farmers
Introduction
The use of antibiotics in livestock farming: context and issues
According to the WHO (World Health Organisation), more than half of the antibiotics produced in the world are intended for animals and a large number of these medicines are used in industrial livestock farming. In fact, the rearing conditions characteristic of these facilities (a large number of animals in close proximity) lead to a high consumption of antibiotics and, at the same time, increase the risk of bacterial resistance [START_REF] Rushton | Antimicrobial Resistance: The Use of Antimicrobials in the Livestock Sector[END_REF][START_REF] Schwarz | Use of antimicrobial agents in veterinary medicine and food animal production[END_REF].
Moreover, multi-resistant bacteria originating from farming can be transmitted to humans as the same classes of antibiotics are used in human and veterinary medicine [START_REF] Bonnet | Thèse : Utilisation raisonnée des antibiotiques en élevage porcin. Démarche d'accompagnement dans sept élevages[END_REF]. The challenge is thus to preserve the effectiveness of antibiotics through a One Health approach [START_REF] Parodi | Une seule santé «One world, one health»: la place des vétérinaires[END_REF]. With this objective, the WHO and other international institutions are encouraging countries to implement strategies aimed at reducing the use of antibiotics in veterinary medicine.
In France, in line with these recommendations, the National Agency of Veterinary Medicines (Anses-ANMV) carries out an annual survey of sales of veterinary medicines containing antibiotics. The data collected are used to measure the level of exposure of animals to antibiotics (ALEA: Animal Level of Exposure to Antimicrobials). In November 2011, Ecoantibio (the national plan to reduce the risks of antimicrobial resistance in veterinary medicine) set a target of a 25% drop in this exposure over 5 years, from 2012 to 2016. The Anses report of 2018 shows that the overall objective of the Ecoantibio plan was reached with a fall in the ALEA of 36.6% (41.5% for pigs). Nevertheless, pig farms remain one of the sectors most exposed to antibiotics with an average ALEA of 0.623 (Anses, 2018). The objective of the new 2017-2021 Ecoantibio plan is to improve on these results by developing, for example, communication campaigns about antimicrobial resistance and training in biosecurity.
In addition, an Ifop study (2017) revealed that 49% of French people have little or no knowledge of the term "antimicrobial resistance" while 71% of them think that the public authorities do not communicate enough about this phenomenon. The report also indicated that although the French people are only partially familiar with antimicrobial resistance and its causes, their opinions on this topic are becoming fixed. For example, 76% are in favour of eating meat from animals reared without antibiotics. The expectations of consumers are thus carrying more and more weight and are encouraging farmers to change their practices, especially those enabling them to reduce their reliance on antibiotics. As a result, a large body of research in animal epidemiology is trying to identify the technical, health and structural factors influencing the use of antibiotics [START_REF] Van Der Fels-Klerx | Farm factors associated with the use of antibiotics in pig production[END_REF][START_REF] Chauvin | Usage des antibiotiques en filières porcine, avicole et cunicole en France[END_REF]. Among them, biosecurity figures as one of the most promising strategies in terms of reducing this use (Laanen et al., 2013;[START_REF] Postma | The biosecurity status and its associations with production and management characteristics in farrow-to-finish pig herds[END_REF]. Biosecurity includes actions such as cleaning/disinfecting farm buildings and changing clothing. These types of measures are now the subject of many training courses, particularly in the poultry and pig sectors.
However, in order to communicate effectively about the risks linked to antimicrobial resistance and provide training in biosecurity suitable for farmers, it is essential to understand the different perceptions those involved have of antibiotics and the actions required to reduce their use. Their opinions, attitudes and beliefs are all factors that should be taken into account in order to support farmers appropriately in these changes in practices.
The contribution of social representations to explaining and preventing the use of antibiotics
The concept of social representation was introduced by Serge Moscovici in 1961 in his book entitled Psychoanalysis, its image and its public. In this work, Moscovici defined representations as "forms of naive knowledge meant to organise behaviours and guide communications". These representations are composed of opinions, attitudes, beliefs and information related to an object or a situation (Lo Monaco, Delouvée & Rateau, 2016;[START_REF] Rateau | Social representation theory[END_REF]. The content of a representation as well as its organisation are determined by the individual and the social and ideological environment to which he/she belongs and are modulated by the nature and intensity of the links the individual has with this social system. Thus, a social representation enables individuals in the same group to understand, explain, and take a stand regarding a phenomenon in agreement with the values and ideas of the group in question. The representations have at least four functions in the social environment of a person: a communication function; an identity function in the determination, construction and conservation of the group identity as well as in intergroup relationships and the maintenance of social distance (Jodelet, 1989); a justification function in that they allow the positions taken and the attitudes toward the object of the representation to be justified; and lastly, a behavioural guidance function, they are "prescriptive of behaviours and practices. They define what is lawful, tolerable or inacceptable in a given situation" (Abric, 1994, p. 17).
Today, there are several theoretical orientations in the way of understanding social representations [START_REF] Rateau | Social representation theory[END_REF]. Among them is the sociodynamic approach of [START_REF] Doise | Les représentations sociales[END_REF], one of the principles of which is based on the anchoring process proposed by Moscovici (e.g. [START_REF] Clémence | Social positioning and social representations[END_REF]Doise et al., 1992 ;[START_REF] Palmonari | Le modèle socio-dynamique[END_REF]. Doise considers that "Social representations are principles that generate positions linked to specific insertions in a set of social relationships and organize the symbolic processes involved in these relationships" [START_REF] Doise | Les représentations sociales[END_REF]p.125). This model suggests examining the organization of relations between the social meta-system (i.e., the common positions taken about an object of representation), the cognitive system that corresponds to individual differences, and the social relations in which the positions are produced (Palomari & Emiliani, 2016).
This research therefore sought to study the anchoring points of farmers in relation to antibiotics and biosecurity at these three levels. The first step was to analyse the farmers' common positions on these issues. In a second step, the organizing principles of the individual or sub-group positions were studied. Finally, these positions were discussed in the light of the context and issues currently surrounding the use of antibiotics in veterinary medicine.
Consequently, the main objective of this study was to highlight the social representations of antibiotics and biosecurity by analysing the associations of a group of farmers. Several factors were taken into account in this analysis: the geographical location of the farmers and their way of farming, namely their reliance on antibiotics.
Research Questions
Moliner (1993) and Bonetto & Lo Monaco (2018) consider that the epistemic and affiliation needs of individuals underlie the processes of construction of social representations. Given the context and the issues associated with the use of antibiotics in the livestock farming sectors (emergence of antimicrobial resistance, regulatory standards, expectations of consumers, societal pressure), we consider that antibiotics and biosecurity have all the necessary characteristics to be objects of social representation among pig farmers.
For these two objects, we hypothesised that their organization and their content would be determined by both the geographical location (French department) and the ALEA of the farms.
The geographical hypothesis refers to the idea of the greater or lesser proximity of these farms to the historic home of the cooperative: the Côtes-d'Armor. It was expected that the sociorepresentational universes of the farms situated within this territory would converge as regards the values, standards and attitudes asserted by the cooperative. The hypothesis taking into account the ALEA of farms is based on the link between the social representations and the practices of individuals in relation to the object considered.
Method
Participants
This research compared the social representations of farmers who belong to the same agricultural cooperative: Cooperl Arc Atlantique. This group, founded in 1966, specialises in the production of pigs and the processing of pork. The cooperative currently has 2,700 farmers and produces 5,800,000 pigs each year. Although over the years the group has become an agroindustrial complex of international stature, Cooperl Arc Atlantique was created on the initiative of a few farmers in the region of Lamballe (Côtes-d'Armor) seeking sustainability for the agricultural activities within their region. By founding this cooperative, these farmers hoped to improve their living conditions, put their production on a long-term footing, change the image of the farming world and curb the exodus of young people from the region. Cooperl Arc Atlantique was thus founded on regional values. Since then, in order to adapt to the changes taking place in agriculture (globalisation, environmental impact of activities, emergence of antimicrobial resistance), the cooperative has diversified its activities and extended its geographical location over all of western France. Gradually, farms in other departments have joined Cooperl Arc Atlantique. As a result, it is now a very mixed group of farms, in terms of livestock units, rearing practices and cultural affiliations, which must tackle antimicrobial resistance. The pig farms in our sample are representative of this diversity. They are situated in different French departments (Côtes-d'Armor, Ille-et-Vilaine, Morbihan, Loire-Atlantique, Mayenne, Manche; see Figure 1) and present variable ALEA. In views of these specific features, we believed that these farmers could highlight differences in the perception of antibiotics and biosecurity.
[INSERT FIGURE 1]
We administered the questionnaire from February to July of 2018; the participants completed the questionnaire in their own offices. Initially, we had at our disposal a sample composed of about a hundred pig farmers. However, some farmers did not consent to participate in the study, and others did not complete the questionnaire in full. Our sample consisted of 87 farmers from the Cooperl Arc Atlantique group: 8 women and 79 men (M = 48.82; SD = 9.55, min = 27; max = 67). The "gender" variable was not taken into account in the analysis because we met very few women. The women we met worked on farms based on a family model. Most of the time, they indicated that they were primarily involved in maternity care. They were selected by the farm veterinary surgeons depending on their ALEA and geographical location in order to provide a representative sample of farms from Cooperl. A large number of them were situated in Brittany.
The average ALEA of the sample was 0.51, i.e. lower than the average ALEA of the pig sector in France. The farms were divided into two classes: the first, called "low" ALEA, included farms with an ALEA lower than 0.50; the second, called "high" ALEA, included farms where the animals were exposed to an ALEA equal to or higher than 0.50. The ALEA cut-off of 0.50 was chosen according to the criteria usually followed by veterinary studies and reports of the National Agency for Food, Environmental and Occupational Health Safety (ANSES in French).
This leads, in our study, to a slightly unbalanced sample (there were 55 farms with a "low" ALEA and 32 with a "high" ALEA) but this allows a comparison with other studies in the field (a high ALEA is then identical).
[INSERT TABLE 1]
Materials
Procedure
After giving their consent to participate in the study, the farmers completed two free association tasks (Lo Monaco, Piermattéo, Rateau, & Tavani, 2017;Moliner & Lo Monaco, 2017): they had to produce the 4 words or expressions that came to mind when they thought of the stimulus "antibiotic" and then again for the stimulus "biosecurity". Thus, we were able to understand how the farmers constructed meanings about these two objects, as well as their discourse associated with their use of antibiotics (real, future or envisaged) and the actions that they carried out in terms of biosecurity. The data collected were then lemmatised. Lemmatisation refers to the lexical analysis of a corpus comprised of words of the same family but taking different forms (noun, plural, infinitive verb, etc.); for example, the words "disinfectant" and "to disinfect" found in our corpus were transformed into "disinfection".
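As an illustration of this normalisation step (not the authors' actual procedure), word families can be grouped with a stemmer; the sketch below assumes a French Snowball stemmer from NLTK and a few hypothetical raw associations, and the choice of a single representative such as "désinfection" would still be curated manually.

```python
from collections import defaultdict
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("french")

# hypothetical raw associations produced by farmers
raw = ["désinfectant", "désinfecter", "désinfection", "sas", "tenue"]

families = defaultdict(list)
for word in raw:
    families[stemmer.stem(word)].append(word)

# each stem groups the words of one family; a representative lemma
# (e.g. "désinfection") is then chosen by hand for the analysis
print(dict(families))
```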
Results
The reason we chose to ask farmers about "antibiotics" and "biosecurity" was, as mentioned in the introduction, because we assumed that the two terms related to different realities and perceptions, [START_REF] Abric | L'approche structurale des représentations sociales : développements récents[END_REF]Flament & Rouquette, 2003) although they should both be linked in the fight against antibiotic resistance. In order to identify the relations of proximities and distances maintained by our two objects of representations questioned on the basis of the examination of the associative answers relative to each of them, we have calculated a corpus similarity index called Ellegård's index (Ellegård, 1959). This index consists of dividing the number of themes common to two corpuses ("antibiotics" and "biosecurity"; numerator) by the square root of the total number of themes evoked for the first corpus ("antibiotics") multiplied by the total number of themes evoked for the second corpus ("biosecurity").
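A direct way to compute this index is sketched below (our own illustration, not the authors' code); corpus_a and corpus_b are assumed to be the sets of distinct themes evoked for "antibiotics" and "biosecurity".

```python
import math

def ellegard_index(corpus_a, corpus_b):
    """Ellegard (1959) similarity: shared themes divided by the geometric
    mean of the two corpus sizes. Returns a value between 0 and 1."""
    a, b = set(corpus_a), set(corpus_b)
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# e.g. ellegard_index(themes_antibiotics, themes_biosecurity) -> about 0.25 in this study
```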
In line with the literature (e.g., Brunel et al., 2017;[START_REF] Di | Intergroup alliances and rejections within a protest movement (analysis of the social representations)[END_REF][START_REF] Di Giacomo | Alliance et rejets intergroupes au sein d'un mouvement de revendication[END_REF]Doise et al., 1992;Moliner & Lo Monaco, 2017;[START_REF] Robieux | L'espoir dans la maladie chronique: représentations sociales de l'espoir chez les patients et soignants[END_REF], we thus calculated a similarity index between our two corpuses. The Ellegård index revealed that farmers share only 25% of common themes between the two corpuses. In other words, they share a very weakly similar vocabulary in their social representations of antibiotics and biosecurity. Farmers with a high ALEA shared a greater similarity with farmers with a low ALEA regarding the term "antibiotic" only (r n =.53). This result was also found with biosecurity (r n =.51). As the common vocabulary seems higher, we will now see, in detail, the organization of the two objects of social representations.
Results for "antibiotics"
The corpus was composed of productions from 87 participants. These were categorised by the authors, independently, and using classical rules of content analysis [START_REF] Di | Intergroup alliances and rejections within a protest movement (analysis of the social representations)[END_REF]Lambert, Graham, & Fincham, 2009;[START_REF] Rosenberg | A method for investigating and representing a person's implicit theory of personality: Theodore Dreiser's view of people[END_REF]. In total, for all the participants, 337 verbal associations were collected as some subjects did not provide the 4 words expected. Once the corpus had been cleaned, 113 different semantic units were obtained (See Appendix 1 for the frequencies of the different categories).
In the same way as the work that is part of the socio-dynamic approach, all the associations were submitted to a Correspondence Factor Analysis (CORR. F. A., [START_REF] Benzécri | L'analyse des correspondances[END_REF][START_REF] Deschamps | Analyse des correspondances et variations des contenus des représentations sociales [Correspondence analysis and variations in the content of social representations[END_REF]Doise, Clémence & Lorenzi-Cioldi, 1992). This factorial analysis highlights the differences in terms of frequencies of behaviours relative to the independent variables. It gives access to a summary of the data by revealing a particular structure. It also enables identification of the most significant factorial axes. The axes are composed of the different modalities of the independent variables. Lastly, it emphasises the correspondences between the modalities of the independent variables and the behaviours associated by the participants. This analysis was performed on associations whose frequency was greater than 5 (N = 18, 76.93% of the corpus without unique terms) and using the departments and the ALEA as independent variables. The two first factors represented 60.33% of total inertia (i.e. Factor 1 = 32.41%; Factor 2 = 27.92%). Only the modalities of variables and the types of responses contributing to the construction of factors were retained. We retained the modalities or the types whose contribution per factor was higher than the average contribution [START_REF] Deschamps | Analyse des correspondances et variations des contenus des représentations sociales [Correspondence analysis and variations in the content of social representations[END_REF].
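The correspondence factor analysis used here can be reproduced in outline from a contingency table crossing the retained associations (rows) with the modalities of the independent variables (columns). The snippet below is a minimal SVD-based sketch under that assumption, not the exact procedure of the authors; the thresholding on contributions is left out.

```python
import numpy as np

def correspondence_analysis(N):
    """Basic correspondence analysis of a contingency table N (rows x cols).
    Returns row/column principal coordinates and the share of inertia per axis."""
    N = np.asarray(N, dtype=float)
    P = N / N.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    # standardized residuals with respect to the independence model
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * s) / np.sqrt(r)[:, None]
    col_coords = (Vt.T * s) / np.sqrt(c)[:, None]
    inertia_share = s**2 / (s**2).sum()  # e.g. ~32% and ~28% for the first two axes here
    return row_coords, col_coords, inertia_share
```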
Thus, the modalities of variables that contributed to the construction of Factor 1 were the departments of Côtes-d'Armor (D22), Loire-Atlantique (D44), Manche (D50) and Morbihan (D56): CPF (D22) = .13 + CPF (D44) = .37 + CPF (D50) = .14 + CPF (D56) = .28. The total contribution to the definition of Factor 1 was .92 (i.e. 92%). Factor 2 was also constructed on the basis of different departments: CPF (D44) = .41 + CPF (D50) = .25 + CPF (D56) = .20, i.e. a contribution of 86% to the formation of Factor 2. Figure 2 illustrates this organisation.
[INSERT FIGURE 2]
Figure 2 shows that the geographical location of the farm (i.e. the department) determines the socio-representational field of antibiotics for the farmers. The Côtes-d'Armor centralises the socio-representational content of farmers for the object antibiotics with evocations such as animal and medicine in a very descriptive register. The graphical representation of the results of the CORR. F. A. shows that the departments of Loire-Atlantique, Manche and Morbihan are opposed on the two axes. The farmers of Morbihan share a socio-representational universe related to antimicrobial resistance, animal well-being and the label pwa (pork without antibiotics). In contrast, the departments of Loire-Atlantique and Manche give rise to evocations related to animals, the cost of antibiotics and alternative solutions enabling their use to be reduced. The results also show that the productions of participants, in contrast to what was expected, are not significantly associated with the consumption of antibiotics of farmers.
Results for "biosecurity"
The corpus was composed of the same participants as for the first phase of the study of productions from 87 participants. In total, for all the participants, 333 verbal associations were collected. Once the corpus had been cleaned, 111 different semantic units were obtained (See Appendix 2 for the frequencies of the different categories according to the variables taken into account).
All the associations were submitted to a Correspondence Factor Analysis (CORR. F. A., [START_REF] Benzécri | L'analyse des correspondances[END_REF][START_REF] Deschamps | Analyse des correspondances et variations des contenus des représentations sociales [Correspondence analysis and variations in the content of social representations[END_REF]. This analysis was performed on associations whose frequency was greater than 5 (N = 22, 76.93% of the corpus without hapax), and using the departments and the ALEA as independent variables. The two first factors represented 57.36% of total inertia (i.e. Factor 1 = 33.17%; Factor 2 = 24.19%). Only the modalities of variables and the types of responses contributing to the construction of factors were retained. We retained the modalities or the types whose contribution per factor was higher than the average contribution [START_REF] Deschamps | Analyse des correspondances et variations des contenus des représentations sociales [Correspondence analysis and variations in the content of social representations[END_REF]. Thus, the modalities of variables that contributed to the construction of Factor 1 were the departments of Manche (D50) and Morbihan (D56) as well as the ALEA: CPF (D50) = .31 + CPF (D56) = .24 + CPF (low ALEA) = .10* + CPF (high ALEA) = .16. The total contribution to the definition of Factor 1 was .81 (i.e. 81%). Factor 2 was constructed only from the variable "department": CPF (D22) = .14 + CPF (D35) = .44, i.e. a contribution of 58% to the formation of Factor 2. Figure 3 illustrates this organisation.
[INSERT FIGURE 3]
The departments of farms determine the productions of participants and the socio-representational field of biosecurity. For example, the department of Ille-et-Vilaine features evocations such as disinfection and footbath, whereas the farmers of Côtes-d'Armor (historical site of the creation of the cooperative) use ideas more associated with the health aspects of their farm, well-being at work and consumers. The ALEA of the farm also characterises the productions of farmers. Those who use large quantities of antibiotics mention words such as the constraints, vigilance, performance or standards surrounding this farming practice.
Discussion and conclusion
Our objective was to compare the discourse of farmers depending on their use of antibiotics (low ALEA vs. high ALEA) and the geographical location of their farm. Thus, we were firstly interested in the potential relationships between practices and social representations [START_REF] Abric | Pratiques sociales et représentations [Social practices and representations[END_REF]Jodelet, 1989;Moscovici, 1961). In fact, we investigated this link between the different farming practices related to the use of antibiotics and the social representations associated with them. Our results show that the ALEA of farms is not a determining factor in the representations of farmers regarding this object. Indeed, regardless of the level of consumption of antibiotics, the farmers refer to ideas opposing their use (e.g. alternative solutions, antimicrobial resistance, pwa). In other words, the large consumers of antibiotics do not seem to have particular cognitions within their socio-representational field that would subsequently lead to greater use.
On the contrary, the vast majority of farmers participating in this study produced discourse that demonstrates an awareness of the need to change certain farming practices (e.g. stopping the preventive use of antibiotics). Arguments of different types (economic, health, quality of life at work) appeared in the evocations of farmers. This diversity seems to be the sign of a successful socio-representational structure for this object. In fact, for several years, these farmers have been the target of awareness campaigns to convince them of the benefits of demedication. They have been informed of the risks linked to the "irrational" use of antibiotics, made aware of the economic and social issues inherent in demedication, told about alternative solutions and behaviours to implement in order to reduce this use, etc. The farmers are therefore in constant contact with many arguments inviting them to change some of their behaviours. Nevertheless, although these strategies have led to concrete action in terms of livestock management for some farmers, for others the shift from attitudes to behaviours has been less obvious. This is supported by the social representations of farmers regarding biosecurity. In our study, the ALEA of farms characterises the perception of biosecurity by farmers. When the ALEA is high, these farmers prioritise ideas against implementing biosecurity and their representations are characterised by constraints. They mention the regulatory standards and the attention required for this practice.
They perceive, at the same time, a pressure exerted by some institutions (European Union, consumers, associations), and the ever more demanding expectations concerning how they should organise their activity. An evaluative dimension runs through their socio-representational field in that biosecurity figures in their perceptions as a set of procedures that complicates the exercise of their activity, generating both financial and attentional costs. During discussions at the end of the questionnaire, the farmers mentioned this practice as "extra work" going against one of their main objectives: "their well-being". In fact, these representations may act as obstacles to implementing biosecurity measures on their farm. The temporal dimension, which also plays an important role in their discourse, gives weight to this hypothesis. Biosecurity is seen by these farmers as a future practice whose content remains vague at present as do the benefits (technical, economic, health) it is supposed to bring. Actually, for them, biosecurity is a farming issue for future generations. The farmers with a low ALEA are ideologically opposed to this type of representation. They produced more descriptive and functional evocations, using elements directly from their behaviour in terms of biosecurity. For example, they mentioned words such as "disinfection", "clothing", "footbath", etc. For these farmers, biosecurity consists of concrete measures, applicable as of now and generating appreciable effects in different areas of farming. Firstly, concerning health, these farmers perceive a gain in control over the occurrence of some events (e.g. bacterial proliferation, presence of a virus). Secondly, the feeling of working in a "hygienic" work environment increases their quality of life at work. In contrast to the large consumers of antibiotics, biosecurity for these farmers is currently a meaningful and beneficial practice. Their representational universe is thus focused on the reasons for observing the rules of biosecurity in farming.
Another objective of our research was to explore the potential link between the department of the farm and the representations of antibiotics. Certain results confirm our hypothesis that the perception of antibiotics by the farmer is partly determined by his/her regional roots. The territorial network of Cooperl Arc Atlantique extends over several departments and some are far from the historical home of the cooperative responsible for its history, values, farming standards, etc. In fact, we thought that some territories could give rise to variability in the perception of certain important social objects in farming. The farm's department seems to be a more important factor than its ALEA in predicting farmers' representations of antibiotics. Thus, and regardless of the consumption of antibiotics, the farmers situated in the Côtes-d'Armor produce similar discourse regarding antibiotics. They preferentially mention ideas related to the therapeutic dimension of antibiotics in veterinary medicine. Their representations may be described as "classic" in the sense that their associations are limited to the basic functions of antibiotics. They perceive them as a farming tool among others. The other departments differ in their discourse by stating the consequences of antibiotic use (e.g. "antimicrobial resistance", "cost") and solutions to remedy it. Thus, two types of territory can be contrasted for this object of representation: the Côtes-d'Armor farms that mention global and non-contextualised ideas vs. the farms situated, for example, in the Morbihan that produce discourse taking account of health (i.e. antimicrobial resistance) and economic (e.g. pwa label) issues linked to antibiotic consumption in the livestock farming sectors. The representations for the other departments recall the need to reduce this use and show a certain adherence to the demedication policy of the cooperative. These results seem paradoxical with regard to the relatively late integration of these farms into the cooperative. Our initial hypothesis therefore ran counter to this by thinking that the farmers located in the historic home of the cooperative would act as messengers advocating demedication. Although these farmers are not particularly averse to demedication, it is rather the departments like Morbihan (department in our sample characterised by a low ALEA) that develop a "positive" and supportive discourse on this topic. A historical explanation can be provided here. During its lifetime, the cooperative has faced many challenges, particularly related to the impact of its activity on the environment. The group's original farms (i.e. in Côtes-d'Armor) have therefore become accustomed during their professional experience to changing some of their practices in order to adapt to new contextual data. A phenomenon of habituation could have appeared for these farmers concerning the dynamic aspect of modern agricultural activities. The "more recent" farms of the cooperative have proportionally had to cope less with types of upheaval such as the emergence of antimicrobial resistance. Thus, the seriousness and the exceptional character of this phenomenon are undoubtedly more striking for these pig farms. This variability in the perception of the phenomenon could encourage them to become more actively involved in the field of ideologies.
This study thus reveals several elements. First, the discourse concerning antibiotics does not seem to determine the reliance on their use on farms. Overall, the farmers are all aware of the need to reduce the consumption of antibiotics in livestock farming. The campaigns of persuasion (carried out at national and organisational levels) have thus had real effects on farmers' cognitions. However, these effects are limited in terms of behaviour on many farms. This finding is supported by the representations of farmers regarding biosecurity. Biosecurity requires the implementation of very concrete behaviours in order to lower the consumption of antibiotics. We note that for this object, the ALEA characterises the discourse of farmers. For the large consumers of antibiotics, the representations are abstract and extremely evaluative. Although an ideological opposition to this practice is emerging for these farmers, they are unable to connect this attitude to direct experiences linked to biosecurity measures. These results encourage us to think that supporting farmers in the process of demedication should not only consist of persuasive communication. We believe that measures including communications based on these representations and preceded by the implementation of concrete measures related to demedication (e.g. commitment to carry out some biosecurity procedures on farms) would reduce the consumption of antibiotics within the cooperative more effectively. Thus, this persuasive communication would bridge the gap between the representations of farmersalready in favour of demedication -and the actions needed to achieve it.
Finally, these results have implications for prevention. Several studies show that farmers or slaughterhouse staff are occupationally exposed to resistant bacteria such as resistant Staphylococcus aureus methicillinus aureus (MRSA) [START_REF] Parisi | MRSA in swine, farmers and abattoir workers in Southern Italy[END_REF][START_REF] Van Gompel | Occupational Exposure and Carriage of Antimicrobial Resistance Genes (tetW, ermB) in Pig Slaughterhouse Workers[END_REF]. Resistant bacteria pose a risk if transmitted to these workers. Indeed, they can cause illnesses that are difficult to cure if effective antibiotic treatment is not available (Khachatourians, 1998;Michael et al., 2014). Biosecurity, on the other hand, makes it possible to limit the dissemination and transmission of these pathogens (Moss et al., 2012;[START_REF] Bragg | Bacterial resistance to quaternary ammonium compounds (QAC) disinfectants[END_REF]. In view of the representations collected during this study, it would be appropriate, in a One Health approach, to raise awareness and train farmers in biosecurity. Note: Grayed blocks refer to the experimental conditions. "Experimental conditions" contribute to the formation of Factor 1; "Experimental conditions" refer to the experimental conditions which contribute to the formation of Factor 2; "Experimental condition" refer to the experimental conditions which contribute to the formation of both Factors 1 and 2. "Observations" refer to the observations which contribute to the formation of Factor 1; "Observations" refer to the observations which contribute to the formation of Factor 2; "Observations" refer to the observations which contribute to the formation of both Factors 1 and 2.
* « psa » refers to a label Cooperl. This label concerns pigs without antibiotics.
Figure 1. Location of the pig farms in our sample
Figure 3. Graphical representation of the results obtained by means of the CORR. F. A. concerning Factors 1 and 2.
Table 1. Distribution of farmers based on the department and the ALEA class

Department | Number of farmers | « Low » ALEA | « High » ALEA | Average ALEA
Côtes-d'Armor | 45 | 29 | 16 | 0.58 (0.38)
Ille-et-Vilaine | 20 | 13 | 7 | 0.42 (0.39)
Morbihan | 15 | 9 | 6 | 0.34 (0.35)
Loire-Atlantique | 1 | 1 | 0 | 0.15 (*)
Manche | 3 | 2 | 1 | 0.37 (0.28)
Mayenne | 3 | 1 | 2 | 1.01 (0.89)
Appendix 2 -Farmers' evocations of the word "biosecurity" as a function of variables.
Evocations
Freq.
Côtes-d'Armor |
03505510 | en | [
"info.info-ds"
] | 2024/03/04 16:41:22 | 2021 | https://hal.science/hal-03505510/file/S0304397521002413.pdf | Cristina Bazgan
email: [email protected]
Pierre Cazals
email: [email protected]
Janka Chlebíková
email: [email protected]
Degree-anonymization using edge rotations
Keywords: Degree-anonymization, NP-hardness, Approximation algorithm
Introduction
In recent years huge amounts of personal data has been collected on various networks as e.g. Facebook, Instagram, Twitter or LinkedIn. Ensuring the privacy of network users is one of the main research tasks. One possible model to formalise these issues was introduced by Liu and Terzi [START_REF] Liu | Towards identity anonymization on graphs[END_REF] who transferred the k-degree-anonymity concept from tabular data in databases [START_REF] Fung | Privacy-preserving data publishing: A survey of recent developments[END_REF] to graphs which are often used as a representation of networks. Following this study a graph is called k-degree-anonymous if for its each vertex there are at least k -1 other vertices with the same degree. The parameter k represents the number of vertices that are mixed together and thus the increasing value of k increases the level of anonymity. In [START_REF] Wu | A survey of privacypreservation of graphs and social networks[END_REF], Wu et al. presented a survey of different anonymization models and some of their weaknesses. Casas Roma et al. [START_REF] Casas-Roma | A survey of graph-modification techniques for privacy-preserving on networks[END_REF] proposed a survey of several graph-modification techniques for privacy-preserving on networks. In this paper we consider the k-degree-anonymous concept of Liu and Terzi [START_REF] Liu | Towards identity anonymization on graphs[END_REF].
The main problem studied in relation to k-degree-anonymous graphs is to find a minimum number of graph operations that transform an input graph into a k-degree-anonymous graph.
Different graph operations for transforming a graph into a k-degree-anonymous one are considered in the literature; the operations may be the following: vertex or edge deletion, vertex or edge addition, or a combination of edge additions and deletions (see more details later). One advantage of the approaches based on vertex/edge deletion or addition is that a solution always exists, since in the worst-case scenario one can consider the empty or the complete graph, which is k-degree-anonymous for any k (at most the number of vertices of the graph). However, basic graph parameters such as the number of vertices and edges can be modified by such transformations.
Vertex/edge modification versions associated to k-degree-anonymity have been relatively well studied. Hartung et al. [START_REF] Hartung | Improved upper and lower bound heuristics for degree anonymization in social networks[END_REF][START_REF] Hartung | A refined complexity analysis of degree anonymization in graphs[END_REF] studied the edge adding modification as proposed by Liu and Terzi [START_REF] Liu | Towards identity anonymization on graphs[END_REF]. For this type of modification Chester et al. [START_REF] Chester | Complexity of social network anonymization[END_REF] established a polynomial time algorithm for bipartite graphs.
The variant of adding vertices instead of edges was studied by Chester et al. in [START_REF] Chester | Why waldo befriended the dummy? k-anonymization of social networks with pseudo-nodes[END_REF] where they presented an approximation algorithm with an additive error. Bredereck et al. [START_REF] Bredereck | The complexity of degree anonymization by vertex addition[END_REF] investigated the parameterized complexity of several variants of vertex adding which differ in the way the inserted vertices can be adjacent to existing vertices. Concerning the vertex deletion variant, Bazgan et al. [START_REF] Bazgan | Finding large degree-anonymous subgraphs is hard[END_REF] showed the NP-hardness even on very restricted graph classes such as trees, split graphs, or trivially perfect graphs. Moreover, in [START_REF] Bazgan | Finding large degree-anonymous subgraphs is hard[END_REF] the vertex and edge deletion variants are proved intractable from the approximability and parameterized complexity point of view.
Several papers study the basic properties of edge rotations, including some bounds for the minimum number of edge rotations between two graphs [START_REF] Chartrand | Rotation and jump distances between graphs[END_REF][START_REF] Chartrand | Edge rotations and distance between graphs[END_REF][START_REF] Ralph J Faudree | On the rotation distance of graphs[END_REF][START_REF] Goddard | Distances between graphs under edge operations[END_REF][START_REF] Elzbieta | Edge rotation and edge slide distance graphs[END_REF].
In this paper we consider the version of transforming a graph into a k-degree-anonymous one using edge rotations, which do not modify the number of vertices or edges. It should be noticed that in such a case a solution may not always exist, as we discuss in Section 3.
To the best of our knowledge, the problem of transforming a graph into a k-degree-anonymous graph using edge rotations has not been fully explored. In some particular cases some research has been done in [START_REF] Salas | Graphic sequences, distances and k-degree anonymity[END_REF] where the authors study the edge rotation distance and various metrics between the degree sequences to find a "closest" regular graph. In paper [START_REF] Casas-Roma | An algorithm for k-degree anonymity on large networks[END_REF] the authors proposed a heuristic to compute the edge rotation distance to a k-degree-anonymous graph.
Our results. In this paper we study various aspects of the Min Anonymous-Edge-Rotation problem. An input to the problem is an undirected graph G = (V, E) with n vertices and m edges and an integer k ≤ n. The goal is to find a shortest sequence of edge rotations that transforms G into a k-degree-anonymous graph, if such a sequence exists. We first show that when n/2 ≤ m ≤ n(n−3)/2 and k ≤ n/4 a solution always exists. Moreover, for trees a solution exists if and only if 2m/n is an integer. We prove that Min Anonymous-Edge-Rotation is NP-hard even when k = n/q and q ≥ 3 is a fixed positive integer. On the positive side, we provide a polynomial-time 2-approximation algorithm under some constraints. Finally, we demonstrate that Min Anonymous-Edge-Rotation is solvable in polynomial time for trees when k = Θ(n) and for any graph when k = n.
Our paper is organized as follows. Some preliminaries about edge rotations and our formal definitions are given in Section 2. The study of feasibility is initiated in Section 3. Section 4 presents the NP-hardness proof. In Section 5 we study properties of the specific k-degree anonymous degree sequences that are used in Section 6 to present a polynomial-time 2-approximation algorithm and in Section 7 to establish a polynomial time algorithm for trees. Moreover in Section 7 we consider the case k = n in general graphs. Some conclusions are given at the end of the paper.
Preliminaries
In this paper we assume that all graphs are undirected, without loops and multiple edges, and not necessarily connected.
Let G = (V, E) be a graph. For a vertex v ∈ V, let deg_G(v) be the degree of v in G, and ∆_G be the maximum degree of G.
A vertex v with degree deg_G(v) = |V| − 1 is called a universal vertex. The neighborhood of v in G is denoted by N_G(v) = {u ∈ V : uv ∈ E} and Inc_G(v) is the set of all edges incident to v, Inc_G(v) = {e ∈ E : v ∈ e}.
If the underlying graph G is clear from the context, we omit the subscript G.
Definition 1. Given a graph G = (V, E) of order n, the degree sequence S_G of G is the non-increasing sequence of its vertex degrees, S_G = (deg(v_1), . . . , deg(v_n)), where deg(v_1) ≥ deg(v_2) ≥ · · · ≥ deg(v_n).
Definition 2. A sequence D of non-negative integers D = (d_1, d_2, . . . , d_n) is graphic if there exists a graph G such that its degree sequence coincides with D.
As follows from Erdős-Gallai theorem (see e.g. [START_REF] Erdős | Gráfok eloírt fokú pontokkal (graphs with points of prescribed degrees, in Hungarian)[END_REF]) the necessary and sufficient conditions for a non-increasing sequence D = (d 1 , d 2 , . . . , d n ) to be graphic are:
Σ_{i=1}^{n} d_i is even, (1)

Σ_{i=1}^{ℓ} d_i ≤ ℓ(ℓ − 1) + Σ_{i=ℓ+1}^{n} min(d_i, ℓ) holds for any 1 ≤ ℓ ≤ n. (2)
Furthermore, it is an easy exercise to prove that a sequence of integers D = (d 1 , d 2 , . . . , d n ) corresponds to a degree sequence of a tree on n vertices if and only if each d i ≥ 1 and
Σ_{i=1}^{n} d_i = 2(n − 1).
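The two conditions above translate directly into a test for graphicality; the following sketch (ours, in plain Python) assumes the sequence is given in any order and sorts it first.

```python
def is_graphic(seq):
    """Erdos-Gallai test: True iff the non-negative integer sequence seq
    is the degree sequence of some simple graph."""
    d = sorted(seq, reverse=True)
    n = len(d)
    if sum(d) % 2 != 0:                      # condition (1)
        return False
    for ell in range(1, n + 1):              # condition (2) for every prefix length
        lhs = sum(d[:ell])
        rhs = ell * (ell - 1) + sum(min(di, ell) for di in d[ell:])
        if lhs > rhs:
            return False
    return True

# a sequence of positive integers summing to 2(n-1) is realizable by a tree,
# e.g. is_graphic([3, 1, 1, 1]) -> True (a star on 4 vertices)
```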
Let G(n, m) be the set of all graphs with n vertices and m edges.
Definition 3. Let G, G′ ∈ G(n, m). We say that G′ can be obtained from G by an edge rotation (uv, uw) if V(G) = V(G′) and there exist three distinct vertices u, v and w in G such that uv ∈ E(G), uw ∉ E(G), and E(G′) = (E(G) \ {uv}) ∪ {uw}, see Figure 1.

Figure 1: An edge rotation (uv, uw).

An edge rotation (uv, uw) thus transforms G into a graph G′ such that deg_{G′}(v) = deg_G(v) − 1, deg_{G′}(w) = deg_G(w) + 1, and the degree of the other vertices is not changed. Let us define a (+1, −1)-degree modification of the degree sequence D = (d_1, . . . , d_n) as the operation that increases one element of D by 1 and decreases another element by 1; hence every edge rotation induces a (+1, −1)-degree modification of the degree sequence. Note that a solution to the Min Anonymous-Edge-Rotation problem may not exist for all instances. For example, if G is a complete graph without an edge, K_n \ {e}, n ≥ 6, then there is no solution for such graph G and k = 3. Therefore, we are only interested in studying feasible instances (G, k), defined as instances for which there exists a solution to Min Anonymous-Edge-Rotation. Our initial study of sufficient conditions for feasibility is presented in Section 3.
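To fix ideas, the two notions just introduced can be expressed with networkx as follows (an illustrative sketch, not part of the paper): rotate applies a single edge rotation and is_k_degree_anonymous checks that every occurring degree is shared by at least k vertices.

```python
from collections import Counter
import networkx as nx

def rotate(G, u, v, w):
    """Edge rotation (uv, uw): replace the edge uv by uw around the pivot u."""
    if not (G.has_edge(u, v) and not G.has_edge(u, w) and len({u, v, w}) == 3):
        raise ValueError("not a valid edge rotation")
    G.remove_edge(u, v)
    G.add_edge(u, w)     # degree of v drops by 1, degree of w grows by 1

def is_k_degree_anonymous(G, k):
    """Every degree value present in G is shared by at least k vertices."""
    counts = Counter(d for _, d in G.degree())
    return all(c >= k for c in counts.values())

# example: K4 minus an edge is 2-degree-anonymous (degrees 3,3,2,2) but not 3-anonymous
G = nx.complete_graph(4)
G.remove_edge(0, 1)
print(is_k_degree_anonymous(G, 2), is_k_degree_anonymous(G, 3))  # True False
```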
Obviously, since all graphs are 1-degree-anonymous, we are only interested in cases where k ≥ 2.
The decision version associated to Min Anonymous-Edge-Rotation is defined as follows for a feasible instance (G, k):
Anonymous-Edge-Rotation
Input: (G, k, r) where G = (V, E) is an undirected graph, k ∈ {1, . . . , |V|}, and r is a positive integer.
Question: Is there a sequence of ℓ + 1 graphs G_0 = G, G_1, G_2, . . . , G_ℓ such that ℓ ≤ r, G_{i+1} can be obtained from G_i by one edge rotation, and G_ℓ is k-degree-anonymous?
We also consider the Min Anonymous-Edge-Rotation problem in restricted graph classes, e.g. trees. In that case we require that all graphs in the sequence G_0, . . . , G_ℓ must be from the same graph class. Note that the problem can also be studied without this requirement, but the results may be different.
The following theorem shows important properties about the edge rotations. The result was already proved in [START_REF] Chartrand | Edge rotations and distance between graphs[END_REF], but due to the simplicity of our approach, we present an another proof here.
Theorem 1. For any two graphs G, G′ ∈ G(n, m), we can transform G to G′ using a sequence of edge rotations.
Proof. Let E_1 = E(G) \ (E(G) ∩ E(G′)) be the set of edges that are in G and not in G′, and E_2 = E(G′) \ (E(G) ∩ E(G′)) be the set of edges that are in G′ and not in G. For all u, v and w such that uv ∈ E_1 and uw ∈ E_2, we add one edge rotation (uv, uw). In all other cases, let uv ∈ E_1 and u′v′ ∈ E_2, where all vertices u, v, u′, v′ are distinct. There are two cases: 1) uu′, uv′, vu′ and vv′ ∈ E(G), or 2) at least one of these four edges is missing.
In the first case we can make the following two edge rotations to move uv from G to u′v′ in G′: (v′v, v′u′) and (vu, vv′) (see Figure 2). In the second case, if for example vv′ is missing, we can use the following two rotations (vu, vv′) and then (v′v, v′u′) (see Figure 3), and similarly if another edge is missing.
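The constructive argument of the proof can be turned into a short routine: repeatedly pick an edge of G that is not in G′ and an edge of G′ that is not in G, and remove the difference with one or two rotations. The networkx sketch below is our illustration of that argument (with hypothetical variable names), not an optimised or minimum-length procedure.

```python
import networkx as nx

def rotations_towards(G, H):
    """Transform G into H (same vertex set, same number of edges) by edge
    rotations, following the case analysis of Theorem 1.  G is modified in
    place; the list of applied rotations (u, v, w), meaning (uv, uw), is returned."""
    applied = []

    def rotate(u, v, w):
        G.remove_edge(u, v)
        G.add_edge(u, w)
        applied.append((u, v, w))

    while True:
        E1 = [e for e in G.edges() if not H.has_edge(*e)]   # in G but not in H
        E2 = [e for e in H.edges() if not G.has_edge(*e)]   # in H but not in G
        if not E1:
            return applied
        # first case of the proof: an E1 edge and an E2 edge share a vertex
        shared = None
        for (a, b) in E1:
            for (c, d) in E2:
                common = {a, b} & {c, d}
                if common:
                    u = common.pop()
                    shared = (u, b if u == a else a, d if u == c else c)
                    break
            if shared:
                break
        if shared:
            rotate(*shared)          # one rotation suffices
            continue
        # otherwise all four endpoints are distinct: two rotations
        (u, v), (up, vp) = E1[0], E2[0]
        if G.has_edge(v, vp):
            rotate(vp, v, up)        # remove vv', create u'v'
            rotate(v, u, vp)         # remove uv, restore vv'
        else:
            rotate(v, u, vp)         # remove uv, temporarily create vv'
            rotate(vp, v, up)        # remove vv' again, create u'v'
```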
Feasibility study
As it was discussed in Section 2, the Min Anonymous-Edge-Rotation problem does not have a solution for every input instance. It is not difficult to see that if a graph is 'almost' complete or 'almost' empty, then there are only restricted options on the number of different degree classes and therefore a solution may not exist.
First we present some sufficient conditions for an instance to be feasible, showing that if a graph is not 'almost' complete or an empty graph, then a solution of the problem exists for all k ≤ n/4, where n is the order of the graph.
Theorem 2. Let G ∈ G(n, m) be such that n/2 ≤ m ≤ n(n−3)/2. Then for every k ≤ n/4 the instance (G, k) of Min Anonymous-Edge-Rotation is feasible.

Proof. Let d = ⌊2m/n⌋ and s = 2m − nd; then 0 ≤ s < n and the average degree of any graph in G(n, m) is d + s/n = 2m/n. Depending on the value of s, we define a k-anonymous sequence of one of the following three types.

Type 1: k ≤ s ≤ n − k. Let D_1 = (d^1_1, d^1_2, . . . , d^1_s, d^2_1, d^2_2, . . . , d^2_{n−s}) be a sequence of positive integers where for all i, 1 ≤ i ≤ s, d^1_i = d + 1 and for all j, 1 ≤ j ≤ n − s, d^2_j = d (see Figure 4). The sequence contains n elements and it is easy to see that Σ_{i=1}^{s} (d + 1) + Σ_{j=1}^{n−s} d = 2m.

Figure 4: The sequence D_1.

Following the assumptions s ≥ k and n − s ≥ k, therefore D_1 is a k-anonymous sequence.

Type 2: s < k. Let D_2 = (d^1_1, d^1_2, . . . , d^1_{s+k}, d^2_1, d^2_2, . . . , d^2_{n−s−2k}, d^3_1, d^3_2, . . . , d^3_k) be a sequence of positive integers where for all i, 1 ≤ i ≤ s + k, d^1_i = d + 1; for all r, 1 ≤ r ≤ n − s − 2k, d^2_r = d; for all j, 1 ≤ j ≤ k, d^3_j = d − 1 (see Figure 5). The sequence contains n elements and Σ_{i=1}^{s+k} (d + 1) + Σ_{j=1}^{k} (d − 1) + Σ_{r=1}^{n−s−2k} d = 2m.

Figure 5: The sequence D_2.

Since n ≥ 4k and s < k, n − s − 2k ≥ k, hence D_2 is a k-anonymous sequence.

Type 3: s > n − k. Let D_3 = (d^1_1, d^1_2, . . . , d^1_k, d^2_1, d^2_2, . . . , d^2_{s−2k}, d^3_1, d^3_2, . . . , d^3_{k+n−s}) be a sequence of positive integers where for all i, 1 ≤ i ≤ k, d^1_i = d + 2; for all r, 1 ≤ r ≤ s − 2k, d^2_r = d + 1; for all j, 1 ≤ j ≤ k + n − s, d^3_j = d (see Figure 6). The sequence has n elements and Σ_{i=1}^{k} (d + 2) + Σ_{j=1}^{k+n−s} d + Σ_{r=1}^{s−2k} (d + 1) = 2m. Since n ≥ 4k and s > n − k, we have s − 2k ≥ k and k + n − s ≥ k, hence D_3 is also a k-anonymous sequence.

Now we show that all three sequences are graphic, that is, that condition (2) is true for any ℓ. We split the proof into several subcases depending on the value of ℓ and the type of the sequence.
From our assumptions n/2 ≤ m ≤ n(n−3)/2, it follows 1 ≤ d ≤ n − 3.

Case A. ℓ = 1. Because d ≥ 1, (2) trivially holds.

Case B. ℓ = 2. Due to n ≥ 8, Σ_{j=ℓ+1}^{n} min(d_j, ℓ) ≥ 6d − 2, hence
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 2) = 2(d + 2) ≤ 2 + (6d − 2) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Case C. 3 ≤ ℓ < d.
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 2) ≤ ℓ(n − 1) = ℓn − ℓ = ℓ² − ℓ + ℓn − ℓ² = ℓ(ℓ − 1) + ℓ(n − ℓ) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Case D. 3 ≤ ℓ = d. Type 1 & 3:
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 2) ≤ ℓ(n − 1) = ℓn − ℓ = ℓ² − ℓ + ℓn − ℓ² = ℓ(ℓ − 1) + ℓ(n − ℓ) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Type 2, following our assumptions also ℓ = d ≤ n − 3:
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 1) = ℓ(ℓ + 1) = ℓ(ℓ − 1) + 2ℓ ≤ ℓ(ℓ − 1) + 3ℓ − 3 = ℓ(ℓ − 1) + 3(ℓ − 1) = ℓ(ℓ − 1) + (ℓ + 3 − ℓ)(ℓ − 1) ≤ ℓ(ℓ − 1) + (n − ℓ)(ℓ − 1) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Case E. 3 ≤ ℓ = d + 1. Furthermore, ℓ = d + 1 ≤ n − 2.
Type 1 & 2, ℓ ≥ 4:
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 1) = ℓ² = ℓ(ℓ − 1) + ℓ ≤ ℓ(ℓ − 1) + 2ℓ − 4 = ℓ(ℓ − 1) + 2(ℓ − 2) = ℓ(ℓ − 1) + (ℓ + 2 − ℓ)(ℓ − 2) ≤ ℓ(ℓ − 1) + (n − ℓ)(ℓ − 2) = ℓ(ℓ − 1) + (n − ℓ)(d − 1) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Type 1 & 2, ℓ = 3: Due to n ≥ 8, Σ_{j=ℓ+1}^{n} min(d_j, ℓ) ≥ 5d − 4 ≥ ℓ. Therefore
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 1) = ℓ² = ℓ(ℓ − 1) + ℓ ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Type 3, 3 ≤ ℓ ≤ n − 3:
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 2) = ℓ(ℓ + 1) = ℓ(ℓ − 1) + 2ℓ ≤ ℓ(ℓ − 1) + 3ℓ − 3 = ℓ(ℓ − 1) + 3(ℓ − 1) = ℓ(ℓ − 1) + (ℓ + 3 − ℓ)(ℓ − 1) ≤ ℓ(ℓ − 1) + (n − ℓ)(ℓ − 1) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Type 3, ℓ = n − 2:
Σ_{i=1}^{ℓ} d_i = k(d + 2) + (s − 2k)(d + 1) + (k + n − s − 2)d = nd − 2d + s ≤ d(n − 2) + n − 1 ≤ (n − 3)(n − 2) + n − 1 ≤ ℓ(ℓ − 1) + 2d = ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Case F. 3 ≤ ℓ = d + 2. Furthermore, ℓ = d + 2 ≤ n − 1.
Type 1 & 2:
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 1) ≤ ℓ(ℓ − 1) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Type 3, ℓ = 3: Due to n ≥ 8, Σ_{j=ℓ+1}^{n} min(d_j, ℓ) ≥ 5 ≥ ℓ. Then
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 2) = ℓ² = ℓ(ℓ − 1) + ℓ ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Type 3, 4 ≤ ℓ ≤ n − 2:
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 2) = ℓ² = ℓ(ℓ − 1) + ℓ ≤ ℓ(ℓ − 1) + 2ℓ − 4 = ℓ(ℓ − 1) + 2(ℓ − 2) = ℓ(ℓ − 1) + (ℓ + 2 − ℓ)(ℓ − 2) ≤ ℓ(ℓ − 1) + (n − ℓ)(ℓ − 2) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Type 3, ℓ = n − 1:
Σ_{i=1}^{ℓ} d_i = k(d + 2) + (s − 2k)(d + 1) + (k + n − s − 1)d = s + nd − d ≤ n − 1 + (n − 1)(n − 3) = (n − 1)(n − 2) = ℓ(ℓ − 1) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Case G. d + 2 < ℓ < n.
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 2) ≤ ℓ(ℓ − 1) ≤ ℓ(ℓ − 1) + Σ_{j=ℓ+1}^{n} min(d_j, ℓ).

Case H. ℓ = n.
Σ_{i=1}^{ℓ} d_i ≤ ℓ(d + 2) ≤ ℓ(ℓ − 1).
Therefore, we have proved that there exists a k-degree-anonymous graph G′ ∈ G(n, m), and the graph G can be transformed to G′ using a sequence of edge rotations due to Theorem 1. Now we extend the feasibility study to the case k = n, for which we get necessary and sufficient conditions: (G, n) is a feasible instance if and only if 2m/n is an integer. Proof. Since k = n in Min Anonymous-Edge-Rotation, every vertex has to be in the same degree class, so if there is a solution, the resulting graph has to be regular. Moreover, a necessary and sufficient condition for a p-regular graph with n vertices to exist is that n ≥ p + 1 and np must be even [START_REF] Tomescu | Problems in combinatorics and graph theory[END_REF].
If 2m/n is not an integer then obviously there is no regular graph in G(n, m) and therefore (G, n) is not a feasible instance.
If 2m/n is an integer, since n × (2m/n) = 2m is even and n ≥ 2m/n + 1, there is a (2m/n)-regular graph in G(n, m) as was mentioned before. By Theorem 1 we conclude that there exists a sequence of edge rotations that leads to a (2m/n)-regular graph starting from G.
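To make the feasibility construction concrete, the following Python sketch builds the k-anonymous target sequence of Theorem 2 for given n, m, k and checks condition (2) with the Erdős–Gallai test. The Type 1 branch (k ≤ s ≤ n-k) is written here in an assumed form (s entries equal to d+1 and n-s entries equal to d), since only Types 2 and 3 are reproduced above.

def k_anonymous_target_sequence(n, m, k):
    # d is the average degree rounded down, s = 2m mod n, as in the proof of Theorem 2
    d, s = divmod(2 * m, n)
    if s < k:                                  # Type 2
        seq = [d + 1] * (s + k) + [d] * (n - s - 2 * k) + [d - 1] * k
    elif s > n - k:                            # Type 3
        seq = [d + 2] * k + [d + 1] * (s - 2 * k) + [d] * (k + n - s)
    else:                                      # Type 1 (assumed form)
        seq = [d + 1] * s + [d] * (n - s)
    assert sum(seq) == 2 * m                   # condition (1)
    return seq

def is_graphic(seq):
    # Erdős–Gallai test: condition (2) for every prefix length l
    d_sorted = sorted(seq, reverse=True)
    n = len(d_sorted)
    if sum(d_sorted) % 2 == 1:
        return False
    for l in range(1, n + 1):
        lhs = sum(d_sorted[:l])
        rhs = l * (l - 1) + sum(min(x, l) for x in d_sorted[l:])
        if lhs > rhs:
            return False
    return True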
Hardness of Min Anonymous-Edge-Rotation
In this section we show that the decision version of Min Anonymous-Edge-Rotation, the problem Anonymous-Edge-Rotation, is NP-hard. The proof is based on a reduction from a restricted version of a set cover problem, Exact Cover By 3-Sets, which is known to be NP-complete ([START_REF] Garey | Computers and intractability[END_REF]).
Exact Cover By 3-Sets (X3C)
Input: A set X of elements with |X| = 3m and a collection C of 3-element subsets of X where each element appears in exactly 3 sets. Question: Does C contain an exact cover for X, i.e. a subcollection C' ⊆ C such that every element occurs in exactly one member set of C'? Remark 2. Note that |C| = 3m and we can suppose that m is even and larger than 6. If m is odd, we consider the instance I_even defined as follows: X_even = X ∪ {x' | x ∈ X} and C_even = C ∪ {c_{x'y'z'} | c_{xyz} ∈ C}, and thus in the new instance I_even the set has 6m elements and the collection has 6m 3-element subsets.
We define a polynomial-time reduction and then prove the NP-hardness of Anonymous-Edge-Rotation.
Reduction. Let I = (X, C) be an instance of X3C with |X| = |C| = 3m and m even and q ≥ 3 a given constant. We describe the construction σ transforming an instance I into the graph G := σ(I) where G = (V, E) is defined as follows:
• For each element x ∈ X, we add a vertex v x to the set V elem ⊂ V and a vertex u x to the set V hub ⊂ V .
• For each 3-element set {x, y, z} of the collection C, we add 4 vertices c 1 xyz , c 2 xyz , c 3 xyz and c 4 xyz to the set V set ⊂ V .
• For each i ∈ {1, . . . , 5m} we add a vertex w i to the set V reg ⊂ V and for each j ∈ {1, . . . , 10m} we add a vertex t j to V single ⊂ V .
Let V -= V elem ∪V hub ∪V set ∪V reg ∪V single and |V -| = 3m+3m+12m+15m = 33m. If q = 3, then let V = V -. If q ≥ 4, then for each i, 4 ≤ i ≤ q, add a set of 11m vertices denoted
V i dummy . Let V dummy = V 4 dummy ∪ • • • ∪ V q dummy and define V = V -∪ V dummy . Obviously, |V | = 33m + (q -3)11m.
Now we define the set E of the edges in G.
• For all x, y ∈ X such that x ≠ y, we add the edge v_x u_y between the vertex v_x ∈ V_elem and u_y ∈ V_hub to E_X ⊂ E.
• For each 3-element set {x, y, z} of the collection C, ∀i ∈ {1, 2, 3, 4}, we add the edges c i xyz u x , c i xyz u y and c i xyz u z to the set E C ⊂ E.
• We add the set of edges E' ⊂ E to the vertex set V_elem such that (V_elem, E') is an 11-regular graph. Since the number of vertices in the set |V_elem| = 3m is even (m is even) and 11 < 3m, such a regular graph exists [START_REF] Tomescu | Problems in combinatorics and graph theory[END_REF]. Furthermore, such a graph can be constructed in polynomial time using the Havel-Hakimi algorithm [START_REF] Hakimi | On realizability of a set of integers as degrees of the vertices of a linear graph[END_REF].
• We add the set of edges E'' ⊂ E to the vertex set V_reg such that (V_reg, E'') is a (3m + 11)-regular graph. Since the number of vertices of V_reg is even and 3m + 11 < 5m, similarly to the previous case such a regular graph exists and can be constructed in polynomial time.
Finally, let E^- = E_X ∪ E_C ∪ E' ∪ E''. If q = 3, then let E = E^-. If q ≥ 4, then the set E contains E^- and, for any i such that 4 ≤ i ≤ q, we add a set of edges E^i_dummy ⊆ E to the vertex set V^i_dummy such that (V^i_dummy, E^i_dummy) is a (9m + 12)-regular graph. Since the number of vertices of V^i_dummy is even (m is even) and 9m + 12 ≤ 11m, similarly to the previous case such a regular graph exists and can be constructed in polynomial time.
Obviously, the graph G = (V, E) has the following properties: (i) 10m vertices of degree 0 (the vertices of the set V single ), (ii) 12m vertices of degree 3 (the vertices of the set V set ), (iii) 8m vertices of degree 3m + 11 (the vertices of the set V reg and V hub ), (iv) 3m vertices of degree 3m + 10 (the vertices of the set V elem ), (v) (q -3)11m vertices of degree (9m + 12) (the vertices of the set V dummy ).
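For illustration, a possible way to materialise the reduction σ in Python with networkx is sketched below for q = 3. The vertex labels are ad hoc choices, and random_regular_graph stands in for the Havel-Hakimi construction mentioned in the text.

import networkx as nx

def x3c_to_graph(X, C):
    # X: list of 3m elements; C: list of 3m triples, each element in exactly 3 triples
    m = len(X) // 3
    G = nx.Graph()
    v = {x: ("elem", x) for x in X}                      # V_elem
    u = {x: ("hub", x) for x in X}                       # V_hub
    G.add_nodes_from(v.values()); G.add_nodes_from(u.values())
    for idx, (x, y, z) in enumerate(C):                  # V_set: c^1..c^4 per triple
        for i in range(1, 5):
            c = ("set", idx, i)
            G.add_node(c)
            G.add_edges_from([(c, u[x]), (c, u[y]), (c, u[z])])
    for x in X:                                          # E_X: v_x u_y for all x != y
        for y in X:
            if x != y:
                G.add_edge(v[x], u[y])
    for a, b in nx.random_regular_graph(11, 3 * m).edges():          # E' on V_elem
        G.add_edge(v[X[a]], v[X[b]])
    reg = [("reg", i) for i in range(5 * m)]                          # V_reg
    G.add_nodes_from(reg)
    for a, b in nx.random_regular_graph(3 * m + 11, 5 * m).edges():   # E'' on V_reg
        G.add_edge(reg[a], reg[b])
    G.add_nodes_from(("single", j) for j in range(10 * m))            # V_single, degree 0
    return G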
Example. Figure 7 represents the transformation σ for q = 3. Let I_1 be the following instance of X3C: m = 2, X = {1, 2, 3, 4, 5, 6}, and C = {{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {4, 5, 6}, {1, 5, 6}, {1, 2, 6}}. To simplify the figure, we only consider m = 2, but for the construction m must be at least 6 (due to a (3m + 11)-regular graph on the vertex set V_reg). Figure 7: σ(I_1), showing the vertex sets V_elem (v_1, …, v_6), V_hub (u_1, …, u_6), V_set (c^1_{xyz}, …, c^4_{xyz}), V_reg (w_i) and V_single (t_j); non-edges are marked separately. Theorem 4. Anonymous-Edge-Rotation is NP-hard even in the case k = n/q, where n is the order of the graph G for an input instance (G, k, r) and q is a fixed number greater than or equal to 3.
Proof. Let C' ⊆ C be an exact cover for X of size m. Now we define 3m rotations which are independent from each other: for every 3-element set {x, y, z} ∈ C', we replace the edge u_x c^1_{xyz} by the edge u_x v_x, and similarly u_y c^1_{xyz} by u_y v_y and u_z c^1_{xyz} by u_z v_z. Since C' is of size m, we define exactly 3m rotations. Let G' be the graph obtained from G after applying all 3m rotations. Since C' is an exact cover of size m: (i) there are m vertices of type c^1_{xyz} that lost all 3 neighbours and become of degree 0 in G', (ii) all 3m vertices of type v_x are attached to a new neighbour, so they become of degree 3m + 11 in G'.
Then G' has 10m + m = 11m vertices of degree 0, 12m - m = 11m vertices of degree 3, 8m + 3m = 11m vertices of degree 3m + 11, and it contains q - 3 disconnected (9m + 12)-regular subgraphs of size 11m; hence we conclude that G' is an 11m-degree-anonymous graph.
Let I be a yes-instance of Anonymous-Edge-Rotation. Then there exists a sequence of 3m rotations such that the graph G' = (V, E') obtained after applying the rotations to G is an 11m-degree-anonymous graph. Since |V| = 33m + (q - 3)11m, there must be only q different degree classes in G'. Note that with one rotation we can change the degree of two vertices, therefore the degree of at most 6m vertices can be changed by 3m rotations. Since the graph G has more than 6m vertices of the degrees 3m + 11, 3, 0 and 9m + 12, all these degree classes must be in G'. Furthermore, due to the number of vertices of G, these are the only degree classes in G'. This means that in G' the number of vertices of degree 3m + 11 must be increased by 3m, the number of vertices of degree 0 must be increased by m, the number of vertices of degree 3 must be decreased by m, there are no vertices of degree 3m + 10 in G', and the other degree classes keep the same amount of vertices.
A single rotation can increase or decrease the degree of a vertex by 1, therefore using 3m rotations no vertex of degree 3m + 10 in G can have degree 0 in G', and similarly, no vertex of degree 3 in G can have degree 3m + 11 in G'. Therefore the 3m new vertices of degree 3m + 11 in G' must have degree 3m + 10 in G. This is only possible if the degree of each vertex v_x from the set V_elem is increased by 1. Similarly, the m new vertices of degree 0 in G' must have degree 3 in G; let C_G be the set of such vertices. Obviously, C_G must be a subset of V_set, in which the vertices have the form c^ℓ_{xyz} with x, y, z ∈ X, for some set {x, y, z} ∈ C, and ℓ ∈ {1, 2, 3, 4}. For the same reasons, vertices of degree 9m + 12 cannot have degree less than 3m + 12.
To reach the requested degree configuration in G' with exactly 3m edge rotations, in each rotation the degree of a vertex from V_elem must be increased by 1 and the degree of a vertex from the set C_G must be decreased by 1. To achieve that, for each vertex v_x from V_elem, the only possible rotation is to add the edge u_x v_x where u_x ∈ V_hub and remove the edge u_x c^ℓ_{xyz} where c^ℓ_{xyz} ∈ C_G. To fulfil the condition about the degree classes and the number of rotations, the only way to achieve that is that C' = {{x, y, z} | c^ℓ_{xyz} ∈ C_G} is an exact cover of X.
Characterization of the "closest" k-anonymous degree sequence
In this section we suppose that (G, k) is a feasible instance. For any such instance we define a k-anonymous degree sequence S_bound that can be computed in polynomial time if k = θ(n). We show that with the (+1, -1)-degree modifications (Remark 1) the graph G can be transformed into a k-degree-anonymous graph G' with degree sequence S_bound using at most twice as many edge rotations as an optimal solution of Min Anonymous-Edge-Rotation for (G, k).
Note that in general a (+1, -1)-degree modification doesn't correspond to an edge rotation, but as we show later in Section 7.1, it is true for trees. Now in the following steps we show how to define the degree sequence S bound .
Step 1: Compute every available target sequence.
Let S = (s_1, . . . , s_n) be a non-increasing sequence of non-negative integers, r ∈ {1, . . . , n}. Any partition of S into r contiguous subsequences (i.e. if S[a] and S[b] are in one part, then all S[i], a ≤ i ≤ b, must be in the same part) is called a contiguous r-partition. The number of contiguous r-partitions of S is the binomial coefficient (n-1 choose r-1), therefore bounded by (n-1)^{r-1}. Then the number of contiguous partitions of S with at most r parts can be bounded by
Σ_{i=0}^{r-1} (n-1)^i ≤ 2n^{r-1}.
For each contiguous ℓ-partition p, 1 ≤ ℓ ≤ r, we use the notation p = [p_1, . . . , p_ℓ], where p_i denotes the number of elements in part i, 1 ≤ i ≤ ℓ. Note that at this stage what is important is the number of elements in each part, not which elements from S are in it.
Let G be a graph of order n and k an integer, k ≥ 2. If G is a k-degree-anonymous graph, then the vertices of G can be partitioned into at most c = ⌊n/k⌋ parts where the vertices in each part have the same degree. Let P be the set of all such contiguous partitions with at most c parts. As it follows from the initial discussion, the number of such partitions is bounded by 2n^{c-1}. Now for each contiguous partition p = [p_1, p_2, . . . , p_ℓ] ∈ P, ℓ ∈ {1, . . . , c}, we compute all non-increasing sequences (d_1, d_2, . . . , d_ℓ) of integers d_i such that 0 ≤ d_i < |V|. Let P̃_p be the set of all feasible k-anonymous degree sequences for p, i.e. S = (d_1, . . . , d_1 (p_1 times), d_2, . . . , d_2 (p_2 times), . . . , d_ℓ, . . . , d_ℓ (p_ℓ times)) = (d_1^{p_1}, d_2^{p_2}, . . . , d_ℓ^{p_ℓ}) ∈ P̃_p if and only if Σ_{i=1}^{ℓ} p_i d_i = 2|E|, S is graphic and k-anonymous. For each contiguous partition p with ℓ parts, 1 ≤ ℓ ≤ c, there are at most n possibilities for a degree on each position. The test whether the generated sequence is graphic and k-anonymous can be done in O(n) operations. Since |P| = O(n^{c-1}), there are at most O(n^{c-1} × n^c × n) ≤ O(n^{2c}) operations to compute all feasible degree sequences of every partition, where c = ⌊n/k⌋. Obviously, if c is a constant, such a number of operations is polynomial.
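A brute-force rendering of Step 1, under the simplifying assumption that each part of the partition already contains at least k positions, could look as follows (is_graphic is the Erdős–Gallai check sketched earlier):

from itertools import combinations

def contiguous_partitions(n, c):
    # yield part sizes [p_1, ..., p_l] with l <= c parts and sum p_i = n
    for l in range(1, c + 1):
        for cuts in combinations(range(1, n), l - 1):
            bounds = (0,) + cuts + (n,)
            yield [bounds[i + 1] - bounds[i] for i in range(l)]

def candidate_sequences(n, num_edges, k, c, max_deg):
    # yield feasible k-anonymous degree sequences S of length n
    def assign(parts, prev, chosen):
        if not parts:
            seq = [deg for deg, size in chosen for _ in range(size)]
            if sum(seq) == 2 * num_edges and is_graphic(seq):
                yield seq
            return
        for deg in range(min(prev, max_deg), -1, -1):      # keep S non-increasing
            yield from assign(parts[1:], deg, chosen + [(deg, parts[0])])
    for p in contiguous_partitions(n, c):
        if all(size >= k for size in p):                    # each degree class has >= k vertices
            yield from assign(p, n - 1, [])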
Step 2: Find the best one. Now based on the previous analysis we can define the degree sequence S_bound and prove some basic properties.
Definition 5. Let G be a graph with the degree sequence S_G. Then define S_bound for G as a degree sequence for which the sum Σ_{i=1}^{n} |S_G[i] - S[i]| achieves the minimum over all elements S ∈ P̃_p and p ∈ P.
Remark 3. Similarly to the k-anonymous sequence S_bound defined in Definition 5 for a graph, we can define a k-anonymous sequence S^T_bound for a tree. The only difference is that in the set P̃_p, every feasible solution must have d_i ≥ 1, which would be a subset of P̃_p. Also for the testing, we don't need to check whether S is graphic; the condition Σ_{i=1}^{ℓ} p_i d_i = 2|E| is enough for the degree sequence of a tree.
Lemma 1. Let S be an n-sequence of non-negative integers and denote by S' the sequence S sorted in non-increasing order. Let S_s be another n-sequence of non-negative integers sorted in non-increasing order. Then
Σ_{i=1}^{n} |S_s[i] - S'[i]| ≤ Σ_{i=1}^{n} |S_s[i] - S[i]|.   (3)
Proof. If S is already in non-increasing order then (3) holds. If not, then there exist positive integers a, b such that a < b and S[a] < S[b]. Let S_1 be the sequence defined by swapping the values S[a], S[b], hence: S_1[a] = S[b], S_1[b] = S[a], and S_1[i] = S[i] otherwise. We denote
A = Σ_{i=1}^{n} |S_s[i] - S[i]| - Σ_{i=1}^{n} |S_s[i] - S_1[i]| = |S_s[a] - S[a]| - |S_s[a] - S_1[a]| + |S_s[b] - S[b]| - |S_s[b] - S_1[b]| = |S_s[a] - S_1[b]| - |S_s[a] - S_1[a]| + |S_s[b] - S_1[a]| - |S_s[b] - S_1[b]|.
In order to follow the six different cases more easily, let
x 1 = S s [a], x 2 = S s [b], x 3 = S 1 [a], x 4 = S 1 [b], and thus A = |x 1 -x 4 | -|x 1 -x 3 | + |x 2 -x 3 | -|x 2 -x 4 |.
Following our assumptions x 1 ≥ x 2 and x 3 > x 4 . Now for all possible arrangements of x 1 , x 2 , x 3 , x 4 we discuss the value A:
• x_1 ≥ x_2 ≥ x_3 > x_4: A = x_1 - x_4 - x_1 + x_3 + x_2 - x_3 - x_2 + x_4 = 0
• x_3 > x_4 ≥ x_1 ≥ x_2: A = x_4 - x_1 - x_3 + x_1 + x_3 - x_2 - x_4 + x_2 = 0
• x_1 ≥ x_3 > x_4 ≥ x_2: A = x_1 - x_4 - x_1 + x_3 + x_3 - x_2 - x_4 + x_2 = 2x_3 - 2x_4 > 0
• x_3 ≥ x_1 ≥ x_2 ≥ x_4: A = x_1 - x_4 - x_3 + x_1 + x_3 - x_2 - x_2 + x_4 = 2x_1 - 2x_2 ≥ 0
• x_1 ≥ x_3 ≥ x_2 ≥ x_4: A = x_1 - x_4 - x_1 + x_3 + x_3 - x_2 - x_2 + x_4 = 2x_3 - 2x_2 ≥ 0
• x_3 ≥ x_1 ≥ x_4 ≥ x_2: A = x_1 - x_4 - x_3 + x_1 + x_3 - x_2 - x_4 + x_2 = 2x_1 - 2x_4 ≥ 0
We can conclude that in all cases A ≥ 0, therefore
Σ_{i=1}^{n} |S_s[i] - S_1[i]| ≤ Σ_{i=1}^{n} |S_s[i] - S[i]|.
If the sequence S_1 is still not in non-increasing order, we can repeat the process of swapping for the next two unsorted elements of S_1 until we obtain the non-increasing sequence S'. Each process can be repeated independently, therefore
Σ_{i=1}^{n} |S_s[i] - S'[i]| ≤ Σ_{i=1}^{n} |S_s[i] - S[i]|.
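A quick numerical sanity check of Lemma 1, with purely illustrative values:

def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

S_s = [5, 4, 4, 2, 1]                   # fixed non-increasing sequence
S = [1, 4, 5, 2, 4]                     # S in arbitrary order
S_sorted = sorted(S, reverse=True)      # the sequence S' of the lemma
assert l1_distance(S_s, S_sorted) <= l1_distance(S_s, S)   # here 0 <= 8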
Theorem 5. Let (G, k) be a feasible instance for the Min Anonymous-Edge-Rotation problem. Let OPT be an optimum solution, that is, a minimum set of rotations that transform G to a k-degree-anonymous graph G'. Then
Σ_{i=1}^{n} |S_G[i] - S_bound[i]| ≤ 2|OPT|,
where the degree sequence S bound is defined in Definition 5.
Proof. Let S'_G be the degree sequence of G' sorted in the same order as S_G (i.e. for every v ∈ V, if deg_G(v) is in position i in S_G then deg_{G'}(v) is in position i in S'_G). Let S''_G be the degree sequence S'_G sorted in non-increasing order. As in the definition of S_bound we considered all the options, there must exist p ∈ P and S ∈ P̃_p such that S = S''_G, and
Σ_{i=1}^{n} |S_G[i] - S_bound[i]| ≤ Σ_{i=1}^{n} |S_G[i] - S''_G[i]|.
Since the degree sequence S_G is sorted in non-increasing order, then
Σ_{i=1}^{n} |S_G[i] - S''_G[i]| ≤ Σ_{i=1}^{n} |S_G[i] - S'_G[i]|
by Lemma 1. One rotation from the graph G_j to G_{j+1} in the sequence of the graphs from G to G' can only decrease the degree of a vertex by one and increase the degree of another one by one, hence
Σ_{i=1}^{n} |S_{G_j}[i] - S'_G[i]| ≤ Σ_{i=1}^{n} |S_{G_{j+1}}[i] - S'_G[i]| + 2.
This means that by one rotation the value Σ_{i=1}^{n} |S_{G_j}[i] - S'_G[i]| decreases by at most 2. After |OPT| rotations, the last graph in the sequence is G', therefore Σ_{i=1}^{n} |S_G[i] - S'_G[i]| ≤ 2|OPT| and the theorem follows.
Approximation of Min Anonymous-Edge-Rotation
In this section we show that under some constraints on the number of edges and k, there exists a polynomial time 2-approximation algorithm for the Min Anonymous-Edge-Rotation problem for all feasible inputs (G, k).
Remark 4. Let S = (x_1, x_2, . . . , x_n) be a non-increasing sequence of n non-negative integers. Denote by R = x_1 - x_n, A_0 = (x_1 + x_n)/2, and by A the average of S. The standard deviation of S is defined as σ(S) = √(Σ_{i=1}^{n} (x_i - A)² / n). It can be shown that
Σ_{i=1}^{n} (x_i - A)² ≤ Σ_{i=1}^{n} (x_i - A_0)² ≤ nR²/4, hence σ(S) ≤ R/2.
The mean absolute deviation of S is defined as
MAD[S] = (1/n) Σ_{i=1}^{n} |x_i - A|.
It is well known (e.g. applying Jensen's inequality) that MAD[S] ≤ σ(S).
Based on the correlation mentioned in Remark 4, we calculate an upper bound on the values in the degree sequence S_bound in the following lemma.
Lemma 2. Let (G, k) be an instance of the Min Anonymous-Edge-Rotation problem where G is the graph with n vertices and m edges. Suppose that n/2 ≤ m ≤ n(n-3)/2, k ≤ n/4, and let the constant c be defined as c = ⌊n/k⌋, hence k = θ(n). Let S_bound be the k-anonymous degree sequence associated with G defined following Definition 5. Then for every i, 1 ≤ i ≤ n, S_bound[i] ≤ min{(1 + n/(4k) + n/(k∆))∆, n - 1}.
Proof. Let S_G be the degree sequence of G sorted in non-increasing order and D the k-anonymous degree sequence constructed following Theorem 2. Denote the unrounded average degree as A = (Σ_{i=1}^{n} S_G[i])/n. Then using Remark 4, the standard deviation of S_G satisfies σ[S_G] ≤ ∆/2, and MAD[S_G] ≤ σ(S_G). Hence
Σ_{i=1}^{n} |S_G[i] - D[i]| ≤ Σ_{i=1}^{n} max(|S_G[i] - (A + 2)|, |S_G[i] - (A - 1)|) ≤ Σ_{i=1}^{n} |S_G[i] - A| + Σ_{i=1}^{n} 2 = n·MAD[S_G] + 2n ≤ nσ[S_G] + 2n ≤ n∆/2 + 2n = n(∆ + 4)/2.
Let ∆' be the maximum value of S_bound. If ∆' ≤ ∆, then the condition of the lemma holds. If ∆' > ∆, then the distance between the k first elements of S_bound and the k first elements of S_G is at least k(∆' - ∆), since S_bound is k-anonymous and sorted in non-increasing order. Because
Σ_{i=1}^{n} S_bound[i] = Σ_{i=1}^{n} S_G[i],
if the value of some elements is increased by a certain amount, the value of some others has to be decreased by the same amount, so
Σ_{i=1}^{n} |S_G[i] - S_bound[i]| ≥ 2k(∆' - ∆). If ∆' > (1 + n/(4k) + n/(k∆))∆ then Σ_{i=1}^{n} |S_G[i] - S_bound[i]| > 2k(n/(4k) + n/(k∆))∆ = n(∆ + 4)/2 ≥ Σ_{i=1}^{n} |S_G[i] - D[i]|,
which is not possible due to the minimality of S_bound.
In the following two lemmas we prove that if a graph has 'sufficiently' many edges, then edge rotations with specific properties exist in the graph. Lemma 3. Let G = (V, E) be a graph with |E| > ∆², and let uv ∈ E. Then there exists an edge ab ∈ E such that both vertices a and b are different from u and v and at most one of the edges {av, au, bv, bu} is in E.
Proof. For an edge xy ∈ E, let N_x = N_G(x) \ {y} and N_y = N_G(y) \ {x}. For a contradiction suppose there exists an edge uv ∈ E such that for every edge ab ∈ E \ (Inc(u) ∪ Inc(v)) at least two of the edges {av, au, bv, bu} are in E. Then at least one vertex from {a, b} is incident to both vertices u, v, hence belongs to N_u ∩ N_v, or both vertices {a, b} are in (N_u ∪ N_v) \ (N_u ∩ N_v). Moreover, every vertex in N_u ∪ N_v has at most ∆ - 1 neighbours in V \ {u, v}. Hence,
|E \ (Inc(u) ∪ Inc(v))| ≤ (∆ - 1) × (|N_u ∩ N_v| + |(N_u ∪ N_v) \ (N_u ∩ N_v)| / 2) = (∆ - 1) × (|N_u ∩ N_v| + |N_u ∪ N_v|) / 2 = (∆ - 1) × (|N_u| + |N_v|) / 2 ≤ (∆ - 1)².
Then |E| ≤ |Inc(u) ∪ Inc(v)| + |E \ (Inc(u) ∪ Inc(v))| ≤ 1 + 2(∆ - 1) + (∆ - 1)² = ∆². This is in contradiction with the hypothesis |E| > ∆².
Lemma 4. Let G = (V, E) be a graph and suppose |E| > ∆². Let v^+, v^- ∈ V such that 1 ≤ d_G(v^-) ≤ ∆ and 0 ≤ d_G(v^+) ≤ ∆ < |V| - 1.
Then there exists a sequence of at most two edge rotations that transform G to G' such that d_{G'}(v^+) = d_G(v^+) + 1, d_{G'}(v^-) = d_G(v^-) - 1, and the degrees of other vertices in G are not changed. These rotations can be found in O(|E|²) steps.
Proof. Case 1: Suppose there exists a vertex v ∈ V such that v ∈ N_G(v^-) and v ∉ N_G(v^+). Let G' be the graph obtained from G by removing the edge vv^- and adding the edge vv^+, hence using the rotation (vv^-, vv^+). Obviously,
d_{G'}(v^+) = d_G(v^+) + 1, d_{G'}(v^-) = d_G(v^-) - 1
and G' is obtained by using a single rotation. Case 2: N(v^-) ⊆ N(v^+). Let u ∈ N_G(v^-). Since |E| > ∆² and uv^+ ∈ E, by Lemma 3 there exists an edge ab ∈ E such that at most one edge of the set {av^+, au, bv^+, bu} is in E. If au is in E, then the graph G' obtained by the two rotations (ab, av^+) and (uv^-, ub) has the required properties. If av^+ is in E, then the graph G' obtained by the two rotations (ba, bv^+) and (uv^-, ua) has the required properties. The remaining two cases, if bu or bv^+ is in E, are symmetrical to the above cases; it is enough to swap a and b.
Obviously, such an edge ab can be found in O(|E|²).
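A rough Python sketch of the rotations guaranteed by Lemma 4 is given below; the adjacency-dictionary representation and the handling of corner cases (e.g. v^+ adjacent to v^-) are assumptions made for brevity, not part of the paper.

def lemma4_rotations(adj, v_plus, v_minus):
    # adj: dict vertex -> set of neighbours; the rotations are applied in place
    # Case 1: some neighbour of v_minus is not a neighbour of v_plus
    for v in list(adj[v_minus]):
        if v != v_plus and v not in adj[v_plus]:
            adj[v_minus].discard(v); adj[v].discard(v_minus)
            adj[v_plus].add(v); adj[v].add(v_plus)
            return [((v, v_minus), (v, v_plus))]
    # Case 2: N(v_minus) is contained in N(v_plus); apply Lemma 3 to an edge u v_plus
    u = next(w for w in adj[v_minus] if w != v_plus)
    for a in list(adj):
        for b in list(adj[a]):
            if {a, b} & {u, v_plus, v_minus}:
                continue
            present = [e for e in ((a, v_plus), (a, u), (b, v_plus), (b, u))
                       if e[1] in adj[e[0]]]
            if len(present) <= 1:
                if present and present[0] in ((a, v_plus), (b, u)):
                    a, b = b, a                       # relabel so a v_plus and u b are non-edges
                adj[a].discard(b); adj[b].discard(a)  # rotation (ab, a v_plus)
                adj[a].add(v_plus); adj[v_plus].add(a)
                adj[u].discard(v_minus); adj[v_minus].discard(u)   # rotation (u v_minus, u b)
                adj[u].add(b); adj[b].add(u)
                return [((a, b), (a, v_plus)), ((u, v_minus), (u, b))]
    return []   # not reached when |E| > Delta^2, by Lemma 3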
Theorem 6. The Min Anonymous-Edge-Rotation problem is polynomial-time 2-approximable for all instances (G, k), k ≤ n/4, where k = θ(n) and G is a graph with n vertices and m edges such that max{n/2, (1 + n/(4k) + n/(k∆))²∆²} ≤ m ≤ n(n-3)/2, and the constant c is defined as c = ⌊n/k⌋.
Proof. Let (G = (V, E), k) be an instance of Min Anonymous-Edge-Rotation and S_G be the degree sequence of G. Let the constant c be defined as c = ⌊n/k⌋. Due to our assumptions about the number of edges and k, all such instances are feasible, as follows from Section 3. First we compute a k-anonymous degree sequence S_bound following Definition 5 in O(n^{2c}) steps. Due to the assumption k = θ(n), and consequently c being a constant, such a number of steps is polynomial. Furthermore, the condition on the number of edges ensures that we can always apply Lemma 4 and find suitable edge rotations.
If there exist two vertices v^+, v^- ∈ V such that 0 ≤ S_G[v^+] < S_bound[v^+] ≤ (1 + n/(4k) + n/(k∆))∆ < |V| - 1 and S_G[v^-] > S_bound[v^-], we apply Lemma 4 to transform G to a graph G_1 with at most two rotations such that d_{G_1}(v^+) = d_G(v^+) + 1 and d_{G_1}(v^-) = d_G(v^-) - 1.
We will be executing the above transformations while there are two vertices v^+, v^- ∈ V with the required properties. In each such transformation we decrease the degree of one vertex by 1 and increase the degree of another one by 1 with at most two rotations. Hence we transform G to a final graph G' with degree sequence S_bound by at most Σ_{i=1}^{n} |S_G[i] - S_bound[i]| rotations. By Theorem 5 we know that Σ_{i=1}^{n} |S_G[i] - S_bound[i]| ≤ 2|OPT|, hence we use at most 2 times the number of rotations of an optimal solution. In each transformation loop, searching for the vertices v^+ and v^- can be done in time O(n) and searching for an edge ab in time O(m²) (Lemma 3). Due to the modifications in each transformation loop, there can be at most O(n²) loops. Therefore the time complexity is bounded by O(n^{2c} + n² × m² × n). Since c ≥ 4, O(n^{2c} + m² × n³) ≤ O(n^{2c}). Finally, since S_bound is k-anonymous, G' is a k-degree-anonymous graph.
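The overall 2-approximation loop can then be sketched as follows, assuming S_bound has already been assigned to the vertices (e.g. largest target degree to the vertex of largest degree) and reusing lemma4_rotations from the previous sketch:

def approximate_anonymisation(adj, target):
    # adj: dict vertex -> set of neighbours; target: dict vertex -> degree taken from S_bound
    rotations = []
    while True:
        v_plus = next((v for v in adj if len(adj[v]) < target[v]), None)
        v_minus = next((v for v in adj if len(adj[v]) > target[v]), None)
        if v_plus is None or v_minus is None:
            break                        # every vertex has reached its S_bound degree
        rotations += lemma4_rotations(adj, v_plus, v_minus)
    return rotations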
Polynomial cases for Min Anonymous-Edge-Rotation
As follows from Section 4, the Min Anonymous-Edge-Rotation problem is NP-hard even for k = n/q, where q ≥ 3 is a fixed constant and n is the order of an input graph. In this section we show that the problem can be solved in polynomial time on trees when k = θ(n), or in the case of any graph when k = n.
Trees
For a tree T = (V, E) rooted in a vertex r, for any v ∈ V, v ≠ r, child(v) is a vertex that is a neighbour of v not on the path from r to v. Lemma 5. Let T = (V, E) be a tree and v^-, v^+ vertices from V such that v^- is not a leaf and v^+ is not a universal vertex. Then using one rotation we can transform T into a tree T' such that d_{T'}(v^-) = d_T(v^-) - 1 and d_{T'}(v^+) = d_T(v^+) + 1.
Proof. Let v^+ be the root of T. Since v^- is not a leaf, there exists a vertex c ∈ child(v^-). Since T is a tree, cv^+ ∉ E. Therefore we can define the rotation (cv^-, cv^+) (see Figure 8). Let T' be the graph obtained after such a rotation. Since there is no edge between the subtree of c and other vertices, T' is a tree. Moreover, d_{T'}(v^-) = d_T(v^-) - 1 and d_{T'}(v^+) = d_T(v^+) + 1. Obviously, the algorithm runs in polynomial time.
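The single tree rotation of Lemma 5 can be sketched as follows, with an adjacency-dictionary representation assumed for the tree:

def tree_rotation(adj, v_minus, v_plus):
    # root the tree at v_plus, find a child c of v_minus, and rotate the edge c v_minus to c v_plus
    parent = {v_plus: None}
    stack = [v_plus]
    while stack:                               # orient the tree away from v_plus
        x = stack.pop()
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    c = next(y for y in adj[v_minus] if y != parent[v_minus])   # a child of v_minus
    adj[v_minus].discard(c); adj[c].discard(v_minus)            # remove edge c v_minus
    adj[c].add(v_plus); adj[v_plus].add(c)                      # add edge c v_plus
    return ((c, v_minus), (c, v_plus))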
Conclusion
In this paper we initiate the study of the complexity of the Min Anonymous-Edge-Rotation problem, in which the task is to transform a given graph to a k-degree-anonymous graph using a minimum number of edge rotations. As we were able to prove NP-hardness in the case where the number of vertices k in each degree class is θ(n), further research could explore stronger hardness results or cases when k is a constant. Our next research step includes relaxation of the condition on the number of edges in the presented 2-approximation algorithm, as well as extension of the graph classes in which the Min Anonymous-Edge-Rotation problem can be solved in polynomial time. As the problem doesn't have a solution for all graphs and all possible values of k, our initial feasibility study covers a large part of instances. Extensions of the results are still possible, in the sense of necessary and sufficient conditions.
Figure 1: An edge rotation (uv, uw) from uv to uw
Figure 2: Case 1
Figure 3: Case 2
Figure 8: Transformation T to T'
7.2 One degree class, k = n
In this part we show that Min Anonymous-Edge-Rotation is polynomial-time solvable for instances where k coincides with the number of vertices of the graph, that means all vertices must be in the same degree class.
Algorithm 1: Algorithm for k = |V|
Input: A graph G = (V, E)
Output: A sequence S of edge rotations if 2|E|/|V| is an integer, NO otherwise
S = ∅; d = 2|E|/|V|;
if d is not an integer then
    return NO;
else
    while ∃ u, v ∈ V such that d_G(u) < d and d_G(v) > d do
        Let w ∈ N(v) \ N(u);
        E = E \ {vw};
        E = E ∪ {uw};
        S = S ∪ {(wv, wu)};
    end
end
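A direct Python transcription of Algorithm 1, again on an adjacency dictionary, might read:

def regularise(adj):
    # returns the rotations turning G into a (2m/n)-regular graph, or None if 2m/n is not an integer
    n = len(adj)
    two_m = sum(len(neigh) for neigh in adj.values())
    if two_m % n:
        return None
    d = two_m // n
    rotations = []
    while True:
        u = next((x for x in adj if len(adj[x]) < d), None)
        v = next((x for x in adj if len(adj[x]) > d), None)
        if u is None or v is None:
            break
        # a suitable w exists because deg(v) >= deg(u) + 2 (cf. Lemmas 6 and 7)
        w = next(x for x in adj[v] if x != u and x not in adj[u])
        adj[v].discard(w); adj[w].discard(v)
        adj[u].add(w); adj[w].add(u)
        rotations.append(((w, v), (w, u)))
    return rotations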
Definition 4. A sequence of integers D = (d_1, d_2, . . . , d_n) is called k-anonymous, where k ∈ {1, . . . , n}, if for each element d_i from D there are at least k - 1 other elements in D with the same value. A graph G is called k-degree-anonymous if its degree sequence is k-anonymous. The vertices of the same degree correspond to a degree class.
In this paper we study the following anonymization problem:
Min Anonymous-Edge-Rotation
Input: (G, k) where G = (V, E) is an undirected graph and k a positive integer, k ∈ {1, . . . , |V|}.
Output: If there is a solution, find a sequence of a minimum number ℓ + 1 of graphs G_0 = G, G_1, G_2, . . . , G_ℓ such that G_{i+1} can be obtained from G_i by one edge rotation, and G_ℓ is k-degree-anonymous.
A (+1, -1)-degree modification changes a degree sequence (d_1, . . . , d_n) in such a way that d_i := d_i + 1, d_j := d_j - 1 for any two indices i, j such that i, j ∈ {1, . . . , n}. Note that each edge rotation corresponds to a (+1, -1)-degree modification, but not the opposite.
Lemma 6. Let G = (V, E) be a graph and u, v ∈ V. If N_G(u) ⊈ N_G(v), then there is an edge rotation that leads to a graph G' such that d_{G'}(u) = d_G(u) - 1 and d_{G'}(v) = d_G(v) + 1.
Proof. Since N_G(u) ⊈ N_G(v), there exists w ∈ V such that uw ∈ E and vw ∉ E. Then we can do the edge rotation (uw, vw) and get the graph G' with E' = (E \ {uw}) ∪ {vw}.
Lemma 7. Let (G, n) be an instance of Min Anonymous-Edge-Rotation where G ∈ G(n, m) for some positive integers m, n, and 2m/n is an integer. Then the optimum value of Min Anonymous-Edge-Rotation on (G, n) is Σ_{w∈V} |d_G(w) - 2m/n| / 2.
Proof. By Remark 5, there is an edge rotation that leads to a graph ... At least Σ_{w∈V} |d_G(w) - 2m/n| / 2 rotations are necessary to have all the vertices of the same degree 2m/n, therefore the optimum value of Min Anonymous-Edge-Rotation on the instance (G, n) is at least Σ_{w∈V} |d_G(w) - 2m/n| / 2. Now suppose that the optimum value is r, strictly less than Σ_{w∈V} |d_G(w) - 2m/n| / 2. Each rotation increases the degree of a vertex by one and decreases the degree of another vertex by one too. Obviously, each vertex w has to be involved in at least |d_G(w) - 2m/n| edge rotations to reach the degree 2m/n. Hence if there are r < Σ_{w∈V} |d_G(w) - 2m/n| / 2 edge rotations, then in any graph G' obtained from G using r edge rotations there exists w ∈ V with d_{G'}(w) ≠ 2m/n.
Theorem 8. The Min Anonymous-Edge-Rotation problem is polynomial-time solvable for instances (G, k) when k = n, where n is the order of the graph G.
Proof. In case k = n, we are looking for an n-degree-anonymous graph with only one degree class, hence for a regular graph. Due to Theorem 3, we can easily decide whether (G, n) is a feasible instance of Min Anonymous-Edge-Rotation: if for G ∈ G(n, m) the fraction 2m/n is not an integer, (G, n) is not a feasible input.
For a feasible input (G, n), the result is based on Algorithm 1 and its correctness follows from Lemmas 6 and 7.
Declaration of interests
☒ The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
☐The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Cristina Bazgan Pierre Cazals Janka Chlebikova |
03716310 | en | [
"shs.anthro-bio",
"sdv.mhep.aha"
] | 2024/03/04 16:41:22 | 2021 | https://amu.hal.science/hal-03716310/file/S2468785520301713.pdf | Floriane Remy Msc
DDS b Pierre Bonnaure
MD c Philippe Moisdon
Philippe Burgart Msc
Yves Godio-Raboutet Msc
PhD Lionel Thollon
MD Laurent Guyot
Floriane Remy
email: [email protected]
Preliminary results on the impact of simultaneous palatal expansion and mandibular advancement on the respiratory status recorded during sleep in OSAS
Keywords: Maxillary expansion, Mandibular advancement, Custom-made oral appliance, Child, Obstructive Sleep Apnea, Sleep study
Introduction
Obstructive Sleep Apnea Syndrome (OSAS) is a respiratory disorder characterized by a partial or complete obstruction of the upper airways during sleep (respectively apnea or hypopnea). Its prevalence within the pediatric population ranges between 1% and 5% [START_REF] Marcus | Diagnosis and Management of Childhood Obstructive Sleep Apnea Syndrome[END_REF].
Because of this abnormal ventilation, this disorder leads to an oxygen desaturation and a sleep fragmentation and is associated with various signs and symptoms as snoring, excessive daytime sleepiness, reduced concentration, hypertension or even cardiac arrhythmia [START_REF] Marcus | Diagnosis and Management of Childhood Obstructive Sleep Apnea Syndrome[END_REF]. Its early management and resolution are therefore important for the healthy development of the child.
Several factors should be considered to choose the more efficient therapeutic strategy: the etiology of the obstruction, its degree of severity and the patient's compliance to undergo continuous treatment. Among the therapies available, the adenotonsillectomy presents good short-term results but several cases of recurrence were reported, especially in children with an underdeveloped maxilla or a retrusive mandible [START_REF] Guilleminault | Sleep disordered breathing: surgical outcomes in prepubertal children[END_REF]. Even though less invasive, the Continuous Positive Air Pressure (CPAP) technique outcomes depend on the obstruction severity and the child's compliance [START_REF] Lynch | Quality of Life in Youth With Obstructive Sleep Apnea Syndrome (OSAS) Treated With Continuous Positive Airway Pressure (CPAP) Therapy[END_REF].
Constricted maxillary dental arch and a retrusive position of the mandible are considered as common characteristics in the pediatric OSAS [START_REF] Flores-Mir | Craniofacial morphological characteristics in children with obstructive sleep apnea syndrome: a systematic review and meta-analysis[END_REF]. Besides, according to the systematic review of Caron et al., OSAS is associated to a craniofacial anatomy anomaly in 7% to 67% of the pediatric population [START_REF] Caron | Obstructive sleep apnoea in craniofacial microsomia: a systematic review[END_REF]. Indeed, considering their anatomical connections, a posterior displacement of the mandible results in a fall of the tongue in the pharynx and an anteroposterior narrowing of the upper airways [START_REF] Muto | A cephalometric evaluation of the pharyngeal airway space in patients with mandibular retrognathia and prognathia, and normal subjects[END_REF]. Likewise, a narrow palate is associated with an insufficient development of the nasal cavities, so an oral breathing pattern is commonly observed in these patients [START_REF] Baroni | Craniofacial features of subjects with adenoid, tonsillar, or adenotonsillar hypertrophy[END_REF].
Considering these anatomic abnormalities, dentists are in favorable position to identify and help in the management of pediatric OSAS [START_REF] Academy | Oral healt policy on obstructive sleep apnea[END_REF]. Besides, oral appliances have been widely used in adults, with great amount of success [START_REF] Ramar | Clinical Practice Guideline for the Treatment of Obstructive Sleep Apnea and Snoring with Oral Appliance Therapy: An Update for 2015[END_REF]. Among the various designs of oral appliances, rapid maxillary expanders and mandibular advancement devices improve the upper airways patency and relieve their obstruction: by widening the palatal vault and the nasal cavities, by moving the tongue outside the pharynx, by changing hyoid bone position or by modifying airways dimensions. Even though the effects of these techniques on the oral space have already been observed [START_REF] Machado-Júnior | Rapid maxillary expansion and obstructive sleep apnea: A review and meta-analysis[END_REF][START_REF] Noller | Mandibular advancement for pediatric obstructive sleep apnea: A systematic review and meta-analysis[END_REF], very few studies applied them simultaneously [START_REF] Galeotti | Effects of simultaneous palatal expansion and mandibular advancement in a child suffering from OSA[END_REF][START_REF] Rădescu | Effects of rapid palatal expansion (RPE) and twin block mandibular advancement device (MAD) on pharyngeal structures in Class II pediatric patients from Cluj-Napoca, Romania[END_REF].
In the present study, the effects of an innovative custom-made oral device combining these two technics were reported in children with OSAS and Class II malocclusion: this study aimed to demonstrate that simultaneous palatal expansion and mandibular advancement modifies sufficiently the craniofacial anatomy to improve the respiratory status during sleep.
Materials and Methods
The study sample
The study sample was selected from the patients of a French private orthodontic practice who consulted for a Class II malocclusion. Only children who were identified as OSAS during a diagnostic polysomnography prescribed by the orthodontist were selected. OSAS was diagnosed with an Apnea/Hypopnea Index (AHI) ≥1 event per hour of sleep. These children were systematically treated with the personalized innovative oral appliance described below.
Thus, the study sample was composed of 103 children aged between 3 and 12 years (mean ±standard deviation = 7.0 ± 1.8 years old).
According to the French law n°2012-300 of March 5 th , 2012 on researches involving human beings, because this study was retrospective and non-interventional, and because data were anonymized, no signed consent was required. This study was reviewed and approved by the Institutional Review Board of Aix-Marseille University (no. 2019-14-03-001).
The custom-made oral appliance
All children were treated with a personalized innovative oral appliance composed of two thermoformed intra-buccal pieces (Fig. 1) and described as follows:
-The maxilla expander is a hyrax-type screw 13 mm (RMO®, Denver, Colorado) fixed with an imbedded 0.045-inch wire framework on an acrylic maxillary plate (Duran®, Scheu Dental, Iserlohn, Germany) glued to the buccal, lingual and occlusal surfaces of the canine and first and second deciduous molars (with the glue Fuji ORTHO™ LC, GCAmerica, Alsip, Illinois). Activation of the expander screw corresponded to a maxillary arch expansion of about 0.25 mm.
-The mandibular advancement device is a removable acrylic plate (Durasoft®, Scheu Dental, Iserlohn, Germany) which covers all the lower teeth. The device was designed to create an initial maximal jumping, to achieve a normal dental occlusion (Class I occlusion). For two months, this protrusion could be readjusted by the physician according to the achieved outcomes and the child's comfort, with an advancement of 1-2 mm.
-The upper and lower plates could be connected by sliding an embedded 0.045-inch wire framework on the mandibular plate in the embedded acrylic tubes located on the labial side of the maxillary plate. The child's parents were instructed to advance the maxillary expander screw once a night for 20 days and then twice a night for 10 days. During a consultation at the end of these 30 days, the extension rate should have been > 10 mm. If not, new activations were planned; otherwise the patient was instructed to stop activations and to wear the mandibular advancement device, only during night. At this time, the maxillary expander screw remained in the child's mouth for another two months to ensure the calcification of the palatine suture. After these three months (30 days of activations + two months of calcification), the expander screw was deposed whereas the acrylic maxillary plate remained glued to the superior deciduous teeth during the whole treatment period to fix the mandibular advancement device. At this time, a lingual rehabilitation with a speech therapist was also initiated. After approximately nine months of treatment, a functional Class I bite was achieved. At that time, the device (both the lower and upper parts) was removed. This treatment protocol is summarized in Fig. 2 where T0 referred to the initiation of the treatment.
Sleep studies
The respiratory status during sleep was assessed with sleep studies performed with SOMNOlab ® (Weinmann, Hamburg, Germany) at patient's home, at the beginning and at the end of the treatment.
An Apnea/Hypopnea Index (AHI) was defined as the number of apneas and hypopneas events per hours of recorded sleep. In accordance with the reviewed literature, this index was used to assess the severity of the child's OSAS as: mild (1≤ AHI <5), moderate (5≤ AHI <10) or severe (≥10) [START_REF] Álvarez | Documento de consenso del síndrome de apneas-hipopneas durante el sueño en niños (versión completa)[END_REF].
An Oxygen Desaturation Rate (ODR) was defined as the amount of time the blood oxygen saturation was ≤96% over the recorded sleep period. A pediatric OSAS was considered when this rate was >1.4% [START_REF] Álvarez | Documento de consenso del síndrome de apneas-hipopneas durante el sueño en niños (versión completa)[END_REF].
An Arousal Index (AI) was computed as the number of arousals per hour of total sleep time.
An AI ≥11 was considered pathological [START_REF] Goh | Sleep Architecture and Respiratory Disturbances in Children with Obstructive Sleep Apnea[END_REF][START_REF] Scholle | Normative values of polysomnographic parameters in childhood and adolescence: arousal events[END_REF]. This index was classified as respiratory-related (AISRRD) if the arousal occurred immediately after an apnea or hypopnea. If so, a Sleep Respiratory-Related Disorder (SRRD) was identified when the AISRRD was >1 [START_REF] Scholle | Normative values of polysomnographic parameters in childhood and adolescence: arousal events[END_REF][START_REF] Franco | Overnight polysomnography versus respiratory polygraphy in the diagnosis of pediatric obstructive sleep apnea][END_REF].
Likewise, when the arousal occurred together with a series of consecutive leg movements, it was classified as related to a Periodic Limb Movement syndrome (PLM) [START_REF]Atlas Task Force. Recording and scoring leg movements[END_REF]. When this kind of arousals (reported as AIPLM) occurred ≥5 times per hours of recorded sleep, it was considered pathologic [START_REF]The international classification of sleep disorders: diagnostic & coding manual[END_REF].
Finally, the Chervin's French translated pediatric sleep questionnaires were sent to the treated children's parents, 2 to 6 years after the end of the treatment [START_REF] Chervin | Pediatric sleep questionnaire (PSQ): validity and reliability of scales for sleep-disordered breathing, snoring, sleepiness, and behavioral problems[END_REF][START_REF] Chervin | Pediatric sleep questionnaire: prediction of sleep apnea and outcomes[END_REF]. This questionnaire is composed of 22 items which can be answered by Yes / No / Do not know. Parents were asked if they noticed moments of apnea, snoring, mouth breathing, daytime sleepiness or behavioral issues after their child's treatment. According to Chervin et al., a score superior to 8 (i.e. 8 positive answers) demonstrated a reoccurrence of the OSAS symptoms, with a sensibility and a specificity of 85% and 86% respectively [START_REF] Chervin | Pediatric sleep questionnaire (PSQ): validity and reliability of scales for sleep-disordered breathing, snoring, sleepiness, and behavioral problems[END_REF][START_REF] Chervin | Pediatric sleep questionnaire: prediction of sleep apnea and outcomes[END_REF].
Any remarks reported by the parents on their child's sleep quality and feelings during the consultations were also collected.
The statistical analyses
Statistical analyses were performed with RStudio [START_REF] Core | R: A language and environment for statistical computing[END_REF] (version 3.3.2) with a significance threshold fixed at 5%. Data are expressed as means ± standard deviation. The significance of the evolution of the polysomnographic parameters collected pretreatment versus posttreatment was evaluated with Student's T-Tests for paired groups.
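For illustration only, a paired comparison of this kind can be reproduced as follows (the study itself used R; the values below are made-up numbers, not the study's data):

from scipy import stats

ahi_pre = [6.2, 3.1, 8.4, 2.5, 5.0, 11.3, 4.2, 7.0]     # hypothetical pre-treatment AHI values
ahi_post = [1.1, 0.8, 2.9, 0.7, 1.5, 4.0, 0.9, 2.2]     # hypothetical post-treatment AHI values
t_stat, p_value = stats.ttest_rel(ahi_pre, ahi_post)    # paired Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")           # difference significant if p < 0.05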
Results
The evolution of the polysomnographic data is summarized in Table 1, associated to the Student's T-Tests results assessing the significance of the observed pre-and post-treatment differences.
All children presented a pathological AHI (≥ 1). For 31% of the sample, the OSAS was moderate (5 ≤ AHI < 10) and for 7 children it was severe (AHI ≥ 10). After the treatment, the AHI systematically decreased below 5, except for 2 children for whom it nevertheless remained close to this threshold (from 7 to 5.7 and from 8.3 to 5.1, respectively). This index was even below 1 for 41% of the sample. Besides this significant decrease of the AHI values, their post-treatment spread was also reduced, attesting to a normalization of this parameter.
The desaturation rate also decreased, even though not significantly. However, for the majority of the patients, this index did not fall below the pathological threshold (ODR < 1.4/h) at the end of the treatment: an improvement was observed for 48% of the sample.
Sleep became less fragmented following the treatment: for 76% of the sample, the AI decreased <11. This improvement of the sleep quality was more particularly characterized by the significant diminution and normalization of the SRRD (AISRRD <1). However, the PLM syndrome did not evolve for 82% of the sample. The AIPLM even increased ≥5 for 22% of the sample.
Finally, 75 of the 103 treated children answered to the questionnaires (i.e. 73% of the total study sample). These results are summarized in Table 2. The mean score of the questionnaire was 4.16.
Among these 75 children, only 12 (16%) had a final score superior to 8, marking a reoccurrence of the OSAS symptoms. In more than half of the cases (68%), parents answered YES to question n°10, reporting behavioral problems with easy distraction and a generally excited child. For almost 50% of the cases, parents also reported that, despite the treatment, their child tended to breathe orally (question n°3).
The largest number of "Do Not Know" answers (around 20%) was counted for the questions referring to the apneic events observed during sleep (question n°2) and to the oral or nasal breathing mode of the child (question n°3).
Discussion
The present study demonstrated that the simultaneous rapid maxilla expansion and nocturne mandibular advancement effectively modified the oral space in the three spatial dimensions (Fig. 3) associated with a significant improvement of the sleep breathing quality. A significant increase in the oropharyngeal space was also observed after the treatment. As cephalometric changes induced by such treatment strategy has already been described in the literature, they will not be discussed here [START_REF] Galeotti | Effects of simultaneous palatal expansion and mandibular advancement in a child suffering from OSA[END_REF][START_REF] Rădescu | Effects of rapid palatal expansion (RPE) and twin block mandibular advancement device (MAD) on pharyngeal structures in Class II pediatric patients from Cluj-Napoca, Romania[END_REF].
Indeed, all the collected polysomnographic parameters, which assess the severity of the OSAS symptoms, decreased following the orthopedic treatment. This decrease was even significant for the main pathologic criteria: the AHI which decreased ≤ 5 for 98% of the sample.
Moreover, there was a complete recovery (AHI < 1, ODR <1.4/h and AISRRD < 1) for 14% of the sample. This significant diminution of the AHI at the end of an orthodontic treatment was already observed by Villa et al. who performed during 12 months a rapid maxillary expansion therapy on 40 patients aged between 4-10 years old and initially showing clinical signs of malocclusion and an AHI >1 [START_REF] Villa | Rapid maxillary expansion outcomes in treatment of obstructive sleep apnea in children[END_REF]. In the present study, the same outcomes were obtained with a shorter treatment period: 9 months. This short length of intervention may be explained by the early treatment intervention, when the bone is plastic, and growth is maximal. In another study on an oral jaw-positioning appliance for the management of 32 OSAS children with clinical signs of dysgnathia, Villa et al. [START_REF] Villa | Randomized controlled study of an oral jaw-positioning appliance for the treatment of obstructive sleep apnea in children with malocclusion[END_REF] also reported a decrease of at least 50% in the AHI for 64% of the sample. According to this definition, the custom-made oral device used in the present study achieved a better success rate of 79%. This demonstrate the importance of the combination of two effective technics (rapid maxillary expansion and nocturnal mandibular advancement) so their effects are optimized.
A few months after the end of the treatment, 59% of the treated children presented a residual OSAS with an AHI which remained >1. Pirelli et al. [START_REF] Pirelli | Rapid maxillary expansion in children with obstructive sleep apnea syndrome[END_REF], who also performed a 4-month rapid maxillary expansion on 31 OSAS children, obtained for their part a systematic decrease of the AHI to below 1. However, their sample included children showing orofacial anomalies restricted to the maxilla, whereas the present study analyzed children with Class II malocclusion. Associated with this significant decrease of the AHI, a significant improvement of the sleep quality was also noticed, sleep being less fragmented: most particularly, there was a significant decrease of the arousals related to respiratory disorders (AISRRD). The oxygen desaturation rate (ODR) and the global arousal index (AI) also decreased below the pathological threshold (<1.4 and <11 respectively) for 48% and 76% of the sample respectively. However, arousals related to Periodic Leg Movements (AIPLM) were still omnipresent following the treatment.
As suggested by several authors, this emphasizes the importance to identify the cause of airways obstruction: the pathophysiology of the pediatric OSAS is complex, the respiratory disorders being induced by a combination of factors [START_REF] Woodson | Physiology of sleep disordered breathing[END_REF].
Despite this significant improvement of the respiratory status during sleep, demonstrated by the significant decrease of both the AHI and AISRRD, the sleep quality was not totally restored. Indeed, even though the ODR and the AI did decrease after the treatment, this decrease was not significant. Nevertheless, parents reported during the consultations an immediate improvement of the sleep quality of their child. These results may highlight the importance of a long-term follow-up, as an improvement of the pediatric OSAS symptoms may not appear as soon as a few months after the end of the treatment: normal cephalometric growth recovery is a slow process. Besides, according to the pediatric sleep questionnaire results, only 17% of the present study sample showed a reoccurrence of the OSAS symptoms several years after the end of the treatment.
Thus, these outcomes may suggest that when the orofacial anomaly affects both the maxilla and the mandible, the normal ventilation could be progressively and permanently restored. It also highlights the importance of the intervention of a multidisciplinary team in the management of pediatric OSAS. Longitudinal studies should still be performed to observe the evolution of the quantitative sleep parameters several years after the end of the treatment. This significant improvement of OSAS symptoms through the simultaneous maxillary expansion and mandibular advancement has been already observed [START_REF] Galeotti | Effects of simultaneous palatal expansion and mandibular advancement in a child suffering from OSA[END_REF][START_REF] Rădescu | Effects of rapid palatal expansion (RPE) and twin block mandibular advancement device (MAD) on pharyngeal structures in Class II pediatric patients from Cluj-Napoca, Romania[END_REF]. However, their samples were smaller than the sample of the present study and were mainly focused on the cephalometric effects of the orthopedic treatment.
The present study did not include a control group of untreated subjects with Class II malocclusion and OSAS, because of the ethical concern of not treating patients during their pubertal growth spurt. Indeed, not treating OSAS children may result in various problems such as behavioral disorders or serious morbidities such as growth failure, cor pulmonale, hypertension … [START_REF] Chang | Obstructive sleep apnea syndrome in children: Epidemiology, pathophysiology, diagnosis and sequelae[END_REF]. However, considering the literature, the authors may assert the real impact of this therapeutic strategy. Indeed, Pavoni et al. observed more significant changes in mandibular length, airway dimensions or hyoid and tongue position in children treated with an appliance than in untreated subjects. Regarding the spontaneous resolution of OSAS symptoms, Chervin et al. observed it for 42% of the 5 to 9 years old children included in the ChildHood Adenotonsillectomy Trial (CHAT), but without defining the cephalometric status of these children [START_REF] Chervin | Prognosis for Spontaneous Resolution of OSA in Children[END_REF].
Finally, this innovative therapeutic strategy, used at an early stage of the child's growth and associated with myofunctional rehabilitation exercises, significantly increased the OSAS child's oral space in the three spatial dimensions. Moreover, the present outcomes suggest the effectiveness of the custom-made oral appliance for OSAS children management since it prevents airflow obstruction by enlarging the upper airways, and by promoting the lingual advancement during the night. These results are in accordance with previous studies investigating a similar therapeutic strategy [START_REF] Galeotti | Effects of simultaneous palatal expansion and mandibular advancement in a child suffering from OSA[END_REF][START_REF] Rădescu | Effects of rapid palatal expansion (RPE) and twin block mandibular advancement device (MAD) on pharyngeal structures in Class II pediatric patients from Cluj-Napoca, Romania[END_REF]. Thus, simultaneously treating OSAS children with a rapid maxillary expansion and a nocturne mandibular advancement should be recommended.
Table 1
Mean and standard deviation of the polysomnographic parameters collected before and after the treatment, associated to the Student's T-Tests p.value assessing the significance of the differences. Shaded cases correspond to p.values < 5%
Parameter
Figures caption: Fig. 1, Fig. 2, Fig. 3
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Declaration of Interest Statement
The authors report no conflict of interest.
Ethical approval
According to the French law n°2012-300 of March 5 th , 2012 on researches involving human beings, because this study was retrospective and non-interventional, and because data were anonymized, no signed consent was required. This study was reviewed and approved by the Institutional Review Board of Aix-Marseille University (no. 2019-14-03-001).
Table 2
Summary of the replies to the post-treatment pediatric sleep questionnaires, as perceived by the parents (for more information about the questions, please refer to the article of Chervin |
03178033 | en | [
"spi.meca"
] | 2024/03/04 16:41:22 | 1983 | https://hal.science/hal-03178033/file/B1-1.pdf | Roger Ohayon
Roger Valid Symmetric
Keywords: |
03586796 | en | [
"info"
] | 2024/03/04 16:41:22 | 2021 | https://hal.science/hal-03586796/file/S0020025521004436.pdf | Zhi Lu
Jin-Kao Hao
email: [email protected]
Una Benlic
David Lesaint
Iterated Multilevel Simulated Annealing for Large-Scale Graph Conductance Minimization
Keywords: Combinatorial optimization, large-scale optimization, metaheuristics, heuristics, graph theory
Given an undirected connected graph G = (V, E) with vertex set V and edge set E, the minimum conductance graph partitioning problem is to partition V into two disjoint subsets such that the conductance, i.e., the ratio of the number of cut edges to the smallest volume of two partition subsets is minimized. This problem has a number of practical applications in various areas such as community detection, bioinformatics, and computer vision. However, the problem is computationally challenging, especially for large problem instances. This work presents the first iterated multilevel simulated annealing algorithm for large-scale graph conductance minimization. The algorithm features a novel solution-guided coarsening method and an effective solution refinement procedure based on simulated annealing. Computational experiments demonstrate the high performance of the algorithm on 66 very large real-world sparse graphs with up to 23 million vertices. Additional experiments are presented to get insights into the influences of its algorithmic components. The source code of the proposed algorithm is publicly available, which can be used to solve various real world problems.
Introduction
Graph partitioning problems are popular and general models that are frequently used to formulate numerous practical applications in various domains.
Given an undirected graph with a vertex set and an edge set, graph partitioning is to divide the vertex set into two or more disjoint subsets while meeting a defined objective. For example, the popular NP-hard (non-deterministic polynomial-time hardness) 2-way graph partitioning problem is to minimize the number of edges crossing the two partition subsets [START_REF] Garey | Computers and Intractability: A Guide to the Theory of NP-Completeness[END_REF].
The minimum conductance graph partitioning problem (MC-GPP) studied in this work is another typical graph partitioning problem stated as follows. Let G = (V, E) be an undirected connected graph with vertex set V and edge set E. A cut in G is a partition of its vertex set V into two disjoint subsets S and S = V \ S, while the cut-set is the set of edges that have one endpoint in each subset of the partition. By convention, a cut is denoted by s = (S, S) and its cut-set is denoted by cut(s).
Given a cut s = (S, S̄) from the search space Ω composed of all the possible cuts of a graph G, its conductance Φ(s) is the ratio between the number of cut edges and the smallest volume of the two partition subsets, i.e., Φ(s) = |cut(s)| / min{vol(S), vol(S̄)}, where the volume vol(·) of a subset is the sum of the degrees of its vertices.
The minimum conductance graph partitioning problem (MC-GPP) is then to determine the conductance of an arbitrary graph G, i.e., find the cut s * in G such that Φ(s * ) ≤ Φ(s) for any s in Ω.
(MC-GPP)    s* = arg min_{s ∈ Ω} Φ(s)
In terms of the computational complexity theory, MC-GPP is known to be an NP-hard problem [START_REF] Šíma | On the NP-completeness of some graph cluster measures[END_REF], and thus computationally challenging. From a practical perspective, MC-GPP has a number of relevant applications in different fields such as clustering [START_REF] Cheng | A divide-and-merge methodology for clustering[END_REF], community detection [START_REF] Fortunato | Community detection in graphs[END_REF], bioinformatics [START_REF] Voevodski | Finding local communities in protein networks[END_REF], computer vision [START_REF] Shi | Normalized cuts and image segmentation[END_REF], and large graph size estimation [START_REF] Lu | Uniform random sampling not recommended for large graph size estimation[END_REF]. As the result, designing effective solution methods for MC-GPP is both challenging and important.
Given the relevance of MC-GPP, considerable efforts have been dedicated to developing various algorithms for solving the problem, which mainly fall into three families: exact, approximation, and heuristic algorithms. Time-efficient exact algorithms were introduced in [START_REF] Hochbaum | Polynomial time algorithms for ratio regions and a variant of normalized cut[END_REF] to solve a variant of the conductance problem and other ratio region problems in the context of image segmentation. Later, a polynomial time algorithm for the Rayleigh ratio minimization problem on discrete variables was presented in [START_REF] Hochbaum | A polynomial time algorithm for rayleigh ratio on discrete variables: Replacing spectral techniques for expander ratio, normalized cut, and cheeger constant[END_REF]. The first (and considered weak) approximation algorithm for MC-GPP was presented in [START_REF] Cheeger | A lower bound for the smallest eigenvalue of the laplacian[END_REF]. Improved approximation algorithms were studied including an O(log(|V |))-approximation algorithm [START_REF] Leighton | Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms[END_REF], and an O(√(log(|V |)))-approximation algorithm [START_REF] Arora | Expander flows, geometric embeddings and graph partitioning[END_REF].
Several algorithms with performance guarantees for conductance minimization were also proposed for large-scale graph clustering in social and information networks. For instance, local algorithms were presented in [START_REF] Spielman | A local clustering algorithm for massive graphs and its application to nearly linear time graph partitioning[END_REF] for clustering massive graphs based on the conductance criterion. Another local algorithm was introduced in [START_REF] Zhu | A local algorithm for finding well-connected clusters[END_REF] that is able to find well-connected clusters for large graphs in terms of the clustering accuracy and the conductance of the output set. The conductance measure was used to characterize the "best" possible community in [START_REF] Leskovec | Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters[END_REF] where approximation algorithms with performance guarantee were proposed for the related graph partitioning problem over a wide range of size scales. Although these algorithms can theoretically provide provable performance guarantees on the quality of the obtained solutions, they are either designed for specific cases or require large computation time for large graphs due to the intrinsic intractability of MC-GPP.
To handle large problems that cannot be solved by exact methods, heuristic methods provide an interesting alternative approach to produce high-quality (not necessarily optimal) solutions within a reasonable amount of computation time. In particular, a number of generic metaheuristics (see e.g., [START_REF] Boussaïd | A survey on optimization metaheuristics[END_REF]) are available including single-solution based methods (e.g., simulated annealing and tabu search) and population-based methods (e.g., evolutionary algorithms and particle swarm optimization). These methods have been used to tackle various large and computationally complex problems [START_REF]Handbook of Metaheuristics[END_REF]. However, the success of these methods depends strongly on a careful design and dedicated adaptations of the methods to the problem at hand [START_REF] Blum | Metaheuristics in combinatorial optimization: Overview and conceptual comparison[END_REF]. For MC-GPP, several heuristic and metaheuristic algorithms have been proposed, which are reviewed as follows.
A max-flow quotient-cut improvement algorithm (MQI) was introduced in [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF] to refine an initial cut of the Metis graph partitioning heuristic [START_REF] Karypis | Metis 5.1.0: Unstructured graphs partitioning and sparse matrix ordering system[END_REF]. This work was extended in [START_REF] Andersen | An algorithm for improving graph partitions[END_REF], which solves a sequence of minimum cut problems to find a larger-than-expected intersection with lower conductance. In [START_REF] Lim | MTP: discovering high quality partitions in real world graphs[END_REF], a minus top-k partition (MTP) method was studied for discovering a global balanced partition with low conductance. In [START_REF] Van Laarhoven | Local network community detection with continuous optimization of conductance and weighted kernel k-means[END_REF], a continuous optimization approach was proposed for conductance minimization in the context of local network community detection. The first metaheuristic algorithms for MC-GPP including simple local search and basic memetic algorithms were studied in [START_REF] Chalupa | Hybrid bridge-based memetic algorithms for finding bottlenecks in complex networks[END_REF], which were used for finding bottlenecks in complex networks. A stagnation-aware breakout tabu search algorithm (SaBTS) was presented in [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF], which combines a dedicated tabu search procedure to discover high-quality solutions and a self-adaptive perturbation procedure to overcome hard-to-escape local optimum traps. Computational results were reported on real world graphs with up to 500,000 vertices. Recently, a hybrid evolutionary algorithm (MAMC) with a powerful local search and a quality-and-diversity based pool management was introduced in [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF], which reported new results on large graphs with up to 23 million vertices. These studies greatly contributed to better solving the given problem. However, two main limitations can be identified in the existing approaches, which motivate the current work. First, these approaches have difficulties in robustly and consistently producing high-quality solutions for large-scale graphs with more than several million vertices. Second, they may require a substantial amount of computation time to reach satisfactory solutions.
On the other hand, the multilevel approach [START_REF] Walshaw | Multilevel refinement for combinatorial optimisation problems[END_REF] is known to be a powerful framework for large graph partitioning in various domains (e.g., complex network analysis [START_REF] Valejo | A critical survey of the multilevel method in complex networks[END_REF]). However, this approach has not been investigated for solving MC-GPP. This work aims to fill in the gap by presenting an effective multilevel algorithm for MC-GPP that is able to produce high-quality solutions within a reasonable time frame for large-scale sparse graphs arising from real world applications. The contributions can be summarized as follows.
First, an iterated multilevel simulated annealing algorithm (IMSA) is introduced for MC-GPP. The algorithm consists of a novel solution-guided coarsening method and a powerful local refinement procedure to effectively sample the search space of the problem.
Second, extensive computational assessments are presented on three sets of 66 very large real-world benchmark instances (including 56 graphs from the 10th DIMACS Implementation Challenge and 10 graphs from the Network Data Repository online, with up to 23 million vertices). The computational experiments indicate a high competitiveness of the proposed IMSA algorithm compared to the existing state-of-the-art algorithms.
Third, the code of the IMSA algorithm will be publicly available, which can help researchers and practitioners to better solve various practical problems that can be formulated as MC-GPP.
The remainder of the paper is structured as follows. Section 2 presents the IMSA algorithm. Section 3 shows the computational studies and comparisons between the proposed IMSA algorithm and state-of-the-art algorithms. Section 4 provides analyses of the key algorithmic components. Section 5 presents concluding remarks and perspectives for future research.
Iterated multilevel simulated annealing
This section presents the proposed iterated multilevel simulated annealing algorithm for MC-GPP. After illustrating its general algorithmic framework, the key algorithmic components are described.
General framework
The multilevel approach is a general framework that has shown to be very successful in tackling the classic graph partitioning problem. Generally, this approach consists of three basic phases (coarsening, initial partitioning, and uncoarsening) [START_REF] Hendrickson | A multi-level algorithm for partitioning graphs[END_REF]. During the coarsening phase, the given graph is successively reduced to obtain a series of coarsened graphs with a decreasing number of vertices. An initial partition is then generated for the coarsest graph. Finally, during the uncoarsening phase, the coarsened graphs are successively unfolded in the reverse order of the coarsening phase. For each uncoarsened graph, the partition of the underlying coarsened graph is projected back to the uncoarsened graph and then improved by a refinement procedure. This process stops when the input graph is recovered and its partition is refined.
In [START_REF] Walshaw | Multilevel refinement for combinatorial optimisation problems[END_REF], Walshaw introduced iterated multilevel partitioning (analogous to the use of V-cycles in multigrid methods [START_REF] Trottenberg | Multigrid[END_REF]) where the coarsening-uncoarsening process is iterated and the partition of the current iteration is used for the whole coarsening phase of the next iteration.
The proposed IMSA algorithm for MC-GPP is based on the ideas described above and includes two original features: 1) a solution-guided coarsening method, and 2) a powerful refinement procedure which is applied during both the coarsening and the uncoarsening phase. Fig. 1 illustrates the iterated multilevel framework adopted by the proposed IMSA algorithm, while the entire algorithmic framework is presented in Algorithm 1. Informally, IMSA performs a series of V-cycles, where each V-cycle is composed of a coarsening phase and an uncoarsening phase, mixed with local refinement for each intermediate (coarsened and uncoarsened) graph.
It is worth noting that IMSA requires a seeding partition s 0 of the input graph to initiate its first iteration (the first V-cycle). Generally, the seeding partition can be provided by any means. For instance, for the experimental studies reported in Section 3, the (fast) MQI algorithm [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF] was adopted. For each subsequent iteration, the best partition found during the previous V-cycle then serves as the new seeding partition. The whole process terminates when a given stopping condition (e.g., cutoff time limit, maximum number of V-cycles) is reached and the global best partition found during the search is returned.
Solution-guided coarsening procedure
Given a graph G = (V, E) (renamed as G 0 = (V 0 , E 0 )) and the seeding partition s 0 , the solution-guided coarsening phase progressively transforms G 0 into smaller intermediate graphs G i = (V i , E i ) such that |V i | > |V i+1 | (i = 0, . . . , m -1), until the last coarsened graph G m becomes sufficiently small (i.e., the number of vertices in V m is below a given threshold ct). During the coarsening process, all intermediate graphs are recorded for the purpose of uncoarsening.
Generally, to generate a coarsened graph G i+1 from G i , the coarsening process performs two consecutive steps: an edge matching step and an edge contraction step. For each V-cycle, the initial graph G 0 = (V 0 , E 0 ) is supposed to have a "unit weight" for all the vertices and edges.
The edge matching step aims to find an independent set of edges M ⊂ E i such that the endpoints of any two edges in M are not adjacent. For this purpose, IMSA adopts the fast Heavy-Edge Matching (HEM) heuristic [START_REF] Karypis | A fast and high quality multilevel scheme for partitioning irregular graphs[END_REF] with a time complexity of O(|E|). HEM considers vertices in a random order, and matches each unmatched vertex v with its unmatched neighbor vertex u such that 1) v and u are in the same subset of the current partition s i ; and 2) edge {v, u} has the maximum weight over all edges incident to v (ties are broken randomly). In this work, edge matching is guided by the current partition s i of G i in the sense that the cut edges in the cut-set cut(s i ) are ignored and only vertices of the same partition subset are considered for matching.

Algorithm 1 Iterated multilevel simulated annealing (IMSA) for MC-GPP.
Require: Graph G = (V, E), seeding partition s 0 , coarsening threshold ct.
Ensure: The best partition s * found during the search.
The edge contraction step collapses the endpoints of each edge {v a , v b } in M to form a new vertex v a + v b in the coarsened graph G i+1 , while vertices which are not endpoints of any edge of M are simply copied over to G i+1 . For the new vertex v a + v b ∈ V i+1 , its weight is set to be the sum of the weights of v a and v b . The edge between v a and v b is removed, and the edges incident to v a and v b are merged to form a new edge in E i+1 with a weight that is set to be the sum of the weights of the merged edges.
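As an illustration of the two coarsening steps just described, the following Python sketch applies a heavy-edge matching restricted to vertices of the same partition subset and then contracts the matched edges. The function coarsen_once and its inputs (edge list, edge weights, vertex weights, and a vertex-to-subset map) are hypothetical names introduced only for this example; the snippet is a simplified stand-in for the procedure, not the authors' code.

import random

def coarsen_once(edges, eweights, vweights, part):
    """One matching + contraction step; 'part' maps each vertex to its subset (0 or 1)."""
    # Build a weighted adjacency structure.
    adj = {}
    for (u, v), w in zip(edges, eweights):
        adj.setdefault(u, {})
        adj.setdefault(v, {})
        adj[u][v] = adj[u].get(v, 0) + w
        adj[v][u] = adj[v].get(u, 0) + w
    # Heavy-edge matching: visit vertices in a random order, skip cut edges.
    matched, M = set(), []
    for v in random.sample(list(adj), len(adj)):
        if v in matched:
            continue
        cands = [u for u in adj[v] if u not in matched and part[u] == part[v]]
        if cands:
            u = max(cands, key=lambda x: adj[v][x])   # heaviest incident edge
            M.append((v, u))
            matched.update((v, u))
    # Contraction: merge each matched pair into a single coarse vertex.
    rep = {v: v for v in adj}
    for v, u in M:
        rep[u] = v                                    # u is absorbed into v
    coarse_vw = {}
    for v in adj:
        coarse_vw[rep[v]] = coarse_vw.get(rep[v], 0) + vweights.get(v, 1)
    coarse_ew = {}
    for (u, v), w in zip(edges, eweights):
        a, b = rep[u], rep[v]
        if a != b:
            key = tuple(sorted((a, b)))
            coarse_ew[key] = coarse_ew.get(key, 0) + w
    return coarse_ew, coarse_vw

# Tiny example in the spirit of Fig. 2 (unit weights, partition {a,b,h} vs {c,d,e,f,g}).
E = [("a","b"), ("a","h"), ("b","h"), ("b","c"), ("c","d"), ("d","e"), ("e","f"), ("f","g"), ("g","h")]
part = {x: 0 for x in "abh"}
part.update({x: 1 for x in "cdefg"})
print(coarsen_once(E, [1] * len(E), {}, part))

Because vertices are visited in a random order, repeated runs may produce different (but equally valid) matchings, which mirrors the randomized nature of HEM.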
Once the coarsened graph G i+1 is created, the partition s i of G i is projected onto G i+1 , followed by improvement with the local refinement procedure. The improved partition is then used to create the next coarsened graph.

Fig. 2 illustrates how the first coarsened graph is created from an initial graph G 0 with 8 vertices (unit weight for both vertices and edges) and the partition s 0 = ({a, b, h}, {c, d, e, f, g}) (indicated by the red dashed line). The edge matching step first uses the HEM heuristic to identify the set of independent edges M = ({a, b}, {c, d}, {e, f }) guided by s 0 . Then for each edge in M , say {a, b}, its endpoints are merged to form a new vertex a + b in G 1 with vertex weight w(a + b) = w(a) + w(b). The edge {a, b} is removed, while the edges {a, h} and {b, h} that are incident to a and b, respectively, are merged to form a new edge {a+b, h} in G 1 with edge weight w({a+b, h}) = w({a, h}) + w({b, h}) = 2. The same operations are performed to merge vertices c and d, and e and f . After merging all the vertices involved in the edges of M , the remaining vertices h and g, which are not incident to any edge of M , are simply copied to G 1 , completing the new coarsened graph G 1 .
Uncoarsening procedure
In principle, the uncoarsening phase performs the opposite operations of the coarsening phase and successively recovers the intermediate graphs G i (i = m, m-1, . . . , 0) in the reverse order of their creations. To recover G i from G i+1 , each merged vertex of G i+1 is unfolded to obtain the original vertices and the associated edges of G i . For an illustrative example, the same process applies as presented in Fig. 2 but with the reversed direction of each transition arrow. In practice, each corresponding intermediate graph recorded during coarsening is just restored.
For each recovered graph G i , the partition of G i+1 is projected back to G i and is further improved by the local refinement procedure. The uncoarsening process continues until the initial graph G 0 is recovered. At this point, IMSA terminates the current V-cycle and is ready to start the next V-cycle.
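Continuing the illustrative sketch above, projecting a partition back to the finer graph only requires the vertex-merging map recorded during contraction (below called rep, mapping each fine vertex to its coarse representative); again, this is a toy illustration rather than the authors' code.

def project_back(coarse_partition, rep):
    """Assign each fine vertex the subset of its coarse representative.
    coarse_partition maps coarse vertices to 0/1; rep maps fine vertices to coarse vertices."""
    return {v: coarse_partition[rep[v]] for v in rep}

# Example: a and b were merged into the coarse vertex "a"; both inherit its subset.
rep = {"a": "a", "b": "a", "c": "c", "d": "c", "h": "h"}
print(project_back({"a": 0, "c": 1, "h": 0}, rep))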
Generally, the partition quality is progressively improved throughout the uncoarsening process. Precisely, the partition quality of G i is usually better than that of G i+1 since there are more degrees of freedom for the local refinement procedure of an uncoarsened graph.
Local refinement with simulated annealing
For local refinement, IMSA mainly uses a dedicated simulated annealing procedure, which is complemented by an existing tabu search procedure.
Simulated annealing refinement procedure
Multilevel algorithms typically use pure descent algorithms for solution refinement at each level. This ensures fast convergence towards a local optimum that may be of mediocre quality compared to a global optimum. To reinforce the search capacity of the IMSA algorithm, the powerful simulated annealing method [START_REF] Kirkpatrick | Optimization by simulated annealing[END_REF] is used, which has shown its effectiveness on the popular 2-way graph partitioning problem [START_REF] Johnson | Optimization by simulated annealing: An experimental evaluation; part I, graph partitioning[END_REF]. Moreover, a solution sampling strategy specially designed for MC-GPP (Section 2.4.2) is adopted to further ensure an effective examination of candidate solutions. To avoid search stagnation in a local optimum, the SA based refinement is complemented with a tabu search procedure (see Section 2.5).
The main scheme of the SA procedure is presented in Algorithm 2. Specifically, SA performs a number of search rounds (lines 3-21) with different temperature values to improve the current solution s (i.e., a cut (S, S̄)). At the start of each search round, the move counter mv is initialized to 0 (line 4). The procedure enters the 'for' loop to carry out saIter iterations (saIter is a parameter) with the current temperature T (initially set to T 0 ). At each iteration, SA randomly samples a neighbor solution s' of s from a set of constrained candidates (lines 5-19). Given a set of critical vertices CV (s) (see Section 2.4.2), this is achieved by displacing a random vertex v ∈ CV (s) from its current subset to the opposite subset (lines 6-7, where s' ← s ⊕ Relocate(v) denotes this move operation). The new solution s' then replaces the current solution s according to the following probability (lines 8-15),
Pr{s ← s'} = 1, if δ < 0;  e^{-δ/T}, if δ ≥ 0.    (3)
where δ = Φ(s') − Φ(s) is the conductance variation (also called the move gain) of transitioning from s to s'.
A better neighbor solution s' in terms of the conductance (δ < 0) is always accepted as the new current solution s. Otherwise (δ ≥ 0), the transition from s to s' takes place with probability e^{-δ/T}. After each solution transition, the move counter mv is incremented. The best solution s best is updated each time a better solution is found (lines 16-18). Once the number of sampled solutions reaches saIter, the temperature T is cooled down by a constant factor θ ∈ [0, 1] (θ is a parameter, line 20), and SA proceeds to the next search round with this lowered new temperature. The termination criterion (the frozen state) of SA is met when the acceptance rate becomes smaller than a threshold ar (ar is a parameter) for 5 consecutive search rounds, where the acceptance rate is defined as mv/saIter (line 21). Upon termination, the procedure returns the best recorded solution s best (line 22).
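The acceptance rule of Equation (3) and the geometric cooling schedule can be exercised on a toy instance with the following self-contained Python sketch (illustrative only; for simplicity the conductance is recomputed from scratch at every move instead of using the incremental evaluation of Section 2.4.2, and a hard cap on the number of rounds is added so the toy always terminates).

import math, random

def conductance(edges, S, vertices):
    deg = {v: 0 for v in vertices}
    cut = 0
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        cut += (u in S) != (v in S)
    volS = sum(deg[v] for v in S)
    return cut / min(volS, sum(deg.values()) - volS)

def sa_refine(edges, vertices, S, T0=0.05, theta=0.98, sa_iter=200, ar=0.05):
    S, best = set(S), set(S)
    T, low_rounds = T0, 0
    for _round in range(100):                  # hard cap so the toy always terminates
        mv = 0
        for _ in range(sa_iter):
            cv = {x for u, v in edges if (u in S) != (v in S) for x in (u, v)}
            v = random.choice(sorted(cv))      # sample a critical vertex
            S2 = S ^ {v}                       # relocate v to the opposite subset
            if not S2 or S2 == set(vertices):  # keep both subsets non-empty
                continue
            delta = conductance(edges, S2, vertices) - conductance(edges, S, vertices)
            if delta < 0 or random.random() < math.exp(-delta / T):
                S, mv = S2, mv + 1
                if conductance(edges, S, vertices) < conductance(edges, best, vertices):
                    best = set(S)
        T *= theta                             # geometric cooling
        low_rounds = low_rounds + 1 if mv / sa_iter < ar else 0
        if low_rounds >= 5:                    # frozen state reached
            break
    return best

verts = list("abcdef")
E = [("a","b"), ("b","c"), ("c","d"), ("d","e"), ("e","f"), ("a","c"), ("d","f")]
print(sa_refine(E, verts, {"a", "b", "c"}))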
One critical issue for the SA procedure concerns the initial temperature T 0 . While a high T 0 leads to acceptance of many deteriorating uphill moves, a low initial temperature will have the same impact as a pure descent procedure, causing the search to easily become trapped in a local optimum. To identify a suitable initial temperature, a simple binary search is used to provide a tradeoff between these two extremes. Starting from an initial temperature range T ∈ [1.0e-20, 1.0], T 0 is initialized with the median value from this range, i.e., T 0 = (1.0 + 1.0e-20)/2. If the last round of the search using the current value of T 0 resulted in an acceptance rate of 50%, the value of T 0 is left unchanged. Otherwise, the value for T 0 used in the next round of the search is set to be the median value of the lower or the upper temperature range depending on whether the acceptance rate is higher or lower than 50%. This process continues until a suitable initial temperature T 0 is found for a given problem instance. The other parameters (saIter, θ, and ar) of SA are investigated in Section 4.4.
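The bisection logic for calibrating T 0 can be sketched as follows; the acceptance_rate argument is a stand-in for actually running one SA round at the given temperature, and the lambda passed in the example is an artificial monotone model used only to make the snippet executable.

def calibrate_T0(acceptance_rate, lo=1.0e-20, hi=1.0, target=0.5, tol=0.02, max_steps=60):
    """Bisect the temperature range until one SA round accepts about 50% of the sampled moves."""
    T0 = (lo + hi) / 2.0
    for _ in range(max_steps):
        r = acceptance_rate(T0)   # in IMSA this would be the rate observed in one SA round
        if abs(r - target) <= tol:
            break
        if r > target:
            hi = T0               # too many acceptances: lower the temperature range
        else:
            lo = T0               # too few acceptances: raise the temperature range
        T0 = (lo + hi) / 2.0
    return T0

# Artificial acceptance model (monotone in T), only to make the sketch executable.
print(calibrate_T0(lambda T: T / (T + 0.01)))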
Solution sampling with critical vertices
To generate a new candidate solution s' from the current solution s, the popular Relocate operator is applied that displaces a vertex from its current set to the opposite set. To avoid the generation of non-promising candidate solutions, the set of critical vertices is identified with respect to the current solution s. Specifically, let s = (S, S̄) be the current solution. Vertex v ∈ V is called a critical vertex if v is the endpoint of an edge in the cut-set cut(s) [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF]. Let CV (s) denote the set of all the critical vertices in the cut-set cut(s). Then, Relocate only operates on the vertices of CV (s). By constraining the Relocate operation to the critical vertices, the algorithm avoids sampling many non-promising candidate solutions, as demonstrated in [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF].
To ensure a fast computation of the conductance variation of a neighbor solution generated by Relocate, a streamlined incremental evaluation technique [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF] is adopted. Let s = (S, S̄) be the current solution and s' = (S', S̄') be the neighbor solution after relocating a vertex v of s from S to S̄. The conductance of s' can be evaluated in O(1) time by simply updating |cut(s')|, vol(S'), and vol(S̄') by |cut(s')| = |cut(s)| + deg_S(v) − deg_S̄(v), vol(S') = vol(S) − deg(v), and vol(S̄') = vol(S̄) + deg(v), where deg_S(v) (resp. deg_S̄(v)) is the number of vertices adjacent to v in S (resp. in S̄). Moreover, for each vertex w adjacent to v, deg_S(w) and deg_S̄(w) can be updated in O(1) time by deg_S(w) = deg_S(w) − 1 and deg_S̄(w) = deg_S̄(w) + 1.
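The update formulas can be verified directly. The standalone Python snippet below (illustrative only, not the authors' code) applies them for a single Relocate move on a small graph and checks the result against a from-scratch recomputation.

def cut_and_vols(edges, S):
    deg, cut = {}, 0
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        cut += (u in S) != (v in S)
    volS = sum(d for x, d in deg.items() if x in S)
    return cut, volS, sum(deg.values()) - volS

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
S = {"a", "b"}
v = "b"                                   # relocate b from S to the complement
cut, volS, volSbar = cut_and_vols(edges, S)
deg_v = sum(v in e for e in edges)
deg_in_S = sum((v in e) and ((e[0] if e[1] == v else e[1]) in S - {v}) for e in edges)
deg_in_Sbar = deg_v - deg_in_S
# Incremental update (O(1) given the degree bookkeeping):
cut_new = cut + deg_in_S - deg_in_Sbar    # |cut(s')| = |cut(s)| + deg_S(v) - deg_Sbar(v)
volS_new, volSbar_new = volS - deg_v, volSbar + deg_v
# From-scratch recomputation for verification:
assert (cut_new, volS_new, volSbar_new) == cut_and_vols(edges, S - {v})
print(cut_new / min(volS_new, volSbar_new))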
Additional solution refinement with tabu search
To further reinforce the solution refinement and go beyond the local optimum attained by the SA procedure, the constrained neighborhood tabu search (CNTS) procedure of [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF] is additionally applied. CNTS relies on the same constrained neighborhood defined by the critical vertices and employs a dynamic tabu list to explore additional local optima. CNTS requires two parameters: the maximum number of consecutive non-improving iterations of tabu search D, and the tabu tenure management factor α. Section 4.4 explains the procedure used to tune these parameters and analyzes their sensitivity. More details about CNTS can be found in [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF]. The use of CNTS is based on the preliminary observation that it can provide performance gains especially for the class of relatively small-sized instances with vertices less than one million (such as 'ok2010', 'va2010', 'nc2010', etc. See Appendix A, Tables A.2-A.4). Generally, the CNTS component can be safely disabled for large graphs without impacting the performance of the IMSA algorithm. In terms of solution refinement, CNTS is more focused on search intensification, as opposed to the more stochastic and diversified SA procedure. In practice, CNTS plays a complementary and secondary role by performing a much shorter search (determined by the parameter D) than SA during a round of IMSA.
Complexity of the proposed algorithm
The time complexity of the local refinement procedure is first considered. The SA procedure (Section 2.4) performs saIter iterations. At each iteration, SA first identifies the set of critical vertices CV (s), which can be achieved in O(|V | × deg max ) time, where deg max is the maximum degree of the graph. A candidate solution is then sampled in O(1) time. When a neighboring solution s' = (S', S̄') is obtained, the conductance variation is calculated in O(1) time. Moreover, deg_S(w) and deg_S̄(w) are updated in O(deg(v)) time. Thus, SA requires O(saIter × |V | × deg max ) time. For the CNTS procedure (Section 2.5), it can be achieved in O(D × |V | × deg max ) time [30]. Since the dominating part of the local refinement is the SA procedure, the time complexity of local refinement is O(saIter × |V | × deg max ).

The whole IMSA algorithm (Algorithm 1) is composed of a series of V-cycles. For each V-cycle, the solution-guided coarsening procedure with the HEM heuristic requires O(m × |E|) time, where m is the number of levels of each V-cycle. At each level, the partition of the intermediate graph is improved by the local refinement procedure with a time complexity of O(saIter × |V | × deg max ). The uncoarsening procedure requires O(1) time, because it simply restores each corresponding intermediate graph recorded during the coarsening process, followed by the local refinement procedure. Consequently, the total time complexity of one V-cycle of the IMSA algorithm is bounded by O(m × saIter × |V | × deg max ).

The space complexity of IMSA can be evaluated as follows. First, IMSA represents a graph with an adjacency list whose space complexity is O(|V | + |E|). For each level, IMSA keeps an auxiliary graph, requiring O(m × |E|) space for a V-cycle. Moreover, IMSA uses a table to maintain the sum of degrees of each vertex of G with O(m × |V |) space. Therefore, the total space complexity of the IMSA algorithm for each V-cycle is given by O(m × (|V | + |E|)). In practice, as long as the graph has a low or very low density, the space requirement is approximately linear in |V |. On the contrary, for dense and very dense (near-complete) graphs, the space requirement becomes quadratic in |V |.
Computational studies
This section is dedicated to a computational assessment of the proposed IMSA algorithm based on various benchmark instances.
Benchmark instances
The assessment was based on 66 very large benchmark instances with 54,870 to 23,947,347 vertices. The first 56 instances are very large graphs from The 10th DIMACS Implementation Challenge Benchmark1 , which were introduced for graph partitioning and graph clustering [START_REF] Bader | Graph partitioning and graph clustering[END_REF]. The remaining 10 instances are large real-world network graphs [START_REF] Rossi | The network data repository with interactive graph analytics and visualization[END_REF] from The Network Data Repository online2 . These 66 instances are divided into three sets: Small set (24 instances, |V | < 500,000), Medium set (25 instances, |V | ∈ [500,000, 5,000,000]), and Large set (17 instances, |V | > 5,000,000). Among these 66 instances, 29 DIMACS graphs and 10 massive network graphs have previously been used for experimental evaluations in [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF]. An additional 27 large-scale DIMACS benchmark graphs (with 114,599 to 21,198,119 vertices) were considered to better assess the scalability of the multilevel algorithm.
Experimental setting and reference algorithms
The proposed IMSA algorithm was implemented in C++3 and compiled using the g++ compiler with the "-O3" option. All the experiments were conducted on a computer with an AMD Opteron 4184 processor (2.8GHz) and 32GB RAM under the Linux operating system. This computer was previously used to perform experimental evaluations reported in [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF]. IMSA being a sequential algorithm, it was run on a single core of the AMD Opteron processor, as in [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF].
Table 1 shows the setting of the IMSA parameters, which can be considered as the default setting of the algorithm. The procedure to tune these parameters is explained in Section 4.4. For meaningful assessments, this default setting was consistently used throughout all the experiments presented in this work.
As shown in Section 3.3, IMSA achieves highly competitive results with this unique setting. Generally, the parameters can be fine-tuned to obtain improved results. Such a practice is useful when one seeks the best possible solution for a given graph.
To evaluate the performance of IMSA, comparisons are performed against three best performing MC-GPP algorithms from the literature: the two most recent metaheuristic algorithms (i.e., the hybrid evolutionary algorithm MAMC [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF] and the breakout local search algorithm SaBTS [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF]), as well as the popular and powerful max-flow algorithm MQI [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF]. To ensure a fair comparison, all the compared algorithms were run on the same computing platform mentioned above with their default parameters setting. Each algorithm was independently executed 20 times per instance with a cutoff time of 60 minutes per run. Exactly like SaBTS and MAMC, each run of IMSA was initialized with a seeding solution provided by the MQI algorithm.
Out of the 66 benchmark instances tested in this work, the results for 39 instances (29 DIMACS graphs and the 10 massive network graphs) for the three reference algorithms are available in [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF]. Since these results were obtained following the same experimental protocol as in the current work, they are directly used for the comparative studies. The compared algorithms were run only on the 27 remaining instances.
Table 2
Summary results reported by the IMSA algorithm and the three reference algorithms (MAMC [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF], SaBTS [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF], and MQI [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF]) on the three sets of 66 benchmark instances.
Computational results and comparisons with state-of-the-art algorithms
This section presents the computational results of the proposed IMSA algorithm, together with the results of the three reference algorithms (MAMC, SaBTS, and MQI) on the three sets of 66 benchmark instances.
Table 2 summarizes the overall comparison while the detailed results are listed in Appendix A (Tables A.2-A.4). Additionally, a global comparison is provided using the geometric mean metric [START_REF] Hauck | An evaluation of bipartitioning techniques[END_REF] in Table 3 (the smaller the better). Section 4.1 shows a time-to-target analysis to investigate the computational efficiency of the compared algorithms.
In Table 2, column 1 shows the benchmark sets with the number of instances in each set, while the subsequent columns show the number of instances on which IMSA reached a better (#Wins), worse (#Losses), or equal (#Ties) result in terms of each quality indicator. Furthermore, the last column provides the p-value of the Wilcoxon signed-rank test with a confidence level of 99% to assess whether there exists a statistically significant difference between IMSA and each reference algorithm in terms of the best and the average performances.

Table 3
The geometric means of the best and the average conductance values (G best and G avg ) reported by the IMSA algorithm, and the three reference algorithms (MAMC [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF], SaBTS [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF], and MQI [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF]) on the three sets of 66 instances.
Table 3 shows the geometric means of each algorithm using the best and the average conductance values for the three sets of 66 instances (G best and G avg ).
The first two columns show the algorithms and the quality indicators (G best and G avg ). Columns 3-5 present the G best (or G avg ) results for each benchmark set, while the last column reports the overall G best (or G avg ) results for all the 66 instances. The best values of G best (or G avg ) across all the algorithms (the smaller the better) are highlighted in boldface.
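For reference, the geometric mean used in Table 3 is computed as follows (illustrative snippet; the conductance values passed to the function are made up).

import math

def geometric_mean(values):
    """Geometric mean of positive values, computed in log-space for numerical stability."""
    return math.exp(sum(math.log(x) for x in values) / len(values))

print(geometric_mean([2.1e-4, 6.1e-5, 3.4e-3]))   # made-up conductance values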
From Tables 2, 3 and Tables A.2-A.4, the following observations can be made.
1) IMSA competes very favorably with the best existing MC-GPP algorithms in terms of solution quality, by reaching better results in more than 63% of the cases compared to all the reference algorithms considered jointly. Remarkably, even IMSA's average results are much better than the best results of the reference algorithms in most cases. The dominance of IMSA over the reference algorithms is most pronounced on medium and large instances, where it reports the best results for all but one or two instances in terms of Φ best and Φ avg . The small geometric means of IMSA further confirm its superiority over the reference algorithms.
2) IMSA is computationally effective compared to the reference algorithms. Thanks to its multilevel strategy, IMSA requires a similar or shorter time to find equal or better solutions for a number of medium and large graphs.
The time-to-target analysis of Section 4.1 provides further evidence. 3) Relating the results of Tables A.2-A.4 and the main features of the tested graphs shown in Table A.1, it can be concluded that IMSA is particularly suitable for massive and sparse graphs, while its performance decreases on dense graphs. This behavior is consistent with general multilevel optimization methods for graph partitioning.
This comparative study shows that the IMSA algorithm is a valuable tool for partitioning large and sparse graphs that complements existing methods. To understand its behavior and functioning, Section 4 presents experiments that shed light on the contributions of the key algorithmic components.
Analysis
This section presents experiments to get insights into the influences of the components of the IMSA algorithm: iterated multilevel framework, simulated annealing local refinement, and parameters.
A time-to-target analysis of the compared algorithms
To investigate the computational efficiency of the compared algorithms: IMSA, MAMC [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF], SaBTS [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF], and MQI [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF], a time-to-target (TTT) analysis [START_REF] Aiex | TTT plots: a perl program to create time-to-target plots[END_REF] is performed. This study uses visual TTT plots to illustrate the running time distributions of the compared algorithms. Specifically, the Y-axis of the TTT plots displays the probability that an algorithm will find a solution that is at least as good as a given target value within a given run time, shown on the X-axis. The TTT plots are produced as follows. Each compared algorithm is independently run E x times on each instance. For each of the E x runs, the run time to reach a given target objective value is recorded. For each instance, the run times are then sorted in ascending order. The i-th sorted run time t i is associated with a probability p i = i/E x , and plotted as point (t i , p i ), for i = 1, . . . , E x . This TTT experiment was based on 4 representative instances: sc-pkustk13 (|V | = 94,893), ga2010 (|V | = 291,086), NLR (|V | = 4,163,763), and delaunay n24 (|V | = 16,777,216), with 200 independent runs per algorithm and per instance (i.e., E x = 200). To make sure that the compared algorithms are able to reach the target objective value in each run, the target value was set to be 1% larger than the best objective value found by MQI. The results of this experiment are shown in Fig. 3. Fig. 3 clearly indicates that the proposed IMSA algorithm is always faster in attaining the given target value than the reference algorithms. The probability for IMSA to reach the target objective value within the first 100 to 200 seconds is around 90%, while MAMC, SaBTS, and MQI require much more time to attain the same result (at least 2500 seconds for the two large NLR and delaunay n24 graphs). One notices from Tables A.2-A.4 that MAMC and SaBTS show a shorter computation time for a number of instances, but they typically report worse results (especially on the medium and the large instances) than the IMSA algorithm. This indicates that IMSA can take full advantage of the allotted time budget to find solutions of better quality and avoid premature convergence. This is particularly true for large graphs. This experiment demonstrates that the IMSA algorithm is much more time efficient than the reference algorithms.
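The construction of the TTT points described above amounts to sorting the recorded run times and pairing them with empirical probabilities, as in the short illustrative snippet below (the run times are synthetic).

def ttt_points(run_times):
    """Return the (t_i, p_i) points of a time-to-target plot for one instance."""
    times = sorted(run_times)
    Ex = len(times)
    return [(t, (i + 1) / Ex) for i, t in enumerate(times)]

# Synthetic run times (seconds) of 10 hypothetical runs reaching the target value.
print(ttt_points([120, 95, 300, 150, 80, 210, 175, 99, 260, 140]))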
Usefulness of the iterated multilevel framework
This section assesses the usefulness of the iterated multilevel framework. For this purpose, an IMSA variant called (SA+TS) restart was created where the multilevel component was removed while keeping only the refinement procedure. To ensure a fair comparison, (SA+TS) restart was performed in a multistart way, until the cutoff time t max (60 minutes) was reached. This experiment was conducted on 20 representative instances of a reasonable size and difficulty: preferentialAttachment, smallworld, delaunay n16, delaunay n17, delaunay n18, delaunay n19, ga2010, oh2010, tx2010, wing, 144, ecology2, ecology1, thermal2, kkt power, NACA0015, M6, AS365, luxembourg, belgium. Each algorithm was independently run 20 times per instance with a cutoff time of 60 minutes per run. As a supplement, the p-values were also computed from the Wilcoxon signed-rank test in terms of the best and the average results. Fig. 4 shows the best/average conductance gap between the two algorithms on these instances. The X-axis indicates the instance label (numbered from 1 to 20), while the Y-axis shows the best/average conductance gap in percentage, calculated as (Φ A -Φ IMSA )/Φ IMSA × 100%, where Φ A and Φ IMSA are the best/average conductance values of (SA+TS) restart and IMSA, respectively.
As observed in Fig. 4, IMSA clearly dominates (SA+TS) restart in terms of the best (p-value = 1.32e-04) and the average (p-value = 1.03e-04) conductance values for the 20 instances. This experiment confirms the usefulness of the iterated multilevel framework, which positively contributes to the high performance of the algorithm.
Benefit of the SA-based local refinement
To evaluate the benefit of the SA local refinement procedure to the performance of the IMSA algorithm, an IMSA variant (denoted by IMSA descent ) was created where the SA procedure was replaced by a pure descent procedure using the best-improvement strategy. This experiment relies on the same 20 instances used in Section 4.2 and reports the same information. Fig. 5 plots the best/average conductance gap between the two algorithms on these instances. The X-axis indicates the instance label, while the Y-axis shows the best/average conductance gap in percentage. Fig. 5 shows that IMSA descent reports significantly worse results in terms of both the best (p-value = 1.32e-04) and the average (p-value = 8.86e-05) conductance values for all the instances. This indicates that the SA procedure is the key element that ensures the high performance of IMSA and disabling it greatly deteriorates the performance.
Analysis of the parameters
The proposed IMSA algorithm requires six parameters: ct, saIter, θ, ar, D, and α. ct denotes the coarsening threshold in the solution-guided coarsening phase. saIter, θ, and ar are the three parameters related to the SA local refinement, where saIter is the maximum number of iterations per temperature, θ is the cooling ratio, and ar is the frozen state parameter. D and α are the maximum number of consecutive non-improving iterations of tabu search and the tabu tenure management factor, respectively. To study the effect of these parameters on the performance of IMSA and to determine the most suitable setting for these parameters, a one-at-a-time sensitivity analysis [START_REF] Hamby | A review of techniques for parameter sensitivity analysis of environmental models[END_REF] was performed as follows. For each parameter, a range of possible values was tested, while fixing the other parameters to their default values from Table 1. Specifically, the following values were used: ct ∈ [20000, 30000, 40000, 50000, 60000], saIter ∈ [50000, 100000, 150000, 200000, 250000], θ ∈ [0.90, 0.92, 0.94, 0.96, 0.98], ar ∈ [1%, 3%, 5%, 7%, 9%], D ∈ [5000, 10000, 15000, 20000, 25000], and α ∈ [40, 60, 80, 100, 120]. This experiment was conducted on 9 representative instances from the set of instances used in Sections 4.2 and 4.3, and based on 20 independent runs per parameter value with a cutoff time of 60 minutes per run. Fig. 6 shows the average values of Φ best and Φ avg obtained for the 9 instances, where the X-axis indicates the values of each parameter and the Y-axis shows the best/average conductance values over the 9 representative instances: preferentialAttachment, smallworld, delaunay n18, delaunay n19, ga2010, oh2010, wing, 144, thermal2. Fig. 6 shows the impact of each parameter on the performance of IMSA. Specifically, for the parameter ct, ct = 60000 yields the best results for both Φ best and Φ avg . The default value of ct was set to 60000 in this study. For saIter, the value of 200000 is the best choice while a larger or a smaller value weakens the performance of IMSA. For the parameter θ, IMSA obtains the best performance with the value of 0.98 while smaller values decrease its performance. Furthermore, ar = 5% appears to be the best choice for IMSA. For the parameters D and α, the values 10000 and 80 were adopted, respectively, as their default values according to Fig. 6. The default values of all the parameters are summarized in Table 1.
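The one-at-a-time protocol itself is straightforward to script. The Python sketch below enumerates the tested configurations, varying one parameter at a time around the default setting; it only prints the configurations, and an actual run of the solver would have to be plugged in where indicated (the grids simply restate the values listed above).

defaults = {"ct": 60000, "saIter": 200000, "theta": 0.98, "ar": 0.05, "D": 10000, "alpha": 80}
grid = {
    "ct": [20000, 30000, 40000, 50000, 60000],
    "saIter": [50000, 100000, 150000, 200000, 250000],
    "theta": [0.90, 0.92, 0.94, 0.96, 0.98],
    "ar": [0.01, 0.03, 0.05, 0.07, 0.09],
    "D": [5000, 10000, 15000, 20000, 25000],
    "alpha": [40, 60, 80, 100, 120],
}

def oat_configurations():
    """Yield (parameter, value, configuration) triples with one parameter varied at a time."""
    for name, values in grid.items():
        for value in values:
            yield name, value, dict(defaults, **{name: value})

for name, value, cfg in oat_configurations():
    # A real study would launch 20 independent runs per instance with this configuration.
    print(f"vary {name} = {value}: {cfg}")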
Conclusion and future work
The iterated multilevel simulated annealing algorithm presented in this work is the first multilevel algorithm for the challenging NP-hard minimum conductance graph partitioning problem. Based on the general (iterated) multilevel framework, IMSA integrates a novel solution-guided coarsening method to construct a hierarchy of reduced graphs and a powerful simulated annealing local refinement procedure that makes full use of a constrained neighborhood to rapidly and effectively improve the quality of sampled solutions.
The performance of IMSA was assessed on three sets of 66 benchmark instances from the literature, including 56 graphs from the 10th DIMACS Implementation Challenge and 10 graphs from the Network Data Repository online, with up to 23 million vertices. Computational results demonstrated the competitiveness of the algorithm in finding high-quality partitions for large-scale sparse graphs. This work proves for the first time the value of the general multilevel approach for conductance graph partitioning.
From the application perspective, MC-GPP is a general and powerful graph model able to formulate a variety of real problems related to community detection, clustering, bioinformatics, computer vision, and large graph size estimation. Consequently, researchers and practitioners working on these real world problems can benefit from the proposed approach to find improved solutions.
The availability of the source code of the algorithm will further facilitate such applications.
For future research, several directions could be followed. First, parallel computations are known to be quite useful for large graph partitioning (e.g., parallel graph partitioning for complex networks [START_REF] Meyerhenke | Parallel graph partitioning for complex networks[END_REF], distributed evolutionary partitioning [START_REF] Zheng | Towards a distributed local-search approach for partitioning large-scale social networks[END_REF] and distributed local search for partitioning large social networks [START_REF] Zheng | Towards a distributed local-search approach for partitioning large-scale social networks[END_REF]). Parallel algorithms were also integrated into popular partitioning packages such as ParMETIS (http://glaros.dtc.umn.edu/gkhome/views/metis) and KaHIP (https://kahip.github.io/). It would be highly relevant to create parallel versions of the IMSA algorithm to further increase its power. Second, population-based evolutionary algorithms have been successfully employed for solution refinement under the multilevel framework [START_REF] Benlic | A multilevel memetic approach for improving graph kpartitions[END_REF]. It would be interesting to study this approach for conductance partitioning. Third, the current IMSA implementation is more suitable for partitioning large sparse graphs than for dense graphs. It would be useful to investigate additional strategies to be able to handle both types of graphs. In particular, other graph representations using compact structure [START_REF] Glaria | Compact structure for sparse undirected graphs based on a clique graph partition[END_REF] may be considered to reduce the space complexity of the algorithm. Fourth, in addition to the studied memetic and local search methods in the literature, it is worth investigating other metaheuristic-based algorithms to better handle various types of graphs and further enrich the MC-GPP toolkit. Finally, given the scarcity of exact approaches to MC-GPP in the existing literature, there is clearly a lot of room for research in this direction.

Table A.2: Computational results on the Small set of 24 benchmark instances obtained by the IMSA algorithm and the three reference algorithms (MAMC [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF], SaBTS [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF], and MQI [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF]).
Table A.4: Computational results on the Large set of 17 benchmark instances obtained by the IMSA algorithm and the three reference algorithms (MAMC [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF], SaBTS [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF], and MQI [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF]).
Fig. 1. An illustration of the iterated multilevel framework for the proposed algorithm.
Fig. 2. An illustrative example of the solution-guided coarsening of G 0 into G 1 (the worked example described in Section 2.2).
Fig. 3. Probability distributions of the run time (in seconds) needed to attain a given target objective value by each algorithm on the 4 representative instances.
Fig. 4. Comparisons of the IMSA algorithm with a multi-start simulated annealing plus tabu search (denoted by (SA+TS) restart ) on the 20 representative instances.
Fig. 5. Comparisons of the IMSA algorithm with an IMSA variant where the SA-based local refinement is replaced by a descent search (denoted by IMSA descent ) on the 20 representative instances.
Fig. 6. Analysis of the impact of the parameters (ct, saIter, θ, ar, D, α) on the performance of the proposed IMSA algorithm.
Algorithm 2 Simulated annealing based local refinement.
Require: Graph G = (V, E), input solution s, initial temperature T 0 , move counter mv, maximum number of iterations per temperature saIter, cooling ratio θ, frozen state parameter ar.
Ensure: Best partition s best found during the search.
1: T ← T 0
2: s best ← s
3: repeat
4: mv ← 0
5: for iter = 1 to saIter do
6: v ← a random vertex from CV (s)
7: s' ← s ⊕ Relocate(v) / * Section 2.4.2 * /
8: if δ(v) < 0 then
9: s ← s'
10: mv ← mv + 1
11: else
12: With probability defined in Equation (3) / * Section 2.4.1 * /
13: s ← s'
14: mv ← mv + 1
15: end if
16: if Φ(s) < Φ(s best ) then
17: s best ← s
18: end if
19: end for
20: T ← T * θ / * Cooling down the temperature * /
21: until Acceptance rate (mv/saIter) is below ar for 5 consecutive rounds
22: return s best
Table 1
The parameters setting of the IMSA algorithm.
Parameter Section Description Value
ct §2.2 Coarsening threshold of the solution-guided coarsening phase 60,000
saIter §2.4.1 Maximum number of iterations per temperature of the SA search 200,000
θ §2.4.1 Cooling ratio of the SA search 0.98
ar §2.4.1 Frozen state parameter of SA search 5%
D §2.5 Maximum number of consecutive non-improving iterations of tabu search 10,000
α §2.5 Tabu tenure management factor of tabu search 80
Table A.1: Main features of the 66 benchmark instances from "The 10th DIMACS Implementation Challenge Benchmark" and "The Network Data Repository online".
Instance |V | |E| Density Instance |V | |E| Density
Small set (24) delaunay n20 1,048,576 3,145,686 5.72e-06
sc-nasasrb 54,870 1,311,227 8.71e-04 inf-roadNet-PA 1,087,562 1,541,514 2.61e-06
wing 62,032 121,544 6.32e-05 thermal2 1,227,087 3,676,134 4.88e-06
delaunay n16 65,536 196,575 9.15e-05 belgium 1,441,295 1,549,970 1.49e-06
sc-pkustk13 94,893 3,260,967 7.24e-04 G3 circuit 1,585,478 3,037,674 2.42e-06
preferentialAttachment 100,000 499,985 1.00e-04 kkt power 2,063,494 6,482,320 3.04e-06
smallworld 100,000 499,998 1.00e-04 delaunay n21 2,097,152 6,291,408 2.86e-06
luxembourg 114,599 119,666 1.82e-05 netherlands 2,216,688 2,441,238 9.94e-07
delaunay n17 131,072 393,176 4.58e-05 M6 3,501,776 10,501,936 1.71e-06
144 144,649 1,074,393 1.03e-04 333SP 3,712,815 11,108,633 1.61e-06
web-arabic-2005 163,598 1,747,269 1.31e-04 AS365 3,799,275 11,368,076 1.58e-06
soc-gowalla 196,591 950,327 4.92e-05 venturiLevel3 4,026,819 8,054,237 9.93e-07
delaunay n18 262,144 786,396 2.29e-05 NLR 4,163,763 12,487,976 1.44e-06
ok2010 269,118 637,074 1.76e-05 delaunay n22 4,194,304 12,582,869 1.43e-06
va2010 285,762 701,064 1.72e-05 hugetrace00 4,588,484 6,879,133 6.53e-07
nc2010 288,987 708,310 1.70e-05 channel 4,802,000 42,681,372 3.70e-06
ga2010 291,086 709,028 1.67e-05 Large set (17)
cnr-2000 325,557 2,738,969 5.17e-05 hugetric00 5,824,554 8,733,523 5.15e-07
mi2010 329,885 789,045 1.45e-05 hugetric10 6,592,765 9,885,854 4.55e-07
mo2010 343,565 828,284 1.40e-05 italy 6,686,493 7,013,978 3.14e-07
oh2010 365,344 884,120 1.32e-05 adaptive 6,815,744 13,624,320 5.87e-07
soc-twitter-follows 404,719 713,319 8.71e-06 hugetric20 7,122,792 10,680,777 4.21e-07
pa2010 421,545 1,029,231 1.16e-05 great-britain 7,733,822 8,156,517 2.73e-07
il2010 451,554 1,082,232 1.06e-05 delaunay n23 8,388,608 25,165,784 7.15e-07
soc-youtube 495,957 1,936,748 1.57e-05 germany 11,548,845 12,369,181 1.85e-07
Medium set (25) asia 11,950,757 12,711,603 1.78e-07
soc-flickr 513,969 3,190,452 2.42e-05 hugetrace10 12,057,441 18,082,179 2.49e-07
delaunay n19 524,288 1,572,823 1.14e-05 road central 14,081,816 16,933,413 1.71e-07
ca-coauthors-dblp 540,486 15,245,729 1.04e-04 hugetrace20 16,002,413 23,998,813 1.87e-07
soc-FourSquare 639,014 3,214,986 1.57e-05 delaunay n24 16,777,216 50,331,601 3.58e-07
eu-2005 862,664 16,138,468 4.34e-05 hugebubbles00 18,318,143 27,470,081 1.64e-07
tx2010 914,231 2,228,136 5.33e-06 hugebubbles10 19,458,087 29,179,764 1.54e-07
ecology2 999,999 1,997,996 4.00e-06 hugebubbles20 21,198,119 31,790,179 1.41e-07
ecology1 1,000,000 1,998,000 4.00e-06 road usa 23,947,347 28,854,312 1.01e-07
NACA0015 1,039,183 3,114,818 5.77e-06
Table A.3: Computational results on the Medium set of 25 benchmark instances obtained by the IMSA algorithm and the three reference algorithms (MAMC [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF], SaBTS [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF], and MQI [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF]).
https://www.cc.gatech.edu/dimacs10/downloads.shtml
http://networkrepository.com/index.php
The code of the IMSA algorithm will be made publicly available at: http://www.info.univ-angers.fr/pub/hao/IMSA.html
Acknowledgment
The authors are grateful to the reviewers for their useful comments and suggestions which helped us to significantly improve the paper. Support from the China Scholarship Council (CSC) (No.: 201606070096) for the first author is also acknowledged.
A Appendix
This appendix reports the detailed results of the proposed IMSA algorithm and the three reference algorithms (MAMC [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF], SaBTS [START_REF] Lu | Stagnation-aware breakout tabu search for the minimum conductance graph partitioning problem[END_REF], and MQI [START_REF] Lang | A flow-based method for improving the expansion or conductance of graph cuts[END_REF]) on the three sets of 66 benchmark instances whose main features are shown in Table A.1.
According to the experimental protocol of Section 3.2, each algorithm was run with its default parameter settings and executed 20 times per instance with a cutoff time of 60 minutes per run. The results of the 39 small and medium instances (29 DIMACS graphs and the 10 massive network graphs) for the three reference algorithms were extracted from [START_REF] Lu | A hybrid evolutionary algorithm for finding low conductance of large graphs[END_REF]. The results of the compared algorithms on the remaining instances were obtained with the above protocol. Tables A.2-A.4 show the computational results of the compared algorithms.
In Tables A.2-A.4, columns 1 and 2 indicate the name and the number of vertices |V | for each instance. The remaining columns show the results reached by IMSA and the reference algorithms (MAMC, SaBTS, and MQI): the best conductance (Φ best ), the average conductance (Φ avg ), the number of times Φ best was reached across 20 independent runs (hit), and the average computation time per run in seconds to reach the final solution (t(s)). The best of the Φ best (or Φ avg ) values among all the compared algorithms for each instance is highlighted in boldface. An asterisk ( * ) indicates a strictly best solution among all the results. Bold and asterisked values thus correspond to the best upper bounds for the associated graphs.
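As a reminder of the objective reported in these tables, the short Python sketch below (purely illustrative and not part of the experimental protocol) computes the conductance Φ of a vertex bipartition, assuming the standard degree-volume definition; the graph and subset used in the usage example are hypothetical toy data, not one of the benchmark instances.

```python
import networkx as nx

def conductance(G, S):
    """Conductance of a vertex subset S of an undirected graph G:
    Phi(S) = cut(S, V\\S) / min(vol(S), vol(V\\S)),
    where vol(.) is the sum of the vertex degrees on each side."""
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_S = sum(d for _, d in G.degree(S))
    vol_T = sum(d for _, d in G.degree(set(G.nodes()) - S))
    return cut / min(vol_S, vol_T)

# Toy usage on a small built-in graph.
G = nx.karate_club_graph()
S = [n for n in G.nodes() if n < 17]
print(conductance(G, S))
```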
Note that it is meaningful to compare the computation times of two algorithms on a graph only if they report the same objective value. Given that IMSA reached better results on many instances, the timing information is provided for indicative purposes. For a meaningful comparison of computational efficiency of the algorithms, the reader is referred to the time-to-target analysis shown in Section 4.1. |
03541561 | en | [
"sdv.mhep.psr"
] | 2024/03/04 16:41:22 | 2021 | https://hal.inrae.fr/hal-03541561/file/S027252312100040X.pdf | MD MSc Jolene H Fisher
email: [email protected]
MD Vincent Cottin
email: [email protected].
Care-delivery models and interstitial lung disease: the role of the specialized center
Keywords:
Interstitial lung disease
INTRODUCTION
The field of interstitial lung disease (ILD) is rapidly evolving and multifaceted with patients that experience debilitating symptoms and poor prognosis. Care delivery models that utilize specialized centers with access to ILD specific resources have emerged as a way to provide comprehensive care to these patients with complex diseases. The goals of the specialized center are multifold and centered around providing timely access to an accurate diagnosis and effective care plan. Other deliverables include management of treatment side effects and patient comorbidities, patient education and support groups, medical education of both practicing clinicians and trainees and access to clinical trials, lung transplant and end of life care. A multidisciplinary team is integral to providing such complex care delivery and requires access to pulmonology, rheumatology, pathology, thoracic surgery/interventional pulmonology, radiology, palliative care, lung transplant, pharmacy, nursing, social work and administrative support (both clinical and research), with ILD expertise (Figure 1).
Universal access to these highly specialized centers with optimal resources and expertise remains a significant challenge that requires innovative strategies to overcome. The aims of this chapter are to (1) summarize the key components of ILD care, (2) describe the role of the specialized center in the delivery of each component of ILD care, and (3) identify the current challenges facing ILD care delivery models and propose viable strategies to overcome these gaps. Key messages are summarized in Table 1.
KEY COMPONENTS OF ILD CARE AND THE ROLE OF THE SPECIALIZED CENTER IN
DELIVERY
Diagnosis
ILD subtypes have variable epidemiology, clinical course and management. A timely and accurate ILD diagnosis is critical for clinical decision making, patient counseling and advancing research. The current gold standard for ILD diagnosis is multidisciplinary discussion (MDD) between 'experts' that integrate clinical, radiologic and where available, pathologic features in order to reach a consensus diagnosis. [START_REF] Raghu | Diagnosis of Idiopathic Pulmonary Fibrosis. An Official ATS/ERS/JRS/ALAT Clinical Practice Guideline[END_REF][START_REF] Raghu | An official ATS/ERS/JRS/ALAT statement: idiopathic pulmonary fibrosis: evidence-based guidelines for diagnosis and management[END_REF][START_REF] Travis | An official American Thoracic Society/European Respiratory Society statement: Update of the international multidisciplinary classification of the idiopathic interstitial pneumonias[END_REF][START_REF] Johannson | Evaluation of patients with fibrotic interstitial lung disease: A Canadian Thoracic Society position statement[END_REF] Within the tripod of clinical, radiologic and pathologic features, the MDD further incorporates a variety of information that contribute to diagnosis, including autoimmune serology, precipitins, clinical or molecular biology genetic information, molecular classifiers, or reports from other healthcare providers (e.g. occupational medicine specialist, domiciliary visit looking for exposures that may cause hypersensitivity pneumonitis, etc). Utilization of MDDs have been shown to improve diagnostic confidence and decrease inter observer variability. [START_REF] Prasad | The interstitial lung disease multidisciplinary meeting: A position statement from the Thoracic Society of Australia and New Zealand and the Lung Foundation Australia[END_REF][START_REF] Jo | Evaluating the interstitial lung disease multidisciplinary meeting: a survey of expert centres[END_REF][START_REF] Flaherty | Idiopathic interstitial pneumonia: what is the effect of a multidisciplinary approach to diagnosis?[END_REF][START_REF] Sadeleer | Diagnostic Ability of a Dynamic Multidisciplinary Discussion in Interstitial Lung Diseases: A Retrospective Observational Study of 938 Cases[END_REF] MDDs, at minimum, include a pulmonologist and radiologist with ILD expertise and depending on the individual case, other specialty involvement, such as, pathology and rheumatology. [START_REF] Furini | The Role of the Multidisciplinary Evaluation of Interstitial Lung Diseases: Systematic Literature Review of the Current Evidence and Future Perspectives[END_REF] Specialized diagnostic resources, including extensive autoimmune serology and histopathology obtained by videothoracoscopic surgical lung biopsy (SLB) or cryobiopsy, may be needed to make an accurate diagnoses. MDD groups that do not include all ILD specific experts benefit from access to larger and more versed MDDs for their more complex cases as the quality of MDD is dependent on experience. [START_REF] Cottin | Should Patients With Interstitial Lung Disease Be Seen by Experts?[END_REF] Widespread availability of safe and accurate diagnostic testing for ILD remains a significant challenge. Although decreased by the use of video assisted techniques and appropriate patient selection, SLB is still associated with high morbidity and mortality, [START_REF] Hutchinson | In-Hospital Mortality after Surgical Lung Biopsy for Interstitial Lung Disease in the United States. 2000 to 2011[END_REF] particularly in centers with less experience. 
[START_REF] Fisher | Procedure volume and mortality after surgical lung biopsy in interstitial lung disease[END_REF] In addition, there can be significant inter observer variability in histopathology interpretation. [START_REF] Lettieri | Discordance between general and pulmonary pathologists in the diagnosis of interstitial lung disease[END_REF] While less invasive, similar issues have been identified with the use of transbronchial lung cryobiopsy. [START_REF] Johannson | Diagnostic Yield and Complications of Transbronchial Lung Cryobiopsy for Interstitial Lung Disease. A Systematic Review and Metaanalysis[END_REF][START_REF] Iftikhar | Transbronchial Lung Cryobiopsy and Video-assisted Thoracoscopic Lung Biopsy in the Diagnosis of Diffuse Parenchymal Lung Disease. A Meta-analysis of Diagnostic Test Accuracy[END_REF][START_REF] Sethi | Are Transbronchial Cryobiopsies Ready for Prime Time?: A Systematic Review and Meta-Analysis[END_REF] Such 'volumeoutcome' relationships have been described across a variety of medical and surgical specialties and used to support the regionalization of specialty care. [START_REF] Birkmeyer | Hospital volume and surgical mortality in the United States[END_REF][START_REF] Halm | Is volume related to outcome in health care? A systematic review and methodologic critique of the literature[END_REF] Awareness of the risk and limitations associated with SLB has contributed to efforts aimed at improving ILD diagnosis in its absence, such as developing the concept of a working/provisional diagnosis, the probable usual interstitial pneumonia definition and a molecular classifier. [START_REF] Raghu | Diagnosis of Idiopathic Pulmonary Fibrosis. An Official ATS/ERS/JRS/ALAT Clinical Practice Guideline[END_REF][START_REF] Ryerson | A Standardized Diagnostic Ontology for Fibrotic Interstitial Lung Disease. An International Working Group Perspective[END_REF][START_REF] Lynch | Diagnostic criteria for idiopathic pulmonary fibrosis: a Fleischner Society White Paper[END_REF][START_REF] Raghu | Use of a molecular classifier to identify usual interstitial pneumonia in conventional transbronchial lung biopsy samples: a prospective validation study[END_REF] The expertise of an ILD center may reduce the number of cases in which a biopsy is contemplated and performed, as compared to centers with less ILD familiarity.
The complexities surrounding ILD diagnosis have led to increasing regionalization of care with a push to refer to expert centers. While these regional, expert centers can increase diagnostic accuracy for ILDs, they can also be disadvantaged by limited accessibility.
Delayed access to an ILD center is associated with increased mortality, independent of disease severity. [START_REF] Lamas | Delayed access and survival in idiopathic pulmonary fibrosis: a cohort study[END_REF] Disparate access varies widely and can result from geography, marginalization and lack of healthcare resources. These specific barriers are dependent on jurisdiction with issues such as geography typically more relevant in places like Canada and Australia as opposed to Europe. Innovative strategies are needed to overcome these barriers and increase access to MDDs and specialized diagnostic testing for patients with ILD.
Management
The comprehensive management of ILD is complex and rapidly evolving with both pharmacologic and non-pharmacologic components. [START_REF] Assayag | Comprehensive management of fibrotic interstitial lung diseases: A Canadian Thoracic Society position statement[END_REF][START_REF] Richeldi | Pharmacological management of progressive-fibrosing interstitial lung diseases: a review of the current evidence[END_REF][START_REF] Wijsenbeek | Spectrum of Fibrotic Lung Diseases[END_REF][START_REF] Wijsenbeek | Comprehensive Supportive Care for Patients with Fibrosing Interstitial Lung Disease[END_REF] The are several reasons for this complexity. ILDs include multiple conditions with variable disease behavior and therefore different goals of therapy. Some of these conditions are potentially reversible or partially reversible such as non-fibrotic hypersensitivity pneumonitis while others, such as idiopathic pulmonary fibrosis (IPF), are progressive. Treatment decisions regarding ILD medications can be nuanced given that many therapies are aimed at slowing disease progression as opposed to disease reversal, necessitating careful consideration of the risk versus benefit profile in each patient. While there has been a recent increase in quality data for the treatment of non-IPF ILD, [START_REF] Flaherty | Nintedanib in Progressive Fibrosing Interstitial Lung Diseases[END_REF][START_REF] Maher | Pirfenidone in patients with unclassifiable progressive fibrosing interstitial lung disease: a double-blind, randomised, placebo-controlled, phase 2 trial[END_REF] more is required, and treatment decisions for conditions such as hypersensitivity pneumonitis and unclassifiable ILD are often heavily influenced by 'expert opinion'. In the absence of management guidelines, decisions regarding treatment of connective tissue disease (CTD)-ILD are best done in conjunction with rheumatology to identify therapies that ideally treat both pulmonary and non-pulmonary disease manifestations.
Comprehensive management further includes access to pulmonary rehabilitation, lung transplant, symptom management/palliative care, advanced care planning and patient education, support and advocacy. [START_REF] Wijsenbeek | Comprehensive Supportive Care for Patients with Fibrosing Interstitial Lung Disease[END_REF][START_REF] Jo | Treatment of idiopathic pulmonary fibrosis in Australia and New Zealand: A position statement from the Thoracic Society of Australia and New Zealand and the Lung Foundation Australia[END_REF][START_REF] Kreuter | Palliative care in interstitial lung disease: living well[END_REF] The infrastructure of specialized ILD centers are typically best equipped to deliver such comprehensive care, with access to the required resources often limited outside of these select programs.
Longitudinal monitoring
Longitudinal monitoring of ILD patients is central to informing management decisions. [START_REF] Wijsenbeek | Spectrum of Fibrotic Lung Diseases[END_REF][START_REF] Fisher | Long-term monitoring of patients with fibrotic interstitial lung disease: A Canadian Thoracic Society Position Statement[END_REF][START_REF] Travis | An official American Thoracic Society/European Respiratory Society statement: Update of the international multidisciplinary classification of the idiopathic interstitial pneumonias[END_REF][START_REF] Wells | Any fool can make a rule and any fool will mind it[END_REF] Regular clinical assessments provide a mechanism to monitor symptoms, screen for disease progression, and identify treatment side effects and/or comorbidities. Information obtained from these interactions guide decisions related to treatment initiation, alteration, or discontinuation and appropriate timing of lung transplant referral, and/or end-of-life planning. In patients with provisional or working diagnoses, longitudinal monitoring provides an opportunity to reconsider the diagnosis which can become clearer overtime (e.g. development of extrapulmonary CTD symptoms). While the specialized center often plays an important role in the long-term monitoring of fibrotic ILD, access to a local pulmonologist has several advantages. Disease behavior is widely variable among patients with ILD, and access to an expedited clinical assessment is ideal in the event of acute deterioration. Shared-care models can facilitate faster access to appropriate expertise for patients. Local care providers can liaise with 'expert' ILD centers on a patient's behalf in order to make timely care decisions. This type of care delivery model has the additional advantage of 'off-loading' ILD centers with long waiting lists of some follow-up visits, allowing them to efficiently focus their expertise where it provides the most added value.
Medical education
ILD-related continuing medical education for trainees, general practitioners, radiologists, pulmonologists, thoracic surgeons/interventional pulmonologists, pathologists and rheumatologists, is an essential component of ILD care delivery. ILD patients frequently report a lack of ILD awareness among their health care providers which results in delayed specialist referral, diagnosis and treatment. [START_REF] Bonella | European IPF Patient Charter: unmet needs and a call to action for healthcare policymakers[END_REF][START_REF] Cosgrove | Barriers to timely diagnosis of interstitial lung disease in the real world: the INTENSITY survey[END_REF] 'Real world' registry data has shown high rates of SLBs (even in those with a definite usual interstitial pneumonia pattern on high resolution computed tomography) and corticosteroid use with lower than expected rates of antifibrotic therapy in patients with IPF. [START_REF] Behr | Management of patients with idiopathic pulmonary fibrosis in clinical practice: the INSIGHTS-IPF registry[END_REF] These findings are not necessarily surprising, given the complexity and rapid evolution of ILD diagnosis and management, and implies there is a need for ongoing ILD related knowledge translation and dissemination to the medical community. There is data to support such endeavors with a national French survey of physicians caring for IPF patients showing improved knowledge and management of IPF after implementation of an education outreach program. [START_REF] Cottin | Adherence to guidelines in idiopathic pulmonary fibrosis: a follow-up national survey[END_REF] Specialized ILD centers are typically equipped with the necessary infrastructure, such as staff and accreditation, required to deliver these programs.
It is also important to recognize that clinical feedback to physicians referring to ILD centers provides a mechanism for informal medical education. For example, a referring pulmonologist would typically propose a first-choice diagnosis, which may or may not be modified by the MDD at the ILD center. These clinical feedback loops have important educational roles. The MDD has an additional training effect on participating physicians, whether trainees or more senior members. [START_REF] Sadeleer | Diagnostic Ability of a Dynamic Multidisciplinary Discussion in Interstitial Lung Diseases: A Retrospective Observational Study of 938 Cases[END_REF][START_REF] Cottin | Should Patients With Interstitial Lung Disease Be Seen by Experts?[END_REF]37 Ensuring pulmonology trainees gain competence in the diagnosis and management of ILD is an integral component of improving ILD care delivery. The specialty clinic plays an important role in trainee education, providing a mechanism for high volume ILD clinical exposure. A survey of British Thoracic Society trainee members found that the majority felt their ILD training was inadequate. [START_REF] Sharp | UK trainee experience in interstitial lung disease: results from a British Thoracic Society survey[END_REF] Additional studies identifying specific barriers to trainee ILD education and mechanisms for improvement are required. The ILD specialty clinic is also a key component of ILD subspecialty clinical and research training programs.
Patient education, support and advocacy
Many ILDs are progressive conditions associated with poor quality of life and limited life expectancy. ILD patients and caregivers frequently report inadequate emotional and psychological support. [START_REF] Bonella | European IPF Patient Charter: unmet needs and a call to action for healthcare policymakers[END_REF] A survey of patient perspectives on the benefits of a specialized ILD center found patients placed a high value on gaining a better understanding of their disease and access to a specialized nurse that provided education and support, [START_REF] Mclean | Priorities and expectations of patients attending a multidisciplinary interstitial lung disease clinic[END_REF] reenforcing the importance of patient education, support and advocacy as key components of ILD care delivery. Patients frequently search the Internet for information related to their disease, yet ILD websites are often inaccurate and outdated. [START_REF] Fisher | Accuracy and Reliability of Internet Resources for Information on Idiopathic Pulmonary Fibrosis[END_REF] Specialized ILD clinics can provide highly valued disease related education to patients through many avenues, including direct discussion at clinic visits with an ILD physician and/or nurse educator, provision of disease specific education handouts, lists of reliable online resources and formal education programs. Specialized clinics can also facilitate patient and caregiver support groups that provide additional psychosocial and emotional support for patients and families dealing with ILD.
Specialized ILD clinics can promote community engagement through relationships with non-medical partners, such as patient foundations or patient associations. ILD physicians or nurses from the specialized center may take an active role in the scientific committee of a patient foundation. Here the role of the ILD center goes beyond providing education to the patient during a clinic visit or a specific educational activity, by delivering important educational messages that will be conveyed by the patient foundation. These relationships are a key component of ILD care delivery and provide valuable platforms for education, fundraising and research advancement. They also give the patient a seat at the table, helping identify and advocate for patient relevant outcomes.
Research and access to clinical trials
There has been significant advancement in drug therapies for ILD and ongoing rapid development of new drugs that require rigorous testing. In many cases of fibrotic ILD, current therapies only offer a slowing of disease progression as opposed to reversal or cure. Subsequently, the ability to offer and effectively conduct clinical trials is a key component of ILD care delivery. Without this ability, drug development will be stifled and we will fall short of our ability to find effective therapies to advance the field of ILD.
There are several barriers to physician and patient participation in clinical trials, including access to the required organizational structure, which is often only available at specialized centers. Components often needed include access to a MDD, research personnel (such as trial coordinators), institutional infrastructure (such as research ethics boards), and clinics with a high volume of ILD patients.
Specialized ILD clinics can facilitate other important forms of research that require access to high volumes of ILD patients, such as prospective patient registries. This 'real world' data on epidemiology, disease course, treatments, and outcomes is especially valuable when studying heterogeneous and less common diseases like ILDs. Large, national patient registries have published findings on ILD natural history, treatment and outcomes that would be challenging to obtain by other research avenues. [START_REF] Behr | Management of patients with idiopathic pulmonary fibrosis in clinical practice: the INSIGHTS-IPF registry[END_REF][START_REF] Jo | Baseline characteristics of idiopathic pulmonary fibrosis: analysis from the Australian Idiopathic Pulmonary Fibrosis Registry[END_REF][START_REF] Jo | Disease progression in idiopathic pulmonary fibrosis with mild physiological impairment: analysis from the Australian IPF registry[END_REF] In addition, registry research can give patients a voice [START_REF] Mclean | Priorities and expectations of patients attending a multidisciplinary interstitial lung disease clinic[END_REF] and provide opportunities to study and develop useful patient centered outcomes. [START_REF] Tsai | Minimum important difference of the EQ-5D-5L and EQ-VAS in fibrotic interstitial lung disease[END_REF]
CHALLENGES FACING ILD CARE DELIVERY MODELS AND PROPOSED SOLUTIONS
Specialized centers are generally highly appreciated by both referring physicians and patients, playing a central role in ILD care delivery. However, limitations of these models can include a lack of widespread and timely access for patients. Disparities in accessing the specialized center can exist due to long waiting lists, marginalization and geography.
Innovative strategies that leverage technology are required to bridge these gaps.
Multidisciplinary discussion
Limited availability of MDD is a key barrier to accurate and timely ILD diagnosis. Access to MDD can be increased by utilizing various virtual platforms, including remote chart and imaging review, telemedicine and video conferencing. A retrospective, crosssectional study from Canada showed that remotely accessed MDDs are feasible, decrease waiting time and frequently lead to a change in diagnosis and management. [START_REF] Grewal | Role of a Regional Multidisciplinary Conference in the Diagnosis of Interstitial Lung Disease[END_REF] In addition, referring physicians report satisfaction with the process and improvement in their own assessment and management of ILD patients. While routine use of a remote MDD will benefit from additional study and validation, it is likely to have an increasing place in ILD care delivery. However, the organization of virtual MDD remains a barrier to use in many centers due to lack of secure virtual platforms for data sharing, the time commitment required, absence of remuneration/funding for this type of care model and the importance of an in-person assessment for determining CTD features, antigenic exposures and frailty. Virtual consultations with patients may overcome some of these barriers.
Virtual MDDs may not solve the issue of patient volume 'overload' experienced at some ILD centers. Additional strategies to use MDD resources most efficiently can be considered in this setting. For example, ILD cases may be triaged to follow a specific MDD 'path' according to complexity. Similar approaches have been successfully used in other disciplines, such as oncology. [START_REF] Selby | The Value and Future Developments of Multidisciplinary Team Cancer Care[END_REF] In the case of ILD, a patient with a UIP pattern, no identifiable cause for ILD and over the age of 65 may not require a full MDD discussion, in contrast to more complex cases, such as unclassifiable ILD or chronic hypersensitivity pneumonitis without an identified antigen.
Longitudinal monitoring
Longitudinal monitoring of ILD patients solely at specialized centers has several disadvantages. First, many specialized centers do not have the capacity to see new referrals in a timely fashion and to conduct all follow-up visits. Second, attending ongoing follow-up visits can be a significant burden for patients that have to travel far distances. Third, it may be challenging (both for the patient and ILD clinic) to accommodate urgent assessments in the event of a clinical deterioration. Lastly, for successful collaboration, it is important to respect the pre-existing patient-doctor relationship of referring physicians, who are intellectual and financial stakeholders in patient care. There are several viable strategies that can be used to circumvent some of these issues. Certain follow-up visits can be done remotely using phone or videoconference by the ILD physician or nurse. Blood work and pulmonary function testing can be done locally with results sent to specialized centers and prescription renewals can be done remotely.
Home-based patient monitoring programs that incorporate home spirometry have emerged as viable vehicles for ILD care delivery. [START_REF] Moor | Home Monitoring in Patients with Idiopathic Pulmonary Fibrosis. A Randomized Controlled Trial[END_REF] These programs utilize eHealth technology to monitor patients remotely. Patient reported outcomes and physiologic data, such as spirometry, are collected and transmitted real time to ILD care providers.
Home spirometry has shown promise for detecting ILD progression and for use as a clinical trial endpoint. [START_REF] Russell | Daily Home Spirometry: An Effective Tool for Detecting Progression in Idiopathic Pulmonary Fibrosis[END_REF][START_REF] Johannson | Home monitoring improves endpoint efficiency in idiopathic pulmonary fibrosis[END_REF] Home-based monitoring programs can decrease cumbersome travel for patients and empower them to take an active role in their disease management, while allowing their physicians to monitor them more frequently.
In many cases, it is helpful for the patient to have a local pulmonologist who can alternate follow-up visits with the ILD center. There are several advantages to this model including decreasing travel burden on patients, off-loading ILD centers with long waiting lists and local access to an expedited assessment in the event of a clinical deterioration. While the ILD center often plays a central role in confirming or changing the diagnosis and treatment plan, local pulmonologists can provide valuable ongoing monitoring and liaise with expert centers on behalf of their patient in the event of a clinical change. Some ILD centers provide expert opinions to local pulmonologists without directly seeing patients. The best care pathway and balance between remote, in-person and shared follow-up varies between centers and is dependent on local needs and priorities.
Racial, ethnic and gender disparities and ILD care delivery
Racial, ethnic and gender disparities are well described determinants of health and healthcare access. [START_REF] Bach | Primary care physicians who treat blacks and whites[END_REF][START_REF]Medicine Io. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care[END_REF][START_REF] Shannon | Gender equality in science, medicine, and global health: where are we at and why does it matter?[END_REF] It is not known how these characteristics influence access to specialized ILD care or its delivery. Patient gender has been shown to significantly influence whether a patient receives an IPF diagnosis, suggesting gender bias may exist amongst ILD physicians. [START_REF] Assayag | Patient gender bias on the diagnosis of idiopathic pulmonary fibrosis[END_REF] In addition, a patient's ILD care needs, expectations and goals may vary according to culture and region of origin and are factors that need to be considered when delivering care. [START_REF] Kreuter | Palliative care in interstitial lung disease: living well[END_REF] Research studies evaluating the role that race, ethnicity and gender play in how specialized ILD care is accessed and delivered are urgently required.
Future of ILD care delivery
With the exponential pace of knowledge generation, medicine is becoming increasingly subspecialized. ILD specifically has seen many changes in the past decade, with ongoing new discoveries. With such rapid advances, effective knowledge translation and dissemination becomes an increasingly daunting task as care delivery becomes more complex. As a field, the ILD community can leverage collective international expertise to continuously improve ILD care delivery. There are many opportunities for integrating both national and international expertise into local care delivery models. One example would be providing the ability to bring exceptional cases to a national or international MDD. The framework for these types of endeavors already exists in some places, with the European Research Network (ERN-LUNG, https://ern-lung.eu/) being one example. Such frameworks can be expanded upon to include other jurisdictions. Infrastructure for national and international clinical care collaborations can also provide a mechanism for less formal interactions, such as asking for advice on a specific aspect of a case (i.e. chest computed tomography or histopathology slides). These types of collaborations can increase access to highly specialized ILD care for both remote and marginalized communities and provide additional platforms for education and research. Establishing funding structures that support the time and resources needed to deliver this complex care is critical.
SUMMARY
In summary, comprehensive ILD care delivery has several key components including diagnosis, treatment, monitoring, support/advocacy, education and research. The role of the specialized center in care delivery is multifaceted (Figure 1), with an overarching goal of improving patient care and advancing the field of ILD. The current role of the specialized center in ILD care delivery models faces significant feasibility and generalizability challenges. Creative and innovative strategies are needed to find ways to optimally deliver ILD care to the highest number of patients possible.

Figure 1. Key components of interstitial lung disease care delivery. CME, continuing medical education.

Clinics Care Points

Table 1. Summary of Key Messages

KEY COMPONENTS OF ILD CARE AND THE ROLE OF THE SPECIALIZED CENTER IN DELIVERY

Diagnosis
• Accurate and timely ILD diagnosis is a key component of ILD care.
• MDD requires access to specialty resources that are typically only available at specialized ILD centers.

Management
• ILD management is multifaceted with pharmacologic and non-pharmacologic components.
• Treatment decisions can be nuanced and are often guided by expert opinion given that many cases do not fall into existing management guidelines.
• The infrastructure needed to provide comprehensive ILD management is typically limited outside of the specialized center.

Longitudinal monitoring
• Longitudinal monitoring of ILD patients is essential to informing management decision making.
• A 'shared-care' model between local pulmonologists and ILD centers can facilitate timely access to care, minimize patient travel and 'off-load' specialty clinics.

Medical education
• ILD-related medical education for clinicians and trainees is a key component of ILD care delivery. Additional research on ILD knowledge gaps and effective education strategies is required.

Patient education, support and advocacy
• Patient education, support and advocacy is a key component of ILD care delivery.
• Specialized ILD clinics facilitate patient education, support and advocacy through direct patient interaction and community engagement.

Research and access to clinical trials
• Access to clinical trials is an important part of ILD care delivery.
• The infrastructure, momentum and commitment to research required to conduct clinical trials in ILD is typically available at specialized centers.

CHALLENGES AND SOLUTIONS FACING ILD CARE DELIVERY

Multidisciplinary discussion
• Limited access to an ILD center and a specialized MDD is a barrier to a timely diagnosis and effective treatment plan for patients with ILD.
• Using virtual MDDs is a strategy to increase MDD access.

Longitudinal monitoring
• Remote monitoring and shared-care models with local physicians can decrease travel burden for ILD patients, provide timely access to care, off-load specialized clinics and contribute to the continuing medical education of referring physicians.

Racial, ethnic and gender disparities and ILD care delivery
• Research on race, ethnic and gender disparities in ILD care access and delivery is urgently needed.

Future of ILD care delivery
• International collaborations and networks are the future of ILD care delivery. |
04105479 | en | [
"spi"
] | 2024/03/04 16:41:22 | 2022 | https://enac.hal.science/tel-04105479/file/20220728_thesis.pdf | Contents
General Introduction
Context
Over the last couple of years, the use of three-dimensional (3D) printing, or additive manufacturing, has been increasing in several industry sectors. It consists of the additive construction of 3D objects based on computer-aided design (CAD) files on a layer-by-layer basis [START_REF] Redwood | The 3D printing handbook: technologies, design and applications[END_REF]. 3D printing is in the spotlight due to its several advantages such as flexible design, low manufacturing cost, reduced waste, variety of printing materials, and fabrication on demand, to name a few. Due to these advantages, it is possible to find applications of 3D printing in the aviation [START_REF] Gisario | Metal additive manufacturing in the commercial aviation industry: A review[END_REF], medical [START_REF] Mallikarjuna N Nadagouda | A review on 3D printing techniques for medical applications[END_REF], automotive [START_REF] Lecklider | 3D printing drives automotive innovation[END_REF], civil construction [START_REF] Furet | 3D printing for construction based on a complex wall of polymer-foam and concrete[END_REF], and food industries [START_REF] Jeffrey I Lipton | Additive manufacturing for the food industry[END_REF], for instance.
When it comes to antenna applications, it is possible to find several 3D-printed structures proposed in the literature [START_REF] Diogo | Antenna design using modern additive manufacturing technology: A review[END_REF]. Due to the several advantages offered by this technology, the interest in 3D-printed antennas has been growing exponentially, as can be seen in Fig. 1, where the trend of the term "3D printed antennas" in the Google Scholar database is presented over the last years.

Figure 1: Trend of the term "3D printed antennas" from 2010 to 2020 in the Google Scholar database [START_REF] Paolo Chietera | Dielectric Resonators Antennas Potential Unleashed by 3D Printing Technology: A Practical Application in the IoT Framework[END_REF].

Usually, 3D-printed antennas are classified in terms of the materials used in the printing process: 1) antennas printed with dielectric materials and subsequently metalized, 2) antennas printed directly with conductive materials, and 3) full-dielectric antennas. Figure 2(a) shows an example of a Voronoi antenna, where the structure is 3D-printed in plastic and then coated with metal [START_REF] Bahr | Novel uniquely 3D printed intricate Voronoi and fractal 3D antennas[END_REF]. The 3D-printed plastic coated with a highly conductive layer has a low fabrication cost, but the metal coating may limit the electrical performance of the antenna at high frequency due to the surface roughness and non-uniformity of the metallization process. In Fig. 2(b), a full-metal horn antenna is presented [START_REF] Agnihotri | Design of a 3D metal printed axial corrugated horn antenna covering full Ka-band[END_REF], where 3D printing of metal is used to manufacture an axial corrugated horn for mm-wave frequencies. Surface roughness is also a challenge for antennas printed directly with conductive materials. Therefore, 3D-printed antennas made with conductive materials are still challenging, and full-dielectric 3D-printed antennas can be an interesting alternative. 3D-printing technology allows the creation of full-dielectric antennas with high precision and fine details [START_REF] Diogo | Antenna design using modern additive manufacturing technology: A review[END_REF], which opens up new possibilities and more design flexibility. For instance, it is possible to control the permittivity of an artificial homogeneous and isotropic lens used to enhance the gain of a broadband antenna, as can be seen in Fig. 3(a). In [START_REF] Papathanasopoulos | 3-D-Printed Shaped and Material-Optimized Lenses for Next-Generation Spaceborne Wind Scatterometer Weather Radars[END_REF], an inhomogeneous Luneburg lens is created using 3D-printing technology, as can be observed in Fig. 3(b). Besides, 3D printing has already been used to create an artificial anisotropic dielectric [START_REF] Ding | Wideband Omnidirectional Circularly Polarized Antenna for Millimeter-Wave Applications Using Conformal Artificial Anisotropic Polarizer[END_REF], as shown in Fig. 3(c), where a polarizer is proposed to convert a linearly-polarized radiator into a circularly-polarized one. These three examples show the possibilities and degrees of freedom that 3D-printed dielectrics can bring: using isotropic and homogeneous printing materials with a low dielectric constant, i.e. around 3 for these examples, it is possible to create artificial isotropic or anisotropic, homogeneous and/or inhomogeneous materials with a controlled dielectric constant to design different structures.
Figure 3: Examples of 3D-printed, non-resonant, and electrically-large dielectric structures for antenna applications: (a) homogeneous lens, (b) inhomogeneous lens (εr(r)) [START_REF] Papathanasopoulos | 3-D-Printed Shaped and Material-Optimized Lenses for Next-Generation Spaceborne Wind Scatterometer Weather Radars[END_REF], and (c) anisotropic polarizer (εr) [START_REF] Ding | Wideband Omnidirectional Circularly Polarized Antenna for Millimeter-Wave Applications Using Conformal Artificial Anisotropic Polarizer[END_REF].

Most of the 3D-printed dielectric-only structures proposed in the literature for antenna applications are non-resonant, electrically large, and use printing materials with a low dielectric constant, as in Fig. 3. However, due to recent developments in this technology, it is now possible to use ceramics with high permittivity for this type of application, which opens up new possibilities to design resonant and electrically-small structures. In France, the company 3DCeram develops 3D printers that use ceramics such as zirconia as printing materials; zirconia is a material with high permittivity (ε r = 32.5), low loss (tan δ = 1.9 × 10⁻⁴), and excellent physical and chemical properties, ideal for harsh environments. From this technology, dielectric resonator antennas (DRAs) using zirconia were recently designed, and some examples are shown in Fig. 4. In Toulouse, Anywaves proposed a homogeneous and isotropic DRA, where the effective permittivity is controlled by the use of 3D-printed periodic sub-wavelength unit cells [START_REF] Mazingue | 3D Printed Ceramic Antennas for Space Applications[END_REF], as shown in Fig. 4(a). On the other hand, Anywaves and XLIM designed an artificial inhomogeneous and isotropic dielectric resonator, presented in Fig. 4(b), to create a dual-band circularly-polarized DRA [START_REF] Lamotte | Multi-permittivity 3D-printed Ceramic Dual-Band Circularly Polarized Dielectric Resonator Antenna for Space Applications[END_REF]. Also, 3D printing of anisotropic dielectric samples has been demonstrated in [START_REF] David | 3D-printed ceramics with engineered anisotropy for dielectric resonator antenna applications[END_REF] with a circularly-polarized antenna realized in Toulouse by ENAC, ISAE-SUPAERO, and Anywaves, as seen in Fig. 4(c). In these three cases, zirconia, which is an isotropic and homogeneous material, is used as the printing material. Using 3D printing, it is therefore possible to design artificial anisotropic, isotropic, homogeneous, and/or inhomogeneous dielectrics and, thus, to create new possibilities for the design of electrically-small dielectric resonator antennas.

Figure 4: Examples of 3D-printed, resonant, and electrically-small dielectric structures for antenna applications: (a) homogeneous DRA (0.25λ0) [START_REF] Mazingue | 3D Printed Ceramic Antennas for Space Applications[END_REF], (b) inhomogeneous DRA (0.31λ0) [START_REF] Lamotte | Multi-permittivity 3D-printed Ceramic Dual-Band Circularly Polarized Dielectric Resonator Antenna for Space Applications[END_REF], and (c) anisotropic DRA (0.21λ0) [START_REF] David | 3D-printed ceramics with engineered anisotropy for dielectric resonator antenna applications[END_REF].
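To give a rough feel for how such sub-wavelength ceramic/air lattices can emulate a lower effective permittivity, the sketch below applies a simple volume-average mixing rule; this is only an illustrative first-order estimate under an assumed linear mixing law, not the homogenization procedure used in the cited designs, and the filling fraction chosen is hypothetical.

```python
def effective_permittivity(eps_ceramic, eps_air, fill_fraction):
    """First-order volume-average estimate of the effective relative
    permittivity of a two-phase (ceramic/air) sub-wavelength lattice.
    Rigorous designs rely on full-wave homogenization of the unit cell."""
    return fill_fraction * eps_ceramic + (1.0 - fill_fraction) * eps_air

# Example: zirconia (eps_r = 32.5) with a 40% ceramic filling fraction.
print(effective_permittivity(32.5, 1.0, 0.40))  # ~13.6
```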
So far, dielectric resonator antennas have not been as popular as microstrip antennas due to their high cost and difficult manufacturing process. However, a DRA, by itself, has advantages such as the absence of ohmic losses, small size, several radiating modes, design flexibility, and large impedance bandwidth. The 3D printing of ceramics has the potential to change this scenario and bring DRAs to prominence, with the possibility of fabricating them in different shapes and with a wide variety of artificial effective dielectric properties.
The new degrees of freedom offered by 3D printing for DRA design can help to develop multiband antennas, which are in high demand in different applications. For instance, unmanned aerial vehicles (UAVs) and nanosatellites are experiencing high demand, and a lot of effort has been made to reduce their size, which decreases the space available for their components such as antennas. At the same time, these platforms require different communication links at different frequencies for telemetry, control, data, and/or geolocalization, for instance. Each of these applications may also demand different radiating properties and frequencies. As aforementioned, in [START_REF] Lamotte | Multi-permittivity 3D-printed Ceramic Dual-Band Circularly Polarized Dielectric Resonator Antenna for Space Applications[END_REF], a dual-band circularly-polarized 3D-printed inhomogeneous DRA has already been proposed. However, the circular polarization (CP) is achieved mainly by means of a complex feeding method and not directly due to the dielectric properties of the antenna. On the other hand, the CP obtained in [START_REF] David | 3D-printed ceramics with engineered anisotropy for dielectric resonator antenna applications[END_REF] is due to the anisotropic DR, but the antenna presents a single-band operation. Nevertheless, the possibility of using the concepts of inhomogeneity and anisotropy together in a resonator is promising and increases the design flexibility to develop multiband DRAs with different polarizations and radiation patterns according to a given application. This is very important for different communication systems since it allows the use of a single antenna for different frequencies instead of multiple ones, which helps to save precious space in miniaturized platforms.
Objectives and Contribution
The main goal of this Ph.D. thesis is to study the capacity of 3D-printed ceramics for the design of new multiband and compact dielectric resonator antennas with good electromagnetic performances and respecting the constraints of small platforms. The possibility of having an inhomogeneous and anisotropic dielectric resonator with controlled permittivity is investigated for the first time with the use of 3D-printing technology.
Objectives
In this thesis, two main research axes are investigated: I) Proposal of sub-wavelength periodic structures to design artificial inhomogeneous dielectric mixing isotropic and anisotropic media.
II) Design and fabrication of multiband antennas with different radiation patterns and polarizations by 3D printing.
Contribution
The contribution of this thesis consists of two distinct levels: I) Some works in the literature already explored the use of DRA with anisotropic dielectrics, however, in most of these works, they were homogeneous. In this work, the electric properties of the dielectric are locally controlled to have circular polarization and/or control the radiation patterns at different frequency bands. In other words, inhomogeneous isotropic and/or anisotropic dielectrics are introduced within a rectangular DRA and, then, the expected electromagnetic behavior is achieved while keeping the same overall shape of the antenna.
II) To realize these antennas, the concept of periodic structures is used, where sub-wavelength unit cells are proposed to create artificial isotropic and anisotropic media inside the DRA. Thus, the dielectric properties of the antenna can be locally manipulated. To fabricate these antennas, 3D printing is employed, which allows the manufacturing of such small cells using zirconia, which is a physically robust material with excellent dielectric properties at microwave frequencies. A prototype is 3D-printed and measurements are carried out to validate the proposed antenna.
Outline
This Ph.D. thesis is structured as follows.
Chapter 1 aims to detail the necessary background for the comprehension of this work. An introduction about dielectric resonator antennas is presented, where their main characteristics are discussed and their equations for the electric field, resonance frequency, and Q-factor are developed. Moreover, a state-of-the-art to discuss different techniques to obtain multiband DRAs is presented, which helps to explain the motivation of this work.
Chapter 2 presents a single-fed dual-band linearly-polarized rectangular DRA with broadside radiation patterns. The first part shows an attempt to do so, however, the presence of an undesirable radiating mode is observed around the upper band. To overcome this issue, the electric field distributions of the modes of the DRA are analyzed and, thus, the permittivity of the dielectric is locally controlled to increase the resonance frequency of this undesirable mode, while keeping the expected radiation patterns at the lower and upper bands.
Chapter 3 aims to design a dual-band rectangular DRA with circular polarization. To do so, in the first part, the electric fields of the modes at issue are investigated and the possibility of using an anisotropic dielectric in some specific parts of the dielectric resonator is discussed. This is theoretically confirmed since the conditions to have CP are respected at both bands. The second part of this chapter provides the detailed results of the antenna with a finite ground plane and fed by a single slot. Thirdly, a parametric analysis is performed to provide a better understanding of the effect of each parameter of the anisotropic region on the performance of the antenna and a design guideline is presented. Lastly, a study is presented to show the trade-off between DRA volume and theoretical 3-dB axial-ratio bandwidth of the proposed design.
Chapter 4 introduces the concept of periodic structures made out of sub-wavelength unit cells by 3D-printing. Indeed, in the previous chapters, the dielectric properties of the antennas are directly assigned on Ansys HFSS. However, to manufacture these antennas, it is necessary to come up with ideas on how to achieve the necessary dielectric properties. Here, we propose to 3D-print periodic structures of sub-wavelength unit cells to create artificial media with controlled effective permittivity. Taking this into account, we first describe the 3D-printing technologies. In the second part, a state-of-the-art about 3D-printed dielectric resonator antenna is presented. In the last part, the concept of periodic structures is introduced, and sub-wavelength unit cells that emulate artificial isotropic and anisotropic media are proposed.
Chapter 5 then presents the simulated results of the dual-band circularly-polarized DRA made out of periodic sub-wavelength unit cells. These results are compared with the antenna model used in Chapter 3, where the dielectric properties are directly assigned on Ansys HFSS. Secondly, the manufacturing process is described as well as the measurement setup. Finally, simulated and measured results are discussed.
In Chapter 6, a third band with a linearly-polarized omnidirectional radiation pattern is added to the previously proposed dual-band DRA. This new antenna operates at the GNSS L5 and L1 bands as well as the 2.45-GHz ISM band. An air cavity is here added to the center of the antenna in order to control the matching and resonance frequency of the resonant mode at the ISM band. To still have circular polarization at the L5 and L1 bands, the radiating modes are carefully analyzed so that the addition of the third band and its new elements do not disturb the performance of DRA at the GNSS bands.
Finally, the Conclusions are synthesized from the results obtained in this Ph.D. thesis. Also, perspectives for this work are discussed as well.
Chapter 1
Introduction to Dielectric Resonator Antennas
The purpose of this chapter is to present an introduction to dielectric resonator antennas (DRAs). First, some reminders on dielectrics are briefly given in Section 1.1. In Section 1.2, a brief historical perspective on DRAs is presented. Then, the definition of DRAs, their different shapes and feeding methods, and their main characteristics are discussed in Section 1.3. The theoretical aspects of rectangular DRAs are then addressed, and the expressions for the electric and magnetic field distributions, resonance frequency, and Q-factor are developed in Section 1.4. Finally, Section 1.5 presents the state of the art on multiband DRAs.
Reminders on Dielectrics
Dielectrics are materials that become polarized when an external electric field E a is applied. More precisely, the centroids of the bound negative and positive charges of the atoms inside a dielectric are slightly displaced in opposite directions when E a is applied, as illustrated in Fig. 1.1, thereby creating numerous electric dipoles. The effect of these dipoles is accounted for by the electric polarization vector P ; in other words, P can be understood as the response of the dielectric to the applied field E a . In the time-harmonic domain, for a linear, homogeneous, and non-dispersive medium, these quantities are related through the following relation:
D = ε 0 E a + P , ( 1.1)
where D is the electric flux density vector and ε 0 is the free-space permittivity. The terms ε 0 E a and electric polarization P account for the vacuum and material responses, respectively. Also, P can be expressed as
P = ε 0 [χ e ] E a , ( 1.2)
where [χ e ] is the tensor of electric susceptibility, which is a measure of how easily bound charges are displaced due to an applied electric field. Therefore, (1.1) can be rewritten as
D = [ε] E a , (1.3)
where
[ε] = ε 0 (1 + [χ e ]) = [ε r ] ε 0
is the permittivity of the material. Thus, in the time-harmonic domain and assuming a linear, homogeneous, and non-dispersive medium, the most general relation between D and E involves a permittivity tensor of rank two, which can be expressed as follows:
\begin{bmatrix} D_x \\ D_y \\ D_z \end{bmatrix} = \varepsilon_0 \begin{bmatrix} \varepsilon_{xx} & \varepsilon_{xy} & \varepsilon_{xz} \\ \varepsilon_{yx} & \varepsilon_{yy} & \varepsilon_{yz} \\ \varepsilon_{zx} & \varepsilon_{zy} & \varepsilon_{zz} \end{bmatrix} \begin{bmatrix} E_x \\ E_y \\ E_z \end{bmatrix} . \quad (1.4)
For reciprocal materials/media, the tensor of permittivity is a symmetric matrix. A symmetric matrix implies the existence of a coordinate transformation that diagonalizes the tensor of permittivity [START_REF] Au | Electromagnetic wave theory[END_REF]. Thus, [ε] can be written as
[\varepsilon] = \varepsilon_0 \begin{bmatrix} \varepsilon_x & 0 & 0 \\ 0 & \varepsilon_y & 0 \\ 0 & 0 & \varepsilon_z \end{bmatrix} . \quad (1.5)
If all three components of this diagonal matrix are equal, the dielectric is said to be isotropic, which means that its electric permittivity is not a function of the direction of the applied electric field.
Electrically anisotropic, or simply anisotropic, materials are those whose electric permittivity is a function of the direction of the applied electric field, i.e. their charges are more easily displaced in some directions than in others. When only two out of three elements of the diagonal of the tensor are equal, the material is said to present uniaxial anisotropy. In this case, the dielectric has an ordinary permittivity ε o and an extraordinary permittivity ε e . If ε o < ε e , the material is said to present positive birefringence. On the contrary, if ε o > ε e , the dielectric is said to have negative birefringence. Also, the principal axis that presents the anisotropy is referred to as the optic axis. For instance, for a uniaxial dielectric with
[\varepsilon] = \varepsilon_0 \begin{bmatrix} \varepsilon_o & 0 & 0 \\ 0 & \varepsilon_o & 0 \\ 0 & 0 & \varepsilon_e \end{bmatrix} , \quad (1.6)
the z-axis is the optic axis. In addition, if ε x ≠ ε y ≠ ε z , the material is said to be biaxially anisotropic.
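To make the uniaxial tensor of (1.6) more concrete, the short sketch below (with purely illustrative permittivity values, not those of a specific material) applies a diagonal permittivity tensor to fields polarized along an ordinary axis and along the optic axis, showing that only the component along the optic axis sees ε e.

```python
import numpy as np

eps0 = 8.854e-12            # free-space permittivity (F/m)
eps_o, eps_e = 20.0, 30.0   # illustrative ordinary/extraordinary values

# Uniaxial tensor of (1.6): optic axis along z (positive birefringence here).
eps_r = np.diag([eps_o, eps_o, eps_e])

E_x = np.array([1.0, 0.0, 0.0])   # field along an ordinary axis
E_z = np.array([0.0, 0.0, 1.0])   # field along the optic axis

D_x = eps0 * eps_r @ E_x          # -> eps0 * eps_o along x
D_z = eps0 * eps_r @ E_z          # -> eps0 * eps_e along z
print(D_x, D_z)
```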
A Brief History of the DRA
In 1939, Richtmyer theoretically demonstrated for the first time that open dielectric resonators (DRs) in the form of spheres and toroids could radiate into free space [START_REF] Rd Richtmyer | Dielectric resonators[END_REF], but this theoretical work did not generate sustained interest in the subject in the short or medium term. In the early 1960s, interest in DRs was renewed thanks to the introduction of materials with high relative permittivity, such as rutile, but it was still not possible to develop practical microwave components because their poor temperature stability led to large resonant frequency changes [START_REF] Balanis | Advanced engineering electromagnetics[END_REF]. Nevertheless, in 1962, their modes were experimentally investigated for the first time by Okaya and Barash [START_REF] Okaya | The dielectric microwave resonator[END_REF].
In the 1970s, a new material evolution boosted the use of DRs with the introduction of low-loss ceramics, such as barium tetratitanate and (Zr-Sn)TiO 4 . These ceramics were employed to fabricate DRs for monolithic microwave integrated circuits (MMICs) and semiconductor devices, due to their light weight, temperature stability, high Q-factor, and low cost. In this scenario, DRs replaced traditional waveguide resonators in MIC applications, especially after the development of materials with dielectric constants higher than 80, temperature stability, and low loss [START_REF] Balanis | Advanced engineering electromagnetics[END_REF]. In addition to the evolution of the materials, this decade also witnessed some important theoretical studies. In 1975, Van Bladel investigated the general nature of the internal and radiated fields of DRs by reporting a rigorous asymptotic method for evaluating their modes considering complex shapes with high permittivity
Until the early 1980s, DRs had been mainly used as circuit elements in microwave integrated circuits [START_REF] Balanis | Advanced engineering electromagnetics[END_REF]. However, in 1983, at the University of Houston, Long, McAllister, and Shen presented the first theoretical and experimental systematic investigation about the use of DRs as antennas, namely dielectric resonator antennas (DRAs) [START_REF] Long | The resonant cylindrical dielectric cavity antenna[END_REF][START_REF] Mcallister | Rectangular dielectric resonator antenna[END_REF][START_REF] Mcallister | Resonant hemispherical dielectric antenna[END_REF]. In this work, they developed cylindrical DRAs, where their analysis of radiation patterns, excitation methods, and resonant modes demonstrated their potential for millimeter-wave frequency applications. Still, in the 1980s, Birand and Gelsthorpe demonstrated the first linear array of DRAs [START_REF] Birand | Experimental millimetric array using dielectric radiators fed by means of dielectric waveguide[END_REF], which was fed by a dielectric waveguide, and Haneishi and Takazawa proposed a linear array with broadband circular polarization [START_REF] Haneishi | Broadband circularly polarised planar array composed of a pair of dielectric resonator antennas[END_REF].
In the late 1980s and early 1990s, most of the focus of the researchers around the globe interested in DRAs was on proposing different feeding methods to excite DRAs and the employment of analytical and numerical techniques to predict their Q-factor and input impedance. Some of the main studies were carried out at the University of Mississippi by Kishk and his research group [START_REF] Aa Kishk | Accurate prediction of radiation patterns of dielectric resonator antennas[END_REF][START_REF] Aa Kishk | Broadband stacked dielectric resonator antennas[END_REF][START_REF] Ahmed | Radiation characteristics of dielectric resonator antennas loaded with a beam-forming ring[END_REF], and at the City University of Hong Kong by Leung and Luk [START_REF] Leung | Input impedance of hemispherical dielectric resonator antenna[END_REF][START_REF] Leung | Theory and experiment of a coaxial probe fed hemispherical dielectric resonator antenna[END_REF]. In 1994, Mongia and Bhartia presented a review paper to summarize most of these works and standardize the modes nomenclature and provide simpler equations to theoretically calculate the resonance frequency and Q-factors for DRAs with different shapes. Moreover, at the Communications Research Centre (CRC) in Ottawa, a research group led by Ittpiboon, Mongia et al. was responsible for some other important works on DRAs in this decade as well [START_REF] Rk Mongia | Half-split dielectric resonator placed on metallic plane for antenna applications[END_REF][START_REF] Ittipiboon | Aperture fed rectangular and triangular dielectric resonators for use as magnetic dipole antennas[END_REF].
Since then, different characteristics of DRAs have been examined, along with different feeding methods and complex shapes. However, DRAs with multiple bands, circular polarization, pattern diversity, and broad bandwidth, to name just a few features, are still in the spotlight due to their numerous advantages when compared to metallic antennas, such as design flexibility, small size, easy excitation, and high radiation efficiency [START_REF] Petosa | Dielectric resonator antenna handbook[END_REF]. The most important characteristics of DRAs are presented in the following section.
Definition and Main Characteristics of the DRA
Dielectric resonator antennas are resonant antennas that consist of a block of low-loss dielectric material that is normally mounted over a metallic ground plane. The resonance frequency of a DRA is a function of its shape, size, and relative permittivity ε_r. The literature reports the use of dielectrics with ε_r ranging from about 6 to 100, which provides easy control over the size and bandwidth of the DRA. Indeed, a small size is usually achieved with a high ε_r, while a wide bandwidth is achievable using a low permittivity. Moreover, DRAs present high radiation efficiency due to the absence of conductors, which is an advantage when compared with metallic antennas, especially at millimeter-wave frequencies [START_REF] Petosa | Dielectric resonator antenna handbook[END_REF].
The DRAs can be designed considering a great variety of shapes and some of them can be seen in Fig. 1.2, which provides a high degree of freedom to the design of this kind of antenna, where each shape can present its particular radiating modes and characteristics. The most popular and traditional shapes are the hemispherical, cylindrical, and rectangular ones, which are shown in Fig. 1.2(a), (b), and (c), respectively. Their design equations for the radiating modes and Q-factor can be easily found in the literature [START_REF] Petosa | Dielectric resonator antenna handbook[END_REF]. The hemispherical DRA presents two degrees of freedom to its design, which are its radius a and relative permittivity ε r . On the other hand, the cylindrical DRA is characterized by its radius a, relative permittivity ε r , and height h, which leads to one degree of freedom more than the hemispherical case. Finally, the rectangular DRA is ruled by its width w, length l, height h, and relative permittivity ε r , offering one degree of freedom more than the cylindrical DRA. Note that the rectangular shape is the one used throughout this work and more details about it are given in Section 1.4. Finally, some non-conventional shapes can be used as well, as can be observed in Fig. 1.2(d), (e), and (f), to achieve some given characteristics. Dielectric resonator antennas can be excited by different types of feeding methods, such as slots, microstrip lines, probes, dielectric image guides, or co-planar lines. Some of them can be observed in Fig. 1.3. This feature is really important since it facilitates the integration of DRAs with various communication systems. Also, the choice of the proper feeding method depends on the radiating mode of the DRA that needs to be excited, and, then, it is important to know the electromagnetic field distribution of these modes.
Including the aforementioned features, DRAs are attractive for several applications due to their major characteristics, which can be summarized as follows:
• The size of DRAs is proportional to λ_0/√ε_r, where λ_0 and ε_r are the free-space wavelength and the dielectric constant of the material, respectively. This feature is important since the size of a DRA can be controlled by choosing the proper ε_r, with values ranging from 6 to 100 reported in the literature (a rough size estimate is sketched after this list);
• Various radiating modes can be excited to generate different types of radiation patterns, such as broadside or omnidirectional ones, for different coverage requirements;
• The gain, bandwidth, and polarization can be controlled by adjusting the shape of the DRA, choosing the proper feeding method, and/or using multiple excitation ports, for instance.
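As a rough, hedged illustration of the size scaling mentioned in the first item above, the snippet below evaluates λ_0/√ε_r at an assumed design frequency; the frequency and permittivity values are arbitrary examples, not taken from any design in this work.

```python
# Rough sketch of the lambda_0 / sqrt(eps_r) size scaling (assumed values).
c0 = 299_792_458.0          # free-space speed of light (m/s)
f = 1.575e9                 # assumed design frequency (Hz)
lam0 = c0 / f
for eps_r in (6, 10, 40, 100):
    print(f"eps_r = {eps_r:3d}: characteristic size ~ {1e3 * lam0 / eps_r**0.5:.1f} mm")
```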
The Rectangular DRA
In this section, the expressions for the field distribution, resonance frequencies, and quality factor of the radiating modes of a rectangular dielectric resonator antenna are developed and discussed. The rectangular shape is considered throughout this work due to its higher degree of freedom when compared to the cylindrical and hemispherical ones.
At first, an isolated rectangular dielectric resonator (DR) is considered, as shown in Fig. 1.4, where the origin of the cartesian coordinate system is placed at the center of the DR. According to Okaya and Barash, the modes in an isolated rectangular DR can be divided into TE and TM [START_REF] Okaya | The dielectric microwave resonator[END_REF]. However, the existence of TM modes in rectangular DRs has not been proven experimentally. In this work, the TE x δnp and TE y mδp modes are mainly considered and they resonate like short magnetic dipoles placed along the x-and y-directions, respectively [START_REF] Petosa | Dielectric resonator antenna handbook[END_REF]. The value δ represents the fraction of a half-cycle of the field variation in the x-and y-directions for the TE x δnp and TE y mδp modes. Also, m, n, and p are positive integers that represent the field variation along the x-, y-, and z-directions, respectively.
Field Distribution
In this subsection, the Dielectric Waveguide Model (DWM) [START_REF] Marcatili | Dielectric rectangular waveguide and directional coupler for integrated optics[END_REF] is used to predict the magnetic and electric field distributions of an isotropic and homogeneous isolated rectangular DR, shown in Fig. 1.4. For the TE^x_δnp modes, for instance, this DR is first assumed to be an isolated infinite dielectric waveguide along the x-direction with perfect magnetic conductor (PMC) walls [START_REF] Henry | Mutual Coupling Between Rectangular Dielectric Resonator Antenna Elements[END_REF], which is then truncated at x = ±d/2. Finally, the field components inside the DR can be derived from the x-directed magnetic potential F = x̂ F_x(x, y, z) [START_REF] Roger F Harrington | Time-harmonic electromagnetic fields[END_REF]. In this work, the TE^x_δ11 and TE^x_δ13 modes are mainly used and their magnetic and electric field components can be written as
$$E_x = 0, \qquad (1.7)$$
$$E_y = A k_z \cos(k_x x)\cos(k_y y)\sin(k_z z), \qquad (1.8)$$
$$E_z = -A k_y \cos(k_x x)\sin(k_y y)\cos(k_z z), \qquad (1.9)$$
$$H_x = A\,\frac{k_y^2 + k_z^2}{j\omega\mu_0}\,\cos(k_x x)\sin(k_y y)\cos(k_z z), \qquad (1.10)$$
$$H_y = A\,\frac{k_x k_y}{j\omega\mu_0}\,\sin(k_x x)\sin(k_y y)\cos(k_z z), \qquad (1.11)$$
$$H_z = A\,\frac{k_x k_z}{j\omega\mu_0}\,\sin(k_x x)\cos(k_y y)\cos(k_z z), \qquad (1.12)$$
where A is an arbitrary amplitude, k_x = δπ/d, k_y = nπ/w, and k_z = pπ/h denote the wavenumbers along the x-, y-, and z-directions inside the resonator, respectively, n and p are odd positive integers, ω is the angular frequency, and μ_0 is the free-space permeability.
To provide a better understanding of the characteristics of the TE modes in a rectangular DR, Fig. 1.5 shows the electric field distribution, i.e. E = E_x x̂ + E_y ŷ + E_z ẑ, of the TE^x_δ11 and TE^x_δ13 modes at x = d/2 in the yz-plane. In Fig. 1.5(a), one can note that the electric field distribution of the TE^x_δ11 mode is similar to that of an x-directed magnetic dipole. On the other hand, from Fig. 1.5(b), the field distribution of the TE^x_δ13 mode is similar to that of an array of three x-directed magnetic dipoles arranged along the z-direction.
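A minimal numerical sketch of these field expressions is given below: it samples E_y and E_z of the TE^x_δ11 and TE^x_δ13 modes on the yz-plane near x = d/2. The dimensions and the value of δ are assumed, purely illustrative numbers (in practice δ follows from the transcendental equation of the next subsection).

```python
import numpy as np

# Sketch of Eqs. (1.7)-(1.9): E-field of the TE^x_d11 / TE^x_d13 modes.
# Dimensions and delta are assumed, purely illustrative values.
d, w, h = 50e-3, 50e-3, 60e-3     # DR dimensions along x, y, z (m)
delta = 0.8                        # assumed fraction of a half-cycle along x
A = 1.0                            # arbitrary amplitude

def e_field(p, y, z, x=d/2):
    """E_y, E_z of the TE^x_{delta,1,p} mode at position (x, y, z)."""
    kx, ky, kz = delta * np.pi / d, np.pi / w, p * np.pi / h
    Ey = A * kz * np.cos(kx * x) * np.cos(ky * y) * np.sin(kz * z)
    Ez = -A * ky * np.cos(kx * x) * np.sin(ky * y) * np.cos(kz * z)
    return Ey, Ez

y = np.linspace(-w/2, w/2, 41)
z = np.linspace(-h/2, h/2, 41)
Y, Z = np.meshgrid(y, z)
Ey11, Ez11 = e_field(1, Y, Z)      # single magnetic-dipole-like distribution
Ey13, Ez13 = e_field(3, Y, Z)      # three dipole-like lobes stacked along z
```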
Resonance Frequencies
To calculate the resonance frequencies of the rectangular DR, its faces at x = ±d/2 are truncated with imperfect magnetic walls (IMW) according to Marcatili's approximation [START_REF] Marcatili | Dielectric rectangular waveguide and directional coupler for integrated optics[END_REF]. This condition allows the fields to propagate through the walls and decay exponentially outside the resonator. For instance, considering that H_x and H'_x are the x-components of the magnetic field inside and outside the dielectric resonator, respectively, the continuity imposed by the boundary conditions requires H_x = H'_x and ∂H_x/∂x = ∂H'_x/∂x at x = ±d/2. Thus, H'_x can be expressed as
$$H'_x = A'\,\frac{k_y^2 + k_z^2}{j\omega\mu_0}\, e^{-j k'_x x} \sin(k_y y)\cos(k_z z), \qquad (1.13)$$
where A' is an arbitrary amplitude and k'_x is the wavenumber along the x-direction outside the dielectric resonator. Also, at x = d/2, H_x = H'_x can be written as
$$A \cos\!\left(\frac{k_x d}{2}\right) = A' e^{-j k'_x d/2}, \qquad (1.14)$$
and ∂H_x/∂x = ∂H'_x/∂x is represented as
$$-A k_x \sin\!\left(\frac{k_x d}{2}\right) = -A' j k'_x e^{-j k'_x d/2}. \qquad (1.15)$$
Dividing equation (1.15) by (1.14), it is possible to find the following transcendental equation
$$k_x \tan\!\left(\frac{k_x d}{2}\right) = \sqrt{(\varepsilon_r - 1)k_0^2 - k_x^2}, \qquad (1.16)$$
where ε r is the relative permittivity of the DR. Therefore, using the following separation equation
$$k_x^2 + k_y^2 + k_z^2 = \varepsilon_r k_0^2, \qquad (1.17)$$
together with the transcendental equation (1.16), it is possible to solve for k_x, k_y, and k_z and thus calculate the resonance frequency of the DR for a given mode.
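As a hedged illustration of this procedure, the following Python sketch solves (1.16) together with (1.17) for the TE^x_δ11 mode; the dimensions and permittivity are assumed values chosen only for demonstration, not the designs discussed later in this work.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal DWM sketch: resonance frequency of the TE^x_d11 mode of an isolated
# rectangular DR from Eqs. (1.16)-(1.17). Dimensions and eps_r are assumed.
c0 = 299_792_458.0
eps_r = 10.0
d, w, h = 50e-3, 50e-3, 60e-3      # dimensions along x, y, z (m)
ky, kz = np.pi / w, np.pi / h      # n = p = 1

def mismatch(f):
    """Residual of k_x tan(k_x d/2) = sqrt((eps_r-1) k0^2 - k_x^2)."""
    k0 = 2 * np.pi * f / c0
    kx_sq = eps_r * k0**2 - ky**2 - kz**2          # separation equation (1.17)
    if kx_sq <= 0:
        return -1.0                                 # below the guided regime
    kx = np.sqrt(kx_sq)
    rhs_sq = (eps_r - 1) * k0**2 - kx_sq
    if rhs_sq < 0:
        return 1.0
    return kx * np.tan(kx * d / 2) - np.sqrt(rhs_sq)

# scan for the first sign change, then refine with a bracketing root finder
freqs = np.linspace(0.8e9, 2.5e9, 5000)
vals = np.array([mismatch(f) for f in freqs])
idx = np.where(np.diff(np.sign(vals)) > 0)[0][0]
f_res = brentq(mismatch, freqs[idx], freqs[idx + 1])
print(f"TE^x_d11 resonance frequency ~ {f_res / 1e9:.3f} GHz")
```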
Quality Factor
The quality factor, or simply Q-factor, is an important figure of merit for DRAs. This parameter is directly related to the impedance bandwidth of an antenna: one cannot be optimized without affecting the other [START_REF] Balanis | Antenna theory: analysis and design[END_REF]. Moreover, in general, antennas present conducting, radiation, dielectric, and surface-wave losses, so that the total quality factor Q_t can be expressed as
$$\frac{1}{Q_t} = \frac{1}{Q_{rad}} + \frac{1}{Q_c} + \frac{1}{Q_d} + \frac{1}{Q_{sw}}, \qquad (1.18)$$
where Q rad , Q c , Q d , and Q sw are the quality factors due to radiation (space wave), conduction (ohmic), dielectric, and surface-wave losses, respectively. It is important to point out that for DRAs, Q t ≈ Q rad is usually considered, since low-loss dielectrics are normally employed and there are no conductor and surface-wave losses.
The fractional impedance bandwidth BW can be calculated from the Q-factor and is found as [START_REF] Petosa | Dielectric resonator antenna handbook[END_REF]
$$BW = \frac{\Delta f}{f_0} = \frac{s - 1}{Q_{rad}\sqrt{s}}, \qquad (1.19)$$
where ∆f is the absolute bandwidth, f 0 is the resonance frequency, and s is the maximum acceptable voltage standing wave ratio (VSWR).
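For instance, a quick numerical reading of (1.19), under an assumed radiation Q-factor and the usual s = 2 criterion, could look like the following sketch (values are illustrative only):

```python
import numpy as np

# Sketch of Eq. (1.19): fractional bandwidth for a maximum VSWR s (assumed values).
def fractional_bw(q_rad, s=2.0):
    return (s - 1.0) / (q_rad * np.sqrt(s))

for q in (20, 50, 100):
    print(f"Q_rad = {q:3d} -> BW = {100 * fractional_bw(q):.2f} % (s = 2)")
```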
The radiation quality factor Q_rad of a DR can also be written as [START_REF] Petosa | Dielectric resonator antenna handbook[END_REF]
$$Q_{rad} = 2\omega\,\frac{W_e}{P_{rad}}, \qquad (1.20)$$
where W_e is the stored energy and P_rad is the radiated power. Also, W_e is defined as [START_REF] Balanis | Advanced engineering electromagnetics[END_REF]
$$W_e = \frac{1}{4}\,\varepsilon \int_v |\vec{E}|^2\, dv, \qquad (1.21)$$
where v denotes the volume of the DRA and E = E_x x̂ + E_y ŷ + E_z ẑ.
For instance, considering the TE x δ11 mode, the stored energy W e of a rectangular DRA is given by
$$W_e = \frac{\varepsilon_0 \varepsilon_r A^2\, d w b}{32}\left[1 + \frac{\sin(k_x d)}{k_x d}\right]\left(k_y^2 + k_z^2\right). \qquad (1.22)$$
In [START_REF] Van Bladel | On the resonances of a dielectric resonator of very high permittivity[END_REF], it is seen that the TE 111 modes of a rectangular DR radiate like magnetic dipoles and their radiated power can be represented as
$$P_{rad} = 10\, k_0^4\, |\vec{p}_m|^2, \qquad (1.23)$$
where p m is the magnetic dipole moment [START_REF] Rk Mongia | Theoretical and experimental resonant frequencies of rectangular dielectric resonators[END_REF], expressed as
$$\vec{p}_m = \frac{1}{2}\int_v \vec{r} \times \vec{J}_p\, dv, \qquad (1.24)$$
where r = x x̂ + y ŷ + z ẑ is the position vector from the origin, J_p = jωε_0(ε_r - 1)E is the volume polarization current density, and E is the electric field intensity inside the resonator. Applying (1.7), (1.8) and (1.9) in (1.24), the magnetic dipole moment p_m can be expressed as
$$\vec{p}_m = -j\,\frac{8 A \omega \varepsilon_0 (\varepsilon_r - 1)}{k_x k_y k_z}\, \sin\!\left(\frac{k_x d}{2}\right)\hat{x}. \qquad (1.25)$$
Therefore, knowing P rad and W e , the radiation quality factor can be calculated by employing (1.20). Besides, the same reasoning can be used to calculate the radiation quality factor for the TE y 1δ1 and TE z 11δ modes.
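The chain (1.20)-(1.25) can be scripted directly. The sketch below does so for the TE^x_δ11 mode, assuming wavenumbers and a resonance frequency obtained beforehand from a DWM solution such as the one sketched in the previous subsection; all numerical inputs are assumed example values, and the arbitrary amplitude A cancels out in Q_rad.

```python
import numpy as np

# Sketch of Eqs. (1.20)-(1.25): radiation Q-factor of the TE^x_d11 mode.
# All inputs below are assumed example values (e.g. from a DWM solution).
eps0 = 8.8541878128e-12
c0 = 299_792_458.0
eps_r = 10.0
d, w, b = 50e-3, 50e-3, 60e-3            # DR dimensions (m)
f_res = 1.39e9                            # assumed resonance frequency (Hz)
kx, ky, kz = 42.5, np.pi / w, np.pi / b   # assumed wavenumbers (rad/m)

omega = 2 * np.pi * f_res
k0 = omega / c0
A = 1.0                                   # arbitrary amplitude, cancels in Q_rad

# Stored energy, Eq. (1.22)
We = (eps0 * eps_r * A**2 * d * w * b / 32
      * (1 + np.sin(kx * d) / (kx * d)) * (ky**2 + kz**2))

# Magnetic dipole moment, Eq. (1.25), and radiated power, Eq. (1.23)
pm = 8 * A * omega * eps0 * (eps_r - 1) / (kx * ky * kz) * np.sin(kx * d / 2)
P_rad = 10 * k0**4 * abs(pm)**2

Q_rad = 2 * omega * We / P_rad            # Eq. (1.20)
print(f"Q_rad ~ {Q_rad:.1f}")
```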
State-of-the-Art of Multiband DRAs
Nowadays, the rapid evolution of technology demands that devices such as UAVs and nanosatellites be connected to each other, to base stations, and/or to other devices, which means that their antennas must normally cover different frequency bands for different applications. Moreover, as the size of these devices keeps decreasing, the use of a single antenna for multiple frequencies instead of multiple antennas has become essential. In this context, DRAs can be very useful, since they present a wide variety of radiating modes that can be excited using different techniques. Thus, in this section, a state-of-the-art review of multiband DRAs is presented.
In [START_REF] Lin | Compact dual-band hybrid dielectric resonator antenna with radiating slot[END_REF], a coplanar waveguide (CPW) is used to feed a dual-band DRA with linear polarization.

A dual-band linearly-polarized cylindrical DRA with omnidirectional radiation patterns has been proposed in [START_REF] Mei Pan | Design of dual-band omnidirectional cylindrical dielectric resonator antenna[END_REF], as can be observed in Fig. 1.7. The TM_011 and TM_012 modes are excited at the 3.5-GHz WiMAX and 5.8-GHz WLAN bands by a probe placed at the center of the DRA. Also, analytical formulas to design this type of antenna at different frequencies with different materials have been developed.

So far, only multiband DRAs with linear polarization have been presented. However, the fast-growing development of wireless systems, such as satellite navigation, leads to an increasing interest in circularly-polarized (CP) antennas, since they are less affected by atmospheric conditions and are insensitive to the transmitter and receiver orientations. In this context, DRAs can be an interesting solution since they present a relatively wide bandwidth, interesting radiation characteristics, small size, and low cost. Besides, the possibility of employing different feeding techniques and dielectrics with a diversity of shapes and electric properties allows the design of efficient multiband CP DRAs at different frequency ranges.
The shape of the DRA can play an important role in the design of a CP antenna, since it allows the excitation of orthogonal modes with the same amplitude and in phase quadrature.
In the literature, it is possible to find CP DRAs with shapes such as trapezoidal [START_REF] Pan | Wideband circularly polarized trapezoidal dielectric resonator antenna[END_REF], stair-shaped rectangular [START_REF] Chair | Aperture fed wideband circularly polarized rectangular stair shaped dielectric resonator antenna[END_REF], elliptical [START_REF] Kishk | An elliptic dielectric resonator antenna designed for circular polarization with single feed[END_REF], and cross-shaped [START_REF] Zou | A cross-shaped dielectric resonator antenna for multifunction and polarization diversity applications[END_REF]. In most of these examples, the circular polarization is obtained mainly due to the shape of the DRA itself.
In [START_REF] Sheng | Linear-/circular-polarization designs of dual-/wide-band cylindrical dielectric resonator antennas[END_REF], a cylindrical DRA is used to achieve CP and dual-band, where a pair of degenerate modes are excited at each frequency band, i.e. the HEM x 111 and HEM y 111 modes are used at the lower band and the HEM x 113 and HEM y 113 modes at the upper band. These modes are excited by a quadrature strip-fed, as can be observed in Fig. 1.8. One can note that a complex feeding method is here necessary to have dual-band and circular polarization. In [START_REF] Zhang | Cross-slot-coupled wide dual-band circularly polarized rectangular dielectric resonator antenna[END_REF], both shape and feeding are combined to create a dual-band circularly-polarized DRA, as can be seen in Fig. 1.9. The fundamental TE 111 and higher-order TE 121 modes are excited at the lower band, and the TE 131 mode is employed at the upper band. Also, the DRA is fed by a cross-slot, where the length of each arm is adjusted to ensure the CP at both bands. Thus, to achieve these results, it is necessary to manipulate both the shape of the DRA and the dimensions of the cross-slot. However, as the authors are dealing with three sets of different modes, the radiation patterns are different depending on the frequency band and cut plane. In [START_REF] Li | A dual-mode quadrature-fed wideband circularly polarized dielectric resonator antenna[END_REF], the fundamental TE 111 and higher-order TE 113 modes of a rectangular DRA are excited to operate around 3.04 GHz and 3.65 GHz, respectively. This antenna is fed by two strips, as can be seen in Fig. 1.11, so that, at the lower band, the TE x 111 and TE y 111 modes are excited and, at the upper band, the TE x 113 and TE y 113 modes are used. Thus, as this antenna is square-based, the circular polarization at two bands is only achieved due to the control of the phase of its two ports. In [START_REF] Wang | Single-feed dual-band circularly polarized dielectric resonator antenna for CNSS applications[END_REF], a dual-band circularly-polarized DRA is proposed for the B3 and B1 bands of the Compass Navigation Satellite System (CNSS), as can be observed in Fig. 1.12. In this work, a cross-slot is used to feed a rectangular DRA with a square cross-section, and the TE 111 and TE 113 modes are excited. The width and length of the arms of the cross are carefully chosen to have, at each frequency band, two pairs of near-degenerate orthogonal modes with near-equal amplitudes and in-phase quadrature, thus determining a dual-band CP operation.
Conclusion
This chapter introduced dielectric resonator antennas by explaining their physical shapes, different feeding methods, and characteristics, and by showing the potential of this type of antenna for different applications at different frequency ranges. Also, different techniques were presented to achieve circular polarization, such as using multiple ports, manipulating the feeding network, or changing the physical shape of the DRA. However, none of them employed the control of the dielectric properties of the DRA to satisfy the CP conditions at two bands. To do so, it is necessary to know the behavior of the electric field distribution, resonance frequencies, and Q-factor of the radiating modes of the DR and to be able to feed them accordingly, which was presented in Section 1.4. Of course, one also has to know how to control the permittivity of a dielectric medium. All of these points will be addressed in the following chapters.

Chapter 2

In this chapter, a dual-band dielectric resonator antenna is proposed to operate at the L5 (1176.45 MHz ± 10.23 MHz) and L1 (1575.42 MHz ± 10.23 MHz) bands of the Global Navigation Satellite System (GNSS). In Section 2.1, the DRA is initially designed to excite the TE^y_1δ1 and TE^x_δ11 orthogonal modes to cover the L5 and L1 bands, respectively. However, the presence of an undesirable higher-order mode leads to the distortion of the expected radiation pattern at the L1 band. To overcome this issue, a method to shift this mode away from the L1 band is presented and the main results are discussed in Section 2.2. It consists in drilling an air cavity inside the dielectric resonator to obtain an inhomogeneous material.
Homogeneous Linearly-Polarized Dual-Band DRA
A dual-band rectangular DRA is here considered due to its main advantages when compared to other classic shapes, as described in Section 1.3. The first step to design an antenna is to define its operating frequency and, then, the L5 (1176.45 MHz ± 10.23 MHz) and L1 (1575.42 MHz ± 10.23 MHz) bands of the GNSS are chosen in this example. For this kind of application, it is interesting to have broadside radiation patterns, which can be achieved by exciting the fundamental TE x δ11 and TE y 1δ1 modes of a rectangular DRA since these modes have magnetic dipole-like patterns as observed from their electric field distributions in Fig. 2.1, where a homogeneous dielectric resonator (DR) over an infinite ground plane is considered.
Due to the field distribution of the TE^y_1δ1 and TE^x_δ11 modes, a coaxial probe is used to excite them, and the configuration of the dual-band DRA can be observed in Fig. 2.2, where w, d, b, and ε_r are the width, depth, height, and relative permittivity of the DRA, respectively, which is mounted over a metallic ground plane. Using the Dielectric Waveguide Model (DWM) and after some optimization on Ansys HFSS, the dimensions are set so that the TE^x_δ11 and TE^y_1δ1 modes resonate at the L5 and L1 bands, respectively, and the simulated radiation patterns are obtained.

At the lower band, as expected for the TE^x_δ11 mode, broadside patterns are observed for both planes. Note that this DRA is linearly polarized, whereas GNSS receiving systems usually require circularly-polarized antennas. At the upper band, the radiation patterns are not as expected for the TE^y_1δ1 mode. To be more specific, in the xz-plane, the θ-component of the gain presents a null at θ = 285°, which creates a blind spot, and the direction of the main lobe is shifted to around 30° as well. Therefore, it is necessary to investigate the reasons behind this unexpected result. Note also that the polarization is still linear but orthogonal to the one in the L5 band, as expected.

As the radiation patterns of the fundamental modes are well known, this unexpected result for the TE^y_1δ1 mode may be due to the presence of a higher-order mode excited around the L1 band. To investigate this, the Eigenmode solution of Ansys HFSS is used. A rectangular dielectric resonator (DR) is considered without any sources and over an infinite metallic ground plane. The DR is slightly redesigned so that the TE^x_δ11 and TE^y_1δ1 modes resonate at the L5 and L1 bands, respectively, and, thus, the parameters of the DR are d = 53 mm, w = 23 mm, b = 25.5 mm, and ε_r = 23. Table 2.1 presents the results of the Eigenmode analysis. It is observed that the higher-order TE^x_δ21 mode resonates at 1.575 GHz, that is to say, in the vicinity of the expected TE^y_1δ1 mode. Due to its field distribution shown in Fig. 2.5, one can note that this mode can be excited by our coaxial probe. Besides, the TE^x_δ21 mode can be viewed as two equivalent magnetic dipoles along the x-direction that will modify the final radiation pattern in the L1 band. Therefore, some strategy must be employed to shift this undesirable mode away from the L1 band.
[Table 2.1: modes and resonance frequencies obtained from the Eigenmode analysis (Mode | Frequency).]
Inhomogeneous Linearly-Polarized Dual-Band DRA
Considering the field distribution of the modes at issue, one way to shift the TE x δ21 away from the L1 band is to locally control the electric permittivity of the dielectric in a region where the electric field distributions are weak for the fundamental modes and strong for the TE x δ21 one. Also, as this undesirable mode is resonating at the upper band, one strategy would be to consider an inhomogeneous DRA to increase its resonance frequency by decreasing the electric permittivity at the center of the DRA.
In this context, we propose to increase the resonance frequency of the TE^x_δ21 mode by adding an air cavity in the center of the DRA. To understand the effect of this cavity, the Eigenmode solution of Ansys HFSS is used with the DR configuration including the central air cavity. Note that the presence of an air cavity means that the overall permittivity of the DR is reduced and, then, the size of the DRA tends to increase. As a matter of comparison, Table 2.2 shows the main dimensions of the DRA, where both models are designed so that the TE^x_δ11 and TE^y_1δ1 modes resonate at the center of the L5 and L1 bands, respectively. Besides, the percentage difference is presented as well, which is calculated as |X_nocavity - X_cavity|/X_nocavity, where X_nocavity and X_cavity are the parameters of the DRA without and with the cavity, respectively. Thus, one can note that the volume of the dielectric with the air cavity is larger than that of the homogeneous one.

Practically, circularly-polarized GNSS antennas are more suitable than linearly-polarized ones, due to their immunity to multipath distortion and the Faraday rotation effect, and their insensitivity to the transmitter and receiver orientations. To achieve this for a dual-band operation, one may excite a pair of orthogonal modes around the same frequency with the same amplitude and in phase quadrature at each band. For the proposed dual-band DRA with the air cavity, the axial ratio is shown in Fig. 2.12, and it is possible to note that this antenna is not circularly polarized, as expected. Indeed, the values of axial ratio are higher than 3 dB at the L5 and L1 bands.

Using the dual-band LP DRA as a starting point to design a CP antenna, some structures were investigated to try to obtain circular polarization at two bands. For example, L- and cross-shaped DRAs have been studied, as observed in Fig. 2.13. These designs intend to excite, with only one probe, a pair of orthogonal modes at each band using two identical DRAs with an air cavity at the center to get rid of the higher-order modes in the operational bandwidth. More precisely, at one band the TE^x_δ11 mode of DRA 2 and the TE^y_1δ1 mode of DRA 1 should be excited, while at the other band the TE^x_δ11 mode of DRA 1 and the TE^y_1δ1 mode of DRA 2 should be excited. At each band, the orthogonal modes have the same resonance frequency and amplitude, due to the position of the probe. However, we did not succeed in achieving the phase quadrature with this approach. Some other small variations of these models were investigated, but it was not possible to obtain circular polarization at two bands using only one feeding point. We thus decided to explore another approach, as shown in the next chapter.
Conclusion
An approach for shifting an undesirable mode away from a given frequency band has been presented. It consists in locally controlling the permittivity of the DRA. To be more specific, a rectangular DR is designed so that the fundamental TE^x_δ11 and TE^y_1δ1 modes resonate at the GNSS L5 and L1 bands, respectively. However, the higher-order TE^x_δ21 mode appears around the L1 band and distorts the expected radiation pattern. To overcome this issue, an air cavity is introduced at the center of the DR, leading to an inhomogeneous dielectric material. The reflection coefficient and radiation pattern results demonstrate the efficiency of this approach for this type of issue. Some other structures were investigated to achieve circular polarization in two bands. Nonetheless, it was not possible to achieve CP using only one feeding probe considering L- and cross-shaped DRAs, since the condition of phase quadrature for the pair of orthogonal modes was not respected.
Even though we did not succeed in obtaining a dual-band CP antenna with the air-cavity dielectric using only one probe, the development described in this chapter opens up new possibilities to do so. So far, two isotropic dielectrics were used to locally control the permittivity and, thus, the resonance frequency of a given mode. However, instead of using only isotropic materials, the possibility of using anisotropic dielectrics could be explored to obtain circular polarization, and this approach is discussed in the following chapters.
Chapter 3
Inhomogeneous and Anisotropic Dual-Band DRA with Circular Polarization

In the previous chapter, a rectangular inhomogeneous dielectric has been considered to design a dual-band DRA, where the inhomogeneity is obtained by adding an air cavity in the center of the antenna. However, it has not been possible to achieve circular polarization due to the lack of degrees of freedom of this approach. This chapter presents a dual-band DRA with circular polarization, in which the electric permittivity of the dielectric resonator is locally manipulated to achieve the expected performance. In Section 3.1, the principle of operation of the proposed DRA is introduced with the support of the Eigenmode solution of Ansys HFSS, which is based on the natural resonance of the dielectric resonator by itself and the radiating TE^x_δ11, TE^y_1δ1, TE^x_δ13, and TE^y_1δ3 modes. Moreover, in Section 3.2, the proposed DRA is optimized taking into account a feeding slot, and results such as reflection coefficient, axial ratio, efficiency, and radiation pattern are presented. In Section 3.3, a parametric analysis of the DRA is made and a design guideline is presented. Finally, in Section 3.4, the relation between the 3-dB axial ratio bandwidth and the volume of the DRA is discussed.
Principle of Operation
The circular polarization in single-fed antennas is often achieved by exciting two orthogonal modes with the same amplitude and in phase quadrature at the same operational frequency. Considering a square-based rectangular DRA over an infinite ground plane as shown in Fig. 3.1, for instance, the fundamental TE x δ11 and TE y 1δ1 modes could be used to achieve CP around a given frequency. Both modes can be modeled as a dual-parallel RLC equivalent circuit as described and developed in [START_REF] Joshua | Using thick substrates and capacitive probe compensation to enhance the bandwidth of traditional CP patch antennas[END_REF][START_REF] William | Impedance, axial-ratio, and receive-power bandwidths of microstrip antennas[END_REF]. From this approach, the resonance frequency where the perfect circular polarization happens, i.e. axial ratio equal to 0 dB, can be predicted by using the following equations
$$f_{01} \approx \frac{f_1}{1 - \frac{1}{2Q_1}}, \qquad (3.1)$$
$$f_{02} \approx \frac{f_2}{1 + \frac{1}{2Q_2}}, \qquad (3.2)$$
where the CP happens at f_0 when f_0 = f_01 = f_02. In these equations, f_1 and f_2 are the resonance frequencies of the orthogonal modes, with f_1 lower than f_2, and Q_1 and Q_2 are their respective Q-factors. The goal of these equations is to find the frequency shift between f_1 and f_2 that gives the phase shift and amplitudes necessary to have circular polarization, which happens when f_01 = f_02. To illustrate this condition, Fig. 3.2 presents a simplified drawing of the quality factor as a function of frequency for two orthogonal modes. For instance, in Fig. 3.2(a), there is a frequency shift between f_1 and f_2, but not enough to make f_0 = f_01 = f_02. In this case, it is necessary to increase this frequency shift to fulfill the CP conditions, which is illustrated in Fig. 3.2(b), where f_0 = f_01 = f_02.
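As a small, hedged example of how Eqs. (3.1)-(3.2) are used in practice, the snippet below evaluates f_01 and f_02 for an assumed pair of orthogonal modes and reports how far they are from the CP condition; the frequencies and Q-factors are invented illustrative numbers.

```python
# Sketch of Eqs. (3.1)-(3.2): check how close two orthogonal modes are to the
# CP condition f01 = f02. All input values are assumed, for illustration only.
def cp_frequencies(f1, Q1, f2, Q2):
    f01 = f1 / (1 - 1 / (2 * Q1))
    f02 = f2 / (1 + 1 / (2 * Q2))
    return f01, f02

f01, f02 = cp_frequencies(f1=1.550e9, Q1=40.0, f2=1.590e9, Q2=50.0)
print(f"f01 = {f01/1e9:.4f} GHz, f02 = {f02/1e9:.4f} GHz, "
      f"mismatch = {abs(f01 - f02)/1e6:.1f} MHz")
```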
In a previous work done in our team, a uniaxial anisotropic dielectric material has been considered to achieve circular polarization, using the orthogonal TE^x_δ11 and TE^y_1δ1 modes. The relative permittivity has been expressed by the tensor ε_r = [ε_x 0 0; 0 ε_y 0; 0 0 ε_z], where ε_x ≠ ε_y = ε_z or ε_y ≠ ε_x = ε_z [START_REF] David | 3D-printed ceramics with engineered anisotropy for dielectric resonator antenna applications[END_REF]. This method has been proven to be useful for tuning the fundamental modes in order to obtain CP at a single frequency band. Figure 3.3 shows the manufactured anisotropic DRA with CP as well as its axial ratio (for more details, see [START_REF] David | 3D-printed ceramics with engineered anisotropy for dielectric resonator antenna applications[END_REF]). Therefore, to have a dual-band CP DRA, besides the fundamental modes, a second pair of orthogonal modes can be considered to achieve CP at the upper band. When it comes to a dual-band operation, it is necessary to analyze the effects of the presence of a uniaxial dielectric on the higher-order modes as well as on the fundamental ones, and whether it would be possible to achieve CP at both bands. To do so, besides the fundamental TE^x_δ11 and TE^y_1δ1 modes, the higher-order TE^x_δ13 and TE^y_1δ3 modes are used throughout this chapter, since they present similar broadside radiation patterns and can be excited with the same feeding method. Also, it is important to point out that, to have CP at two bands, each pair of orthogonal modes, namely (TE^x_δ11, TE^y_1δ1) and (TE^x_δ13, TE^y_1δ3), must satisfy Eqs. 3.1 and 3.2 at once.
To investigate the influence of a uniaxial dielectric on a dual-band circularly-polarized DRA, the first step is to design a square-based DR using the Dielectric Waveguide Model (DWM) and the Eigenmode solution of Ansys HFSS. The center frequencies of the L5 (1176.45 MHz ± 10.23 MHz) and L1 (1575.42 MHz ± 10.23 MHz) bands of the Global Positioning System (GPS) are taken as references for the fundamental and higher-order modes, respectively. At first, considering an isotropic dielectric with ε_r = 10, the width w and height b of the DRA are defined as 50.5 mm and 61.0 mm, respectively. The resonance frequencies of the fundamental and higher-order modes are then around 1.176 GHz and 1.575 GHz, respectively. Then, we assume a uniaxial anisotropic dielectric with:
$$\varepsilon_r = \begin{bmatrix} \varepsilon_x & 0 & 0 \\ 0 & \varepsilon_y & 0 \\ 0 & 0 & \varepsilon_z \end{bmatrix}, \qquad (3.3)$$
and ε_x = ε_z ≠ ε_y; thus, ε_y is varied and the resonance frequencies and Q-factors of the fundamental and higher-order modes are calculated using the Eigenmode solution of Ansys HFSS, as can be seen in Fig. 3.4. One can note in Fig. 3.4(a) that the resonance frequencies of the TE^y_1δ1 and TE^y_1δ3 modes are almost constant for different values of ε_y. This result is expected since the electric field does not have a y-component for these two modes. However, the resonance frequencies of the TE^x_δ11 and TE^x_δ13 modes decrease when increasing ε_y. Also, it is possible to realize that the TE^x_δ13 mode is much more sensitive than the TE^x_δ11 to variations of ε_y. From Fig. 3.4(b), the Q-factors of the higher-order modes are much higher than those of the fundamental ones, as expected. Besides, as ε_y varies, the difference between the Q-factors of the two fundamental modes is smaller than the difference between those of the higher-order modes.
With the resonance frequencies and Q-factors calculated for each mode at issue (Fig. 3.4), the parameters of Eqs. 3.1 and 3.2 can be found. First, the resonance frequencies of each pair of orthogonal modes must be sorted into f_1 and f_2, where f_2 is always higher than f_1, which can be observed in Fig. 3.5(a). Q_1 and Q_2 are organized as well according to f_1 and f_2, as shown in Fig. 3.5(b). From these parameters, the frequencies f_01 and f_02 can be calculated using Eqs. 3.1 and 3.2 as a function of ε_y, which are presented in Fig. 3.5(c). One can note that, for the higher-order modes, the curves of f_01 and f_02 intersect each other for ε_y = 8.2 and ε_y = 11.9, which means that the conditions for having CP are fulfilled for these two values of ε_y. On the other hand, for the fundamental modes, the curves of f_01 and f_02 do not intersect, which means that the CP conditions are not respected at the lower band whatever the value of ε_y. Therefore, with a homogeneous uniaxial anisotropic dielectric resonator, it would not be possible to have CP at both bands at the same time, since f_01 and f_02 of the fundamental and higher-order modes do not intersect each other for the same value of ε_y.

Instead of using a homogeneous and anisotropic dielectric resonator, one possible solution is to mix isotropic and anisotropic dielectrics in an inhomogeneous material. Hence, it is necessary for our application to understand the electric field distribution of the modes at issue, as can be observed in Fig. 3.6, where a homogeneous and isotropic DR over an infinite PEC ground plane is considered. Looking at these figures, it is easier to understand what happened in the homogeneous and anisotropic case. Indeed, one can see why the TE^x_δ13 mode is more sensitive than the TE^x_δ11 as ε_y changes: the y-directed electric fields of the TE^x_δ13 mode are stronger than those of the TE^x_δ11. Thus, to have a dual-band and CP operation, more degrees of freedom must be found to control the CP at the lower and upper bands independently. To do so, it is possible to observe that, at the faces of the DR parallel to the xz-plane, the z-directed electric field of the TE^x_δ11 mode is stronger and more concentrated than that of the TE^x_δ13, especially in the lower region. So, if the electric permittivity were locally changed around this area, it would be possible to find variables to control the fundamental and higher-order modes more independently and, then, respect the CP conditions at both bands at the same time.

Taking this information into account, the design shown in Fig. 3.7 is proposed, which presents an inhomogeneous dielectric resonator, with an isotropic permittivity region represented by ε_ri, and an anisotropic one, in which the relative permittivity ε_ra is described by the following tensor:
$$\varepsilon_{ra} = \begin{bmatrix} \varepsilon_x & 0 & 0 \\ 0 & \varepsilon_y & 0 \\ 0 & 0 & \varepsilon_z \end{bmatrix}, \qquad (3.4)$$
where To verify the possibility of having CP at both L5 and L1 bands, Eq. 3.1 and 3.2 must be calculated but, first, it is necessary to optimize the parameters of the antenna. Thus, the electric field distributions of the modes at issue are analyzed to do so. As the Q-factors of the fundamental modes are much lower than the higher-order ones, the CP at the lower band is adjusted first, which is done by controlling l a . Indeed, the electric field distribution of the TE x δ11 is stronger and more distributed than the TE x δ13 along the length of the anisotropic region, and ε z is optimized since the z-componenent of the electric of the TE x δ11 are stronger at the anisotropic region. Secondly, the CP at the upper band is controlled by varying b a , since the electric field of the TE x δ13 is more concentrated in the upper part of the anisotropic region than the fundamental mode. Last, the width w a of the anisotropic region is optimized. Taking all these information into account and using the Eigenmode solution of Ansys HFSS, the parameters of the antenna can be optimized and the resonance frequencies and Q-factors are, for instance, shown as a function of ε z in Fig. 3.8, where ε ri = ε x = ε y = 10, b = 75.0 mm, w = 41.5 mm, b a = 42.0 mm, l a = 29.0 mm, and w a = 10.5 mm. As expected, the resonance frequencies of the fundamental modes are more sensitive than the higher-order ones to the variation of ε z , as can be observed in Fig. 3.8(a). From Fig. 3.8(b), one can note that the Q-factors are directly proportional to ε z and, as expected, they are higher for the higher-order modes. From this analysis, the resonance frequencies found on Ansys HFSS can be sorted into f 1 and f 2 , as presented in Fig. 3.9(a), as well as the Q-factors into Q 1 and Q 2 , as can be seen in Fig. 3.9(b), so that f 01 and f 02 can be calculated using Eq. 3.1 and 3.2, as shown in Fig. 3.9(c). It is possible to note that f 01 and f 02 intersect each other for both pairs of orthogonal modes for ε z = 25.1 and, then, the CP conditions are fulfilled at both bands at the same time, showing the possibility of having a dual-band circularly-polarized DRA. 3.10(a), one can note that the variable b a has more influence on the CP for the higher-order modes than for the fundamental ones, since the curves for TE x δ13 and TE y 1δ3 modes diverge more from each other as b a varies. It agrees with their electricfield distributions since the z-components of the electric field of the TE x δ13 mode are more concentrated at the upper part of the anisotropic region, as can be verified in Fig. 3.6. On the other hand, l a affects more the CP of the fundamental modes since the z-components of the electric field of the TE x δ11 mode are more distributed along the x-direction than the TE x δ13 . Finally, f 01 and f 02 of the fundamental modes are more sensitive than the higher-order ones for values w a up to 9 mm, however, their behaviors are similar from around 9 mm to 18 mm, as can be observed in Fig. 3.10(c).
ε ri = ε x = ε y = ε z .
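The sorting-and-intersection procedure described above can be prototyped outside the full-wave solver. The sketch below applies Eqs. (3.1)-(3.2) to an assumed, purely synthetic (f, Q) sweep versus ε_z and locates the value where f_01 and f_02 cross; the trends are invented for illustration and do not reproduce the HFSS data.

```python
import numpy as np

# Hypothetical sketch: sort each pair of orthogonal modes into (f1, Q1) and
# (f2, Q2), compute f01/f02 with Eqs. (3.1)-(3.2), and find the eps_z value
# where the CP condition f01 = f02 is met. All trends below are assumed.
eps_z = np.linspace(15, 35, 201)
f_a = 1.64e9 - 2.5e6 * (eps_z - 15)      # assumed mode sensitive to eps_z
Q_a = 35.0
f_b = np.full_like(eps_z, 1.575e9)       # assumed mode nearly insensitive to eps_z
Q_b = 45.0

# sort point by point so that f1 < f2
f1, f2 = np.minimum(f_a, f_b), np.maximum(f_a, f_b)
Q1 = np.where(f_a < f_b, Q_a, Q_b)
Q2 = np.where(f_a < f_b, Q_b, Q_a)

f01 = f1 / (1 - 1 / (2 * Q1))            # Eq. (3.1)
f02 = f2 / (1 + 1 / (2 * Q2))            # Eq. (3.2)

idx = np.argmin(np.abs(f01 - f02))       # crossing = CP condition fulfilled
print(f"CP condition met near eps_z = {eps_z[idx]:.1f} "
      f"(f01 ~ f02 ~ {f01[idx]/1e9:.3f} GHz)")
```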
Antenna Design and Results
In Section 3.1, the analyses were conducted considering a DR with an infinite ground plane without any feeding. In this section, a simple feeding scheme is implemented. As one of the goals of this work is to propose a dual-band DRA with CP due to the material by itself, a simple slot coupled to a microstrip line is considered rather than complex feeding methods or parasitic elements, as can be observed in Fig. 3.11. One can note that this slot, with width w s and length l s , is placed along the diagonal of the DR to excite the TE x δ11 , TE y 1δ1 , TE x δ13 , and TE y 1δ3 modes at once with the same amplitude. Moreover, the slot is coupled to a 50-Ω microstrip transmission line with width w l and length l l , which are printed on an RF-301 Taconic (ε r = 2.97 and tanδ = 0.0012) substrate with thickness h s equal to 1.524 mm and a finite hexagonal-shaped ground plane is considered. To control the impedance matching at both bands, the dimensions of the slot and the stub length, i.e. the distance from the end of the microstrip line to the slot, are optimized. Also, the microstrip transmission line and the ground plane are made out of copper. The dimensions of the DR optimized with the Eigenmode solution are considered as a starting point and this first design is referred to as Initial model. At first, the magnitude of the reflection coefficient |S 11 |, in dB, for this model is computed using a full-wave simulation on Ansys HFSS of the DRA with a finite ground plane with radius w g equal to 135 mm, as shown in Fig. 3.12(a). One can note that the antenna is well-matched at both L5 and L1 bands. Figure 3.12(b) presents the axial ratio (AR) at the boresight direction (θ = 0 • and φ = 0 • ). It is possible to realize that the CP is achieved only at the lower band, since the AR is below 3 dB, while, at the upper band, the AR is a little bit above 3 dB. This result is expected since the Eigenmode analysis does not consider a feeding and the finite ground plane. Then, it is necessary to optimize the DRA considering these elements. In addition, regarding the slot and microstrip line, for both initial and solid models, their dimensions are w s = 4.16 mm, l s = 57.4 mm, w l = 3.86 mm, and l l = 163.9 mm, and the stub length is 28.9 mm. For the sake of comparison, Table 3.1 shows the dimensions and properties of both models. The left-and right-handed components of the gain pattern, in dBi, at the L5 and L1 bands, i.e. calculated at the frequency of minimum axial ratio within the frequency bands at issue, are shown in Fig. 3.14. It is possible to observe that the proposed antenna presents broadside radiation patterns at both bands and cut planes, as expected for the TE x δ11 , TE y 1δ1 , TE x δ13 and TE y 1δ3 modes. Moreover, the simulated peak gains at 1.17 GHz and 1.57 GHz are 6.50 dBi and 5.38 dBi, respectively. One can observe also in Fig. 3.13(b) that the 3-dB AR bandwidth is narrower at the upper band than at the lower one and, to better understand it, the magnitude and phase of the φ-and θ-components of the electric field of the solid DRA are shown in Fig. 3.16. To obtain perfect CP and dual-band, each pair of orthogonal components of the electric field, i.e. E φ and E θ , must have the same amplitude and be in phase quadrature at each frequency band. As can be noted in Fig. 3.13(b), the minimum values of the AR are at 1.169 GHz and 1.573 GHz, where, in Fig. 
3.16, the magnitudes of E_φ and E_θ are the same and the phase difference is approximately 270° and 90° at the lower and upper bands, respectively. Considering the magnitude of the electric fields, one can realize that their curves are steeper at the upper band than at the lower one, which means that the difference between |E_φ| and |E_θ| increases at a faster rate when the frequency slightly deviates from 1.573 GHz than from 1.169 GHz. This is the reason why the 3-dB AR bandwidth is broader at the L5 band than at the L1 band, but both are broad enough to cover the L5 and L1 bands, respectively. This discussion is completed in Section 3.4, where the link with the Q-factors of these modes is demonstrated.

Figure 3.17 presents the simulated axial ratio as a function of θ at φ = 0° and φ = 90°, calculated at 1.17 GHz and 1.57 GHz, which are the frequencies of minimum axial ratio at the boresight direction (θ = 0° and φ = 0°). At the lower band, the axial ratio remains below 3 dB over a wide angular range around boresight (down to about θ = -44°).

The simulated radiation efficiency of the proposed DRA is shown in Fig. 3.18. One can note that the efficiency is higher than 97% from 1.0 to 1.7 GHz. In addition, considering the impedance bandwidth (|S_11| < -10 dB), i.e. from 1065.0 MHz to 1275.1 MHz and from 1513.8 MHz to 1668.4 MHz, the simulated efficiency is higher than 99%. These results are good and highlight one of the main advantages of DRAs over patch antennas, namely the absence of conductor and surface-wave losses.
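The link between the axial ratio and the two orthogonal far-field components discussed above can be checked with the classical polarization-ellipse relation; the sketch below computes the boresight AR from assumed magnitudes and phase differences of E_φ and E_θ (the values are illustrative, not the simulated ones).

```python
import numpy as np

# Sketch: axial ratio (dB) of the polarization ellipse formed by two orthogonal
# field components with magnitudes E1, E2 and phase difference delta (assumed).
def axial_ratio_db(E1, E2, delta_deg):
    d = np.deg2rad(delta_deg)
    s = np.sqrt(E1**4 + E2**4 + 2 * E1**2 * E2**2 * np.cos(2 * d))
    ar = np.sqrt((E1**2 + E2**2 + s) / (E1**2 + E2**2 - s))
    return 20 * np.log10(ar)

print(axial_ratio_db(1.0, 1.0, 90.0))   # equal magnitudes, quadrature -> ~0 dB
print(axial_ratio_db(1.0, 0.9, 80.0))   # slightly unbalanced -> ~1.8 dB
```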
Parametric Analysis
In Section 3.1, the influence of the new parameters of the DR has been investigated. However, that analysis was made considering an infinite ground plane without any feeding source and using the Eigenmode solution of Ansys HFSS, where the theoretical frequencies of minimum axial ratio were computed. Therefore, in this section, a similar study is carried out, but considering a finite ground plane and the feeding, and using the Driven Terminal solution of Ansys HFSS. Also, the reflection coefficient |S_11| and the axial ratio are taken as the figures of merit.

The influence of the height b_a of the anisotropic region on the reflection coefficient and axial ratio is shown in Fig. 3.19. One can realize that both the impedance bandwidth and the AR are more sensitive at the upper band than at the lower one when b_a deviates from its optimal value of 47 mm. These results follow the electric field distribution of the modes at issue (see Fig. 3.6), since the z-component of the electric field of the higher-order modes around the anisotropic regions is more concentrated at the upper part of the DRA when compared to the fundamental modes. It also agrees with the curves of f_01 and f_02 from the Eigenmode analysis, as observed in Fig. 3.10(a).

Figure 3.20 presents the |S_11| and AR as a function of the length l_a of the anisotropic regions. It is possible to note that variations of l_a result in a bigger relative frequency shift at the L5 band than at the L1 band, which can be better visualized in the AR curves. This conclusion agrees with the electric field distribution of the modes at issue, since the electric field of the TE^x_δ11 mode in the anisotropic region is more distributed along the x-direction in comparison to the TE^x_δ13 mode. Also, the same behavior can be observed from the curves of f_01 and f_02 in Fig. 3.10(b).

In Fig. 3.21, the effect of w_a on the antenna is investigated. It is possible to note that this parameter has virtually the same effect on both bands, which is in agreement with the Eigenmode analysis from Fig. 3.10(a). The 3-dB AR bandwidths shift downwards as w_a increases at the L5 and L1 bands.

The effect of the z-component of the permittivity tensor of the uniaxial dielectric, ε_z, is presented as well in Fig. 3.22. The level of axial ratio is more sensitive to variations of ε_z at both bands than for the other parameters, as expected, since a variation of ε_z means a change in the relative permittivity of the whole volume of the anisotropic regions.

The |S_11| and AR of the proposed DRA are calculated as well considering different sizes of ground plane, as can be observed in Fig. 3.23. It is important to point out that the stub length is the same for all values of w_g and, thus, there are no relevant frequency shifts in the |S_11| as w_g varies. When it comes to the axial ratio, the ground plane plays an important role, as can be seen in Fig. 3.23(b). As the size of the ground plane becomes smaller than one half of the wavelength, i.e. 2·w_g = λ_L1/2 = 95 mm and 2·w_g = λ_L5/2 = 128 mm, the axial ratio degrades. Therefore, for optimal performance in terms of AR, it is necessary to choose a ground plane larger than λ/2, namely 2·w_g = 128 mm in our case.

From the parametric study, a design guideline for the proposed DRA can be devised as follows. First, the width w and height b of the DRA are computed using the DWM so that the TE_111 and TE_113 modes resonate at frequencies a little higher than the L5 and L1 bands, respectively.
Then, to properly excite these modes, the width and length of the slot and microstrip line are calculated as well [START_REF] Petosa | Dielectric resonator antenna handbook[END_REF], and a ground plane larger than λ/2 at the lower band is defined. At this stage, the DRA is dual-band with linear polarization and, to turn this antenna into a circularly-polarized one, the anisotropic regions are introduced. Next, w_a and ε_z are tuned to achieve values of axial ratio at least close to 3 dB at both bands. Afterward, l_a is optimized to tune the results mainly at the lower band and, finally, b_a is adjusted to achieve almost perfect circular polarization at the upper band as well.
Trade-off Between Axial Ratio Bandwidth and DRA Volume
The goal of this chapter is to control the permittivity of a dielectric to create a dual-band CP DRA. In other words, this performance is achieved without relying on complex feeding methods and/or modifications of the overall shape of the DRA. In this scenario, it is important to understand the limitations of the 3-dB axial ratio (AR) bandwidth that could be achieved by relying only upon the dielectric itself. From [START_REF] William | Impedance, axial-ratio, and receive-power bandwidths of microstrip antennas[END_REF], the theoretical AR bandwidth BW AR CP of a single-fed antenna exploiting orthogonal modes can be written as
$$BW^{AR}_{CP} \approx \frac{AR^{dB}_{max}\,\ln(10)}{20}\,\frac{1}{Q} \approx \frac{0.115\, AR^{dB}_{max}}{Q}, \qquad (3.5)$$
where Q is the quality factor of the resonating mode and AR dB max is the level of axial ratio considered as reference to calculate the bandwidth. Throughout this work, AR dB max will be equal to 3 dB. From this equation, it is important to have in mind that the AR bandwidth BW AR CP is inversely proportional to the Q-factor.
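A one-line numerical reading of (3.5) is sketched below: it evaluates the 3-dB AR bandwidth for assumed Q-factors representative of a fundamental and a higher-order mode (the values are illustrative, not the simulated ones).

```python
import numpy as np

# Sketch of Eq. (3.5): fractional 3-dB axial-ratio bandwidth from the Q-factor.
def ar_bandwidth(Q, ar_max_db=3.0):
    return ar_max_db * np.log(10) / 20 / Q

for label, Q in (("fundamental-like mode", 6.0), ("higher-order-like mode", 20.0)):
    print(f"{label}: Q = {Q:.0f} -> 3-dB AR BW = {100 * ar_bandwidth(Q):.2f} %")
```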
To understand the natural limitation in terms of 3-dB AR bandwidth, an isotropic and homogeneous DRA with a square base is considered. The Q-factors and resonance frequencies of its TE x,y 111 and TE x,y 113 modes are calculated using the Eigenmode solution of Ansys HFSS considering an infinite ground plane. In addition, the theoretical 3-dB AR bandwidths of the fundamental and higher-order modes are calculated using Eq. 3.5 as a function of the relative permittivity ε r of the dielectric. For every single value of ε r the width w and height b of the dielectric are recalculated so that the resonance frequencies of the TE x,y 111 and TE x,y 113 modes appear always at the center of the L5 and L1 bands, respectively. Therefore, Fig. 3.24 shows the Q-factor and the theoretical 3-dB axial ratio (AR) bandwidth calculated using Eq. 3.5. One can note that the Q-factor of the TE x,y 113 modes are always higher than for the TE x,y 111 ones, which means that these higher-order modes will always present a narrower AR bandwidth. Also, the AR bandwidth is inversely proportional to ε r . It is important to point out that, as the value of ε r decreases to values close to 1, it becomes more difficult to excite the modes, especially the ones with lower Q-factor.
To keep the TE^{x,y}_111 and TE^{x,y}_113 modes resonating at the L5 and L1 bands, respectively, for different values of ε_r, it is necessary to adjust the width and height of the square-based DRA. The volume of the DRA thus changes as ε_r varies. Figure 3.25 presents the volume, in cm³, of the DRA as a function of ε_r. As expected, it is possible to note that the volume is inversely proportional to the relative permittivity of the dielectric. Therefore, as the AR bandwidth is inversely proportional to ε_r as well, it is necessary to take into account the specifications of the system in order to properly choose the permittivity of the DRA.
Even though the analysis of the 3-dB AR bandwidth is made considering an isotropic and homogeneous DRA, this analysis can be used as a reference for the inhomogeneous and anisotropic DRA to provide a good estimation of the maximum 3-dB AR bandwidth that could be achieved, since most of the proposed antenna is isotropic and only one component of the tensor of permittivity of the anisotropic region has a different value. For the sake of comparison, Table 3.2 presents the values of maximum theoretical 3-dB AR bandwidth for ε r = 10 calculated using Eq. 3.5 and the values obtained in Section 3.2 for the inhomogeneous and anisotropic DRA, which are referred to as theoretical and simulated, respectively. One can note that the simulated results are lower than the theoretical ones, as expected, as the ε z is higher than 10, which increases the Q-factor and decreases the AR bandwidth. However, the results are reasonable, which means that the isotropic and homogeneous case can be used to estimate the maximum AR bandwidth of the proposed model. Also, there is an important relationship between the volume and AR bandwidth, which can be chosen according to the specifications of a given communication system. For instance, for the L5 and L1 bands, the axial ratio bandwidth presented in Section 3.2 would be enough to cover both bands, once the required bandwidths are 1.74% and 1.30% and it was achieved 5.00% and 1.46% at the lower and upper bands, respectively. However, depending on the platform, the proposed antenna could be too bulky. Therefore, it would be necessary to carefully check the required specifications and the space available. Then, the permittivity can be chosen and the same reasoning presented in Sections 3.1 and 3.2 can be applied to design the proposed dielectric resonator antenna at different frequencies with different materials.
3-dB AR bandwidth   lower band   upper band
Theoretical         6.1%         1.8%
Simulated           5.0%         1.46%

Table 3.2: Maximum theoretical and simulated 3-dB AR bandwidth.
Conclusion
A dual-band circularly-polarized dielectric resonator antenna has been proposed and investigated. It is designed to operate at the L5 and L1 bands of the GNSS system. The circular polarization is achieved by adding two anisotropic regions inside the square-based rectangular DRA, which allows the presence of circular polarization at two bands using a single microstrip-coupled slot to excite the radiating modes of the DRA.
The novelty of the proposed DRA relies on the fact that the circular polarization is achieved at two bands due to the manipulation of the artificial electric properties of the dielectric by itself and not due to shape modifications and/or complex feeding methods. A single slot is used to excite a pair of fundamental (TE x δ11 and TE y 1δ1 ) and higher-order (TE x δ13 and TE y 1δ3 ) modes at the L5 and L1 bands, respectively, with the same amplitude, while the phase quadrature at each band is achieved due to the presence of two uniaxial anisotropic regions.
The proposed design has simulated impedance bandwidths (|S 11 | < 10 dB) of 17.96% (1065.0 MHz -1275.1 MHz) and 10.62% (1513.8 MHz -1668.4 MHz). The simulated 3-dB axial ratio bandwidths are 5.00% (1140.3 MHz -1198.8 MHz) and 1.46% (1562.8 MHz -1585.7 MHz), covering both L5 and L1 bands, respectively. Finally, broadside radiation patterns are observed as expected, due to the radiating modes at issue. In this chapter, the properties of the isotropic and anisotropic dielectrics have directly been assigned on Ansys HFSS. However, in real life, it would not be so simple to manufacture this antenna, since materials with such electric properties would not necessarily be available. On the other hand, the concept of periodic structures and additive manufacturing can be used to do so, as presented in the following chapter. In the previous chapter, an inhomogeneous and anisotropic dielectric resonator antenna has been proposed. This antenna presents isotropic and anisotropic regions with optimized values of permittivity. To manufacture this antenna, dielectrics with the needed features will not necessarily be available. To overcome this issue, additive manufacturing can be employed. Thus, in this chapter, an introduction to additive manufacturing (AM) of dielectric periodic structures is presented. In Section 4.1, an overview of AM is introduced, where a historical perspective and their different processes are discussed. Section 4.2 then presents more specifically a state-of-the-art of 3D-printed dielectric resonator antennas. In Section 4.3, an isotropic unit cell is proposed and different methods to retrieve their effective permittivity are discussed. Finally, in Section 4.4, a uniaxial anisotropic unit cell is presented.
Additive Manufacturing
Additive manufacturing (AM), also referred to as three-dimensional (3D) printing, is a transformative approach to create three-dimensional objects by depositing a printing material. This material can be metal, plastic, or ceramic, to name a few. Also, 3D-printing technology uses computer-aided-design (CAD) data models or 3D object scanners to control the 3D printer, allowing the fabrication of complex objects, layer upon layer, with great flexibility and low manufacturing cost [START_REF] Redwood | The 3D printing handbook: technologies, design and applications[END_REF].
Even though AM has been in the spotlight over the last decade, its first stage dates from 1981, when Hideo Kodama developed a rapid prototyping method using a layer-upon-layer approach [START_REF] Diogo | Antenna design using modern additive manufacturing technology: A review[END_REF]. In 1986, Charles "Chuck" Hull filed the first patent for a 3D-printing technology known as stereolithography (SL), and, then, in 1988, he founded the 3D Systems Corporation, which was the first company to commercialize a 3D printer. This method is the most common vat photopolymerization process, where a photosensitive liquid resin is polymerized by ultraviolet light, layer by layer, and this process is illustrated in Fig. 4.1. Also, Hull developed the STL file format, which is the file that 3D printers most commonly use today. In 1989, Carl Deckard, at the University of Texas, submitted the patent for the Selective Laser Sintering (SLS) technology. This method belongs to the class of powder bed fusion (PBF) processes, which are techniques that employ a laser or an electron beam to sinter a powdered material into a solid object. The usual setup of the SLS technology is illustrated in Fig. 4.2. Moreover, SLS can be used for a variety of plastics, ceramics, glass, metals, and alloy powders [START_REF] Tagliaferri | Environmental and economic analysis of FDM, SLS and MJF additive manufacturing technologies[END_REF]. In the meantime, Scott Crump filed a patent for the Fused Deposition Modeling (FDM) technology. Differently from the SLS method, which uses a laser or an electron beam, in FDM the filament is directly extruded from a heated nozzle to create the desired 3D object, and the scheme of this method can be seen in Fig. 4.3. With his wife Lisa Crump, he founded the company Stratasys Inc., which is a worldwide leader in FDM technology. Even today, SL, SLS, and FDM are the most used AM techniques. However, with the wide interest in AM, different techniques have been proposed lately, such as Laser Engineered Net Shaping (LENS) [START_REF] Izadi | A review of laser engineered net shaping (LENS) build and process parameters of metallic parts[END_REF], Polyjet [START_REF] Lu | PolyJet 3D printing of composite materials: experimental and modelling approach[END_REF], Direct Metal Laser Sintering (DMLS) [START_REF] Simchi | On the development of direct metal laser sintering for rapid tooling[END_REF], Selective Laser Melting (SLM), and Electron Beam Melting (EBM) [START_REF] Larsson | Rapid manufacturing with Electron Beam Melting (EBM)-A manufacturing revolution?[END_REF], among others.
With the fast development of different AM methods, different industry segments have been exploring this technology. Nowadays, 3D-printed components are highly demanded in the aviation [START_REF] Stephan | Additive manufacturing's impact and future in the aviation industry[END_REF], automotive [START_REF] Ak Matta | Metal Prototyping the future of Automobile Industry: A review[END_REF], medical [START_REF] Mallikarjuna N Nadagouda | A review on 3D printing techniques for medical applications[END_REF], construction [START_REF] Furet | 3D printing for construction based on a complex wall of polymer-foam and concrete[END_REF], food [START_REF] Jeffrey I Lipton | Additive manufacturing for the food industry[END_REF], and aerospace [START_REF] Kalender | Additive manufacturing and 3D printer technology in aerospace industry[END_REF] industries due to their design flexibility, low cost, easy and fast prototyping when compared to other traditional methods [START_REF] Redwood | The 3D printing handbook: technologies, design and applications[END_REF].
When it comes to the 3D-printing technology, the selection of the printing material is a key element. It is necessary to ensure that the printing material has the characteristics needed for the application. In this context, 3D printing brings a lot of possibilities since it is possible to use a wide variety of materials such as ceramics, metals, polymers, carbon fibers, and concrete, among others.
Owing to their various excellent properties, the use of ceramics in 3D printing has been increasing with each passing day due to their advantages over traditional materials such as polymers and metal. Ceramics present outstanding mechanical, electrical, chemical, and thermal properties such as electrical insulation, resistance to corrosion and high temperature, ionic conduction, chemical inertness, strength, and stiffness, which are very important characteristics for different applications. Due to these features, ceramics are highly demanded in applications that go from gas turbines, nuclear reactors, batteries, and heat exchangers to medical tooling, dental products, and decoration components [START_REF] Naslain | Design, preparation and properties of non-oxide CMCs for application in engines and nuclear reactors: an overview[END_REF][START_REF] Denry | State of the art of zirconia for dental applications[END_REF][START_REF] Oh | Microwave dielectric properties of zirconia fabricated using NanoParticle Jetting TM[END_REF][START_REF] Dceram | Additive Manufacturing for industrial applications[END_REF].
In this work, zirconia (zirconium dioxide, ZrO 2 ) is used as the printing material due to its electromagnetic properties, namely low loss (tanδ = 1.9 × 10 −4 ) and a high relative permittivity ε r of 32.5, while presenting high mechanical robustness and resistance. Also, this material has been extensively investigated due to its biocompatibility [START_REF] Denry | State of the art of zirconia for dental applications[END_REF] and chemical resistance [START_REF] Rhj Hannink | Friction and wear of partially stabilized zirconia: basic science and practical applications[END_REF]. Such characteristics make zirconia very useful to create components for harsh environments such as space, for instance.
Additive Manufacturing for Dielectric Resonator Antennas
Over the last couple of years, the three-dimensional (3-D) printing technology, also known as additive manufacturing (AM), has been extensively employed and investigated in the context of antenna applications due to its numerous advantages such as low cost, high degree of freedom regarding the design, eco-friendliness, easy and fast prototyping, to name a few [START_REF] Redwood | The 3D printing handbook: technologies, design and applications[END_REF][START_REF] Calignano | Overview on additive manufacturing technologies[END_REF]. In this context, 3D printing allows the realization of different devices for a variety of purposes such as horn [START_REF] Addamo | 3-D printing of high-performance feed horns from Ku-to Vbands[END_REF], reflector [START_REF] Nayeri | 3D printed dielectric reflectarrays: Low-cost high-gain antennas at sub-millimeter waves[END_REF], lens [START_REF] Tinh Nguyen | Design and characterization of 60-GHz integrated lens antennas fabricated through ceramic stereolithography[END_REF][START_REF] Liang | A 3-D Luneburg lens antenna fabricated by polymer jetting rapid prototyping[END_REF], spiral [START_REF] Ahmadloo | Application of novel integrated dielectric and conductive ink 3D printing technique for fabrication of conical spiral antennas[END_REF], and dipole antennas [START_REF] Adams | 3D-printed spherical dipole antenna integrated on small RF node[END_REF].
The 3D-printing technology has the potential to increase the use of DRAs since it allows the design of complex shapes at a reduced cost. The influence of the use of this technology in DRAs can be summarized by Stuart A. Long, one of the first authors to publish about DRAs in the early 1980s, and David R. Jackson, who stated recently in [START_REF] David | History of microstrip and dielectric resonator antennas[END_REF]: "The dielectric resonator antenna has to date not seen as many applications as the microstrip antenna; the main obstacle seems to be the higher cost of its more complicated fabrication. However, once new manufacturing techniques such as 3-D printing are further established, and reduced size versions of these radiators for use at very high-frequency operations become available, their use is likely to become much more pervasive." Finally, as the developments of 3D-printed DRAs are recent and not completely established, the literature about the subject is still limited and some of these works are discussed in the following.
In [START_REF] Basile | Design and manufacturing of super-shaped dielectric resonator antennas for 5G applications using stereolithography[END_REF], to show the potential of the 3D-printing technology, super-shaped dielectric resonator antennas (S-DRAs) have been manufactured, as shown in Fig. 4.4, using a photopolymer (ε r = 2.7 and tanδ = 0.003) as printing material. From the measured results, the S-DRAs presented good results and a drastic volume reduction is achieved when compared to the rectangular and cylindrical DRAs made out of the same material. In this paper, the manufactured DRAs are isotropic and their effective permittivities are the same as the printing material. The 3D printing technology has been also used to design circularly-polarized DRAs, as in [START_REF] Mazingue | 3D Printed Ceramic Antennas for Space Applications[END_REF]. In this paper, the authors propose an isotropic elliptical DRA to operate at the L1 band (1575.42 MHz ± 10.23 MHz) for the Global Navigation Satellite System (GNSS) applications, as can be observed in Fig. 4.5(a). Zirconia (ε r = 32.5 and tanδ = 1.9 • 10 -4 at 10 GHz) is employed as reference material to design the antenna. The unit cell shown in Fig. 4.5(b) was used, where ε eff is controlled by varying the air-dielectric ratio inside the cell. Additionally, the antenna has circular polarization over the L1 band since two orthogonal modes are excited by a coaxial probe, where its position is carefully chosen to excite these modes with the same amplitude and in phase quadrature. A parametric study is performed in [START_REF] Cuevas | Parametric Study of a Fully 3D-Printed Dielectric Resonator Antenna Loaded With a Metallic Cap[END_REF] of a 3D-printed hemispherical DRA loaded with a metallic cap. To do so, low-loss dielectric filaments and high-conductive filaments are employed to print the antenna using a low-cost dual-extruding 3D printer. Five hemispherical DRA with different internal patterns with the same external size were designed and evaluated, which can be observed in Fig. 4.6. Also, it has been shown that the introduction of metallic caps can compensate for the different internal designs, i.e. different infill percentage, and, thus, maintaining the same resonant frequency for the proposed DRAs. Finally, a maximum reduction of 22% of the weight of the DRA is observed compared to the DRA with 100% of infill percentage. In [START_REF] Xia | 3-D-printed wideband multi-ring dielectric resonator antenna[END_REF], Xia et al. propose a 3D-printed multi-ring dielectric resonator antenna to obtain a wideband of operation, as can be noted in Fig. 4.7(a). This structure consists of four concentric rings in which the value of permittivity decreases from the inner to the outermost ring. More specifically, the relative effective permittivity ε eff of the first (central), second, third, and fourth layers are 10, 8.25, 4.0, and 2.5, respectively. Also, the same printing material is used for the different layers of the antenna, where ε r = 10 ± 0.4 (0.1 -6.0 GHz).
To achieve different values of ε eff , dielectric square rings are used, as can be seen in Fig. 4.7(b). The filling ratio theory [START_REF] Liang | A 3-D Luneburg lens antenna fabricated by polymer jetting rapid prototyping[END_REF] is employed by varying the periodicity a and thickness t c of the unit cell to control the value of ε eff . The prototype finally presented a wide measured 10-dB impedance bandwidth of 60.2% (4.3-8.0 GHz) and the overall material cost was less than $20 USD. In [START_REF] Lamotte | Multi-permittivity 3D-printed Ceramic Dual-Band Circularly Polarized Dielectric Resonator Antenna for Space Applications[END_REF], a nonhomogeneous dielectric resonator antenna is 3D-printed in ceramics for space applications, as can be seen in Fig. 4.8. This DRA presents two isotropic regions with different values of effective permittivity, even though the antenna is printed using only zirconia as the printing material. The different values of permittivity are achieved due to the use of periodic subwavelength unit cells. The antenna presents dual-band operation and circular polarization at the upper L-band (1.559 - 1.61 GHz) and S-band (2.025 - 2.29 GHz), and the CP is achieved mainly due to a complex feeding network.
In [START_REF] David | 3D-printed ceramics with engineered anisotropy for dielectric resonator antenna applications[END_REF], a uniaxial anisotropic rectangular DRA, as can be seen in Fig 4.9(a), is proposed to achieve circular polarization at 2.45 GHz. The authors proposed a unit cell able to create an effective uniaxial anisotropic medium due to its lack of rotational symmetry as shown in Fig. 4.9(b). In addition, this cell presents an effective relative permittivity tensor ε eff equal to [20 0 0; 0 14.5 0; 0 0 14.5]. Differently from [START_REF] Mazingue | 3D Printed Ceramic Antennas for Space Applications[END_REF], the CP is achieved mainly due to the effective dielectric properties of the antenna and not due to its physical shape, since the proposed rectangular DRA is square-based. Also, the reference material to design the antenna is zirconia as well. Finally, in this section, some of the few works on 3D-printed DRAs found in the literature have been shown, which can demonstrate the advantages of this type of technology. To be more specific, from only one reference material, it is possible to design artificial isotropic, anisotropic, homogenous, and/or inhomogeneous dielectrics, which opens up a variety of new possibilities in this research field. In the previous chapter, the use of isotropic and anisotropic dielectric regions is proposed to design a dual-band circularly-polarized DRA, which has not been investigated so far in the literature. Therefore, the possibility of having these artificial dielectric media is studied in the next sections by using the 3D printing of ceramics.
Design of the 3D-Printed Isotropic Unit Cell
This section presents a unit cell proposed to emulate an artificial isotropic medium considering different parameter retrieval methods to provide a better and complete understanding of it.
To have an effective isotropic medium, the cell has to present symmetry along the x-, y-and z-axes. In this scenario, different cubic lattices can be used and the main ones are the simple cubic, body-centered cubic (BCC), and face-centered cubic (FCC), which are presented in Fig. 4.10 [START_REF] Gupta | A unit cell approach to model and characterize the metal powders and metal-dielectric composites at microwave frequencies[END_REF]. All of these structures are typically used to design isotropic cells, however, the effective permittivity ε ef f can slightly change according to the electric-field polarization used to calculate it. In other words, depending on the electric-field polarization and on the symmetry of the cell, the ε ef f can be slightly anisotropic. The list from the most to the least symmetric lattices is: FCC, BCC, and simple cubic [START_REF] Brillouin | Wave propagation in periodic structures: electric filters and crystal lattices[END_REF]. In this work, the simple cubic has been chosen because the proposed antenna needs an anisotropic region with a large birefringence, which could be achieved by modifying a lesser symmetric lattice initial isotropic unit cell. The proposed unit cell is presented in Fig. 4.11, where zirconia is used as a reference material, which has ε r = 32.5 and tanδ = 1.9 • 10 -4 at 10 GHz. It consists of cubic zirconia elements in each corner of the unit cell connected to each other by small pieces of zirconia. It should be noted that the elementary cell must be much smaller than the operating wavelength so that the medium can be considered homogeneous and characterized by an effective permittivity. Different parameter retrieval methods are now presented in the following to calculate the effective permittivity of the proposed unit cell.
Parameter Retrieval Methods
S-Parameters Retrieval Method
The S-parameters are here used to characterize the isotropic unit cell using Ansys HFSS, which consists in the employment of Floquet ports and Master/Slave boundary conditions, as described in Appendix A. This method is used in different works to compute the effective permittivity of unit cells for dielectric resonator antennas [START_REF] Mazingue | 3D Printed Ceramic Antennas for Space Applications[END_REF][START_REF] Lamotte | Multi-permittivity 3D-printed Ceramic Dual-Band Circularly Polarized Dielectric Resonator Antenna for Space Applications[END_REF][START_REF] David | 3D-printed ceramics with engineered anisotropy for dielectric resonator antenna applications[END_REF]. The idea here is to control the permittivity of a zirconia-made unit cell in order to have an effective permittivity ε ef f equal to 10, which is the optimal value needed to design the antenna proposed in Chapter 3.
The effective relative permittivity of the unit cell ε eff is controlled by the filling ratio δ i = V zirconia / V cell , where V zirconia is the volume of zirconia inside the unit cell and V cell is the total volume of a cube of periodicity a. In other words, to control the effective permittivity, it is necessary to control the volume of air and zirconia inside the cell. To illustrate it, Fig. 4.12 shows ε eff as a function of δ i computed considering an x-directed electric field polarization at 1.575 GHz, where a is equal to 4 mm. As expected, one can note that ε eff and δ i are proportional to each other. Finally, the effective permittivity ε eff is equal to 10 for a filling ratio δ i = 0.64. A common rule-of-thumb found in the literature suggests that the lattice of the unit cell is supposed to be smaller than a quarter of the wavelength to avoid dispersion, which happens when the unit cells start to resonate by themselves [START_REF] Raymond C Rumpf | Engineering the dispersion and anisotropy of periodic electromagnetic structures[END_REF]. To verify it, Fig. 4.13(a) shows the effective permittivity ε eff as a function of the frequency with a = 4 mm, w si = 0.6 mm, and l si = 2.99 mm, namely what has been calculated to achieve a permittivity of 10 at the L5 and L1 bands. One can note that the calculated effective permittivity decreases as the frequency increases. To better understand this variation, Fig. 4.13(b) presents the absolute difference using ε eff = 10 as a reference. For instance, for a = λ/10 the difference is only 0.6%, which could be acceptable depending on the application. Therefore, to avoid dispersion and an eventual lack of accuracy of the S-parameters retrieval method for electrically-large cells, the best practice is to use a unit cell as small as possible, taking into account the manufacturing limitations. Finally, the S-parameters retrieval method is a fast and straightforward way to calculate the effective permittivity of a periodic structure made up of subwavelength unit cells. However, it relies on several assumptions that restrict its ability to compute the unit cell dispersion with good accuracy, which can be a problem for electrically-large cells. Therefore, the dispersion diagram can be useful in this scenario, which is presented next in this section.
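For reference, the kind of post-processing behind such an S-parameters retrieval can be sketched as follows. The snippet below implements the standard homogenization formulas for a slab of thickness d at normal incidence (impedance from S 11 /S 21 , then refractive index from the transmission phase). It is a generic textbook-style extraction and not necessarily the exact post-processing applied to the HFSS Floquet-port results in this work; the synthetic S-parameters are built analytically for a slab of known permittivity so that the sketch is self-contained and purely illustrative.

```python
import numpy as np

def slab_s_params(n, d, f):
    """Analytical S-parameters of a non-magnetic dielectric slab in air
    (normal incidence), used here only as synthetic test data."""
    k0 = 2 * np.pi * f / 3e8
    z = 1 / n                               # normalized impedance of a non-magnetic slab
    gamma = (z - 1) / (z + 1)               # air/dielectric reflection coefficient
    p = np.exp(-1j * n * k0 * d)            # propagation factor through the slab
    denom = 1 - gamma**2 * p**2
    return gamma * (1 - p**2) / denom, (1 - gamma**2) * p / denom

def retrieve_eps(s11, s21, d, f):
    """Standard retrieval: impedance first, then index from the propagation factor."""
    k0 = 2 * np.pi * f / 3e8
    z = np.sqrt(((1 + s11)**2 - s21**2) / ((1 - s11)**2 - s21**2))
    if z.real < 0:                           # passive medium: Re(z) >= 0
        z = -z
    p = s21 / (1 - s11 * (z - 1) / (z + 1))  # recovered e^{-j n k0 d}
    n = 1j * np.log(p) / (k0 * d)            # principal branch (fine for subwavelength d)
    return n / z                             # eps_r = n / z (mu_r = n * z)

d, f = 4e-3, 1.575e9                         # hypothetical cell thickness and frequency
s11, s21 = slab_s_params(np.sqrt(10), d, f)  # synthetic data for eps_r = 10
print("retrieved eps_r ~", np.round(retrieve_eps(s11, s21, d, f), 3))
```

For a subwavelength thickness, the principal branch of the logarithm is unambiguous; for electrically-large cells the branch choice becomes delicate, which is one of the practical limitations of this family of methods mentioned above.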
Dispersion Diagram
Dispersion can be defined as any situation where the electromagnetic properties of a medium change. Different types of dispersion can be observed, such as chromatic, polarization, and spatial dispersion. Chromatic dispersion occurs when the electromagnetic properties of the material/medium change as a function of frequency. A classical example is a dispersive prism, which separates light into its spectral components, i.e. the colors of the rainbow: each color has a different wavelength and thus experiences a different refractive index and refracts at a different angle, as can be seen in Fig. 4.14(a). On the other hand, when the material properties are a function of the polarization of the wave, polarization dispersion takes place, as can be observed in Fig. 4.14(b). Finally, spatial dispersion is present when the material property is a function of the direction of the wave. To be more specific, this effect appears when waves in different directions travel at different velocities, as can be observed in Fig. 4.14(c). In this scenario, the dispersion relation is an important expression that relates the wave vector k to the frequency ω, which can be interpreted as a quantification of k as a function of direction and frequency. Considering a linear, homogeneous and isotropic (LHI) medium, and substituting the plane-wave solution into its wave equation, the dispersion relation can be written as
k_x^2 + k_y^2 + k_z^2 = k^2 = (k_0 n)^2,    (4.1)
which describes the surface of a sphere, since waves in LHI media experience the same refractive index in all directions and where n is the refractive index of the material. This surface is known as a dispersion surface and describes the spatial dispersion of a material.
For anisotropic media, the dispersion relation is somewhat more involved; however, almost the same reasoning used for LHI media applies. The only difference is that the permittivity in the wave equation of an LHI medium is a scalar and, thus, the mathematical development is easier. For a linear, homogeneous, and anisotropic (LHA) medium, the permittivity is a tensor, so the development is a little longer but still straightforward. For a biaxial medium, i.e. n_x ≠ n_y ≠ n_z, the dispersion relation can be written as [START_REF] Raymond C Rumpf | Engineering the dispersion and anisotropy of periodic electromagnetic structures[END_REF]
|\mathbf{k}|^2 \left[ \left( \frac{k_x}{n_y n_z} \right)^2 + \left( \frac{k_y}{n_x n_z} \right)^2 + \left( \frac{k_z}{n_x n_y} \right)^2 \right] - k_0^2 \left[ \frac{k_y^2 + k_z^2}{n_x^2} + \frac{k_z^2 + k_x^2}{n_y^2} + \frac{k_x^2 + k_y^2}{n_z^2} \right] + k_0^4 = 0,    (4.2)
where n_x , n_y , and n_z are the refractive indices experienced by waves purely polarized in the x-, y-, and z-directions, respectively.
When n_x = n_y ≠ n_z , the material is said to be uniaxial anisotropic and its dispersion relation can be rewritten as [START_REF] Raymond C Rumpf | Engineering the dispersion and anisotropy of periodic electromagnetic structures[END_REF]
\underbrace{\left( \frac{k_x^2 + k_y^2 + k_z^2}{n_o^2} - k_0^2 \right)}_{\text{sphere (ordinary wave)}} \; \underbrace{\left( \frac{k_x^2 + k_y^2}{n_e^2} + \frac{k_z^2}{n_o^2} - k_0^2 \right)}_{\text{ellipse (extraordinary wave)}} = 0,    (4.3)
where n o = n x = n y is the ordinary refractive index and n e = n z is the extraordinary refractive index. One can note that the dispersion relation of the uniaxial material has two solutions, which correspond to the two polarizations (TE and TM). The first solution is the same as an isotropic medium and describes a sphere, i.e. it is like the wave is propagating through an isotropic medium with refractive index n o . However, the second solution describes an ellipsoid, which means that the refractive index will be somewhere between n o and n e , depending on its direction.
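The two solutions of Eq. (4.3) can be evaluated numerically. In the sketch below, the effective index seen by the extraordinary wave is computed as a function of the propagation angle θ measured from the optic (z) axis, using 1/n(θ)^2 = cos^2θ/n_o^2 + sin^2θ/n_e^2, which follows directly from the ellipsoidal factor of Eq. (4.3); the numerical values of n_o and n_e are arbitrary (chosen to mimic permittivities of 10 and 22.1) and serve only as an illustration.

```python
import numpy as np

def extraordinary_index(theta_rad, n_o, n_e):
    """Effective index of the extraordinary wave vs. angle from the optic axis,
    obtained from the ellipsoidal factor of the uniaxial dispersion relation."""
    return 1.0 / np.sqrt(np.cos(theta_rad)**2 / n_o**2 +
                         np.sin(theta_rad)**2 / n_e**2)

n_o, n_e = np.sqrt(10.0), np.sqrt(22.1)   # illustrative values only
for deg in (0, 30, 60, 90):
    n_theta = extraordinary_index(np.radians(deg), n_o, n_e)
    print(f"theta = {deg:2d} deg: ordinary n = {n_o:.3f}, extraordinary n = {n_theta:.3f}")
```

At θ = 0 the extraordinary index collapses to n_o, and at θ = 90° it reaches n_e, which reflects the transition from the spherical to the ellipsoidal dispersion surface described above.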
When it comes to periodic structures, it is important to retrieve their effective electromagnetic properties to properly use them at a given frequency. However, these properties can be different for different frequency ranges. To be more specific, for periodic structures made out of dielectric with small lattices compared to the wavelength, it is possible to retrieve consistent effective properties from DC up to a cutoff frequency. Below this cutoff frequency, the periodic structure is nonresonant and, above it, a resonant behavior is observed and, thus, its properties can change significantly. This cutoff frequency can be determined from the electromagnetic band diagram, or simply dispersion diagram, by identifying the frequency in which the lowest-order electromagnetic band of the periodic structure starts to deviate from the light line, i.e. when the dispersion begins. Also, the dispersion diagram can be used to calculate the effective permittivity of the unit cell by using the following equation
\varepsilon_r = \left( \frac{k \lambda_0}{2\pi} \right)^2.    (4.4)
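As a usage example of Eq. (4.4), the snippet below converts a point read from the lowest electromagnetic band of the dispersion diagram (a phase constant expressed in rad/m together with its eigenfrequency) into an effective permittivity. The numerical point used here is hypothetical and only illustrates the conversion.

```python
import numpy as np

C0 = 3e8  # speed of light in vacuum (m/s)

def eps_from_dispersion_point(beta_rad_per_m, freq_hz):
    """Eq. (4.4): eps_r = (k * lambda_0 / (2*pi))**2."""
    lambda_0 = C0 / freq_hz
    return (beta_rad_per_m * lambda_0 / (2 * np.pi)) ** 2

a = 4e-3                          # lattice constant of the unit cell (m)
beta = 0.2 * np.pi / a            # hypothetical point: 20% of the way along Gamma-X
freq = 2.37e9                     # hypothetical eigenfrequency of that point (Hz)
print("eps_eff ~", round(eps_from_dispersion_point(beta, freq), 2))
```

For this (assumed) point the extraction returns an effective permittivity close to 10, i.e. the band still follows an effectively non-dispersive line at that frequency.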
In this work, the dispersion diagram is computed using the Eigenmode solution of the Ansys HFSS, where three pairs of Master/Slave boundary conditions are assigned on each face of the proposed isotropic unit cell shown in Fig. 4.11. Here, the analyses are made considering the dimensions of the unit cells calculated previously with the S-parameter retrieval method and, then, the dispersion diagram is presented to provide a complete investigation of the unit cell at issue.
Considering a = 4 mm, w si = 0.6 mm, and l si = 2.99 mm, the dispersion diagram of the lowest-order electromagnetic (EM) band is shown in Fig. 4.15, taking into account the irreducible Brillouin zone (IBZ) shown in the same figure, which is appropriate for simple cubic crystal structures. It is possible to note that the EM band starts to deviate from the light line around 4 GHz and, thus, the structure is dispersive above this cutoff frequency. In other words, the effective parameters of the unit cell is consistent for frequencies up to 4 GHz, which is illustrated by the blue region. Considering an effective permittivity of 10, at the cutoff frequency, the lattice of the unit cell is equivalent to λ/5.93. Using Eq. (4.4), the effective permittivity is calculated considering the path Γ-X of the IBZ of the proposed unit cell and compared to the one obtained from the S-parameters retrieval method, as shown in Fig. 4.16, where the same x-directed electric field polarization is taken into account for both analyses. The dimensions of the cell are a = 4 mm, w si = 0.6 mm, and l si = 2.99 mm. In Fig. 4.16(a), one can note that the curves of effective permittivity have different behaviors. More precisely, for cells smaller than λ/10, the results are different but the difference considering ε ef f = 10 as a reference is lower than 2%, as can be seen in Fig. 4.16(b), which we consider acceptable since two different analytical and numerical methods are employed. It is important to point out that this difference is calculated using ε ef f = 10 as a reference and the cells are optimized with the S-parameters method, which means that the difference is relative to the retrieval method used in this analysis. Moreover, as the cell is getting electrically bigger, the permittivity for the dispersion diagram method increases whereas it reduces for the S-parameters method, which could indicate a limitation of some of these methods for electrically-large cells. However, when the cell is smaller than λ/10, both methods are equivalents with a small discrepancy. As aforementioned, the effective permittivity can be extracted from the dispersion diagram using Eq. (4.4), however, depending on the path of the IBZ considered, the electric field polarization may change. To verify the effective permittivity from 1 to 6 GHz, the paths Γ-X, M-Γ, and Γ-R from the dispersion diagram are used, as shown in Fig. 4.17 for a = 4 mm and a = 12 mm. Both unit cells are initially designed with the S-parameters retrieval method to achieve ε ef f = 10 at 1.575 GHz, assuming that this method works regardless of the electric size of the cell. For a = 4 mm, one can note that the curves of permittivity are the same at lower frequencies and they slightly separate from each other as the frequency is getting higher. More precisely, at 6 GHz, the maximum difference between the permittivity curves for a = 4 mm is 3.3%, where the cell is electrically bigger, i.e. λ/3.95. On the other hand, for a = 12 mm, the difference between the curves of permittivity is important even for lower frequencies. At 3 GHz, the maximum permittivity difference is 18%, where a is equivalent to a < λ/2.64. Therefore, if the cell is electrically small, i.e. a < λ/10, even considering a simple cubic lattice, the permittivity curves are the same, which means that the level of isotropy of this cell remains good. However, if the cell is electrically large, the lack of isotropy can be a problem, and the FCC or BCC lattices would certainly be more appropriate. 
Figure 4.17: Effective permittivities calculated from the dispersion diagram considering different paths of the IBZ for a = 4 mm, w si = 0.6 mm, and l si = 2.99 mm, and for a = 12 mm, w si = 2.9 mm, and l si = 7.49 mm.
Dielectric Resonator Based on Periodic Structures
Previously in this section, the S-parameters retrieval and dispersion diagram methods were used to calculate the effective permittivity of the proposed isotropic unit cell. However, especially for electrically-large cells, the results from both methods are quite different. To evaluate which method is more accurate, a cubic dielectric resonator made up of several unit cells can be used to estimate the effective permittivity of the periodic structure using the Eigenmode solution of Ansys HFSS. In this analysis, the resonance frequency of the fundamental TE x δ11 mode is tracked by identifying its electric field distribution.
As a starting point, a cubic dielectric resonator is designed so that the fundamental TE x δ11 mode resonates at 1.57 GHz considering a homogeneous bulk dielectric with a permittivity of 10 directly assigned on Ansys HFSS. This model is referred to as the solid model and can be observed in Fig. 4.18(a), where w is found to be 37.4 mm. Then, unit cells are used to achieve ε eff = 10 for different sizes a of unit cells, and the models made up of these periodic structures are referred to as 3D-printed models, as shown in Fig. 4.18(b). It is important to point out that both models have the same value of w. In this analysis, the unit cells are designed using the S-parameters retrieval method to achieve an effective permittivity ε spar of 10 at 1.57 GHz for different values of a, assuming that this method works regardless of the electric size of the cell. Also, only cells with lattice a that give w/a equal to an integer are taken into account, to avoid issues with incomplete cells in the DR. Table 4.1 shows the dimensions of the cells considered in this analysis. One can note from Fig. 4.19 that, as w/a increases, i.e. the cells are getting electrically and physically smaller, the resonance frequency gets closer to 1.57 GHz. In other words, for electrically-small cells, the effective permittivity of the cell as calculated by the S-parameters retrieval method is accurate. Using the results from Fig. 4.19 and the solid model (see Fig. 4.18(a)), it is possible to estimate the effective permittivity of the full DR made up of unit cells. Using this model, its effective permittivity is varied in order to achieve the same resonance frequency for the TE x δ11 mode obtained with the 3D-printed models. Thus, the effective permittivity ε dr extracted from the dielectric resonator using the Eigenmode solution of Ansys HFSS is presented in Fig. 4.20. As expected, for electrically-small cells, the permittivity is close to 10. However, the effective permittivity increases as a gets bigger, which is the opposite of what happens with the S-parameters retrieval method described in Subsection 4.3.1.1. On the other hand, the behavior is similar to the one found with the dispersion diagram analysis in Subsection 4.3.1.2. Table 4.2 compares the effective relative permittivity found with the S-parameters retrieval method (ε spar ), dispersion diagram (ε dd ), and Eigenmode analysis with the DR (ε dr ). One can note that the column of ε spar is equal to 10 since the cells have been designed to have an effective permittivity of 10 using the S-parameters retrieval method. However, for this method, it was observed previously that the permittivity decreases as the electric size of the cell increases, which is the opposite of the behavior observed for the dispersion diagram method (see Fig. 4.16(a)). Using a dielectric resonator to try to understand which method is more accurate for our case, it is possible to note that the dispersion diagram method is more appropriate, especially for electrically-large unit cells, since the effective permittivity increases with the frequency. Figure 4.21 shows the absolute difference of the effective permittivity obtained with the dispersion diagram and S-parameters retrieval method using the results from the Eigenmode analysis with the DR as a reference. One can note that the difference is lower for the effective permittivity obtained with the dispersion diagram, as expected. Therefore, it is important to point out that, for electrically-small cells, i.e.
a < λ/10 both S-parameters retrieval method and dispersion diagram can be used. Nevertheless, when it comes to electrically-large cells, the dispersion diagram is more accurate to design dielectric resonators using periodic unit cells. Table 4.2: Effective permittivity computed using the S-parameters retrieval method (ε spar ), dispersion diagram (ε dd ), and Eigenmode analysis with the DR (ε dr ).
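The idea of matching resonance frequencies to back out an effective permittivity can also be expressed with a simple first-order rule: for a fixed geometry, the resonance frequency of a dielectric resonator mode scales approximately as 1/√ε_r, so ε_dr ≈ ε_ref (f_ref/f_obs)^2. The sketch below applies this rule of thumb to a hypothetical pair of frequencies; the values of ε_dr reported in Table 4.2 are obtained more rigorously, by re-simulating the solid model until its TE x δ11 resonance coincides with that of each 3D-printed model.

```python
def eps_from_resonance_shift(eps_ref, f_ref_hz, f_obs_hz):
    """First-order estimate: f ~ 1/sqrt(eps_r) for a fixed resonator geometry."""
    return eps_ref * (f_ref_hz / f_obs_hz) ** 2

# Hypothetical example: the solid model (eps_r = 10) resonates at 1.570 GHz,
# while a 3D-printed model built from large cells resonates at 1.530 GHz.
print("estimated eps_dr ~", round(eps_from_resonance_shift(10.0, 1.570e9, 1.530e9), 2))
```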
So far, the analyses are made considering a dielectric resonator without any feeding source. However, the main goal of this thesis is to design antennas, and, thus, it would be interesting to compare the Eigenmode analysis with the results of a full-wave simulation of a dielectric resonator antenna. To do so, the models presented in Fig. 4.22 are considered, which are mounted over an infinite ground plane. The antennas are here fed by a coaxial probe and its position allows the excitation of the TE x δ11 mode. Also, the same dimension for the dielectric resonator is considered, i.e. w = 37.4 mm, as well as the unit cell parameters presented in Table 4.1. For this simulation, the overall dimensions of the DRA are the same for all cases, with the exception of the height of the coaxial probe, which is adjusted for each model to ensure impedance matching to 50 Ω. One can note that, as the cells get larger, the resonance frequency of the antenna decreases, which means that its effective permittivity is increasing. The magnitude of the reflection coefficient |S 11 | can be used as well to estimate the effective permittivity of the unit cells. In this case, the permittivity of the solid model (see Fig. 4.22(a)) is varied in order to have the minimum |S 11 | at the same frequencies as the 3D-printed models. Table 4.3 compares the results of effective permittivity obtained with the full-wave and Eigenmode analyses, which are referred to as ε f w and ε eigen , respectively. In the last column, the absolute difference between these methods is presented, which is lower than 2%. Therefore, the results are similar for both Eigenmode and full-wave analyses, which agree with the dispersion diagram method as well. Table 4.3: Effective permittivity computed using the Eigenmode analysis with the DR (ε eigen ) and the full-wave simulation of the DRA (ε f w ), and the absolute difference between them.
Conclusion
The effective permittivity is here calculated considering three methods that are S-parameters retrieval method, dispersion diagram, and Eigenmode analysis with a DR. It is observed that, for electrically-small cells, both S-parameters retrieval method and dispersion diagram give equivalent results. However, as the cells get electrically larger, their behaviors are completely different. To overcome this issue, an Eigenmode analysis using a DR has been performed. This analysis indicates that the dispersion diagram is more accurate to extract the effective permittivity of the cells for the designing of dielectric resonators since it better takes into account the chromatic dispersion. In other words, the S-parameters retrieval method is not useful when the periodic structure is not homogeneous anymore, which happens when the cell is electrically large. Therefore, if the cell is smaller than λ/8, i.e. considering a difference lower than 1.5% (see Fig. 4.16(b)), we suggest the use of the S-parameters retrieval method, which is much faster than the dispersion diagram analysis, and its results were already verified in different works on DRAs [START_REF] Mazingue | 3D Printed Ceramic Antennas for Space Applications[END_REF][START_REF] Lamotte | Multi-permittivity 3D-printed Ceramic Dual-Band Circularly Polarized Dielectric Resonator Antenna for Space Applications[END_REF][START_REF] David | 3D-printed ceramics with engineered anisotropy for dielectric resonator antenna applications[END_REF]. Nevertheless, if electrically larger unit cells are used, the dispersion diagram seems to be more accurate for dielectric resonators. Also, the Eigenmode analysis can provide accurate results taking into account the dispersion. However, this method demands a lot of computational effort.
Manufacturing Limitations
Previously, it has been observed that it is possible to achieve different values of effective permittivity by just controlling the dimensions of the cell. However, to manufacture these cells, it is important to take into account the capabilities of the 3D printer. In this work, the 3D-printer C900 developed by 3DCeram in France is employed. Like any manufacturing process, some constraints must be respected to ensure the fabrication of a prototype. For the 3D-printer C900 and considering our cell, the main constraints are: the zirconia walls must be thicker than 1 mm and the diameter of the holes must be at least 0.6 mm. Therefore, these constraints may limit the range of effective permittivity that could be achieved with the proposed isotropic unit cell shown in Fig. 4.11.
Taking all of this into account, in Fig. 4.24(a), the effective permittivity of the proposed unit cell is represented at 1 GHz for a equal to 4 mm. Here, it has been calculated with the S-parameters retrieval method, which is expected to be accurate for such an electrically-small cell (see Subsection 4.3.1.1). The inner part of the region highlighted on the plot corresponds to the 3D printer capabilities. One can note that it is possible to have printable unit cells with effective permittivity from 1.62 up to 29.7. It is important to understand the maximum frequencies for which these printable cells could be used, taking into consideration the dispersion and the limitations of the retrieval method at issue. The maximum frequency thus depends on the criterion adopted to design the cells. For instance, if the criterion is a < λ/10, the maximum frequency varies from 1.38 GHz to 5.88 GHz, as seen in Fig. 4.24(b). Otherwise, if the criterion is a < λ/6, as shown in Fig. 4.24(c), the maximum frequency goes from 2.29 GHz up to 9.80 GHz. Therefore, the maximum frequency at which a printable unit cell can be used depends on the chosen criterion. To increase f max , it is intuitive that the lattice a of the unit cell must be reduced. Considering the printable values of w si and l si for a smaller a = 2.5 mm, the effective permittivity ε eff now varies from 1.46 to 18.54, as seen in Fig. 4.25(a). The maximum frequency for a < λ/10 goes from 1.99 GHz to 6.23 GHz, as can be observed in Fig. 4.25(b). For a < λ/6, the maximum frequency varies between 3.32 GHz and 11.08 GHz, as illustrated in Fig. 4.25(c). Therefore, as expected, the maximum frequency can be increased by decreasing the size a of the cell, but the range of printable effective permittivity decreases since it is limited by the 3D-printer constraints.
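These maximum frequencies can be reproduced from the electrical-size criterion itself: requiring the lattice a to stay below a fraction λ_g/N of the wavelength in the effective medium gives f_max = c/(N · a · √ε_eff). The sketch below evaluates this expression for the extreme printable permittivities quoted above and reproduces the limits of Figs. 4.24(b)-(c) to within rounding, which suggests this is indeed the criterion at play; the relation itself is stated here as an interpretation, not quoted from the text.

```python
C0 = 3e8  # speed of light in vacuum (m/s)

def f_max(a_m, eps_eff, fraction):
    """Highest frequency at which the lattice a stays below lambda_g / fraction,
    with lambda_g the wavelength in the effective medium."""
    return C0 / (fraction * a_m * eps_eff ** 0.5)

a = 4e-3
for eps in (1.62, 29.7):
    for n in (10, 6):
        print(f"a = 4 mm, eps_eff = {eps:5.2f}, a < lambda/{n}: "
              f"f_max ~ {f_max(a, eps, n) / 1e9:.2f} GHz")
```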
Design of the 3D-Printed Anisotropic Unit Cell
The dual-band CP DRA proposed in Chapter 3 presents an isotropic region as well as two anisotropic regions with high birefringence. To facilitate the transition between these different dielectric regions, the isotropic unit cell proposed in Section 4.3 is slightly modified to have a uniaxial anisotropic behavior. The anisotropic unit cell, shown in Fig. 4.26, exhibits a lack of rotational symmetry. Thus, this structure presents an artificial anisotropy. The effective permittivity of this unit cell is represented by the following tensor [ε x 0 0; 0 ε y 0; 0 0 ε z ], where ε x = ε y = ε z . In addition, zirconia (ε r = 32.5 and tanδ = 1.9 • 10 -4 at 10 GHz) is again assigned as the material of the proposed unit cell. To control the effective permittivity and birefringence of the unit cell, the parameters w sa and l sa can be varied, while the lattice a = 4 mm and ε r = 32.5 are fixed. For instance, Fig. 4.27(a) shows the effective permittivity of the unit cell calculated with the S-parameters retrieval method at 1.575 GHz as a function of w sa for l sa = 2.89 mm. As w sa increases, the permittivity components decrease, as expected, since the volume of air inside the cube is increasing as well. The permittivity components experience a similar behavior when l sa increases, as can be seen Fig. 4.27(b) for w sa = 0.76 mm. Also, the strength of anisotropy, or birefringence, ∆ε = ε z -ε y,x is shown as a function of w sa and l sa in Fig. 4.27(c) and 4.27(d), respectively. One can note that ∆ε tends to increase as w sa decreases until it reaches its maximum value, which is 13.5 for w sa equal to 0.45 mm. However, the maximum printable birefringence is 13.1 since it would not be possible to print cells with holes with diameter smaller than 0.6 mm. On the other hand, l sa is proven to be proportional to ∆ε and its maximum printable value is 15.3 for l sa = 3.5 mm.
The dispersion diagram and the irreducible Brillouin zone of the proposed anisotropic cell are shown in Fig. 4.28, where a = 4 mm, w sa = 0.76 mm, l sa = 2.89 mm, and ε r = 32.5. One can note that the cutoff frequency defining a homogeneous medium is around 3.0 GHz, since this is the frequency where the EM band starts to deviate from the light line. So, to retrieve the effective parameters of this periodic structure, it is recommended to work with frequencies below 3 GHz, since, above this frequency, the unit cells start to resonate by themselves and some unusual and complex effects can appear. Therefore, the proposed anisotropic unit cell allows the possibility of having large birefringence, which is necessary to design the proposed dual-band CP DRA developed in Chapter 3. Also, its shape facilitates the transition between the isotropic and anisotropic regions of the DRA.
Conclusion
Even though dielectric resonator antennas have several advantages when compared to some other traditional antennas, their usage was still limited due to the complicated fabrication process in terms of materials and complex shapes. The establishment of 3D-printing technology has the potential to overcome this issue, which can be combined with periodic structure theory to allow the realization of artificial isotropic, anisotropic, homogeneous, inhomogeneous, dispersive, and/or nondispersive media, for instance. In other words, from a single printing material and properly choosing the unit cell that will form this periodic medium, different media with different effective dielectric properties can be achieved.
In this chapter, two different types of electrically-small unit cells have been proposed, both made out of zirconia (ε r = 32.5 and tanδ = 1.9 × 10 −4 at 10 GHz), which is a high-performance ceramic material. The unit cells are electrically small at the L5 and L1 bands, thus avoiding dispersion effects. First, an isotropic cell is proposed, which has rotational symmetry along the x-, y- and z-axes and thus presents an isotropic behavior. The value of the effective relative permittivity ε eff of the cell is controlled by the ratio between the amount of air and zirconia in it.
Three different methods are used to investigate the isotropic unit cell: the S-parameters retrieval method, the dispersion diagram, and an Eigenmode analysis with a DR. It is demonstrated that, for electrically-small cells, both the S-parameters retrieval method and the dispersion diagram can be used to compute the effective permittivity. On the other hand, using an Eigenmode analysis, it is shown that the dispersion diagram is more accurate for electrically-large unit cells used to design dielectric resonators, since this method takes the dispersion into account, as its name suggests, which is a limitation of the S-parameters retrieval method.
A unit cell with a uniaxial anisotropic behavior is proposed as well. The anisotropy is achieved due to the lack of its rotational symmetry and its design allows the realization of an effective medium with an effective permittivity of [ε x 0 0; 0 ε y 0; 0 0 ε z ], where ε x = ε y = ε z , using the zirconia, which is an isotropic material. For a unit cell with lattice of 4 mm and considering the restrictions of the 3D-printer, birefringence, i.e. ∆ε =| ε z -ε x,y |, up to 15.7 can be achieved. Also, the effective permittivity components can be easily adjusted by controlling the dimensions of the proposed unit cell.
As the permittivities of the isotropic and anisotropic regions in Chapter 3 were directly assigned on Ansys HFSS, the unit cells proposed in this chapter allow the realization of the proposed inhomogeneous and anisotropic DRA with the use of 3D-printing technology. Therefore, in the next chapter, the unit cells and the proposed DRA will be combined to manufacture the proposed antenna.

In Chapter 3, a dual-band circularly-polarized dielectric resonator antenna (DRA) has been proposed. This design relies on the local control of the dielectric properties of the resonator, which is made up of an isotropic and two anisotropic regions. So far in this thesis, the properties of these dielectric regions have directly been assigned on Ansys HFSS. To manufacture this antenna, it is therefore necessary to use the unit cells proposed in Chapter 4 to create artificial isotropic and anisotropic media with the needed permittivities. In this chapter, isotropic and anisotropic unit cells are first designed for the proposed dual-band circularly-polarized DRA. In Section 5.1, the simulation results of the antenna used in Chapter 3 and the antenna made out of subwavelength unit cells are presented. In Section 5.2, the manufacturing process is described. Finally, in Section 5.3, the measured results are presented.
3D-Printed Model
Previously, the local control of the dielectric properties of a resonator was used to conceive a dual-band CP antenna. However, these properties were directly assigned on Ansys HFSS. To manufacture this antenna, dielectrics with such properties may not be available on the market, which limits the degree of freedom in designing this type of structure. Nevertheless, subwavelength unit cells can be used to realize them.
In Chapter 3, the proposed antenna presented an isotropic dielectric with relative permittivity ε ri equal to 10, and an anisotropic one represented by the following tensor:
\varepsilon_{ra} = \begin{bmatrix} \varepsilon_x & 0 & 0 \\ 0 & \varepsilon_y & 0 \\ 0 & 0 & \varepsilon_z \end{bmatrix} = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 10 & 0 \\ 0 & 0 & 22.1 \end{bmatrix}.    (5.1)
To create artificial media with these dielectric properties, the isotropic and anisotropic subwavelength unit cells proposed in Sections 4.3 and 4.4 are employed, as shown in Fig. 5.1. The proposed unit cell shown in Fig. 5.1(a) has symmetry along the x-, y-, and z-axes, presenting isotropic behavior. After optimization, we find w si and l si equal to 0.6 mm and 2.99 mm, respectively. Considering these dimensions, the curve of effective permittivity over frequency can be observed in Fig. 5.2, as calculated using the S-parameters retrieval method. One can note that the effective permittivity is equal to 10.08 and 10.06 around 1.17 GHz and 1.57 GHz, respectively. This difference is only about 0.2% because the unit cell is only a fraction of the wavelength, i.e. a ≈ λ/15 at the L1 band. For the proposed uniaxial anisotropic unit cell shown in Fig. 5.1(b) and considering a = 4 mm, w sa = 0.76 mm, and l sa = 2.89 mm, the permittivity as a function of the frequency is presented in Fig. 5.3. It is possible to observe that at 1.17 GHz, ε x = ε y = 9.9 and ε z = 22.2, and at 1.57 GHz, ε x = ε y = 9.8 and ε z = 22.2. Moreover, at the L1 band and considering the highest permittivity component, the lattice a of the unit cell is equivalent to around λ/10. For the 3D-printed model, the 3-dB AR bandwidths are 4.84% (1121.9 MHz - 1177.5 MHz) and 1.42% (1552.9 MHz - 1575.1 MHz). One can note a downward frequency shift for the 3D-printed model in comparison to the solid one and, to fix it, it would be necessary to re-optimize the 3D-printed model. However, the simulation of this model requires high computational effort, which does not allow computational optimization. The left- and right-handed components of the gain pattern, in dBi, at the L5 and L1 bands, are shown in Fig. 5.7 for both 3D-printed and solid models in the φ = 0 • and φ = 90 • planes. One can note that the patterns are similar at both frequencies. Also, broadside radiation patterns are observed, as expected for the radiating modes at issue. Both models are observed to present right-hand circular polarization at the L5 and L1 bands.
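A quick check of these electrical sizes can be done directly from the lattice constant and the effective permittivity: the ratio between the wavelength in the effective medium and the lattice is λ_g/a = c/(a f √ε_eff). The snippet below reproduces the ≈ λ/15 and ≈ λ/10 figures quoted above for the isotropic (ε_eff ≈ 10) and anisotropic (largest component ε_z ≈ 22.2) cells at the L1 band; it is only an arithmetic cross-check of the values already stated.

```python
C0 = 3e8  # speed of light in vacuum (m/s)

def cell_size_in_wavelengths(a_m, eps_eff, freq_hz):
    """Ratio lambda_g / a, with lambda_g the wavelength in the effective medium."""
    return C0 / (freq_hz * eps_eff ** 0.5) / a_m

a, f_l1 = 4e-3, 1.575e9
print("isotropic cell  : lambda_g/a ~", round(cell_size_in_wavelengths(a, 10.0, f_l1), 1))
print("anisotropic cell: lambda_g/a ~", round(cell_size_in_wavelengths(a, 22.2, f_l1), 1))
```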
Manufacturing Process
The 3DCeram C900 printer is used to manufacture the proposed dual-band CP DRA. It employs a stereolithography laser during the fabrication process. Like any manufacturing process, some constraints must be considered, and the main ones are: the zirconia walls must be thicker than 1 mm and the diameter of the holes must be at least 0.6 mm. Even though the constraints were respected, these types of unit cells were printed for the first time using a technology not completely established and, despite several attempts, the DRA broke into small pieces during the sintering, as can be seen in Fig. 5.8. To increase the chance of 3D printing the proposed antenna, 3DCeram suggested reducing the volume of the antenna, reducing l si , and increasing w si . Regarding the volume issue, the DRA has to be re-designed to operate at higher frequencies. To properly choose the frequencies, it is necessary to take into account the dispersion of the unit cells (see Fig. 4.15 and 4.28) and the frequency range that can be measured in the anechoic chamber available in our facilities. Also, this DRA can be designed for different frequencies by controlling mainly the width w, height b, and permittivity ε r of the DRA, as can be seen in Fig. 5.9. For this analysis, an isotropic, homogeneous, and rectangular DRA with a square base, w = 51 mm, and ε r = 10 is considered while b varies, for instance. From this plot, it is possible to realize that the separation between the fundamental TE x,y 111 and higher-order TE x,y 113 modes can be adjusted through these parameters. Based on these considerations, the antenna is re-designed to operate at the L1 band and the up-link of the TT&C band; its parameters are summarized in Table 5.1 and compared to the DRA designed to operate for the L5 and L1 bands. The new design has an overall volume of 59.41 cm 3 , which is 56.21% smaller than that of the L5 and L1 bands antenna, whose volume is 135.68 cm 3 . Therefore, a relevant size reduction is achieved by simply increasing the operating frequencies of the DRA.
Table 5.1: Parameters of the re-designed DRA compared to the initial L5/L1 design, where ε ri = ε x = ε y .
In Table 5.1, it is possible to note that the permittivity tensor of the anisotropic unit cell changed with the new design, i.e. ε x = ε y = 10 and ε z = 20.5. Using the concepts discussed in Section 4.4, the dimensions of the anisotropic cell are a = 4 mm, w sa = 0.91 mm, and l sa = 2.72 mm. Moreover, to increase our chance of successfully 3D-printing the proposed DRA, the isotropic cell is also re-designed, with a = 4 mm, w si = 1.08 mm, and l si = 2.52 mm. Note that w si increases and l si decreases when compared to the first design. The curves of effective permittivity are presented in Fig. 5.10 for the isotropic and anisotropic cells considering the aforementioned parameters, and the desired permittivity values are achieved. Also, the electrical sizes of the isotropic and anisotropic cells at the up-link of the TT&C band are equivalent to λ/11.5 and λ/8.0, respectively. After these modifications and some effort in the manufacturing process, the inhomogeneous and anisotropic DRA has finally been successfully 3D-printed, as can be observed in Fig. 5.11(a). However, it is possible to identify a small defect on one of the sides of the dielectric resonator, namely in the isotropic region, as shown in Fig. 5.11(b). Besides, the final dimensions of the DR are slightly smaller than expected, with w = 32.9 mm and b = 51.5 mm, i.e. errors of 2.6% and 1%, respectively. Even with this defect, the antenna has been measured and the results are presented in the following section.
Measurements and Discussion
The 3D-printed DRA is mounted over a ground plane and an SMA connector is used to connect the antenna to a 50-Ω coaxial cable, as can be observed in Fig. 5.12. At first, the magnitude of the reflection coefficient |S 11 | in dB has been measured with a Copper Mountain S5065 Vector Network Analyzer (VNA), and the measured and simulated results are presented in Fig. 5.13. For the simulated result, the impedance bandwidths (|S 11 | < 10 dB) are 22.42% (1.37 GHz - 1.72 GHz) and 8.27% (1.97 GHz - 2.14 GHz). For the measured results, the impedance bandwidths are 20.95% (1.41 GHz - 1.74 GHz) and 8.57% (2.01 GHz - 2.19 GHz). One can note an upward frequency shift for the measurements of around 2%, which is most likely due to the manufacturing tolerances, since some defects are noted and the width and height of the 3D-printed antenna are 2.6% and 1% smaller than expected, respectively. In this work, the axial ratio and radiation patterns are measured in a SIEPEL anechoic chamber with a Rohde & Schwarz ZVL13 VNA at ISAE-SUPAERO. Considering this setup, Fig. 5.14 shows the simulated and measured axial ratio as a function of the frequency at the boresight direction, i.e. for φ = 0 • and θ = 0 • . The simulated 3-dB axial ratio (AR) bandwidths are 5.06% (1.54 GHz - 1.62 GHz) and 1.45% (2.05 GHz - 2.08 GHz). For the measured results, the 3-dB axial ratio (AR) bandwidths are 4.31% (1.59 GHz - 1.66 GHz) and 2.38% (2.08 GHz - 2.13 GHz). An upward frequency shift of around 2% of the measured results is observed once again, as expected. Therefore, one can note that, even with this slight frequency shift, the antenna presents dual-band operation and circular polarization, which is the goal of the proposed design. The simulated and measured left- and right-handed components of the gain pattern in dBi, calculated at the frequency of minimum axial ratio, are shown in Fig. 5.15. At both bands and for both simulation and measurements, it is possible to observe that the proposed antenna presents broadside radiation patterns, as expected for the TE x δ11 , TE y 1δ1 , TE x δ13 , and TE y 1δ3 modes. Also, one can note that the antenna presents right-handed circular polarization and both models present similar radiation patterns. Figure 5.17 shows the measured and simulated realized gain in dBi at the boresight direction, i.e. φ = 0 • and θ = 0 • . At the L1 band, the maximum realized gains are 7.02 dBi and 6.45 dBi for the simulation and measurement, respectively. At the up-link TT&C band, the maximum realized gain in simulation is 8.27 dBi and, for the measurement, 7.30 dBi. Therefore, one can note that the measured realized gains are slightly lower than the simulated ones. Finally, with the measurements, it is observed that the 3D-printed inhomogeneous and anisotropic DRA presents circular polarization with dual-band operation, as expected. Table 5.2 summarizes the main results obtained from the measurements and simulations. The measured results present a frequency shift of around 2% in the |S 11 | and axial ratio compared to the simulated ones. However, we consider this difference acceptable since the antenna presents visible defects and a size difference due to the manufacturing process.
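The magnitude of this shift is consistent with the measured under-sizing of the resonator: to first order, the resonance frequency of a dielectric resonator scales inversely with its linear dimensions, so a uniform shrinkage of a few percent translates into a comparable upward frequency shift. The sketch below applies this first-order scaling to the dimensional errors quoted above; it is only a plausibility check, not a full tolerance analysis.

```python
def shifted_frequency(f_design_hz, relative_shrinkage):
    """First-order estimate: f ~ 1/L, so f_meas ~ f_design / (1 - shrinkage)."""
    return f_design_hz / (1.0 - relative_shrinkage)

f_l1 = 1.575e9
for shrink in (0.010, 0.026):                 # 1% (height) and 2.6% (width) errors
    f_new = shifted_frequency(f_l1, shrink)
    print(f"shrinkage {100 * shrink:.1f}% -> shift ~ +{100 * (f_new / f_l1 - 1):.1f}%")
```

The predicted shift of roughly 1% to 2.7% brackets the observed ~2%, supporting the attribution of the shift to manufacturing tolerances.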
Conclusion
In this chapter, isotropic and anisotropic unit cells are used to realize the dual-band CP dielectric resonator antenna proposed in Chapter 3. Previously, the dielectric properties of this antenna were directly assigned on Ansys HFSS for simulations. Here, the needed dielectric properties are achieved thanks to the use of 3D-printable subwavelength unit cells, and the main results are presented and discussed.
Initially, the dual-band CP DRA has been designed to operate at the L5 (1166.22 MHz - 1186.68 MHz) and L1 (1565.19 MHz - 1585.65 MHz) bands of the GNSS. Nevertheless, prototypes of this antenna broke during the 3D printing. After some trials, 3DCeram concluded that the antenna was too big and fragile. To overcome this problem, they suggested reducing the size of the antenna and changing some dimensions of the isotropic unit cell.
The requested modifications of the isotropic unit cell are easily made thanks to the developments of Section 4.3. To reduce the volume, the operating frequencies of the prototype have been increased so that it operates at the L1 (1565.19 MHz - 1585.65 MHz) and up-link TT&C (2025 MHz - 2110 MHz) bands, with a volume reduction of 56.21%. Finally, the dual-band CP DRA has been successfully 3D-printed.
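As a rough, idealized cross-check of the quoted volume reduction (our own estimate, not taken from the thesis), the resonator dimensions scale approximately as 1/f, so the volume scales as 1/f³; the exact 56.21% figure also depends on how the anisotropic regions were re-dimensioned.

```python
# Idealized 1/f^3 scaling of the resonator volume between the two designs.
f_old = 1176.45e6   # centre of the L5 band (Hz), lower band of the first design
f_new = 1575.42e6   # centre of the L1 band (Hz), lower band of the re-designed antenna
print(f"{100.0 * (1.0 - (f_old / f_new) ** 3):.1f} %")   # ~58 %, of the same order as 56.21 %
```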
Measurements have been performed and the results are compared with the simulated ones. A good agreement between the results is observed. Nevertheless, in the measured |S 11 | and axial ratio, an upward frequency shift of around 2% is observed. This shift is attributed to some visible defects and size differences in the prototype due to the manufacturing tolerances. These problems during the manufacturing seem to be related to the fact that the proposed antenna is inhomogeneous, which leads to different coefficients of thermal expansion within the object during the sintering process. However, such issues are to be expected since the process of 3D-printing ceramics is not yet fully established.
One of the main goals of this thesis is to demonstrate the possibility of controlling the dielectric properties in an electrically-small resonator. The measured results demonstrate that this is possible: an inhomogeneous and anisotropic DRA with dual-band operation and circular polarization has been 3D-printed. In the next chapter, we propose an extension of this design to a third band.

Inhomogeneous and Anisotropic Triple-Band DRA

Principle of Operation

Since the quasi-TM 001 mode has so far been reported for isotropic and homogeneous rectangular DRAs, whereas an inhomogeneous and anisotropic dielectric is used for the dual-band DRA proposed in Chapter 3, it is necessary to verify whether this mode exists in this structure.
As a starting point, the dual-band DRA proposed in Chapter 3 is considered to investigate the existence of the quasi-TM 001 mode. The DR is placed over an infinite ground plane without any sources, as can be seen in Fig. 6.1, and an Eigenmode analysis is then performed using Ansys HFSS. The parameters optimized in Section 3.1 are considered for the Eigenmode analysis, where ε ri = ε x = ε y = 10, b = 75.0 mm, w = 41.5 mm, b a = 42.0 mm, l a = 29.0 mm, and w a = 10.5 mm. Nevertheless, instead of using the optimized value of ε z = 25.1 at first, its initial value is set to 10, i.e. the isotropic and homogeneous case, for which the presence of the quasi-TM 001 mode has already been proven [Pan et al., Study of resonant modes in rectangular dielectric resonator antenna based on radar cross section]. Considering these parameters, the quasi-TM 001 mode is found at 2.52 GHz, as identified in Fig. 6.2 by its dipole-like field distribution.

Now that the resonance frequency of the quasi-TM 001 mode for the isotropic and homogeneous scenario has been identified, one can track this mode when varying ε z to verify its existence. Figure 6.3 shows the resonance frequency and Q-factor of the quasi-TM 001 mode as a function of ε z . This mode is observed for all values of ε z considered in this analysis, i.e. from 6 to 28. It means that, even with the lack of symmetry of the dielectric properties of the DR, this mode can be excited. Also, for the optimized value of ε z = 25.1, the quasi-TM 001 mode resonates at 2.32 GHz with a Q-factor of 7.42.

Here, the goal is to have the quasi-TM 001 mode operating in the ISM band; it is thus necessary to shift it from 2.32 GHz to around 2.45 GHz. The same strategy used in Chapter 2 can be employed here. It consists of using an air cavity at the center of the DRA, where the electric fields of the TE x δ11 , TE y 1δ1 , TE x δ13 , and TE y 1δ3 modes are weak while that of the quasi-TM 001 mode is strong. It is therefore possible to increase the resonance frequency of this mode without significantly affecting the modes used for the GNSS bands. As a result, the structure shown in Fig. 6.4 is considered, which is very similar to the dual-band DRA except for the presence of a square-based air cavity at its center with height h ag and width w ag . To analyze the behavior of the different modes in the presence of the air cavity, their resonance frequencies are calculated as a function of the height h ag using the Eigenmode solution of Ansys HFSS, as observed in Fig. 6.5. One can note that the quasi-TM 001 mode is more sensitive to variations of h ag than the modes used for the GNSS bands. Also, the resonance frequencies of the TE x δ11 and TE y 1δ1 modes remain almost the same for the different values of h ag , while the TE x δ13 and TE y 1δ3 modes are slightly more sensitive, especially for values of h ag larger than 20 mm. Therefore, depending on the height of the air cavity, the dimensions of the DRA may need to be slightly readjusted so that circular polarization is achieved at the L5 and L1 bands, following the same reasoning presented in Chapter 3.
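The air-cavity mechanism can be summarized by the standard material-perturbation argument for resonators (a textbook result recalled here for convenience, not a formula taken from the thesis): for a small change of the material filling a volume ΔV of the resonator,

\[
\frac{\Delta f}{f_0} \;\simeq\; -\,\frac{\displaystyle\int_{\Delta V}\left(\Delta\varepsilon\,|\mathbf{E}_0|^2+\Delta\mu\,|\mathbf{H}_0|^2\right)\mathrm{d}V}{\displaystyle\int_{V}\left(\varepsilon\,|\mathbf{E}_0|^2+\mu\,|\mathbf{H}_0|^2\right)\mathrm{d}V}.
\]

Replacing dielectric by air makes Δε negative in the carved region, so a mode whose electric field is strong there, such as the quasi-TM 001 mode, is pushed up in frequency, while the TE x δ11 and TE y 1δ1 modes, whose fields are weak at the center, are barely affected; this is exactly the trend observed in Fig. 6.5.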
Antenna Design and Results
In the previous section, the natural resonances of the DR have been investigated using an infinite ground plane without any sources. Here, we describe the final DRA design, where two ports are implemented: one for the GNSS bands and the other for the ISM band. To excite the quasi-TM 001 mode, a coaxial probe is placed at the center of the DR and connected to an SMA connector, as presented in Fig. 6.6; it is thereafter referred to as port 2. This connector is grounded using metallic vias, and a conducting line is used to ensure that these vias are not below the DRA, which would complicate the manufacturing of the antenna. Besides, the presence of these vias close to the slot can affect the performance of the antenna at the GNSS bands, as discussed in the next section. Finally, as the probe and SMA connector are at the center of the substrate, the slot is now coupled to a microstrip line with a T-junction, which is here considered as port 1.

The height h pin of the probe is equal to 36.5 mm, and the length l s and width w s of the slot are 51.0 mm and 3.22 mm, respectively. Regarding the microstrip lines, their dimensions are w l = 3.86 mm, l l = 40 mm, l T = 25 mm, w T = 1 mm, l T 1 = 92 mm, l m = 35 mm, w m = 4 mm, and the stub length is 30 mm. Finally, a Taconic RF-301 substrate with a thickness of 1.524 mm and a dielectric constant of 2.97 is employed.
Considering the aforementioned properties, the S-parameters of the proposed dual-port triple-band DRA are computed with Ansys HFSS, as can be seen in Fig. 6.7. At the GNSS bands, the simulated impedance bandwidths (|S 11 | < -10 dB) are 29.6% (0.98 GHz - 1.32 GHz) and 8.7% (1.53 GHz - 1.67 GHz) for port 1, i.e. the microstrip line coupled to the slot. In addition, one can note that the two feeding ports are well isolated, since |S 21 | is below -30 dB from 0.9 GHz to 1.7 GHz. At the ISM band, namely Fig. 6.7(b), the simulated impedance bandwidth (|S 22 | < -10 dB) is 8.9%, from 2.35 GHz to 2.57 GHz, for port 2, which is the port that excites the quasi-TM 001 mode. Moreover, at this frequency band, the mutual coupling is still below -40 dB, which means that the two ports are not coupled.

Figure 6.9 shows the left- and right-handed components of the gain, in dBi, at the L5 and L1 bands, calculated while port 1 is excited and port 2 is matched to a 50-Ω load. One can note that the antenna presents broadside radiation patterns, as expected for the TE x δ11 , TE y 1δ1 , TE x δ13 , and TE y 1δ3 modes. In addition, at the L5 and L1 bands, the DRA is still right-handed circularly polarized. For the ISM band, the φ- and θ-components are presented in Fig. 6.10; they are calculated while port 2 is excited and port 1 is matched to a 50-Ω load. A monopole-like radiation pattern with linear polarization is observed in the different planes, as expected for the quasi-TM 001 mode.

Figure 6.11 shows the simulated realized gain in dBi for the proposed triple-band DRA. In Fig. 6.11(a), the realized gain at the boresight direction, i.e. θ = 0° and φ = 0°, is presented for port 1 while port 2 is matched to a 50-Ω load. One can note that the realized gain reaches 6.5 dBi and 6.3 dBi at the centers of the L5 and L1 bands, respectively. At the ISM band, the realized gain is calculated for θ = 60° and φ = 0° for port 2 while port 1 is matched to a 50-Ω load. From Fig. 6.11(b), it is possible to observe that the realized gain is 2.5 dBi at 2.45 GHz.
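To put the quoted isolation levels in perspective, the following conversion (ours, not from the thesis) expresses the -30 dB figure as a power ratio.

```python
# |S21| in dB converted to the fraction of incident power coupled to the other port.
s21_db = -30.0
print(10 ** (s21_db / 10.0))   # 0.001, i.e. about 0.1 % of the power leaks between ports
```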
Feeding Method
One important difference between the proposed dual- and triple-band DRAs is the feeding network. For the dual-band circularly-polarized DRA, a simple slot is coupled to a 50-Ω microstrip line to excite the fundamental TE x δ11 and TE y 1δ1 modes and the higher-order TE x δ13 and TE y 1δ3 modes. On the other hand, for the triple-band antenna, a second port is connected to a coaxial probe placed at the center of the DRA to excite the quasi-TM 001 mode. It is thus necessary to find a feeding scheme that keeps the same performance at the GNSS bands.
Regarding port 1, which excites the modes for the GNSS bands and can be seen in Fig. 6.6, a simple modification is made compared to the dual-band DRA. Instead of using a simple and straight 50-Ω microstrip transmission line, a T-junction is used to open up some space at the center of the antenna for the probe. More precisely, port 1 is connected to a 50-Ω microstrip line that is split into two 100-Ω lines, which are coupled to the slot.

Figure 6.13: Perspective view of the proposed triple-band DRA with the initial feeding scheme.
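A quick sanity check of this split (a trivial sketch of our own): the two 100-Ω branches are seen in parallel at the junction, so the feeding 50-Ω line remains matched.

```python
# Input impedance seen by the 50-ohm line at the T-junction.
def parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

print(parallel(100.0, 100.0))   # 50.0 ohms
```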
The quasi-TM 001 mode can now be fed by a 50-Ω coaxial probe connected to an SMA connector, which needs to be grounded. A straightforward way to do so is to use metallic vias connecting the SMA connector directly to the ground plane, as can be observed in Fig. 6.13. However, it was noted that these vias were disturbing the performance of the antenna at the GNSS bands since they are close to the slot. More precisely, the reflection coefficient of port 1 is highly affected by the vias around the slot and, even after optimizing the microstrip lines and the slot, no better result than the one shown in Fig. 6.14 has been obtained. To keep the same performance at the GNSS bands, the vias are moved away from the slot and the feeding scheme shown in Fig. 6.15(b) is proposed, where microstrip transmission lines are used.

To understand the influence of these lines, a parametric analysis of the reflection coefficient at port 1 is performed as a function of their length l m , as can be observed in Fig. 6.16. It is possible to note that the reflection coefficient tends to improve at both bands as the value of l m increases, i.e. as the vias move away from the slot. It is also important to point out that the optimized value of l m found in the previous section is 35 mm.

It is necessary to verify as well the behavior of the antenna at the ISM band as the vias move away from the slot, i.e. as the length l m of the lines connected to the vias increases. The |S 22 | is shown as a function of l m at the ISM band in Fig. 6.17. Each of the metallic arms acts as a stub connected to the coaxial cable; it is therefore necessary to pay attention to the value of l m in terms of λ g , the guided wavelength at 2.45 GHz. For l m = 0 mm, as expected, the reflection coefficient of port 2 presents a good result at the ISM band. However, for l m = 17 mm, the antenna is not well matched anymore: this value of l m is approximately λ g /4, so the short-circuited microstrip transmission lines act as an open circuit at their input. On the other hand, for l m = 35 mm, the antenna is well matched again, since l m is approximately λ g /2 and the lines thus act as a short circuit from the point of view of the SMA connector.
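The λ g /4 and λ g /2 values invoked above can be estimated with the classical Hammerstad closed form for the effective permittivity of a microstrip line; the sketch below uses the line width w m = 4 mm and the substrate data given earlier, and since it ignores dispersion and the SMA transition it should only be read as an order-of-magnitude check.

```python
# Rough estimate of the guided wavelength on the lines connected to the vias at 2.45 GHz.
import math

eps_r, h, w, f = 2.97, 1.524e-3, 4e-3, 2.45e9        # substrate and line data from the text
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / w)   # Hammerstad, w/h >= 1
lam_g = 3e8 / (f * math.sqrt(eps_eff))               # guided wavelength (m)
print(1e3 * lam_g / 4, 1e3 * lam_g / 2)              # roughly 20 mm and 40 mm, comparable
                                                     # with the quoted 17 mm and 35 mm
```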
Conclusion
An inhomogeneous and anisotropic dual-port triple-band rectangular DRA is presented in this chapter to operate at the L5 and L1 bands of the GNSS and the 2.45-GHz ISM band. At the GNSS bands, the antenna is excited by a simple slot and presents circular polarization and broadside radiation patterns, as for the antenna detailed in Chapter 3. On the other hand, at the ISM band, the antenna is fed by a probe and presents an omnidirectional pattern with linear polarization.
At the GNSS band, the operation principle is the same developed in Chapter 3, where a square-based dielectric is used and two anisotropic regions are introduced to manipulate the TE x δ11 , TE y 1δ1 , TE x δ13 , and TE y 1δ3 modes to have circular polarization at two bands. However, instead of using a straight microstrip line to couple to the slot, a T-junction is used to open up some space at the center of the DRA so that it can be explored to excite the third band.
In the literature, the presence of the quasi-TM 001 mode, which radiates as a vertical electric dipole, had been observed in isotropic rectangular DRAs with a square base. The possibility of exciting this mode in an anisotropic and inhomogeneous DRA with a square base was investigated in this chapter, and it was verified that this mode still exists under these conditions. The quasi-TM 001 mode is excited at 2.45 GHz by a probe, and its resonance frequency can be controlled by adding an air cavity at the center of the DRA and adjusting its dimensions, similarly to the method used in Chapter 2.
The modes used at the GNSS and ISM bands can coexist in the same DRA without disturbing each other, since the quasi-TM 001 mode is orthogonal to the other ones. Also, thanks to the analysis of the electric field distribution of each of these modes, the dielectric properties of the DRA can be locally controlled to achieve the expected results. An orthogonal feeding system is proposed to properly excite the modes at the GNSS and ISM bands without degrading the performance of either.
Conclusion and Perspectives
The main objective of this Ph.D. thesis was to investigate and design multiband dielectric resonator antennas with special radiation and polarization properties. More precisely, in this work, 3D-printing technology, which can unlock the full potential of DRAs, has been explored to design these antennas. The conclusions of each chapter are summarized as follows:
Chapter 1 has been dedicated to the necessary background for the comprehension of this work, as well as the state-of-the-art on multiband dielectric resonator antennas, which is essential to establish the research axes of this thesis. Several challenges have been identified in designing multiband DRAs with given radiation properties: in the literature, most of these properties are achieved by using complex feeding networks and/or physical modifications of the shape of the DRA. None of these works explored the dielectric properties of the DRA to do so, which can be realized by using additive manufacturing. Therefore, this chapter has paved the way for all further developments found in this work.
Chapter 2 has studied the design of a dual-band rectangular DRA for the L5 and L1 bands of the GNSS. For this application, it is interesting to have broadside radiation patterns and, then, the fundamental TE x δ11 and TE y 1δ1 modes are used at the L5 and L1 bands, respectively. However, at the L1 band, it has been observed that the radiation pattern is not as expected. To find out the reasons behind it, an investigation using the Eigenmode solution of HFSS detected the presence of the TE x δ21 mode at the L1 band and, due to its electric field distribution, this mode could be excited as well. To overcome this issue, the permittivity of the dielectric has locally been controlled to increase the resonance frequency of this undesirable mode by adding an air cavity at the center of the antenna. This is because the electric fields of the fundamental modes are weak in this region while they are strong for the TE x δ21 mode. The radiation pattern results have demonstrated that this method is useful to manipulate the resonance frequency of a given radiating mode.
Chapter 3 has been dedicated to the design of a dual-band dielectric resonator antenna with circular polarization at the L5 and L1 bands of the GNSS. The fundamental TE x δ11 and TE y 1δ1 modes are excited at the lower band, as well as the higher-order TE x δ13 and TE y 1δ3 modes at the upper band. To fulfill the circular polarization conditions, the electric field distributions of these modes were carefully analyzed, and the electric properties of the DRA were then locally controlled by the introduction of two anisotropic regions. As all of these modes are fed by a single slot and a square-based DRA is considered, the achievement of circular polarization at two bands relies exclusively on its dielectric properties, which is a novelty in itself, since similar results found in the literature are achieved by controlling the shape of the DRA and/or using multiple ports or complex feeding networks.
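For reference, the boresight circular-polarization criterion that these mode pairs are tuned to satisfy can be written as

\[
|E_\theta| = |E_\phi|, \qquad \angle E_\theta - \angle E_\phi = \pm 90^\circ ,
\]

in which case the axial ratio equals 0 dB; the sign of the phase difference sets the handedness of the polarization.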
Chapter 4 has introduced the concept of artificial materials made up of periodic structures and 3D printing. Indeed, the electric properties of the antenna proposed in Chapter 3 were directly assigned on Ansys HFSS for the full-wave simulations, and a practical implementation was required. Periodic structures made up of subwavelength unit cells to emulate artificial anisotropic and isotropic media have thus been proposed and studied to investigate their design and their limitations regarding 3D printing. To retrieve the effective permittivity of these cells, the S-parameters retrieval method and the dispersion diagram have been used at first. However, the permittivity curves extracted with these two methods presented different behaviors as the frequency increased. The effective permittivity was then computed from a dielectric resonator using the Eigenmode solution of Ansys HFSS. This analysis has demonstrated that the dispersion diagram is more accurate for computing the effective permittivity of periodic structures used to design DRAs, especially for electrically-large unit cells.
Chapter 5 has been devoted to combining the antenna proposed in Chapter 3 and the unit cells presented in Chapter 4. The simulated results have demonstrated the efficiency of periodic structures to design an anisotropic and inhomogeneous DRA. The first attempt to 3D-print the DRA for the L5 and L1 bands failed; following the technicians' feedback to reduce the size of the antenna, the DRA was then re-designed to operate at the L1 and up-link TT&C bands. The antenna has finally been 3D-printed and the measured results are in good agreement with the simulated ones.
Chapter 6 has presented a triple-band dielectric resonator antenna operating at the L5, L1, and 2.45-GHz ISM bands. The idea was to keep the same performance at the GNSS bands, as seen in Chapter 3, and to add a third band with the excitation of the quasi-TM 001 mode. However, until then, this mode had only been observed in isotropic and homogeneous DRAs with a square base. An investigation has thus been carried out to verify the existence of this mode in an inhomogeneous and anisotropic DRA, which is proven through the identification of its electric field distribution. Moreover, as the quasi-TM 001 mode behaves as a vertical electric dipole, an air cavity placed at the center of the DRA has been used to control its resonance frequency. Compared to the model proposed in Chapter 3, some modifications have been made in the feeding network to leave some space at the center of the DRA for the probe. The results of S-parameters, radiation patterns, and axial ratio have demonstrated that the antenna operates almost independently at the GNSS and ISM bands.
To summarize, the developments from this Ph.D. thesis provide several contributions to the current literature:
I) The local control of the effective permittivity in electrically-small and resonant structures by using 3D-printed subwavelength isotropic and anisotropic unit cells. In this work, three different methods to calculate the effective permittivity of these cells have been presented, showing their pros and cons for the design of dielectric resonators.
II) A dual-band circularly-polarized dielectric resonator antenna has been proposed by using an inhomogeneous and anisotropic dielectric. The circular polarization in both bands results from the local control of the electric permittivity of the antenna, which is completely new in the literature, since this type of performance is usually achieved by modifying the shape of the DR and/or using complex feeding networks.
The same concept has been used as well to design a triple-band antenna.
III) A prototype of the dual-band circularly-polarized DRA has been 3D-printed in ceramic and experimental measurements have been carried out to validate the simulated results.
Perspective for Future Works
The future investigations from the developments provided in this Ph.D. thesis can be listed as follows:
Antenna performance: To show the potential of the local control of the dielectric properties of a resonator by using 3D-printing, we used a single slot to excite the antenna. However, the employment of a single feeding source limits the 3-dB axial ratio bandwidth, as seen in Section 3.4. Therefore, complex feeding networks could be combined with the local control of the permittivity of the dielectric to improve the performance of the antenna in terms of axial ratio.
Antenna size reduction: Depending on the application, the antennas developed in this thesis can be too bulky. Thus, some techniques could be investigated to overcome this issue. The use of metallic parasitic elements sounds promising, as does the use of non-conventional dielectric resonator shapes.
Antenna manufacturing: At the end of this Ph.D. thesis, a prototype of a dual-band circularly-polarized DRA was 3D-printed in ceramics and the experimental results agreed with the simulated ones. However, before successfully 3D-printing the DRA, several prototypes were broken during the manufacturing process. It thus appears that the 3D-printing process presents some mechanical weaknesses when it comes to inhomogeneous DRAs. In this context, a study of the unit-cell properties for different topologies could be envisaged to find a compromise between mechanical and electrical properties.
Artificial materials applied to DRAs: So far, the periodic subwavelength unit cells have been used to create anisotropic or isotropic and homogeneous or inhomogeneous media for DRA applications. However, it is possible to go further by exploring other features of periodic structures. In this context, dielectrics with engineered dispersion could be investigated to design multiband or wideband DRAs.
Antenna integration: In this work, the measurement of the dual-band CP DRA is performed in an anechoic chamber, i.e. in a controlled environment. Nevertheless, it would be interesting to integrate this antenna into a platform for a practical application, such as a UAV or a nanosatellite. This integration would allow the investigation of the performance of the antenna in a non-controlled environment. In case of compatibility issues, solutions may be proposed to improve the performance of the antenna under these conditions.

Regarding the retrieval of the unit-cell parameters, the unit cell is simulated on Ansys HFSS with master/slave boundary conditions [92]. Moreover, Floquet modes are used to represent the fields on the port boundary, which are plane waves with propagation directions defined by the frequency and shape of the periodic structure. The distance h between the port and the surface of the unit cell must be large enough to allow the evanescent modes to decay; a rule of thumb for this distance is a quarter of the wavelength at the lowest frequency of the band at issue.
When it comes to the parameter retrieval of anisotropic materials, it is important to know the polarization of the incident electromagnetic wave. Considering the following tensor of permittivity
\[
\boldsymbol{\varepsilon}_r =
\begin{pmatrix}
\varepsilon_x & 0 & 0 \\
0 & \varepsilon_y & 0 \\
0 & 0 & \varepsilon_z
\end{pmatrix},
\qquad \text{(A.8)}
\]
and to calculate ε x , the electric field must be x-directed and the same reasoning must be used to calculate ε y and ε z . However, with the configuration presented in Fig. A.1, where the incoming plane wave propagates in the ±z-direction, it is possible to calculate only ε x and ε y . To calculate ε z , it is necessary to rotate the air box so that the propagation direction is along the x-or y-axes. Also, to know which component of the permittivity tensor is being calculated, the proper Floquet modes must be chosen in the Floquet Port Setup on HFSS, and the electric field polarization can be seen in Floquet Port Display.
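As an illustration of the S-parameters retrieval mentioned above, the sketch below implements one common normal-incidence slab homogenization (in the spirit of the standard retrieval literature); the function name and the principal-branch choice are ours, and in practice the logarithm branch and the time-harmonic convention must be handled with care.

```python
import numpy as np

def retrieve_eps(s11, s21, freq, d, branch=0):
    """Sketch: retrieve refractive index n, wave impedance z and eps_eff = n/z of a
    slab/unit cell of thickness d (m) from its complex S-parameters at one frequency."""
    k0 = 2.0 * np.pi * freq / 3e8                    # free-space wavenumber (rad/m)
    z = np.sqrt(((1 + s11) ** 2 - s21 ** 2) /
                ((1 - s11) ** 2 - s21 ** 2))         # normalized wave impedance
    if np.real(z) < 0:                               # passivity: Re(z) >= 0
        z = -z
    x = s21 / (1 - s11 * (z - 1) / (z + 1))          # x = exp(j n k0 d)
    n = (np.angle(x) + 2 * np.pi * branch - 1j * np.log(np.abs(x))) / (k0 * d)
    return n, z, n / z                               # mu_eff would be n * z
```

For an electrically thin cell such as the one considered here, the principal branch (branch = 0) is normally the correct choice, and ε x or ε y is obtained by selecting the Floquet mode polarized along the corresponding axis.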
Abstract - Additive manufacturing, or simply three-dimensional (3D) printing, has been playing an important role in different fields due to its rapid manufacturing, energy savings, customization, and material waste reduction, to name a few advantages. When it comes to antenna applications, it turns out that most of the dielectric-based 3D-printed solutions proposed in the literature deal with non-resonant structures that are large in comparison to the wavelength. More recently, the possibility of using ceramics as printing material has opened new possibilities for the design of small and resonant structures thanks to their higher dielectric constant. In this context, the main goal of this work is to show the possibility of locally controlling the inhomogeneity and anisotropy of a dielectric. To demonstrate this, multi-band dielectric resonator antennas (DRAs) are proposed to achieve circular polarization and/or specific radiation patterns by manipulating the electric permittivity of the dielectric. At first, an original approach to design a dual-band and circularly polarized DRA is presented. The circular polarization is achieved only through the manipulation of the permittivity of the dielectric, instead of using complex feeding techniques and/or complex dielectric shapes as in most of the works found in the literature. The results demonstrate the efficiency of the proposed model, and a design guideline for this type of antenna is presented. Besides, an extension to a triple-band antenna is proposed, with the two lower bands presenting circular polarization and broadside radiation patterns, while the upper band has linear polarization and an omnidirectional pattern. Both antennas require an assembly of anisotropic and isotropic dielectrics with specific values of permittivity to achieve the desired results. However, materials with these characteristics are not necessarily available. To overcome this issue, periodically arranged sub-wavelength cells allow the creation of artificial and heterogeneous media by controlling their effective permittivity. For this purpose, only zirconia is used as printing material, in a single manufacturing process using 3D-printing technology.
Keywords: 3D-printing, anisotropy, circular polarization, dielectric resonator antennas, inhomogeneity, multi-band, periodic structures.
(a) Voronoi antenna 3D-printed plastic coated with metal[START_REF] Bahr | Novel uniquely 3D printed intricate Voronoi and fractal 3D antennas[END_REF] (b) 3D-printed full-metal horn antenna[START_REF] Agnihotri | Design of a 3D metal printed axial corrugated horn antenna covering full Ka-band[END_REF]
Figure 2 :
2 Figure 2: (a) Voronoi antenna 3D-printed in plastic and then coated with metal, and (b) horn antenna 3D-printed directly with metal.
2. 1 0
1 (a) Homogeneous lens (εr)[START_REF] Muhammad S Anwar | 3D printed dielectric lens for the gain enhancement of a broadband antenna[END_REF]
Figure 1 . 1 :
11 Figure 1.1: A typical schematic view of an atom in the (a) absence of applied electric field and (b) under an applied electric field [17].
Figure 1 . 2 :
12 Figure 1.2: Different dielectric shapes used for DRA applications [37].
Figure 1 . 3 :
13 Figure 1.3: Dielectric resonator antennas fed by a (a) microstrip-coupled slot, (b) probe, (c) microstrip transmission line, and (d) coplanar waveguide (CPW) [37].
Figure 1 . 4 :
14 Figure 1.4: Perspective view of an isolated rectangular dielectric resonator.
Figure 1 . 5 :
15 Figure 1.5: Electric field distribution of the (a) TE x δ11 and (b) TE x δ13 modes of the isolated rectangular DR at x = d 2 .
Fig. 1 . 6 .
16 The antenna operates at 3456 MHz and 4797 MHz with different radiation patterns at each band. More precisely, at the upper band, the HE 111 mode is excited and presents broadside radiation patterns. At the lower band, the inductive slot radiates by itself, which shows dipole-like patterns.
Figure 1 . 6 :
16 Figure 1.6: Dual-band dielectric resonator antenna fed by a coplanar waveguide [43].
Figure 1 . 7 :
17 Figure 1.7: Dual-band cylindrical DRA with omnidirectional radiation patterns [44].
Figure 1 . 8 :
18 Figure 1.8: Top view of a dual-band CP cylindrical DRA [49].
Figure 1 . 9 :
19 Figure 1.9: Perspective view of a dual-band CP DRA fed by a cross-slot [50].
Figure 1 . 10 :
110 Figure 1.10: Perspective view of a singly-fed dual-band CP DRA [51].
Figure 1 . 11 :
111 Figure 1.11: Perspective view of the dual-band circularly-polarized DRA fed by two ports [52].
Figure 1 . 12 :
112 Figure 1.12: Perspective view of the dual-band circularly-polarized DRA fed by a cross-slot [53].
Figure 1 .
1 Figure 1.13 shows a wide dual-band circularly-polarized stacked rectangular DRA [54]. The quasi-TE 111 and slot modes are employed at the lower band, and quasi-TE 113 and quasi-TE 115 modes are used at the upper band, and all of these modes are excited by an unbalanced cross-slot. Also, the DRA consists of two stacked DRAs with the same height, same permit-
Figure 1 . 13 :
113 Figure 1.13: Geometry of the wide dual-band CP stacked rectangular DRA [54].
2 . 1 23 2. 2 27 2. 3
21232273 Homogeneous Linearly-Polarized Dual-Band DRA . . . . . . . . . . . Inhomogeneous Linearly-Polarized Dual-Band DRA . . . . . . . . . . Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Figure 2 . 1 :Figure 2 . 2 :
2122 Figure 2.1: Electric field distribution of the (a) TE x δ11 and (b) TE y 1δ1 modes considering a homogeneous DR over an infinite ground plane.
Figure 2 . 3 :
23 Figure 2.3: Simulated reflection coefficient of the proposed dual-band linearly-polarized DRA.
Figure 2 .
2 Figure 2.4 shows the θ-and φ-components of the simulated gain patterns, in dBi, of the dual-band DRA in the xz-and yz-planes at the center frequencies of the L5 and L1 bands.At the lower band, as expected for the TE x δ11 mode, broadside patterns are observed for both planes. Note that this DRA is linearly polarized whereas GNSS receiving systems usually require circularly-polarized antennas. At the upper band, the radiation patterns are not as expected for the TE y 1δ1 . To be more specific, in xz-plane, the θ-component of the gain for the DRA presents a null for θ = 285 • , which creates a blind spot, and the direction of the main lobe is shifted to around 30 • as well. Therefore, it is necessary to investigate the reasons behind this unexpected result. Note also that the polarization is still linear but orthogonal to the one in the L5 band as expected.
Figure 2 . 4 :
24 Figure 2.4: Simulated gain patterns of the dual-band linearly-polarized DRA at the (a) L5 and (b) L1 bands.
Figure 2 . 5 :
25 Figure 2.5: Electric field distribution of the TE x δ21 mode considering a homogeneous DR over an infinite ground plane.
2 . 6 . 1 w 1 d 1 Figure 2 . 6 :Figure 2 . 7 : 1 Figure 2 . 8 :
26112627128 Figure 2.6: Dual-band DR with an air cavity over an infinite ground plane.
Figure 2 . 9 :
29 Figure 2.9: Simulated reflection coefficient of the dual-band linearly-polarized DRA with and without air cavity.
Figure 2 .
2 Figure 2.10 shows the simulated gain patterns of the DRA with and without air cavity at φ = 0 • and φ = 90• planes calculated at the center frequencies of the L5 and L1 bands. At the lower band, the gain patterns are the same for both models, as expected. At the upper band, one can note that there is no angular shift anymore with the presence of the air cavity, as expected for the TE y δ11 mode. Therefore, the gain patterns demonstrate that the use of an inhomogeneous DR is efficient to get rid of a given undesirable radiating mode.
f = 1575.42 MHz
Figure 2 . 10 :
210 Figure 2.10: Simulated gain patterns of the dual-band linearly-polarized DRA with and without the air cavity at (a) L5 and (b) L1 bands.
Figure 2 . 11 :
211 Figure 2.11: Simulated θ-component of the gain pattern at the L1 band in the φ = 0 • plane for different values of b 1 .
Figure 2 . 12 :
212 Figure 2.12: Simulated axial ratio of the dual-band linearly-polarized DRA with air cavity in the boresight direction (θ = 0 • and φ = 0 • ).
Figure 2 .
2 Figure 2.13: (a) Cross-and (b) L-shaped dual-band dielectric resonator antennas with air cavity.
Contents 3 . 1 46 3. 3 53 3. 4 59 3. 5
31463534595 Principle of Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 3.2 Antenna Design and Results . . . . . . . . . . . . . . . . . . . . . . . . . Parametric Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Trade-off Between Axial Ratio Bandwidth and DRA Volume . . . . . Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Figure 3.1: Perspective view of a square-based rectangular DRA over an infinite ground plane.
Figure 3.2: Curves of quality factor over frequency of two orthogonal modes in which the CP condition is (a) not fulfilled and (b) fulfilled.
Figure 3.3: (a) Uniaxial anisotropic DRA with circular polarization and (b) its axial ratio [16].
Figure 3.5: Curves of (a) f 1 and f 2 , (b) Q 1 and Q 2 and (c) f 01 and f 02 of the TE x δ11 , TE y 1δ1 , TE x δ13 and TE y 1δ3 modes for different values of ε y calculated for a homogeneous and uniaxial anisotropic dielectric resonator.
Figure 3.6: Electric field distribution of the fundamental ((a) TE x δ11 and (b) TE y 1δ1 ) and higher ((c) TE x δ13 and (d) TE y 1δ3 ) modes for a homogeneous and isotropic DR over an infinite PEC ground plane.
Figure 3.7: Perspective view of the proposed inhomogeneous DR over an infinite ground plane.
Figure 3.8: Curves of (a) resonance frequencies and (b) Q-factors calculated using the Eigenmode solution of Ansys HFSS considering the proposed inhomogeneous and anisotropic dielectric resonator with ε ri = ε x = ε y = 10, b = 75.0 mm, w = 41.5 mm, b a = 42.0 mm, l a = 29.0 mm, and w a = 10.5 mm.
Figure 3.9: Curves of (a) resonance frequencies, (b) Q-factors, and (c) f 01 and f 02 of the TE x δ11 , TE y 1δ1 , TE x δ13 , and TE y 1δ3 modes for different values of ε z considering the proposed inhomogeneous and anisotropic DR with ε ri = ε x = ε y = 10, b = 75.0 mm, w = 41.5 mm, b a = 42.0 mm, l a = 29.0 mm, and w a = 10.5 mm.
Figure 3.10: Curves of f 01 and f 02 of the TE x δ11 , TE y 1δ1 , TE x δ13 , and TE y 1δ3 modes for different values of (a) b a , (b) l a , and (c) w a , considering the proposed inhomogeneous and anisotropic DR with ε ri = ε x = ε y = 10, ε z = 25.1, b = 75.0 mm, w = 41.5 mm, b a = 42.0 mm, l a = 29.0 mm, and w a = 10.5 mm.
Figure 3.11: (a) Perspective and (b) top views of the proposed inhomogeneous and anisotropic DRA.
Figure 3.12: Simulated (a) reflection coefficient and (b) axial ratio of the initial model of the proposed DRA.
Figure 3.13: Simulated (a) reflection coefficient and (b) axial ratio of the optimized solid model of the proposed DRA.
Table 3.1: Parameters of the proposed DRA (b, w a , b a , l a , w g , ε r , ε x , ε y , ε z ).
Figure 3.14: Simulated radiation patterns at the (a) L5 and (b) L1 bands of the solid model of the proposed DRA.
Figure 3.15 shows the realized gain, in dBi, of the proposed dual-band circularly-polarized DRA at the boresight direction, i.e. for φ = 0° and θ = 0°. With reference to the figure, the simulated realized gain at the center frequency of the L5 band is 6.23 dBi. At the center of the L1 band, the realized gain is 6.6 dBi at the boresight direction.
Figure 3.15: Simulated realized gain of the proposed dual-band circularly-polarized DRA at the boresight direction (φ = 0° and θ = 0°).
Figure 3.16: Simulated (a) magnitude and (b) phase of the φ- and θ-components of the electric field of the proposed dual-band circularly-polarized DRA at the boresight direction (φ = 0° and θ = 0°).
Figure 3.17 presents the simulated axial ratio as a function of θ at φ = 0° and φ = 90°, calculated at 1.17 GHz and 1.57 GHz, which are the frequencies of minimum axial ratio at the boresight direction (θ = 0° and φ = 0°). At the lower band, the axial ratio is below 3 dB for -44.84° ≤ θ ≤ 46.96° and -44.26° ≤ θ ≤ 40.40° at φ = 0° and φ = 90°, respectively. By contrast, at the upper band, the axial ratio is below 3 dB for -20.67° ≤ θ ≤ 22.10° and -33.19° ≤ θ ≤ 29.46° at φ = 0° and φ = 90°, respectively.
Figure 3.17: Simulated axial ratio as a function of θ at the (a) L5 and (b) L1 bands of the proposed dual-band circularly-polarized DRA.
Figure 3.18: Simulated radiation efficiency of the proposed DRA.
The reflection coefficient |S 11 | and axial ratio are calculated as a function of b a , l a , w a , w g , and ε z , where the parameters of the DRA are w = 45 mm, b = 67 mm, w a = 10.5 mm, l a = 31 mm, b a = 47 mm, w g = 135 mm, h s = 1.524 mm, w s = 4.16 mm, l s = 57.4 mm, w l = 3.86 mm, l l = 163.9 mm, ε ri = ε x = ε y = 10, and ε z = 22.2.
Figure 3.19: Simulated (a) reflection coefficient and (b) axial ratio for different values of the height b a of the uniaxial anisotropic dielectric region.
Figure 3.20: Simulated (a) reflection coefficient and (b) axial ratio for different values of the length l a of the uniaxial anisotropic dielectric region.
Figure 3.21: Simulated (a) reflection coefficient and (b) axial ratio for different values of the width w a of the uniaxial anisotropic dielectric region.
Figure 3.22: Simulated (a) reflection coefficient and (b) axial ratio for different values of ε z of the uniaxial anisotropic dielectric region.
Figure 3.23: Simulated (a) reflection coefficient and (b) axial ratio for different values of w g .
Figure 3.24: (a) Q-factor and (b) theoretical 3-dB axial ratio bandwidth as a function of ε r .
Figure 3.25: Volume of the DRA as a function of ε r .
Contents of Chapter 4:
4.1 Additive Manufacturing
4.2 Additive Manufacturing for Dielectric Resonator Antennas
4.3 Design of the 3D-Printed Isotropic Unit Cell (4.3.1 Parameter Retrieval Methods; 4.3.2 Manufacturing Limitations)
4.4 Design of the 3D-Printed Anisotropic Unit Cell
4.5 Conclusion
Figure 4.1: Scheme of the SL process [57].
Figure 4.2: Scheme of the SLS process [57].
Figure 4.3: Scheme of the FDM process [59].
Figure 4.4: 3D-printed super-shaped dielectric resonator antennas [79].
Figure 4.5: Perspective view of the (a) 3D-printed elliptical DRA and (b) its unit cell [14].
Figure 4.6: Perspective view of the prototypes of the 3D-printed hemispherical DRA loaded with metallic caps [80].
Figure 4.7: Perspective view of the (a) 3D-printed multi-ring DRA and (b) its unit cell [81].
Figure 4.8: (a) Schematic and (b) perspective views of the 3D-printed nonhomogeneous dielectric resonator antenna [15, 82].
Figure 4.9: Perspective view of the (a) 3D-printed anisotropic rectangular DRA and (b) its unit cell [16].
Figure 4.10: (a) Simple, (b) body-centered, and (c) face-centered cubic lattices [85].
Figure 4.11: Proposed isotropic unit cell.
Figure 4.12: Calculated effective dielectric constant of the cells computed on Ansys HFSS with respect to the filling ratio δ i of the unit cell at 1.575 GHz using the S-parameters retrieval method.
Figure 4.13: (a) Calculated effective permittivity and (b) absolute difference of the proposed unit cell calculated using the S-parameters retrieval method for a = 4 mm, w si = 0.6 mm, and l si = 2.99 mm.
Figure 4.14: Illustration of (a) chromatic, (b) polarization, and (c) spatial dispersions [86].
Figure 4.15: Dispersion diagram of the proposed isotropic unit cell and its irreducible Brillouin zone considering a unit cell with a = 4 mm, w si = 0.6 mm, and l si = 2.99 mm.
Figure 4.16: (a) Effective permittivity and (b) absolute permittivity difference of the proposed unit cell calculated using the dispersion diagram and the S-parameters method with a = 4 mm, w si = 0.6 mm, and l si = 2.99 mm.
Figure 4.18: (a) Solid and (b) 3D-printed models of a cubic dielectric resonator mounted over an infinite ground plane.
Table 4.1: Dimensions of the unit cell for different lattices a using the S-parameters retrieval method.
Figure 4.19 shows the resonance frequency of the TE x δ11 mode as a function of w/a for the 3D-printed model, considering the values presented in Table 4.1. One can note that, as w/a increases, i.e. as the cells become electrically and physically smaller, the resonance frequency gets closer to 1.57 GHz. In other words, for electrically small cells, the effective permittivity of the cell as calculated by the S-parameters retrieval method is accurate.
Figure 4.19: Resonance frequency of the TE x δ11 mode as a function of w/a calculated by the Eigenmode analysis of the 3D-printed DR.
Figure 4.20: Effective permittivity ε dr as a function of w/a calculated from the resonance frequency of the TE x δ11 mode.
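The conversion behind Figure 4.20 can be illustrated with a short script, assuming the first-order scaling f_res ∝ 1/sqrt(ε_r) of a dielectric resonator of fixed dimensions (the function and the sample values below are illustrative only and are not taken from the simulations of this chapter):

```python
def effective_permittivity(f_printed_hz: float, f_solid_hz: float, eps_solid: float) -> float:
    """Effective permittivity of a 3D-printed DR estimated from its eigenfrequency,
    using the approximation f_res ~ 1/sqrt(eps_r) for fixed resonator dimensions."""
    return eps_solid * (f_solid_hz / f_printed_hz) ** 2

# Example: solid reference with eps_r = 10 resonating at 1.57 GHz, and a printed
# model whose TE(x)delta11 eigenfrequency comes out 3% lower.
f_solid = 1.57e9
f_printed = 0.97 * f_solid
print(round(effective_permittivity(f_printed, f_solid, 10.0), 2))  # about 10.63
```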
a                          ε spar   ε dd    ε eigen
5.31 mm = w/7 = λ/11.33    10       10.07   10.04
6.23 mm = w/6 = λ/9.27     10       10.09   10.05
7.44 mm = w/5 = λ/7.44     10       10.23   10.20
9.35 mm = w/4 = λ/6.40     10       10.50   10.29
12.4 mm = w/3 = λ/4.99     10       11.12   10.84
18.7 mm = w/2 = λ/3.22     10       15.24   13.34
Figure 4.21: Absolute difference of the effective permittivity from the dispersion diagram method and the S-parameters retrieval method, using the Eigenmode analysis with the DR as a reference.
Figure 4.22: (a) Solid and (b) 3D-printed models of a cubic dielectric resonator antenna.
Figure 4.23 shows the magnitude of the reflection coefficient |S 11 |, in dB, for the solid and 3D-printed models considering the dimensions of the unit cell presented in Table 4.1. For this simulation, the overall dimensions of the DRA are the same in all cases, with the exception of the height of the coaxial probe, which is adjusted for each model to ensure impedance matching to 50 Ω. One can note that, as the cells become larger, the resonance frequency of the antenna decreases, which means that its effective permittivity is increasing.
Figure 4.23: Simulated reflection coefficient for the solid and 3D-printed models considering the dimensions of the unit cell.
a                          ε eigen   ε fw    Difference
6.23 mm = w/6 = λ/9.27     10.05     10.10   0.50%
7.44 mm = w/5 = λ/7.44     10.20     10.40   1.96%
9.35 mm = w/4 = λ/6.40     10.29     10.40   1.07%
12.4 mm = w/3 = λ/4.99     10.84     11.00   1.48%
18.7 mm = w/2 = λ/3.22     13.34     13.40   0.45%
Figure 4.24: (a) Effective dielectric constant ε eff , (b) maximum frequency f max for a < λ/10, and (c) maximum frequency f max for a < λ/6 as a function of w si and l si for a = 4.0 mm, with the 3D-printer limitation indicated.
Figure 4.25: (a) Effective dielectric constant ε eff , (b) maximum frequency f max for a < λ/10, and (c) maximum frequency f max for a < λ/6 as a function of w si and l si for a = 2.5 mm.
Figure 4.26: Proposed anisotropic unit cell.
Figure 4.27: Effective dielectric constant ε eff as a function of (a) w sa and (b) l sa , as well as the birefringence ∆ε for different values of (c) w sa and (d) l sa , computed with the S-parameters retrieval method at 1.575 GHz.
Figure 4.28: Dispersion diagram of the proposed uniaxial anisotropic unit cell and its irreducible Brillouin zone.
Contents of Chapter 5:
5.1 3D-Printed Model
5.2 Manufacturing Process
5.3 Measurements and Discussion
5.4 Conclusion
For the isotropic and anisotropic regions, the unit cells presented in Sections 4.3 and 4.4 are used, respectively. These elementary cells are recalled in Fig. 5.1. It is important to point out that both cells are made out of zirconia, which has ε r = 32.5 and tan δ = 1.9 · 10^-4 at 10 GHz.
Figure 5.1: (a) Isotropic and (b) anisotropic subwavelength unit cells.
Figure 5.2: Effective dielectric constant over frequency of the isotropic unit cell.
Figure 5.3: Effective dielectric constant over frequency of the anisotropic unit cell.
Figure 5.4: Perspective view of the (a) solid and (b) 3D-printed models of the proposed dual-band CP DRA.
Figure 5.5: Simulated reflection coefficient of the solid and 3D-printed models of the proposed dual-band CP DRA.
Figure 5.6 presents the axial ratio (AR) of the proposed antenna at the boresight direction (θ = 0° and φ = 0°). The simulated 3-dB AR bandwidths of the solid model are 5.00% (1140.3 MHz - 1198.8 MHz) and 1.46% (1562.8 MHz - 1585.7 MHz), covering both the L5 and L1 bands. For the 3D-printed model, the 3-dB AR bandwidths are 4.84% (1121.9 MHz - 1177.5 MHz) and 1.42% (1552.9 MHz - 1575.1 MHz). One can note a downward frequency shift for the 3D-printed model in comparison to the solid one; to fix it, the 3D-printed model would have to be re-optimized. However, the simulation of this model requires a high computational effort, which does not allow such an optimization.
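The percentage bandwidths quoted above are fractional bandwidths referred to the band centre; a small helper makes the computation explicit (illustrative code, not part of the original design flow):

```python
def fractional_bandwidth(f_low_mhz: float, f_high_mhz: float) -> float:
    """Fractional bandwidth in percent, referred to the centre of the band."""
    f_center = 0.5 * (f_low_mhz + f_high_mhz)
    return 100.0 * (f_high_mhz - f_low_mhz) / f_center

print(round(fractional_bandwidth(1140.3, 1198.8), 2))  # about 5.00 (solid model, lower band)
print(round(fractional_bandwidth(1562.8, 1585.7), 2))  # about 1.45 (upper band; 1.46% up to rounding)
```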
Figure 5.6: Simulated axial ratio of the solid and 3D-printed models of the proposed dual-band CP DRA.
Figure 5.7: Simulated gain patterns at the (a) L5 and (b) L1 bands for the solid and 3D-printed models of the proposed dual-band CP DRA.
Figure 5.8: Broken pieces of the proposed 3D-printed DRA.
Figure 5.9: Resonance frequencies of the TE x,y 111 and TE x,y 113 modes as a function of the height b of the DRA.
Figure 5.10: Effective permittivity of the (a) isotropic and (b) anisotropic subwavelength unit cells.
Figure 5.11: Pictures of the 3D-printed inhomogeneous and anisotropic DRA.
Figure 5.12: Perspective view of the 3D-printed antenna mounted over a ground plane.
Figure 5.14: Measured and simulated axial ratio of the proposed inhomogeneous and anisotropic DRA at the boresight direction (φ = 0° and θ = 0°).
Figure 5.16 presents the simulated and measured axial ratio as a function of θ at φ = 0° and φ = 90°, at the lower and upper bands. At the lower band, the simulated axial ratio is below 3 dB for -49.5° ≤ θ ≤ 47.3° and -39.9° ≤ θ ≤ 39.8° at φ = 0° and φ = 90°, respectively. By contrast, at the upper band, the simulated axial ratio is below 3 dB for -21.5° ≤ θ ≤ 21.7° and -35.9° ≤ θ ≤ 29.4° at φ = 0° and φ = 90°, respectively. For the measurements, circular polarization is obtained at the lower band for -56.6° ≤ θ ≤ 48.4° at φ = 0° and for -30.6° ≤ θ ≤ 34.6° at φ = 90°. At the upper band, the measured 3-dB axial ratio covers -26.0° ≤ θ ≤ 31.5° and -35.6° ≤ θ ≤ 28.4° at φ = 0° and φ = 90°, respectively.
Figure 5.15: Simulated and measured radiation patterns at the (a) L1 and (b) TT&C bands of the proposed dual-band circularly-polarized DRA.
Figure 5.17: Simulated and measured realized gain of the proposed dual-band circularly-polarized DRA at the boresight direction (φ = 0° and θ = 0°).
Figure 6.1: Perspective view of the proposed dual-band DRA over an infinite ground plane.
Figure 6.3: (a) Resonance frequency and (b) Q-factor of the TM 001 mode of the proposed inhomogeneous and anisotropic DR as a function of ε z .
Figure 6.4: Perspective view of the proposed triple-band DR with an air cavity over an infinite ground plane.
Figure 6.6: Perspective split view of the proposed inhomogeneous and anisotropic triple-band DRA.
Considering the feeding network and a finite ground plane, the triple-band DRA is optimized using Ansys HFSS. Its dimensions are w = 45.1 mm, b = 70.4 mm, w a = 11.4 mm, b a = 48.5 mm, l a = 27.6 mm, w air = 8 mm, b air = 36 mm, and w g = 100 mm. The permittivity of the isotropic region ε ri is equal to 10, and the permittivity of the anisotropic region is represented by a uniaxial tensor, diagonal in the cartesian coordinate system, with ε x = ε y = 10 and ε z = 22.9.
Figure 6.7: Simulated S-parameters of the inhomogeneous and anisotropic triple-band DRA at the (a) GNSS and (b) ISM bands.
Figure 6.8 shows the simulated axial ratio (AR) at the boresight direction, i.e. at θ = 0° and φ = 0°, of the proposed triple-band DRA for port 1, while port 2 is matched to a 50-Ω load. The 3-dB axial ratio bandwidths are 5.9% (1.143 GHz - 1.212 GHz) and 1.2% (1.564 GHz - 1.584 GHz), covering the L5 and L1 bands. Additionally, this result is similar to that of the dual-band DRA proposed in Chapter 3, as expected.
Figure 6.8: Simulated axial ratio of the inhomogeneous and anisotropic triple-band DRA at the GNSS band.
Figure 6.9: Simulated radiation patterns at the (a) L5 (f = 1.17 GHz) and (b) L1 (f = 1.57 GHz) bands of the triple-band DRA.
Figure 6.11: Simulated realized gain of the triple-band DRA at the (a) GNSS, for θ = 0° and φ = 0°, and (b) ISM, for θ = 60° and φ = 0°, bands.
Figure 6.14: Reflection coefficient of port 1 considering the initial feeding scheme of Fig. 6.13.
Parameters used for the study of Fig. 6.16: w = 45.1 mm, b = 70.4 mm, w a = 11.4 mm, b a = 48.5 mm, l a = 27.6 mm, w air = 8 mm, b air = 36 mm, w g = 100 mm, h pin = 36.5 mm, l s = 51 mm, w s = 3.22 mm, w l = 3.86 mm, l l = 40 mm, w T = 1 mm, l T1 = 92 mm, w m = 4 mm, ε ri = ε x = ε y = 10, and ε z = 22.9.
Figure 6.16: Reflection coefficient of port 1 at the GNSS bands for different values of l m .
Figure 6.17: Reflection coefficient of port 2 at the ISM band for different values of l m .
Figure A.1: Unit cell, boundary conditions and excitation ports on Ansys HFSS.
Contents of Chapter 1:
1.1 Reminders on Dielectrics
1.2 A Brief History of the DRA
1.3 Definition and Main Characteristics of the DRA
1.4 The Rectangular DRA (1.4.1 Field Distribution; 1.4.2 Resonance Frequencies; 1.4.3 Quality Factor)
1.5 State-of-the-Art of Multiband DRAs
1.6 Conclusion
Table 2.1: Resonance frequencies and Q-factor of the TE y 1δ1 , TE x δ11 , and TE x δ21 modes for the homogeneous DR.
Mode                   TE x δ11    TE y 1δ1    TE x δ21
Resonance frequency    1.175 GHz   1.574 GHz   1.575 GHz
Q-factor               20.37       14.66       12.42
Table 2.2: Parameters of the dual-band linearly-polarized DRA with and without air cavity.
Table 5.1: Parameters of the proposed DRAs designed for the L1 and up-link TT&C bands and for the L5 and L1 bands.
Table 5.2: Simulated and measured results of the proposed dual-band circularly polarized DRA.
                                   Simulation               Measurement
                                   Lower band   Upper band  Lower band   Upper band
Impedance bandwidth                22.42%       8.27%       20.95%       8.57%
3-dB AR bandwidth                  5.06%        1.45%       4.31%        2.38%
3-dB AR beamwidth for φ = 0°       96.8°        43.2°       105°         57.5°
3-dB AR beamwidth for φ = 90°      79.7°        65.3°       65.2°        64°
Maximum realized gain              7.02 dBi     8.27 dBi    6.45 dBi     7.30 dBi
Additive manufacturing, or simply three-dimensional (3D) printing, plays an important role in different fields owing, for instance, to its manufacturing speed, energy savings, customization, and the reduction of material waste. Regarding antenna applications, it turns out that most of the dielectric-based 3D-printed solutions proposed in the literature deal with non-resonant structures that are large with respect to the wavelength. More recently, the possibility of using ceramics as a printing material has opened new possibilities for the design of small, resonant structures thanks to their higher dielectric constant. In this context, the main objective of this work is to show the possibility of locally controlling the inhomogeneity and anisotropy of a dielectric. To demonstrate it, multiband dielectric resonator antennas (DRAs) are proposed to obtain circular polarization and/or specific radiation patterns by manipulating the electric permittivity of the dielectric. First, an original approach to design a dual-band circularly-polarized DRA is presented. The circular polarization is obtained only through the manipulation of the permittivity of the dielectric, instead of using complex feeding techniques and/or complex dielectric shapes as in most works found in the literature. The results demonstrate the effectiveness of the proposed model, and a design guideline for this type of antenna is presented. In addition, an extension to a triple-band antenna is proposed, in which the two lower bands present circular polarization and a broad radiation pattern, while the upper band has linear polarization and an omnidirectional pattern. Both antennas require an assembly of anisotropic and isotropic dielectrics with specific permittivity values to obtain the desired results. However, materials with these characteristics are not necessarily available. To overcome this problem, periodically arranged subwavelength patterns make it possible to create artificial, heterogeneous media by controlling their effective permittivity. For this purpose, only zirconia is used as the printing material, in a single manufacturing process using 3D-printing technology.
The resonance frequency of these modes is highly sensitive to b, which shows the possibility of designing this antenna at different frequency bands. Regarding the isotropic unit cell, l si and w si can be adjusted to reach the desired permittivity by controlling the filling ratio δ i , which is discussed in Section 4.3.
In the previous chapters, a dual-band circularly-polarized dielectric resonator antenna was presented, which has broadside radiation patterns at both bands. In this chapter, the introduction of a third band with an omnidirectional radiation pattern to the previously proposed DRA is presented. In Section 6.1, the principle of operation of this antenna is described: the presence of the quasi-TM 001 mode is discussed, as well as how its resonance frequency can be controlled without affecting the modes employed at the GNSS bands. In Section 6.2, the full antenna with its feeding system and a finite ground plane is considered, and results such as reflection coefficient, axial ratio, and radiation patterns are presented and discussed. In Section 6.3, the feeding method used to excite the TE x δ11 , TE y 1δ1 , TE x δ13 , TE y 1δ3 , and quasi-TM 001 modes is studied in more detail.
Principle of Operation
A great number of communication systems use the ISM band (2.4 GHz - 2.5 GHz) for short-range links. Thus, in addition to dual-band, circularly-polarized operation at the GNSS bands, it would be interesting to add to the antenna proposed in Chapter 3 a third band with an omnidirectional radiation pattern.
From [START_REF] Mei Pan | Study of resonant modes in rectangular dielectric resonator antenna based on radar cross section[END_REF], it is known that the quasi-TM 001 mode can be excited in isotropic and homogeneous rectangular DRAs with a square base. This mode results from the combination of the TE x δ21 and TE y 2δ1 modes when they are excited at the same frequency. The quasi-TM 001 mode radiates as a vertical electric dipole, and it could be excited by a probe placed at the center of the DRA; in the present design, however, the inhomogeneous dielectric lacks this symmetry.
Finally, the results presented show that it is possible to add a third band without disturbing the performance of the DRA at the GNSS bands. This is because the air gap and probe are introduced in a region where the electric field distributions of the modes used at the L5 and L1 bands are weak.
Appendix A
Parameter Retrieval of Periodic Structures on HFSS
Mathematical Formulation
Periodic structures are usually analyzed in terms of their effective constitutive parameters, which can be extracted using scattering-parameters (S-parameters) retrieval methods [88][89]. These methods are easily employed in electromagnetic simulation software such as Ansys HFSS and CST, for instance.
In this work, the retrieval method proposed in [START_REF] Chen | Robust method to retrieve the constitutive effective parameters of metamaterials[END_REF] is considered, where the reflection coefficient S 11 and the transmission coefficient S 21 are used to calculate the effective permittivity ε eff and permeability µ eff . In this case, the periodic structure is assumed to be an effective homogeneous slab. Considering a normally-incident plane wave on a homogeneous slab, S 11 and S 21 can be written as
S 11 = R (1 - e^{i 2 n k 0 d}) / (1 - R² e^{i 2 n k 0 d}) and S 21 = (1 - R²) e^{i n k 0 d} / (1 - R² e^{i 2 n k 0 d}),
where R = (z - 1)/(z + 1), d is the thickness of the slab, k 0 is the free-space wavenumber, z is the impedance, and n is the refractive index.
The impedance z and refractive index n can be calculated from the S-parameters using the following equations:
z = ± sqrt( [(1 + S 11 )² - S 21 ²] / [(1 - S 11 )² - S 21 ²] ),
e^{i n k 0 d} = S 21 / [1 - S 11 (z - 1)/(z + 1)],
n = (1/(k 0 d)) { [Im(ln(e^{i n k 0 d})) + 2mπ] - i Re(ln(e^{i n k 0 d})) },
where Re(•) and Im(•) denote the real and imaginary parts of their arguments, respectively. In addition, the index m indicates the presence of multiple solutions for n. However, if the material at issue has a passive nature, m can be assumed to be zero and, then, only one solution is found. Also, the sign of z must be chosen so that Re(z) ≥ 0 if the periodic structure at issue is passive [START_REF] Smith | Electromagnetic parameter retrieval from inhomogeneous metamaterials[END_REF][START_REF] Arslanagić | A review of the scattering-parameter extraction method with clarification of ambiguity issues in relation to metamaterial homogenization[END_REF][START_REF] Ahmad | Extraction of material parameters for metamaterials using a full-wave simulator [education column[END_REF].
Thus, the effective permittivity ε eff and permeability µ eff are expressed as
ε eff = n / z and µ eff = n z.
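As an illustration, this retrieval can be sketched in a few lines of Python (a minimal sketch assuming the branch choices discussed above, namely Re(z) ≥ 0 and m = 0; the function and variable names are ours and are not part of any HFSS interface):

```python
import numpy as np

def retrieve_eps_mu(s11: complex, s21: complex, d: float, f: float, m: int = 0):
    """Effective permittivity and permeability of the equivalent homogeneous
    slab from its S-parameters; d is the slab thickness in metres, f in Hz."""
    k0 = 2 * np.pi * f / 299792458.0           # free-space wavenumber
    # Impedance, with the sign chosen so that Re(z) >= 0 (passive structure)
    z = np.sqrt(((1 + s11) ** 2 - s21 ** 2) / ((1 - s11) ** 2 - s21 ** 2))
    if z.real < 0:
        z = -z
    # exp(i n k0 d) and the refractive index for the branch index m
    e_inkd = s21 / (1 - s11 * (z - 1) / (z + 1))
    log_e = np.log(e_inkd)
    n = ((log_e.imag + 2 * np.pi * m) - 1j * log_e.real) / (k0 * d)
    return n / z, n * z                        # eps_eff, mu_eff
```

For an electrically thin, low-loss cell the retrieved values are expected to be almost real, which is the regime in which the homogenization used in this work is meaningful.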
Extraction of S-Parameters on Ansys HFSS
Previously in this appendix, the mathematical formulation was presented considering a plane wave with normal incidence on an infinitely large periodic structure, which is achieved by the infinite and periodic repetition of its unit cell. Also, most commercial electromagnetic simulation software provide periodic boundary conditions, which make the simulation of this kind of structure easier and faster.
In this work, Ansys HFSS is used to design the unit cells and extract their effective parameters. The employed boundary conditions and excitation ports are thus explained in this appendix. Moreover, an incident electromagnetic wave propagating in ±z-axis is considered, and 2-D periodic boundary conditions along the xy-plane are assigned, which are realized by using the configuration shown in Fig. A.1. In addition, one can note that the unit cell is surrounded by an airbox, where the boundary conditions and excitation port will be set. The shape and dimension of the airbox along the xy-plane must be the same as the unit cell and its height h will be discussed further.
The periodic boundary conditions can be assigned on Ansys HFSS by using perfect electric/magnetic or master/slave conditions. These conditions are supposed to give the same result for cubic-shaped unit cells; however, when it comes to complex shapes, only the master/slave condition can be used effectively. In this work, the master/slave boundary condition is employed along the sidewalls of the unit cell, so that 2D periodic structures can be realized from a single unit cell.
Besides the boundary conditions, it is necessary to choose the proper excitation port and, to do this on Ansys HFSS, Floquet ports are assigned on the top and bottom faces of the air box. Floquet ports are used exclusively with planar-periodic structures.
Marion Boucrot
Morphisms of pre-Calabi-Yau categories and morphisms of cyclic A ∞ -categories
Mathematics Subject Classification 2020: 16E45, 18G70, 14A22. Keywords: A ∞ -categories, pre-Calabi-Yau categories.
In this article we prove that there exists a relation between d-pre-Calabi-Yau morphisms introduced by M. Kontsevich, A. Takeda and Y. Vlassopoulos and cyclic A∞-morphisms, extending a result proved by D. Fernández and E. Herscovich. This leads to a functor between the category of d-pre-Calabi-Yau structures and the partial category of A∞-categories of the form A ⊕ A*[d-1], with A a graded quiver, whose morphisms are the data of an A∞-structure on A ⊕ B*[d-1] together with a diagram of A∞-morphisms relating A ⊕ A*[d-1], B ⊕ B*[d-1] and A ⊕ B*[d-1].
Introduction
Pre-Calabi-Yau algebras were introduced by M. Kontsevich and Y. Vlassopoulos in the last decade. These structures have also appeared under different names, such as V ∞ -algebras in [START_REF] Tradler | Algebraic string operations[END_REF], A ∞algebras with boundary in [START_REF] Seidel | Fukaya A∞-structures associated to Lefschetz fibrations. II, Algebra, geometry[END_REF], or weak Calabi-Yau structures in [START_REF] Kontsevich | Conference on Homological Mirror Symmetry[END_REF] for example. These references show that pre-Calabi-Yau structures play an important role in homological algebra, symplectic geometry, string topology, noncommutative geometry and even in Topological Quantum Field Theory.
In the finite dimensional case, pre-Calabi-Yau algebras are strongly related to A ∞ -algebras. Actually, for d ∈ Z, a d-pre-Calabi-Yau structure on a finite dimensional vector space A is equivalent to a cyclic A ∞ -structure on A ⊕ A * [d -1] that restricts to A. The definition of pre-Calabi-Yau morphisms first appeared in [START_REF] Kontsevich | Pre-Calabi-Yau algebras and topological quantum field theories[END_REF] and then in [START_REF] Leray | Pre-Calabi-Yau algebras and homotopy double poisson gebras[END_REF], in the properadic setting. A natural question is then about the link between pre-Calabi-Yau morphisms and A ∞ -morphisms of the corresponding boundary construction. D. Fernández and E. Herscovich studied this link in [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] at the level of double Poisson dg algebras and a restricted class of pre-Calabi-Yau algebras, when the multiplications m n vanish for n ≥ 4. In this paper, we study the relation between A ∞ -morphisms and pre-Calabi-Yau morphisms in a larger generality. The main result of this paper is the existence of a functor from the category of d-pre-Calabi-Yau structures pCY d to the partial category A ∞d whose objects are A ∞categories of the form A ⊕ A * [d -1] and whose morphisms are the data of an A ∞ -structure on A ⊕ B * [d -1] together with a diagram of the form
A[1] ⊕ B * [d] A[1] ⊕ A * [d] B[1] ⊕ B * [d] (1.1)
where each of the arrows are A ∞ -morphisms.
We also show that this functor restricts to a functor between a subcategory of pCY d and the partial subcategory of A ∞d whose objects are those of A ∞d and whose morphisms are the data of an almost cyclic A ∞ -structure on A ⊕ B * [d -1] together with a diagram of the form (1.1) where the arrows are A ∞ -morphisms.
Let us briefly present the contents of the article. In Section 2, we fix the notations and conventions we use in this paper and in Section 3, we recall the notions related to A ∞ -categories. Section 4 is devoted to present the notion of discs and diagrams as well as the notion of pre-Calabi-Yau structures based on the necklace bracket introduced in [START_REF] Kontsevich | Pre-Calabi-Yau algebras and topological quantum field theories[END_REF], which is given as the commutator of a necklace product, and their link with A ∞ -structures in the case of a Hom-finite graded quiver. We incidentally show that the necklace product for a graded quiver A is in fact equivalent to the usual Gerstenhaber circle product on A ⊕ A * [d -1] (see Proposition 4.23), which does not seem to have been observed in the literature so far. In Section 5, we recall the definitions of pre-Calabi-Yau morphisms and of the category pCY d given in [START_REF] Kontsevich | Pre-Calabi-Yau algebras and topological quantum field theories[END_REF].
Section 6 is the core of the article. In Subsection 6.1, we prove that given d-pre-Calabi-Yau categories A and B and a strict d-pre-Calabi-Yau morphism A → B, we can produce a cyclic A ∞ -structure on A ⊕ B * [d -1] and a diagram of the form (1.1) whose arrows are cyclic strict A ∞morphisms. We summarize these results in Corollary 6.14. In Subsection 6.2, we prove that given d-pre-Calabi-Yau categories A and B and any d-pre-Calabi-Yau morphism A → B, we can produce an A ∞ -structure on A ⊕ B * [d -1] and a diagram of the form (1.1) where the arrows are A ∞morphisms. Moreover, with an additional assumption on the pre-Calabi-Yau morphism A → B, the A ∞ -structure on A ⊕ B * [d -1] is almost cyclic. We summarize this in Corollary 6.20 in terms of functors.
Acknowledgements. This paper is part of a PhD thesis directed by Estanislao Herscovich and Hossein Abbaspour. The author thanks them for the useful discussions and advices. She also thanks David Fernández for comments on the paper and Johan Leray for useful discussions about pre-Calabi-Yau morphisms. This work is supported by the French National Research Agency in the framework of the "France 2030" program (ANR-15-IDEX-0002) and by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01).
Notations and conventions
In what follows k will be a field of characteristic 0 and to simplify we will denote ⊗ for ⊗ k . We will denote by N = {0, 1, 2, . . . } the set of natural numbers and we define N * = N \ {0}. For i, j ∈ N, we define the interval of integers i, j = {n ∈ N|i ≤ n ≤ j}.
Recall that if we have a (cohomologically) graded vector space V = ⊕ i∈Z V i , we define for n ∈ Z the graded vector space V [n] given by V [n] i = V n+i for i ∈ Z and the map s V,n : V → V [n] whose underlying set theoretic map is the identity. Moreover, if f : V → W is a morphism of graded vector spaces, we define the map f [n] : V [n] → W [n] sending an element s V,n (v) to s W,n (f (v)) for all v ∈ V . We will denote s V,n simply by s n when there is no possible confusion, and s 1 just by s.
We now recall the Koszul sign rules, that are the ones we use to determine the signs appearing in this paper. If V, W are graded vector spaces, we have a map τ V,W : V ⊗ W → W ⊗ V defined as τ V,W (v ⊗ w) = (-1) |w||v| w ⊗ v where v ∈ V is a homogeneous element of degree |v| and w ∈ W is a homogeneous element of degree |w|. More generally, given graded vector spaces V 1 , . . . , V n and σ ∈ S n , we have a map
τ σ V1,...,Vn : V 1 ⊗ • • • ⊗ V n → V σ -1 (1) ⊗ • • • ⊗ V σ -1 (n) defined as τ σ V1,...,Vn (v 1 ⊗ • • • ⊗ v n ) = (-1) ϵ (v σ -1 (1) ⊗ • • • ⊗ v σ -1 (n) ) with ϵ = i>j σ -1 (i)<σ -1 (j) |v σ -1 (i) ||v σ -1 (j) | where v i ∈ V i is a homogeneous element of degree |v i | for i ∈ 1, n .
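For instance, for the cyclic permutation σ = (1 2 3) one has σ⁻¹(1) = 3, σ⁻¹(2) = 1 and σ⁻¹(3) = 2, so the definition gives τ σ V1,V2,V3 (v 1 ⊗ v 2 ⊗ v 3 ) = (-1)^{|v 3 |(|v 1 |+|v 2 |)} v 3 ⊗ v 1 ⊗ v 2 : the sign records that v 3 crosses both v 1 and v 2 , each crossing contributing the product of the corresponding degrees.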
Throughout this paper, when we consider an element v of degree |v| in a graded vector space V , we mean a homogeneous element v of V . Also, we will denote by id the identity map of every space of morphisms, without specifying it. All the products in this paper will be products in the category of graded vector spaces. Given graded vector spaces (V i ) i∈I , we thus have
∏_{i∈I} V_i = ⊕_{n∈Z} ∏_{i∈I} V_i^n
where the second product is the usual product of vector spaces.
Given graded vector spaces V, W we will denote by Hom k (V, W ) the vector space of k-linear maps f : V → W and by hom d k (V, W ) the vector space of homogeneous k-linear maps f : V → W of degree d, i.e. f (v) ∈ W n+d for all v ∈ V n . We assemble them in the graded vector space Hom k (V, W ) = d∈Z hom d k (V, W ) ⊆ Hom k (V, W ). We define the graded dual of a graded vector space V = n∈Z V n as the graded vector space V * = Hom k (V, k). Moreover, given graded vector spaces V , V ′ , W , W ′ and homogeneous elements f ∈ Hom k (V, V ′ ) and g ∈ Hom k (W, W ′ ), we have that
(f ⊗ g)(v ⊗ w) = (-1) |g||v| f (v) ⊗ g(w)
for homogeneous elements v ∈ V and w ∈ W . Recall that given graded vector spaces V 1 , . . . , V n and d ∈ Z we have a homogeneous linear isomorphism of degree 0
H j : ( n i=1 V i )[d] → V 1 ⊗ • • • ⊗ V j-1 ⊗ V j [d] ⊗ V j+1 ⊗ • • • ⊗ V n (2.1) sending an element s d (v 1 ⊗ • • • ⊗ v n ) to (-1) d(|v1|+•••+|vj-1|) v 1 ⊗ • • • ⊗ v j-1 ⊗ s d v j ⊗ v j+1 ⊗ • • • ⊗ v n .
Moreover, given graded vector spaces V and W and an integer d ∈ Z, we have homogeneous linear isomorphisms of degree 0
Hom k (V, W )[d] → Hom k (V, W [d]) (2.2) sending s d f ∈ Hom k (V, W )[d] to the map sending v ∈ V to s d (f (v))
and
Hom k (V, W )[d] → Hom k (V [-d], W ) (2.3) sending s d f ∈ Hom k (V, W )[d] to the map sending s -d v ∈ V [-d] to (-1) d|f | f (v).
Recall that a graded quiver A consists of a set of objects O together with graded vector spaces y A x for every x, y ∈ O. A dg quiver A is a graded quiver such that y A x is a dg vector space for every x, y ∈ O. Given a quiver A, its enveloping graded quiver is the graded quiver A e = A op ⊗A whose set of objects is O × O and whose space of morphisms from an object (x, y) to an object (x ′ , y ′ ) is defined as the graded vector space
(x ′ ,y ′ ) (A op ⊗ A) (x,y) = x A x ′ ⊗ y ′ A y . A
O p1 × • • • × O pn
where T n = N n for n > 1 and T 1 = N * . Given x = (x 1 , . . . , x n ) ∈ Ō we define its length as lg(x) = n, its left term as lt(x) = x 1 and right term as rt(x) = x n . For i ∈ 1, n , we define x≤i = (x 1 , . . . , x i ), x≥i = (x i , . . . , x n ) and for j > i, x i,j = (x i , x i+1 , . . . , x j ). One can similarly define x<i and x>i . Moreover, given x = (x 1 , . . . , xn ) ∈ Ō we define its length as lg( x) = n, its left term as lt(x) = x1 and its right term as rt(x) = xn . For x = (x 1 , . . . , x n ) ∈ Ō, we will denote
A ⊗x = x1 A x2 ⊗ x2 A x3 ⊗ • • • ⊗ x lg(x)-1 A x lg(x)
and we will often denote an element of A ⊗x as a 1 , a 2 , . . . , a lg(x)-1 instead of
a 1 ⊗ a 2 ⊗ • • • ⊗ a lg(x)-1 for a i ∈ xi A xi+1 , i ∈ 1, lg(x) -1 .
Moreover, given a tuple x = (x 1 , . . . , xn ) ∈ Ō we will denote
A ⊗ x = A ⊗x 1 ⊗ A ⊗x 2 ⊗ • • • ⊗ A ⊗x n .
Given tuples x = (x 1 , . . . , x n ), ȳ = (y 1 , . . . , y m ) ∈ Ō, we define their concatenation as x ⊔ ȳ = (x 1 , . . . , x n , y 1 , . . . , y m ). We also define the inverse of a tuple x = (x 1 , . . . , x n ) ∈ Ō as x-1 = (x n , x n-1 , . . . , x 1 ). If σ ∈ S n and x = (x 1 , . . . , x n ) ∈ O n , we define x • σ = (x σ(1) , x σ(2) , . . . , x σ(n) ). Moreover, given σ ∈ S n and x = (x 1 , . . . , xn ) ∈ Ōn , we define x • σ = (x σ(1) , xσ(2) , . . . , xσ(n) ). We denote by C n the subgroup of S n generated by the cycle σ = (12 . . . n) which sends i ∈ 1, n -1 to i + 1 and n to 1.
A ∞ -categories
In this section, we recall the notion of (cyclic) A ∞ -categories and (cyclic) A ∞ -morphisms as well as the definition of the natural bilinear form associated to a graded quiver. We also introduce a bilinear form on categories of the form A ⊕ B * where A and B are graded quivers related by a morphism A → B. We refer the reader to [START_REF] Seidel | Fukaya A∞-structures associated to Lefschetz fibrations. II, Algebra, geometry[END_REF] for the definitions of modules and bimodules over a category.
Definition 3.1. Given a graded quiver A with set of objects O, we define the graded vector space
C(A) = p≥1 x∈O p Hom k (A[1] ⊗x , lt(x) A rt(x) ) Given x = (x 1 , . . . , x n ) ∈ O n and a map F x : A[1] ⊗x → lt(x) A rt(x)
, we associate to F x a disc with several incoming arrows and one outgoing arrow (see Figure 3.1).
Figure 3.1: A disc representing a map F x : A[1] ⊗x → lt(x) A rt(x) , where x = (x 1 , . . . , x n ).
To simplify, we will often omit the objects and assemble the incoming arrows in a big arrow (see Figure 3.2).
F
F x : A[1] ⊗x → lt(x) A rt(x) is the tuple x ∈ Ō.
Definition 3.3. Let A be a graded quiver. By the isomorphism (2.2), an element sF ∈ C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] sending an element (sa 1 , . . . , sa n-1 ) to s(F x(sa 1 , . . . , sa n-1 ))
induces maps A[1] ⊗x → lt(x) A rt(x)
for x = (x 1 , . . . , x n ) ∈ Ō, a i ∈ xi A xi+1 , i ∈ 1, n -1 .
To a homogeneous element sF ∈ C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF], we thus associate disc with a bold outgoing arrow (see Figure 3.3) to indicate that the output of sF is an element in A [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF]. [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] with a disc associated to a map
F
G (xi,y1,...,ym,xi+1) : A[1] ⊗(xi,y1,...,ym,xi+1) → xi A xi+1
F x : A[1] ⊗x → x1 A xn [1] with x = (x 1 , . . . , x n ). The type of this diagram is x≤i ⊔ ȳ ⊔ x>i . This diagram is associated to the map F x • (id ⊗i-1 ⊗G (xi,y1,...,ym,xi+1) ⊗ id ⊗n-i ).
Definition 3.5. Given a dg bimodule M over a dg category A with set of objects O, its naturalization is the chain complex M nat = (
x∈O x M x )/(f g -(-1) |g||f | gf ) where f ∈ y A x and g ∈ x M y .
Definition 3.6. Given dg category A with differential d A and product µ, we define the dg bimodule Bar(A) as Bar(A) = ( x ′ Bar(A) x ) x,x ′ ∈O , where
x ′ Bar(A) x = p≥0 (x0,...,xp)∈O p x ′ A x0 ⊗ x0 A x1 [1] ⊗ ... ⊗ xp-1 A xp [1] ⊗ xp A x whose differential restricted to x ′ Bar(A) x is given by x∈ Ō d x 0 + d x 1
where
d x 0 (f 0 ⊗ sf 1 ⊗ ... ⊗ sf n ⊗ f n+1 ) = n+1 i=0 (-1) i-1 j=0 (|fj |+1) f 0 ⊗ sf 1 ⊗ ... ⊗ s(d A (f i )) ⊗ ... ⊗ sf n ⊗ f n+1 and d x 1 (f 0 ⊗ sf 1 ⊗ ... ⊗ sf n ⊗ f n+1 ) = (-1) ϵ1 f 0 f 1 ⊗ sf 2 ⊗ ... ⊗ sf n ⊗ f n+1 + n-1 i=1 (-1) ϵi f 0 ⊗ sf 1 ⊗ ... ⊗ s(f i f i+1 ) ⊗ ... ⊗ sf n ⊗ f n+1 + (-1) ϵn f 0 ⊗ sf 1 ⊗ ... ⊗ sf n-1 ⊗ f n f n+1 for all x = (x 0 , x 1 , . . . , x n ), f 0 ∈ x ′ A x0 , f n+1 ∈ xn A x and f i ∈ xi-1 A xi , i ∈ 1, n , with ϵ i = |f 0 | + i-1 j=1 (|f j | + 1)
where we have written f i f i+1 instead of µ(f i , f i+1 ) for i ∈ 0, n to denote the composition of A.
The bimodule structure of Bar(A) is given by ρ = ( (x ′ ,y ′ ) ρ (x,y) ) x,y,x ′ ,y ′ ∈O where
(x ′ ,y ′ ) ρ (x,y) : (x ′ ,y ′ ) A e (x,y) ⊗ y Bar(A) x → y ′ Bar(A) x ′ sends (f ⊗g)⊗(a⊗sa 0 ⊗sa 1 ⊗• • •⊗sa n ⊗a ′ ) to (-1) |f |(|a|+|b|+ n i=0 |sai|) (ga⊗sa 0 ⊗sa 1 ⊗• • •⊗sa n ⊗a ′ f ) for all (f ⊗ g) ∈ (x ′ ,y ′ ) A e (x,y) , (x 0 , . . . , x n+1 ) ∈ Ōn+1 , a ∈ y A x0 , b ∈ xn+1 A
x and a i ∈ xi A xi+1 for i ∈ 0, n . Moreover, in that case, we have a quasi-isomorphism of dg bimodules Bar(A) → A whose restriction to Bar(A) p vanishes for p ≥ 1 and whose restriction to Bar(A) 0 is µ. Definition 3.7. Given a dg category A, the dual bimodule of the dg bimodule Bar(A) is given by Bar(A) ∨ =
x,y∈O y Bar(A) ∨
x , with
y Bar(A) ∨ x = Hom -A e ( y ′ Bar(A) x ′ , y ′ A x ⊗ y A x ′ )
where the subscript -A e of the Hom indicates that we consider the space of morphisms of right A e -modules.
In particular, y Bar ∨ x is a right graded A e -module for each x, y ∈ O for the left A e -structure of A e which is the inner one.
More precisely, Bar(A) is a A-bimodule with the action given by ρ = ( (x",y") ρ (x,y) ) x,y,x",y"∈O where (x",y") ρ (x,y) : (x",y") A e (x,y) ⊗ y Bar(A) ∨ x → y" Bar(A) ∨
x"
sends (f ⊗ g) ⊗ y ′ y Φ x ′ x to the map y ′ y" Ψ x ′ x" : y" Bar(A) x" → y ′ A x" ⊗ y" A x ′ defined by y ′ y" Ψ x ′ x" (a ⊗ sa 0 ⊗ sa 1 ⊗ • • • ⊗ sa n ⊗ a ′ ) = (-1) (|f |+|g|)|Φ (1) | Φ (1) f ⊗ gΦ (2)
where we have written
y ′ y Φ x ′ x (a ⊗ sa 0 ⊗ sa 1 ⊗ • • • ⊗ sa n ⊗ a ′
) as a tensor product Φ (1) ⊗ Φ (2) , for all (f ⊗ g) ∈ (x ′ ,y ′ ) A e (x,y) , (x 0 , . . . , x n+1 ) ∈ Ōn+1 , a ∈ y A x0 , a ′ ∈ xn+1 A x and a i ∈ xi A xi+1 for i ∈ 0, n .
Remark 3.8. If A is a dg category, we have a map
Bar(A) ∨ nat → p≥1 x∈O p Hom k (A[1] ⊗x , lt(x) A rt(x) ) sending z y Φ z x ∈ Hom -A e ( z Bar(A) z , z A x ⊗ y A z ) to the collection of k-linear maps Ψ : p≥0 (x0,...,xp)∈O p x0 A x1 [1] ⊗ ... ⊗ xp-1 A xp [1] → x0 A xp
given by Ψ(sa 0 , . . . , sa p-1 ) = µ(τ
• x0 x0 Φ x0 xp (1 x0 , sa 0 , . . . , sa p-1 , 1 x0 )) for a i ∈ xi A xi+1
and where 1 x0 is the identity of x0 A x0 . Moreover, this map is an isomorphism of graded vector spaces. Definition 3.9. Given a graded quiver A with set of objects O, the Gerstenhaber product of elements sF, sG ∈ C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] is defined as the element sF • G sG ∈ C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] given by
(sF • G sG) x = 1≤i<j≤lg(x) sF x≤i ⊔x ≥j • G,i,j sG x i,j
for x ∈ Ō where sF x≤i ⊔x ≥j • G,i,j sG x i,j (sa 1 , . . . , sa n-1 ) = (-1) ϵi sF x≤i ⊔x ≥j (sa 1 , . . . , sa i-1 , sG x i,j (sa i , . . . , sa j-1 ), sa j , . . . , sa n-1 )
for a i ∈ xi A xi+1 , with ϵ = (|G| + 1) i-1 r=1 |sa r |.
The map (sF • G sG) x is by definition the sum of the maps associated to diagrams of type x and of the form F G Definition 3.10. Given a graded quiver A with set of objects O, the Gerstenhaber bracket is the graded Lie bracket [-, -] G on C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] defined for elements sF, sG ∈ C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] as the element [sF, sG] G ∈ C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] given by
[sF, sG] x G = (sF • G sG) x -(-1) (|F|+1)(|G|+1) (sG • G sF) x for x ∈ Ō.
Definition 3.11. An A ∞ -structure on a graded quiver A is a homogeneous element sm A ∈ C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] of degree 1 satisfying the Maurer-Cartan equation [sm A , sm A ] G = 0.
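Concretely, since sm A has degree 1, the bracket gives [sm A , sm A ] G = 2 sm A • G sm A , so that, k being of characteristic 0, the Maurer-Cartan equation amounts to (sm A • G sm A ) x = 0 for every x ∈ Ō. In low arities this recovers the familiar statements that m 1 squares to zero, that m 1 is a derivation with respect to m 2 , and that m 2 is associative up to the homotopy given by m 3 .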
Remark 3.12. An A ∞ -structure on a graded quiver A is thus tantamount to the data of a homogeneous element sm A ∈ C(A)[1] of degree 1 satisfying, for every x ∈ Ō, the identity Σ_{1≤i<j≤lg(x)} sm A^{x≤i ⊔ x≥j} • G,i,j sm A^{x i,j} = 0.
Definition 3.15. A bilinear form of degree d + 2 on a graded quiver A with set of objects O is a collection of maps y Γ x : y A x [1] ⊗ x A y [1] → k of degree d + 2, for every x, y ∈ O.
Definition 3.16. A bilinear form Γ on a graded quiver A is nondegenerate if the induced map
y A x [1] → ( y A x [1]) *
sending an element sa ∈ y A x [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] to the map sending sb ∈ x A y [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] to y Γ x (sa, sb) is an isomorphism.
Example 3.17. Consider two graded quivers A and B and a morphism (Φ 0 , Φ) : A → B of graded quivers. Define the bilinear form
Γ Φ : (A[1] ⊕ B * [d]) ⊗2 → k of degree d + 2 by y Γ Φ x (tf, sa) = -(-1) |sa||tf | x Γ Φ y (sa, tf ) = (-1) |tf |+1 (f • x Φ y )(a) for f ∈ Φ0(y) B * Φ0(x)
, a ∈ x A y , where t stands for the shift morphism B * → B * [d] and
y Γ Φ x (tf, tg) = y Γ Φ x (sa, sb) = 0 for f ∈ Φ0(y) B * Φ0(x) , g ∈ Φ0(x) B * Φ0(y)
, a ∈ y A x and b ∈ x A y . This bilinear form Γ Φ will be called the Φ-mixed bilinear form.
Example 3.18. If B = A, the bilinear form Γ id of the previous example is called the natural bilinear form on A and will be denoted Γ A .
Remark 3.19. The natural bilinear form on a Hom-finite graded quiver A is nondegenerate, whereas the Φ-mixed bilinear form is not in general. Definition 3.20. An A ∞ -structure sm A ∈ C(A) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] on a graded quiver A is almost cyclic with respect to a homogeneous bilinear form Γ : A[1] ⊗2 → k if the following holds:
xn Γ x1 (sm x A (sa 1 , . . . , sa n-1 ), sa n ) = (-1) |san|( n-1 i=1 |sai|) xn-1 Γ xn (sm x•σ -1 A (sa n , sa 1 , . . . , sa n-2 ), sa n-1 ) (3.1) for each n ∈ N * , x = (x 1 , . . . , x n ) ∈ Ō, σ = (12 . . . n) with a i ∈ xi A xi+1 for i ∈ 1, n -1 and a n ∈ xn A x1 . An almost cyclic A ∞ -category is an A ∞ -category whose A ∞ -
F x : A[1] ⊗x → F0(lt(x)) B F0(rt(x)
) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] is a map of degree 0, that satisfies 1≤i<j≤lg(x)
F x≤i ⊔x ≥j • (id ⊗x ≤i ⊗sm x i,j A ⊗ id ⊗x ≥j ) = 1≤i1<•••<in≤lg(x) sm ȳ B (F x≤i 1 ⊗ F x i 1 ,i 2 ⊗ • • • ⊗ F x in ,lg(x) ) (MI)
for every x ∈ ŌA and with ȳ = (F 0 (x 1 ), F 0 (x i1 ), . . . , F 0 (x lg(x) )). Note that given x ∈ O A the terms in both sums are sums of maps associated with diagrams of type x that are respectively of the form
F and m B m A F F F
The following definition was introduced in [START_REF] Kajiura | Noncommutative homotopy algebras associated with open strings[END_REF] by H. Kajiura in the cyclic case. Definition 3.22. An A ∞ -morphism (F 0 , F) between almost cyclic A ∞ -categories (A, sm A ) and (B, sm B ) with respect to bilinear forms γ and Γ is cyclic if
F0(y) Γ F0(x) (F 1 (sa), F 1 (sb)) = y γ x (sa, sb) for x, y ∈ O, a ∈ y A x and b ∈ x A y and for n ≥ 3 x∈Z ȳ∈Z ′ lt(z) Γ rt(z) (F x(sa 1 , . . . , sa i ), F ȳ (sa i+1 , . . . , sa n )) = 0
for z ∈ Ō, (sa 1 , . . . , sa i ) ∈ A [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] ⊗x and (sa i+1 , . . . , sa n ) ∈ A [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] ⊗ȳ where
Z = {x ∈ Ō | lt(x) = lt(z), rt(x) = rt(z)} and Z ′ = {ȳ ∈ Ō | lt(ȳ) = rt(z), rt(ȳ) = lt(z)}
Pre-Calabi-Yau categories
In this section, we present the diagrammatic calculus and recall the definition of d-pre-Calabi-Yau structures, d ∈ Z, appearing in [START_REF] Kontsevich | Pre-Calabi-Yau algebras and topological quantum field theories[END_REF] and [START_REF] Yeung | Pre-Calabi-Yau structures and moduli of representations[END_REF] as well as their relation with A ∞ -structures when the graded quiver considered is Hom-finite.
Diagrammatic calculus
In this subsection, we define discs and diagrams and we explain how to evaluate and compose them.
Following [START_REF] Yeung | Pre-Calabi-Yau structures and moduli of representations[END_REF] we define the following graded vector space.
Definition 4.1. Given a graded quiver A with set of objects O, we define the graded vector space
Multi • (Bar(A)[d]) = n∈N * Multi n (Bar(A)[d]) = n∈N * x∈ Ōn Multi x(Bar(A)[d])
where Multi x(Bar(A)[d]) is the graded vector space consisting of sums of homogeneous k-linear maps of the form
F x1 ,...,x n : A[1] ⊗x 1 ⊗A[1] ⊗x 2 ⊗• • •⊗A[1] ⊗x n → lt(x 1 ) A rt(x 2 ) [-d]⊗ lt(x 2 ) A rt(x 3 ) [-d]⊗• • •⊗ lt(x n ) A rt(x 1 ) [-d] for x = (x 1 , . . . , xn ). The action of σ = (σ n ) n∈N * ∈ n∈N * C n on an element F = (F x) x∈ Ō ∈ Multi • (Bar(A)[d]) is the element σ • F ∈ Multi • (Bar(A)[d]) given by (σ • F) x = τ σ -1 lt(x 1 ) A rt(x 2 ) [-d], lt(x 2 ) A rt(x 3 ) [-d],..., lt(x n ) A rt(x 1 ) [-d] • F x•σ • τ σ A[1] ⊗x 1 ,A[1] ⊗x 2 ,...,A[1] ⊗x n for x ∈ Ō. We will denote by Multi • (Bar(A)[d]) C lg(•) the space of elements of Multi • (Bar(A)[d]) that are invariant under the action of n∈N * C n . Remark 4.2. If A is a dg category, Multi n (Bar(A)[d]
) is the dg vector space
Hom (A e ) ⊗n (Bar(A)[d], id (A ⊗n ) σ )
where id (A ⊗n ) σ denotes the A ⊗n -bimodule A ⊗n whose structure is given by
(y1,...,yn) (A ⊗n ) (x1,...,xn) = n i=1 yi A xi+1
with the convention that
x n+1 = x 1 .
The action on morphisms is given by
(g 1 ⊗ • • • ⊗ g n ) • ω • (f 1 ⊗ • • • ⊗ f n ) = (-1) |f1|(|f2|+•••+|fn|) (g 1 ⊗ • • • ⊗ g n ) • ω • (f n ⊗ f 2 ⊗ • • • ⊗ f n-1 ⊗ f 1 )
for objects
x i , x ′ i , y i , y ′ i ∈ O, i ∈ 1, n , ω ∈ n i=1 yi A xi+1 , f i ∈ xi A x ′ i and g i ∈ yi A x ′ i where • denotes the usual bimodule structure of Hom (A e ) ⊗n (Bar(A)[d], A ⊗n ).
Definition 4.3.
A disc D is a circle with distinguished set of points which are either incoming or outgoing points. An incoming (resp. outgoing) point will be pictured as an incoming (resp. outgoing) arrow (see Figure 4.1). The size of the disc D is the number of outgoing arrows, and it will be denoted by |D|. The type of the decorated disc is the tuple of the form (x 1 , . . . , xn ) where xi is the tuple formed by objects of O, read in counterclockwise order, between the outgoing arrows i -1 and i, with the convention that the arrow 0 is the arrow n (see Figure 4.2). Definition 4.6. A marked disc is a decorated disc with a bold arrow (see Figure 4.3).
x 1 3 3 x 1 2 x 1 1 1 x 2 4 x 2 3 x 2 2 x 2 1 x 3 3 2 x 3 2 x 3 1 Figure 4.2: A decorated disc of type x = (x 1 , x2 , x3 ) where x1 = (x 1 1 , x 1 2 , x 1 3 ), x2 = (x 2 1 , x 2 2 , x 2 3 , x 2 4 ) and x3 = (x 3 1 , x 3 2 , x 3
x 1 3 x 1 2 x 1 1 x 2 4 x 2 3 x 2 2 x 2 1 x 3 3 x 3 2 x 3 1 3 1 2 Figure 4.3: A marked disc Definition 4.7. Consider s d+1 F x ∈ Multi x(Bar(A)[d])[d + 1] with x = (x 1 , . . . , xn ) ∈ Ōn . Given (a, b) ∈ ({i} × 1, lg(x 1 ) + • • • + lg(x n ) ) ⊔ ({o} × 1, n )
, by the isomorphisms (2.1), (2.2) and (2.3), s d+1 F x induces a morphism of the form
A[1] ⊗x 1 ⊗ • • • ⊗ A[1] ⊗x j-1 ⊗ A[1] ⊗x j ≤b ′ ⊗ x j b ′ A[-d] x j b ′ +1 ⊗ A[1] ⊗x j >b ′ ⊗ A[1] ⊗x n → lt(x 1 ) A rt(x 2 ) [-d] ⊗ • • • ⊗ lt(x n ) A rt(x 1 ) [-d] (4.1)
given by (-1)
(d+1)|F x| F x•(id ⊗(lg(x 1 )+•••+lg(x j-1 )+b ′ -j) ⊗s -d-1 ⊗id ⊗(lg(x j )-b ′ +lg(x j+1 )+•••+lg(x n )-n+j) ) if a = i and b = lg(x 1 ) + • • • + lg(x j ) + b ′ with j ∈ 1, n , b ′ ∈ 1, lg(x j ) -1 ,
and a morphism of the form
A[1] ⊗x 1 ⊗ A[1] ⊗x 2 ⊗ • • • ⊗ A[1] ⊗x n → lt(x 1 ) A rt(x 2 ) [-d] ⊗ • • • ⊗ lt(x b-1 ) A rt(x b ) [-d] ⊗ lt(x b ) A rt(x b+1 ) [1] ⊗ • • • ⊗ lt(x n ) A rt(x 1 ) [-d] (4.2)
given by (id
⊗(b-1) ⊗s d+1 ⊗ id ⊗(n-b) ) • F x if a = o. To s d+1 F x
s d+1 F ∈ Multi • (Bar(A)[d])[d + 1], the evaluation of E(D, s d+1 F x) at elements ( sa 1 , sa 2 , sa 3 ) is obtained by first compute F x( sa 1 , sa 2 , sa 3 ) = lt(x 1 ) F -d rt(x 2 ) ⊗ lt(x 2 ) F -d rt(x 3 ) ⊗ lt(x 3 ) F -d rt(x 3 )
, and then apply the shift s d+1 to the third tensor factor of the result since the bold arrow is the third outgoing arrow of the disc. The result is then
(-1) ϵ lt(x 1 ) F -d rt(x 2 ) ⊗ lt(x 2 ) F -d rt(x 3 ) ⊗ lt(x 3 ) F 1 rt(x 3 ) with ϵ = (d + 1)(| lt(x 1 ) F -d rt(x 2 ) | + | lt(x 2 ) F -d rt(x 3 ) |).
To simplify, from now on we will omit the objects when drawing a marked disc as well as the label of the outgoing arrows. By convention, if the bold arrow of the marked disc (of size n) is outgoing, it denotes the n-th outgoing arrow, and if the bold arrow is incoming, then the clockwise preceding outgoing arrow of the marked disc is the one labeled by n. Moreover, we will draw a big incoming arrow instead of several consecutive incoming arrows (see
A i for i ∈ 1, n , together with a subset R ⊆ (⊔ n i=1 A i ) 2 satisfying that (D.1) if (α, β) ∈ R ∩ (A i × A j ), then i ̸ = j
and α is an incoming arrow and β is an outgoing arrow;
(D.2) if ( y ′ α x ′ , y β x ) ∈ R, then x ′ = y and x = y ′ ;
where we use the notation introduced in Definition 4.4. We will represent a pair (α, β) ∈ R ∩ (A i × A j ) by connecting the outgoing arrow β of D j with the incoming arrow α of D i (see Figure 4.6), and we will say that the disc D i shares an arrow with D j or that α and β are connected.
An incoming (resp. outgoing) arrow of the diagram (D, R) is an arrow α of one of the discs D i such that there is no arrow β satisfying that (α, β) ∈ R (resp. (β, α) ∈ R). A diagram (D, R) has a distinguished object of O between any couple of consecutive arrows of (D, R) given by the decoration of the discs D 1 , . . . , D n .
(A.1) ∀i ∈ 1, n , ∃ α i ∈ D i such that (α i , β j ) ∈ R for some j ̸ = i, β j ∈ D j ;
(A.2) for (x, y) ∈ R, either x or y is a bold arrow, but not both;
(A.3) there is no family of arrows {x 1 , . . . , x k } such that (x i , x i+1 ) ∈ R for all i ∈ 1, k and
x k = x 1 .
Note that there is precisely one disc in D = {D 1 , . . . , D n } whose bold arrow is also an incoming or outgoing arrow of the diagram (D, R), i.e. an admissible diagram (D, R) always has either an incoming or outgoing bold arrow.
The size of an admissible diagram
(D = {D 1 , . . . , D n }, R) is n i=1 |D i | -n + 1
, and it will be denoted by |D|. We will relabel the outgoing arrows of (D, R) in clockwise direction from 1 to |D|, such that the outgoing arrow labeled by |D| is precisely the bold arrow of (D, R) if the latter arrow is outgoing, and it is the outgoing arrow preceding the bold arrow of (D, R) in clockwise sense if the bold arrow of (D, R) is incoming. Given an admissible diagram (D = {D 1 , . . . , D n }, R) and a tuple (s d+1 F 1 , . . . , s d+1 F n ) with
F i ∈ Multi xi (Bar(A)[d]) such that xi is the type of D i for all i ∈ 1, n , we will define a map E((D, R), s d+1 F 1 , . . . , s d+1 F n ) ∈ Multi x(Bar(A)[d])[d + 1]
where x is the type of (D, R), as follows. First, we suppose that the bold arrow is on a sink. Definition 4.14. Let (D = {D 1 , . . . , D n }, R) be a diagram and let (s d+1 F 1 , . . . , s d+1 F n ) be a tuple of homogeneous elements with F i ∈ Multi xi (Bar(A) [d]) such that xi is the type of D i for all i ∈ 1, n . We will define the evaluation of E((D, R), s d+1 F 1 , . . . , s d+1 F n ) at an element sa x of A[1] ⊗ x, where x is the type of (D, R) by induction on n as follows. We fix a source D s of (D, R).
(E.1) We place each element sa i j ∈ x i j A x i j+1 [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] in the incoming arrow x i j α x i j+1 of (D, R) in counterclockwise order beginning at the bold arrow. This will create a sign, as follows : if an element of degree m turns around the source representing a map of degree m ′ , we add a sign (-1) mm ′ and if an element of degree ℓ passes through an element of degree ℓ ′ to go to its place, we add a sign (-1) ℓℓ ′ . Here, turning around the source means passing through all of its inputs.
(E.2) We evaluate E(D_s, s^{d+1}F_s) at the elements sa_{i_j} ∈ _{x_{i_j}}A_{x_{i_{j+1}}}[1] corresponding to the incoming arrows of D_s, to obtain an element of the form
(-1)^{(d+1)(|b_1|+⋯+|b_{|D_s|-1}|)} b_1 ⊗ ⋯ ⊗ b_{|D_s|-1} ⊗ s^{d+1}b_{|D_s|} ∈ _{y′_1}A_{y_1}[-d] ⊗ ⋯ ⊗ _{y′_{|D_s|}}A_{y_{|D_s|}}[1].
Recall that the elements b_i for i ∈ 1, |D_s| - 1 are labeled with an index from 1, |D| coming from the corresponding outgoing arrow of (D, R).
(E.3) We add a Koszul sign coming from transposing b_1 ⊗ ⋯ ⊗ b_{|D_s|-1} with all of the elements sa_{i_j} ∈ _{x_{i_j}}A_{x_{i_{j+1}}}[1] corresponding to the incoming arrows of (D, R) between the last outgoing arrow of (D, R) and the bold arrow of D_s connected with an arrow of the rest of the diagram, in the clockwise order.
(E.4) We consider the diagram (D ′ , R ′ ) given by removing the disc D s from (D, R) and place s d+1 b |Ds| at the incoming arrow of (D ′ , R ′ ) that was previously connected to the bold outgoing arrow of D s . By induction we evaluate E((D ′ , R ′ ), s d+1 F 1 , . . . , s d+1 Fs , . . . , s d+1 F n ) at the elements sa i j ∈ x i j A x i j+1 [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] corresponding to the incoming arrows of D ′ . This evaluation carries a sign
(-1)^{|b_{|D_s|}|(|sa_u| + |sa_v|) + |sa_u||sa_v|}
where sa_u and sa_v are the tuples of elements corresponding to the incoming arrows of D′ that precede and follow the element s^{d+1}b_{|D_s|} without any outgoing arrow between them. Recall that the tensor factors in the evaluation of E((D′, R′), s^{d+1}F_1, . . . , s^{d+1}F̂_s, . . . , s^{d+1}F_n) at the elements sa_{i_j} ∈ _{x_{i_j}}A_{x_{i_{j+1}}}[1] stated before are labeled with an index from 1, |D| coming from the corresponding outgoing arrow of (D, R).
(E.5) We reorder the tensor factors obtained in steps (E.2) and (E.4) according to the labeling of the outgoing arrows of (D, R), and add the respective Koszul sign.
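The steps (E.1)–(E.5) amount to evaluating the discs one source at a time and feeding the distinguished output into the rest of the diagram. The following sketch is ours, with ad hoc data structures, and all gradings and Koszul signs are deliberately dropped; it is only meant to make the recursion concrete. Condition (A.3) guarantees that the loop terminates.

```python
# Sketch (not from the paper): evaluating a diagram of discs by repeatedly
# evaluating a disc whose inputs are available and feeding its outputs onward.
# Discs are plain Python functions from a tuple of inputs to a tuple of outputs.

def evaluate_diagram(discs, wiring, external):
    """discs: dict name -> callable(tuple) -> tuple of outputs
    wiring: dict (dst_name, dst_slot) -> (src_name, src_out_index)
    external: dict (name, slot) -> value for the remaining (incoming) slots
    Returns the dict of all outputs of all discs."""
    n_inputs = {name: 1 + max([s for (d, s) in list(wiring) + list(external) if d == name] or [-1])
                for name in discs}
    outputs, done = {}, set()
    while len(done) < len(discs):
        for name, f in discs.items():
            if name in done:
                continue
            args, ready = [], True
            for slot in range(n_inputs[name]):
                if (name, slot) in external:
                    args.append(external[(name, slot)])
                else:
                    src, k = wiring[(name, slot)]
                    if src in done:
                        args.append(outputs[src][k])
                    else:
                        ready = False
                        break
            if ready:
                outputs[name] = f(tuple(args))
                done.add(name)
    return outputs

# Toy example: D1 has one input and two outputs; its second output feeds D2.
D1 = lambda xs: (xs[0] + 1, 10 * xs[0])
D2 = lambda xs: (xs[0] * xs[1],)
out = evaluate_diagram({"D1": D1, "D2": D2},
                       wiring={("D2", 1): ("D1", 1)},
                       external={("D1", 0): 3, ("D2", 0): 2})
print(out)  # {'D1': (4, 30), 'D2': (60,)}
```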
We illustrate the previous procedure with an admissible diagram D consisting of two discs, given as follows
D 1 D 2
In this case, the evaluation of E(D̃, s^{d+1}G_ȳ, s^{d+1}F_x̄) at (sa_1, . . . , sa_{p-1}, sb_1, sa_p^{>q}, sb_2, . . . , sb_{m-1}, sa_p^{≤q}, sb_m, sa_{p+1}, . . . , sa_n), with sa_i ∈ A[1]^{⊗x̄_i} and sb_i ∈ A[1]^{⊗ȳ_i}, is detailed as follows. First, we place each tensor factor sa_{i_j} and sb_{i_j} of sa_i and sb_i in the corresponding incoming arrow, as explained in (E.1), adding a sign (-1)^{(|G_ȳ|+d+1)(|sa_1|+⋯+|sa_{p-1}|+|sa_p^{>q}|)}.
. Also, we add a sign
(-1) | sb 1 || sa p >q |+| sb m || sa p ≤q |
for the permutation of the corresponding elements. We picture this as follows
The elements sa_1, . . . , sa_n and sb_1, . . . , sb_m are placed at the corresponding incoming arrows of the discs D_1 and D_2. In step (E.2), we compute E(D_1, s^{d+1}G_ȳ)(sb_1, . . . , sb_m) = (-1)^{(d+1)(|ϵ_1|+⋯+|ϵ_{m-1}|)} ϵ_1 ⊗ ⋯ ⊗ s^{d+1}ϵ_m.
After step (E.3) we have gained a total sign (-1) ∆ with
∆ = (|G ȳ | + d + 1)(| sa p ≤q | + | sa p+1 | + • • • + | sa n |) + (|G ȳ | + | sa 1 | + • • • + | sa p >q |)(|ϵ 1 | + • • • + |ϵ j-i-1 |), multiplying the element ϵ 1 ⊗• • •⊗ϵ j-i-1 ⊗E(D 2 , s d+1 F x)( sa 1 ⊗• • •⊗ sa p >q ⊗s d+1 ϵ j-i ⊗ sa p ≤q ⊗• • •⊗ sa n ). In (E.4), we compute E(D 2 , s d+1 F x)( sa 1 ⊗ • • • ⊗ sa p ≤q ⊗ s d+1 ϵ j-i ⊗ sa p >q ⊗ • • • ⊗ sa n ) = δ 1 ⊗ • • • ⊗ δ i-1 ⊗ δ j ⊗ • • • ⊗ s d+1 δ n
and add a sign (-1) ∆ ′ to the final result, with
∆ ′ = | sa p >q |(d+1+|ϵ j-i |+| sa p ≤q |)+| sa p ≤q |(d+1+|ϵ j-i |)+(d+1)(|δ 1 |+• • •+|δ i-1 |+|δ j |+• • •+|δ n-1 |)
In (E.5), we reorder the outputs with a sign, giving finally
E(D, s d+1 G ȳ , s d+1 F x)( sa 1 , . . . , sa n ) = (-1) ∆+∆ ′ +∆ ′′ δ 1 ⊗• • •⊗δ i-1 ⊗ϵ 1 ⊗• • •⊗ϵ j-i-1 ⊗δ j ⊗• • •⊗s d+1 δ n where ∆ ′′ = (|δ 1 | + . . . |δ i-1 |)(|ϵ 1 | + • • • + |ϵ j-i-1 |).
Now, suppose that the bold arrow is on a source. Definition 4.15. Let (D = {D_1, . . . , D_n}, R) be a diagram and let (s^{d+1}F_1, . . . , s^{d+1}F_n) be a tuple of homogeneous elements with F_i ∈ Multi_{x̄_i}(Bar(A)[d]) such that x̄_i is the type of D_i for all i ∈ 1, n. We will define the evaluation E((D, R), s^{d+1}F_1, . . . , s^{d+1}F_n) at an element sa_x̄ of A[1]^{⊗x̄}, where x̄ is the type of (D, R), by induction on n as follows. Suppose that the bold arrow is an arrow of the source D_s of (D, R) and that the p-th outgoing arrow α of D_s is connected with another disc.
(F.1) We place each element sa_{i_j} ∈ _{x_{i_j}}A_{x_{i_{j+1}}}[1] in the incoming arrow _{x_{i_j}}α_{x_{i_{j+1}}} of (D, R) in counterclockwise order beginning at the bold arrow. This will create a sign, as follows: if an element of degree m turns around a source representing a map of degree m′, we add a sign (-1)^{mm′}, and if an element of degree ℓ passes through an element of degree ℓ′ to go to its place, we add a sign (-1)^{ℓℓ′}. Again, turning around a source means passing through all of its inputs.
(F.2) We transpose all the elements that do not correspond to incoming arrows of D_s with the elements corresponding to incoming arrows of D_s between α and the bold arrow. We add the sign coming from this transposition. We evaluate E(D_s, s^{d+1}F_s) at the elements sa_{i_j} ∈ _{x_{i_j}}A_{x_{i_{j+1}}}[1] corresponding to the incoming arrows of D_s, to obtain an element of the form
s^{d+1}(b_1 ⊗ ⋯ ⊗ b_{|D_s|}) ∈ (_{y′_1}A_{y_1}[-d] ⊗ ⋯ ⊗ _{y′_{|D_s|}}A_{y_{|D_s|}}[-d])[d + 1].
(F.3) We add a Koszul sign coming from transposing b_{p+1} ⊗ ⋯ ⊗ b_{|D_s|} with all of the elements that do not correspond to an incoming arrow of D_s. Recall that the elements b_i for i ∈ 1, |D_s| \ {p} are labeled with an index from 1, |D| coming from the corresponding outgoing arrow of (D, R).
(F.4) We then consider the diagram (D′, R′) given by removing the disc D_s from (D, R). Its bold arrow is the one previously connected to α. Note that the element b_p is associated with this incoming arrow of (D′, R′).
(F.5) We evaluate E((D ′ , R ′ ), s d+1 F 1 , . . . , s d+1 Fs , . . . , s d+1 F n ) at the elements sa i j ∈ x i j A x i j+1 [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] corresponding to the incoming arrows of D ′ . Recall that the tensor factors in this evaluation are labeled with an index from 1, |D| coming from the corresponding outgoing arrow of (D, R).
We illustrate the previous procedure with an admissible diagram D consisting of two discs, given as follows
D 1 D 2
In this case, the evaluation of E(D̃, s^{d+1}F_x̄, s^{d+1}G_ȳ) at (sa_1, . . . , sa_{p-1}, sb_1^{≤q}, sa_p, sb_2, . . . , sb_m, sa_{p+1}, sb_1^{>q}, sa_{p+2}, . . . , sa_n), with sa_i ∈ A[1]^{⊗x̄_i} and sb_i ∈ A[1]^{⊗ȳ_i}, is detailed as follows. First, we place each tensor factor sa_{i_j} and sb_{i_j} of sa_i and sb_i in the corresponding incoming arrow, as explained in (F.1), adding a sign (-1)^{|sb_1^{≤q}||sa_p| + |sb_1^{>q}||sa_{p+1}|} for the permutation of the corresponding elements.
In step (F.2), we place all the elements sb i after the others and multiply by a sign
(-1) (| sb 1 |+•••+| sb m |)(| sa p+1 |+•••+| sa n |)
We then compute
E(D 1 , s d+1 F x)( sa 1 , . . . , sa n ) = (-1) (d+1)(|ϵ1|+•••+|ϵn-1|) ϵ 1 ⊗ • • • ⊗ s d+1 ϵ n . After (F.
3), we end with
ϵ 1 ⊗ • • • ⊗ ϵ p-1 ⊗ (E, s d+1 G ȳ )(ϵ p ⊗ sb 1 ≤q ⊗ • • • ⊗ sb m ⊗ sb 1 >q ) ⊗ ϵ p+1 ⊗ • • • ⊗ s d+1 ϵ n
preceded by a sign (-1) ∆ ′ with
∆ ′ = | sb 1 ≤q || sa p | + | sb 1 >q || sa p+1 | + ∆ + (| sb 1 ≤q | + • • • + | sb m | + | sb 1 >q |)(|ϵ p+1 | + • • • + |ϵ n | + d + 1) + (|G ȳ | + d + 1)(ϵ 1 + • • • + ϵ p-1 )
In (F.5), we order the elements and compute E(D_2, s^{d+1}G_ȳ)(sb_1^{≤q} ⊗ ϵ_p ⊗ sb_1^{>q}, sb_2, . . . , sb_m) and multiply the result by (-1)^{|sb_1^{>q}|(|ϵ_p| + |sb_2| + ⋯ + |sb_m|) + (d+1)|G_ȳ|}. Finally, we get
E(D, s d+1 F x, s d+1 G ȳ )( sa 1 , . . . , sa n ) = (-1) ∆ ′′ ϵ 1 ⊗ • • • ⊗ ϵ p-1 ⊗ δ 1 ⊗ • • • ⊗ δ m ⊗ ϵ p+1 ⊗ • • • ⊗ s d+1 ϵ n
where
∆ ′′ = ∆ ′ + | sb 1 >q |(|ϵ i | + | sb 2 | + • • • + | sb m |) + (d + 1)|G ȳ |. Definition 4.16. A filled diagram is an admissible diagram (D = {D 1 , . . . , D n }, R) together with ele- ments s d+1 F i ∈ Multi • (Bar(A)[d])[d + 1] for i ∈ 1, n . To each filled diagram D = {(D = {D 1 , . . . , D n }, R), (s d+1 F i ) i∈ 1,n } we associate the element E( D) = E((D, R), s d+1 F x1 , . . . , s d+1 F xn ) ∈ Multi x(Bar(A)[d])[d + 1]
where x̄ is the type of (D, R) and x̄_i is the type of D_i for i ∈ 1, n. We depict a filled diagram as a diagram replacing the names of the discs by the corresponding maps (see Figure 4.7).
Example 4.17. Consider the following filled diagram.
The diagram consists of two discs filled with the maps G and F, decorated by the objects x^1_1, x^1_2, . . . , x^1_{p_1}, x^3_{i-1}, x^3_i, y^1_1, . . . , y^1_{q_1} and y^2_1, . . . , y^2_{q_2}.
It represents a map
s d+1 F x : A[1] ⊗x 1 ⊗ A[1] ⊗x 2 ⊗ A[1] ⊗x 3 → x 1 1 A x 2 p 2 [-d] ⊗ x 2 1 A x 3 p 3 [-d] ⊗ x 3 1 A x 1 p 1 [1]
where x = (x 1 , x2 , x3 ) and xj = (x j 1 , . . . , x j lg(x i ) ) for j ∈ 1, 3 that takes as an input the last output of a map
s d+1 G (ȳ 1 ,ȳ 2 ) : A[1] ⊗ȳ 1 ⊗ A[1] ⊗ȳ 2 → y 1 1 A y 2 m [-d] ⊗ x 3 i-1 A x 3 i [1] with ȳ1 = (y 1 1 , y 1 2 , . . . , y 1 n , x 3 i ), ȳ2 = (x 3 i-1 , y 2 1 , . . . , y 2 m ). The type of the diagram is (x 1 , x2 , ȳ1 ⊔ x3 ≥i , x3 <i ⊔ ȳ2
), meaning that it represents a map that can be evaluated at ( sa 1 , sa 2 , sb 1 ⊗ sa 3 ≥i , sa 3 <i ⊗ sb 2 ) where sa i ∈ A[1] ⊗x i and sb i ∈ A ⊗ȳ i . This evaluation goes in 5 steps.
1. We place the elements around the diagram, which creates a sign (-1) ϵ with
ϵ = |s d+1 G (ȳ 1 ,ȳ 2 ) |(| sa 1 | + | sa 2 | + | sa 3 ≥i |) + | sa ≥i || sb 1 | + | sb 2 || sa >i | 2. We evaluate E(D 1 , s d+1 G (ȳ 1 ,ȳ 2 ) ) at ( sb 1 , s b 2 )
. The result of this evaluation is an element of the shifted tensor product
( y 1 1 A y 2 m [-d]⊗ x 3 i-1 A x 3 i [-d])[d+1] that we will write s d+1 ( y 1 1 G -d y 2 m ⊗ x 3 i-1 G -d x 3 i
). We then use the isomorphism (2.1) to obtain the tensor product
(-1) ϵ+(d+1)| y 1 1 G -d y 2 m | y 1 1 G -d y 2 m ⊗ x 3 i-1 G 1 x 3 i ∈ y 1 1 A y 2 m [-d] ⊗ x 3 i-1 A x 3 i [1]
3. We put the first output before and get
(-1) ϵ+(d+1)| y 1 1 G -d y 2 m |+ϵ ′ y 1 1 G -d y 2 m ⊗ E(D 2 , s d+1 F x( sa 1 , sa 2 , sa ≥i , sa <i ⊗ x 3 i-1 G 1 x 3 i )) with ϵ ′ = y 1 1 G -d y 2 m |(|F | + d + 1 + | sa 1 | + | sa 2 | + | sa ≥i |) 4. We evaluate F x at ( sa 1 , sa 2 , sa <i ⊗ x 3 i-1 G 1 x 3 i ⊗ sa ≥i ) which add a sign (-1) | sa ≥i |(| sa<i|+| x 3 i-1 G 1 x 3 i |)
to the result for the transposition of the corresponding elements.
5. We order the outputs according to the labeling of their corresponding outgoing arrow and finally get (-1)
ϵ+(d+1)| y 1 1 G -d y 2 m |+ϵ ′ +ϵ ′′ x 1 1 F -d x 2 p 2 ⊗ x 1 2 F -d x 3 p 3 ⊗ y 1 1 G -d y 2 m ⊗ x 3 1 F 1 x 1 p 1
with ϵ ′′ = (-1)
(| y 1 1 G -d y 2 m |+1)(| x 1 1 F -d x 2 p 2 |+| x 2 1 F -d x 3 p 3 |)+(d+1)(| x 1 1 F -d x 2 p 2 |+•••+| x 1 2 F -d x 3 p 3 |+•••+| y 1 1 G -d y 2 m |)
.
The necklace graded Lie algebra
In order to recall what a pre-Calabi-Yau structure on A is, one first defines a graded Lie algebra, called the necklace graded Lie algebra and appearing in [START_REF] Kontsevich | Pre-Calabi-Yau algebras and topological quantum field theories[END_REF]. As a graded vector space, this graded Lie algebra is Multi
• (Bar(A)[d]) C lg(•) [d + 1].
In order to define a graded Lie bracket on this space, we first define a new operation as follows.
Definition 4.18. Consider a graded quiver A with set of objects O as well as tuples of elements of Ō̄ given by x̄ = (x̄_1, . . . , x̄_n), ȳ = (ȳ_1, . . . , ȳ_m) ∈ Ō̄ such that rt(ȳ_1) = x^v_j and lt(ȳ_m) = x^v_{j-1} for some v ∈ 1, n and j ∈ 1, lg(x̄_v). The inner necklace composition at (v, j) of elements
s^{d+1}F_x̄ ∈ Multi_x̄(Bar(A)[d])[d+1] and s^{d+1}G_ȳ ∈ Multi_ȳ(Bar(A)[d])[d+1] is given by s^{d+1}F_x̄ ∘^{nec,v,j}_{inn} s^{d+1}G_ȳ = E(D̃) ∈ Multi_{x̄ ⊔_{v,j,inn} ȳ}(Bar(A)[d])[d+1] with x̄ ⊔_{v,j,inn} ȳ = (x̄_1, . . . , x̄_{v-1}, ȳ_1 ⊔ x̄_v^{<j}, ȳ_2, . . . , ȳ_{m-1}, x̄_v^{>j-1} ⊔ ȳ_m, x̄_{v+1}, . . . , x̄_n) and where D̃ is the corresponding filled diagram of type x̄ ⊔_{v,j,inn} ȳ.
Definition 4.19. Consider a graded quiver A with set of objects O as well as tuples of elements of Ō̄ given by x̄ = (x̄_1, . . . , x̄_n), ȳ = (ȳ_1, . . . , ȳ_m) ∈ Ō̄ such that lt(ȳ_v) = x^1_{j-1} and rt(ȳ_{v+1}) = x^1_j for some v ∈ 1, m and j ∈ 1, lg(x̄_1). The outer necklace composition at (v, j) of elements s^{d+1}F_x̄ ∈ Multi_x̄(Bar(A)[d])[d+1] and s^{d+1}G_ȳ ∈ Multi_ȳ(Bar(A)[d])[d+1] is given by s^{d+1}F_x̄ ∘^{nec,v,j}_{out} s^{d+1}G_ȳ = E(D̃) ∈ Multi_{x̄ ⊔_{v,j,out} ȳ}(Bar(A)[d])[d+1] with x̄ ⊔_{v,j,out} ȳ = (ȳ_1, . . . , ȳ_{v-1}, x̄_1^{<j-1} ⊔ ȳ_v, x̄_2, . . . , x̄_n, ȳ_{v+1} ⊔ x̄_1^{>j}, ȳ_{v+2}, . . . , ȳ_m) and where D̃ is the corresponding filled diagram of type x̄ ⊔_{v,j,out} ȳ.
Definition 4.20. Given a graded quiver A with set of objects O, the necklace product of elements s^{d+1}F, s^{d+1}G ∈ Multi_•(Bar(A)[d])^{C_{lg(•)}}[d+1] is the element s^{d+1}F ∘_{nec} s^{d+1}G ∈ Multi_•(Bar(A)[d])[d+1]
given by
(s d+1 F • nec s d+1 G) z = ( x, ȳ,v,j)∈Iinn s d+1 F x • nec,v,j inn s d+1 G ȳ + ( x, ȳ,v,j)∈Iout s d+1 F x • nec,v,j out s d+1 G ȳ
for all z ∈ Ō, where
I inn = {( x, ȳ, v, j) ∈ Ō × Ō × 1, lg( x) × 1, lg(x v ) | x ⊔ v,j,inn ȳ = z} I out = {( x, ȳ, v, j) ∈ Ō × Ō × 1, lg( ȳ) × 1, lg(x 1 ) | x ⊔ v,j,out ȳ = z}
Definition 4.21. Given a graded quiver A with set of objects O, the necklace bracket of two elements
s d+1 F, s d+1 G ∈ Multi • (Bar(A)[d]) C lg(•) [d + 1] is defined as the element [s d+1 F, s d+1 G] nec ∈ Multi • (Bar(A)[d])[d + 1]
where
[s d+1 F, s d+1 G] z nec = (s d+1 F • nec s d+1 G) z -(-1) (|F|+d+1)(|G|+d+1) (s d+1 G • nec s d+1 F) z
for every z ∈ Ō.
Lemma 4.22.
Let A be a graded quiver with set of objects O. Then, we have an injective map
j : Multi • (Bar(A)[d]) C lg(•) [d + 1] → C(A ⊕ A * [d -1]
) [1] sending
s d+1 ϕ ∈ Multi • (Bar(A)[d])[d + 1] to sψ x given by (π A •ψ x)( sa n , tf n-1 , sa n-2 , tf n-2 , ..., sa 2 , tf 2 , sa 1 ) = (-1) ϵ n-1 i=1 (f i •s d )⊗s d ϕ x( sa 1 , sa 2 , ..., sa n ) with ϵ = n-1 i=1 |tf i | n j=i+1 | sa j | + (|ϕ| + 1) n-1 i=1 |tf i | + d(n -1) + 1≤i<j≤n | sa i || sa j | + 1≤i<j≤n-1 |tf i ||tf j |
and by
(π A * • ψ x)( sa n , tf n-1 , sa n-2 , tf n-2 , ..., sa 2 , tf 2 , sa 1 )(s 1-d b) = (-1) δ n-1 i=1 (f i • s d ) ϕ x′ ( sa 1 ⊗ sb ⊗ sa n , sa 2 , ..., sa n-1 ) for x = (x 1 , . . . , xn ), x′ = (x 1 ⊔ xn , x2 , . . . , xn-1 ) sa i ∈ A[1] ⊗x i for i ∈ 1, n , sb ∈ rt(x 1 ) A[1] lt(x n ) and tf i ∈ rt(x i+1 ) A * lt(x i ) [d] for i ∈ 1, n -1 with δ = n-1 i=1 |tf i | n j=i+1 | sa j | + (|ϕ| + 1) n-1 i=1 |tf i | + d(n -1) + 1≤i<j≤n | sa i || sa j | + 1≤i<j≤n-1 |tf i ||tf j | + (d + 1) n i=1 | sa i | + (| sa n | + |sb|) n-1 i=2 | sa i | + | sa n ||sb| where π A (resp. π A * ) is the canonical projection A ⊕ A * [d -1] → A (resp. A ⊕ A * [d -1] → A * [d -1]).
We have the following relation between the necklace product and the usual Gerstenhaber circle product, which does not seem to have been observed in the literature so far.
Proposition 4.23. Let F, G be elements in Multi
• (Bar(A)[d]) C lg(•) . Then, we have jx ⊔ v,j,inn ȳ (s d+1 F x • nec,v,j inn s d+1 G ȳ ) = jx(s d+1 F x) • G,p,q π A (jȳ(s d+1 G ȳ )) (4.3) for x, ȳ ∈ Ō, v ∈ 1, lg( x) and j ∈ 1, lg(x v ) and jx ⊔ v,j,out ȳ (s d+1 F x • nec,v,j out s d+1 G ȳ ) = -(-1) (|F|+d+1)(|G|+d+1) jȳ(s d+1 G ȳ ) • G,p,q π A * (jx(s d+1 F x)) (4.4)
for v ∈ 1, lg( ȳ) and j ∈ 1, lg(x 1 ) , where p = lg(x 1 ) + ... + lg(x v ) + j + 2 and q = p + lg( ȳ) i=1 lg(ȳ i ).
Proof. We will first show the identity (4.3). Given x = (x 1 , . . . , xn ), ȳ = (ȳ 1 , . . . , ȳm ) ∈ Ō, both the compositions
s d+1 F x • nec,v,j inn s d+1 G ȳ and jx(s d+1 F x) • G,p,q π A (jȳ(s d+1 G ȳ ))
are zero if there are no u, v ∈ 1, n and j ∈ 1, lg(x v ) such that lt(ȳ u ) = x v j and rt(ȳ u+1 ) = x v j+1 with the convention that xn+1 = x1 . We will thus assume that there exist such u, v ∈ 1, n and j ∈ 1, lg(x v ) . We can further suppose that u = n because of the invariance under the action of C n . To simplify the expressions, we will denote the result of applying F x on any argument as a tensor product lt(x
1 ) F -d rt(x 2 ) ⊗ • • • ⊗ lt(x n-1 ) F -d rt(x n ) ⊗ lt(x n ) F -d rt(x 1 )
, where we omit those arguments. We have
jx ⊔ v,j,inn ȳ (s d+1 F x • nec,v,j inn s d+1 G ȳ ) ( sa n , tf n-1 , . . . , sa v+1 , tf v , sa v >j , sb m , tg m-1 , . . . , sb 2 , tg 1 , sb 1 , sa v ≤j , tf v-1 , . . . , sa 2 , tf 1 , sa 1 ) = (-1) ϵ ((f 1 • s d ) ⊗ • • • ⊗ (f v • s d ) ⊗ (g 1 • s d ) ⊗ • • • ⊗ (g m-1 • s d ) ⊗ (f v+1 • s d ) ⊗ • • • ⊗ (f n-1 • s d ) ⊗ id) (s d+1 F x • nec,v,j inn s d+1 G ȳ )( sa 1 , sa 2 , . . . , sa v-1 , sa v ≤j , sb 1 , . . . , sb m-1 , sb m , sa v >j , . . . , sa n-1 , sa n ) = (-1) ϵ+ϵ ′ ((f 1 • s d ) ⊗ • • • ⊗ (f v • s d ) ⊗ (g 1 • s d ) ⊗ • • • ⊗ (g m-1 • s d ) ⊗ (f v+1 • s d ) ⊗ • • • ⊗ (f n-1 • s d ) ⊗ id) s d+1 ( lt(x 1 ) F -d rt(x 2 ) , . . . lt(x v-1 ) F -d rt(x v ) , lt(ȳ 1 ) G -d rt(ȳ 2 ) , ..., lt(ȳ m-1 ) G -d rt(ȳ m ) , lt(x v ) F -d rt(x v+1 ) . . . lt(x n ) F -d rt(x 1 ) )
for elements
sa i ∈ A[1] ⊗x i , sb i ∈ A[1] ⊗ȳ i , tf i ∈ rt(x i+1 ) A * lt(x i ) [d] and tg i ∈ rt(ȳ i+1 ) A * lt(ȳ i ) [d], with ϵ = n-1 i=v |tf i | n k=i+1 | sa k | + m-1 i=1 |tg i |( n k=v+1 | sa k | + | sa v >j | + m k=i+1 | sb k |) + v-1 i=1 |tf i |( n k=v+1 | sa k | + | sa v >j | + m k=1 | sb k | + | sa v ≤j | + v-1 k=i+1 | sa k |) + d(n + m) + (|F| + |G| + d)( n-1 i=1 |tf i | + m-1 i=1 |tg i |) + n-1 i=1 |tf i | m-1 i=1 |tg i | + 1≤i<k≤n-1 |tf i ||tf k | + 1≤i<k≤m-1 |tg i ||tg k | + 1≤i<k≤n | sa i || sa k | + | sa v >j || sa v ≤j | + 1≤i<k≤m | sb i || sb k | + n i=1 | sa i | m i=1 | sb i | ϵ ′ = (|G| + d + 1)( v-1 i=1 | sa i | + | sa v ≤j |) + m-1 i=1 | lt(ȳ i ) G -d rt(ȳ i+1 ) |(| sa v >j | + n i=v+1 | sa i | + n i=v | lt(x i ) F -d rt(x i+1 ) | + | lt(ȳ m ) G 1 rt(ȳ 1 ) | + d + 1)
On the other hand, we have that
jx(s d+1 F x) • G,p,q π A (jȳ(s d+1 G ȳ )) ( sa n , tf n-1 , . . . , sa v+1 , tf v , sa v >j , sb m , tg m-1 , . . . , sb 2 , tg 1 , sb 1 , sa v ≤j , tf v-1 , . . . , sa 2 , tf 1 , sa 1 ) = (-1) δ jx(s d+1 F x)( sa n , tf n-1 , . . . , sa v+1 , tf v , sa v >j , π A ( m-1 i=1 (g i • s d ) ⊗ id)s d+1 G ȳ ( sb 1 , . . . , sb m ) , sa v ≤j , tf v-1 , . . . , sa 2 , tf 1 , sa 1 ) = (-1) δ+δ ′ ((f 1 • s d ) ⊗ • • • ⊗ (f v-1 • s d ) ⊗ (f v • s d ) ⊗ • • • ⊗ (f n-1 • s d ) ⊗ id) s d+1 ( lt(x 1 ) F -d rt(x 2 ) , . . . lt(x v-1 ) F -d rt(x v ) , λ G , lt(x v ) F -d rt(x v+1 ) . . . lt(x n ) F -d rt(x 1 ) )
where
λ G = m-1 i=1 (g i • s d ))( lt(ȳ 1 ) G -d rt(ȳ 2 ) , ..., lt(ȳ m-1 ) G -d rt(ȳ m ) ) ∈ k and δ = (|G| + d + 1)( n i=v+1 | sa i | + | sa v >j | + n-1 i=v |tf i |) + m-1 i=1 |tg i | m k=i+1 | sb k | + d(m -1) + (|G| + d) m-1 i=1 |tg i | + 1≤i<k≤m-1 |tg i ||tg k | + 1≤i<k≤m | sb i || sb k | δ ′ = 1≤i<k≤n-1 |tf i ||tf k | + 1≤i<k≤n | sa i || sa k | + | sa v ≤j || sa v >j | + n i=1 | sa i || lt(ȳ m ) G 1 rt(ȳ 1 ) | + v-1 i=1 |tf i |( n k=i+1 | sa k | + | lt(ȳ m ) G 1 rt(ȳ 1 ) |) + n-1 i=v |tf i | n k=i+1 | sa k | + |λG|( v-1 i=1 | sa i | + | sa v ≤j | + | lt(ȳ m ) G 1 rt(ȳ 1 ) | + v-1 i=1 |tf i | + n i=v lt(x i ) F -d rt(x i+1 ) |) + d(n -1) + (|F| + 1) n-1 i=1 |tf i | + (d + 1) m-1 i=1 | lt(ȳ i ) G -d rt(ȳ i+1 ) |
Therefore, we have
jx(s d+1 F x) • G,p,q πA jȳ(s d+1 G ȳ ) ( sa n , tf n-1 , . . . , sa v+1 , tf v , sa v >j , sb m , tg m-1 , . . . , sb 2 , tg 1 , sb 1 , sa v ≤j , tf v-1 , . . . , sa 2 , tf 1 , sa 1 ) = (-1) δ+δ ′ +δ ′′ ((f 1 • s d ) ⊗ • • • ⊗ (f v • s d ) ⊗ (g 1 • s d ) ⊗ • • • ⊗ (g m-1 • s d ) ⊗ (f v+1 • s d ) ⊗ • • • ⊗ (f n-1 • s d ) ⊗ id) s d+1 ( lt(x 1 ) F -d rt(x 2 ) , . . . lt(x v-1 ) F -d rt(x v ) , lt(ȳ 1 ) G -d rt(ȳ 2 ) , ..., lt(ȳ m-1 ) G -d rt(ȳ m ) , lt(x v ) F -d rt(x v+1 ) . . . lt(x n ) F -d rt(x 1 ) ) with δ ′′ = m-1 i=1 |tg i |( n-1 i=v |tf i | + v-1 i=1 lt(x i ) F -d rt(x i+1 )
). One can easily check that ϵ + ϵ ′ = δ + δ ′ + δ ′′ mod 2. Then, the first identity is proved.
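Identities of this kind are finite parity checks in the degrees. The toy snippet below (with made-up two-variable expressions, not the actual ϵ, ϵ′, δ, δ′, δ″ of this proof) only illustrates how such a mod-2 verification can be mechanized with sympy:

```python
# Toy illustration (not the actual exponents from the proof): checking that
# two sign exponents agree modulo 2 by verifying that their difference has
# only even coefficients.
import sympy as sp

a, b = sp.symbols("a b", integer=True)
lhs = (a + b) ** 2          # plays the role of "epsilon + epsilon'"
rhs = a ** 2 + b ** 2       # plays the role of "delta + delta' + delta''"
diff = sp.expand(lhs - rhs) # = 2*a*b
print(all(co % 2 == 0 for co in sp.Poly(diff, a, b).coeffs()))  # True
```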
We now prove the identity (4.4). Given x = (x 1 , . . . , xn ), ȳ = (ȳ 1 , . . . , ȳm ) ∈ Ō, both the compositions
s d+1 F x • nec,v,j out s d+1 G ȳ and jȳ(s d+1 G ȳ ) • G,p,q π A * (jx(s d+1 F x))
are zero if there are no v ∈ 1, m and j ∈ 1, lg(x 1 ) such that x1 j-1 = lt(ȳ v ) and x1 j = rt(ȳ v+1 ). We will thus assume that such v ∈ 1, m and j ∈ 1, lg(x 1 ) exist. Then, we have
jx ⊔ v,j,out ȳ (s d+1 F x • nec,v,j out s d+1 G ȳ ) ( sb m , tg m-1 , . . . , sb v+1 , sa 1 ≥j , tf n , sa n , tf n-1 , sa n-1 , . . . , tf 1 , sa 1 <j , sb v , tg v-1 , . . . , sb 1 ) = (-1) ϵ ((g 1 • s d ) ⊗ • • • ⊗ (g v-1 • s d ) ⊗ (f 1 • s d ) ⊗ • • • ⊗ (f n • s d ) ⊗ (g v+1 • s d ) ⊗ • • • ⊗ (g m-1 • s d ) ⊗ id) (s d+1 F x • nec,v,j out s d+1 G ȳ )( sb 1 , . . . , sb v , sa 1 <j , . . . , sa n , sa 1 ≥j , sb v+1 , . . . , sb m ) = (-1) ϵ+ϵ ′ ((g 1 • s d ) ⊗ • • • ⊗ (g v-1 • s d ) ⊗ (f 1 • s d ) ⊗ • • • ⊗ (f n • s d ) ⊗ (g v+1 • s d ) ⊗ • • • ⊗ (g m-1 • s d ) ⊗ id) s d+1 ( lt(ȳ 1 ) G -d rt(ȳ 2 ) , . . . , lt(ȳ v-1 ) G -d rt(ȳ v ) , lt(x 1 ) F -d rt(x 2 ) , . . . , lt(x n ) F -d rt(x 1 ) , lt(ȳ v+1 ) G -d rt(ȳ v+2 ) , . . . , lt(ȳ m ) G -d rt(ȳ 1 )
)
with ϵ = (|F| + |G| + d)( m-1 i=1 |tg i | + n i=1 |tf i |) + d(n + m) + m-1 i=v+1 |tg i | m k=i+1 | sb k | + | sa 1 ≥j || sa 1 <j | + 1≤i<k≤n | sa i || sa k | + n i=1 |tf i |( n k=i+1 | sa k | + | sa 1 ≥j | + m i=v+1 | sb i |) + v-1 i=1 |tg i |( n i=1 | sa i | + m k=i+1 | sb k |) + 1≤i<k≤m | sb i || sb k | + 1≤i<k≤m-1 |tg i ||tg k | + 1≤i<k≤n |tf i ||tf k | + m-1 i=1 |tg i | n i=1 |tf i | + m i=1 | sb i | n i=1 | sa i | ϵ ′ = n i=1 | sa i |( m i=v+1 | sb i | + m i=v+1 | lt(ȳ i ) G -d rt(ȳ i+1 ) |) + (|F| + d + 1) v-1 i=1 | lt(ȳ i ) G -d rt(ȳ i+1 ) | + | sa 1 ≥j | n i=2 | sa i | + | sa 1 <j || lt(ȳ v ) G 1 rt(ȳ v+1 ) |
On the other hand, we have that
jȳ(s d+1 G ȳ ) • G,p,q π A * (jx(s d+1 F x)) ( sb m , tg m-1 , . . . , sb v+1 , sa 1 ≥j , tf n , sa n , tf n-1 , sa n-1 , . . . , sa 1 <j , sb v , tg v-1 , . . . , sb 1 ) = (-1) δ jȳ(s d+1 G ȳ )( sb m , tg m-1 , . . . , sb v+1 , tg, sb v , tg v-1 , . . . , sb 1 )
where tg = π A * (jx(s d+1 F x))( sa 1 ≥j , tf n , sa n , tf n-1 , sa n-1 , . . . , sa 1 <j ) ∈ A * [d] and
δ = (|F| + d + 1)( m i=v+1 | sb i | + m-1 i=v+1 |tg i |)
Therefore, we have that
jȳ(s d+1 G ȳ ) • G,p,q π A * (jx(s d+1 F x)) ( sb m , tg m-1 , . . . , sb v+1 , tg v+1 , sa 1 ≥j , sa n , tf n-1 , sa n-1 , . . . , sa 1 <j , sb v , tg v-1 , . . . , sb 1 ) = (-1) δ+δ ′ ((g 1 • s d ) ⊗ • • • ⊗ (g v-1 • s d ) ⊗ (g • s d ) ⊗ (g v+1 • s d ) ⊗ • • • ⊗ (g m • s d ) ⊗ id) s d+1 ( lt(ȳ 1 ) G -d rt(ȳ 2 ) , . . . , lt(ȳ v-1 ) G -d rt(ȳ v ) , lt(ȳ v ) G -d rt(ȳ v+1 ) , . . . , lt(ȳ m ) G -d rt(ȳ 1 )
) where
δ ′ = |tg| m k=v+1 | sb k | + m-1 i=1 |tg i | m k=i+1 | sb k | + 1≤i<k≤m | sb i || sb k | + 1≤i<k≤m-1 |tg i ||tg k | + m-1 i=1 |tg i ||tg| + (|G| + 1)( m-1 i=1 |tg i | + |tg|) + d(m -1)
Furthermore, by definition, we have that
(g • s d )( lt(ȳ v ) G -d rt(ȳ v+1 ) ) = (-1) ∆ ( n i=1 (f i • s d ))( lt(x 1 ) F -d rt(x 2 ) ⊗ • • • ⊗ lt(x n ) F -d rt(x 1 ) )
where
∆ =|tg| + 1 + 1≤i<k≤n | sa i || sa k | + | lt(ȳ v ) G 1 rt(ȳ v+1 ) | n i=2 | sa i | + 1≤i<k≤n |tf i ||tf k | + d(n -1) + (|F| + 1) n i=1 |tf i | + n-1 i=1 |tf i | n k=i+1 | sa k | + | sa 1 ≥j |( n i=2 | sa i | + | sa 1 <j | + | lt(ȳ v ) G 1 rt(ȳ v+1 ) |)
Finally, we have that
jȳ(s d+1 G ȳ ) • G,p,q π A * (jx(s d+1 F x)) ( sb m , tg m-1 , . . . , sb v+1 , tg v+1 , sa 1 ≥j , tf n , sa n , tf n-1 , sa n-1 , . . . , sa 1 <j , sb v , tg v-1 , . . . , sb 1 ) = (-1) γ ((g 1 • s d ) ⊗ • • • ⊗ (g v-1 • s d ) ⊗ (f 1 • s d ) ⊗ • • • ⊗ (f n • s d ) ⊗ (g v+1 • s d ) ⊗ • • • ⊗ (g m-1 • s d ) ⊗ id) s d+1 ( lt(ȳ 1 ) G -d rt(ȳ 2 ) , . . . , lt(ȳ v-1 ) G -d rt(ȳ v ) , lt(x 1 ) F -d rt(x 2 ) , . . . , lt(x n ) F -d rt(x 1 ) , lt(ȳ v+1 ) G -d rt(ȳ v+2 ) , . . . , lt(ȳ m ) G -d rt(ȳ 1 ) )
where
γ = δ + δ ′ + |tg|( m-1 i=v+1 |tg i | + d + 1 + v-1 i=1 | lt(ȳ i ) G -d rt(ȳ i+1 ) |) + ∆ + n i=1 |tf i |( v-1 i=1 | lt(ȳ i ) G -d rt(ȳ i+1 ) | + m-1 i=v+1 |tg i |) It is straightforward to check that ϵ + ϵ ′ + γ = 1 + (|F| + d + 1)(|G| + d + 1) mod 2.
Corollary 4.24. The necklace bracket [-, -] nec introduced in Definition 4.21 gives a graded Lie algebra structure on Multi
• (Bar(A)[d]) C lg(•) [d + 1]. Proof. For F, G ∈ Multi • (Bar(A)[d]) C lg(•) , we have that j([s d+1 F, s d+1 G] nec ) = [j(s d+1 F), j(s d+1 G)] G
Moreover, using that j is injective, we have that [-, -] nec is a graded Lie bracket.
Pre-Calabi-Yau structures
Definition 4.25. A d-pre-Calabi-Yau structure on a graded quiver A is an element
s d+1 M A ∈ Multi • (Bar(A)[d]) C lg(•) [d + 1]
of degree 1, solving the Maurer-Cartan equation
[s d+1 M A , s d+1 M A ] nec = 0
Note that, since s d+1 M A has degree 1, this is tantamount to requiring that
s d+1 M A • nec s d+1 M A = 0.
We now recall the following result of [START_REF] Kontsevich | Pre-Calabi-Yau algebras and topological quantum field theories[END_REF] which states the link between a d-pre-Calabi-Yau structure on a Hom-finite graded quiver A and a cyclic A ∞ -structure on A ⊕ A * [d -1]. We denote by Γ the natural bilinear form on the boundary quiver
∂ d-1 A = A ⊕ A * [d -1], defined in Example 3.18. Proposition 4.26. A d-pre-Calabi-Yau structure on a graded quiver A induces an A ∞ -structure on A ⊕ A * [d -1]
that restricts to A and that is almost cyclic with respect to Γ. Moreover, if the graded quiver A is Hom-finite, then the data of a d-pre-Calabi-Yau structure on A is equivalent to the data of a cyclic
A ∞ -structure on A ⊕ A * [d -1] that restricts to A. Proof. Consider an element s d+1 M A ∈ Multi • (Bar(A)[d]) C lg(•) [d + 1] of degree 1.
We then define sm A⊕A * = j(s d+1 M A ) By Proposition 4.23, s d+1 M A defines a d-pre-Calabi-Yau structure if and
only if sm A⊕A * defines an A ∞ -structure on A ⊕ A * [d -1]
. Moreover, it is straightforward to show that this A ∞ -structure is almost cyclic with respect to Γ.
If A is Hom-finite, the bijectivity of j tells us that the collection of maps m_{A⊕A*} is in correspondence with maps of the form M_A, which shows the equivalence.
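For a single object and an ungraded toy model, the pairing underlying Γ (Example 3.18) is the canonical pairing between a vector space and its dual on A ⊕ A*. The following numerical sketch is ours (signs, shifts and gradings deliberately omitted); it only checks nondegeneracy of this pairing in the simplest case:

```python
# Concrete sketch (conventions ours; the actual form Gamma is the graded one of
# Example 3.18): for a single object with A = R^n, the doubled space A (+) A*
# carries the pairing Gamma((a, f), (b, g)) = f(b) + g(a), which is nondegenerate.
import numpy as np

n = 3
def gamma(x, y):
    a, f = x[:n], x[n:]
    b, g = y[:n], y[n:]
    return f @ b + g @ a

# Gram matrix of gamma in the standard basis of R^{2n}: the block
# anti-diagonal matrix [[0, I], [I, 0]], hence invertible.
E = np.eye(2 * n)
G = np.array([[gamma(E[i], E[j]) for j in range(2 * n)] for i in range(2 * n)])
print(np.linalg.matrix_rank(G) == 2 * n)  # True
```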
5 Pre-Calabi-Yau morphisms
The mixed necklace graded Lie algebra
One can also define a "mixed" necklace bracket, which will be useful in the next section. As we did for the necklace bracket, we first define the following graded vector space.
Definition 5.1. Given graded quivers A and B with respective sets of objects O_A and O_B and a map Φ : O_A → O_B, consider the graded vector space
B_•(A[1], B[-d]) = ⊕_{n∈N} ⊕_{x̄ ∈ Ō̄_n^A} Hom_k( ⊗_{i=1}^n A[1]^{⊗x̄_i}, ⊗_{i=1}^{n-1} _{Φ(lt(x̄_i))}B_{Φ(rt(x̄_{i+1}))}[-d] ⊗ ( _{lt(x̄_n)}A[-d]_{rt(x̄_1)} ⊕ _{Φ(lt(x̄_n))}B*[-1]_{Φ(rt(x̄_1))} ) ).
Definition 5.2. Consider s^{d+1}F_x̄ ∈ B_x̄(A[1], B[-d])[d+1] with x̄ = (x̄_1, . . . , x̄_n) ∈ Ō̄_n. Then s^{d+1}F
x induces a morphism of the form
A[1] ⊗x 1 ⊗ Φ(rt(x 1 )) B Φ(lt(x n )) [-d] ⊗ A[1] ⊗x n ⊗ • • • ⊗ A[1] ⊗x n-1 → n-1 i=1 Φ(lt(x i )) B Φ(rt(x i+1 )) [-d] (5.1)
sending ( sa 1 , s -d b, sa n , . . . , sa n-1 ) to
(-1) ϵ Φ(lt(x 1 )) F -d Φ(rt(x 2 )) ⊗ • • • ⊗ Φ(lt(x n-1 )) F -d Φ(rt(x n )) ⊗ Φ(lt(x n )) F 1 Φ(rt(x 1
)) (sb) where we have written
(id ⊗(n-1) ⊗π B ) F x1 ,...,x n ( sa 1 , . . . , sa n ) = Φ(rt(x 1 )) F -d Φ(lt(x 2 )) ⊗ • • • ⊗ Φ(rt(x n-1 )) F -d Φ(lt(x n )) ⊗ Φ(rt(x n )) F 1 Φ(lt(x 1 ))
where π B denotes the canonical projection
lt(x n ) A[-d] rt(x 1 ) ⊕ Φ(lt(x n )) B * [-1] Φ(rt(x 1 )) → Φ(lt(x n )) B * [-1] Φ(rt(x 1 )) for sa i ∈ A[1] ⊗x i with ϵ = (d + 1)(| sa 1 | + |F x|) + (|sb| + | sa n |) n-1 i=2 | sa i | + |sb|| sa n | + (d + 1) n-1 i=1 | Φ(lt(x i )) F -d Φ(rt(x i+1 )) |
and a morphism of the form
A[1] ⊗x 1 ⊗ A[1] ⊗x 2 ⊗ • • • ⊗ A[1] ⊗x n → n-1 i=2 Φ(lt(x i )) B Φ(rt(x i+1 )) [-d] ⊗ lt(x n ) A[1] rt(x 1 ) (5.2)
given by id ⊗(n-1) ⊗(s d+1 • π A ) • F x1 ,...,x n where π A denotes the canonical projection
_{lt(x̄_n)}A[-d]_{rt(x̄_1)} ⊕ _{Φ(lt(x̄_n))}B*[-1]_{Φ(rt(x̄_1))} → _{lt(x̄_n)}A[-d]_{rt(x̄_1)}.
Lemma 5.3. Given two graded quivers A and B with respective sets of objects O_A and O_B and a map Φ : O_A → O_B, we have an isomorphism
B : B_•(A[1], B[-d])[d+1] ≅ B^A_•(A[1], B[-d]) ⊕ B^B_•(A[1], B[-d]), where B^A_•(A[1], B[-d]) = ⊕_{n∈N} ⊕_{x̄ ∈ Ō̄_n^A} Hom_k( ⊗_{i=1}^n A[1]^{⊗x̄_i}, ⊗_{i=1}^{n-1} _{Φ(lt(x̄_i))}B_{Φ(rt(x̄_{i+1}))}[-d] ⊗ _{lt(x̄_n)}A[1]_{rt(x̄_1)} )
and
B B • (A[1], B[-d]) = n∈N x∈ Ōn A Hom k A[1] ⊗x 1 ⊗ Φ(rt(x 1 )) B[-d] Φ(lt(x n )) ⊗ A[1] ⊗x n ⊗ • • • ⊗ A[1] ⊗x n-1 , n-1 i=1 Φ(lt(x i )) B Φ(rt(x i+1 )) [-d] sending an element s d+1 F ∈ B • (A[1], B[-d])[d + 1]
to the elements defined in (5.2) and (5.1).
Definition 5.4. Let A and B be graded quivers with respective sets of objects O_A and O_B and consider a map Φ : O_A → O_B. We define the graded quiver Q_Φ whose set of objects is O_A and whose spaces of morphisms are _y(Q_Φ)_x = _yA_x ⊕ _{Φ(y)}B*_{Φ(x)}[d-1] for x, y ∈ O_A.
Definition 5.5. Let A and B be graded quivers with respective sets of objects O_A and O_B and consider a map Φ : O_A → O_B. Given s^{d+1}F, s^{d+1}G ∈ B_•(A[1], B[-d])[d+1], we define their Φ-mixed necklace product as the element s^{d+1}F ∘^Φ_{nec} s^{d+1}G ∈ B_•(A[1], B[-d])[d+1] given by (s^{d+1}F ∘^Φ_{nec} s^{d+1}G)_x̄ = Σ E(D̃) + Σ E(D̃′) ∈ B_•(A[1], B[-d])[d+1]
where the sums are over all the filled diagrams D and D ′ of type x and of the form
F G D = D ′ = F G
More precisely, each of D and D ′ can be pictured as two different filled diagrams by put in bold the arrow corresponding to the last output of F which is either an element in A [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF]
or in B * [d]. Thus, D is one of the following F G D 1 = D 2 = F G
and D ′ is one of the following
F G D ′ 1 = D ′ 2 = F G
Remark 5.6. The diagrams of Definition 5.5 are filled with F and G seen as elements of Multi
• (Bar(Q ′ Φ )[d]) where Q ′
Φ is the graded quiver whose set of objects is O A and whose spaces of morphisms are y (Q ′ Φ ) x = y A x ⊕ Φ(y) B Φ(x) . Definition 5.7. Given graded quivers A and B with respective sets of objects O A and O B and a map Φ : O A → O B , the Φ-mixed necklace bracket is the graded Lie bracket which is defined for elements
F, G ∈ B_•(A[1], B[-d]) by [s^{d+1}F, s^{d+1}G]^Φ_{nec} = s^{d+1}F ∘^Φ_{nec} s^{d+1}G - (-1)^{(|F|+d+1)(|G|+d+1)} s^{d+1}G ∘^Φ_{nec} s^{d+1}F.
Definition 5.11. Given d-pre-Calabi-Yau categories (A, s^{d+1}M_A) and (B, s^{d+1}M_B) with respective sets of objects O_A and O_B and an element s^{d+1}F ∈ Multi_•(A[1], B[-d])[d+1]
of degree 0, the multinecklace composition of s d+1 M A and s d+1 F is the element
s^{d+1}F ∘_{multinec} s^{d+1}M_A ∈ Multi_•(A[1], B[-d])[d+1] given by (s^{d+1}F ∘_{multinec} s^{d+1}M_A)_x̄ = Σ E(D̃) for x̄ ∈ Ō̄_A, where the sum is over all the filled diagrams D̃ of type x̄ consisting of a disc filled with M_A sharing its arrows with discs filled with F, and where we have omitted the bold arrow, meaning that it is any of the outgoing arrows.
Definition 5.12. Given d-pre-Calabi-Yau categories (A, s^{d+1}M_A) and (B, s^{d+1}M_B) with respective sets of objects O_A and O_B and an element s^{d+1}F ∈ Multi_•(A[1], B[-d])[d+1] of degree 0, the pre-composition of s^{d+1}F and s^{d+1}M_B is the element s^{d+1}M_B ∘_{pre} s^{d+1}F ∈ Multi_•(A[1], B[-d])[d+1] given by (s^{d+1}M_B ∘_{pre} s^{d+1}F)_x̄ = Σ E(D̃′) for x̄ ∈ Ō̄_A, where the sum is over all the filled diagrams D̃′ of type x̄ consisting of a disc filled with M_B sharing its arrows with discs filled with F, and where we have omitted the bold arrow, meaning that it is any of the outgoing arrows.
Definition 5.13. Given d-pre-Calabi-Yau categories (A, s^{d+1}M_A) and (B, s^{d+1}M_B) with respective sets of objects O_A and O_B, a d-pre-Calabi-Yau morphism (F_0, F) : (A,
s d+1 M A ) → (B, s d+1 M B ) is a map F 0 : O A → O B together with an element s d+1 F ∈ Multi • (A[1], B[-d]) C lg(•) [d + 1] of degree 0 satisfying the following equation (s d+1 F • multinec s d+1 M A ) x = (s d+1 M B • pre s d+1 F) x (5.3)
for all x ∈ ŌA . Note that the left member and right member of the previous identity belong to
Hom k n i=1 A[1] ⊗x i , n-1 i=1 F0(lt(x i )) B F0(rt(x i+1 )) [-d] ⊗ F0(lt(x n )) B F0(rt(x 1
)) [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] We now recall how to compose d-pre-Calabi-Yau morphisms.
Definition 5.14. Let (A, s^{d+1}M_A), (B, s^{d+1}M_B) and (C, s^{d+1}M_C) be d-pre-Calabi-Yau categories with respective sets of objects O_A, O_B and O_C, and let (F_0, F) : (A, s^{d+1}M_A) → (B, s^{d+1}M_B) and (G_0, G) : (B, s^{d+1}M_B) → (C, s^{d+1}M_C) be d-pre-Calabi-Yau morphisms. The composition of s^{d+1}F and s^{d+1}G is the pair (G_0 ∘ F_0, s^{d+1}F ∘_{pCY} s^{d+1}G)
where
s d+1 F • pCY s d+1 G ∈ Multi • (A[1], B[-d]) C lg(•) [d + 1]
is of degree 0 and is given by (s^{d+1}G ∘_{pCY} s^{d+1}F)_x̄ = Σ E(D̃), where the sum is over all filled diagrams D̃ of type x̄ ∈ Ō̄_A of the form (5.4), in which discs filled with G are connected with discs filled with F, and where we have omitted the bold arrow, meaning that it is any of the outgoing arrows.
Proposition 5.15. For d ∈ Z, d-pre-Calabi-Yau categories and d-pre-Calabi-Yau morphisms together with the composition given in Definition 5.14 define a category, denoted pCY_d. Given a graded quiver A with set of objects O, the identity morphism Id : (A, s^{d+1}M_A) → (A, s^{d+1}M_A) is given by Id_x = id_{xA_x} for x ∈ O and Id_{x̄_1,...,x̄_n} = 0 for (x̄_1, . . . , x̄_n) ∈ Ō̄_n such that n ≠ 1, or n = 1 and lg(x̄_1) ≠ 1.
Proof. We only have to check that the composition is associative and that the composition of any two d-pre-Calabi-Yau morphisms F and G is a d-pre-Calabi-Yau morphism. The associativity of the composition is clear. Now, consider two d-pre-Calabi-Yau morphisms F and G. Their composition is the sum of diagrams of the form (5.4). Therefore, the multinecklace composition of this composition and the pre-Calabi-Yau structure s^{d+1}M_A is a sum of diagrams of the form (5.5), consisting of a disc filled with M_A connected with discs filled with F, which are in turn connected with discs filled with G.
Given x ∈ ŌA , we have to sum over all diagrams of such type and we have several possibilities for the type of the inner diagram, defined as the subdiagram consisting of the disc filled with M A together with all those discs directly connected to the one filled with M A . Note that if we fix the type of the outer diagram given as the complement of the inner diagram, the type of the inner one is fixed. Moreover, changing the inner diagram for one of same type does not change the type of the whole diagram. Therefore, taking the sum over all diagrams of type x ∈ ŌA is the same as taking the sum over all the possible types for the outer diagram and for each of those, taking the sum over all the suitable types for the inner one. This second sum allows us to use that F is a pre-Calabi-Yau morphism to replace the inner diagram by one consisting of a discs filled with M B whose incoming arrows are connected with outgoing arrows of discs filled with F. Then, the sum of all the diagrams of type x of the form (5.5) is equal to the sum of all the diagrams of type x of the form
(5.6), in which a disc filled with M_B has its incoming arrows connected with outgoing arrows of discs filled with F, which are in turn connected with discs filled with G,
and we now define the inner diagram as filled diagram consisting of the disc filled with M B and of all the discs connected to it. The previous remarks on the types of the inner and outer diagrams still hold. Thus, the sum over all possible types for the whole diagram is again the sum over all the possible types for the outer diagram and for each of those, taking the sum over all the suitable types for the inner one. G being a pre-Calabi-Yau morphism, one can again use (5.3) and say that the sum of all the diagrams of type x of the form (5.6) is now equal to the sum of all the diagrams of type x of the form
described as follows: a disc filled with M_C is connected with discs filled with G, which are in turn connected with discs filled with F. Therefore, s^{d+1}G ∘_{pCY} s^{d+1}F is a pre-Calabi-Yau morphism.
The following class of morphisms will be useful in the next subsection. Note that this condition is not closed under the pre-Calabi-Yau composition. We thus restrict this notion of good morphisms and give the following definition.
The relation between pre-Calabi-Yau morphisms and A ∞ -morphisms
Recall that given a Hom-finite graded quiver A, we have an equivalence between the data of a d-pre-Calabi-Yau structure on A and a cyclic A ∞ -structure on A ⊕ A * [d -1] that restricts to A. In this section, we study the relation between d-pre-Calabi-Yau morphisms and A ∞ -morphisms.
The case of strict morphisms
In this subsection, we study the relation between strict d-pre-Calabi-Yau morphisms and A ∞morphisms. We first recall the notion of strict d-pre-Calabi-Yau morphism.
(Φ lt(x 1 ),rt(x 2 ) ⊗• • •⊗Φ lt(x n ),rt(x 1 ) )•s d+1 M x1 ,...,x n A = s d+1 M x1 ,...,x n B •(Φ ⊗ lg(x 1 )-1 ⊗• • •⊗Φ ⊗ lg(x n )-1 ) for each n ∈ N * , (x 1 , . . . , xn ) ∈ Ōn A .
For simplicity, we will omit the elements when writing the map Φ. We will denote
Φ 0 (x) = (Φ 0 (x 1 ), . . . , Φ 0 (x n )) for x = (x 1 , . . . , x n ) ∈ O n A and Φ 0 (x) = (Φ 0 (x 1 ), . . . , Φ 0 (x n )) for x = (x 1 , . . . , xn ) ∈ Ōn A . Definition
lt(x 1 ) A rt(x n ) ⊕ Φ0(lt(x 1 )) B * Φ0(rt(x n )) [d -1] → lt(x 1 ) A rt(x n )
is given by the map
m x1 ,...,x n A⊕B * →A : n-1 i=1 (A[1] ⊗x i ⊗ Φ0(rt(x i+1 )) B * [d] Φ0(lt(x i )) ) ⊗ A[1] ⊗x n → lt(x 1 ) A rt(x n ) defined by m x1 ,...,x n A⊕B * →A = m x1 ,...,x n A⊕A * →A • (id ⊗ lg(x 1 )-1 ⊗Φ * ⊗ id ⊗ lg(x 2 )-1 ⊗ • • • ⊗ id ⊗ lg(x n-1 )-1 ⊗Φ * ⊗ id ⊗ lg(x n )-1 )
and the composition of m x1 ,...,x n A⊕B * with the canonical projection
lt(x 1 ) A rt(x n ) ⊕ Φ0(lt(x 1 )) B * Φ0(rt(x n )) [d -1] → Φ0(lt(x 1 )) B * Φ0(rt(x n )) [d -1]
is given by the map m x1 ,...,x n A⊕B * →B * :
n-1 i=1 (A[1] ⊗x i ⊗ Φ0(rt(x i+1 )) B * [d] Φ0(lt(x i )) ) ⊗ A[1] ⊗x n → Φ0(lt(x 1 )) B * Φ0(rt(x n )) [d -1] defined by m x1 ,...,x n A⊕B * →B * = m Φ0(x 1 ),...,Φ0(x n ) B⊕B * →B * • (Φ ⊗ lg(x 1 )-1 ⊗ id ⊗Φ ⊗ lg(x 2 )-1 ⊗ • • • ⊗ id ⊗Φ ⊗ lg(x n )-1 ) Proposition 6.4. The element sm A⊕B * ∈ C(A ⊕ B * [d -1])[1] defines an A ∞ -structure on A ⊕ B * [d -1]
that is almost cyclic with respect to the Φ-mixed bilinear form Γ Φ , defined in Example 3.17.
Proof. Using the Proposition 5.9, we have that the equality sm
A⊕B * • G sm A⊕B * = 0 is tantamount to s d+1 M A⊕B * • Φ nec s d+1 M A⊕B * = 0 where s d+1 M A⊕B * ∈ B • (A[1], B[-d])[d + 1] is uniquely deter- mined by M x A⊕B * = (Φ ⊗(n-1) ⊗ id) • s d+1 M x A ∈ B A x (A[1], B[-d]) and M Φ0(x) A⊕B * = s d+1 M Φ0(y) B •(Φ ⊗ lg(x 1 )-1 ⊗id ⊗Φ ⊗ lg(x n )-1 ⊗Φ ⊗ lg(x 2 )-1 ⊗• • •⊗Φ ⊗ lg(x n-1 )-1 ) ∈ B B x (A[1], B[-d])
for x = (x 1 , . . . , xn ) ∈ Ōn and ȳ = (x 1 ⊔ xn , x2 , . . . , xn-1 ). Moreover,
π A (s d+1 M A⊕B * • Φ nec s d+1 M A⊕B * ) x = E( D) + E( D ′ ) where π A is the canonical projection B • (A[1], B[-d])[d + 1] → B A • (A[1], B[-d])
and where the sums are over all the filled diagrams D and D ′ of type x that are of the form
M A M A Φ Φ Φ Φ Φ and M A Φ Φ Φ Φ M B
respectively. Now, note that the second diagram can be cut into two as follows.
M A Φ Φ Φ Φ M B
Using the morphism identity satisfied by Φ, the diagram on the right can be replaced by one with a disc filled with M A whose outgoing arrows are connected with the unique incoming arrow of discs of size 1 filled with Φ. We thus obtain that
π A (s d+1 M A⊕B * • G s d+1 M A⊕B * ) x = E( D) + E( D ′ )
where the sums are over all the filled diagrams D and D ′ of type x that are of the form
M A M A Φ Φ Φ Φ Φ and M A M A Φ Φ Φ Φ Φ respectively. Moreover, E( D) + E( D ′ ) = 0 since s d+1 M A is a d-pre-Calabi-Yau structure.
Thus, if we show that this structure satisfies the cyclicity condition (3.1), sm A⊕B * satisfies the Stasheff identities (SI).
Using the definition of Γ Φ and sm B⊕B * and since sm B⊕B * is cyclic with respect to Γ B , we have that
Γ Φ (sm x1 ,...,x n A⊕B * →B * ( sa 1 , tf 1 , . . . , sa n-1 , tf n-1 , sa n ), sb) = Γ B (sm Φ0(x 1 ),...,Φ0(x n ) B⊕B * →B * (Φ ⊗ lg(x 1 )-1 ( sa 1 ), tf 1 , . . . , tf n-1 , Φ ⊗ lg(x n )-1 ( sa n )), Φ(sb)) = (-1) ϵ Γ B (sm Φ0(x n )⊔Φ0(x 1 ),Φ0(x 2 ),...,Φ0(x n-1 ) B⊕B * →B Φ ⊗ lg(x n )-1 ( sa n ) ⊗ Φ(sb) ⊗ Φ ⊗ lg(x 1 )-1 ( sa 1 ), tf 1 , . . . , tf n-2 , Φ ⊗ lg(x n-1 )-1 ( sa n-1 ) , tf n-1 ) = (-1) ϵ+δ Γ B ((f n-2 • s d ) ⊗ • • • ⊗ (f 1 • s d ) ⊗ id) (M Φ0(x n-1 ),...Φ0(x n )⊔Φ0(x 1 ) B (Φ ⊗ lg(x n-1 ) ( sa n-1 ), . . . , Φ ⊗ lg(x n ) ( sa n ) ⊗ Φ(sb) ⊗ Φ ⊗ lg(x 1 ) ( sa 1 )), tf n-1 (6.1) for s a i ∈ A[1] ⊗x i , tf i ∈ Φ0(rt(x i+1 )) B * [d] Φ0(lt(x i )) , with ϵ = (| sa n | + |sb|)( n-1 i=1 (| sa i | + |tf i |) and δ = (| sa 1 | + |sb| + | sa n |)( n-1 i=1 | sa i | + n-2 i=1 |tf i |) + 2≤i≤j≤n-2 | sa i ||tf j | + dn + 2≤i<j≤n-1 | sa i || sa j | + 1≤i<j≤n-2 |tf i ||tf j |
Moreover, using that Φ is a d-pre-Calabi-Yau morphism, we have that the last member of (6.1) is
(-1) ϵ+δ Γ A ( n-1 i=2 (f n-i • Φ) ⊗ id) • s d+1 M xn-1 ,...,x n ⊔x 1 A ( sa n-1 , . . . , sa 2 , sa n ⊗ sb ⊗ sa 1 ), tf n-1 • Φ = (-1) ϵ Γ A sm xn ⊔x 1 ,x 2 ,...,x n-1 A⊕A * →A ( sa n ⊗ sb ⊗ sa 1 , tf 1 • Φ, sa 2 , . . . , tf n-2 • Φ, sa n-1 ), tf n-1 • Φ (6.2)
where Φ denotes the morphism Φ[d -1]. Thus, comparing (6.1) and (6.2) we get that Γ B (sm
Φ0(x n )⊔Φ0(x 1 ),Φ0(x 2 ),...,Φ0(x n-1 ) B⊕B * →B Φ ⊗ lg(x n )-1 ( sa n ) ⊗ Φ(sb) ⊗ Φ ⊗ lg(x 1 )-1 ( sa 1 ), tf 1 , . . . , tf n-2 , Φ ⊗ lg(x n-1 )-1 ( sa n-1 ) , tf n-1 ) = Γ A (sm xn ⊔x 1 ,x 2 ,...,x n-1 A⊕A * →A ( sa n ⊗ sb ⊗ sa 1 , tf 1 • Φ, sa 2 , . . . , tf n-2 • Φ, sa n-1 ), tf n-1 • Φ) (6.3)
Finally, we have that
(-1) ϵ Γ A (sm xn ⊔x 1 ,x 2 ,...,x n-1 A⊕A * →A ( sa n ⊗ sb ⊗ sa 1 , tf 1 • Φ, sa 2 , . . . , tf n-2 • Φ, sa n-1 ), tf n-1 • Φ) = (-1) ϵ Γ Φ (sm xn ⊔x 1 ,x 2 ,...,x n-1 A⊕B * →A ( sa n ⊗ sb ⊗ sa 1 , tf 1 , sa 2 , . . . , tf n-2 , sa n-1 ), tf n-1 ) Therefore, A ⊕ B * [d -1] together with sm A⊕B * is an A ∞ -
category that is almost cyclic with respect to Γ_Φ.
Let (A, s^{d+1}M_A), (B, s^{d+1}M_B) be d-pre-Calabi-Yau categories with respective sets of objects O_A and O_B, and consider a strict d-pre-Calabi-Yau morphism (Φ_0, Φ) : (A, s^{d+1}M_A) → (B, s^{d+1}M_B).
Definition 6.5. We define maps of graded vector spaces
Proof. We only check that φ_A is a morphism, since the case of φ_B is similar. We only have to verify that sm^{x̄_1,...,x̄_n}_{A⊕B*→A}(sa_1, tf_1, sa_2, . . . , sa_{n-1}, tf_{n-1}, sa_n) = sm^{x̄_1,...,x̄_n}_{A⊕A*→A}(sa_1, φ_A(tf_1), sa_2, . . . , sa_{n-1}, φ_A(tf_{n-1}), sa_n) for sa_i ∈ A[1]^{⊗x̄_i} and tf_i ∈ _{Φ_0(rt(x̄_{i+1}))}B*[d]_{Φ_0(lt(x̄_i))}, so that (6.4) holds.
φ x,y A : x A[1] y ⊕ Φ0(x) B * [d] Φ0(y) → x (A[1] ⊕ A * [d]) y and φ x,y B : x A[1] y ⊕ Φ0(x) B * [d] Φ0(y) → Φ0(x) (B[1] ⊕ B * [d]) Φ0(y) given by φ x,y A (sa) = sa, φ x,y B (sa) = Φ x,y (sa), φ x,y A (tf ) = tf • Φ y,x [d] and φ x,y B (tf ) = tf for x, y ∈ O A , sa ∈ x A[1] y and tf ∈ Φ0(x) B * [d] Φ0(y) .
Moreover, we also have φ x,y A • sm x1 ,...,x n A⊕B * →B * ( sa 1 , tf 1 , sa 2 , . . . , sa n-1 , tf n-1 , sa n ) = sm x1 ,...,x n A⊕B * →B * ( sa 1 , tf 1 , sa 2 , . . . , sa n-1 , tf n-1 , sa n ) • Φ x,y and given sb ∈ x A y , we have that sm x1 ,...,x n A⊕B * →B * ( sa 1 , tf 1 , sa 2 , . . . , sa n-1 , tf n-1 , sa n )(Φ x,y (sb)) = Γ Φ (sm x1 ,...,x n A⊕B * →B * ( sa 1 , tf 1 , sa 2 , . . . , sa n-1 , tf n-1 , sa n ), sb) On the other hand, we have ( sa 1 ) ⊗ φ A (tf 1 ) ⊗ φ
⊗ lg(x 2 )-1 A ( sa 2 ) • • • ⊗ φ ⊗ lg(x n )-1 A ( sa n )), sb)
Using the identity (6.3), we thus get (6.5).
It remains to show that the morphisms are cyclic. To prove it, we note that y Γ A x (φ y,x A (tf ), φ x,y A (sa)) = y Γ A x (t(f • Φ x,y ), sa) = f (Φ x,y (sa)) = y Γ Φ x (tf, sa)
as well as the analogous identity for Γ_B and φ_B.
Definition 6.9. A partial category is an A∞-pre-category as defined in [START_REF] Kontsevich | Homological mirror symmetry and torus fibrations, Symplectic geometry and mirror symmetry[END_REF] where the multiplications m_n vanish for n > 2.
We now summarize the results of Propositions 6.4 and 6.6.
The minus sign in the identity (6.7) comes from the fact that the discs filled with M_A change their place, in the sense that the order of the labeling of their first outgoing arrow changes. Since s^{d+1}M_A is of degree 1, this creates a minus sign. Moreover, E(D̃′) = E(D̃_2), so that it remains to show that E(D̃) + E(D̃_1) = 0. This is the case since s^{d+1}M_A is a pre-Calabi-Yau structure. Indeed, the sum of these evaluations of diagrams is the composition of
s d+1 M A • nec s d+1 M A
Figure 3.2: The representation of a homogeneous element in C(A).
Definition 3.2. The type of a disc representing a map F_x̄ : A[1]^{⊗x̄} → _{lt(x̄)}A_{rt(x̄)} is the tuple x̄ ∈ Ō̄.
Figure 3.3: The representation of a homogeneous element in C(A)[1].
Figure 4.1: A disc of size 3.
Figure 4.2: A decorated disc of type x̄ = (x̄_1, x̄_2, x̄_3), where x̄_1 = (x^1_1, x^1_2, x^1_3), x̄_2 = (x^2_1, x^2_2, x^2_3, x^2_4) and x̄_3 = (x^3_1, x^3_2, x^3_3).
Definition 4.5. Given x̄ ∈ Ō̄, we associate to a map F_x̄ ∈ Multi_x̄(Bar(A)[d]) the unique decorated diagram of type x̄.
Given a map F_x̄ ∈ Multi_x̄(Bar(A)[d]) and (a, b) as before, we associate the marked diagram which is decorated as the one of F_x̄, where the bold arrow is the b-th incoming arrow if a = i and the b-th outgoing arrow if a = o, where the incoming arrows are numbered in clockwise order, the first being the incoming arrow following the last outgoing one. Given a disc D of type x̄, we will denote by E(D, s^{d+1}F_x̄) the map (4.1) if a = i and (4.2) if a = o.
Figure 4.4: A disc representing a map of the form (4.2).
Figure 4.6: A diagram.
Definition 4.11. The type of an admissible diagram (D = {D_1, . . . , D_n}, R) is the tuple x̄ = (x̄_1, . . . , x̄_m) ∈ Ō̄ where m = |D| and x̄_i ∈ Ō is the tuple composed of the objects of (D, R), placed in counterclockwise order, that we can read between the outgoing arrows i - 1 and i of (D, R).
Definition 4.12. A source (resp. sink) of an admissible diagram is a disc which shares none of its incoming (resp. outgoing) arrows with another one.
Remark 4.13. An admissible diagram has at least one source and one sink.
Figure 4.7: A filled diagram.
where the sum is over all the filled diagrams D of type x of the form F F F M A and where we have omitted the bold arrow, meaning that it is any of the outgoing arrows. Definition 5.12. Given d-pre-Calabi-Yau categories (A, s d+1 M A ) and (B, s d+1 M B ) with respective sets of objects O A and O B and an element s d+1 F ∈ Multi • (A[1], B[-d])[d + 1] of degree 0, the pre composition of s d+1 F and s d+1 M B is the element
where the sum is over all the filled diagrams D ′ of type x of the form M B F F F and where we have omitted the bold arrow, meaning that it is any of the outgoing arrows. Definition 5.13. Given d-pre-Calabi-Yau categories (A, s d+1 M A ) and (B, s d+1 M B ) with respective sets of objects O A and O B a d-pre-Calabi-Yau morphism (F 0 , F) : (A,
5 16. Given d-pre-Calabi-Yau categories (A, s d+1 M A ) and (B, s d+1 M B ) with respective sets of objects O A and O A and a d-pre-Calabi-Yau morphism F = (F 0 , F) : (A, s d+1 M A ) → (B, s d+1 M B ), we say that F is good if E( D) = E( D ′ ) where the sums are over all the filled diagrams D and D ′ of type x of the form
Definition 5 .
5 17. Given d-pre-Calabi-Yau categories (A, s d+1 M A ) and (B, s d+1 M B ) with respective sets of objects O A and O A and a d-pre-Calabi-Yau morphism F = (F 0 , F) : (A, s d+1 M A ) → (B, s d+1 M B ), we say that F is nice if E( D) = E( D ′ ) where the sums are over all the filled diagrams D and D ′ of type x of the form
Definition 6 . 1 .
61 Let (A, s d+1 M A ), (B, s d+1 M B ) be d-pre-Calabi-Yau categories with respective sets of objects O A and O B . A d-pre-Calabi-Yau morphism (Φ 0 , Φ) : (A, s d+1 M A ) → (B, s d+1 M B ) is strict if Φ x vanishes for each x ∈ Ōn A with n > 1 and lg(x 1 ) ̸ = 2 if n = 1. Equivalently, a strict d-pre-Calabi-Yau morphism between d-pre-Calabi-Yau categories (A, s d+1 M A ) and (B, s d+1 M B ) is the data of a map between their sets of objects Φ 0 : O A → O B together with a collection Φ = (Φ x,y : x A y [1] → x B y [1]) x,y∈O A of maps of degree 0 that satisfies
6 . 2 . 1 AΦ0Definition 6 . 3 .
62163 We denote by SpCY d the subcategory of pCY d whose objects are d-pre-Calabi-Yau categories and whose morphisms are strict d-pre-Calabi-Yau morphisms. Given d-pre-Calabi-Yau categories (A, s d+1 M A ) and (B, s d+1 M B ) with respective sets of objects O A and O B and a strict d-pre-Calabi-Yau morphism (Φ 0 , Φ) : (A, s d+1 M A ) → (B, s d+1 M B ), we now construct an A ∞ -structure on A ⊕ B * [d -1]. For x ∈ ŌA , we denote by sm x A⊕A * →A the composition of jx-1(s d+1 Mx-) and the canonical projection on A[START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF]. Similarly, we will denote by sm Φ0(x) B⊕B * →B * the composition of j canonical projection on B * [d]. We define sm A⊕B * ∈ C(A⊕B * [d-1])[1] as the unique element such that the composition of m x1 ,...,x n A⊕B * with the canonical projection
Proposition 6 . 6 .
66 Let (A, s d+1 M A ), (B, s d+1 M B ) be d-pre-Calabi-Yau categories with respective sets of objects O A and O B . Consider a strict d-pre-Calabi-Yau morphism (Φ 0 , Φ) : (A, s d+1 M A ) → (B, s d+1 M B ) and the A ∞ -category (A⊕B * [d-1], sm A⊕B * ) where sm A⊕B * ∈ C(A⊕B * [d-1])[1] is given in Definition 6.3. The maps φ A and φ B defined in Definition 6.5 are cyclic A ∞ -morphisms, in the sense of Definition 3.22.
φ x,y A • sm x1 ,...,x n A⊕B * →A = sm x1 ,...,x n A⊕A * →A • (φ ⊗ lg(x 1 )-1 A ⊗ φ A ⊗ φ ⊗ lg(x 2 )-1 A • • • ⊗ φ ⊗ lg(x n )-1 A ) (6.4) and φ x,y A • sm x1 ,...,x n A⊕B * →B * = sm x1 ,...,x n A⊕A * →A * • (φ ⊗ lg(x 1 )-1 A ⊗ φ A ⊗ φ ⊗ lg(x 2 )-1 A • • • ⊗ φ ⊗ lg(x n )-1 A ) (6.5) for x, y ∈ O A , (x 1 , . . . , xn ) ∈ Ōn A such that lt(x n ) = x, rt(x 1 ) = y. First,note that φ x,y A (sm x1 ,...,x n A⊕B * →A ( sa 1 , tf 1 , sa 2 , . . . , sa n-1 , tf n-1 , sa n )) = sm x1 ,...,x n
sm x1 ,...,x n A⊕A * →A * (φ ⊗ lg(x 1 )-1 A ( sa 1 ) ⊗ φ A (tf 1 ) ⊗ φ ⊗ lg(x 2 )-1 A ( sa 2 ) • • • ⊗ φ ⊗ lg(x n )-1 A ( sa n ))(sb) = Γ A (sm x1 ,...,x n A⊕A * →A * (φ ⊗ lg(x 1 )-1 A
2 . 6 . 7 .
267 y Γ Bx (φ y,x B (tf ), φ x,y B (sa)) = y Γ B x (tf, Φ x,y (sa)) = f (Φ x,y (sa)) = y Γ Φ x (tf, sa)for sa ∈ x A[1] y , tf ∈ Φ0(y) B * [d] Φ0(x). The second condition to be a cyclic morphism is obviously satisfied since φ x A and φ x B vanish forx ∈ O n with n > Definition Let (A ⊕ A * [d -1], sm A⊕A * ), (B ⊕ B * [d -1], sm B⊕B * ) be A ∞ -categories. A hat morphism from A ⊕ A * [d -1] to B ⊕ B * [d -1] is a triple (sm A⊕B * , φ A , φ B ) where sm A⊕B * is an A ∞ -structure on A ⊕ B * [d -1] and φ A , φ B are A ∞ -morphisms A[1] ⊕ B * [d] A[1] ⊕ A * [d] B[1] ⊕ B * [d]
Definition 6 . 8 .
68 Let (A⊕A * [d-1], sm A⊕A * ), (B⊕B * [d-1], sm B⊕B * ) and (C⊕C * [d-1], sm C⊕C * ) be A ∞categories. Two hat morphisms (sm A⊕B * , φ A , φ B ) : A ⊕ A * [d -1] → B ⊕ B * [d -1], (sm B⊕C * , ψ B , ψ C ) : B ⊕ B * [d -1] → C ⊕ C * [d -1] are composable if there exist a triple (sm A⊕C * , χ A , χ C ) where χ A : A ⊕ C * [d -1] → A ⊕ B * [d -1] and χ C : A ⊕ C * [d -1] → B ⊕ C * [d -1] are A ∞ -morphisms and such that φ B • χ A = ψ B • χ C . The composition of (sm A⊕B * , φ A , φ B ) and (sm B⊕C * , ψ B , ψ C ) is then given by (sm A⊕C * , φ A • χ A , ψ C • χ C ).
Definition 6 . 10 .
610 The A ∞ -hat category is the partial category A ∞d whose objects are A ∞ -categories of the form A ⊕ A * [d -1] and whose morphisms are hat morphisms. Definition 6.11. A functor between partial categories A and B with respective sets of objects O A and O B is the data of a map F 0 : O A → O B together with a family F = ( y F x ) x,y∈O A sending a morphism f : x → y to a morphism y F x (f ) : F 0 (x) → F 0 (y) such that if two morphisms f : x → y and g : y → z are composable, then y F x (f ) and z F y (g) are composable and their composition is given by the morphismz F y (g) • y F x (f ) = z F x (g • f ).Definition 6.12. We define the partial subcategory cyc A ∞d of A ∞d whose objects are almost cyclic A ∞categories of the formA ⊕ A * [d -1] and whose morphisms A ⊕ A * [d -1] → B ⊕ B * [d -1] are the data of an almost cyclic A ∞ -structure on A ⊕ B * [d -1]together with a diagram of the form (6.6) where φ A and φ B are A ∞ -morphisms. Definition 6.13. We define the partial subcategory Scyc A ∞d of cyc A ∞d whose objects are the ones of cyc A ∞d and whose morphisms are strict cyclic morphisms of cyc A ∞d .
Proposition 6 . 16 .
616 The element sm A⊕B * ∈ C(A⊕B * [d-1])[START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF] defines an A ∞ -structure on A⊕B * [d-1]. Moreover, if the morphism Φ is good, sm A⊕B * satisfies the cyclicity condition (3.1).Proof. Using Proposition 5.9, it suffices to show thats d+1 M A⊕B * • Φ nec s d+1 M A⊕B * = 0. We have that π A (s d+1 M A⊕B * • Φ nec s d+1 M A⊕B * ) = 0 is tantamount to E( D) + E( D ′ ) + E( D ′′ ) = 0where the sums are over all the filled diagrams D, D ′ and D ′′ of type x of the form third diagram can be cut into two as follows Using that Φ is a pre-Calabi-Yau morphism, the left side can be changed into a diagram consisting of a disc filled with M A whose outgoing arrows are shared with discs filled with Φ. We thus get that E( D ′′ ) = E( D 1 ) -E( D 2 ) (6.7)where D 1 and D 2 are filled diagrams of the form
with a tensor product composed of maps of the collection Φ and of the identity map id in the last tensor factor. Therefore, the element sm A⊕B * satisfies the Stasheff identities (SI).It is clear that if the morphism Φ is good, then theA ∞ -structure on A ⊕ B * [d -1] is almost cyclic with respect to Γ 1 . Indeed, Γ Φ 1 • (sm A⊕B * →A ⊗ id A * ) = E( D)where the sum is over all the filled diagrams E( D) of the formM A On the other hand, Γ Φ 1 • (sm A⊕B * →B * ⊗ id B ) = E( D ′ )where the sum is over all the filled diagrams E( D ′ ) of the form
graded quiver A with set of objects O is said to be Hom-finite if y A x is finite dimensional for every x, y ∈ O. Given graded quivers A and B with respective sets of objects O A and O B , a morphism of graded quivers (Φ 0 , Φ) : A → B is the data of a map Φ 0 : O A → O B between the sets of objects together with a collection Φ = ( y Φ x ) x,y∈O A of morphisms of graded vector spaces y Φ x : y A x → Φ0(y) B Φ0(x) for every x, y ∈ O A . In this paper we will only consider small graded quivers and small categories.
We will denote Ō =
n∈N Ō =
n∈N
*
O n and more generally, we will denote by Ō the set formed by all finite tuples of elements of Ō, i.e. * Ōn = n>0 (p1,...,pn)∈Tn
n . These are called the Stasheff identities and were first introduced in [8] by J. Stasheff. Example 3.13. If A is a dg category with differential d A and product
sm x≤i ⊔x ≥j A • (id ⊗x ≤i ⊗sm A x i,j ⊗ id ⊗x ≥j ) = 0 (SI)
1≤i<j≤n
for every n ∈ N µ, it carries a natural A ∞ -structure
sm A ∈ C(A)[1] with m 1 A = d A , m 2 A = µ and m n A = 0 for n ≥ 3.
* and x ∈ O
Definition 3.14. Given
a graded quiver A with set of objects O its graded dual quiver is the quiver A * whose set of objects is O and for x, y ∈ O, the space of morphisms from x to y is defined as y A * x = ( x A y ) * .
Definition 3.15. A bilinear form of degree d on
a graded quiver A is a collection Γ = ( y Γ x ) x,y∈O of homogeneous k-linear maps y Γ x : y
Definition 3.21. An A ∞ -morphism between
structure is almost cyclic with respect to a fixed homogeneous bilinear form. An almost cyclic A ∞ -category with respect to a nondegenerate bilinear form is called a cyclic A ∞ -category. A ∞ -categories (A, sm A ) and (B, sm B ) with respective sets of objects O A and O B is a map F 0 : O A → O B together with a collection F = (F x) x∈ ŌA , where
The following definition was first introduced by M. Sugawara in [9].
[START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF]sending s d+1 ϕ ∈ Bx(A [START_REF] Fernández | Cyclic A∞-algebras and double Poisson algebras[END_REF], B[-d])[d + 1] to sψx where ψx :
is given by ψx( sa n , tf n-1 , sa n-2 , tf n-2 , ..., sa 2 , tf 2 , sa 1 ) = (-1) ϵ n i=1
Proposition 5.9. Let A and B be graded quivers with respective sets of objects O A and O B and consider a map Φ :
Pre-Calabi-Yau morphisms (after M. Kontsevich, A. Takeda and Y. Vlassopoulos)
Following the article [START_REF] Kontsevich | Pre-Calabi-Yau algebras and topological quantum field theories[END_REF], we recall the definition of the category of d-pre-Calabi-Yau categories.
The action of σ
given by
for x ∈ Ō. We will denote by Multi
) that are invariant under the action of C lg(•) .
Corollary 6.14. There exists a functor SpCY
given in Definition 6.5.
General case
We now present the relation between not necessarily strict d-pre-Calabi-Yau morphisms and $A_\infty$-morphisms. Consider d-pre-Calabi-Yau categories $(\mathcal{A}, s_{d+1}M_{\mathcal{A}})$, $(\mathcal{B}, s_{d+1}M_{\mathcal{B}})$ as well as a d-pre-Calabi-Yau morphism $(\Phi_0, \Phi) : (\mathcal{A}, s_{d+1}M_{\mathcal{A}}) \to (\mathcal{B}, s_{d+1}M_{\mathcal{B}})$.
Definition 6.15. We define
where the sum is over all the filled diagrams D of type x and of the form
where the sum is over all the filled diagrams D ′ of type x and of the form
) [1]. We will denote by $m_{\mathcal{A}\oplus\mathcal{B}^*\to\mathcal{A}}$ (resp. $m_{\mathcal{A}\oplus\mathcal{B}^*\to\mathcal{B}^*}$) the composition of $m_{\mathcal{A}\oplus\mathcal{B}^*}$ with the canonical projection on $\mathcal{A}$ (resp. $\mathcal{B}^*$).
Lemma 6.17. Consider d-pre-Calabi-Yau categories (A, s d+1 M A ) and (B, s d+1 M B ) as well as a d-pre-Calabi-Yau morphism (Φ 0 , Φ) : (A, s d+1 M A ) → (B, s d+1 M B ). Then, Φ induces morphisms
and a morphism
defined by
( sa n , . . . , sa 1 ))
and for each n ∈ N * , x = (x 1 , . . . , xn ) ∈ Ōn A .
Definition 6.18. Consider d-pre-Calabi-Yau categories $(\mathcal{A}, s_{d+1}M_{\mathcal{A}})$ and $(\mathcal{B}, s_{d+1}M_{\mathcal{B}})$ as well as a d-pre-Calabi-Yau morphism $(\Phi_0, \Phi) : (\mathcal{A}, s_{d+1}M_{\mathcal{A}}) \to (\mathcal{B}, s_{d+1}M_{\mathcal{B}})$. We define maps of graded vector spaces $\varphi_{\mathcal{A}}$ and $\varphi_{\mathcal{B}}$
Proposition 6.19. The maps φ A and φ B are morphisms of A ∞ -categories.
Proof. The part of the identity (MI) for φ A that takes place in
is clearly satisfied. Moreover, by definition of the A ∞ -structure sm A⊕B * , the part of the identity (MI) that takes place in
where the sums are over all the filled diagrams D 1 , D 2 and D 3 of type x of the form
and respectively. Using that Φ is a pre-Calabi-Yau morphism, we thus have that φ A is an A ∞ -morphism. The case of φ B is similar.
We now summarize the results of Propositions 6.16 and 6.19.
04105636 | en | [ "stat.ml" ] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04105636/file/CGS-FreeAdaGrad.pdf | Evgenii Chzhen
email: [email protected]
Christophe Giraud
email: [email protected]
Gilles Stoltz
email: [email protected]
Parameter-free projected gradient descent
We consider the problem of minimizing a convex function over a closed convex set, with Projected Gradient Descent (PGD). We propose a fully parameter-free version of AdaGrad, which is adaptive to the distance between the initialization and the optimum, and to the sum of the square norm of the subgradients. Our algorithm is able to handle projection steps, does not involve restarts, reweighing along the trajectory or additional gradient evaluations compared to the classical PGD. It also fulfills optimal rates of convergence for cumulative regret up to logarithmic factors. We provide an extension of our approach to stochastic optimization and conduct numerical experiments supporting the developed theory.
Introduction
In this work we study the problem of minimizing a convex function f over a closed, possibly unbounded, convex set Θ ⊆ R d . Our main goal is to provide a variant of AdaGrad [START_REF] Streeter | Less regret via online conditioning[END_REF][START_REF] Duchi | Adaptive subgradient methods for online learning and stochastic optimization[END_REF] which is adaptive to the distance x 1x * between the initialization x 1 ∈ Θ and a minimizer x * ∈ Θ, which is assumed to exist. More precisely, we provide a Projected Gradient Descent (PGD) algorithm of the form
x_{t+1} = Proj_Θ(x_t − η_t g_t)   with   η_t = γ_0 2^{k_t} / √(H(∑_{s ≤ t} ‖g_s‖²)),
where g_t ∈ ∂f(x_t) is a sub-gradient of f at x_t, Proj_Θ(·) is the Euclidean projection operator onto the closed convex Θ, H(x) = (x + 1) log(e(1 + x)), and k_t is a sequence automatically tuned by Algorithm 1. Unlike recent works on the subject [START_REF] Defazio | Learning-rate-free learning by D-adaptation[END_REF][START_REF] Carmon | Making sgd parameter-free[END_REF], we provide bounds on the cumulative regret of the form
R_T := ∑_{t=1}^{T} (f(x_t) − f(x^*)),
where x^* is any minimizer of f over Θ. Using standard online-to-batch conversion, we also have by convexity f(x̄_T) − f(x^*) ≤ R_T / T, for x̄_T being the average of x_1, …, x_T.
In the classical case where f is assumed to be L-Lipschitz, it is well known that setting η_t = ‖x_1 − x^*‖/(L√T) gives the optimal rate of convergence [N+18]:
R_T ≤ ‖x_1 − x^*‖ L √T.
However, such a choice requires f to be Lipschitz, and the knowledge of three quantities: 1) the distance to the optimum ‖x_1 − x^*‖; 2) the Lipschitz constant L; 3) the optimization horizon T. Should the distance ‖x_1 − x^*‖ be known, one could set η_t = ‖x_1 − x^*‖/√(∑_{s=1}^{t} ‖g_s‖²), resulting in the ADAGRAD algorithm [START_REF] Streeter | Less regret via online conditioning[END_REF][START_REF] Duchi | Adaptive subgradient methods for online learning and stochastic optimization[END_REF].
Algorithm 1: FREE ADAGRAD
Input: x_1 ∈ R^d, Θ ⊂ R^d, γ_0 > 0. Initialization: Γ²_1 = 0, k_0 = 1, S_0 = 0, γ_k = γ_0 2^k for k ≥ 1; for t ≥ 1 do
For this choice of η_t, without the Lipschitz assumption, we have the upper bound on the regret
R_T ≤ c ‖x_1 − x^*‖ √(∑_{t=1}^{T} ‖g_t‖²).   (1)
In practice the distance ‖x_1 − x^*‖ is unknown. When an upper bound D^* on ‖x_1 − x^*‖ is available, typically the diameter of Θ when Θ is bounded, ‖x_1 − x^*‖ can be replaced by D^* in η_t. The ADAGRAD algorithm then fulfills (1) with ‖x_1 − x^*‖ replaced by D^*. Yet this bound can be very sub-optimal when ‖x_1 − x^*‖ is much smaller than D^*. Worse, when Θ is unbounded, no bound D^* on ‖x_1 − x^*‖ is available without additional information.
Our objective is to provide a variant of the ADAGRAD step-size tuning, not requiring f to be Lipschitz, nor any knowledge on ‖x_1 − x^*‖ or T, while still fulfilling the regret bound (1) up to a log factor. Our contribution can be placed alongside the ever expanding literature on parameter-free optimization algorithms [DM23, CH23, Cut19, MS12, MO14a, MK20, OP21, OT17, ZCP22, JC22, OP16], discussed below Theorem 1.
Main contributions. Let us describe our three main contributions:
1. we propose a simple tuning of PGD, which we call FREE ADAGRAD, with no line-search, no cold-restart, no gradient transformation, and no computation of extra gradients;
2. we handle any finite convex function f (no Lipschitz condition), over any possibly unbounded constraint set Θ;
3. we provide regret bounds like R_T = Õ((‖x_1 − x^*‖ + 1) √(1 + ∑_{t ≤ T} ‖g_t‖²)), where Õ hides log-factors, but no additional terms.
We also partially extend our results to the Stochastic Gradient Descent setting.
Notation. For any a, b ∈ R we denote by a ∨ b (resp. a ∧ b) the largest (resp. the smallest) of the two. We denote by ‖·‖ the Euclidean norm and by ⟨·, ·⟩ the standard inner product in R^d. For Θ ⊂ R^d, we denote by Proj_Θ(·) the Euclidean projection operator onto Θ. We denote by log_2(·) and log(·) the base 2 and the natural logarithms respectively. The base of the natural logarithm is denoted by e.
Main result
We make the following assumption, which is necessary for the meaningful treatment of the problem. Let us highlight that we do not assume that the subgradients g_t are uniformly bounded, that is, we do not require f to be globally Lipschitz. This stands in contrast with the literature on online convex optimization (OCO). Indeed, OCO lower bounds imply that without any prior knowledge a regret O(‖x_1 − x^*‖ (∑_{t ≤ T} ‖g_t‖²)^{1/2}) is not achievable [CB17, Cut19, MK20] and higher order terms either in T or in ‖x_1 − x^*‖ are necessary. For example, we are able to handle Θ = [0, +∞)^d and f(x) = ∑_{i=1}^{n} exp(⟨x, a_i⟩/σ_i) for some a_i ∈ R^d and σ_i > 0. The proposed method, that we call FREE ADAGRAD, is summarized in Algorithm 1. It consists of simple projected gradient steps at every round t ≥ 1, but with an additional cheap condition on Line 7 that is checked at every iteration. If the condition on this line is satisfied for k = k_{t−1}, then the algorithm makes (almost) the usual ADAGRAD step; otherwise, the step size is doubled and the condition is checked again. We underline that the sequence of integers {k_t : t ≥ 1} in Algorithm 1 is non-decreasing, and we prove in Eq. (12) that it is upper-bounded by 2(log_2(1 + ‖x_1 − x^*‖/γ_0) + 1). Thus, there are only a finite number of doublings in the sequence. The only input parameter of the algorithm is γ_0, which can be seen as an initial lower-bound guess for ‖x_1 − x^*‖ and can be taken arbitrarily small, only incurring an additional log(1/γ_0) factor. Note that once the step-size is doubled at some time t ≥ 1, Algorithm 1 continues the optimization from x_t without cold-restart.
Theorem 1. Let Assumption 1 be satisfied. Let S_T = ∑_{t=1}^{T} ‖g_t‖². For any γ_0 > 0, let D_{γ_0} := ‖x_1 − x^*‖ ∨ γ_0. Algorithm 1 satisfies, for some universal c > 0,
R_T ≤ c D_{γ_0} √((S_{T+1} + 1) log(1 + S_{T+1})) log(1 + D_{γ_0}) log log(1 + S_T).
A large part of the literature on parameter-free optimization considers the context of online convex optimization with L-Lipschitz functions. A series of papers [MO14a, ZCP22, JC22, CO18] have produced algorithms, mainly based on coin betting, enjoying regret bounds O(D γ0 S T log(1 + D γ0 S T /γ 0 )), up to lower order terms. Such a regret bound is not achievable in online convex optimization when L is unknown as shown in [START_REF] Cutkosky | Online learning without prior information[END_REF]. Some papers [Cut19, MK20, JC22] consider yet the case where L is unknown, and provide regret bound including additional terms depending on L and on higher order of D γ0 .
If only the optimization error is of concern (and not the regret), bounds with log-factors replaced by log-log factors have been produced by [START_REF] Carmon | Making sgd parameter-free[END_REF][START_REF] Defazio | Learning-rate-free learning by D-adaptation[END_REF], breaking the barrier of online-to-batch conversion, but still requiring some knowledge about the Lipschitz constant L. Relying on binary search, [START_REF] Carmon | Making sgd parameter-free[END_REF] construct an adaptive algorithm for the problem of stochastic optimization, while [START_REF] Defazio | Learning-rate-free learning by D-adaptation[END_REF] provide an adaptive version of dual averaging and gradient descent algorithms, without allowing for projection step and requiring a careful weighted averaging along the trajectory to obtain the final solution. In contrast, we do not require Lipschitz condition, we handle the projection step and our bounds are valid for the usual average.
Closer to us, in a setting of online convex optimization with L-Lipschitz functions, [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF] propose a tuning of GD (without projection) which is based on a doubling trick with cold-restarts and which requires the knowledge of L. This algorithm is shown to be adaptive to ‖x_1 − x^*‖ at the price of losing a log factor log(D_{γ_0} T/γ_0) in the regret bound. In Appendix D, we show that the algorithm of [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF] can be seen as a specific instantiation of Algorithm 1, with the major difference that cold-restarts are performed when doubling the step-size.
Warm-up: simple analysis and intuition
Before proceeding to the analysis of the FREE ADAGRAD algorithm 1, we explain the main ideas behind our step-size scheme in the following simpler setup.
Simple warm-up setup
1) the norms of the subgradients are uniformly bounded by some known L, i.e. ‖g_t‖ ≤ L,
2) the optimization is unconstrained, i.e. Θ = R^d,
3) the time horizon T is fixed in advance.
In this case, we can replace (h_t)_{t ≥ 1} set on Line 4 of Algorithm 1 by the constant sequence h_t = L√T, and the choice γ = ‖x_1 − x^*‖ is known to achieve the optimal rate for the regret, ‖x_1 − x^*‖ L√T, see e.g. [N+18]. In this context, the overall strategy of Algorithm 1 is to start from a small value
[Figure 1 schematic: phases T_1 = 1 < T_2 < … < T_{k_T+1} = T, with step size η_t = 2^k γ_0/(L√T) during phase k; the step size is doubled each time the condition ‖x^+_{T_{k+1}−1}(k) − x_1‖ > B_{T_{k+1}}(k) is triggered.]
Figure 1: Schematic illustration of the algorithm in the simplest case.
γ_0 for γ, and then track ‖x_t − x_1‖ in order to detect if γ < ‖x_1 − x^*‖. If so, γ is doubled. The algorithm then increases the value of γ until reaching the level ‖x_t − x_1‖.
In order to keep the analysis simple in this warm-up section, we replace the threshold B_{t+1}(k) on Line 5 of Algorithm 1 by
B^{simple}_{t+1}(k) = 3γ_k   (recall that γ_k = γ_0 2^k),
which eventually leads to a slightly worse bound. The gradient step and the step-size choice are then simply
x^+_t(k) = x_t − (γ_k/(L√T)) g_t   and   k_t = min{k ≥ k_{t−1} : ‖x^+_t(k) − x_1‖ ≤ 3γ_k}.   (2)
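As an illustration, here is a minimal Python sketch of the warm-up rule (2); it assumes the simple setup above (known L and T, unconstrained Θ = R^d), and the function names are ours, not taken from the paper's code.

```python
import numpy as np

def free_gd_warmup(subgrad, x1, L, T, gamma0=1.0):
    """Warm-up doubling rule (2): unconstrained GD with step gamma_k / (L * sqrt(T)),
    where gamma_k = gamma0 * 2**k is doubled whenever ||x_t^+(k) - x1|| > 3 * gamma_k."""
    x, k = x1.copy(), 1
    xs = [x.copy()]
    for _ in range(T):
        g = subgrad(x)
        while True:
            gamma_k = gamma0 * 2.0**k
            x_plus = x - gamma_k / (L * np.sqrt(T)) * g        # probing step
            if np.linalg.norm(x_plus - x1) <= 3.0 * gamma_k:   # threshold B^simple = 3 gamma_k
                break
            k += 1                                             # double the step size, no restart
        x = x_plus
        xs.append(x.copy())
    return np.array(xs)
```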
Below, we sketch the main arguments, and we refer to Appendix A for all the details. The first ingredient is the text-book decomposition using the subgradient upper-bound: for any k ≥ 1,
0 ≤ f(x_t) − f(x^*) ≤ ⟨g_t, x_t − x^*⟩ = (γ_k/(2L√T)) ‖g_t‖² + (L√T/(2γ_k)) (‖x^* − x_t‖² − ‖x^+_t(k) − x^*‖²) ≤ γ_k L/(2√T) + (L√T/(2γ_k)) (‖x^* − x_t‖² − ‖x^+_t(k) − x^*‖²).   (3)
It follows from this bound a one-step deviation upper-bound
‖x^+_t(k) − x^*‖² ≤ ‖x_t − x^*‖² + γ_k²/T.
Summing this bound over t, we get a first important bound on the distance to optimum
‖x^+_t(k) − x^*‖² ≤ ‖x_1 − x^*‖² + ∑_{s=1}^{t−1} γ²_{k_s}/T + γ_k²/T ≤ ‖x_1 − x^*‖² + γ_k², for all k ≥ k_{t−1},   (4)
and then another important bound on the distance to initialization
‖x^+_t(k) − x_1‖ ≤ ‖x_1 − x^*‖ + ‖x^+_t(k) − x^*‖ ≤ 2‖x_1 − x^*‖ + γ_k, for all k ≥ k_{t−1},   (5)
where the last inequality follows from (4) and the sub-additivity of square-root.
Controlling the number of phases. The bound (5) plays a central role in our step-size tuning. Indeed, we observe that if ‖x^+_t(k_{t−1}) − x_1‖ > 3γ_{k_{t−1}}, then it means that γ_{k_{t−1}} < ‖x_1 − x^*‖, and our step-size tuning then increases k until the condition ‖x^+_t(k) − x_1‖ ≤ 3γ_k is met. In addition, we check below that the design of B^{simple}_{t+1}(k) ensures that we have k_t ≤ k^* for all t ≤ T, where k^* ≥ 1 is the integer defined by γ_{k^*−1} ≤ D_{γ_0} := ‖x^* − x_1‖ ∨ γ_0 < γ_{k^*}, and fulfilling
k^* ≤ 1 + log_2((‖x^* − x_1‖/γ_0) ∨ 1) = log_2(2D_{γ_0}/γ_0),   and   γ_{k^*} ≤ 2D_{γ_0}.   (6)
Indeed, if k_{t−1} ≤ k^*, then (5) ensures that
‖x^+_t(k^*) − x_1‖ ≤ 2γ_{k^*} + γ_{k^*} = B^{simple}_{t+1}(k^*), so k_t ≤ k^*,
R_T = ∑_{k=1}^{k_T} ∑_{t=T_k}^{T_{k+1}−1} (f(x_t) − f(x^*)) ≤ ∑_{k=1}^{k^*} [ (γ_k L/(2√T)) (T_{k+1} − T_k) + (L√T/(2γ_k)) (‖x_{T_k} − x^*‖² − ‖x_{T_{k+1}} − x^*‖²) ] ≤ (L√T/2) [ γ_{k^*} + ∑_{k=1}^{k^*} (1/γ_k) (‖x_{T_k} − x^*‖² − ‖x_{T_{k+1}} − x^*‖²) ].   (7)
From the step-size rule, we have that ‖x_{T_{k+1}} − x_1‖ ≤ B_{T_{k+1}}(k_{T_{k+1}−1}) = 3γ_k, and from (4) we have
‖x_{T_k} − x^*‖² ≤ ‖x_1 − x^*‖² + γ²_{k−1},
so we can upper bound the last term in the right-hand side of (7):
‖x_{T_k} − x^*‖² − ‖x_{T_{k+1}} − x^*‖² ≤ ‖x_1 − x^*‖² + γ²_{k−1} − [‖x_1 − x^*‖ − ‖x_{T_{k+1}} − x_1‖]²_+ ≤ γ²_{k−1} + ‖x_1 − x^*‖² − [‖x_1 − x^*‖ − 3γ_k]²_+ ≤ (1/4)γ_k² + 6γ_k ‖x_1 − x^*‖,   (8)
where the last inequality follows from the basic inequality
Δ² − [Δ − B]²_+ ≤ 2ΔB, for all Δ, B ≥ 0. Substituting (8) in (7) and using the bound (6), we end with the upper-bound
R_T ≤ (L√T/2) [ γ_{k^*} + γ_{k^*+1}/4 + 6k^* ‖x_1 − x^*‖ ] ≤ L√T ( 3‖x_1 − x^*‖ log_2(2D_{γ_0}/γ_0) + 2D_{γ_0} ).   (9)
The bound (9) for Algorithm 1 then matches the optimal rate ‖x_1 − x^*‖ L√T obtained with the oracle step size η = ‖x_1 − x^*‖/(L√T), up to a factor log_2(D_{γ_0}/γ_0).
It turns out that, in this L-Lipschitz setting, it is possible to adapt to ‖x_1 − x^*‖ with a bound O(D_{γ_0} L √(T log_2(D_{γ_0}/γ_0))) on the regret, by, for example, using coin betting [START_REF] Mcmahan | Unconstrained online linear learning in hilbert spaces: Minimax algorithms and normal approximations[END_REF][START_REF] Orabona | Coin betting and parameter-free online learning[END_REF].
We achieve such a tighter bound with PGD, with a better tuning of the threshold B_{t+1}(k), which is explained in the next section.
3.1 Improving log factor by better tuning of B t+1 (k)
The previous section gave the basic intuition, explaining why such a doubling strategy works. Yet, our choice of B_{t+1} on Line 5 of Algorithm 1 differs from B^{simple}_{t+1}(k) = 3γ_k. Remaining in the simple setup of the warm-up, let us explain two key ingredients, which eventually lead to our choice of B_{t+1} on Line 5 of Algorithm 1. The first ingredient is to track more tightly the upper-bound on ‖x^+_t(k) − x_1‖. Indeed, we can improve the bound (4) by keeping ‖x^+_t(k) − x^*‖² ≤ ‖x_1 − x^*‖² + Γ_t² + γ_k²/T instead of relying on the last bound in (4). Hence, we can replace (5) by
‖x^+_t(k) − x_1‖ ≤ 2‖x_1 − x^*‖ + √(Γ_t² + γ_k²/T),   (10)
in order to implicitly track the value ‖x_1 − x^*‖. This improved tracking alone is not enough in order to improve the log factor. Indeed, choosing
B_{t+1}(k) = 2γ_k + √(Γ_t² + γ_k²/T) still introduces a log(D_{γ_0}) term.
To improve the log factor, our second ingredient is to choose a slightly smaller threshold B_{t+1}, at the price of possibly moderately increasing the number k_T of doublings. In particular, setting
B_{t+1}(k) = (2γ_k)/√k + √(Γ_t² + γ_k²/T),   (11)
we get that k_T ≤ k^* + 0.5 log_2(k^*) + 1.25, and instead of log(D_{γ_0}) we have √(log(D_{γ_0})) in the regret bound (see Appendix A for details). Combining everything together, we get the bound
R_T ≤ 10 D_{γ_0} L√T √(2 log_2(2D_{γ_0}/γ_0)),
for the algorithm in (2) with 3γ_k replaced by B_{t+1}(k) in (11) (see Theorem 4 in Appendix A). While the above discussion was still assuming the simple setup of an L-Lipschitz function f, known L and T, we are able to generalize the above argument to nearly arbitrary convex f and unknown T.
4 Meta theorem: a general case of Algorithm 1
In this section, we provide a unified analysis of Algorithm 1, that is valid under the minimal Assumption 1, and for the choice of any positive non-decreasing sequence (h_t)_{t ≥ 1} on Line 4 of Algorithm 1. Our main result, stated in Theorem 1, is obtained as a consequence of this general result, and is made precise in Corollary 1.
Theorem 2. Let Assumption 1 be satisfied. For any γ_0 > 0 and any positive non-decreasing (h_t)_{t ≥ 1} on Line 4 of Algorithm 1, Algorithm 1 satisfies
∑_{t=1}^{T} (f(x_t) − f(x^*)) ≤ h_{T+1} [ 2‖x_1 − x^*‖ √(k_T) √(2 + (1/3) ∑_{t=1}^{T} ‖g_t‖²/h_t²) + γ_{k_T} √(∑_{t=1}^{T} ‖g_t‖²/h_t²) ].
It is interesting to observe that the term ∑_{t ≤ T} ‖g_t‖²/h_t, that often appears in the analysis of ADAGRAD, is absent in our bound. Instead, we have ∑_{t ≤ T} ‖g_t‖²/h_t², which behaves slightly worse and hence requires an additional correction of h_t (extra log factor) to ensure convergence. The proof of Theorem 2, which can be found in Appendix B, is based on the following general lemma.
Lemma 1. Let Assumption 1 be satisfied. Consider the following algorithm for t ≥ 1:
x_{t+1} = Proj_Θ(x_t − (γ/h_t) g_t),
where g_t ∈ ∂f(x_t), (h_t)_{t ≥ 1} is non-decreasing and positive, and x_1 ∈ R^d. For all T > 1 and all x_1 ∈ R^d, we have
∑_{t=1}^{T} (f(x_t) − f(x^*)) ≤ h_{T+1} [ (‖x_1 − x^*‖² − ‖x_{T+1} − x^*‖²)/(2γ) + (γ/2) ∑_{t=1}^{T} ‖g_t‖²/h_t² ].
The above lemma replaces the key inequality (3) that was available for one step of PGD in the simplest case. However, since the step-size in our case is time-varying, we rather need a variant of this inequality over the whole trajectory. While simple to prove, it seems that this result is novel and could be of independent interest.
Finally, to obtain the bound of Theorem 1, we only need to bound the number of phases k_T. Note that the intuition of the previous section still applies in this case; yet, the actual bound on k_T is more refined: it gives better constants and improves logarithmic factors.
Lemma 2. Let Assumption 1 be satisfied. For any γ_0 > 0 and any non-decreasing positive (h_t)_{t ≥ 1}, Algorithm 1 satisfies, for T ≥ 2,
k_T ≤ k^* + (1/2) log_2(k^*) + 5/4   and   γ_{k_T} ≤ (5/2) √(k^*) γ_0 2^{k^*},
where k^* is such that γ_0 2^{k^*−1} ≤ ‖x_1 − x^*‖ ∨ γ_0 ≤ γ_0 2^{k^*}. Furthermore, k_T = 1 if k^* = 1.
As a direct consequence of the above lemma, and recalling that D_{γ_0} = ‖x_1 − x^*‖ ∨ γ_0, we obtain
k_T ≤ 2 (log_2(D_{γ_0}/γ_0) + 1)   and   γ_{k_T} ≤ 5 D_{γ_0} √(log_2(D_{γ_0}/γ_0) + 1),   (12)
Proof of Lemma 2. Lemma 4 in Appendix, applied by phases, implies that for all t 1 and k 1 we have
x + t (k) -x * 2 x 1 -x * 2 + Γ 2 t + γ 2 k g t 2 h 2 t .
Thus, the triangle inequality, yields
x + t (k) -x 1 2 x 1 -x * + Γ 2 t + γ 2 k g t 2 h 2 t 2D γ0 + Γ 2 t + γ 2 k g t 2 h 2 t ,
where
D γ0 = x 1 -x * ∨ γ 0 . Let k be the smallest integer such that 2 k/ √ k 2 k * .
Then, for any k 1 and any t 1
x + t (k) -x 1 2γk √ k + Γ 2 t + γ 2 k g t 2 h 2 t .
In particular, the above implies that x + t ( k)x 1 B t+1 ( k) for all t 1. Thus, once k t reaches k on Line 7 of Algorithm 1, it never changes its value. That is, k T k. Lemma 12 in Appendix shows that k k * + 0.5 log 2 (k * ) + 1.25 and k = 1 if k * = 1, which concludes the proof.
4.1 Applications of Theorem 2: specific choices of (h_t)_{t ≥ 1}
Theorem 2 and Lemma 2 yield the main result of this work, the theorem announced in Section 2.
Corollary 1. Under the assumptions of Theorem 2, let H(x) = (x + 1) log(e(x + 1)). Setting h_t = √(H(S_t)) and D_{γ_0} = ‖x_1 − x^*‖ ∨ γ_0, Algorithm 1 satisfies
∑_{t=1}^{T} (f(x_t) − f(x^*)) ≤ D_{γ_0} √(H(S_{T+1})) log_2(2D_{γ_0}/γ_0) ( 6 √(log(log(e(1 + S_T)))) + 6.5 ).
While the above choice of (h t ) t 1 gives nearly optimal rates, it is not standard in the literature. Let us highlight the usefulness of Theorem 2 by providing some instantiations which correspond to other, more common, but less optimal, examples.
The standard ADAGRAD corresponds to h_t = √(S_t) [START_REF] Streeter | Less regret via online conditioning[END_REF]. The main inconvenience of this choice is that the term ∑_{t ≤ T} ‖g_t‖²/S_t is not bounded uniformly by a non-decreasing function of S_T. Indeed, assume that ‖g_t‖² = 1/T for all t = 1, …, T; then S_t = t/T ≤ 1 and ∑_{t ≤ T} ‖g_t‖²/S_t ≈ log(T).
It is possible, however, to write ∑_{t ≤ T} ‖g_t‖²/S_t ≤ 1 + log(S_T/‖g_1‖²), which involves an additional dependency on the gradient at initialization. All in all, we can state the following corollary.
Corollary 2. Under the assumptions of Theorem 2, setting h_t = √(S_t) and D_{γ_0} = ‖x_1 − x^*‖ ∨ γ_0, Algorithm 1 satisfies
∑_{t=1}^{T} (f(x_t) − f(x^*)) ≤ D_{γ_0} √(S_{T+1}) log_2(2D_{γ_0}/γ_0) ( 6 √(log(e S_T/‖g_1‖²)) + 6.5 ).
An attractive feature of this bound is its scale-invariance: multiplying f by some constant multiplies the bound by the same constant.
The dependency on the initial gradient can be avoided by setting h_t = √(ε + S_t) with an arbitrary ε > 0, as is usually done in practice with ADAGRAD, and as initially proposed in [START_REF] Duchi | Adaptive subgradient methods for online learning and stochastic optimization[END_REF].
Corollary 3. Under the assumptions of Theorem 2, let h_t = √(ε + S_t) for some ε > 0. Setting D_{γ_0} = ‖x_1 − x^*‖ ∨ γ_0, Algorithm 1 satisfies
∑_{t=1}^{T} (f(x_t) − f(x^*)) ≤ D_{γ_0} √(S_{T+1} + ε) log_2(2D_{γ_0}/γ_0) ( 6 √(log(1 + S_T/ε)) + 6.5 ).
Note that compared to Corollary 1, the above bound contains an additional log(1 + S T ) multiplicative factor, but it improves upon that of Corollary 2. Finally, we can also recover the results claimed in the end of Section 3, where f is assumed to be L-Lipschitz, see Appendix B.2 for details.
An extension to stochastic optimization
In this section we demonstrate that at least the warm-up analysis provided in Section 3 extends to the setup of stochastic optimization (see Algorithm 2), where the objective function takes the form
f (x) = E[F (x, ξ)] ,
and where we only have access to g t ∈ ∂F (x t , ξ t ), for some i.i.d. (ξ t ) t 1 . As in [START_REF] Carmon | Making sgd parameter-free[END_REF], we make the following standard assumption on the regularity of F (•, ξ).
Algorithm 2: Stochastic case
Input: x_1 ∈ R^d, Θ ⊂ R^d, γ_0 > 0, L > 0, T, δ > 0
Initialization: Γ²_1 = 0, k_0 = 1, S_0 = 0, γ_k = γ_0 2^k for k ≥ 1, ℓ_T(δ) := 1 ∨ log(log_2(2T)/δ)
for t = 1, …, T do
    g_t ∈ ∂F(x_t, ξ_t)    // get subgradient
    h_t = L √(T ℓ_T(δ/(1 + k_{t−1})²))    // update h_t
    x^+_t(k) = Proj_Θ(x_t − (γ_k/h_t) g_t)    // probing step
    k_t = min{k ≥ k_{t−1} : ‖x^+_t(k) − x_1‖ ≤ 38γ_k}    // find the step size
    x_{t+1} = x^+(k_t)    // make the step
end
Output: Trajectory (x_t)_{t=1}^{T}
Assumption 2. The mapping x → F(x, ξ) is L-Lipschitz almost surely.
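Relative to Algorithm 1, the main changes are the stochastic subgradients, the normalization h_t, and the acceptance threshold; the following small Python sketch spells out the latter two ingredients (the helper names and the spelling ell_T for ℓ_T are ours).

```python
import numpy as np

def ell_T(T, delta):
    # ell_T(delta) = 1 v log(log2(2T) / delta)
    return max(1.0, np.log(np.log2(2 * T) / delta))

def h_stochastic(L, T, delta, k_prev):
    # h_t = L * sqrt(T * ell_T(delta / (1 + k_{t-1})^2))
    return L * np.sqrt(T * ell_T(T, delta / (1 + k_prev) ** 2))

def threshold(gamma_k):
    # acceptance condition in Algorithm 2: ||x_t^+(k) - x_1|| <= 38 * gamma_k
    return 38.0 * gamma_k
```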
In some applications (e.g., linear contextual bandits), L is actually known and the control of the regret is necessary. For example, in linear Contextual Bandits with Knapsacks (lin-CBwK), having a PGD strategy for unbounded Θ while still controlling the regret is needed [see e.g., AD16]. Thus, our Algorithm 2 could bring new results in CBwK and related contexts.
We can state the following result concerning Algorithm 2.
Theorem 3. Let Assumptions 1 and 2 be satisfied. Define ℓ_T(δ) = 1 ∨ log(log_2(2T)/δ) and D_{γ_0} = ‖x_1 − x^*‖ ∨ γ_0. For any γ_0 > 0, Algorithm 2 satisfies, with probability at least 1 − δ,
∑_{t=1}^{T} (f(x_t) − f(x^*)) ≤ 3500 D_{γ_0} L √T (log_2(2D_{γ_0}/γ_0))^{1/2} √(ℓ_T(δ/log²_2(4D_{γ_0}/γ_0))).
The above bound is of order O(LD γ0 √ T log(D γ0 ) log log(T D γ0 )). Note that if only the optimization error is of concern, and one does not wish to control the regret, [START_REF] Carmon | Making sgd parameter-free[END_REF] provide a bound without log 2 (2D γ0 /γ 0 ) using bisection algorithm and several restarted runs of SGD.
Experiments
We have implemented our FREE ADAGRAD (with γ_0 = 1 throughout) algorithm and compared it to the ADAGRAD that requires the knowledge of ‖x_1 − x^*‖ and to the Oracle choice of step ‖x_1 − x^*‖/(L√T). We consider three functions f(x) = ‖x‖_p for p ∈ {1, 2} and f(x) = n^{-1} ∑_{i ≤ n} |⟨a_i, x⟩|, where the a_i ∈ R^d are generated i.i.d. from a standard multivariate Gaussian. The initialization point is picked the same for the three algorithms and is sampled from the uniform distribution on [−1, 1]^d. For our experiments, we set d = 625 and n = 1000. Note that in the first case, the considered function is Lipschitz with L = 1, and for the second one L ≤ (1/n)(‖a_1‖ + … + ‖a_n‖). A subgradient at x ∈ R^d in the second case is given by n^{-1} ∑_{i ≤ n} a_i sign(⟨a_i, x⟩), and since the a_i's are i.i.d. Gaussian, it is expected that ‖g‖ ≪ (1/n)(‖a_1‖ + … + ‖a_n‖); algorithms that are adaptive to the norm of the gradient should perform better in this case. For all three functions, a global minimizer is given by x^* = (0, …, 0). All the algorithms run for T = 10000 iterations. All the plots are reported on a loglog scale. The first results are reported in Figure 2. The second column displays the step-sizes used by the three algorithms. As a sanity check, we observe that the step size of ADAGRAD decreases over time and the step size of the ORACLE remains constant. One can also observe the characteristic jumps of the proposed FREE ADAGRAD method: the step size decreases within a fixed phase and is doubled from one phase to the other. On the first row of Figure 2 we display the regret; on initial stages our algorithm behaves similarly to the ORACLE one, while surpassing the performance of ADAGRAD on the later stages. The third row of Figure 2 displays the case of the averages. Note that in this case the ORACLE algorithm performs worse than the other two, since it takes the worst-case Lipschitz constant and does not adapt to the actual norms of the seen gradients.
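The third objective and its subgradient from the setup above can be written in a few lines of NumPy; this is only a sketch of the experimental setting, with our own variable names and random seed.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, T = 625, 1000, 10_000
A = rng.standard_normal((n, d))          # rows a_i ~ N(0, I_d)
x1 = rng.uniform(-1.0, 1.0, size=d)      # common initialization, minimizer x* = 0

def f(x):
    return np.abs(A @ x).mean()          # f(x) = n^{-1} sum_i |<a_i, x>|

def subgrad(x):
    return (A * np.sign(A @ x)[:, None]).mean(axis=0)   # n^{-1} sum_i a_i sign(<a_i, x>)

L_worst = np.linalg.norm(A, axis=1).mean()   # worst-case Lipschitz constant (1/n) sum_i ||a_i||
```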
[Figure 2 panels, step sizes η_t: f(x) = ‖x‖_1 with ‖x_1 − x^*‖ = 14.36; f(x) = ‖x‖_2 with ‖x_1 − x^*‖ = 14.36; f(x) = (1/n) ∑_{i=1}^{n} |⟨a_i, x⟩| with ‖x_1 − x^*‖ = 14.38.]
Discussion
We have introduced FREE ADAGRAD, a simple fully adaptive version of ADAGRAD that does not rely on any prior information about the objective function. Our bounds are optimal up to logarithmic factors and are applicable to non-globally Lipschitz functions. We have extended our approach to stochastic optimization in a Lipschitz context, at the cost of the knowledge of the Lipschitz constant and sub-optimal logarithmic factors. Numerical illustrations suggest that FREE ADAGRAD performs on par with or outperforms ADAGRAD with knowledge of ‖x_1 − x^*‖ and the ORACLE choice of step-size.
Limitations. Let us also list the main limitations and future directions of our work.
1) We are only dealing with batch optimization. The extension of our analysis to the case of Online Convex Optimization (OCO) seems non-trivial, since the bounds that we obtain are known to be unachievable without prior knowledge in the OCO context [START_REF] Cutkosky | Online learning without prior information[END_REF]. The investigation of FREE ADAGRAD in the OCO setting is left for future work;
2) If f is assumed to be L-Lipschitz with known constant L, slightly better bounds, with improved log-factors, can be obtained in the OCO setting [see e.g., OP16, CO18]. It remains an open question whether such bounds can be obtained in batch optimization in the non-Lipschitz (or unknown L) and unknown ‖x_1 − x^*‖ case;
3) Concerning stochastic optimization, we require f to be L-Lipschitz for some known L. We note, however, that even in the state-of-the-art bound of [START_REF] Carmon | Making sgd parameter-free[END_REF], the knowledge of L is required.
A Proofs of the results of the warm-up Section 3
We prove here with full details the results of the warm-up Section 3, in the setting where the norms of the subgradients are bounded by a known constant L, and where the time horizon T is known. We set h_t = L√T, and we analyze simultaneously the FREE ADAGRAD Algorithm 1 with this choice of h, and the simple variant where we set B^{simple}_{t+1}(k) = 3γ_k for the threshold, as in Section 3.
Theorem 4. Assume that f is a convex L-Lipschitz function such that there exists a bounded minimizer x^* ∈ arg min_{x∈Θ} f(x). Let γ_0 > 0 and D_{γ_0} = ‖x_1 − x^*‖ ∨ γ_0. The FREE ADAGRAD Algorithm 1 with h_t = L√T and B_{t+1}(k) = (2γ_k)/√k + √(Γ_t² + γ_k²/T) fulfills
∑_{t=1}^{T} (f(x_t) − f(x^*)) ≤ 10 D_{γ_0} L√T √(2 log_2(2D_{γ_0}/γ_0)).
The simple variant with h_t = L√T and B^{simple}_{t+1}(k) = 3γ_k fulfills
∑_{t=1}^{T} (f(x_t) − f(x^*)) ≤ 3 ‖x_1 − x^*‖ L√T log_2(2D_{γ_0}/γ_0) + 2D_{γ_0} L√T.
Proof of Theorem 4.
We start by emphasizing that the algorithm runs without diverging, in the sense that
k_t := min{k ≥ k_{t−1} such that ‖x^+_t(k) − x_1‖ ≤ B_{t+1}(k)}   (13)
is finite for any t. Indeed, we observe that ‖x^+_t(k) − x_1‖ grows at most like γ_k/√T when k goes to infinity, while B_{t+1}(k) grows faster than γ_k (2/√k + 1/√T) and B^{simple}_{t+1}(k) grows like 3γ_k. In fact, we will prove below that k_t remains upper-bounded by a quantity independent of T.
The starting point of the proof is the classical analysis for a projected gradient step Proj Θ (x tηg t )
f(x_t) − f(x^*) ≤ ⟨g_t, x_t − x^*⟩ = (η/2)‖g_t‖² + (1/(2η)) (‖x_t − x^*‖² − ‖x_t − ηg_t − x^*‖²) ≤ (η/2)L² + (1/(2η)) (‖x_t − x^*‖² − ‖Proj_Θ(x_t − ηg_t) − x^*‖²),
where the last inequality follows from the fact that x^* ∈ Θ. Since x^* is a minimizer of f in Θ, the left-hand side is non-negative, so the above inequality with η = γ_k/(L√T) gives that for any k ≥ 1
0 ≤ f(x_t) − f(x^*) ≤ ⟨g_t, x_t − x^*⟩ ≤ γ_k L/(2√T) + (L√T/(2γ_k)) (‖x^* − x_t‖² − ‖x^+_t(k) − x^*‖²).   (14)
It follows from this bound, a one-step deviation upper-bound
x + t (k) -x * 2 x t -x * 2 + γ 2 k T .
Summing this bound over t, we get a bound on the distance to optimum
x + t (k) -x * 2 x 1 -x * 2 + t-1 s=1 γ 2 ks T + γ 2 k T = x 1 -x * 2 + Γ 2 t + γ 2 k T , (15)
and then a bound on the distance to initialization
x + t (k) -x 1 x 1 -x * + x + t (k) -x * x 1 -x * + x 1 -x * 2 + Γ 2 t + γ 2 k T 2 x 1 -x * + Γ 2 t + γ 2 k T , (16)
where the two inequalities follow from (15) and the sub-additivity of square-root.
Controlling the number k T of phases. The Inequality ( 16) is the key to get an upper-bound on k T = max {k t : 1 t T }. Let us define the integer k * 1 by
γ k * -1 D γ0 := x * -x 1 ∨ γ 0 < γ k * , which fulfills k * 1 + log 2 x * -x 1 γ 0 ∨ 1 = log 2D γ0 γ 0 , and
γ k * 2D γ0 . (17)
To upper bound k T , we rely on the following estimate derived from (16)
x + t (k) -x 1 2γ k * + Γ 2 t + γ 2 k T . (18)
• Simple case: B simple t+1 (k) = 3γ k . A basic induction ensures that k t k * for all t T . Indeed, if the property k t-1 k * holds, then, since Γ 2 t t-1 T γ 2 kt-1 , we have
x + t (k * ) -x 1 2γ k * + γ k * = B simple t+1 (k * ) , which, in turn, ensures that k t k * . So k * is an upper-bound on k T = max {k t : 1 t T } in this case. • FREE ADAGRAD case: B t+1 (k) = 2γk k1/2 + Γ 2 t + γ 2 k T .
Let us define k as the smallest integer fulfilling k-1/2 γk γ k * . Then, from (18), we have
x + t ( k) -x 1 2γk k1/2 + Γ 2 t + γ 2 k T = B t+1 ( k) ,
so, by induction, we get k t k for all t T . In addition, we prove in Lemma 12 page 27 that k k * + 0.5 log 2 (k * ) + 1.25 .
Bounding the regret on phase k t = k. We denote by [T k , T k+1 -1] the interval where k t = k, with the convention T k+1 = T k if we never have k t = k. For t ∈ [T k , T k+1 -1], we have x t+1 = x + t (k). So from (14), we get
T k+1 -1 t=T k (f (x t ) -f (x * )) T k+1 -1 t=T k γ k L 2 √ T + L √ T 2γ k x * -x t 2 -x t+1 -x * 2 = γ k L 2 √ T (T k+1 -T k ) + L √ T 2γ k x T k -x * 2 -x T k+1 -x * 2 , ( 20
)
with the convention that
T k -1
t=T k = 0. We bound the second term in the right-hand side of (20), with (15)
x
T k -x * 2 x 1 -x * 2 + Γ 2 T k -1 + γ k T = x 1 -x * 2 + Γ 2
T k , and for the third term, we combine (13) with a triangular inequality to get
x T k+1 -x * 2 x 1 -x * -x T k+1 -x 1 2 + x 1 -x * -B T k+1 2 + ,
where we used the condensed notation B T k+1 := B T k+1 (k T k+1 -1 ) = B T k+1 (k). Plugging these two upper and lower bounds in (20), and applying the simple inequality
∆ 2 -[∆ -B] 2 + 2∆B, for all ∆, B 0, (21)
we get
T k+1 -1 t=T k (f (x t ) -f (x * )) γ k L 2 √ T (T k+1 -T k ) + L √ T 2γ k x 1 -x * 2 + Γ 2 T k -x 1 -x * -B T k+1 2 + L √ T 2 γ k T k+1 -T k T + Γ 2 T k γ k + 2B T k+1 γ k x 1 -x * , using Γ 2 T k γ 2 k-1 T k /T / γ k-1 γ k /2 for k 1, we get T k+1 -1 t=T k (f (x t ) -f (x * )) L √ T 2 γ k T k+1 -T k T + γ k-1 2 + 2B T k+1 γ k x 1 -x * .
Bounding the total regret. Summing the above inequality over k, we get the upper-bound on the total regret
T t=1 (f (x t ) -f (x * )) L √ T 2 1 k k T γ k T T k+1 -T k T + γ k-1 2 + 2B T k+1 γ k x 1 -x * L √ T 2 3γ k T 2 + 1 k k T 2B T k+1 γ k x 1 -x * . ( 22
)
We point out that the bound ( 22) is valid for any choice of B t+1 (k). Let us treat apart the two cases.
• Simple case:
B simple t+1 (k) = 3γ k . Using that k T k * in this case, B T k+1 = B T k+1 (k) = 3γ k ,
and recalling the upper bound (17) on k * and γ k * , we get from (22
) T t=1 (f (x t ) -f (x * )) L √ T 2 3 2 γ k * + 1 k k * 6 x 1 -x * = L √ T 2 3 2 γ k * + 6k * x 1 -x * L √ T 3 x 1 -x * log 2 2D γ0 γ 0 + 2D γ0 .
• FREE ADAGRAD case:
B t+1 (k) = 2γ k k 1/2 + Γ 2 t + γ 2 k T .
We have proved in this case that k T k with k upper bounded by (19). Combining ( 19) and ( 17), the first term in the right-hand side of ( 22) can be readily bounded by
3 4 γk 3 4 γ k * +0.5 log 2 (k * )+1.25 4D γ0 log 2 2D γ0 γ 0 .
The last term in the right-hand side of ( 22), can be bounded as follows. We notice that
Γ 2 T k+1 -1 + γ 2 k T k+1 -1 T = Γ 2 T k+1 , so we have 1 k k B T k+1 γ k = 1 k k 2 √ k + Γ T k+1 γ k 4 k + 1 k k Γ T k+1 γ k .
For the last term, we observe that
1 k k Γ T k+1 γ k = 1 k k γ -1 k j k γ 2 j ∆T j /T 1 k k j k γ -1 k γ j ∆T j /T 1 j k γ j ∆T j /T k:k j γ -1 k = 2 1 j k ∆T j /T 2 k ,
where the last inequality follows from Cauchy Schwarz. Then, plugging these bounds in (22), and using that k = 1 when k * = 1, and k k * + 0.5 log 2 (k * ) + 1.25 2k * , for k * 2 , we get from (17)
T t=1 (f (x t ) -f (x * )) L √ T 3 4 γk + 6 x 1 -x * k 10D γ0 L √ T 2 log 2 2D γ0 γ 0 .
which concludes the proof of Theorem 4.
B Proofs for Section 4
Proof of Theorem 2. First of all, observe that the algorithm in question can be written as
x t+1 = Proj Θ x t - γ kt h t g t ,
where we recall that (h t ) t 1 is assumed to be non-decreasing and positive. As before, we denote by
[T k , T k+1 -1] the interval where k t = k. In particular, T k T +1 -1 = T . On the interval [T k , T k+1 -1],
the algorithm is simply AdaGrad (slightly modified) started from the point x T k and with the final point at x T k+1 . Thus, within each phase, we can apply the analysis of the AdaGrad that we recall and slightly adapt in Appendix B.1, page 18. The proof closely follows that of the warm-up setup: observing that
T t=1 (f (x t ) -f (x * )) = k T k=1 T k+1 -1 t=T k (f (x t ) -f (x * )) ,
1. We start with one phase analysis, using the results of Appendix B.1, page 18, which contains Lemma 1 displayed in the main body;
2. Then, we sum-up the total regret over k T phases, using the previous analysis, and bound the key quantities;
One phase analysis. Fix some k k T and assume that the the kth phase is non-empty, that is, T k+1 > T k . Thus, in view of the above discussion, Lemma 5, page 18, yields
T k+1 -1 t=T k (f (x t ) -f (x * )) h T k+1 x T k -x * 2 -x T k+1 -x * 2 2γ k + γ k 2 T k+1 -1 t=T k g t 2 h 2 t = Γ 2 T k+1 -Γ 2 T k γ 2 k = h T k+1 x T k -x * 2 -x T k+1 -x * 2 2γ k + Γ 2 T k+1 -Γ 2 T k 2γ k . (23)
Note that by design x T k+1x * x 1x * -B T k+1 (k) + . Furthermore, iteratively applying Lemma 4 by phases, we deduce that
x T k -x * 2 x 1 -x * 2 + Γ 2 T k . That is, we have x T k -x * 2 -x T k+1 -x * 2 2γ k x 1 -x * 2 -x 1 -x * -B T k+1 (k) 2 + 2γ k + Γ 2 T k 2γ k . Furthermore, recalling that ∆ 2 -[∆ -B] 2 + 2∆B
, the above can be further bounded as
x T k -x * 2 -x T k+1 -x * 2 2γ k x 1 -x * B T k+1 (k) γ k + Γ 2 T k 2γ k . ( 24
)
Substitution of (24) into (23), yields
T k+1 -1 t=T k (f (x t ) -f (x * )) h T k+1 x 1 -x * B T k+1 (k) γ k + h T k+1 Γ 2 T k+1 2γ k . ( 25
)
Summing up over phases. Summing up all the inequalities (25) for k T phases, we obtain
T t=1 (f (x t ) -f (x * )) x 1 -x * k k T h T k+1 B T k+1 (k) γ k =:I + k k T h T k+1 Γ 2 T k+1 2γ k =:II . ( 26
)
Bounding the sum of h Γ 2γ terms (I). Observe that, by definition of thereof,
Γ 2 T k+1 = j k γ 2 j Tj+1-1 t=Tj g t 2 h 2 t . (27)
Hence, using trivial bound h T k+1 h T +1 , we deduce
I = k k T h T k+1 Γ 2 T k+1 2γ k h T +1 2 k k T Γ 2 T k+1 γ k = h T +1 2 k k T j k γ -1 k γ 2 j Tj+1-1 t=Tj g t 2 h 2 t = h T +1 2 j k T k j γ -1 k γ 2 j Tj+1-1 t=Tj g t 2 h 2 t h T +1 j k T γ j Tj+1-1 t=Tj g t 2 h 2 t h T +1 γ k T T t=1 g t 2 h 2 t , (28)
where the penultimate inequality is due to the fact that k j 2 -k 2 -j+1 and the last one holds since γ k γ k T .
Bounding the sum of B terms (II). We observe that by definition of B t (k), we have
B T k+1 (k) = B T k+1 -1+1 (k) = 2γ k √ k + Γ 2 T k+1 .
Hence, the term of interest is bounded as
II = k k T h T k+1 B T k+1 (k) γ k = 2 k k T h T k+1 √ k + k k T h T k+1 γ -1 k Γ T k+1 4h T +1 k T + h T +1 k k T γ -1 k Γ T k+1 .
For the third term, similarly to the previous paragraph, but additionally invoking Jensen's inequality, we can write
k k T γ -1 k Γ T k+1 = k T k k T 1 k T j k γ 2 j γ -2 k Tj+1-1 t=Tj g t 2 h 2 t k T j k T k j γ 2 j γ -2 k Tj+1-1 t=Tj g t 2 h 2 t 2 √ k T √ 3 T t=1 g t 2 h 2 t ,
where in the last inequality we used the fact that b k=a 2 -2k 4 3 2 -2a . Thus, overall, we have
II = k k T h T k+1 B T k+1 (k) γ k 2h T +1 k T 2 + 1 3 T t=1 g t 2 h 2 t (29)
The end (combining bounds for I and II). Substituting (28) and ( 29) into (26), we deduce that for any non-decreasing
(h t ) t 1 T t=1 (f (x t )-f (x * )) h T +1 2 x 1 -x * k T 2 + 1 3 T t=1 g t 2 h 2 t + γ k T T t=1 g t 2 h 2 t .
B.1 Basic analysis for AdaGrad and Proof of Lemma 1
In this section we extend the standard analysis of ADAGRAD for our purposes and prove Lemma 1 restated below. Throughout, we consider the following algorithm for t 1
x t+1 = Proj Θ x t - γ h t g t , (30)
where g t ∈ ∂f (x t ), S t = t s=1 g s 2 and (h t ) t 1 is non-decreasing and positive, and
x 1 ∈ R d .
We start with some elementary results. Lemma 3. For all t 1 and all
x 1 ∈ R 0 2γ h t (f (x t ) -f (x * )) x t -x * 2 -x t+1 -x * 2 + γ 2 h 2 t g t 2 .
Proof. By the property of projection
x t+1 -x * 2 x t -x * 2 - 2γ h t x t -x * , g t + γ 2 h 2 t g t 2 x t -x * 2 - 2γ h t (f (x t ) -f (x * )) + γ 2 h 2 t g t 2 , (31)
where we used the fact that f is convex. The result follows after re-arranging.
Lemma 3 applied iteratively yields the following result. Lemma 4. For all T 1, γ > 0 and all x 1 ∈ R, Algorithm (30) satisfies
Proj Θ x T - γ h T g T -x * 2 x 1 -x * 2 + γ 2 T -1 t=1 g t 2 h 2 t + γ2 g T 2 h 2 T .
Finally, we are in position to prove Lemma 1 brought up in the main body of the paper. Lemma 5 (Restated Lemma 1 from Section 4). For all T > 1 and all x 1 ∈ R d we have
T -1 t=1 (f (x t ) -f (x * )) h T x 1 -x * 2 -x T -x * 2 2γ + γ 2 T -1 t=1 g t 2 h 2 t .
Proof. Using Lemma 3, we deduce that
T -1 t=1 (f (x t ) -f (x * )) 1 2γ T -1 t=1 h t x t -x * 2 -x t+1 -x * 2 + γ 2 T -1 t=1 g t 2 h t . (32)
Let us bound the first sum on the right hand side, adding and subtracting h t+1 x t+1x * 2 and using telescoping summation, we obtain
T -1 t=1 h t x t -x * 2 -x t+1 -x * 2 = h 1 x 1 -x * 2 -h T x T -x * 2 + T -1 t=1 (h t+1 -h t ) x t+1 -x * 2 . (33)
Furthermore, by Lemma 4 with γ = γ and the fact that (h t ) t 1 is non-decreasing, we get
T -1 t=1 (h t+1 -h t ) x t+1 -x * 2 T -1 t=1 (h t+1 -h t ) x 1 -x * 2 + γ 2 t s=1 g s 2 h 2 s (h T -h 1 ) x 1 -x * 2 + γ 2 T -1 t=1 (h t+1 -h t ) t s=1 g s 2 h 2 s .
For the second term in the above bound, we can write
T -1 t=1 (h t+1 -h t ) t s=1 g s 2 h 2 s = T -1 s=1 g s 2 h 2 s T -1 t=s (h t+1 -h t ) = T -1 s=1 g s 2 h 2 s (h T -h s ) = h T T -1 t=1 g t 2 h 2 t - T -1 t=1 g t 2 h t .
Substitution of the above into the penultimate inequality yields
T -1 t=1 (h t+1 -h t ) x t+1 -x * 2 (h T -h 1 ) x 1 -x * 2 +γ 2 h T T -1 t=1 g t 2 h 2 t - T -1 t=1 g t 2 h t , (34)
Substituting (34) into (33), we deduce that
T -1 t=1 h t x t -x * 2 -x t+1 -x * 2 h T x 1 -x * 2 -x T -x * 2 + γ 2 h T T -1 t=1 g t 2 h 2 t -γ 2 T -1 t=1 g t 2 h t .
Combination of the above with (32) concludes the proof.
B.2 Proofs of corollaries in Section 4
In this section we provide proofs of four corollaries presented in Section 4.
Proof of Corollary 1. Substituting our choice of h t into Theorem 2, we prove in Lemma 9 in Appendix E, page 27, that
T t=1 g t 2 h 2 t log(log(e(1 + S T ))) .
Substituting the above into Theorem 2 and using (12), we deduce that R T (S T +1 + 1) log(e(S T +1 + 1))
√ 8 x 1 -x * log 2 D γ0 γ 0 + 1 2+ 1 3 log(log(e(1 + S T ))) + 5D γ0 log(log(e(1 + S T ))) log 2 D γ0 γ 0 + 1 .
The proof is concluded after re-arranging and using 2ab a 2 + b 2 .
Proof of Corollary 2. Theorem 2 and Lemma 2 (rather Eq. ( 12)) and Lemma 10 give
T t=1 (f (x t ) -f (x * )) S T +1 log 2 2D γ0 γ 0 2 x 1 -x * √ 2 2+ 1 3 log e S T g 1 2 + 5D γ0 log e S T g 1 2 D γ0 S T +1 log 2 2D γ0 γ 0 2 √ 2 2+ 1 3 log e S T g 1 2
+ 5 log e S T g 1 2 .
The proof is concluded after re-arranging and using 2ab a 2 + b 2 .
Proof of Corollary 3. From Lemma 11 we have
T t=1 g t 2 ε + S t log 1 + S T ε .
Hence, substituting the above into Theorem 2 and using (12), we obtain
R T ε + S T +1 log 2 D γ0 γ 0 + 1 √ 8 x 1 -x * 2+ 1 3 log 1 + S T ε + 5D γ0 log 1 + S T ε .
The proof is concluded after re-arranging and using 2ab a 2 + b 2 .
As promised in Section 4, Theorem 4 of Appendix A can be obtained as a corollary of Theorem 2.
Corollary 4. Under assumptions of Theorem 2, with f an L-Lipschitz function. Setting
h t = L √ T and D γ0 = x 1 -x * ∨ γ 0 , Algorithm 1 satisfies T t=1 (f (x t ) -f (x * )) 12.3D γ0 L T log 2 2D γ0 γ 0 . Proof of Corollary 4. Substituting h ≡ L √ T into Theorem 2 gives R T L √ T 2 x 1 -x * k T 2+ 1 3 + γ k T + L √ T γ k T 2 ,
Eq. ( 12) applied to the above, yields
R T L √ T √ 8 x 1 -x * log 2 D γ0 γ 0 + 1 2+ 1 3 + 5D γ0 log 2 D γ0 γ 0 + 1 L √ T √ 8 2+ 1 3 + 5 12.3 D γ0 log 2 D γ0 γ 0 + 1 .
The proof is concluded.
C Analysis for stochastic PGD in Section 5: proof of Theorem 3
Proof of Theorem 3. We recall that (δ) = 1 ∨ log(log 2 (T )/δ). As in all the previous sections, we denote by [T k , T k+1 -1], the interval where k t = k. The regret of the algorithm can be expressed as
T t=1 (f (x t ) -f (x * )) = k T k=1 (f (x T k ) -f (x * )) =:T1(k) + k T k=1 T k+1 -1 t=T k +1 (f (x t ) -f (x * )) =:T2(k)
.
We are going to apply Lemma 7 of Appendix C.1 on page 23, to the second term and provide deterministic bound on the first one. First let us explain the reason of such separation of terms.
Observe that for t = T k
x t+1 = Proj Θ x t - γ kt L T (δ/(1 + k t-1 ) 2 ) g t .
Meanwhile, for t ∈ [T k + 1, T k+1 -1] we have
x t+1 = Proj Θ x t - γ kt L T (δ/(1 + k t ) 2 ) g t .
That is, the x T k +1 step is outside the pattern and requires additional splitting. The proof proceeds as follows 1. First we give deterministic bound on T 1 (k) terms using Lipschitzness of the objective function f ;
2. Then we use Lemma 7 to bound each T 2 (k);
3. Finally, we show that k T is still bounded by k * as in the warmup analysis.
Analysis for T 1 (k). Since f is assumed to be Lipschitz, we can write
T 1 (k) L x T k -x * .
Then, simply by the triangle inequality and property of the Euclidean projection, we deduce that for all k 2
x T k -x * x T k-1 +1 -x T k-1 + x T k-1 -x * + (T k -T k-1 -1) γ k-1 T (δ/k 2 ) x T k-1 +1 -x T k-1 + x T k-1 -x * + (T k -T k-1 -1) γ k T -1 T (δ) .
Furthermore, we have
x T k-1 +1 -x T k-1 γ k-1 T (δ/(k -1) 2 ) γ k T -1 T (δ) .
Hence, we have
x T k -x * x T k-1 -x * + (T k -T k-1 ) γ k T -1 T (δ) .
Unfolding the above recursion, we deduce that
x T k -x * x T1 -x * + T k γ k T -1 T (δ) x 1 -x * + 0.5γ k T √ T .
We conclude that
k T k=1 (f (x T k ) -f (x * )) Lk T x 1 -x * + 0.5γ k T √ T L T T (δ/(1 + k T ) 2 )k T ( x 1 -x * + 0.5γ k T ) . ( 35
)
Analysis for T 2 (k). Let us first fix the high-probability event on which we are going to work. Note that ρ = T k + 1 is a stopping time and T k+1 -1ρ T . Thus, by Lemma 7 with probability at least 1δ/(1 + k) 2 , it holds that
T 2 (k) L T T (δ/(1 + k) 2 ) γ k 2 + x T k +1 -x * 2 -x T k+1 -x * 2 2γ k + 10 x T k +1 -x * + 68γ k .
We observe that
x T k +1 -x * 2 -x T k+1 -x * 2 x T k +1 -x * 2 -x T k +1 -x * -x T k +1 -x T k+1 2 + 2 x T k +1 -x * x T k +1 -x T k+1 .
Let us bound each term of the product. By the design of the rule,
x T k +1 -x T k+1 x T k +1 -x 1 + x 1 -x T k+1 38γ k + 38γ k 76γ k ,
where B = 28. Furthermore, by Lemma 7 and the fact that the
x T k +1 -x T k γ k for all k 1 x T k +1 -x * x T k -x * + 2γ k-1 x T k-1 +1 -x * + 18γ k-1 x 2 -x * + 18 k-1 j=1 γ j x 2 -x * + 18γ k x 1 -x * + 18γ k + γ 1 x 1 -x * + 19γ k . (36)
Thus, we have shown that with probability at least 1
-δ/(1 + k) 2 T 2 (k) L T T (δ/(1 + k) 2 ) γ k 2 + 76 ( x 1 -x * + 19γ k ) + 10( x 1 -x * + 19γ k ) + 68γ k L T T (δ/(1 + k T ) 2 ) 86 x 1 -x * + (259 + 38 2 )γ k L T T (δ/(1 + k T ) 2 ) (86 x 1 -x * + 1703γ k T ) .
Overall, by the union bound, we have with probability at least 1 -
∞ k=1 δ/(1 + k) 2 1 -δ k T k=1 T 2 (k) L T T (δ/(1 + k T ) 2 )k T (86 x 1 -x * + 1703γ k T ) (37)
Regret bound Putting together (35) and (37), we obtain with probability 1δ
T t=1 (f (x t ) -f (x * )) L T T (δ/(1 + k T ) 2 )k T (87 x 1 -x * + 1704γ k T ) . (38)
Bounding the number of phases It remains to bound the number of phases k T . Fix some t 1 and k k t-1 . Observe that for any k 1, x + t (k)x t γ k . Then thanks to Lemma 7 and Eq. (36) (which hold on exact same event that we consider in (38)), we have
x + t (k) -x * x t -x * + γ k x T k t +1 -x * + 16γ kt + γ k x 1 -x * + 35γ kt + γ k x 1 -x * + 36γ k .
Hence, for all k k t-1
x + t (k) -x 1 2 x 1 -x * + 36γ k 2γ k * + 36γ k .
Implying that k T k * .
Concluding In view of the bound k T k * log 2 (2D γ0 /γ 0 ) and γ k * 2D γ0 , we conclude that with probability at least 1δ
T t=1 (f (x t ) -f (x * )) 3500LD γ0 T T (δ/(1 + k * ) 2 ) log 2 (2D γ0 /γ 0 ) .
Note that the constant 3500 is certainly extremely pessimistic as we did not attempt to optimize it.
C.1 High probability bound
We slightly adapt a version of the Bernstein-Freedman inequality, derived in [CBLS05, Corolary 16], improving log(T ) dependency to log log(T ). Lemma 6 (a version of the Bernstein-Freedman inequality by [START_REF] Cesa-Bianchi | Minimizing regret with label-efficient prediction[END_REF]). Let X 1 , X 2 , . . . be a martingale difference with respect to the filtration F = (F s ) s 0 and with increments bounded in absolute values by K. For all t 1, let
S 2 t = t τ =1 E X 2 τ F τ -1
denote the sum of the conditional variances of the first t increments. Then, for all δ ∈ (0, 1) and T 1, with probability at least 1δ,
max t T t τ =1 X τ 2S T ln log 2 (2T ) δ + 3K ln log 2 (2T ) δ .
Proof. Let X * T = max t T t s=1 X s . For k 1 we have
P X * T > 4(S 2 T + K 2 ) + √ 2K /3; K -2 S 2 T ∈ [2 k-1 -1, 2 k ] P X * T > √ 2 k+1 K 2 + √ 2K /3; K -2 S 2 T ∈ [2 k-1 -1, 2 k ] P X * T > √ 2 k+1 K 2 + √ 2K /3; K -2 S 2 T 2 k e -,
where the last inequality follows from Lemma 15 in [START_REF] Cesa-Bianchi | Minimizing regret with label-efficient prediction[END_REF]. Since 0 K -2 S 2 T T , we take a union bound over k = 1, . . . , log 2 (T ) and notice that
4(S 2 T + K 2 ) + √ 2K /3 2 S 2 T + 3K .
The next result is a version of Lemma 1, that was used to analyze deterministic setup, which accounts for the stochasticity. Lemma 7. Let ρ be a bounded stopping time with respect to the filtration F of the stochastic gradients. Let δ ∈ (0, 1), T 1, x 1 ∈ F ρ and consider T (δ) := 1 ∨ log(log 2 (2T )/δ),
x t+1 = x t - γ L T T (δ) g ρ+t .
Assume that T, δ are such that T 1, then with probability at least 1δ we have for all τ T
x τ -x * x 1 -x * + 16γ , τ t=1 (f (x t ) -f * ) L T T (δ) γ 2 + x 1 -x * 2 -x τ +1 -x * 2 2γ + 10 x 1 -x * + 68γ simultaneously.
Proof. To simplify the expressions, we drop the primes, writing x t for x t , and we set η
= γ L √ T T (δ)
.
Using classical analysis of projected gradient descent, we obtain
t τ g ρ+t , x t -x * ηL 2 2 + x 1 -x * 2 -x τ +1 -x * 2 2η . (39)
Introducing the following martingale difference
X t = ∇f (x t ) -g t+ρ , x t -x * ,
we deduce from the above, and the fact that τ T
τ t=1 (f (x t ) -f (x * )) ηL 2 T 2 + 1 2η x 1 -x * 2 -x τ +1 -x * 2 + τ t=1 X t . (40)
Dealing with randomness. Now we are in position to apply Freedman-Bernstein inequality recalled in Lemma 6.
To this end, we need to bound X t and get an appropriate expression for S t . First observe that each martingale difference satisfies
|X t | 2L x t -x * almost surely . (41)
Furthermore, by the triangle inequality, property of projection and the fact that g t L almost surely, we obtain
x t -x * x t-1 -ηg ρ+t-1 -x * x t-1 -x * + ηL x 1 -x * + ηLT, ∀t T + 1 .
Hence,
|X t | K := 2L x 1 -x * + 2L 2 T η, ∀t T + 1.
The conditional variance S 2 T can be bounded using (41) as
S T 2L T t=1 x t -x * 2 2L √ T max t T
x tx * almost surely .
Thus, invoking Lemma 6, for any δ ∈ (0, 1) with probability at least 1δ,
max τ T τ t=1 X t 4L max t T x t -x * T T (δ) + 6L( x 1 -x * + LηT ) T (δ) . (42)
From now on, we work on this event which holds with probability at least 1δ.
Substituting (42) into (40), we get for all τ T τ t=1
(f (x t ) -f (x * )) ηL 2 T 2 + x 1 -x * 2 -x τ +1 -x * 2 2η + 4LΦ T T T (δ) + 6L( x 1 -x * + LηT ) T (δ) , (43)
where Φ T = max t T x tx * .
Bounding the trajectory. Observing that the left hand side of (43) is non-negative and that it holds for all τ T , we deduce max
0 τ T x τ +1 -x * 2 x 1 -x * 2 + 12Lη T (δ) x 1 -x * + (1 + 12 T (δ)) η 2 L 2 T + 8LηΦ T T T (δ) .
Solving the above inequality, we deduce that Φ T ( x 1x * + 6Lη T (δ)) 2 + (1 + 28 T (δ)) η 2 L 2 T + 4Lη T T (δ)
x 1x * + 6Lη T (δ) + ηL (1 + 28 T (δ)) T + 4Lη T T (δ) .
Substituting the value of η, we further deduce that max t T
x tx * x 1x * + γ 1 + 28 T (δ) x tx * T T (δ) + 6(L x 1x * + L 2 ηT ) T (δ) .
Substitution of (44) into the above inequality, yields Recalling that η = γ/(L T T (δ)) and using some rough bounds, we deduce that The proof is complete.
D On a relation with [MS12]
In case where there exists a known bound g t L on the norms of the subgradients, MacMahan and Streeter [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF] propose to tune the step size of gradient descent with a scheme based on a reward doubling argument and cold-restart. Their theory works in a setup of unconstrained online convex optimization with L-bounded subgradients. Since we do not require Lipschitz functions, and we additionally handle the projection step, the two results cannot be directly compared. Nevertheless, there are some similarities and, in a specific instantiation of our Algorithm 1, we recover that of [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF].
Below, we sketch the relation between the two, considering the setting of MacMahan and Streeter [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF], where the norms of the subgradients are bounded by some known L and the optimization is unconstrained, i.e. Θ = R d . We also assume that the time horizon T is known, since unknown T is handled in MacMahan and Streeter [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF] by a time-doubling trick. Using our notation, their analysis starts with a simple bound
∑_{t=1}^{T} (f_t(x_t) − f_t(x^*)) ≤ ∑_{t=1}^{T} ⟨g_t, x_t − x^*⟩ = ⟨∑_{t=1}^{T} g_t, x_1 − x^*⟩ + ∑_{t=1}^{T} ⟨g_t, x_t − x_1⟩ ≤ ‖G_T‖ ‖x_1 − x^*‖ − Q_T, where G_T := ∑_{t=1}^{T} g_t and Q_T := ∑_{t=1}^{T} ⟨g_t, x_1 − x_t⟩.
They observe, using a duality argument, that it is sufficient to show that
Q_T ≥ a^{-1} exp(‖G_T‖/(bL√T)) − c,   (45)
in order to derive
∑_{t=1}^{T} (f_t(x_t) − f_t(x^*)) ≤ b ‖x_1 − x^*‖ L√T log(a b ‖x_1 − x^*‖ L√T) + c.
The principle of their algorithm is to perform gradient descent by phases and, during a phase, to track the reward Q t relative to this phase, and restart with a doubled step-size when the condition Q t > ηL 2 t is met. This step-size doubling ensures that the Condition (45) is met at the time horizon T .
Let us relate this algorithm with a specific instantiation of our Algorithm 1. When the algorithm is the simple Gradient Descent (GD), that does not involve the projection step, we have x_{T+1} − x_1 = −η G_T for the GD with a fixed step size η. Hence, it holds that
Q_T = ∑_{t=1}^{T} ⟨g_t, x_1 − x_t⟩ = η ∑_{t=1}^{T} ⟨g_t, G_{t−1}⟩ = (η/2) ∑_{t=1}^{T} (‖G_t‖² − ‖G_{t−1}‖² − ‖g_t‖²) = (η/2) (‖G_T‖² − ∑_{t=1}^{T} ‖g_t‖²) = (1/(2η)) ‖x_{T+1} − x_1‖² − (1/(2η)) Γ̃²_{T+1},
where Γ̃²_{T+1} = ∑_{t=1}^{T} η² ‖g_t‖². Thus, if in Algorithm 1 we allow cold restarts (the exact thing that we want to avoid), then the condition
‖x^+_T(η) − x_1‖² ≥ 2η² L² T + Γ̃²_T + η² ‖g_T‖²
is equivalent to their doubling condition Q_T ≥ η L² T.
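This identity is easy to sanity-check numerically for unconstrained GD with a fixed step size; the short script below, with random vectors standing in for arbitrary subgradients, verifies it up to floating-point error (the script is ours, purely illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, eta = 5, 50, 0.3
g = rng.standard_normal((T, d))          # arbitrary "subgradients"
x = np.zeros((T + 1, d))                 # x_1 = 0, x_{t+1} = x_t - eta * g_t
for t in range(T):
    x[t + 1] = x[t] - eta * g[t]

Q_T = sum(g[t] @ (x[0] - x[t]) for t in range(T))
rhs = np.linalg.norm(x[T] - x[0])**2 / (2 * eta) - eta / 2 * np.sum(g**2)
assert np.isclose(Q_T, rhs)              # Q_T = ||x_{T+1}-x_1||^2/(2 eta) - Gamma~^2_{T+1}/(2 eta)
```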
In our notation, the algorithm of MacMahan and Streeter [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF] corresponds to a variant of Algorithm 1 with
B̃_{t+1}(k) = √( (2 + ‖g_t‖²/(L²T)) γ_k² + Γ_t² − Γ²_{T_k−1} ),
with the major difference that a cold-restart is performed when k_t is increased, and the minor difference that the step-size doubling happens after (and not before) the condition ‖x^+_t(k) − x_1‖ ≤ B̃_{t+1}(k) is broken.
Assumption 1. The set Θ ⊆ R^d is closed convex, D ⊆ R^d is open such that Θ ⊆ D. The function f : D → R is convex on Θ and there exists a bounded minimizer x^* ∈ arg min_{x∈Θ} f(x).
Figure 2: Regret (left) and step-sizes (right) of the three algorithms (FREE ADAGRAD, ADAGRAD, and the ORACLE step size η = ‖x_1 − x^*‖/(L√T)) on a loglog scale, for f(x) = ‖x‖_1, f(x) = ‖x‖_2, and f(x) = (1/n) ∑_{i=1}^{n} |⟨a_i, x⟩|, with ‖x_1 − x^*‖ ≈ 14.4.
4) When translating our regret bound into a rate for the optimization error, using x̄_T, the average along the trajectory, we have additional log-factors compared to [START_REF] Carmon | Making sgd parameter-free[END_REF][START_REF] Defazio | Learning-rate-free learning by D-adaptation[END_REF], which is an artifact of online-to-batch conversion [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF], Theorem 7]. Contrary to us, the algorithms in [START_REF] Carmon | Making sgd parameter-free[END_REF][START_REF] Defazio | Learning-rate-free learning by D-adaptation[END_REF] still require some knowledge about the Lipschitz constant L.
x t )f * ) ηL 2 T 2 + x 1x * 2x τ +1x * 2 2η + 4L ( x 1x * + 15.5γ) T T (δ) + 6(L x 1x * + L 2 ηT ) T (δ) .
x t )f * ) L T T (δ)
for t ≥ 1 do
    g_t ∈ ∂f(x_t)    // get subgradient
    S_t = S_{t−1} + ‖g_t‖²    // cumulative grad-norm
    h_t = √((S_t + 1) log(e(1 + S_t))) ∨ h_{t−1}    // update h_t
    B_{t+1}(k) = (2γ_k)/√k + √(Γ_t² + γ_k² ‖g_t‖²/h_t²)    // define the threshold
    x^+_t(k) = Proj_Θ(x_t − (γ_k/h_t) g_t)    // probing step
    k_t = min{k ≥ k_{t−1} : ‖x^+_t(k) − x_1‖ ≤ B_{t+1}(k)}    // find the step size
    x_{t+1} = x^+(k_t)    // make the step
    Γ²_{t+1} = Γ_t² + γ²_{k_t} ‖g_t‖²/h_t²    // update Γ_t²
end
Output: Trajectory (x_t)_{t ≥ 1}
and by induction the property holds for all t T . Bounding the regret. Let us now upper-bound the regret. We denote by [T k , T k+1 -1] the interval where k t = k, with the convention T k+1 = T k if we never have k t = k, see Figure1for schematic illustration. Summing the central equation (3), the regret can then be decomposed as follows
Bounding the regret On the other hand, substituting (42) into (40), we obtain
τ t=1 (f (x t ) -f * ) ηL 2 T 2 + x 1 -x * 2 -x τ +1 -x * 2η 2
+ 4L max t T
T (δ) + 4 + 6 T (δ) T
x 1 -x * + γ √ 29 + 4 + 6 . (44)
15.5
+ 1), for all k * 2 , to prove the induction. The concavity of log 2 ensures that log 2 (1 + x) x/ ln(2) for all x > -1.
Thus, it suffices to show that
1 2 log 2 (k or equivalently log 2 (k * ) + √ 2 -1 1 2
1 2 log 2 1 + 1 k * + 1 √ 2, for all k * 2 ,
Thus, for all k * 2
1 2 log 2 1 + 1 k * + 1 1 2 ln(2)k * + 1 1 4 ln(2) + 1 √ 2 ,
which concludes the proof of Lemma 12.
Supplementary material for "Parameter-free projected gradient descent"
Appendix A provides details for the proofs of Section 3 of the main body. Appendix B contains all the proofs for Theorem 2 and Lemma 1. Appendix B.2 provides proofs for the corollaries in Section 4. Appendix C deals with the stochastic version of our algorithm and contains the proof of Theorem 3. Appendix D gives a detailed connection with the reward-doubling algorithm of [START_REF] Mcmahan | No-regret algorithms for unconstrained online convex optimization[END_REF]. Finally, Appendix E contains auxiliary results that are used in different parts of the proofs.
Below we provide a basic python implementation of our FREE ADAGRAD.
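The following minimal sketch follows the pseudocode of Algorithm 1 above. The doubling schedule γ_k = γ·2^(k−1), the precise form of the threshold B_{t+1}(k), and the test objective are assumptions made to keep the sketch self-contained; it is meant to illustrate the mechanism (probe, compare to the threshold, double if needed, never restart) rather than to reproduce the experiments of Figure 2.

import numpy as np

def free_adagrad(subgrad, x1, gamma=1.0, T=1000, proj=lambda x: x):
    # Projected subgradient descent with on-the-fly doubling of the step-size
    # scale gamma_k and an AdaGrad-type normalization h_t; no knowledge of
    # ||x1 - x*|| or of the Lipschitz constant is required.
    x = x1.copy()
    k = 1            # index of the current step size gamma_k = gamma * 2**(k - 1)
    S = 0.0          # cumulative squared subgradient norms
    Gamma2 = 0.0     # cumulative (gamma_{k_t} * ||g_t|| / h_t)**2
    traj = [x.copy()]
    for t in range(1, T + 1):
        g = subgrad(x)                                            # get subgradient
        gnorm = np.linalg.norm(g)
        S += gnorm ** 2                                           # cumulative grad-norm
        h = np.sqrt((S + 1.0) * np.log(np.e * (1.0 + S)))         # h_t >= h_{t-1}
        while True:
            gamma_k = gamma * 2.0 ** (k - 1)
            x_plus = proj(x - (gamma_k / h) * g)                  # probing step
            B = 2.0 * gamma_k * np.sqrt(k) + np.sqrt(Gamma2 + (gamma_k * gnorm / h) ** 2)
            if np.linalg.norm(x_plus - x1) <= B:                  # threshold respected: keep gamma_k
                break
            k += 1                                                # otherwise double the step size (no restart)
        x = x_plus                                                # make the step
        Gamma2 += (gamma_k * gnorm / h) ** 2                      # update Gamma_t^2
        traj.append(x.copy())
    return traj

# Illustrative run on f(x) = ||x - x_star||_1, with x_star unknown to the algorithm.
rng = np.random.default_rng(0)
x_star = rng.normal(size=20)
traj = free_adagrad(lambda x: np.sign(x - x_star), x1=np.full(20, 10.0), T=5000)
print(np.linalg.norm(np.mean(traj, axis=0) - x_star))             # error of the average iterate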
E Auxiliary results
Let $(a_t)_{t\geq 1}$ be a non-negative sequence, and $S_t = \sum_{\tau=1}^{t} a_\tau$. For any concave function $F$ on $[0, +\infty)$, we have
Applying (46) with
Proof. We observe that $k = 1$ for $k^* = 1$. We will prove that $k \leq \lceil k^* + 0.5\log_2(k^*) + 0.25\rceil \leq k^* + 0.5\log_2(k^*) + 1.25$.
To prove the first inequality, we only need to show that $2^y/\sqrt{y} \geq 2^{k^*}$ for $y = k^* + 0.5\log_2(k^*) + 0.25$. Plugging in the value of $y$ and taking the square, we get
03614585 | en | [
"shs.eco"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-03614585v2/file/AV-FV-PAV%20-%20revised%202.pdf | E Kamwa
email: [email protected]
On Two Voting Systems that Combine Approval and Preferences: Fallback Voting and Preference Approval Voting
Keywords: Approval Voting, Rankings, Condorcet, Properties, Impartial and Anonymous Culture JEL Classification: D71, D72
by exploring some other normative properties of FV and PAV. We show among other things that FV and PAV satisfy and fail the same criteria; they possess two properties that AV does not: Pareto optimality and the fact of always electing the Absolute Condorcet winner when he exists. To provide a practical comparison, we evaluate the probabilities of satisfying the Condorcet majority criteria for three-candidate elections and a considerably large electorate, examining FV and PAV alongside other voting rules. Our findings indicate that PAV outperforms the Borda rule in this regard. Furthermore, we observe that in terms of agreement, FV and PAV align more closely with scoring rules than with Approval Voting. Our analysis is performed under the impartial anonymous culture assumption.
Introduction
When it comes to single-winner elections, the literature and practical applications primarily revolve around two major categories of voting rules. These can be broadly classified as scoring rules based on rankings and rules based on evaluation or approval. Scoring rules (SCR) typically require voters to rank the candidates, either all of them or a subset. Based on these rankings, candidates receive points according to their positions. The candidate with the highest total score, as determined by the rule in question, is declared the winner.
Among the most well-known SCR are the Plurality Rule (PR), the Negative Plurality Rule (NPR) and the Borda Rule (BR). 1 Approval Voting (AV), popularized by [START_REF] Brams | Approval Voting[END_REF], has gained significant popularity: this rule simplifies the voting process by allowing voters to express their approval for the candidates they find acceptable. Under AV, each voter has the freedom to approve as many candidates as they desire. The candidate(s) who receive the highest number of approvals are declared the winner(s). AV has garnered attention as a viable alternative to SCR in various contexts; its simplicity and intuitive nature have contributed to its appeal among both scholars and practitioners. As a result, numerous organizations have adopted AV for their decision-making processes (see [START_REF] Regenwetter | Approval voting and positional voting methods: Inference, relationship, examples[END_REF].
Numerous studies have extensively analyzed the merits and limitations of both SCR and AV. Prominent works, such as those by [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF] and the comprehensive Handbook of Approval Voting edited by [START_REF] Laslier | Handbook on Approval Voting, Studies in Choice and Welfare[END_REF], delve into these voting rules and offer insights into their strengths and weaknesses. The objective of such analyses is to provide valuable information and insights to the public and decision-makers, facilitating informed discussions and considerations regarding the selection or implementation of an appropriate voting rule in the context of electoral reform. Through rigorous analysis and empirical evidence, they aim to inform public opinion and provide decision-makers with valuable guidance in choosing the "best" voting rule suited to their specific needs and goals. [START_REF] Norris | Choosing electoral systems: Proportional, majoritarian and mixed systems[END_REF] points out, following a long tradition of analysis of the influence (real or supposed) of voting systems on political systems, that an electoral reform is never trivial; because, depending on the society in which it is implemented, it can lead to unstable political systems, result in strong political polarization, give the role of kingmaker to certain groups or favor certain types of candidates. Indeed, any electoral reform should be driven by well-defined objectives and necessitate thoughtful deliberations regarding both its goals and the normative criteria that the chosen voting rule should meet. In addition, it is crucial that the voting rule implemented is simple and easily understandable for voters. A voting system that is complex or difficult to understand may discourage voter participation and undermine the legitimacy of the electoral process. Simplicity and transparency in the voting rule help ensure that voters can easily grasp the mechanics of the system, have confidence in the process, and make informed choices. Unfortunately, no clear consensus seems to emerge in the literature on the possible superiority of one rule over the others. Arguments in favor of different voting rules are diverse and numerous, reflecting the complexity and multifaceted nature of the topic. Given this lack of consensus, one potential solution is to seek a compromise between the families of voting rules. This is the choice that [START_REF] Brams | Voting systems that combine approval and preferences[END_REF] and [START_REF] Sanver | Approval as an intrinsic part of preference[END_REF] seem to have made.

1 Scoring rules play a significant role in various sports disciplines, such as figure skating, diving, and gymnastics competitions. These rules are often variants of the weighted and/or truncated Borda rule and are designed to establish rankings and determine winners based on a fair and balanced evaluation of participants' performances. In the realm of sports, scoring rules serve as a vast and largely untapped resource, offering numerous remarkable and significant examples of their application.

Fallback Voting and Preference Approval Voting

[START_REF] Brams | Voting systems that combine approval and preferences[END_REF] and [START_REF] Sanver | Approval as an intrinsic part of preference[END_REF] introduced two voting rules reconciling ranking-based decisions with approval-based decisions: 2 Preference Approval Voting (PAV) and Fallback Voting (FV).
Under PAV, voters rank all the running candidates and distinguish the ones they approve of from those they disapprove of. If there is no more than one alternative with the majority of approvals (greater than half of the number of voters), PAV picks the AV winner; when more than one candidate is approved by more than half of the electorate, PAV picks the one who is preferred by the majority among them; in case of a majority cycle among these candidates, it picks the one with the highest number of approvals among them. Under FV, voters first indicate all the candidates they approve of (this can range from no candidates to all) and then they rank only these candidates; each level of rankings (of the approved candidates) is considered and if at a given level a majority of voters agree on one highest-ranked candidate, this candidate is the FV winner. 3 The procedures implemented under PAV or FV to determine the winner are defined in such a way as to satisfy both the principle of the "most approved" and that of the "most preferred". However, the above informal definitions of PAV and FV do not appear to take into account situations where ties may arise; in fact, there is no reference made to any tie-breaking rules. Further discussion on this point will be provided in Section 2. Formal definitions of PAV and FV are provided later. [START_REF] Brams | Voting systems that combine approval and preferences[END_REF] have highlighted several desirable properties and drawbacks of FV and PAV. They showed among other things that: FV, PAV, and AV may all give different winners for the same profile; a unanimously approved candidate may not be a FV or a PAV winner; a least-approved candidate may be a FV or PAV winner; a PAV winner may be different from the winners under BR; FV and PAV may fail to pick the Condorcet winner when he exists. Given the limited number of properties considered, the analysis of [START_REF] Brams | Voting systems that combine approval and preferences[END_REF] does not allow a clear judgement on the superiority or not of PAV and FV compared to AV or SCR. It is striking to note that since PAV and FV were introduced, almost no work has been addressed to these rules, contrary to what was the case for AV or SCR. 4 In this paper, our objective is to conduct a comprehensive analysis of the properties of PAV and FV in order to gain deeper insights into these voting rules. Our aim is to draw meaningful conclusions about the strengths and limitations of PAV and FV based on our analysis. Can we say that these rules are a "good" compromise between AV and SCR? Are they better? If the answer is yes, then the choice of PAV or FV as a replacement for SCR or AV would then be justified and these rules would therefore be recommendable for real-world use.
To achieve our goal, this paper is structured into two phases. The first phase focuses on extending the analysis of FV and PAV based on the groundwork laid by [START_REF] Brams | Voting systems that combine approval and preferences[END_REF]. We delve into additional properties of FV and PAV, exploring their performance in relation to other desirable properties commonly used in evaluating voting rules. Many of these properties have been extensively studied in the normative evaluation of SCR and AV. On this basis, we believe that it will henceforth be easier to decide on a comparison between PAV, FV and these rules. It is fair to say that we cannot here review, in an exhaustive way, all the normative properties encountered in the literature. The properties on which we base our study are the following: the Condorcet principle, social acceptability, efficient compromise, Pareto optimality, cancellation, reinforcement, homogeneity, clone independence, and the independence criterion. When these criteria are satisfied, they guarantee a certain consistency between individual preferences and the collective choice.
Each of these criteria will be presented in detail later. Before delving into our analysis, it is important to provide a brief overview of some of the properties we will be discussing. The Condorcet principle allows us, on the one hand, to ensure that when a Condorcet winner (a candidate preferred to any other candidate by more than half of the voters) exists, then that candidate is elected; on the other hand, it allows us to avoid the election of the Condorcet loser (a candidate to whom more than half of the voters prefer any other candidate) when he exists. Social acceptability suggests that a candidate should be elected when the number of voters who rank him among the half of the candidates they prefer is at least as large as the number of voters who rank him in the least preferred half. The efficient compromise principle advocates the election of candidate(s) receiving the highest quantity of support at some efficient level of quality, the quality of support being defined in terms of a candidate's rank in the order of voters' preferences. As we will see later, the non-satisfaction of some of these properties is presented in the literature as unacceptable for a democratic voting rule. It follows that the choice of a voting rule is consequential. We show among other things that FV and PAV are Pareto optimal, and that they always elect the Absolute Condorcet winner when such a candidate exists; 5 and we determine the conditions under which these rules satisfy the reinforcement criterion, and under which they are not vulnerable to the No-Show paradox. From our analysis, it appears that FV and PAV satisfy and respect some of the properties that AV fails.
A great deal of work has been done in recent years on the probability of AV electing the Condorcet Winner (or the Condorcet Loser) when he exists.
In this sense, these studies have made notable comparisons between AV and the three most popular SCR (PR, NPR and BR). We can quote in this respect the works of [START_REF] Diss | On the condorcet efficiency of approval voting and extended scoring rules for three alternatives[END_REF][START_REF] El Ouafdi | On the Condorcet efficiency of evaluative voting (and other voting rules) with trichotomous preferences[END_REF][START_REF] Gehrlein | The Condorcet efficiency of approval voting and the probability of electing the Condorcet loser[END_REF], 2015), and [START_REF] Gehrlein | A note on Approval Voting and electing the Condorcet loser[END_REF]. The second part of this paper will lead in a similar direction. This will provide us with the opportunity to conduct a comparative evaluation of AV, FV, PAV and SCR. First, for voting situations with three candidates and an electorate tending to infinity, we evaluate the probabilities of agreement between AV, FV, and PAV; this analysis is extended to the three scoring rules PR, NPR, and BR. We are also interested in the probabilities of satisfaction or violation of the Condorcet criteria. One advantage that FV and PAV have over AV is their reliance on the principle of "most preferred" candidates. This advantage becomes particularly evident when it comes to electing the Condorcet winner, should one exist. It will therefore be necessary to be aware of the amplitude of this advantage. To do so, our calculations adopt the impartial and anonymous culture assumption. This assumption will be defined later. Our computation analysis teaches us that, for three-candidate elections, the combination of approvals and rankings in FV and PAV brings them closer to SCR in terms of agreement, as opposed to AV; furthermore, they perform better in terms of compliance with the Condorcet criteria than some SCR.
The rest of the paper is organized as follows: Section 2 is devoted to basic definitions. Section 3 presents our results on the properties of FV and PAV.
We provide our probabilistic results in Section 4. Section 5 concludes.
Notation and definitions
Consider a set of n (n ≥ 2) non-abstaining individuals N = {1, 2, . . . , i, . . . , n} who vote sincerely on C = {a, b, c, . . . , m} a set of m (m ≥ 3) candidates. It is assumed that the rankings provided by the voters on C are asymmetric, meaning that there are no ties in the rankings. Furthermore, we assume that the approvals of the voters are monotonic with respect to their rankings. This means that if a voter approves a and ranks b ahead of a, this implies that he also approves b. For example, the ranking a ≻ b ≻ c (or simply abc) means that a is ranked ahead of b which is ahead of c and a and b are both approved while c is disapproved.
As voters' inputs are both rankings and approvals, a voting situation is therefore a κ-tuple π = (n 1 , n 2 , . . . , n t , . . . , n κ ) that indicates the total number n t of voters for each of the κ rankings on C such that κ t=1 n t = n; given the assumptions made above, κ = (m + 1)!. Given π, we denote by n ab (π) (or simply n ab ) the number of voters who rank a before b. Candidate a is majority preferred to b if n ab > n ba . We say that candidate a is the Condorcet winner if
n ab > n ba ∀b ∈ C \ {a}; candidate a is the Condorcet loser if n ab < n ba ∀b ∈ C \ {a}.
A candidate a is an Absolute Condorcet winner (resp. an Absolute Condorcet loser) if he is ranked first (resp. last) by more than half of the voters.
Given the rankings and approvals of the voters, we denote by S_l(a, π) or simply S_l(a) the total number of approvals of candidate a when rankings of level l are considered (l = 1, 2, . . . , m); we say that candidate a is majority approved at level l if S_l(a) > n/2.
Let us now define each of the voting rules under consideration here.
Approval Voting (AV): Under this rule, voters can vote for (approve of) as many candidates as they wish. We denote by AV(a, π) the total number of approvals for candidate a given π. Candidate a is the AV winner if AV(a, π) > AV(b, π) ∀b ∈ C \ {a}. Notice that AV(a, π) = S m (a).
Preference Approval Voting (PAV): According to [START_REF] Brams | Voting systems that combine approval and preferences[END_REF], PAV is determined by two rules and proceeds as follows:
Rule 1 : The PAV winner is the AV winner if i. no candidate receives a majority of approval votes (i.e. is approved by more than half of the electorate)
ii. exactly one candidate receives a majority of approval votes.
Rule 2 : In the case that two or more candidates receive a majority of approval votes, i. The PAV winner is the one among these candidates who is preferred by a majority to every other majority-approved candidate.
ii. In the case of a cycle among the majority-approved candidates, then the AV winner among them is the PAV winner.
Fallback Voting (FV): Following [START_REF] Brams | Voting systems that combine approval and preferences[END_REF], FV proceeds as follows:
1. Voters indicate all the candidates they approve of, and they rank only these candidates.
2. The approvals of level 1 (highest-ranked approved candidates) are considered first: if a single candidate is approved at this level by a majority of voters, he is the FV winner; if several candidates obtain such a majority, the one with the largest number of level-1 approvals is the FV winner.
3. If no candidate is majority approved at level 1, the approvals of the next level are added, and the descent continues, level by level, until at least one candidate is majority approved; the FV winner is then the majority-approved candidate with the highest number of approvals at that level.
4. If no candidate is ever majority approved, the candidate with the largest total number of approvals (that is, the AV winner) is the FV winner.
It is worth noting that the definitions of PAV and FV that we have just presented do not explicitly address situations where ties occur. This is especially true in cases where the number of voters is even. In such situations, it becomes necessary to describe a method for resolving ties. Various tie-breaking methods can be considered, including the random tie-breaking method suggested by [START_REF] Sanver | Approval as an intrinsic part of preference[END_REF]. However, there may be several issues associated with the use of a tie-breaking rule. For the same situation, different tie-breaking rules can lead to different results. The use of a tie-breaking rule can result in the violation of certain normative properties; or it may lack transparency, making it difficult for voters and stakeholders to comprehend and evaluate the decision-making process, which can undermine the legitimacy and acceptance of the outcome. Then, the use of a tie-breaking rule should be carefully evaluated, taking into account its impact on normative properties, necessity, subjectivity, potential manipulation, and transparency. In the context of this paper, it is not necessary to explore in a particular way the cases where ties can occur.
Since the second part of the paper will have to include the three most popular scoring rules, let us define them at the outset here.
Plurality rule (PR): This rule picks the candidate who is the most ranked at the top. We denote by PR(a, π) the Plurality score of candidate a. Notice that
PR(a, π) = S 1 (a). Candidate a is the PR winner if PR(a, π) > PR(b, π) ∀b ∈ C \ {a}.
Negative Plurality rule (NPR): Under this rule, the winner is determined based on the candidate who receives the fewest last-place rankings from the voters. We denote by NPR(a, π) the number of last places (the Negative Plurality score) of candidate a, this candidate is the NPR winner if
NPR(a, π) < NPR(b, π) ∀b ∈ C \ {a}.
Borda rule (BR): BR gives m − t points to a candidate each time he is ranked t-th; BR(a, π), the Borda score of a candidate, is the sum of the points received. Candidate a is the BR winner if BR(a, π) > BR(b, π) ∀b ∈ C \ {a}.
In the context of a single-winner election, it is possible for multiple candidates to obtain the same score under AV, PR, NPR, or BR. In such situations, a tie-break rule becomes necessary to determine the winner among the candidates with equal scores. In this paper, the setting is such that we will not need to use a tie-break rule.
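To make the preceding definitions concrete, here is a small sketch that computes the AV, FV, and PAV winners of a profile. A ballot is encoded as a ranking together with the number of top-ranked candidates that are approved (consistent with the monotonicity assumption of Section 2); the encoding, the helper names, and the illustrative profile are ours and purely hypothetical, and whenever several candidates are tied the first one in the candidate list is returned, since ties play no role in the setting of this paper.

def approvals(ballots):
    # Total number of approvals AV(x) of each candidate.
    av = {}
    for ranking, n_app, w in ballots:
        for c in ranking[:n_app]:
            av[c] = av.get(c, 0) + w
    return av

def level_scores(ballots, level):
    # S_level(x): approvals of x when only the first `level` ranks are considered.
    s = {}
    for ranking, n_app, w in ballots:
        for c in ranking[:min(level, n_app)]:
            s[c] = s.get(c, 0) + w
    return s

def margin(ballots, x, y):
    # n_xy - n_yx
    return sum(w if r.index(x) < r.index(y) else -w for r, _, w in ballots)

def fv_winner(ballots, candidates):
    n = sum(w for _, _, w in ballots)
    for level in range(1, len(candidates) + 1):
        s = level_scores(ballots, level)
        majority = [c for c in candidates if s.get(c, 0) > n / 2]
        if majority:                                    # descent stops at the first such level
            return max(majority, key=lambda c: s.get(c, 0))
    av = approvals(ballots)
    return max(candidates, key=lambda c: av.get(c, 0))  # no majority at any level: AV winner

def pav_winner(ballots, candidates):
    n = sum(w for _, _, w in ballots)
    av = approvals(ballots)
    majority = [c for c in candidates if av.get(c, 0) > n / 2]
    if len(majority) <= 1:                              # Rule 1i / 1ii: the AV winner
        return max(candidates, key=lambda c: av.get(c, 0))
    for c in majority:                                  # Rule 2i: majority-preferred among them
        if all(margin(ballots, c, d) > 0 for d in majority if d != c):
            return c
    return max(majority, key=lambda c: av.get(c, 0))    # Rule 2ii: cycle, AV winner among them

# Hypothetical profile: (ranking, number of approved candidates, number of voters).
profile = [(("a", "b", "c"), 2, 5), (("b", "a", "c"), 1, 5), (("c", "a", "b"), 2, 1)]
print(fv_winner(profile, ["a", "b", "c"]), pav_winner(profile, ["a", "b", "c"]))  # prints: b a

On this hypothetical profile FV and PAV already disagree, which illustrates that the two rules can depart from one another even in small electorates.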
We can now review the properties of FV and PAV.
Normative properties of FV and PAV
As previously mentioned, [START_REF] Sanver | Approval as an intrinsic part of preference[END_REF] and [START_REF] Brams | Voting systems that combine approval and preferences[END_REF] have identified and analyzed several properties of FV and PAV. They showed that these rules are monotonic, more precisely, they are approval-monotonic and rank-monotonic. A voting rule is approval-monotonic (resp. rank-monotonic) if a class of voters, by approving of a new candidate (resp. by raising a candidate in their ranking) -without changing their approval of other candidates -never hurts that candidate and may help the candidate get elected. In this section, we will review additional properties that FV and PAV may either satisfy or fail to satisfy. By examining these properties, we aim to gain a comprehensive understanding of the behavior and performance of FV and PAV as voting rules. This analysis will provide insights into their advantages and limitations in comparison to other voting systems.
Condorcet principle
We know that AV may fail to pick the (Absolute) Condorcet winner when he exists (see [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF]. According to [START_REF] Brams | Voting systems that combine approval and preferences[END_REF], FV and PAV may fail to elect the Condorcet winner when he exists; through Propositions 1 and 2, we refine this result.
Proposition 1 When AV selects the Condorcet winner, this candidate is also the PAV winner, but the reverse is not always true.
Proof By definition, PAV always elects the AV winner under Rule 1; this may not be the case under Rule 2. So, for the proof, we only need to focus on Rule 2. Assume that candidate a is both the Condorcet winner and the AV winner. Let us also assume that candidate b (b ̸ = a) is the PAV winner under Rule 2; this means that (i)
AV(b, π) > n/2 > AV(a, π) or (ii) AV(a, π) > n/2 and AV(b, π) > n/2. It is obvious that (i)
clearly contradicts that a is the AV winner. By definition, b cannot win under (ii)
since n ab > n ba , so a wins. Thus, if AV selects the Condorcet winner, this is also the case for PAV. Let us exhibit a profile to show that the reverse is not always true.
Consider the following profile with 3 candidates and 11 voters:
5 : a ≻ b ≻ c 5 : b ≻ a ≻ c 1 : c ≻ a ≻ b
With this profile, it is easy to see that b is the AV winner while a is both the Condorcet winner and the PAV winner. □

Proposition 2 FV and PAV always elect the Absolute Condorcet winner when he exists.

The fact is that when there is an Absolute Condorcet winner, this candidate is the winner under FV, PAV, and PR. As noted further in the paper, FV and PAV exhibit insensitivity to certain paradoxes in domains where an Absolute Condorcet winner exists. The story is quite different on the domain where there is an (Absolute) Condorcet loser. We know that AV can elect the (Absolute) Condorcet loser (see [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF]). We also know from [START_REF] Kamwa | Condorcet Efficiency of the Preference Approval Voting and the Probability of Selecting the Condorcet Loser[END_REF] that PAV may pick the Condorcet loser when he exists. To our knowledge, nothing is known concerning FV. Proposition 3 tells us more on this.
Condorcet loser (see [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF]. We also know from [START_REF] Kamwa | Condorcet Efficiency of the Preference Approval Voting and the Probability of Selecting the Condorcet Loser[END_REF] that PAV may pick the Condorcet loser when he exists. To our knowledge, nothing is known concerning FV. Propositions 3 tells us more on this.
Proposition 3 FV and PAV may elect the (Absolute) Condorcet loser when he exists. When PAV elects the (Absolute) Condorcet loser, this candidate is also the AV winner but the reverse is not true. When FV elects the Absolute Condorcet loser, this candidate is also the AV winner but the reverse is not always true.
Proof Consider the following profile with 3 candidates and 11 voters.
5 : a ≻ b ≻ c 3 : b ≻ c ≻ a 3 : c ≻ b ≻ a
With this profile it is easy to see that a is the Absolute Condorcet loser and that he is both the AV winner, the FV winner and the PAV winner. So, FV and PAV may elect the (Absolute) Condorcet loser when he exists.
By definition, PAV can elect a (Absolute) Condorcet loser only under Rule 1; as Rule 1 of PAV is equivalent to AV, it follows that for a given profile, if PAV elects the (Absolute) Condorcet loser, he is also the AV winner. In the following profile, a is both the Absolute Condorcet loser and the AV winner but b is the PAV winner.
5 : a ≻ b ≻ c 3 : a ≻ c ≻ b 5 : b ≻ c ≻ a 4 : c ≻ b ≻ a
Assume that candidate a is the Absolute Condorcet loser; it follows that S_l(a) < n/2 for 1 ≤ l ≤ m − 1. Candidate a cannot be the FV winner on this range. He can only be elected at l = m; if so, he has the highest AV score and is therefore also the AV winner. The above profile is sufficient to show that the reverse is not always true since a is both the Absolute Condorcet loser and the AV winner while c is the FV winner. □
Efficient compromise
The efficient compromise axiom was introduced by Özkal- [START_REF] Özkal-Sanver | Efficiency in the degree of compromise: a new axiom for social choice theory[END_REF] as a trade-off between the quantity and quality of support that a candidate may receive; the quantity refers to the number of voters supporting a candidate, and the quality of support is defined in terms of a candidate's rank in the order of voters' preferences. According to [START_REF] Merlin | Compromise rules revisited[END_REF], for any profile, the efficient compromises are candidates receiving the highest quantity of support at some efficient level of quality. A voting rule is said to satisfy the efficient compromise axiom if and only if it always picks efficient compromises.
Following Özkal- [START_REF] Özkal-Sanver | Efficiency in the degree of compromise: a new axiom for social choice theory[END_REF], the Plurality rule meets the efficient compromise; this is also the case for the q-Approval Fallback Bargaining7
for any q ∈ {1, 2, . . . , n} while the Borda rule and all the Condorcet consistent rules do not. What is more, [START_REF] Merlin | Compromise rules revisited[END_REF] showed that if the set of efficient compromises contains only one candidate, all the scoring rules will pick this candidate. Proposition 4 tells us that AV and PAV may fail the efficient compromise axiom except on the domain where there is an absolute majority winner.
Proposition 4 FV, PAV, and AV do not satisfy the efficient compromise axiom.
FV and PAV always satisfy the efficient compromise axiom over the domain where there is an Absolute Condorcet winner.
Proof To show that FV, PAV, and AV do not meet the efficient compromise axiom, let us consider the following profile8 with four candidates {a, b, c, d} and seven voters;
1 : a ≻ b ≻ d ≻ c 2 : a ≻ c ≻ d ≻ b 2 : b ≻ c ≻ d ≻ a 1 : c ≻ b ≻ d ≻ a 1 : d ≻ c ≻ b ≻ a
With this profile, the reader can check that {a, c, d} is the set of efficient compromises while b is the winner under AV, FV, and PAV.
Notice that if an Absolute Condorcet winner exists he is also an efficient compromise. As previously mentioned, when an Absolute Condorcet winner exists, both FV and PAV are equivalent to PR. Given that PR satisfies the efficient compromise axiom, it will select the Absolute Condorcet winner, just like FV and PAV.
□
Social (un)acceptability
In the search for a certain consensus around a candidate, [START_REF] Mahajne | The socially acceptable scoring rule[END_REF] have introduced the concept of social acceptability. They say that a candidate is socially acceptable if the number of voters who rank him among their most preferred half of the candidates is at least as large as the number of voters who rank him among the least preferred half. [START_REF] Mahajne | The socially acceptable scoring rule[END_REF] showed that there always exist at least one socially acceptable candidate in any profile; and they show that there exists a unique scoring rule that always elects such a candidate, the Half Accepted-Half Rejected rule (HAHR).9 In contrast to a socially acceptable candidate, a candidate is said to be socially unacceptable if the number of individuals who rank him among their least preferred half of the candidates is at least as large as the number of voters who rank him among the most preferred half.
Proposition 5 AV, FV, and PAV may not select a socially acceptable candidate and they may select a socially unacceptable candidate. Following Proposition 2, within the domain where an Absolute Condorcet winner exists, FV and PAV always select a socially acceptable candidate and never select a socially unacceptable candidate.
Proof Consider the following profile with 3 candidates and 6 voters.
2 : a ≻ c ≻ b 1 : a ≻ b ≻ c 1 : b ≻ a ≻ c 2 : c ≻ b ≻ a
In this profile, a is a socially acceptable candidate while b is socially unacceptable and b is the winner under AV, FV, and PAV.
It is obvious that, when he exists, an Absolute Condorcet winner is also a socially acceptable candidate. By Proposition 2, FV and PAV always select this candidate.
Such a candidate cannot be socially unacceptable; he is still elected in the presence of a socially unacceptable candidate. But this may not be the case for AV: to see this, just add a voter with a ≻ b ≻ c ; it follows that a is the absolute majority winner and therefore socially acceptable while the AV winner is b. □
Cancellation property
Before going further, let us raise a point about PAV. By definition, Rule 2 of PAV relies on pairwise comparisons to decide the winner; but what if all the majority duels between the majority-approved candidates end up in tie?
In such a case, should all candidates be declared elected, or only the one(s)
with the highest AV score? This situation does not seem to have been taken into account by [START_REF] Brams | Voting systems that combine approval and preferences[END_REF]. In such a scenario, not choosing all the candidates involved implies a violation of the cancellation criterion. The cancellation condition requires that when all the majority comparisons end up in a tie, all the candidates should be selected [START_REF] Young | An axiomatization of Borda's rule[END_REF]. Admittedly it is a bit difficult to apply the cancellation property to AV, because this rule does not fundamentally depend on rankings. Proposition 6 tells us that FV and PAV fail the cancellation criterion and that this also the case for AV when it is based on rankings.
Proposition 6 AV, FV, and PAV do not meet the cancellation property.
Proof Consider the following profile with 3 candidates and 4 voters.
2 : a ≻ b ≻ c 2 : c ≻ b ≻ a
We can see in this profile that all the pairwise comparisons end up in ties while candidate b is the winner of AV, FV, and PAV. So, AV, FV, and PAV fail the cancellation property. □
Pareto optimality
In a given voting situation, candidate a Pareto dominates candidate b if all the voters strictly prefer a to b. A candidate is said to be Pareto-optimal if there is no other candidate that dominates him. According to [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF], the election of a candidate a is not tolerable when there is another candidate b that all voters rank before him. [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF] drives the point home by arguing that a voting rule that can elect a Pareto-dominated candidate should be disqualified no matter how low the frequency. A voting rule meets the Pareto criterion if for all voting profiles it never elects a Pareto-dominated candidate.
According to [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF], AV may elect a Pareto-dominated candidate.
Proposition 7 tells us that this is not the case for FV and PAV.
The reinforcement condition
According to the reinforcement condition10 (Myerson, 1995) when an electorate is divided in two disjoint groups of voters
N 1 (|N 1 | = n 1 ) and N 2 (|N 2 | = n 2 ) such that N 1 ∩ N 2 = ∅ and N 1 ∪ N 2 = N (|N | = n 1 + n 2 = n),
and
the winner is the same for each group, this outcome will remain unchanged when both groups of voters are merged. It is known that AV, PR, NPR, and BR meet the reinforcement condition (see [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF]. Proposition 8 tells us that FV and PAV do not meet the reinforcement condition and it characterizes when this is (not) the case.
Proposition 8 Assume that an electorate is divided in two disjoint groups of voters N 1 and N 2 such that the winner is the same for each group.
Considering that PAV is defined by four rules (Rule 1i, Rule 1ii, Rule 2i, and Rule 2ii), it always meets the reinforcement criterion if the winner in each of the two groups of voters is determined by Rule 1i or Rule 1ii; this is also the case when the winner is determined in one group by Rule 1i and in the other group by Rule 1ii. In the other cases, PAV may fail the reinforcement condition.
FV meets the reinforcement condition if the winner in each group is deter- mined at the same level of rankings. In the other cases, it may fail the reinforcement condition.
Proof See Appendix. □
Homogeneity
Given the voting outcome on a voting profile, if duplicating this profile λ times (λ > 1, λ ∈ N) changes the result, we say that the homogeneity property is not satisfied. The violation of the homogeneity property is a major challenge for collective decision rules [START_REF] Nurmi | A comparison of some distance-based choice rules in ranking environments[END_REF]. It is obvious that AV is homogeneous since duplicating a population also duplicates the approvals in the same magnitude. Proposition 9 tells us the same thing concerning FV and PAV.
Proposition 9 FV and PAV are homogeneous.
Proof Suppose we duplicate a profile π, λ times. On the resulting profile, given a candidate x, we have S l (x, λπ) = λS l (x, π), AV(x, λπ) = λAV(x, π), and nxy(λπ) = λnxy(π). It then follows that if a candidate wins under FV at level l in π, he also wins at the same level in λπ; we reach the same conclusion with PAV. Thus, duplicating a profile does not change the outcome under FV and PAV. □
The No-Show paradox and the Truncation paradox
The No-Show paradox describes a situation under which some voters may do better to abstain than to vote since abstaining may result in the victory of a more preferable or desirable candidate [START_REF] Doron | Single Transferable Vote: An example of a Oerverse Social Choice Function[END_REF][START_REF] Fishburn | Paradoxes of preferential voting[END_REF]. The Plurality rule, the Borda rule, and Approval voting are among the few voting rules not vulnerable to the No-Show paradox [START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF]. It is known that the vulnerability of a voting rule to the No-Show paradox leads to its vulnerability to the Truncation paradox but the reverse is not always true [START_REF] Nurmi | Comparing Voting Systems[END_REF]. The Truncation paradox occurs when some voters may reach a more preferred outcome by submitting a sincere but incomplete ranking (Fishburn andBrams, 1983, 1984). According to [START_REF] Brams | The AMS nominating system is vulnerable to truncation of preferences[END_REF],
AV is sensitive to the Truncation paradox; this is also the case for NPR and BR but not for PR.11 Proposition 10 characterizes the vulnerability of FV and PAV to the No-Show paradox.
Proposition 10 PAV is vulnerable to the No-Show paradox only when the winner is determined by Rule 2i. FV is not vulnerable to the No-Show paradox only when the winner is determined at level l = 1 or l = m. Thus, FV and PAV are vulnerable to the Truncation paradox.
Proof See Appendix. □
Independence of clones
Following [START_REF] Tideman | Independence of clones as a criterion for voting rules[END_REF] We are now going to summarize all the above results in Table 1. In this table, we recap our findings about PAV, AV, and FV as well as those of [START_REF] Brams | Voting systems that combine approval and preferences[END_REF] and [START_REF] Sanver | Approval as an intrinsic part of preference[END_REF]; besides FV and PAV, we include PR, NPR, and BR. The fact that these rules satisfy or do not satisfy one of the criteria retained here comes from results of the literature (see [START_REF] Nurmi | Comparing Voting Systems[END_REF][START_REF] Nurmi | Voting Paradoxes and How to Deal with Them[END_REF][START_REF] Felsenthal | Review of paradoxes afflicting procedures for electing a single candidate[END_REF]. In Table 1, a "Yes" means that the voting rule meets the supposed criterion 12 and a "No" means it does not.
Table 1 Normative properties of the rules
On the basis of the normative criteria used in our analysis, it appears that FV and PAV satisfy and fail the same criteria; they possess two properties that AV does not: Pareto optimality and the fact of always electing the Absolute Condorcet winner when he exists. AV, for its part, meets two criteria that FV and PAV do not: reinforcement and non-vulnerability to the No-Show paradox.
Approval-based rules, compared to score-based rules, satisfy fewer criteria.
Another way to compare these sets of rules would be to check the frequencies for the criteria they violate. This is what we try to do in the next section, with particular attention to the Condorcet criteria.
Computational analysis
Our aim in this section is to evaluate, for voting situations with three candidates and electorates tending to infinity, the probabilities of some voting events. We have chosen to only present here the main messages that stand out from our calculations, while relegating the details of our calculation approach to Appendix C.13 So, we formally define, for three-candidate elections, the preferences and voting rules in Appendix C.1. The impartial and anonymous culture assumption that we assume for our computations is presented in Appendix C.2.
In our calculations, we focus on the following voting events, already described above:
the agreement between the rules;
FV and PAV may pick the least-approved candidate;
FV or PAV may not pick a unanimously approved candidate;
the satisfaction(resp. the violation) of the Condorcet winner criterion (resp.
the Condorcet loser criterion);
We extend our analysis to the three popular scoring rules (PR, NPR, and BR) and consider comparisons with FV and PAV. This extension is justified by the fact that we have pointed out above that, in some configurations, FV is very close to PR.
Concerning the agreement between the rules under consideration, Table 2 summarizes the probabilities we obtained. According to [START_REF] Ju | Collective choices for simple preferences[END_REF] and [START_REF] Xu | Axiomatizations of approval voting[END_REF], when voters have dichotomous preferences, AV always elects the Condorcet winner when he exists. This is not always the case when voters have rather strict preferences or when indifference is allowed, as shown by [START_REF] Diss | On the condorcet efficiency of approval voting and extended scoring rules for three alternatives[END_REF], Gehrlein andLepelley (1998, 2015),
and [START_REF] Kamwa | Condorcet Efficiency of the Preference Approval Voting and the Probability of Selecting the Condorcet Loser[END_REF]. Considering three-candidate elections with a certain degree of indifference under the extended impartial culture condition, 14 Diss et al. 3. In this table, we denote by CE(R) (resp. CL(R)) the Condorcet efficiency (resp. the 14 Under the impartial culture condition [START_REF] Guilbaud | Les théories de l'intérêt général et le problème logique de l'aggrégation[END_REF] it is assumed that each voter chooses his preference (randomly and independently) on the basis of a uniform probability distribution across all strict orders. The extended impartial culture condition allows dichotomous preferences with complete indifference between two or more candidates. 15 The extended anonymous impartial culture condition allows dichotomous preferences with complete indifference between two or more candidates. 16 Recall that PR always elects the Absolute Condorcet winner when he exists. Fallback Voting and Preference Approval Voting probability of electing the Condorcet loser) given the voting rule R; and we define ACE(R) and ACL(R) similarly. (2020) obtain in their setting. 2016) achieve in their respective frameworks. With a limiting probability of nearly 0.01%, PAV performs significantly better than FV whose probability is raised to nearly 3%. Regarding the election of an Absolute Condorcet loser, our results show that among our rules AV is the most likely to elect such a candidate; PAV performs better than FV which performs better than PR.
To refine the comparisons, we may assess more closely how each of the probabilities in Table 3 reacts to the proportion α = n1+n2+n3+n4+n5+n6 n of voters who approve only one candidate. When α = 1, AV and PR are equivalent. For some values of α, we report in Table 4 the probabilities CE(R, α), ACE(R, α), CL(R, α), and ACL(R, α) as functions of α.
[Table 4 fragment: ACE(FV, α) = ACE(PAV, α) = ACE(PR, α) = 1 for every value of α, while ACL(NPR, α) = ACL(BR, α) = 0 for every value of α.]
Table 4 demonstrates that the probabilities vary depending on the value of α. We note that for α = 1, CE(AV, 1) = CE(F V, 1) = CE(P AV, 1) = CE(P R, 1)
and ACE(AV, 1) = 1. In the scenario where the electorate consists solely of voters who approve exactly one candidate, PR, AV, PAV, and FV exhibit identical performance according to the (Absolute) Condorcet winner criterion.
PAV appears to surpass all other rules in terms of Condorcet efficiency for 0 ≤ α < 3/4; for 3/4 ≤ α ≤ 1, BR dominates over other rules. PR tends to dominate FV for 0 ≤ α < 1/2 while we get the reverse for 1/2 ≤ α ≤ 1. PR dominates AV for all α; AV is also dominated by NPR for 0 ≤ α < 1/2. As for electing the Absolute Condorcet winner, AV tends to dominate NPR for α ≥ 1/3 and it dominates BR for α ≥ 2/3.
For any value of α, NPR appears to be the rule most likely to elect the Condorcet loser. It appears that CL(AV, α) decreases with α while CL(P AV, α)
increases with α. CL(F V, α) tends to increase for α going from 0 to reach its maximum at α = 1/2 then decreases. CL(P R, α) and CL(N P R, α) tend to grow for 0 ≤ α < 1/2 then to decrease for 1/2 ≤ α < 1. For α = 1, we find that AV, PR, FV, and PAV have the same probability to elect the Condorcet loser.
For 0 ≤ α ≤ 1/2, AV appears to be the rule most likely to elect an Absolute Condorcet loser; over this interval, ACL(AV, α) tends to decrease with α. We also note that for 0 ≤ α ≤ 1/2, FV and PAV never elect an Absolute Condorcet loser. For 1/2 < α < 1, PR is the most likely to elect an Absolute Condorcet loser; it is followed by AV while PAV performs better than FV. For α = 1, AV, FV, PAV, and PR have the same probability (about 2.47%) of electing the Absolute Condorcet loser.
Concluding remarks
The first objective of this paper was to further develop the analysis of [START_REF] Brams | Voting systems that combine approval and preferences[END_REF] regarding the normative properties of FV and PAV. This is how we managed to show that FV and PAV are Pareto optimal as they never elect a Pareto-dominated candidate; FV and PAV are also homogeneous; FV and PAV always elect the Absolute Condorcet winner when he exists; and that on the domain where there is an Absolute Condorcet winner, these rules always elect a socially acceptable candidate, they never elect a socially unacceptable candidate, and they are resistant to manipulation by clones. Nonetheless, these rules do not meet the Cancellation property or the reinforcement criterion and they are vulnerable to the No-Show paradox and to the Truncation paradox.
We managed to find some conditions under which these rules always meet the reinforcement criterion or escape the No-Show paradox. Our analysis shows that FV and PAV tend to deliver on the promise of being rules that could reconcile the advocates of score rules with those of approval voting. FV and PAV share the simplicity that characterizes AV, yet with scoring rules they share the constraint of ranking candidates, which can be a daunting task when there is a large number of candidates.
voters and with Rule 2ii in an other group, or when a candidate wins through Rule 2i (resp. 2ii) in both groups of voters.
We can give a summary that reflects whether or not the criterion is met as follows:
[Summary table: rows give the PAV rule (1i, 1ii, 2i, 2ii) determining the winner in N_1 and columns the rule determining it in N_2; each entry indicates whether the reinforcement criterion is met.]
(i) If a is the only majority-approved candidate in each group at a given level l, he remains elected when both groups are merged.
(ii) At l, a has the greatest score among the majority-approved candidates in each group. This means that S_l^1(a) > S_l^1(b) > n_1/2 ≥ S_l^1(c) and S_l^2(a) > S_l^2(b) > n_2/2 ≥ S_l^2(c). What we have in (i) implies that c can never win when the two groups merge.
To complete the proof, let us use some profiles to show that when the same FV winner is determined in two groups at two different levels, he may not remain elected when both groups merge.
Profile 1: 2 : a ≻ b ≻ c; 1 : a ≻ b ≻ c; 2 : b ≻ c ≻ a.
Profile 2: 1 : a ≻ b ≻ c; 1 : b ≻ c ≻ a; 2 : a ≻ b ≻ c; 1 : c ≻ b ≻ a; 2 : b ≻ a ≻ c.
In Consider the configuration where candidate a was the winner under Rule 2i. First, let us assume that b was not among the majority approved candidates (AV(b, π) < n 2 ) and that he wins after abstention. After abstention, AV(b, π)- are such that n-β 2 < AV(a, π) -β < AV(b, π) -β or AV(a, π) -β < n-β 2 < AV(b, π) -β; in each case, these conditions lead to AV(a, π) < AV(b, π), which is a contradiction. Thus, PAV is vulnerable to the No-Show paradox only when the winner is determined by Rule 2i.
β < n-β 2 ; b cannot win if AV(a, π) -β > n-β 2 ; if AV(b, π) -β < n-β 2 , b wins if AV(b, π) -β > AV(a, π) -β which is equivalent to AV(b, π) > AV(a,
B.2. FV is not vulnerable to the No-Show paradox only when the winner is determined at level l = 1 or l = m.
When the FV winner is determined at l = 1, any abstention of voters who do not rank this winner first does not affect the approval of the level l = 1. So, for l = 1, the No-Show paradox never occurs.
Let us assume that a is the FV winner at level l = m and that a group

C. Details of the computational analysis
C.1. Preferences and the rules in three-candidate elections
We need to present the rankings and approvals in the particular case of three candidates. For the sake of simplicity, we rule out the possibilities of approving nothing; so, given his ranking, a voter may approve at least one candidate and at most all the running candidates. So, given C = {a, b, c}, the 18 possible types of preferences on C are reported in Table 1. Then, a voting situation is the 18-tuple π = (n 1 , n 2 , . . . , n t , . . . , n 18 ) such that
Σ_{t=1}^{18} n_t = n.

Table 1 The 18 types of preferences on C = {a, b, c}. In each column, the approved candidates are, respectively, the top one only (types n_1 to n_6), the top two (types n_7 to n_12), and all three (types n_13 to n_18):
a ≻ b ≻ c (n_1)    a ≻ b ≻ c (n_7)    a ≻ b ≻ c (n_13)
a ≻ c ≻ b (n_2)    a ≻ c ≻ b (n_8)    a ≻ c ≻ b (n_14)
b ≻ a ≻ c (n_3)    b ≻ a ≻ c (n_9)    b ≻ a ≻ c (n_15)
b ≻ c ≻ a (n_4)    b ≻ c ≻ a (n_10)   b ≻ c ≻ a (n_16)
c ≻ a ≻ b (n_5)    c ≻ a ≻ b (n_11)   c ≻ a ≻ b (n_17)
c ≻ b ≻ a (n_6)    c ≻ b ≻ a (n_12)   c ≻ b ≻ a (n_18)
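Under the reading of Table 1 given above (types n_1 to n_6 approve only their top candidate, n_7 to n_12 their top two, n_13 to n_18 all three), the entries of Table 2 can be recovered by direct counting; the short sketch below does exactly that, and the encoding of the types is ours.

from itertools import permutations

rankings = list(permutations("abc"))                    # abc, acb, bac, bca, cab, cba
types = [(r, k) for k in (1, 2, 3) for r in rankings]   # the 18 types, in the order n1, ..., n18

def S_indices(level, x):
    # Indices t such that n_t contributes to S_level(x): the voter ranks x within
    # the first `level` positions and approves x.
    return [t + 1 for t, (r, k) in enumerate(types) if x in r[:min(level, k)]]

for level in (1, 2, 3):
    print(level, {x: S_indices(level, x) for x in "abc"})
# level 1 yields {a: [1, 2, 7, 8, 13, 14], b: [3, 4, 9, 10, 15, 16], c: [5, 6, 11, 12, 17, 18]},
# matching the first row of Table 2.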
Given the labels of Table 1, the approval scores S l (.) at level l are provided in Table 2; with three candidates, l varies from 1 to 3.
Table 2 The approval scores S l (.)
Candidates a, b, c:
S_1(a) = n_1 + n_2 + n_7 + n_8 + n_13 + n_14;  S_1(b) = n_3 + n_4 + n_9 + n_10 + n_15 + n_16;  S_1(c) = n_5 + n_6 + n_11 + n_12 + n_17 + n_18.
S_2(a) = n_1 + n_2 + n_7 + n_8 + n_9 + n_11 + n_13 + n_14 + n_15 + n_17;  S_2(b) = n_3 + n_4 + n_7 + n_9 + n_10 + n_12 + n_13 + n_15 + n_16 + n_18;  S_2(c) = n_5 + n_6 + n_8 + n_10 + n_11 + n_12 + n_14 + n_16 + n_17 + n_18.
S_3(a) = n_1 + n_2 + n_7 + n_8 + n_9 + n_11 + n_13 + n_14 + n_15 + n_16 + n_17 + n_18;  S_3(b) = n_3 + n_4 + n_7 + n_9 + n_10 + n_12 + n_13 + n_14 + n_15 + n_16 + n_17 + n_18;  S_3(c) = n_5 + n_6 + n_8 + n_10 + n_11 + n_12 + n_13 + n_14 + n_15 + n_16 + n_17 + n_18.
Notice that S 3 (a) = AV(a, π). Candidate a is the AV winner if the conditions described by Eq. 1 are met.
S_3(a) > S_3(b) and S_3(a) > S_3(c).  (1)
Recall that S 1 (.) = PR(., π). We provide in Table 3 the scores of the candidates under NPR and BR.17
Table 3 Scores under NPR and BR:
NPR(a, π) = n_4 + n_6 + n_10 + n_12 + n_16 + n_18;  NPR(b, π) = n_2 + n_5 + n_8 + n_11 + n_14 + n_17;  NPR(c, π) = n_1 + n_3 + n_7 + n_9 + n_13 + n_15.
BR(a, π) = 2(n_1 + n_2 + n_7 + n_8 + n_13 + n_14) + n_3 + n_5 + n_9 + n_11 + n_15 + n_17;  BR(b, π) = 2(n_3 + n_4 + n_9 + n_10 + n_15 + n_16) + n_1 + n_6 + n_7 + n_12 + n_13 + n_18;  BR(c, π) = 2(n_5 + n_6 + n_11 + n_12 + n_17 + n_18) + n_2 + n_4 + n_8 + n_10 + n_14 + n_16.
C.2. The impartial and anonymous culture assumption
When computing the likelihood of voting events, the impartial and anonymous culture (IAC) assumption introduced by [START_REF] Kuga | Voter Antagonism and the Paradox of Voting[END_REF] and [START_REF] Gehrlein | The probability of the paradox of voting: A computable solution[END_REF] is one of the most widely used assumptions in social choice theory literature. Under this assumption, all voting situations are equally likely to be observed; it follows that the probability of a given event is calculated according to the ratio between the number of voting situations in which the event occurs and the total number of possible voting situations.
For a given voting event, the number of voting situations can be reduced to the solutions of a finite system of linear constraints with rational coefficients.
The appropriate mathematical tools to find these solutions are the Ehrhart polynomials.
For a non-exhaustive overview of these techniques and algorithms, we refer to the recent books by [START_REF] Diss | Evaluating Voting Systems with Probability Models, Essays by and in honor of William V. Gehrlein and Dominique Lepelley[END_REF] and Gehrlein andLepelley (2011, 2017). As in this paper we deal with situations where the number of voters tends to infinity, finding the limiting probabilities under IAC is reduced to the computation of volumes of convex polytopes [START_REF] Bruns | The computation of generalized Ehrhart series in normaliz[END_REF][START_REF] Schürmann | Exploiting polyhedral symmetries in social choice[END_REF]. For our computations, we use the software Normaliz [START_REF] Bruns | Polytope volume by descent in the face lattice and applications in social choice[END_REF][START_REF] Bruns | Computations of volumes and Ehrhart series in four candidates elections[END_REF]. 18 It should be noted that the calculations are relatively simple to implement under Normaliz because it is enough to enter the conditions describing an event and the algorithm returns the volume of the corresponding polytope which is the probability of this event.
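The exact values reported below are obtained with Normaliz; a rough but convenient cross-check, which is not the method used in this paper, is to approximate the same limiting probabilities by Monte Carlo: under IAC with n tending to infinity, a voting situation reduces to a point drawn uniformly from the 17-dimensional simplex of type shares. The sketch below estimates in this way the probability that AV and FV agree, to be compared with the exact value of Section C.3.

import numpy as np
from itertools import permutations

rankings = list(permutations("abc"))
types = [(r, k) for k in (1, 2, 3) for r in rankings]    # the 18 types of Table 1
rng = np.random.default_rng(0)

def winners(shares):
    # AV and FV winners for a voting situation given by the shares of the 18 types.
    def scores(level):
        s = {c: 0.0 for c in "abc"}
        for (r, k), p in zip(types, shares):
            for c in r[:min(level, k)]:
                s[c] += p
        return s
    av = scores(3)
    av_win = max("abc", key=av.get)
    for level in (1, 2, 3):
        s = scores(level)
        majority = [c for c in "abc" if s[c] > 0.5]
        if majority:
            return av_win, max(majority, key=s.get)
    return av_win, av_win                                # no majority at any level: FV = AV

draws, agree = 100_000, 0
for _ in range(draws):
    shares = rng.dirichlet(np.ones(18))                  # uniform on the simplex (limiting IAC)
    a, f = winners(shares)
    agree += (a == f)
print(agree / draws)                                     # close to P(FV = AV) of Section C.3, about 0.73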
C.3. Agreement between the rules
First of all, let us look at situations where two rules coincide. Let us take the case where FV and AV agree on candidate a as the winner. We denote by P (AV = F V = a) the limiting probability of this event. This probability is in fact equal to a sum of volumes of polytopes to take into account the different scenarios that can occur under FV as described above. For example, the case where the winner of AV is the same as the winner of FV at level l = 1 is described by the inequalities of Eq. 1 and 2; we denote this volume obtained by V 1∩2 (π). In a similar way, we determine the volumes V 1∩j (π) for j = 3, 4, . . . , 11. Thus, we obtain
P(AV = FV = a) = Σ_{j=2}^{11} V_{1∩j}(π) = 3864518350115/15850845241344
We can therefore deduce P(FV = AV), the probability of agreement between AV and FV, as follows: P(FV = AV) = 3 × P(AV = FV = a) ≈ 0.7314. As pointed out by [START_REF] Brams | Voting systems that combine approval and preferences[END_REF], a least-approved candidate may be a PAV winner under rule 2i; in our framework, this event is fully characterized by the inequalities of Eq. 15 and 19. We then need to compute the volume V_{15∩19}(π) that we multiply by 3 to find P(PAV = LAV), the limiting probability that PAV elects the least-approved candidate, as follows:
P(PAV = LAV) = 3 V_{15∩19}(π) = 6095207/75497472 ≈ 0.0807339
So, it is thus in nearly 8.07% of cases that PAV can lead to the election of the least-approved candidate. What about FV? Since at level l = m, S l (.) is equal to the AV score, it is obvious that FV cannot elect the least-approved candidate at this level. It follows then that with three candidates, FV can elect the least-approved candidate only at l = 1 or l = 2; this corresponds to Eq. 2 to 6. Thus, P(F V = LAV ) the limiting probability that FV elects the least-approved candidate, is computed as follows: V 21∩j (π) 19 We notice that there is an incompatibility between Eq. 20 (or Eq. 21) and the conditions of Eq. 3. So, with these conditions FV does not fail to pick a unanimously approved candidate. Fallback Voting and Preference Approval Voting one candidate. For some values of α, we report in Table 4 the probabilities CE(R, α), ACE(R, α), CL(R, α), and ACL(R, α) as functions of α.
P(FV = LAV) = 3
V_{21∩j}(π) 19 We notice that there is an incompatibility between Eq. 20 (or Eq. 21) and the conditions of Eq. 3. So, with these conditions FV does not fail to pick a unanimously approved candidate. one candidate. For some values of α, we report in Table 4 the probabilities CE(R, α), ACE(R, α), CL(R, α), and ACL(R, α) as functions of α.
C.7. The election of the Condorcet loser
Let us assume on C = {a, b, c} that a is the Condorcet loser (resp. the Absolute Condorcet loser), using the labels of Table 1. For our voting situations with three candidates, P(CL), the existence probability of the Condorcet loser, and P(ACL), that of the Absolute Condorcet loser, are as follows: P(CL) = P(CW) and P(ACL) = P(ACW).
We know from Prop. 3 that FV and PAV may elect the (Absolute) Condorcet loser when he exists. When a voting rule may elect a Condorcet loser (resp. an Absolute Condorcet loser), it is said to be vulnerable to the Borda paradox (resp. to the Absolute Majority Loser Paradox). By definition, a (Absolute) Condorcet loser, when he exists, can never be elected under rule 2 of PAV; this can only be the case under Rule 1. With FV, the Condorcet loser cannot be elected at level l = 1 and the Absolute Condorcet loser can only be elected at level l = 3. We follow the same methodology as for the Condorcet winner efficiency to determine CL(.) (resp. ACL(.)), the limiting probability of electing the Condorcet loser (resp. the Absolute Condorcet loser) when he exists. From our computations, we get: 20
Proposition 7
7 FV and PAV meet the Pareto criterion. Proof Given π, assume that a is the PAV winner and that he is Pareto-dominated by b. As b Pareto-dominates a, if a voter approves a this is also the case for b; it follows that AV(b, π) ≥ AV(a, π) and b is majority preferred to a since n ba = n. If a wins under Rule 1i of PAV, this means that AV(b, π) < AV(a, π) < n 2 which contradicts AV(b, π) ≥ AV(a, π). If a wins under Rule 1ii of PAV, this leads to AV(a, π) > n 2 and AV(b, π) < n 2 which contradicts AV(b, π) ≥ AV(a, π). If a wins under Rule 2 of PAV, the following three cases can be considered: (i) AV(a, π) > n 2 , AV(b, π) > n 2 , and n ab > n ba , or (ii) AV(a, π) > n 2 > AV(b, π), AV(c, π) > n 2 , and nac > nca for c ∈ C \ {a, b}, or (iii)AV(a, π) > AV(b, π) > n 2 . It turns out that (i) contradicts n ba = n while (ii) and (iii) contradict AV(b, π) ≥ AV(a, π). Thus, b cannot win: PAV meets the Pareto criterion. Given π, assume that a is the FV winner and that he is Pareto-dominated by b. By definition, as b Pareto-dominates a we get S 1 (b) > S 1 (a), and S l (b) ≥ S l (a) for all l > 1. That candidate a wins at a level l implies that n 2 > S l (a) > S l (b) or S l (a) > n 2 > S l (b) or S l (a) > S l (b) > n 2 ; these conditions all contradict that S l (b) ≥ S l (a). So, b cannot be the winner: FV never elects a Pareto-dominated candidate. □
(
2010) and[START_REF] Gehrlein | The Condorcet Efficiency Advantage that Voter Indifference Gives to Approval Voting Over Some Other Voting Rules[END_REF] conclude that: AV is more likely to elect the Condorcet winner than both PR and NPR; BR performs better than AV.[START_REF] Gehrlein | The Condorcet Efficiency Advantage that Voter Indifference Gives to Approval Voting Over Some Other Voting Rules[END_REF] and El Ouafdi et al. (2020) reach a quite similar conclusion when considering the extended impartial anonymous culture condition.15 When it comes to electing the Absolute Condorcet winner when he exists, El[START_REF] El Ouafdi | On the Condorcet efficiency of evaluative voting (and other voting rules) with trichotomous preferences[END_REF] show in their framework that AV does less well than BR but better than NPR.16 Almost nothing is known about the propensity of FV and PAV to elect the Condorcet winner (resp. the Condorcet loser) when he exists.[START_REF] Kamwa | Condorcet Efficiency of the Preference Approval Voting and the Probability of Selecting the Condorcet Loser[END_REF] investigates the limiting Condorcet efficiency of PAV in three-candidate elections while assuming the extended impartial culture condition; he finds that PAV tends to perform better than AV. Considering the framework developed in this paper, we compute the Condorcet efficiency of AV, FV, PAV, PR, NPR, and BR and their propensity to elect the Absolute Condorcet winner when he exists. We do the same job for the election of the Condorcet loser and of the Absolute Condorcet loser. Our results are summarized in Table
the reinforcement criterion or are not sensitive to the No-Show paradox. It turns out that FV and PAV satisfy and fail the same criteria; they possess two properties that AV does not: Pareto optimality and the fact of always electing the Absolute Condorcet winner when he exists. AV, for its part, meets two criteria that FV and PAV do not: the reinforcement criterion and non-vulnerability to the No-Show paradox.

Even if, by definition, there is a certain advantage of FV and PAV over AV regarding the respect of the Condorcet majority criteria, we wanted to measure the extent of this advantage. Thus, for voting situations with three candidates, we calculated the probabilities that these rules would elect the (Absolute) Condorcet winner or the (Absolute) Condorcet loser. Our analysis shows that in terms of the election of the Condorcet winner, PAV performs better than BR, which dominates FV. When it comes to electing the Absolute Condorcet winner, PAV and FV dominate BR, AV, and PR. To prevent the election of an (Absolute) Condorcet loser, FV and PAV perform better than AV and PR.
With Profile 1, a wins at the first level since S_1(a) = 3, S_1(b) = 2, and S_1(c) = 0; he also wins with Profile 2 at level 2 since S_1(a) = S_1(b) = 3, S_1(c) = 1, S_2(a) = 5, S_2(b) = 4, and S_2(c) = 1. When both profiles are merged, b wins since S_1(a) = 6, S_1(b) = 5, S_1(c) = 1, S_2(a) = 8, S_2(b) = 18, and S_2(c) = 1. So, FV may fail the reinforcement criterion when the winner in both groups is elected at two different levels of preferences.

B. Proof of Proposition 10

B.1. PAV is vulnerable to the No-Show paradox only when the winner is determined by Rule 2i.
π): this contradicts AV(a, π) > n/2. So, it is not possible to favor b. Let us now assume that b was among the majority approved candidates (AV(b, π) > n/2) and that he wins after abstention. It is obvious after abstention that b cannot win if AV(a, π) - β > (n-β)/2. After abstention, if AV(a, π) - β < (n-β)/2, two cases are possible: If AV(a, π) > AV(b, π), it is not possible to favor b; let us show why. Given AV(a, π) > AV(b, π), if AV(b, π) - β > (n-β)/2 and b wins, this means that AV(b, π) > AV(a, π); we get a contradiction. For AV(b, π) - β < (n-β)/2, b wins if AV(b, π) - β > AV(a, π) - β, which is tantamount to AV(b, π) > AV(a, π): we get a contradiction. If AV(a, π) < AV(b, π), it is possible to favor b since it is possible to get AV(b, π) - β > (n-β)/2 such that b wins, as illustrated by the following profile with 3 candidates and 19 voters.
of β voters try to favor a more preferred candidate b by abstaining. Assume at l = m that a is the only majority approved candidate; this means that S_m(a) > n/2 > S_m(b). After abstention, we may get (i) S_m(a) - β > n/2 and S_m(b) - β < n/2, or (ii) S_m(a) - β < n/2 and S_m(b) - β < n/2. Candidate a remains the winner under (i); candidate b wins under (ii) if S_m(a) - β < S_m(b) - β, which is equivalent to S_m(a) < S_m(b): this contradicts that a was the only majority approved candidate. Let us now assume that a and b are among the majority approved candidates; since a wins, this means that S_m(a) > S_m(b) > n/2. After truncation, we can get S_m(a) - β > S_m(b) - β > n/2 or n/2 > S_m(a) - β > S_m(b) - β; in each case, b cannot be the winner. It follows that FV is not sensitive to the No-Show paradox when the winner is determined at level l = 1 or l = m. Now, let us assume that a is the FV winner at level l (l ≠ 1, m) and consider the following profile with 12 voters and 3 candidates: 2 : a ≻ b ≻ c; 1 : a ≻ c ≻ b; 4 : b ≻ a ≻ c; 2 : b ≻ c ≻ a; 1 : c ≻ a ≻ b; 1 : c ≻ b ≻ a; 1 : c ≻ b ≻ a. With this profile, no candidate wins at l = 1 since S_1(a) = S_1(c) = 3 and S_1(b) = 6; at l = 2, S_2(a) = 8, S_2(b) = 7, and S_2(c) = 4: candidate a wins. Assume that the 4 voters with b ≻ a ≻ c abstain. In the new profile, we get S_1(a) = S_1(c) = 3 and S_1(b) = 2 at l = 1, and no one wins; at l = 2, S_2(a) = S_2(c) = 4 and S_2(b) = 3, and no one wins. At l = 3, S_3(a) = S_3(c) = 4 and S_3(b) = 5, and b wins; by abstaining, the 4 voters have favored b, who is preferred to a. Thus, FV is vulnerable to the No-Show paradox when the winner is determined at a level of approval l ≠ 1 and l ≠ m.

Since the vulnerability of a voting rule to the No-Show paradox leads to its vulnerability to the Truncation paradox, FV and PAV would therefore be vulnerable to the Truncation paradox. Preference truncation is efficient under FV and PAV only if it consists, as shown by [START_REF] Brams | Voting systems that combine approval and preferences[END_REF], in a contraction of the set of approved candidates.
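The level-by-level descent of FV used throughout these proofs can also be written out explicitly. The sketch below is a minimal, hypothetical implementation of FV as described in the text (approval scores S_l accumulated down to level l, the winner being the most approved among the first majority-approved candidates, and the most approved overall if no majority ever appears). The example profile is invented for illustration and is not one of the paper's profiles.

```python
# Each voter: the ranking of the candidates he approves of, most preferred first.
profile = [
    ('a', 'b'), ('a', 'b'), ('a', 'b'),  # 3 voters approve {a, b}, a first
    ('b',), ('b',),                      # 2 voters approve only b
    ('c', 'b'), ('c', 'b'),              # 2 voters approve {c, b}, c first
]
candidates = ['a', 'b', 'c']
n = len(profile)
max_level = max(len(ballot) for ballot in profile)

def score(x, level):
    """S_l(x): voters who approve x and rank x among their top `level` approved."""
    return sum(1 for ballot in profile if x in ballot[:level])

def fv_winner():
    for level in range(1, max_level + 1):
        s = {x: score(x, level) for x in candidates}
        majority = [x for x in candidates if s[x] > n / 2]
        if majority:
            # First level at which someone is majority approved:
            # the most approved of them wins at this level.
            return max(majority, key=lambda x: s[x]), level
    # No candidate ever majority approved: the most approved overall wins.
    s = {x: score(x, max_level) for x in candidates}
    return max(candidates, key=lambda x: s[x]), max_level

print(fv_winner())  # ('b', 2): nobody reaches a majority at level 1,
                    # and b is the only majority-approved candidate at level 2
```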
P(FV = AV) = 3 P(FV = AV = a) = 3864518350115/5283615080448. Note that the calculation of P(FV = PAV = a) requires us to review 7 × 10 = 70 possible configurations; among these configurations only 29 are possible because of the incompatibilities between the conditions. Proceeding in a similar way and including the scoring rules in our analysis, we obtain:
A unanimously approved candidate may not win under FV or PAV

As [START_REF] Brams | Voting systems that combine approval and preferences[END_REF] note, there may be times when FV and PAV do not elect a unanimously approved candidate. This marks another point of dissonance between these rules and AV. By definition, this can only occur with PAV under Rule 2i. Let us assume on C = {a, b, c} that b is unanimously approved. In our framework, this is tantamount to:

n_1 + n_2 + n_5 + n_6 + n_8 + n_11 = 0 (20)

If b and c are both unanimously approved, we get

n_10 + n_12 + n_13 + n_14 + n_15 + n_16 + n_17 + n_18 = n (21)

Given Eq. 20, situations where a is the PAV winner while b (resp. c) is unanimously approved occur when Eq. 14 or 16 (resp. Eq. 15 or 16) holds. The case where both b and c are unanimously approved while a is the PAV winner can only occur if Eq. 16 holds. Then, P(PAV ≠ Uap), the limiting probability that PAV fails to elect a unanimously approved candidate, is computed as follows:

P(PAV ≠ Uap) = 3 [2 V_{20∩14}(π) + V_{20∩16}(π) - V_{21∩16}(π)]

FV fails to pick a unanimously approved candidate when Eq. 2 or 4 or 5 or 6 holds. Then, P(FV ≠ Uap), the limiting probability that FV fails to elect a unanimously approved candidate, is computed as follows: P(FV ≠ Uap)
□
Proposition 2 FV and PAV always elect the Absolute Condorcet winner when he exists.

Proof. Assume that candidate a is the Absolute Condorcet winner. As he is ranked first by more than half of the voters, AV(a, π) > n/2 and n_ab > n_ba for all b ∈ C \ {a}. Under PAV, if a is the only one to be majority approved, he is obviously elected; if there are more majority approved candidates, a is elected since he is the Condorcet winner. So, PAV always elects the Absolute Condorcet winner. Since a is the Absolute Condorcet winner we get S_1(a) > n/2: by definition, he is the FV winner. So, FV always elects the Absolute Condorcet winner. □
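For completeness, the definitions used in this proposition can be checked on any ranking profile with a few lines of code. The following sketch is only an illustration of the definitions (pairwise majorities, first-place counts); the profile and function names are ours.

```python
profile = [  # strict rankings, most preferred first
    ('a', 'b', 'c'),
    ('a', 'c', 'b'),
    ('b', 'a', 'c'),
    ('a', 'b', 'c'),
    ('c', 'b', 'a'),
]
candidates = ['a', 'b', 'c']
n = len(profile)

def n_pref(x, y):
    """n_xy: number of voters ranking x above y."""
    return sum(1 for r in profile if r.index(x) < r.index(y))

def condorcet_winner():
    """Candidate beating every other one in pairwise majority duels, if any."""
    for x in candidates:
        if all(n_pref(x, y) > n_pref(y, x) for y in candidates if y != x):
            return x
    return None

def absolute_condorcet_winner():
    """Candidate ranked first by more than half of the voters, if any."""
    for x in candidates:
        if sum(1 for r in profile if r[0] == x) > n / 2:
            return x
    return None

def condorcet_loser():
    """Candidate beaten by every other one in pairwise majority duels, if any."""
    for x in candidates:
        if all(n_pref(y, x) > n_pref(x, y) for y in candidates if y != x):
            return x
    return None

print(condorcet_winner(), absolute_condorcet_winner(), condorcet_loser())  # a a c
```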
Table 2. Limiting probabilities of agreement
FV PAV AV PR NPR BR
FV 1 0.7474874 0.7314156 0.6603448 0.7077614 0.7714232
PAV 0.7474874 1 0.6714614 0.7993055 0.6556436 0.8661731
AV 0.7314156 0.6714614 1 0.6032491 0.5779744 0.6428802
PR 0.6603448 0.7993055 0.6032491 1 0.5301230 0.7946785
NPR 0.7077614 0.6556436 0.5779744 0.5301230 1 0.7197796
BR 0.7714232 0.8661731 0.6428802 0.7946785 0.7197796 1
Table 3. Voting rules and the limiting probabilities of the Condorcet principle

It emerges that PAV is the best-performing rule in terms of Condorcet efficiency; it is followed by BR. FV performs better than AV but worse than PR. Interestingly, within our framework, AV emerges as the rule with the
CE(R) CL(R) ACE(R) ACL(R)
AV 0.6461261 0.0898578 0.8099504 0.0293452
FV 0.7600535 0.0300712 1 0.0001114
PAV 0.9973310 0.0001028 1 0.0000249
PR 0.8326151 0.0340336 1 0.0139566
NPR 0.6803188 0.0359394 0.7303018 0
BR 0.9044277 0 0.9944123 0
lowest Condorcet efficiency, indicating that it performs worse than scoring rules in this context. The fact that AV performs worse than PR and NPR here contrasts with what Diss et al. (2010), Gehrlein and Lepelley (2015), and El Ouafdi et al. (2020) achieve in their different settings. As for electing the Absolute Condorcet winner when he exists, AV performs worse than BR but better than NPR. This conclusion is in agreement with what El Ouafdi et al.
Table 3 also tells us that in our analytical framework, AV is the rule most likely to elect the Condorcet loser when he exists; it does less well than PR and NPR. This result contrasts with what El Ouafdi et al. (2020) or Gehrlein et al. (
Table 4. Some computed values of CE(R, α), ACE(R, α), CL(R, α), and ACL(R, α)
α
Rules 0 1/4 1/3 1/2 2/3 3/4 1
AV 0.5384051 0.5916563 0.6396992 0.7544142 0.8361545 0.8590698 0.8814815
CE(R, α) FV 0.7818569 0.7379713 0.7497197 0.8356119 0.8517430 0.8648282 0.8814815
PAV 1 0.9999867 0.9998151 0.9911858 0.9297356 0.8965582 0.8814815
PR 0.8484781 0.8330283 0.8276168 0.8376708 0.8620404 0.8713166 0.8814815
NPR 0.6639015 0.6798728 0.6855510 0.6750562 0.6493909 0.6397497 0.6296296
BR 0.9061312 0.9045864 0.9037684 0.9050865 0.9089313 0.9101536 0.9111111
AV 0.5482718 0.7196296 0.8260673 0.9647654 0.9961465 0.9989699 1
ACE(R, α)
Consider a voting situation where candidate a is PAV winner. Assume a group of β voters (β ≥ 1) who decide to not show up in order to favor a more preferred candidate b. Obviously, if these voters do not approve of candidate a in the original profile, the maneuver is futile. Suppose now that these voters approve candidate a in the original profile. When they abstain, the AV score of each candidate they approved decreases by β. Let us discuss each of the possible configurations.
It is known that AV is not vulnerable to the No-Show paradox (see Felsenthal, 2012); as Rule 1 of PAV is equivalent to AV, it follows that under Rule 1, PAV is not vulnerable to the No-Show paradox.
Table 1. The 18 types of rankings and approvals on
Table 3. Scores of the candidates under NPR and BR
Candidates
a b c
NPR(., π) n4
Table 1, this is equivalent to Eq. 23 (resp. Eq. 24).
n_ab < n_ba and n_ac < n_ca (23)
NPR(a, π) > n/2 (24)
Notice that the first formal introduction of this framework in terms of ordinal versus cardinal preferences is made in[START_REF] Sanver | Approval as an intrinsic part of preference[END_REF].
Notice that FV is called "Majoritarian Approval Compromise" in[START_REF] Sanver | Approval as an intrinsic part of preference[END_REF] and it is an adaptation of the Majoritarian Compromise rule of[START_REF] Sertel | Lectures notes in microeconomics[END_REF] or[START_REF] Sertel | The majoritarian compromise is majoritarian optimal and subgame perfect implementable[END_REF].
[START_REF] Kamwa | Condorcet Efficiency of the Preference Approval Voting and the Probability of Selecting the Condorcet Loser[END_REF] is the only paper to our knowledge that has paid particular attention at least to PAV;[START_REF] Kamwa | Condorcet Efficiency of the Preference Approval Voting and the Probability of Selecting the Condorcet Loser[END_REF] investigated the propensity of PAV to elect the Condorcet winner or the Condorcet loser.
Notice that we stop going down once we reach the approval line for a voter that may be placed differently given different voters.
q-Approval Fallback Bargaining winners are the candidates receiving the support of q voters at the highest possible quality.
This profile is adapted from Özkal-[START_REF] Özkal-Sanver | Efficiency in the degree of compromise: a new axiom for social choice theory[END_REF].
For m even, HAHR is equivalent to the m/2-approval rule.
This condition is also known as the Separability axiom in[START_REF] Smith | Aggregation of preferences with variable electorate[END_REF] or the Consistency axiom in[START_REF] Young | Social Choice scoring functions[END_REF].
For more details on the Truncation paradox and its occurrence under the scoring rules, we refer to[START_REF] Kamwa | Scoring Rules, Ballot Truncation, and the Truncation Paradox[END_REF] and[START_REF] Kamwa | Susceptibility to manipulation by sincere truncation: the case of scoring rules and scoring runoff systems[END_REF].
For reasons of space, we cannot present the detailed calculations here. These calculation details are available upon request.
∑_t n_t = n.
With three candidates, the Borda rule gives 2 points to a candidate each time he is ranked first, 1 point when he is second, and 0 when he is ranked last.
For more on Normaliz, we refer the reader to the paper of [START_REF] Bruns | The computation of generalized Ehrhart series in normaliz[END_REF] or to the website dedicated to this algorithm, https://www.normaliz.uni-osnabrueck.de.
It turns out that among the rules under consideration, BR is the one with the highest probability of agreement with each of the other rules. In at least 66% of the cases, FV agrees with each of the other rules, and it tends to agree more with PAV (74.75%) than with AV (73.14%), and more with NPR (70.77%) than with PR (66.03%). PAV tends to agree more with PR (79.93%) than with AV (67.15%) or with NPR (65.56%). Not surprisingly, AV tends to agree more with FV and PAV than with the scoring rules. Among the scoring rules, PR and BR tend to agree the most with PAV. The general observation derived from Table 2 suggests that the combination of approvals and rankings in FV and PAV tends to align them more closely with the scoring rules than with AV, particularly in terms of agreement. [START_REF] Brams | Voting systems that combine approval and preferences[END_REF] showed that for the same preference profile, AV, FV, and PAV can elect completely different candidates. From our computations, we found that for the same voting profile, PAV, AV, and FV elect the same winner in about 58.86% of cases; they therefore diverge in about 41.13% of cases.
When it comes to the election of a least-approved candidate, this occurs in 8.07% of cases under PAV while it occurs in 4.95% of cases under FV. FV would therefore be almost half as likely to elect a least-approved candidate as PAV. This result would tend to confirm the fact that, in terms of agreement, AV coincides more with FV than with PAV. Another point of dissonance between FV, PAV, and AV appears when FV and PAV do not elect a unanimously approved candidate. We found that in almost 18.89% of the cases PAV may not elect a unanimously approved candidate, while this is the case in only 8.58% of the cases for FV.
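The probabilities above are obtained exactly through Ehrhart-polynomial computations (Normaliz). As a purely illustrative cross-check, agreement frequencies can also be estimated by simulation. The sketch below draws random three-candidate profiles from a simplistic uniform model (which is not necessarily the probabilistic model assumed in the paper) and estimates how often AV and the scoring rules pick the same winner; FV and PAV are left out only to keep the sketch short, and could be added using the implementations sketched earlier. All modelling choices and names here are ours.

```python
import random
from itertools import permutations

CANDS = ('a', 'b', 'c')

def random_profile(n):
    """Each voter: a random strict ranking plus approval of the top 1 or 2 candidates."""
    rankings = list(permutations(CANDS))
    return [(random.choice(rankings), random.choice((1, 2))) for _ in range(n)]

def winner(scores):
    best = max(scores.values())
    tied = [x for x in CANDS if scores[x] == best]
    return tied[0] if len(tied) == 1 else None  # ties ignored for simplicity

def rule_winners(profile):
    av = {x: 0 for x in CANDS}
    pr = {x: 0 for x in CANDS}
    npr = {x: 0 for x in CANDS}
    br = {x: 0 for x in CANDS}
    for ranking, k in profile:
        for x in ranking[:k]:
            av[x] += 1                 # approval score
        pr[ranking[0]] += 1            # plurality: first places only
        npr[ranking[0]] += 1
        npr[ranking[1]] += 1           # NPR: one point unless ranked last
        for pts, x in zip((2, 1, 0), ranking):
            br[x] += pts               # Borda weights 2, 1, 0
    return {name: winner(s) for name, s in
            (('AV', av), ('PR', pr), ('NPR', npr), ('BR', br))}

def agreement(rule1, rule2, trials=5000, n=101):
    hits = total = 0
    for _ in range(trials):
        w = rule_winners(random_profile(n))
        if w[rule1] is not None and w[rule2] is not None:
            total += 1
            hits += (w[rule1] == w[rule2])
    return hits / total

print(agreement('AV', 'BR'), agreement('PR', 'NPR'))
```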
Appendices

A. Proof of Proposition 8
Assume that an electorate is divided into two disjoint groups of voters N_1 and N_2.
A.1. PAV and the reinforcement criterion
We know that AV is equivalent to Rule 1 of PAV; since AV meets the reinforcement condition, it follows that PAV meets the reinforcement condition if the winner in each group is elected by Rule 1i or 1ii. To complete the proof, let us show that this is no longer the case in the other configurations. So, consider the following profiles
It is easy to check that a is elected in each of the seven profiles: through Rule
A.2. FV and the reinforcement criterion
Assume that candidate a is the FV winner at level l in both groups N_1 and N_2. Let us denote by S_j^l(a) the l-level score of a in group j (j = 1, 2). We distinguish two cases:
(i) at l, a is the only majority approved candidate in each group. This means for all other candidate b, we get
Consider the profile obtained when both populations are merged and assume that b wins at a given level r. It is obvious that we get a contradiction for
Since it is assumed that a is the only majority approved candidate in each group at l, it follows that for all r < l, we get S r j (a)
Therefore, when both groups are merged, it is impossible at level r for b to be majority approved or to score more than a. So, if a is the only majority approved
In this profile, AV(a, π) = 10, AV(b, π) = 13, and AV(c, π) = 5. Candidates a and b are majority approved and a wins since n_ab = 10 > n_ba = 9.
Assume that the 3 voters with b ≻ a ≻ c abstain. In the new profile π ′ with 16 voters, the scores are AV(a, π ′ ) = 7, AV(b, π ′ ) = 10, and AV(c, π ′ ) = 5: b wins since he is now the only majority approved candidate.
- it is possible to favor b since it is possible to get AV(b, π) - β < (n-β)/2 such that b wins, as illustrated by the following profile with 3 candidates and 18 voters. Given π, if candidate a is the FV winner, the following scenarios are possible:
Candidate a is the only majority approved candidate at l = 1; this is fully described by Eq 2.
No one wins at l = 1 and a is the only candidate majority approved at l = 2.
In this case, we get Eq 3.
No one wins at l = 1 and a with b (or c) are majority approved at l = 2; a gets more approvals than b (or c) at this stage. This situation is characterized by Eq 4 or Eq 5.
No one wins at l = 1 and a, b, and c are majority approved at l = 2. At this stage, a gets more approvals than b and c. In this case, we get Eq 6.
There is no winner at both l = 1 and l = 2 and a is the only candidate who is majority approved at l = 3. This situation is characterized by Eq 7.
There is no winner at l = 1 and l = 2: a with b (or c) are majority approved at l = 3. In this case, we get Eq. 8 or Eq 9.
No candidate is majority approved at l = 1, 2 but they are all majority approved at l = 3; a gets more approvals than b and c. This situation is characterized by Eq 10.
No candidate is majority approved at l = 1, 2, 3 and a gets more approvals than b and c. This situation is characterized by Eq. 11.
If we assume that candidate a is the PAV winner, the following five scenarios are possible:
no candidate gets a majority of approvals and a gets the highest number of approvals; this is fully described by Eq. 12.
only a gets a majority of approvals; in this case, we get Eq. 13.
a and b (or c) get a majority of approvals and a is majority preferred to b (or to c); this leads to Eq. 14 (or Eq. 15).
all the three candidates get a majority of approvals and a majority-dominates b and c; this is described by Eq. 16.
all the three candidates get a majority of approvals, there is a majority cycle and a gets the highest number of approvals; we thus get Eq. 17 or 18. We summarize our results in Table 2.
Using the same approach as above, we were able to determine P(F V = AV = P AV ), the limiting probability that AV, FV, and PAV agree on the same profile.
Using the conditions of Eq. 22, Normaliz gives us the probability P(a = CW) that a is the Condorcet winner over π.
P(a = CW) = 20129/65536
In the same way, we determined the probability that a is the Absolute Condorcet winner (i.e. S_1(a) > n/2):
We therefore deduce P(CW), the existence probability of the Condorcet winner, and P(ACW), that of the Absolute Condorcet winner. CL(NPR) = 787367474789/21908225820672. We were willing, as we did in the previous section, to refine our findings based on α, the proportion of voters who approve of exactly one candidate.
The probabilities CL(R, α) and ACL(R, α) that we obtained in this regard are provided in Table 4.
04105716 | en | [
"spi.auto"
] | 2024/03/04 16:41:22 | 2007 | https://hal.science/hal-04105716/file/Haptic_feedback_to_assist_powered_wheelc.pdf | M Sahnoun
email: [email protected]
G Bourhis
email: [email protected]
Haptic feedback to assist powered wheelchair piloting
Keywords: assistive technology, haptic feedback, human-machine interaction, electric wheelchair
The objective of this study is to implement a force feedback joystick on a smart electric wheelchair provided with a set of range sensors. The force feedback is calculated according to the proximity of the obstacles and helps the user, without forcing him, to move towards the free direction. The first stage of the project consists in validating the interest of this method of control.
In this paper we present our methodology and some first experimental results.
Introduction
A recent American study based on interviews of 200 rehabilitation clinicians showed that for approximately 10% of patients an electric wheelchair is difficult or even impossible to use in everyday life [START_REF] Fehr | Adequacy of power wheelchair control interfaces for persons with severe disabilities : a clinical survey[END_REF]. Moreover, when questioned more specifically on manoeuvring tasks, 40% of patients report difficulties. Since the end of the eighties, to alleviate the difficulties of these people, some research teams have tried to give the electric wheelchair a certain "intelligence" [START_REF] Bourhis | The VAHM robotized wheelchair : system architecture and humanmachine interaction[END_REF], [START_REF] Bourhis | Autonomous vehicle for people with motor disabilities[END_REF].
The "intelligence" of the wheelchair may be defined as the capacity to perceive its external environment and to deduce relevant information in the objective of carrying out autonomous or semi-autonomous movements: obstacle avoidance, doors passing, docking, path following, … If several prototypes of smart wheelchairs with high level functionalities are available in the research laboratories, to our knowledge none has reached the commercial stage [START_REF] Nisbet | Who's intelligent ? Wheelchair, driver or both ?[END_REF]. In particular, the control of a wheelchair in automatic mode poses two major problems, a technical problem and a psychological one. From the technical point of view, a perfect reliability of an autonomous motion supposes to use a sophisticated set of environment sensors and a heavy data-processing treatment not very compatible with the requirements of such an application as regards cost. From the psychological point of view many potential users on the one hand apprehend to leave the whole control of the movement to the machine, on the other hand wish to use their residual motor capacities as well as possible.
When the physical capacities of the user allow it, we can mitigate the reliability problem by controlling the wheelchair in a "shared" mode: the order and the direction of movement are given by the user, and the machine, thanks to its environment sensors (usually ultrasonic rangefinders), helps him to avoid the obstacles. The person is thus always free to stop or continue the movement. This type of assisted control nevertheless presents some limitations. In particular, certain movements, like pushing a slightly opened door, become impossible. The psychological drawbacks of the automatic mode do not disappear either: the person partly loses control of the movement since he shares it with the machine. This can strongly disturb some users, the direction of displacement being not always the one requested via the human-machine interface.
A method to assist the control of the wheelchair while leaving the pilot his whole free will consists in implementing, on the control joystick, a force feedback that depends on the proximity of the obstacles. We can then speak of an "assisted" control mode: the control of the wheelchair is entirely the responsibility of the person; the machine, as a movement supervisor, only transmits haptic information to him to enrich the natural visual feedback. In this context the technical and psychological limitations of the automatic and semi-automatic modes no longer appear.
However, it remains to be demonstrated that the control performances will be improved to a significant degree compared to the usual piloting of the wheelchair: this is the objective of this study.
Justification
The control of an "intelligent" wheelchair by a person with disability opens research problematics close to those met in teleoperation [START_REF] Bourhis | The VAHM robotized wheelchair : system architecture and humanmachine interaction[END_REF]. In particular the human-machine interaction is an essential factor to optimize. Thus, many studies have related to the transmission of information from the disabled person who, by hypothesis, has very limited motor capacities, towards an assistive technology device (mobility aid, manipulation aid or communication aid). On the other hand the information feedback from the machine towards the human remains, at the present time, insufficiently explored. It concerns essentially visual feedback associated in certain cases with sound information (voice synthesis). The sense of touch (considered in the broad sense of an "haptic" return i.e. including tactile, proprioceptive and kinaesthetic information), if it is naturally requested for people with visual impairment, is only very rarely used for assisted devices intended for people with motor disabilities. Some work in this way was however reported in the literature. In [START_REF] Brienza | A force feedback joystick and control algorithm for wheelchair obstacle avoidance[END_REF] a joystick was specifically conceived to test in an entirely modelled environment an algorithm of "passive" force feedback (the joystick resists to a movement towards an obstacle) and an algorithm of "active" force feedback (the joystick moves the wheelchair away from the obstacles). The "active" algorithm being proven more effective, it was tested on 5 people with disability [START_REF] Protho | An evaluation of an obstacle avoidance force feedback joystick[END_REF]: for 4 of them the number of collisions in a course test has decreased compared to a piloting without force feedback. In [START_REF] Hong | Shared-control and force reflection joystick algorithm for the door passing of mobile robot or powered wheelchair[END_REF] the authors describe an algorithm of the "active" type based on the potentials method modified: to circumvent the difficulty in passing the doors with this method the authors, to calculate the repulsive force, only take into account the obstacles located at +/-30° in the forward direction of the wheelchair.
In another context, human-computer interaction, some tests with people with disabilities also showed that a force feedback interface could improve the performances obtained in a pointing task [START_REF] Keates | Investigating the use of force feedback for motion impaired users[END_REF]. These results are corroborated by a study described in [START_REF] Repperger | A study on spatially induced "virtual force" with an information theoretic investigation of human performance[END_REF] involving a group of 10 people with motor disabilities.
Other works described in the literature relate to the teleoperation of a mobile robot assisted by force feedback. These applications only concern users without disabilities, but their conclusions are nevertheless indicative of the potential of the method. Thus, in [15], experiments are carried out in simulation concerning the teleoperation of a mobile base in hostile environments. The authors note a significant reduction in the number of collisions when using a force feedback joystick compared to a usual one. The duration and the length of the paths, on the other hand, are only slightly modified from one situation to another. A similar experiment in [16], carried out using a 3D force feedback device PHANToM TM restricted to 2D, leads to the same conclusions: the force feedback decreases the collisions without significantly increasing the duration of navigation. However, performance measurements are not always sufficient to validate the interest of the force feedback: in [17], mental workload evaluation during the teleoperation of a helicopter led to the conclusion that certain force feedback calculation algorithms improve the performance but significantly increase the mental workload.
Methodology
Experimental environment
The force feedback calculation is carried out, by hypothesis, according to the proximity of the obstacles measured by range sensors. An experimentation in real conditions thus requires a wheelchair equipped with a set of environment sensors. We use the robot resulting from the VAHM project (French acronym for "Autonomous Vehicle for People with Motor disabilities"), initiated in 1989 at the University of Metz. The objective of this project is to facilitate the control of electric wheelchairs by using methods and technologies coming from mobile robotics [START_REF] Bourhis | Autonomous vehicle for people with motor disabilities[END_REF]. Two prototypes of this smart wheelchair are currently available, both equipped with a belt of 16 ultrasonic sensors, with a dead-reckoning system and with a computer implemented at the back of the wheelchair.
In a first stage, to be free from the technical problems inherent in tests in real situations, the experiments are carried out in simulation: the environment is represented in 2 dimensions, and the force feedback joystick (Microsoft Sidewinder TM Force Feedback Joystick 2) moves a cursor in this environment. The simulation is programmed under Matlab/Simulink TM. It is made up of three main building blocks:
The "joystick interface" block makes it possible to read the position of the joystick and to apply to him a force adjustable in amplitude and direction.
The "graphic animation" block translates the joystick position into a robot motion which it displays on a 2D animation (Figure 1). The possible collisions are underlines by a change of colour of the mobile. The dimensions of the mobile and of the environment are selected in order to correspond to realistic situations. It is the same for the speed of the wheelchair which maximum is fixed to 0.5m/s.
The "force feedback" block reads at regular rate the distances data obtained by the 16 ultrasonic sensors and deduces a force feedback on the joystick.
Force feedback calculation
The principle of the force feedback calculation consists in applying a force to the joystick in the most adapted free direction, i.e. the direction which corresponds "best" to the one indicated by the pilot. The main difficulty is to define this direction. It is important to note that this method does not prohibit any movement decided by the person. We only make the motions leading to a collision more difficult.
A first series of experiments reported in [18] allowed us to validate the experimental system. The force feedback calculation was based on the "potentials method": each obstacle detected by an ultrasonic sensor emits a repulsive force inversely proportional to its distance to the wheelchair; the force feedback applied to the joystick is the vector sum of all these forces. The results obtained by this method are not very convincing, in particular in very constrained environments (door passages). This had already been noted in the literature in the context of the shared control of an electric wheelchair [START_REF] Borenstein | The Vector Field Histogram -Fast Obstacle Avoidance For Mobile Robots[END_REF].
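As a concrete illustration of this potentials-based computation, the following sketch sums inverse-distance repulsive contributions from the 16 range measurements into a single force vector applied to the joystick. Only the principle (vector sum of repulsions inversely proportional to distance) comes from the text; the sensor geometry, gain and saturation values are invented for the example.

```python
import math

N_SENSORS = 16
# Assumed geometry: sensors evenly spread around the wheelchair.
BEARINGS = [2 * math.pi * i / N_SENSORS for i in range(N_SENSORS)]
GAIN = 0.2        # arbitrary repulsion gain
D_MIN = 0.05      # avoid division by very small distances (m)
F_MAX = 1.0       # joystick force saturation (normalised)

def potential_force(distances):
    """Vector sum of repulsive forces, each inversely proportional to distance."""
    fx = fy = 0.0
    for d, bearing in zip(distances, BEARINGS):
        magnitude = GAIN / max(d, D_MIN)
        # Each obstacle pushes the joystick away from its own direction.
        fx -= magnitude * math.cos(bearing)
        fy -= magnitude * math.sin(bearing)
    norm = math.hypot(fx, fy)
    if norm > F_MAX:                      # clamp to the joystick's range
        fx, fy = fx * F_MAX / norm, fy * F_MAX / norm
    return fx, fy

# Example: a close obstacle straight ahead produces a backward-pointing force.
readings = [3.0] * N_SENSORS
readings[0] = 0.3
print(potential_force(readings))
```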
We propose in this paper to calculate the force feedback with two algorithms described in [START_REF] Levine | The Navchair assistive wheelchair navigation system[END_REF] and [START_REF] Borenstein | The Vector Field Histogram -Fast Obstacle Avoidance For Mobile Robots[END_REF], the VFH algorithm (Vector Field Histogram) and the MVFH algorithm (Modified Vector Field Histogram). They were indeed initially conceived to mitigate the deficiencies of the potentials method and were tested successfully on several prototypes of smart wheelchairs [START_REF] Levine | The Navchair assistive wheelchair navigation system[END_REF], [20]. The principle of the VFH algorithm is as follows:
We build an occupancy grid around the wheelchair, a cell being incremented with each measurement of the presence of an obstacle and being regularly decremented over time (forgetting factor).
We deduce a polar histogram (Figure 2) representing the density of obstacles around the wheelchair (significant values represent close or large-sized obstacles).
We choose as force feedback direction the free direction nearest to the one indicated by the pilot via the joystick. This algorithm was initially defined to carry out a shared control of the wheelchair: the forward direction is then the one indicated above as being the force feedback direction. The authors have noted certain difficulties in passing doors of standard size (0.76 m for a wheelchair of width 0.63 m) and also that, with this method, significant changes in the direction selected by the pilot may not generate any variation of direction of the mobile [19]. Thus they proposed a modified algorithm, the MVFH algorithm, which, during the calculation of the movement direction, minimizes the sum of the histogram and of a parabolic function centred on the direction desired by the person (Fig. 3). This allows small local deviations of the trajectory. In particular, the door passing manoeuvres are improved in this way.
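The direction-selection step of VFH, and the parabolic weighting that distinguishes MVFH, can be summarised in a few lines. This sketch is ours: the sector counts standing in for the polar histogram, the fixed threshold for "free" sectors and the MVFH parabola coefficient are all assumptions made for illustration, not values from [19] or [20].

```python
N_SECTORS = 16          # angular resolution of the polar histogram (assumed)
FREE_THRESHOLD = 2.0    # sectors with density below this are considered free
MVFH_WEIGHT = 0.5       # weight of the parabola centred on the pilot's direction

def angular_gap(i, j):
    """Shortest sector distance on the circle."""
    d = abs(i - j) % N_SECTORS
    return min(d, N_SECTORS - d)

def vfh_direction(histogram, desired_sector):
    """VFH: free sector closest to the direction asked by the pilot."""
    free = [s for s in range(N_SECTORS) if histogram[s] < FREE_THRESHOLD]
    if not free:
        return desired_sector            # nothing free: do not deviate
    return min(free, key=lambda s: angular_gap(s, desired_sector))

def mvfh_direction(histogram, desired_sector):
    """MVFH: minimise histogram + parabola centred on the desired direction."""
    cost = lambda s: histogram[s] + MVFH_WEIGHT * angular_gap(s, desired_sector) ** 2
    return min(range(N_SECTORS), key=cost)

# Example: obstacles concentrated ahead (sectors 0 and 1), pilot pushes forward.
hist = [5.0, 4.0] + [0.5] * (N_SECTORS - 2)
print(vfh_direction(hist, 0), mvfh_direction(hist, 0))
```

In both cases the selected sector is then turned into a force pulling the joystick towards that direction, without preventing the user from overriding it.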
Results and discussion
The results which follow concern a panel of 6 experimenters without disabilities. Each of them, after a training phase, guides the virtual wheelchair 3 times in the test environment of Figure 1 using the joystick, according to various experimental conditions: without force feedback ("without FF"), with force feedback calculated by the VFH algorithm, and with force feedback calculated by the MVFH algorithm. Moreover, each of these three options is realized with two kinematic configurations of the virtual wheelchair: the driving wheels may be at the back or at the front, which constitutes a usual option of electric wheelchairs. In each case we record three parameters representative of the operator's performance in the guidance task: the duration of the task, the distance covered and the number of collisions. Of course these parameters are not independent.
First of all, from the observation of the experiments and from the results below, we can deduce some qualitative considerations concerning the parameters "Distance" and "Time". The differences in distances covered, the environment being identical for all the tests, are primarily due to the manoeuvres carried out to get out of blocking situations. Thus this parameter is strongly correlated with the number of collisions and does not seem a significant element of comparison between the navigation methods. The variations of course durations also seem difficult to exploit. Indeed, they are related to the pilot's behaviour: if he accelerates the wheelchair, he decreases the duration of the course but increases the collision risks, and collisions, if they occur, increase the course duration. Additional experiments will be necessary to evaluate the relevance of this time criterion to compare the methods of wheelchair control. However, we can already note that this time parameter is generally smaller with the force feedback than without it.
On the other hand, the number of collisions appears directly related to the control mode. For 5 experimenters out of 6, the use of the force feedback clearly decreases this factor. This corroborates the results reported in the literature. Quantitatively, the MVFH algorithm does not seem to bring a significant improvement compared to the VFH. We observe however an appreciably better behaviour of the MVFH in door passages compared to the VFH, and conversely in the corridors (the trajectory is less stable).
The results of the 4th experimenter are more atypical: the force feedback considerably decreases his performance in terms of the number of collisions. This is probably due to the fact that he is a regular video game player and, consequently, particularly skilful at using the traditional joystick to guide a mobile. This observation does not call into question the utility of this study, since it is intended for people who have difficulties in controlling an electric wheelchair with a traditional joystick.
Table 1 presents the results for the two basic kinematic configurations of an electric wheelchair: rear and front driving wheels. The navigation strategy is very different according to the configuration, but this fact does not influence the remarks made above. This will undoubtedly have to be confirmed by more specific tests in very constrained environments.
Lastly, to try to evaluate the comfort of navigation, it is planned for the continuation of our experiments to associate with the usual performance criteria an estimation of the person's mental workload. A questionnaire-based method, the NASA-TLX method ("Task Load Index") [START_REF] Pino | Performance measurement of a man-machine system: application to the control of a powered wheelchair[END_REF], might be used for this purpose.
Conclusion
The project described above aims at conceiving a new control mode for electric wheelchairs. It is intended for people for whom control by a traditional joystick (or any other adapted sensor) is difficult or impossible because of too severe motor disabilities. The first step is to validate the interest of a force feedback to assist the wheelchair control. The first tests reported in this paper have involved a panel of people without disabilities in order to elaborate the experimental apparatus and the algorithms. This system being a simulator, it will make it possible to carry out experiments with people with severe motor disabilities without the safety and reliability constraints which tests in real conditions imply. If the interest of the "assisted" control mode is validated by the experiments in simulation, the final stage of the project will consist in transposing this system onto the smart wheelchair VAHM.
Figure 1. 2D test environment; the task consists in guiding the virtual mobile from the initial position.
Figure 2. Polar histogram (VFH algorithm): the obstacle density is represented in ordinates.
Figure 3. Polar histogram (MVFH algorithm); the polar histogram resulting from the VFH method is combined with a parabolic function centred on the desired direction.
Table 1. Experimental results for a rear-wheel drive wheelchair (RWD) and a front-wheel drive wheelchair (FWD) (the values indicated are averages over 3 tests)
Operators Joystick Collisions Distance(m) Time (s) Collisions Distance(m) Time (s)
(RWD) (RWD) (RWD) (FWD) (FWD) (FWD)
1 Without FF 15 620 122 12 647 133
VFH 10 584 111 7 607 115
MVFH 3 588 107 8 628 124
2 Without FF 11 638 161 16 609 157
VFH 9 604 168 8 658 131
MVFH 11 637 181 9 643 125
3 Without FF 29 695 160 39 727 205
VFH 15 593 121 25 662 158
MVFH 10 597 111 18 679 157
4 Without FF 1 583 119 1 628 144
VFH 7 581 112 1 606 113
MVFH 4 588 108 7 613 115
5 Without FF 15 586 123 13 678 180
VFH 11 585 128 9 629 144
MVFH 3 588 121 22 669 152
6 Without FF 20 587 108 15 639 123
VFH 9 578 101 3 612 111
MVFH 7 600 109 6 618 109 |
03762316 | en | [
"sde.ie"
] | 2024/03/04 16:41:22 | 2022 | https://hal.science/hal-03762316v2/file/HOUSHMANDRAD-Mokhtari-Khodahemati-Alipour.pdf | Mohammad Hosein Houshmandrad
email: [email protected]
Mahsa Mokhtari
Hourvash Khodahemmati
Shayan Alipour
Investigation the position of gardens as one of the dimensions of urban vegetation in the management of large cities located in the plains and its maintenance strategies
Keywords: Garden, Plain, Ecology, Metropolis, Urban Management
Mohammad Hosein Houshmandrad, Mahsa Mokhtari, Hourvash Khodahemmati, Shayan Alipour. Investigation the position of gardens as one of the dimensions of urban vegetation in the management of large cities located in the plains and its maintenance strategies. 8th International Conference of Agriculture Engineering, Natural Resources and Environment, Sep 2022, Tehran, Iran. hal-03762316v2
INTRODUCTION
Iranian gardens play a key role in the formation of the city and of the landscapes of traditional Iranian cities, and are considered the link between the architectural fabric, the green infrastructure and the natural organs of the city. The Iranian garden is one of the masterpieces of outdoor design in the world and is the result of the thought of a people interacting with nature and using nature to create an outdoor space [START_REF] Shaybani | The position of the Iranian garden in the urban landscape[END_REF]. The city is a social arena built upon the natural arena [START_REF] Rahmana | Analysis of change of use and how to preserve and maintain green space in line with sustainable development[END_REF]. Urbanization, as the second revolution in human culture, has transformed the mutual relations of humans with each other, and with the increase of the urban population, the exploitation of the environment intensifies [START_REF] Seraei | Examining the level of sustainability of development in cities in arid regions with an emphasis on bio-environmental components: Ardakan city[END_REF]. Population growth and the expansion of urbanization cause urban green spaces to turn into rough and impermeable concrete surfaces; this trend is especially serious in developing and third world countries [START_REF] Shi | Suitability Analysis and Decision Making Using GIS[END_REF]. In Iran, in cities that have faced rapid expansion of their settlements, this development has been associated with the destruction of urban gardens. In other words, a large part of the land required for the physical development of the cities has been obtained by changing the use of garden areas [START_REF] Ismaeilpour | Change in agricultural land use and relative temperature increase in Yazd city due to its rapid growth[END_REF]. The expansion of cities, the increase of the urban population, the spread of urbanization and human progress in the field of technology have gradually distanced man from nature and destroyed the ecological balance of the environment. Excessive population density, interference in the natural environment and the creation of environments designed and built by human thought reveal more and more the physical and mental environmental needs of humans [START_REF] Sadrmoussi | Analysis on the physical development of Tabriz and the destruction of agricultural lands and urban green space[END_REF]. Therefore, administrators in the urban area take decisions to address this deficiency by building gardens and artificial green spaces inside the cities. Gardens, as one of the unique forms of urban green space, produce oxygen, reduce pollution caused by industrial fine dust in the atmosphere, regulate temperature, stabilize soil, beautify the environment, control pollution, create artificial air arteries and generate wind in cities. It is worth mentioning that cities located in the plains do not have natural lungs of air, and these gardens play the role of artificial lungs for such cities [START_REF] Amiri | The importance and role of green spaces in the city and urban planning[END_REF]. Preserving gardens and preventing their degradation in the urban area is therefore one of the means of achieving sustainable development in urban management.
It is worth pointing out that, in the course of urban development, the gardens are first destroyed, and only after their place and importance are understood is an attempt made to replace the destroyed space. The occurrence of this vicious cycle, in addition to imposing exorbitant costs on the management of megacities, makes the efforts of third world and developing countries to achieve sustainable development fruitless.
Plain
Geographically, a plain refers to flat or relatively flat land surrounded by high mountains; such land generally has one or more flowing rivers. The cities located in this position generally lack a natural lung of air flow because they are enclosed and isolated by high mountain ranges, i.e. they do not benefit from the passage of natural wind across the plain [START_REF] Shoqi | [END_REF].
Functional separation of cities from the consequences of modernism
A behavioral camp is a small social unit resulting from the stable integration of an activity and a place, in such a way that it can fulfill the essential functions of that behavioral environment in a regular process [START_REF] Gulrokh | Behavareh camp: a basic unit for environmental analysis[END_REF]. Modernism reduced the behavioral function of the city's established behavioral camps and threatened those formed throughout history, essentially because of characteristics such as crudeness, slowness, inflexibility and the difficulty of creating new ones [START_REF] Shirazian | Guide to historical maps of Tehran[END_REF]. In Iran, such consequences started in the first Pahlavi period with the implementation of the street plans (street construction) and damaged many urban gathering places that were the behavioral base of urban pedestrian flows. Today's urban parks are the result of developments in social life and in the ways of constructing gardens which began in the Qajar period, along with other aspects of architecture and urban planning; but in the Qajar era and the early Pahlavi era the need for a park in its modern sense was not felt, and the construction or naming of such spaces had more of an imitative and modernist aspect [START_REF] Masoud | Contemporary architecture of Iran: in the struggle between tradition and modernity[END_REF].
The role of gardens in the formation of behavioral norms
Before the first Pahlavi period, productive and non-productive gardens were components of the local scale, acting as intermediary elements in combination with other urban components such as schools, mosques, tombs and almshouses, or taking the form of collections of concentrated gardens in the cities. The emergence of large government gardens with aristocratic mansions, as well as newly imported elements such as the zoo, in the second period of Naser al-Din Shah Qajar's rule and afterwards, became the basis for the creation of urban parks in the process of the city's development. This dialectical flow brought together the concept of urban gardens providing livelihood and entertainment for the common people and the aristocratic concept of green space and designed planting. Nevertheless, until the beginning of the first Pahlavi period, productive and non-productive gardens maintained their value and importance for the public as multi-functional green spaces. In the process of forming and stabilizing the landscape of fine-grained gardens in the city, feedbacks between gardens and other small-scale components at the local scale, and their role in overlooking residential contexts, increased behavioral performance and flexibility and, as a result, the sense of belonging to these gardens. With the sudden changes of the first Pahlavi period, a modernity was formed in Iran which, unlike the endogenous modernity of western society, was not based on gradual changes in the cultural structure of society. The result of this process was a phenomenon of physical and cultural separation instead of the integration of inner city forces. Whereas before modernity the inner city garden had a high behavioral performance, being of local scale and covering a wide range of behaviors supporting this scale, the new spaces had a less diverse behavioral spectrum and were heterogeneous with the needs of the urban cultural context. The result was the gradual loss of the spiritual value of these fine-grained gardens and finally the beginning of their destruction and replacement with building masses [START_REF] Zandi | The role of urban gardens in the formation of behavioral camps, a case study of Tehran[END_REF].
The place and importance of gardens
The destruction of gardens in Iran's big cities did not only happen during their earlier development; it is still happening now. Municipalities have not been able to control the destruction of gardens effectively due to their lack of full authority. Although the approval of some laws has increased the scope of the municipalities' powers, there are still widespread violations caused by the deliberate drying of gardens in these megacities [START_REF] Azizi | sustainable urban development, a view and analysis from a global perspective[END_REF]. The significant decrease in the area of gardens in Tehran, from 14,000 hectares in 1968 to 9,900 hectares in 1981, and the decrease in the area of gardens in Shiraz, from about 4,000 hectares to 1,200 hectares in 1992, are clear proof of this claim. It can be said that the main reasons for the extensive destruction of these gardens are, on the one hand, their location in the path of urban development and, on the other hand, the significant increase in the added value of these gardens for their owners [START_REF] Rahmana | Analysis of change of use and how to preserve and maintain green space in line with sustainable development[END_REF].
Figure 3. Aerial view of the gardens of the Qasr al-Dasht area of Shiraz, between 2002 and 2014
The functioning of gardens in cities geographically located in the plains is so important that, in case of destruction, it can be claimed that they have no substitute; their removal in any way and for any reason causes irreparable damage to the city, the urban area and its residents, and severely challenges city management. Since these gardens are located in a dense and continuous area, they act through the following mechanisms:
5-1 Increase in relative humidity
Due to the large leaf surface they expose compared to other forms of green space, gardens can increase the relative humidity and softness of the air through transpiration. Transpiration in trees is associated with the absorption of heat. Thus, a strip of plants 50 to 100 meters wide reduces the temperature by 3 to 4 degrees compared to the city center, while adding 50% to air humidity. The temperature difference obtained in this way causes a slight decrease in air pressure. This decrease in air pressure creates winds with a speed of 12 km per hour, and these winds are enough to completely change and clean the air of a big city in one hour [START_REF] Nejad Shirazi | Re-creating the principles of sustainable landscape and energy storage. 6th Renewable[END_REF].
5-2 Dealing with heat islands
Researchers consider urban areas as heat islands; dark surfaces in cities absorb the sun's heat during the day, becoming 3 to 5 degrees Celsius warmer than adjacent lands, and in this way contribute to 30% of air pollution. In urban areas, where buildings and plumbing systems make the largest contribution to covering the earth's surface, natural cycles are shortened, and the disturbance in energy transfer has turned these areas into heat islands, which in turn increases the pollution of the city [START_REF] Nejad Shirazi | Re-creating the principles of sustainable landscape and energy storage. 6th Renewable[END_REF].
5-3 Reducing the amount of lead
Gardens play an effective role in reducing lead levels. A comparison of trees with other types of plants, such as herbaceous and agricultural plants, shows that trees have 10 to 20 times the capacity of herbaceous plants, and twice that of agricultural plants, to absorb lead [START_REF] Tabibian | Investigation of the absorption rate of heavy metal lead in the sycamore tree species in traffic areas[END_REF].
5-4 Gardens and wind production
Because gardens and forests produce oxygen and, as a result, generate air movement, they are known as wind factories at both small and large scales. Therefore, in the case of big cities located in the plains, gardens are referred to as artificial, wind-producing lungs. It is worth mentioning that the wind transports pollutants in both vertical and horizontal directions [START_REF] Sardo | Urban green space and its role in people's lives[END_REF].
5-5 Preventing the phenomenon of air inversion
Air inversion is a phenomenon in which, contrary to the natural state, the temperature increases with altitude, so that the lower layer of the atmosphere is cooler than the layer above it. In big cities, temperature inversion usually causes air pollution. Among the causes of this phenomenon are the lack of movement between vertical layers of the atmosphere due to the absence of wind, the presence of enclosing mountain ranges, the lack of wind entering the plain, low rainfall, etc. [START_REF] Sardo | Urban green space and its role in people's lives[END_REF].
5-6 Noise
Trees in gardens and urban green spaces, if they have the right species and proper planting, can reduce the sound up to 4 decibels. In particular, it reduces noise pollution caused by vehicles and urban transport fleet [START_REF] Nejad Shirazi | Re-creating the principles of sustainable landscape and energy storage. 6th Renewable[END_REF][START_REF] Mobasseri | Impact of driving style on fuel consumption[END_REF][START_REF] Mobasseri | Traffic noise and it's measurement methods[END_REF][START_REF] Mobasseri | A Comparative Study Between ABS and Disc Brake System Using Finite Element Method[END_REF][START_REF] Yofianti | Relationship of plant types to noise pollution absorption level to improve the quality of the road environment[END_REF][START_REF] Houshmandrad | Choosing the most optimal type of contract in infrastructure projects from the employer's point of view in the field of technical participation and getting rid of the pressures of liquidity provision[END_REF]. To simulate noise pollution, the Finite Element Method (FEM) is a suitable and practical tool [START_REF] Mobasseri | Bending and Torsional Rigidities of Defected Femur Bone using Finite Element Method[END_REF][START_REF] Mobasseri | Approximated 3D non-homogeneous model for the buckling and vibration analysis of femur bone with femoral defects[END_REF][START_REF] Habibi | Drug delivery with therapeutic lens for the glaucoma treatment in the anterior eye chamber: a numerical simulation[END_REF][START_REF] Sarmadi | SIMULATION OF NOISE POLLUTION REDUCTION IN A POWER PLANT UNDER CONSTRUCTION USING ANSYS FLUENT SOFTWARE[END_REF][START_REF] Mobasseri | Intelligent Circuit Application for Detecting the amount of Air in Automobile Tires[END_REF][START_REF] Mobasseri | Intelligent Circuit Application for Monitoring the Employees Attendance[END_REF].
5-7 Oxygen production and carbon dioxide absorption
At the urban macro scale, the role of gardens in creating oxygen balance through trees can be significant, and at the urban micro scale it cannot be ignored in any way. For example, each beech tree of medium age can remove carbon dioxide from the air equivalent to three times the volume of two single rooms, while 30 to 40 square meters of trees can provide the oxygen needed by one person [START_REF] Fard | The impact and role of trees in the urban landscape and environment[END_REF].
5-8 Radiation control and light reflection
Climate control is realized by modulating the effects of sun, wind and rain. The air temperature in the vicinity of trees is much cooler than in areas without trees. The bigger the tree, the bigger the difference. Absorption of solar radiation (long waves) by trees reduces the temperature difference between day and night. The air under the trees is cooler during the day and warmer at night [START_REF] Fard | The impact and role of trees in the urban landscape and environment[END_REF].
5-9 Energy storage
Proper planting of trees can have a significant effect on energy consumption in buildings. The cost of heating or cooling buildings is reduced if trees are used correctly. Trees absorb 9% of solar energy in summer and can also reduce the internal heat of buildings. In residential areas that are located in windy areas, planting trees as windbreaks can reduce the cost of heating buildings by 4 to 22% depending on the degree of windiness and density of the windbreak [START_REF] Feridouni | Identification of factors affecting energy supply and consumption in Iran with the approach of reforming the pure model[END_REF].
5-10 Wind control
Urban green spaces, and gardens in particular, can guide the wind and change its direction. Trees and shrubs influence wind currents: besides preventing soil erosion, they can selectively channel air flows and control wind intensity [START_REF] Parisi | mutual effects of wind and trees and shrubs of urban green spaces and ways to reduce wind damage. the first national wind erosion conference[END_REF].
5-11 Water flow control
Trees slow the movement of water over the city's impervious surfaces and delay surface runoff. Conifers can intercept up to 40% of rainwater, and broadleaf trees up to 20%, returning it to the atmosphere through evaporation [START_REF] Fard | The impact and role of trees in the urban landscape and environment[END_REF].
5-12 Trees and floods
By absorbing water and directing it to their organs, trees slow rapid flood flows. The surface of their limbs reduces flood velocity by about one third, which in turn lowers the cost of building flood-control structures [START_REF] Parisi | mutual effects of wind and trees and shrubs of urban green spaces and ways to reduce wind damage. the first national wind erosion conference[END_REF].
CONCLUSION
The ever-increasing growth of urbanization and the expansion of urban areas, which drives extensive changes to gardens, together with the need felt by citizens and urban managers of large cities to increase the per-capita urban green space, all point to the strategic importance of gardens as a form of green space, particularly for large cities located on plains. Given the ecological effects of gardens on the urban area, such as reducing lead levels, countering heat islands, preventing air inversion, increasing relative humidity, and controlling radiation and light reflection, urban management must look for ways to preserve these gardens and prevent their destruction or change of use; yet the added financial value of the gardens for their owners and other beneficiaries creates difficulties for city management in this respect. The suggestions for preserving this asset and preventing its destruction are as follows: purchase of the gardens by the municipality and their conversion into city parks (public or family parks); purchase and transfer of ownership to the Medical Sciences Organization and conversion into botanical and medicinal-plant gardens; a memorandum of cooperation between the municipality and the endowment organization in large cities to purchase and endow the gardens so as to prevent any change of use; acquisition of gardens by the cultural heritage organization for use as garden-museums; and changes of use such as the garden-school, of interest to the Ministry of Education and the private education sector, or the garden-reliance, which, in addition to preserving the fabric of the gardens, can be used as a place for religious and ritual gatherings.
Figure 1. Aerial image of Sattar Khan street in Tehran, between 2004 and 2015
7. |
03787947 | en | [
"spi"
] | 2024/03/04 16:41:22 | 2022 | https://hal.science/hal-03787947v2/file/Noise%20pollution-HoushmandRad.pdf | Mohammad Hosein Houshmandrad
email: [email protected]
Mahsa Mokhtari
Investigating possible solutions to reduce noise pollution in wind turbines
Keywords: Noise Pollution, Wind Turbine, Reduce Pollution, Blades
In recent years, the use of wind turbines has increased in response to global warming and climate change in different regions, and this growth is expected to continue, together with the noise pollution produced by the turbines. Noise pollution is one of the main environmental harms of wind turbines; it spans both the audible and the infrasonic range and can be perceived by other living beings, with negative effects on their lives. Problems associated with the rotor blades, such as the noise they generate, visual impacts, the killing of birds and insects, and atmospheric disturbances, add to this problem.
Introduction
The potential of wind resources is far greater than the capacity currently exploited, and the use of wind turbines is expected to increase in the coming years; by 2030 the operating capacity is expected to exceed 1.2 terawatts (equivalent to 20% of electricity production) [START_REF]Global wind energy outlook[END_REF]. Fortunately, it is now possible to design wind turbines for very different working conditions, from small domestic machines to large marine turbines installed far from the coast. Small turbines can easily be installed and used in different areas, cause little environmental harm, and their spread also supports the development of distributed electricity networks. Large turbines, on the other hand, can exploit large wind potentials at lower cost; they are usually installed in groups, and such a collection is called a wind farm. Today turbines can be built with much larger rotor diameters than in the past, which makes it possible to capture more wind energy. The use of wind turbines nevertheless has its own environmental drawbacks, which must be solved as far as possible if their use is to expand: destruction of the environment for the installation of turbine towers, the intermittent moving shadows created by the rotating blades, the pressure drop caused by the vortices shed downstream of large turbines and, most importantly, the noise of the turbines. Studies show that noise is a problem in the daily life of people who live near wind turbines [START_REF] Bakker | Impact of wind turbine sound on annoyance, self-reported sleep disturbance and psychological distress[END_REF][START_REF] Van Renterghem | Annoyance, detection and recognition of wind turbine noise[END_REF][START_REF] Waye | acoustic characters of relevance for annoyance of wind turbine noise[END_REF][START_REF] Houshmandrad | Choosing the most optimal type of contract in infrastructure projects from the employer's point of view in the field of technical participation and getting rid of the pressures of liquidity provision[END_REF]. The range of human hearing extends from 20 Hz to 20 kHz, and frequencies below 20 Hz are called the sub-audible (infrasonic) range. Since wind turbines usually have few blades and rotate slowly, their noise must be examined at low frequencies and in the sub-audible range. Research has shown that wind turbine noise at low frequencies can significantly harm human health [START_REF] Mobasseri | Intelligent Circuit Application for Monitoring the Employees Attendance[END_REF][START_REF] Mobasseri | Traffic noise and it's measurement methods[END_REF], causing problems such as headache, lack of concentration, sleep disorders, fatigue, dizziness and constant ringing in the ears (this set of symptoms is known as wind turbine syndrome). For example, at a rotational speed of 30 revolutions per minute the noise produced lies in the sub-audible range and disturbs the life of wind power plant personnel [START_REF] Salt | Responses of the ear to low frequency sounds, infrasound and wind turbines[END_REF]. The noise caused by wind turbines can also have a negative effect on the hearing system at audible frequencies.
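A quick way to see why the dominant tones of a slowly rotating turbine fall in the sub-audible range is to compute the blade passing frequency, i.e. the shaft rotation frequency multiplied by the number of blades. The sketch below is purely illustrative: the rotor speeds and the three-blade layout are assumptions, not data for any specific machine.

def blade_passing_frequency(rpm: float, n_blades: int) -> float:
    # Blade passing frequency in Hz: shaft frequency (rev/s) times the number of blades.
    return (rpm / 60.0) * n_blades

AUDIBLE_LOW_HZ = 20.0  # frequencies below this limit are infrasound

for rpm in (15, 30, 60):  # assumed rotor speeds in rev/min
    bpf = blade_passing_frequency(rpm, n_blades=3)
    band = "infrasound" if bpf < AUDIBLE_LOW_HZ else "audible"
    print(rpm, "rpm, 3 blades ->", round(bpf, 1), "Hz,", band)

The same frequency also sets the rate of the intermittent shadow flicker discussed later in the text.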
Of course, in examining the effects of wind turbine noise, it is necessary to pay attention to its effect on other living organisms. The noise of wind turbines is within the hearing range of many animals, birds and insects, and the negative impact of this phenomenon on living beings can threaten the environment and the diversity of animal species in the region [START_REF] Zhu | Modeling of aerodynamically generated noise from wind turbines[END_REF].
Wind turbine
A wind turbine is, in general, a machine that converts wind energy into electrical energy. The blades are designed to rotate with the wind; their rotation turns the rotor shaft, the gearbox converts the rotor's low speed and high torque into the higher speed required by the generator, and the mechanical power is finally converted into electrical power. Based on the orientation of the rotation axis, wind turbines are divided into two categories, horizontal axis and vertical axis:
Horizontal axis wind turbines
All main components of the turbine (blades, gearbox, generator, etc.) are placed on top of a tall tower, aligned with the wind. To achieve the highest efficiency these turbines must face directly into the wind, so horizontal-axis turbines need a system to align them with the wind, called a yaw mechanism, which turns the whole nacelle towards the wind. In small turbines a tail vane performs this control, while in grid-connected systems an active yaw control system rotates the nacelle towards the wind using wind-direction sensors and motors.
Advantages of horizontal turbine
The height of the tower provides access to stronger winds. In some places, every 10-meter increase in height raises the wind speed by 20% and the efficiency by 34%.
Thanks to their tall towers and advanced mechanisms, these turbines can also be used offshore.
Because they face directly into the wind, they have high efficiency.
Disadvantages of horizontal turbine
They do not work at low altitude.
It is difficult to transport and install them.
Their maintenance is difficult and expensive.
They are affected in the vicinity of the radar.
Their great height and size are visually unattractive, and they cause noise pollution.
Vertical axis wind turbines
The main rotor is placed vertically. The most important advantage of vertical wind turbines is that they do not need to be aligned with the wind direction, and they can also be used at low heights. In vertical-axis turbines the rotation axis is perpendicular to the ground and the blades sweep parallel to the ground. As a result, the surface pushed by the wind must travel back against the wind for half of each revolution, which lowers the power coefficient. For this reason the blade curvature is very important in these rotors.
Because these wind turbines must be installed closer to the ground, where the wind speed is lower, less energy is produced for a given turbine size. Air flow near the ground and around obstacles is turbulent, which brings vibration-related consequences, including noise and bearing fatigue; as a result maintenance costs may increase and service life may decrease. These turbines are divided into two main types, Savonius and Darrieus.
Advantages of vertical turbines
Vertical wind turbines are less sensitive to wind direction and flow turbulence, which allows them to be used in various locations, including building roofs. They perform well in turbulent winds. They can be installed close to the ground, which makes them safe, cheap and easy to maintain, and they do not need a yaw system.
Disadvantages of vertical turbines
In vertical turbines, when the wind blows, an opposing force is also applied to the other side of the rotor, so their efficiency per revolution is lower than that of horizontal-axis turbines.
It is difficult to install this type of turbine on tall towers; this means they must operate in slower, more turbulent air currents near the ground, with lower energy-extraction efficiency. For these reasons, the design and analysis of the blade (airfoil) of vertical turbines is more difficult and expensive.
Their low efficiency can be improved by compact layout and new designs. Due to high flow disturbance, the structure of vertical wind turbines is subject to fatigue. This issue can also be improved to a great extent by predicting aerodynamic loads.
Environmental effects of wind turbine blades
Noise pollution
The Finite Element Method (FEM) is one of the most widely used numerical simulation techniques for investigating engineering and industrial phenomena [START_REF] Mobasseri | Intelligent Circuit Application for Detecting the amount of Air in Automobile Tires[END_REF], including noise pollution [START_REF] Mobasseri | Traffic noise and it's measurement methods[END_REF][START_REF] Mobasseri | Approximated 3D non-homogeneous model for the buckling and vibration analysis of femur bone with femoral defects[END_REF][START_REF] Mobasseri | Bending and Torsional Rigidities of Defected Femur Bone using Finite Element Method[END_REF][START_REF] Mobasseri | A Comparative Study Between ABS and Disc Brake System Using Finite Element Method[END_REF][START_REF] Habibi | Drug delivery with therapeutic lens for the glaucoma treatment in the anterior eye chamber: a numerical simulation[END_REF][START_REF] Mobasseri | Impact of driving style on fuel consumption[END_REF]. The noise of wind turbines is divided into two categories according to its source:
- Mechanical noise
- Aerodynamic noise
Mechanical noise is produced by components such as the gearbox, generator and bearings and is unrelated to the fluid flow. Nowadays, by designing and manufacturing gearboxes with very smooth tooth surfaces, soft cores and hardened surfaces, this noise has been reduced significantly; using anti-vibration joints and couplings further reduces the mechanical noise of the other components [START_REF] Wagner | Wind turbine noise[END_REF][START_REF] Zhu | Modelling of noise from wind turbines[END_REF].
Aerodynamic noise is divided into three categories according to its origin: low-frequency noise, inflow turbulence noise, and blade self-noise.
Low-frequency noise is caused by the steady thickness and steady loading noise sources of the rotating blade and by the change in the velocity profile produced by the blockage of the turbine tower. Inflow turbulence noise is related to the eddies present in the flow entering the rotor; since the size and number of these eddies vary continuously, simulating this noise is difficult, and because the flow differs from point to point it is practically impossible to test it experimentally on large turbines. In general, large eddies affect the performance of the blade and cause noise in the low-frequency range, whereas small eddies only produce local pressure fluctuations and high-frequency noise. This noise is more important in small turbines than in large ones [START_REF] Rogers | The effect of turbulence on noise emissions from a micro-scale horizontal axis wind turbine[END_REF].
The blade itself is the most important source of turbine noise; the noise generated by turbulence in the boundary layer and by the eddies created by the blade is called blade (self-) noise. Its contributing mechanisms are the tip vortices, the separated boundary-layer flow and the airfoil wake, the vortices caused by instability of the laminar boundary layer, the turbulent boundary layer passing the trailing edge, and the vortex shedding caused by the bluntness of the trailing edge.
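Because trailing-edge self-noise rises very steeply with the local flow speed (classically about the fifth power of velocity), the blade tip dominates the emission, and lowering the tip speed is one of the few robust levers. The sketch below only illustrates that sensitivity; the rotor diameter, the rotational speeds and the fifth-power scaling exponent are assumptions, not results for a real turbine.

import math

def tip_speed(diameter_m: float, rpm: float) -> float:
    # Blade tip speed in m/s for a rotor of the given diameter and rotational speed.
    return math.pi * diameter_m * rpm / 60.0

def level_change_db(u_new: float, u_ref: float, exponent: float = 5.0) -> float:
    # Change in sound level if the radiated noise scales as U**exponent.
    return 10.0 * exponent * math.log10(u_new / u_ref)

D = 80.0                         # assumed rotor diameter in meters
u_ref = tip_speed(D, rpm=15.0)   # assumed reference operating point
u_slow = tip_speed(D, rpm=12.0)  # assumed slowed-down operating point
print("Tip speed at 15 rpm:", round(u_ref, 1), "m/s")
print("Tip speed at 12 rpm:", round(u_slow, 1), "m/s")
print("Estimated level change:", round(level_change_db(u_slow, u_ref), 1), "dB")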
Opponents of wind turbines, pointing to the fact that part of the noise they produce is infrasound, promote the notion of a wind turbine syndrome and argue that this condition is harmful, causing heart disease and dizziness. Noise pollution can also disturb animals and insects and drive them away from the wind farm. It can be dangerous for dolphins, causing increased anxiety, reduced hearing and illness due to sudden pressure drops; in the end they breathe rapidly and dive underwater to escape the noise.
Bird collisions with blades
This is perhaps the most sensitive problem of wind turbines, and designers try to place wind farms far from bird migration routes. According to the available statistics, about 8000 birds are killed by collisions with the blades. In addition to birds, bats are also victims of the blades, owing to the sudden pressure difference and loss of balance. Bats, which play an important role in the ecosystem as insectivores, are at risk of extinction. A study in Denmark shows that birds are able to detect the presence of turbines during migration, but the pressure changes around the turbines cause internal injuries; designers therefore try to keep an optimal spacing within wind farms.
Atmospheric disturbances of blades
It has been claimed that wind farms themselves disrupt air circulation and raise temperatures, with a negative long-term effect on the atmosphere. However, according to research by a French researcher published in the journal Nature, wind farms have caused only very small climate changes in Europe, and this will remain the case until 2018.
Studying the continental pattern
Research conducted at the University of Texas found that wind farms have a warming effect on their local area: researchers estimated that over 10 years the temperature of areas with wind turbines was about 0.72 degrees higher than that of surrounding areas. Energy planners intend to produce more than 200 gigawatts of wind power in Europe by 2018, while about 110 gigawatts are currently recorded; under commitments in place since 1987, European countries are required to obtain 20% of their energy from wind [START_REF] Göçmen | Airfoil optimization for noise emission problem and aerodynamic performance criterion on small scale wind turbines[END_REF].
Temperature difference
Comparison of the temperature difference between the two environments shows a maximum difference of 0.3 degrees, and the value is higher in winter. Some cooling was observed in the south and warming in the north, both caused by the westerly winds. All these differences are small and, according to Robert and Tard, they are very small compared with the changes in the world's climate system caused by greenhouse gases, but research is necessary before wind farms are expanded.
Generation of lightning by vanes
Wind turbine blades can also trigger lightning in certain weather conditions. When the blades reach their highest point they can initiate lightning discharges that propagate up to 2 km into the atmosphere. Juan Montaña from the Polytechnic University of Catalonia conducted an experiment showing that several turbines located close to each other can trigger lightning at the same time.
Reduction of wind and storm speed by the blades
Studies indicate that offshore wind turbines can blunt the destructive power of storms. A research group at Stanford University concluded that installing wind turbines at sea can reduce the damage caused by major storms, such as the hurricanes that hit the United States.
Disturbing natural views and visual problems
Opponents of the industry claim that wind turbines spoil the natural landscape. In some cases tree branches have to be cut, and the blades flashing on the horizon produce intermittent shadow flicker as light passes between them. If this flicker frequency lies between 5 and 20 Hz it can trigger photosensitive epilepsy, a visual disorder that causes loss of concentration and reduced alertness; however, since the phenomenon occurs only periodically during daytime hours, it is rare and no cases have been observed so far.
Secondary contamination of the environment
The consumption of fossil fuels for the production, transportation, installation and recycling of blades is one of the other problems of this industry, which is increasing day by day [START_REF] Houshmandrad | Investigation the position of gardens as one of the dimensions of urban vegetation in the management of large cities located in the plains and its maintenance strategies[END_REF] .
Solutions to reduce noise pollution in wind turbines
Deformation of blade profile
Göçmen and Özerdem optimized the profiles of six well-known small-wind-turbine airfoils using a validated code, with the Amiet and Brooks relations for the noise simulation [START_REF] Bertagnolio | Trailing edge noise model validation and application to airfoil optimization[END_REF]. Using experimental models, Bertagnolio et al. showed that changing the profile shape can reduce the sound level by 1 to 3.5 dB [START_REF] Arakawa | Numerical approach for noise reduction of wind turbine blade tip with earth simulator[END_REF]. Arakawa also showed that changing the shape of the blade tip can significantly reduce the sound. It should be noted that noise reduction should not be obtained at the expense of aerodynamic performance [START_REF] Lowson | Noise evaluation of coning rotor[END_REF].
Changing the three-dimensional layout of the blade
Another noise-reduction approach is a three-dimensional change of the blade layout; changing the rotor cone angle, for example, is equivalent to a three-dimensional change of the blade sections. Coning the rotor moves the blade tips back and reduces the rotor diameter, which is an advantage because the space required per turbine decreases and more turbines can be installed. Because the blade tip is displaced rearwards, the maximum allowable cone angle of an upwind rotor depends on the permitted clearance to the tower, whereas a downwind rotor can be coned at angles approaching 90 degrees. Examining turbulence noise, Lowson showed that coned rotors do not exhibit a large noise increase compared with normal rotors, and Crawford investigated the effect of coning on the low-frequency noise of downwind turbines. In general, little research has been done on this issue [START_REF] Crawford | Advanced engineering models for wind turbines with application to the design of a coning rotor concept[END_REF][START_REF] Howe | Aerodynamic noise of a serrated trailing edge[END_REF].
Serration of the trailing edge of the vane
One of the effective methods is serration of the blade trailing edge (a sawtooth edge). In 1991 Howe showed that this reduces noise, and this became the starting point for research in the field, with serrations on flat plates and on various airfoils shown to reduce noise. Brown et al. showed that serrations on straight blades are more effective than serrations on curved blades [START_REF] Chong | Airfoil self noise reduction by non-flat plate type trailing edge serrations[END_REF]. Chong and colleagues were able to reduce the noise with non-flat-plate serrations; however, the effect of serrations is not the same for all blades and in some cases the noise increases [START_REF] Jones | Numerical investigation of airfoil self-noise reduction by addition of trailing-edge serrations[END_REF]. Haag et al., investigating the noise of a 1 MW turbine, concluded that noise is reduced below about 1500 Hz but increased above 2000 Hz [START_REF] Haag | Noise reduction of a 1 MW size wind turbine with a serrated trailing edge[END_REF]. Jones and Sandberg studied the effect of serration length and found that short serrations reduce noise only at low frequencies, whereas long serrations reduce it at all frequencies. Because the serration shape depends on the airfoil geometry and the local flow, on blades with varying angle of attack and chord length the serrations must be designed accurately [START_REF] Jones | Direct numerical simulations of noise generated by the flow over an airfoil with trailing edge serrations[END_REF].
Conclusion
The proposed solutions for reducing the noise pollution of turbines are still few, and finding an effective solution will require much more research. One proposed solution is to change the blade geometry, which in many cases is not applicable because it affects the aerodynamic performance of the turbine and its power output. Another is to use serrations on the trailing edge of the blade, which requires further research; most of the work in this field has been done on fixed blades, which cannot capture the three-dimensional effects of the flow and of blade rotation. A further option is to cone the rotor, which can be achieved with a hinge at the blade root; coning requires a control system to set the appropriate angle under different operating conditions, and so far little research has been done on coned rotors. In this research, experimental and semi-empirical models, which are of limited accuracy, have been used to simulate noise pollution, and not all noise-generation mechanisms have been investigated. |
03694028 | en | [
"spi.gciv"
] | 2024/03/04 16:41:22 | 2022 | https://hal.science/hal-03694028v2/file/Identifying%20and%20ranking%20the%20causes%20of%20delays%20in%20different%20phases%20of%20oil%20industry%20projects%20in%20EPC%20contracts%20using%20TOPSIS%20method.pdf | Mohammad Hosein Houshmandrad
email: [email protected]
Kavous Ghezelbeigloo
Identifying and ranking the causes of delays in different phases of oil industry projects in EPC contracts using TOPSIS method
Keywords: EPC contracts, delays, oil industry, rankings, TOPSIS method.
Delay, defined as any deviation from the agreed schedule, is affected by factors internal and external to the system. In this article we identify and study the causes and factors affecting delay in engineering projects executed under the EPC method, rank them by level of importance, and finally propose measures to reduce delays. The impact of these factors on the three main project objectives, namely time, cost and quality, is evaluated using a survey, and the factors are ranked using the AHP and TOPSIS methods. The results show that factors such as delay in settlement of the contractor's claims by the employer, weak contractor financial strength, and unrealistically low bid prices submitted solely to win the tender have the greatest impact on delays in the various phases of EPC projects. The research is applied in purpose and descriptive in its data-collection method.
INTRODUCTION
In general, most projects face delays. These delays can lead to additional costs and damages to project stakeholders, which in turn can lead to claims by groups involved in the project [START_REF] Ah | Construction delay: a quantitative analysis[END_REF]. It is therefore necessary to have precise and clear mechanisms for delay analysis to determine the extent of the delay, its impact on different parts and on the whole project, and also to reveal the causes of the delay [START_REF] Azadi Maa | Development of a forward chain approach for calculating selfdelay of project activities[END_REF]. The biggest problem of national development projects is delay in the engineering, procurement and construction phases, and in many cases the delay is so great that, considering the inflation rate, the economic justification of the project is called into question; this issue becomes even more important when we consider the strong dependence of the country's economy on the oil and gas industry and the effect of this industry on the growth and development of our country [START_REF] Assaf | Causes of delay in large construction projects[END_REF][START_REF] Mobasseri | Intelligent Circuit Application for Detecting the amount of Air in Automobile Tires[END_REF][START_REF] Mobasseri | Intelligent Circuit Application for Monitoring the Employees Attendance[END_REF][START_REF] Mobasseri | Impact of driving style on fuel consumption[END_REF][START_REF] Mobasseri | Traffic noise and it's measurement methods[END_REF][START_REF] Mobasseri | A Comparative Study Between ABS and Disc Brake System Using Finite Element Method[END_REF][START_REF] Habibi | Drug delivery with therapeutic lens for the glaucoma treatment in the anterior eye chamber: a numerical simulation[END_REF][START_REF] Mobasseri | Bending and Torsional Rigidities of Defected Femur Bone using Finite Element Method[END_REF][START_REF] Mobasseri | Approximated 3D non-homogeneous model for the buckling and vibration analysis of femur bone with femoral defects[END_REF][START_REF] Houshmandrad | Choosing the most optimal type of contract in infrastructure projects from the employer's point of view in the field of technical participation and getting rid of the pressures of liquidity provision[END_REF][START_REF] Houshmand | employer in the field of technical assistance and relief from the pressures of funding[END_REF]. In recent years, governments at various levels have sought to limit spending without reducing services. At the country level, various initiatives to change the role of government in asset management have been evaluated. Given the financial needs, ministries are increasingly moving towards managing infrastructure facilities in a quasi-commercial way and are looking for new forms of partnership between government and the private sector [START_REF] Arditi | Selecting a delay analysis method in resolving construction claims[END_REF]. As a result of these initiatives, many different modes of participation have emerged, many of which overlap with each other but differ slightly in meaning and method. The construction of ports by government agencies as investments requires an increasing effort to improve these methods, especially the bidding process and the selection of bidders, which is financially and technically demanding and more time-consuming and expensive than standard procurement.
The two defining characteristics of any project delivery method are how the project stages are linked together and whether the required financial resources are provided by the public or the private sector. Each delivery method has its own strengths and weaknesses, and it is by selecting the strategy suited to the project that the project can be implemented successfully [START_REF] Castro | A polynomial rule for the problem of sharing delay costs in PERT networks[END_REF].
STEPS OF CONDUCTING THE RESEARCH
Since the evaluation indicators of any system differ according to the main objectives for which it was created, the important tasks expected of it, the influential factors and the types of cost incurred, the following steps are taken to identify and rank the indicators of the causes of delay in the various implementation phases of oil industry projects under EPC contracts. After the questionnaire is compiled, its reliability is first determined with Cronbach's alpha. The Kolmogorov-Smirnov test is then used to check the normality of the data distribution, and Cochran's formula is used to determine the sample size (statistical population). To establish the eligibility of the selected indicators for inclusion in the final questionnaire, an initial questionnaire is first prepared and analysed with the AHP method; after the final questionnaire is compiled, the TOPSIS method is used to identify and rank the influential indicators [START_REF] Cheng | An application of fuzzy Delphi and fuzzy AHP on evaluating wafer supplier in semiconductor industry[END_REF][START_REF] Ding | Using fuzzy AHP method to evaluate key competency and capabilities of selecting middle managers for global shipping logistics service providers[END_REF].
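To make the statistical steps concrete, the sketch below shows, on invented example data, how the two quantities named here are usually computed: Cronbach's alpha for questionnaire reliability and Cochran's formula for the required sample size. The item scores, the 5% margin of error and the 50% proportion are placeholders, not the values of this study.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    # Cronbach's alpha for a respondents x items score matrix.
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    # Cochran's formula for the sample size of a large population.
    return (z ** 2) * p * (1 - p) / (e ** 2)

answers = np.array([  # invented 6-respondent, 4-item Likert answers
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print("Cronbach's alpha:", round(cronbach_alpha(answers), 2))
print("Cochran sample size (5% error):", round(cochran_sample_size()))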
3. DATA COLLECTION METHODS
After identifying the experts in this field, the questionnaires were distributed and the experts were interviewed while the problem and the way of filling in the questionnaires were explained. Two questionnaires were used in this research: the first served to identify and validate the causes and factors of delay. In total, 120 responses were received from EPC experts with experience of participating or investing in this type of project, and the main questionnaire was developed on the basis of these responses. The main questionnaire was then sent to 34 experts active in consulting services, contracting, employer organizations and investment in EPC projects; after the answers were received, the impact of each factor on the time, cost and quality of the project was evaluated.
4. DATA ANALYSIS
After the initial questionnaire was sent out, the 120 questionnaires received from experts were analysed with the AHP method to confirm the eligibility of the indicators. Three of the indicators were removed from the final questionnaire and replaced by new items mentioned by the respondents as having, in their view, the greatest impact on delays. The high-impact items mentioned by the respondents are described in Table [START_REF] Ah | Construction delay: a quantitative analysis[END_REF].
Table 1. Factors mentioned by the respondents in the open-ended questionnaire questions (impact rate / factor identified by respondents)
1 - Uncertainty about the status of the nuclear negotiations and the continuation of sanctions, and their impact on the cost of the project due to the high exchange rate: very much
2 - Lack of investment by professional foreign contractors in EPC projects due to the lack of economic stability in Iran: very much
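For readers unfamiliar with the AHP screening step mentioned above, the sketch below shows one common way of turning a pairwise comparison matrix into priority weights (the column-normalisation, or approximate eigenvector, method). The 3x3 comparison values and the criterion names are invented for illustration and are not the judgements collected in this study.

import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    # Approximate AHP priority vector: row means of the column-normalised matrix.
    normalised = pairwise / pairwise.sum(axis=0, keepdims=True)
    return normalised.mean(axis=1)

A = np.array([          # invented pairwise judgements on the Saaty 1-9 scale
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
for name, w in zip(["time", "cost", "quality"], ahp_weights(A)):
    print(name, "weight =", round(w, 3))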
Investigating the effect of the factors influencing EPC project delays on the probability of occurrence
Figure 1 shows the extent to which the factors affect the objectives of EPC contracts, namely the time, cost and quality of such projects, together with the probability of occurrence of each influential factor in the oil company's EPC projects. As can be seen, factors such as weak contractor financial strength, delay in settlement of the contractor's claims by the employer, and unrealistically low prices submitted at tender by the contractor (solely in order to win the tender) have the greatest impact on the time and cost of the oil company's EPC projects. The factors with the greatest effect on delays in these projects (the most important factors) are shown in Table 4. In the next step, the numerical results of the analysis of the factors affecting the delay of procurement-construction projects are obtained using the concept of mathematical expectation (PI), and the factors are then prioritised according to the level of importance of each factor (owing to the limited length of the article, the analytical tables of this section are omitted).
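The mathematical expectation (PI) used here is essentially a probability-impact product for each delay factor. A minimal sketch of that calculation is given below; the probabilities and impact scores are invented placeholders, not values taken from the study's tables.

factors = {
    # factor name: (assumed probability of occurrence, assumed impact score)
    "contractor financial weakness":       (0.8, 4.8),
    "delay in settling contractor claims": (0.7, 4.5),
    "unrealistically low tender price":    (0.6, 4.5),
}
ranked = sorted(factors.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, impact) in ranked:
    print(f"{name:<38} PI = {p * impact:.2f}")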
RANKING THE LEVELS OF IMPORTANCE OF VARIOUS FACTORS INFLUENCING THE DELAY OF PROCUREMENT PROJECTS USING THE TOPSIS METHOD
The eight analytical steps of the TOPSIS method are carried out in turn, and the final ranking of the various factors influencing the delay of procurement-construction projects is as described in Table 6. Before that, Table 5 gives the proportionality coefficient of each of the ranking criteria.
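A compact sketch of those eight TOPSIS steps (normalisation, weighting, ideal and anti-ideal solutions, separation measures, closeness coefficient and ranking) is given below. The decision matrix, the criterion weights and the factor labels are invented placeholders; the study's own data are what actually produce the ranking reported in Table 6.

import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    # Returns the closeness coefficient of each alternative (larger = better rank).
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))        # steps 1-2: normalise ...
    v = norm * weights                                         # ... then weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))    # step 3: ideal solution
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))     # step 4: anti-ideal solution
    d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))           # step 5: distance to ideal
    d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))           # step 6: distance to anti-ideal
    return d_minus / (d_plus + d_minus)                        # step 7: relative closeness

scores = np.array([[4.8, 4.6, 3.9],      # invented impact scores of three factors
                   [4.5, 4.7, 3.5],      # on time / cost / quality (1-5 scale)
                   [4.0, 4.9, 3.2]])
weights = np.array([0.4, 0.4, 0.2])      # assumed criterion weights
benefit = np.array([True, True, True])   # higher impact means a higher delay ranking

cc = topsis(scores, weights, benefit)
for rank, idx in enumerate(np.argsort(-cc), start=1):          # step 8: final ranking
    print(f"rank {rank}: factor {idx + 1} (closeness {cc[idx]:.3f})")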
Table 1 (continued):
3 - Hastening the opening of projects due to political and social issues and its opposite results: very much
4 - Technical weakness of the employer's representative in coordination between consulting engineers and the contractor and timely resolution of technical and executive problems of the project: very much
5 - Administrative bureaucracy in the employer organization: very much
6 - Adherence of consulting engineers to the non-expert orders of the employer, which increases time and cost
7 - Lack of proper knowledge of the project area
Table 2. Final questionnaire: factors influencing the delay of procurement-construction projects
Design and engineering phase:
- Failure to employ experienced and specialized personnel
- Technical weakness of the employer's representative in coordination between consulting engineers and contractors and timely resolution of technical and executive problems of the project
- Lack of proper coordination between employer and contractor (design consultant)
- Provision of an inappropriate procedure
- Delayed response of the contractor's (consultant's) engineering department to the changes required in the drawings during the implementation of the project (lack of coordination and cooperation between the engineering and implementation departments)
- Assignment of the project to a company formed through a consortium of several contractors
- Failure to determine the exact amount of goods required, or missing goods, in the list provided by the engineering department to the project procurement department
- Errors in design (such as errors in announcing the size, type and material of the items listed in the drawings)
- Delay in starting the activities of the engineering department due to the prolongation of the recruitment process and the preparation of the project engineering team
- Delayed response of the client's engineering department to the changes required in project drawings and documentation
Procurement phase:
- Uncertainty over the status of the nuclear talks and the persistence of sanctions and their impact on the cost of the project due to the high exchange rate
- Lack of investment of professional foreign contractors in EPC projects due to the lack of economic stability in Iran
- Administrative bureaucracy in the employer organization
- Failure to hire specialized and experienced personnel in the contractor's logistics and trade team
- Sanctions imposed on Iran by some countries
- The impact of government programs and laws on the activities of the supply sector (for example, preventing the import of a specific product required by the project)
- Unpredictability of the inflation rate and increases in material prices compared with the time of bid submission and estimation of project costs
- Submission of unrealistically low prices at the time of the tender by the contractor, for the sole purpose of winning the tender
- Delay in receipt of contractor claims by the employer
- Contractor financial weakness
- Exchange rate changes
- Procurement of materials without proper quality
- Failure to prepare the materials required for the project on the basis of the engineering list (for example, insufficient supply and non-conformity with the BOM)
- Failure to follow up the issues and problems of the procurement phase by project managers
- Delays in delivery and transportation of goods by suppliers and depreciation or breakage of materials during transportation
- Restriction of the list of material suppliers (vendor list) by the National Oil Company and the obligation to source materials from them
- Selection of inexperienced and unsuitable manufacturers and suppliers by the contractor
Construction and implementation phase:
- Extension of the time needed to open an account and activate the letter of credit (LC)
- Problems of customs clearance and prolongation of the clearance process
- Failure to hire specialized and experienced staff by the contractor
- Poor performance of the technical inspection office
- Change orders in the work
- Selection of second-tier contractors and inexperienced employees, because of their low wages, by the contractor
- Improper planning and failure to use proper project control methods
- Involvement of the employer's project managers and experts in more than one project and their lack of sufficient focus on this project
- Lack of sufficient financial resources and spending of the project budget on activities other than the project by the contractor
- Delays in the project due to the contractor's tools, equipment and supplies
- Unfavorable weather conditions such as rain, wind, dust, etc.
- Delay in pursuing issues and solving problems (inside and outside the organization) by the employer's project managers
- Shortage of contractor manpower to perform all the activities in the schedule, and inappropriate and inefficient allocation of personnel to the different work areas by the contractor
- Adherence of consulting engineers to the non-expert orders of the employer, which increases time and cost
- Poor job description provided by the employer (lack of transparency and incompleteness of the items mentioned in the job description)
- Risk of the project environment and delays in securing the site for project implementation
- Performing activities outside the agreed job description and allocating people, resources and budget to these activities
- Lack of proper knowledge of the project area
- Haste in opening projects due to political and social issues and its opposite consequences
- Fuel supply problems (for vehicles, diesel welding engines, air compressors, etc.)
- Delay in decision-making and in dealing with the landowners contesting the project site by the employer
Often the most useful, and at the same time the first, step in organizing data is to sort it according to a logical criterion, then to extract the central and dispersion indicators and, if necessary, to calculate the correlation between two sets of information and apply more advanced analyses such as forecasting. For this purpose, central indicators are calculated. Central indicators are of three types, mode, median and mean, each of which has its own application. In research where the data are measured on at least an interval scale, the mean is the best indicator. Therefore, Table 3 presents the descriptive statistics in full.
Table 4. Factors with the greatest impact on the delay of procurement-construction projects (10 most important factors)
- Financial weakness of the contractor (x20)
- Haste in opening projects due to political and social issues and its opposite consequences (x46)
- Submission of unrealistically low prices at the time of the tender by the contractor, for the sole purpose of winning the tender (x18)
- Delay in receipt of contractor claims by the employer (x19)
- Lack of preparation of materials required for the project based on the engineering list (items such as insufficient supply and non-conformity with the BOM) (x23)
- Procurement of materials without proper quality (x22)
- Lack of sufficient financial resources as well as spending the project budget on activities other than the project by the contractor (x36)
- Involvement of employer project managers and experts in more than one project and their lack of sufficient focus on this project (x35)
- Delays in the project due to contractor tools, equipment and supplies (x37)
- Adherence of consulting engineers to the non-expert orders of the employer, resulting in increased time and cost (x41)
Figure 1. The effect of factors affecting the objectives of EPC contracts, including time, cost and quality
Table 3. Descriptive statistics of the factors identified as causing delays in oil company EPC projects
Factor   N (observations)   Minimum   Maximum   Mean   Standard deviation
x1 170 1.00 2.00 1.1765 .38263
x2 170 2.00 3.00 2.5588 .49836
x3 170 1.00 3.00 1.8235 .41957
x4 170 2.00 4.00 2.4485 .51381
x5 170 1.00 2.00 1.3088 .46372
x6 170 2.00 4.00 2.4779 .51592
x7 170 1.00 2.00 1.3309 .47227
x8 170 1.00 3.00 1.8971 .35023
x9 170 2.00 3.00 2.6397 .48186
x10 170 1.00 2.00 1.1471 .35547
x11 170 2.00 3.00 2.7059 .45733
x12 170 2.00 4.00 2.7500 .45134
x13 170 1.00 2.00 1.4632 .50049
x14 170 2.00 4.00 2.2868 .46996
x15 170 2.00 4.00 2.3971 .50596
x16 170 2.00 3.00 2.3456 .47732
x17 170 2.00 4.00 2.4559 .51450
x18 170 4.00 5.00 4.4559 .49989
x19 170 4.00 5.00 4.5000 .50185
x20 170 4.00 5.00 4.7500 .43461
x21 170 1.00 3.00 1.8235 .41957
x22 170 4.00 5.00 4.1912 .39468
x23 170 4.00 4.00 4.0000 0.00000
x24 170 1.00 2.00 1.5000 .50185
x25 170 1.00 3.00 1.7794 .43361
x26 170 2.00 4.00 2.7794 .43361
x27 170 2.00 4.00 2.8824 .45638
x28 170 1.00 3.00 1.7059 .47325
x29 170 2.00 4.00 2.2059 .42370
x30 170 3.00 4.00 3.2206 .41618
x31 170 1.00 3.00 1.6691 .48770
x32 170 1.00 2.00 1.0809 .27366
According to Table 4, the highest probability of occurrence in terms of level of importance belongs to x20, the financial weakness of the contractor, while the lowest level of importance belongs to x32, change orders.
Table 5. Calculation of the proportionality coefficient of each of the ranking criteria (proportion coefficient and importance of the factors effective in delaying procurement-construction projects)
Impact on quality: 19400
Impact on time: 19522
Impact on cost: 19521
Table 6. The final result of ranking the various factors influencing the delay in oil company EPC contracts
CONCLUSIONS
In this article, the causes and factors affecting delay in engineering projects executed under the EPC method were identified and ranked by level of importance, as described in Table 6. The impact of these factors on the three main objectives of the project, namely time, cost and quality, was also evaluated. Using the results of this study, companies active in EPC projects can, by reducing the incidence of the above factors during the project, cut additional project time, prevent additional costs and increase the quality of the work. |
03940511 | en | [
"sdv"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-03940511v2/file/BBCH%20Camelia%20sinensis-2023.05.pdf | Graine, émergence de la radicule Seed, hypocotyl and cotyledons growing towards soil surface Graine, hypocotyle et cotylédons en croissance vers la surface du sol
The BBCH code of Camellia sinensis identifies the phenological development stages of the tea plant, derived from observation and from the needs of tea growing; a first version was proposed in the book "Théiers, théieraies, feuilles de thés" (Tea Plants, Tea Orchards, Tea Leaves), Alain Guerder, ed. Écohameau de Barthès, 2021, 240 p. (ISBN 978-2-9580-3750-5). Le code BBCH du Camellia sinensis identifie les stades de développement phénologiques du théier issu de l'observation et des besoins de la théiculture, dont la première version est proposée dans l'ouvrage "Théiers, théieraies, feuilles de thés", Alain Guerder, éd. Écohameau de Barthès, 2021, 240 p. (ISBN 978-2-9580-3750-5).
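As an illustration of how such a phenological code can be used in practice, for instance in a plucking logbook or a field-survey spreadsheet, the sketch below encodes the principal growth stages listed in this document as a small lookup table. The Python representation and the helper function are illustrative assumptions and are not part of the BBCH scale itself.

PRINCIPAL_STAGES = {  # principal BBCH stages of Camellia sinensis as listed in this document
    0: "Germination, sprouting, bud development",
    1: "Main shoot leaf development",
    2: "Formation of side shoots",
    3: "Main shoot development",
    4: "Development of vegetative propagation organs",
    5: "Inflorescence emergence (cyme)",
    6: "Flowering, anthesis",
    7: "Fruit development",
}

def principal_stage(code: str) -> str:
    # Map a two- or three-digit BBCH code to its principal stage ("605" -> stage 6).
    return PRINCIPAL_STAGES[int(code[0])]

print(principal_stage("09"))   # emergence belongs to principal stage 0
print(principal_stage("605"))  # an open flower belongs to principal stage 6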
Flower
closed, the petals have their final size and are still closed, the outer ones have a green trace, the internal organs of the flower are not visible Fleur fermée, les pétales ont leur taille définitive et sont encore fermés, les externes ont une trace verte, organes internes de la fleur non visibles Half-opened, the petals are half-open. internal organs are visible. Cup-shaped flower Mi-éclose, les pétales sont mi-ouvertes. les organes internes sont visibles. Fleur en forme de coupe Open, beginning of anthesis, horizontal petals, brightly colored stamens, fragrant flowers emitting the maximum number of molecules, pale yellow stigmas, slightly swollen and secreting a small amount of mucus Épanouie, début de l'anthèse, pétales horizontals, étamines de couleur vive, fleurs odorantes émettant le maximum de molécules, stigmates jaune pâle, légèrement enflés et sécrétant une petite quantité de mucus Post-flowering, the stamens are brown Post-épanouissement, les étamines sont brunes End of flowering the petals are withered, the stamens are brownish, the anthers no longer release pollen and shrivel up Fin de floraison les pétales sont flétris, les étamines sont brunâtres, les anthères ne libèrent plus de pollen et se ratatinent Fall of the corolla, the petals and the stamens have fallen, the sepals persist and envelop the ovary, the stigma fades and blackens Chute de la corolle, les pétales et les étamines sont tombées, les sépales persistent et enveloppent l'ovaire, le stigmate se fanent et noirci 10 % des bouquets sont passés par le stade épanouissement (605) 50 % des bouquets sont passés par le stade épanouissement End of flowering, all the flowers of all the bouquets have fallen. Fruit set visible Fin de floraison, toutes les fleurs de tout les bouquets sont tombées. Nouaison visible
BBCH Camelia sinensis-2023.05
0 Germination, sprouting, bud development Germination, levée, développement du bourgeon
Seed, tegument is split Graine, le tégument est fendu
Seed, radicle emerge
des feuilles de la tige principale
10The true leaf is differentiated and has not started unrolling La vraie feuille est différenciée et n'a pas commencé son déroulement Unfolding, the leaf begins to unfold Déroulement, la feuille commence à se dérouler Leaf unfolded and not starting to growth Feuille ayant fini son déroulement et n'ayant pas commencé à grandir Leaf size less then 50% of its final size Feuille d'une taille inférieure à 50 % de sa taille définitive Leaf at 50 % of final size Leaf of a size between 50 % and less than ⅔ (66%) of its final size Feuille d'une taille entre 50 % et moins de ⅔ (66 %) de sa taille définitive Leaf opened between ⅔ and almost 100% of its final size Feuille ouverte à plus de ⅔ ou à presque 100 % de sa taille définitive
1n 11 1 1nx p. Plante, au repos, bourgeon complètement fermé, à l'arrêt ou dormant couvert par (isbn isbn 978-2-9580-3750-5) Développement Leaf at final size Seed, dormancy Cutting, fresh Plant, dormancy, leaf bud closed, in resting period or dormancy and covered by green scales (banjhi) Feuille à sa taille définitive Graine, dormante So on leaf n at stage x Ainsi de suite feuille n au stade x Bouture, juste prélevée The 1st true leaf begins to unfold des cataphylles (banjhi) The 1st leaf at 50% of its final size
18 19 Seed, imbibition beginning. Cutting, wound healing, timing of hormone application Plant, beginning of bud swelling cataphylls elongate (bud break) The 8th and last possible leaf begins to unfold 8th leaf reach final size Seed, small swelling visible at one end All leaves are mature, reaching full size Graine, début de l'imbibition Bouture, cicatrisation, moment de l'application d'hormones Plante, début du gonflement du bourgeon, les cataphylles s'allongent La 8ᵉ feuille à sa taille définitive (débourrement) Graine, petit renflement visible à une extrémité Toutes les feuilles sont matures, ayant atteint leur taille définitive
Cuttng, callus formation begins Bouture, formation du cal
Seed, imbibition complete Graine, imbibition achevée
Cutting, end of bud swelling Bouture, fin du gonflement des bourgeons
Plant, cataphylls have their maximum size, the leaves are not visible Plante, les cataphylles ont leurs taille maximum, les feuilles ne sont pas visibles
(mature bud) (bourgeon mature)
Seed, elongation of radicle, formation of root hairs. Graine, élongation de la radicule, apparition des poils absorbants
Cuttings, root branching Bouture, les racines se ramifient
Seed, cotyledons and hypocotyl emerge from the seed Graine, cotylédons et hypocotyle émergent de la graine
Plant, buds open, cataphylls begin to separate, Plante, les bourgeons s'entrouvrent, les cataphylles commencent à se séparer,
leaf tips are visible l'extrémités des feuilles est visible
Emergence. Seed, sprout emerges from the soil surface Plants, the ends of the leaves exceed the cataphylls by a few mm Émergence. Graine, la pousse émerge de la surface du sol Plantes, l'extrémités des feuilles dépassent les cataphylles de quelques mm
Stage before 119 is an immature shoot never to be harvested (arimbu)
Le stade avant 119 est une pousse immature à ne jamais récolter (arimbu)
2 Formation of side shoots Formation des pousses latérales
Main shoot leaf development (continued)
- Leaf at 50% of its final size
- The 1st true leaf begins to unroll
- The 1st leaf at 50% of its final size
- The 1st leaf at its final size (mother leaf)
- The 8th and last possible leaf begins to unroll
Stages 104, 106 and 107 of the last leaf before the shoot comes to rest correspond to the stages identified for the picking known as "expanded aspect" (kāimiàncǎi), typical of blue teas.
On average, when a leaf is at stage 109, the leaf above is at ~105-108, the one above that is at ~102-105, and the third one after is at 101.
From top to bottom the leaves are named: last leaf, flowering pekoe, orange pekoe, pekoe, souchong, congou, bohea, mother leaf (the base leaf is always called the mother leaf regardless of the number of leaves on the mature stem).

2 Formation of side shoots
20 Undeveloped axillary buds
22, 23, 2n Growth of 2nd-order shoots: the cataphylls of the 2nd-order axillary bud elongate; 2nd-order shoots at 50% of their full size; 2nd-order shoots at their final size; growth of 3rd-order shoots
2nx And so on: shoots of order n at stage x

3 Main shoot development
31 Start of shoot growth, the axis is visible between the cataphylls
35 The stem has reached 50% of its final length
39 The stem has reached its final length

4 Development of vegetative propagation organs
- The stem is not lignified
- Beginning of lignification
- The stem is semi-lignified: the best time to take cuttings
- The stem is completely lignified

5 Inflorescence emergence (cyme)
- Floral initiation; the change of state is not visible
- Differentiation; the bud visibly becomes a flower bud covered with green bud scales
- Mature bud; it has reached its final size, the bud scales still closed and green
- Bud scales spread apart; the tips just spread apart and the colour of the petals is visible

6 Flowering, anthesis
(Flower opening stages)
601 The petals begin to open, the internal organs are not visible
603, 605, 606, 607, 608
609 End of blooming, the petals are browned and their margins rolled up
60nx Flowers of bunch n, counting from the base to the top of the shoot, at stage x
(Overall flowering stages of the plant)
6010, 6011 The 1st flowers of the 1st bunch, the lowest on the shoot, begin to open; the petals of the 1st bunch are beginning to open, internal organs not visible
615 10% of the bunches have gone through the blooming stage (605)
655 50% of the bunches have gone through the blooming stage
695, 699 All bunches have gone through the blooming stage

7 Fruit development
(Fruit development stages)
- Fruit set, the fruit has visibly started to develop
- The fruit reaches 50% of its final size
70nx Fruits of bunch n, counting from the base to the top of the shoot, at stage x
7010, 7015
The fruits of the 1st bunch have reached 50% of their final size
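The extended codes used above are compositional. The small sketch below illustrates one plausible reading of that scheme, assuming the digits after the principal-stage prefix encode the bunch (or shoot-order) number n and the sub-stage x; both the helper name and the interpretation are illustrative, not definitions taken from the source scale.

```python
# Illustrative reading of the compositional stage codes (e.g. 60nx, 70nx):
# a principal-stage prefix followed by the bunch number n and the sub-stage x.
# This interpretation is an assumption made for the example, not a rule
# stated in the source scale.

def extended_code(prefix: str, n: int, x: int) -> str:
    """Compose a code such as 60nx: flowers of bunch n at sub-stage x."""
    return f"{prefix}{n}{x}"

print(extended_code("60", 1, 0))  # "6010": flowers of bunch 1, sub-stage 0
print(extended_code("70", 1, 5))  # "7015": fruits of bunch 1, sub-stage 5
```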
04105998 | en | ["phys.meca.mefl", "phys.phys.phys-flu-dyn"] | 2024/03/04 16:41:22 | 2013 | https://hal.science/hal-04105998/file/FP05-2013-martinez.pdf | MICRO PIN INTERACTION IN FIN VICINITY AT SUPERSONIC CONDITIONS - 48th APPLIED AERODYNAMICS SYMPOSIUM
INTRODUCTION
At supersonic flight conditions, shock waves strongly change the pressure distribution on any solid surface. This fact suggests a way to steer flying vehicles. The main idea in this study was to use small disturbances to generate shocks. Lift forces on a body are mainly due to pressure applied on large surfaces, which is why a finned configuration was chosen for this paper.
Here are presented the results of numerical simulations and wind tunnel experiments of pin interaction with fins on the Basic Finner reference aerodynamic architecture at Mach number 2.
INTEREST OF THE STUDY
Supersonic flight is central to future flight engineering studies, for both civil and military applications. At supersonic conditions, rudder torque can make steering design quite tricky. It would be better to change the pressure distribution on a surface by means of a small additional device than to move the surface itself. As described previously, supersonic conditions lead to shock wave formation, associated with large pressure changes. The goal is to use a small device dedicated to shock wave generation. This was achieved with a cylindrical pin attached to the body surface. This concept was developed early on by Massey and Silton [START_REF] Massey | Design and Wind Tunnel Testing of Guidance Pins for Supersonic Projectiles[END_REF], [START_REF] Silton | Integrated Numerical and Experimental Investigation of Actuator Performance for Guidance of Supersonic Projectiles[END_REF]. In order to evaluate the aeroballistic performance of the concept, it is of the highest importance to carry out a parametric study of the position, size and shape of the pin relative to the reference model, which was chosen to be the well-known Basic Finner (described in the following). Here, the parameter studied is the position of the pin; shape, size and Mach number are fixed. In order to increase the validity of the results, we compared numerical simulations and wind tunnel results. Numerical results were obtained with ANSYS CFX, while wind tunnel tests were conducted with an aerodynamic balance and pressure sensitive paints (PSP).
REFERENCE MODEL
The reference model used in this study is the Basic Finner, which has been measured many times. The DRDC team completed an extensive free flight campaign [START_REF] Dupuis | Aeroballistic Range and Wind Tunnel Tests of Basic Finner Reference Projectile from Subsonic to High Supersonic Velocities[END_REF]. This finned model is of a very simple shape. Its length to calibre ratio has a value of ten, which produces a stable static margin.
Figure 1. Basic Finner
To make them comparable, the wind tunnel model and the numerical simulations used the same size, Mach number and Reynolds number. Due to the S20 test section dimensions, the model caliber has been set to 20 mm.
WIND TUNNEL FACILITY
The S20 wind tunnel at ISL (Saint-Louis, France), in which the experiments were conducted, is a blow-down facility supplied with air stored at 250 bar. The Mach number in the 20x20 cm² test section can be set between 1.4 and 4.4. The turbulence level is below a few percent.
Figure 2. S20 wind tunnel
An aerodynamic balance from the Abble company was used for its high reproducibility and low hysteresis. The measurements were processed to obtain the six static aerodynamic coefficients for each configuration.
To confirm the validity of the experiments, a Schlieren image was also used to visualize the shock wave produced by the model tip. Those waves reflect on the wind tunnel walls, which may dramatically change the aerodynamic forces on the model. The Schlieren photography confirmed that the wall proximity did not influence the results; it could also have given a clue to potential unexplained phenomena, but none appeared. The S20 wind tunnel has been used at Mach number 2, with a total temperature of 300 K and a stagnation pressure of 4.5 bar.
GOAL ANALYSIS
The pin steering concept was exposed by Silton and Massey [START_REF] Silton | Investigation of Actuator Performance for Guiding Supersonic Projectiles[END_REF]. In 2008, they reported drag increases between 7% and 15% at Mach 2 for the shortest pins, so that the device would only be dedicated to short-range use. It was decided at ISL to study relatively smaller pins to decrease the induced drag. Indeed, the resistance forces should occur directly on the pin itself, by pressure difference between the leeward and windward sides. That is why the wind-projected pin surface should be as small as possible. In Eq. 1, let us define a criterion we call the device benefit ratio, based on the pin-related side force ΔF_s (in the vehicle frame) and the additional axial force ΔF_a generated by the pin:

R_d = ΔF_s / ΔF_a        (1)

This benefit ratio R_d should be maximised under the constraint that |ΔF_s| remains large enough for vehicle guidance. This ratio is interesting because, firstly, it can be compared to the vehicle's own lift-to-drag ratio (Eq. 2):

R_vehicle = F_s / F_a        (2)

Secondly, considering the lateral velocity correction aimed at with the system, it is time related, m being the vehicle mass and V_s the lateral velocity. If we consider a constant additional side force ΔF_s and axial force ΔF_a during Δt, the additional velocity compared to the vehicle's own velocity is:

ΔV_s = ∫ (ΔF_s / m) dt = ΔF_s Δt / m ,   ΔV_a = ΔF_a Δt / m        (3)

ΔV_s / ΔV_a = ΔF_s / ΔF_a = R_pin        (4)

Eq. 4 means that the axial velocity loss for a targeted additional lateral velocity is directly related to the benefit ratio. From Eq. 3, one can say that only the time needed to obtain the lateral velocity correction depends on |ΔF_s|, which must thus be constrained. One goal of the study is then to establish whether a decrease in pin size would increase the benefit ratio while keeping guidance capability.
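The relations above lend themselves to a quick numerical check. The following sketch is illustrative only: the force values, vehicle mass and actuation time are placeholders chosen for the example, not data from this study.

```python
# Minimal numerical sketch of Eqs. (1)-(4).
# All input values below are illustrative placeholders, not measurements.

def benefit_ratio(dFs: float, dFa: float) -> float:
    """Eq. (1): device benefit ratio R_d = dFs / dFa."""
    return dFs / dFa

def velocity_changes(dFs: float, dFa: float, m: float, dt: float):
    """Eq. (3): lateral gain and axial loss for constant forces applied during dt."""
    dVs = dFs * dt / m  # additional lateral velocity
    dVa = dFa * dt / m  # axial velocity loss
    return dVs, dVa

if __name__ == "__main__":
    dFs, dFa = 5.0, 1.0      # N, assumed side and axial force increments
    m, dt = 1.6, 2.0         # kg, s, assumed vehicle mass and actuation time
    Rd = benefit_ratio(dFs, dFa)
    dVs, dVa = velocity_changes(dFs, dFa, m, dt)
    # Eq. (4): the ratio of velocity changes equals the benefit ratio
    assert abs(dVs / dVa - Rd) < 1e-12
    print(f"R_d = {Rd:.2f}, dVs = {dVs:.2f} m/s, dVa = {dVa:.3f} m/s")
```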
PINNED MODEL
Basic Finner
The Basic Finner model was set to a caliber of 20 mm, for a total length of 200 mm. The pin is a small cylinder compared to the Basic Finner, as can be seen in Fig. 3 and 4. Its length is 3 mm and its diameter is 1.5 mm. A set of positioning holes was drilled during manufacturing, so that the pin can be screwed precisely into any of them. The unused holes are filled with plaster and polished before the experiments. The Reynolds number based on vehicle length for our experiments has been calculated to be 10.9 x 10^6.
Pin position
In the following lines, we will refer to different positions of the pin on the vehicle surface. Due to manufacturing needs, we chose a set of five lines along which the pin could be placed. As shown in fig. 5, line #1 is situated next to the vertical fin, Z being the vertical direction. Line #5 is situated next to the horizontal fin, Y being the horizontal and lateral direction. X is the vehicle axis, oriented in the flow direction.
WIND TUNNEL RESULTS
In the Basic Finner configuration, drag is mainly due to base pressure, imposed by the wake and base flow. In the wind tunnel, the model is held by a sting whose size is not negligible compared to the caliber. As a consequence, drag has been measured but will not be presented here. This section first presents the aerodynamic coefficients, then the angular balance.
Aerodynamic coefficients
Compared to the reference body without actuator, using a pin amounts roughly to superimposing a constant offset on the torque coefficients. In addition, the variation of the normal force at 0° angle of attack (AoA) is small, as can be seen in Fig. 8.
That is why Fig. 9, 10 and 11 represent the coefficient offsets at 0° angle of attack. The roll torque offset is indeed the roll torque itself, because the reference configuration is axisymmetric. Line 3 is supposed to generate no roll torque, while the other lines should be symmetric, which is almost the case.
Angular balance
As seen before, neither Cz α nor the yaw and pitch coefficients Cm α and Cn α are dramatically changed by adding a pin to the Basic Finner. As a consequence, the vehicle tends to reach an angular balance that is no longer 0°. From a linear point of view, as long as small angles are involved, Fig. 9 and 10 can be directly converted into an angular balance. This is shown in Fig. 12 and 13.
Those two small angles can be converted into a total aerodynamic angle, which is represented in Fig. 14. It can be observed that positions 7-1 and 7-5 are the best positions for angular control, with an angle of 0.4°. This will be explained in the following section.
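As a rough illustration of this small-angle conversion, the sketch below computes a per-axis trim angle from a moment-coefficient offset and combines pitch and yaw into a total aerodynamic angle. The offset values are placeholders; the Cm α value is the ISL wind-tunnel figure quoted later in Table 2, and treating it as a per-radian slope (and using the same slope for yaw by symmetry) is an assumption made for the example.

```python
import math

# Sketch of the small-angle conversion from moment-coefficient offsets to
# balance (trim) angles. Offsets are placeholders; Cm_alpha is taken from
# Table 2 and assumed to be per radian; the same slope is used for yaw by
# symmetry of the axisymmetric finned body.

def balance_angle_deg(dCm0: float, cm_alpha: float) -> float:
    """Trim angle (deg) where the pin-induced offset cancels the restoring
    moment: dCm0 + cm_alpha * alpha = 0."""
    return math.degrees(-dCm0 / cm_alpha)

def total_aero_angle_deg(pitch_deg: float, yaw_deg: float) -> float:
    """Small-angle combination of the pitch and yaw balance angles."""
    return math.hypot(pitch_deg, yaw_deg)

if __name__ == "__main__":
    CM_ALPHA = -22.6                 # ISL S20 value (Table 2), assumed per rad
    dCm0, dCn0 = 0.16, 0.08          # placeholder pin-induced moment offsets
    pitch = balance_angle_deg(dCm0, CM_ALPHA)
    yaw = balance_angle_deg(dCn0, CM_ALPHA)
    print(f"pitch {pitch:.2f} deg, yaw {yaw:.2f} deg, "
          f"total {total_aero_angle_deg(pitch, yaw):.2f} deg")
```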
NUMERICAL SIMULATIONS
Experimental reference results
Reference Basic Finner has been computed first. To start, we considered the reference report on Basic Finner [START_REF] Dupuis | Aeroballistic Range and Wind Tunnel Tests of Basic Finner Reference Projectile from Subsonic to High Supersonic Velocities[END_REF].
Table 1. Free flight measurement results [START_REF] Dupuis | Aeroballistic Range and Wind Tunnel Tests of Basic Finner Reference Projectile from Subsonic to High Supersonic Velocities[END_REF] at Mach 2

  Cx0     Cz α    Cm α
  0.554   11.4    -25.2

At Mach 2, the main aerodynamic coefficients can be found in Tab. 1. These are the data interpolated from the free flight tests.
Pin orientation
During the numerical study, pin orientation has been changed to see roll angle influence. Fig. 17 shows four configurations in a front view. Configuration 1 is so called "roll angle 0°", configuration 2 is "roll angle 90°", configuration 3 is "roll angle 45°" and configuration 4 is "roll angle -45°".
Figure 17. Pin configurations
Simulation validation
The first simulated case is the Basic Finner without pin. Compared to the reference report on the Basic Finner [3], drag is overestimated: the CFX results give a Cx value of 0.59 where 0.55 was expected. This is not surprising, since the pressure-based solver was set to a stationary RANS simulation. The simulated value of Cz α reaches 11.0 and Cm α is -21.4. This last value is a bit too small, but should be compared to our wind tunnel tests in Tab. 2.

Table 2. Basic Finner coefficients comparison at Mach 2

              Cx0     Cz α    Cm α
  DRDC        0.554   11.4    -25.2
  ISL S20     /       10.3    -22.6
  Simulation  0.59    11.0    -21.4

Though not perfect, the numerical simulation parameters seem to give acceptable results. The difference with the DRDC experimental value might be explained by a different Reynolds number of 13.7 x 10^6. In addition, Cm α decreases quickly as the Mach number decreases towards Mach 1: during the DRDC free flight tests [3], the velocity may have decreased, thus changing the Cm α value.

7.1 Numerical configuration
The ANSYS CFX solver has been used to calculate configuration 7-1. The chosen turbulence model is Shear Stress Transport (SST). Boundary conditions have been set to the ISL wind tunnel specifications. The mesh has been created with ANSYS Mesher and is unstructured; it is made of 140349 nodes and 4452328 cells. The boundary layer is taken into account with 17 refined cell layers (the first one being 10 µm) with a growth rate of 1.3. As displayed in Fig. 15, the mesh was refined near the pin, fins and tip as well as in the wake, in order to better capture the flow modification around the pin.
Figure 15. Vertical mesh cut around Basic Finner

7.5
Fig. 18 to 23 represent the aerodynamic coefficient changes relative to the Basic Finner without pin as a function of the angle of attack. For each configuration, three angles have been considered: -3°, 0° and +3°. Concerning the Cx increase, Fig. 18 shows that it is kept below 3%, which is expected as long as the pin size remains small relative to the Basic Finner caliber. The force offset applies at the rear of the body, which also causes a torque offset, as can be seen in Fig. 21 to 23. Fig. 21 shows that the roll torque coefficient is mainly an offset with a slight angle-of-attack influence. It seems logical that a single pin creates a roll torque; this could be minimised by using another pin with a contrary effect.
Figure 18. ΔCx
Figure 21. ΔCmx

Fig. 25 and 26 represent the pressure distribution around the pin for a roll angle of 0° and an angle of attack of 3°. Fig. 26 is the result of a pressure sensitive paint experiment; the map is scaled as in Fig. 25. It is possible to notice the overpressure in front of the pin and on the windward side of the horizontal fin.
Figure 25. Pressure simulation on the fins
Figure 26. PSP result at 3° AoA

Figure 3. Basic Finner wind tunnel model
Figure 4. Fins compared to pin size
Figure 5. Axis orientation and pin position
Figure 6. Pin axial position
Figure 8. Relative variation of normal force (ΔCz α / Cz α)
Figure 12. Yaw angle offset

CONCLUSION
In this study, we have shown that numerical simulation is reliable enough to investigate pin positioning and system efficiency. A total balance angle of 0.4° can be obtained by using pin 7-1. For a Cz α value of 11.0, the benefit ratio for this device is about 5.9. The benefit ratio would only be 2.95 for pin 1-5 (balance angle of 0.2°). The system shows good efficiency for a guidance mechanism, without increasing drag too much.
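As a hedged consistency check of the figures quoted in the conclusion: if the benefit ratio is evaluated as the trimmed side-force coefficient Cz α times the balance angle, divided by the pin drag increment ΔCx, the quoted values of 5.9 and 2.95 are reproduced with a ΔCx of about 0.013, which is consistent with the "below 3%" drag increase reported above. Both the formula and the ΔCx value are inferences for illustration, not statements from the paper.

```python
import math

# Back-of-envelope check of the benefit ratios quoted in the conclusion.
# Assumption: R ~ (Cz_alpha * trim_angle) / dCx, with Cz_alpha per radian
# and dCx an inferred pin drag increment (not read from the paper).

def benefit_ratio(cz_alpha: float, trim_deg: float, dCx: float) -> float:
    return cz_alpha * math.radians(trim_deg) / dCx

CZ_ALPHA = 11.0   # simulated lift slope quoted in the conclusion
D_CX = 0.013      # assumed drag increment, roughly 2% of Cx0 = 0.59

print(f"pin 7-1 (0.4 deg): R ~ {benefit_ratio(CZ_ALPHA, 0.4, D_CX):.1f}")  # ~5.9
print(f"pin 1-5 (0.2 deg): R ~ {benefit_ratio(CZ_ALPHA, 0.2, D_CX):.1f}")  # ~3.0
```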
ACKNOWLEDGEMENTS
The authors would like to warmly thank the French DGA for supporting this study within a contract called PEA MANEGE, under number 2011 92 00 20. The authors would also like to mention the fruitful partnership between ISL, Nexter Munitions and ONERA, who are working together on this contract. Last but not least, the wind tunnel team should be thanked for their proficiency.
04106134 | en | ["shs"] | 2024/03/04 16:41:22 | 2021 | https://hal.science/hal-04106134/file/ECCODOM.L%27habitat%20social%20en%20Outre%20Mer.pdf | Lydie Laigle
Social housing in the French overseas territories
Lydie LAIGLE (CSTB - Ecole des Ponts)

Introduction
During the summer of 2020, we carried out a number of interviews with customer-relations and asset managers of social housing organisations in the following overseas territories: Réunion Island, Guadeloupe and Martinique. These interviews highlighted key elements for understanding the obstacles to, and the conditions for, the emergence of eco-friendly practices ("écogestes"), and for relating them to changes in social housing, in the environment (notably the climate) and in residents' living conditions.

For this first phase of the sociological survey, five people working in social housing organisations were interviewed, as well as one person from USHOLM: two customer-relations officers from the main landlord of Réunion Island, two people working for two landlords in Guadeloupe, and one person working for one of the landlords in Martinique1.

The questions put to them aimed to identify the factors explaining changes in residents' practices: how do landlords explain the emergence of eco-responsible practices, but also the persistence of practices that run counter to eco-friendly behaviour in social housing? In other words, what leads residents to limit or favour the airing of their dwelling, to use air conditioning, or to acquire energy-consuming appliances? To what extent can the layout of buildings and dwellings, their architecture, the materials used and the consideration given to climate change be improved to encourage residents to adopt eco-friendly practices? In addition, what actions do landlords envisage to support the development of these practices within their social housing estates? How do they plan to organise the joint contribution of landlords and residents, but also of a wider chain of actors (local associations, local authorities, etc.), to the development of these practices?

To conduct the interviews with the landlords, we built a questionnaire, available in the appendix. However, the outline of this summary does not follow the questionnaire answers exactly: it incorporates additional elements of analysis provided by the social landlords on the obstacles and the existing opportunities for supporting the development of eco-friendly practices in an overseas context that remains rather constrained in terms of climate change and of the economic and social situation. We also chose to write a summary without quoting the landlords verbatim, which would have increased the number of pages and made the text less fluid; the interview reports are therefore provided in the appendix, so that the reader can refer to them for the landlords' own words and reflections.
A gradual adaptation of buildings to climate change

The overseas territories are exposed to climate change, which, according to the people interviewed, affects the resistance of buildings to thermal variations, sea winds, rain and humidity, as well as to the risks of storms and cyclones.

Climate change can increase the impacts on a social housing stock already subject to higher rates of wear and humidity than in mainland France. The landlords interviewed report recurring problems affecting the condition of the stock, such as water infiltration (through degraded roofs or façades), mould linked to humidity when ventilation is insufficient, and fragile façades and paintwork due to the sea air. The infiltration problem is probably heightened in the French Antilles, Réunion and Mayotte by the tropical climate with wet seasons and cyclones, and in French Guiana by the equatorial climate with its rainy season.

Our interviewees report colder winters and hotter summers accompanied by more severe droughts, particularly on Réunion Island, and more intense humidity in the Antilles with an increased risk of storms, cyclones and even marine flooding. They acknowledge that these climate trends oblige them to plan improvements in building design and in the way buildings fit into the urban and natural environment, in the systems and equipment installed (openings, shutters, ceiling fans, etc.) and in the energy and domestic hot water production systems. They also state that they must pay more attention to the lifespan and durability of materials and to their resistance to water and heat (avoiding sheet-metal roofs, for instance), so as not to have to carry out heavy rehabilitation work every ten years and to be able to offer residents better comfort and adaptation to climate change, while keeping service charges at an acceptable level given the population's rate of social precarity.
A constrained geographical, social, economic and climatic context

The landlords also point out that their room for manoeuvre in these many areas can be limited by the geographical, social, economic and climatic context specific to the overseas territories. The geographical context is characterised by a scarcity of land available for construction and little possibility of relocating housing inland, as these territories often have narrow coastal strips due to the proximity of the mountains. This geographical situation explains one of the main defects in the layout of the buildings, which are often built facing each other less than ten metres apart; as will be seen below, this does not favour eco-responsible practices by residents, in particular the natural ventilation practices that lower humidity and night-time temperatures in dwellings. The social context is characterised by a population marked by precarity and changing family structures, subject to intense demographic variations. Precarity and family lifestyles can lead to overcrowded dwellings, as on Réunion Island where the birth rate remains high. The tendency of young people to leave the family home earlier, and the high proportion of single-parent families, lead to strong demand for social housing of the T3 type and to numerous transfer requests. The demographic decline in the Antilles raises the problem of adapting housing to an ageing population. Added to this are lifestyles evolving towards car dependency owing to insufficient public transport, and multiple household equipment, particularly among young households (IT, domestic appliances, etc.), which runs counter to the eco-responsible practices of older generations more used to cooking and "tending their garden". These trends, coupled with climatic variations and economic constraints, lead landlords to seek solutions that can satisfy tenants at lower cost and lower charges, in an environment already characterised by higher construction costs than in mainland France. The economic context is marked by strong dependence on mainland France for the supply of building materials and consumer goods and for waste treatment, to which must be added low local energy production (even though the solar potential is high) and economic sectors that do not provide full employment. Landlords have to cope with this unfavourable context, characterised on the one hand by rising product prices and on the other by reduced purchasing power dependent on social benefits.

The climatic context leads landlords to put more emphasis on building adaptations that make it possible to deal differently with energy, water and waste, through changes in building design and equipment, in order to encourage residents' eco-responsible practices and, in particular, to avoid the spread of air conditioning.

Some of these characteristics of the geography, layout and location of the housing stock influence the installation of air conditioning, whose use by residents also depends on their income and their socio-family living conditions.
Factors explaining the spread of air conditioning

The social housing stock in the overseas territories is relatively recent (less than 30 years old) and of satisfactory quality. But, as we have seen, it wears quickly and has certain shortcomings that work against thermal comfort and against residents' eco-friendly practices.

According to the landlords, the main defects inherited from buildings constructed more than ten years ago (sheet-metal roofs, buildings facing each other less than 10 m apart, etc.) can hinder the development of eco-friendly practices (natural ventilation, etc.) and encourage the use of air conditioning among households that can afford it.

Housing design, location and spatial layout are often factors explaining the use of air conditioning. According to the landlords, design defects play a role: sheet-metal roofs cause the indoor air to overheat to a point where natural ventilation alone cannot bring the temperature down; when dwellings are not dual-aspect and oriented along the trade winds, it becomes harder to lower the temperature through natural airing; and design flaws (lack of roof overhangs, lack of shading, poorly positioned rainwater run-off paths, etc.) can lead residents to protect themselves from sun and water entering their dwelling by blocking their openings and adding curtains, which limits natural ventilation.

To these design-related factors must be added the nuisances they create for "living together". Buildings and flats facing one another, and noise between dwellings or coming from outside, lead residents to keep their windows closed at night so as to sleep without being woken by the neighbourhood. This way of protecting oneself from noise undermines the night-time natural ventilation that would help lower the temperature of the dwelling. Likewise, overlooked dwellings lead residents to shield themselves from their neighbours' view in the evening by adding curtains and lowering their shutters. These design defects hardly favour serene cohabitation in collective housing and ultimately hinder the adoption of practices regulating natural ventilation to counter rising temperatures and humidity in dwellings.

Faced with these situations, residents who can afford it prefer to invest in air conditioning rather than not feel at home. Other factors, such as the location of the building and its geographical setting, also influence the use of air conditioning. The landlords mentioned that air conditioning is more widespread in town centres, where air pollution and other nuisances linked to car traffic are high, the heat island effect is more intense, and the possibilities for cooling through shading and night-time airing are smaller (greater proximity of other buildings, etc.). Conversely, in peri-urban areas and inland it is easier to build dual-aspect dwellings and to obtain refreshing natural ventilation at night, which is also useful for combating higher humidity; air conditioning is therefore less widespread there.

The common trends observed thus show that the use of air conditioning stems from geographical and climatic factors and from building design as much as from residents' perception of the nuisances of their living environment. Installing air conditioning keeps noise, air pollution, mosquitoes and so on out of the dwelling, makes it possible not to be seen at home and to sleep peacefully. Lowering the temperature alone is not a sufficient reason to take the step of installing air conditioning.

The social housing organisations have taken full measure of this combination of factors, linked to building design and to the way residents seek to protect themselves from the perceived nuisances of their environment, in explaining the spread of air conditioning. In any case, the climate of each overseas territory and the perception of heat according to age, income level and lifestyle all influence the choice of air conditioning. The landlords note that poor families resort to air conditioning less than families whose adults have jobs and stable incomes.

These various factors explain the differences observed in the spread of air conditioning between overseas territories and population groups. Beyond the common trends described above, differences can be noted.

Air conditioning is more widespread in Guadeloupe and French Guiana than in Martinique and Réunion. Climatic characteristics undeniably play a part, insofar as Grande-Terre in Guadeloupe is more exposed to drought than Martinique and Réunion, where local microclimates (mountains and windward areas) allow easier natural ventilation. The population's lifestyle habits and income level also matter. According to the landlords, Réunion islanders are used to coping with the heat and with natural airing as part of their way of life. Air conditioning is more widespread in intermediate housing and among working households than in very social housing, and in the lowlands rather than the highlands of the island (where a temperature four degrees lower than on the leeward coast to the east can be appreciated).

Air conditioning is also more common in town centres exposed to the heat island effect and air pollution than in peri-urban or rural areas, where the main issue is rather the fight against ambient humidity. It is less prevalent in dual-aspect dwellings well oriented to benefit from the trade winds, and more widespread, as noted above, in buildings affected by overlooking and noise, from which residents try to protect themselves by closing their windows and adding curtains that impede natural ventilation.

Air conditioning appears as a way of compensating for certain design defects in buildings that could not be built with dual-aspect rooms or with sufficient ventilation, insulation and solar protection. But it is also a response by residents seeking greater privacy in their dwelling and trying to counter the nuisances generated by a building layout that makes cohabitation and living together difficult. It is therefore not only the search for "thermal" comfort that drives the use of air conditioning, but the improvement of living conditions in the dwelling in relation to the (human and climatic) environment.
Changes to the built environment proposed by the social housing organisations

The asset directors and customer-relations officers interviewed are aware of these many factors influencing the use of air conditioning and the level of residents' energy consumption and charges. In their view, the room for progress lies in improvements to:

- building design,
- the equipment installed,
- residents' practices.

This is why, for the past five years, the social housing organisations have been carrying out:

- changes in the built environment, its location and its adaptation to climate change,
- changes in equipment and in energy, water and waste treatment systems.

These changes are carried out in connection with residents' daily practices, the comfort sought and the attention paid to controlling charges. They take into account three major issues that structure the adaptations envisaged in buildings, installed systems and residents' practices: energy, water and waste.
Changes in housing design

Most landlords envisage similar changes: favouring larger, dual-aspect dwellings oriented where possible along the trade winds to encourage natural ventilation; developing terraces while limiting overlooking where land allows; eradicating sheet-metal roofs and using materials that are more durable and more protective against heat and rain. They are also trying to build more in better-ventilated areas (the heights of the islands) and to allow a different relationship with vegetation (individual ground-floor gardens and shared gardens at the foot of collective buildings).

All insisted on the need to develop an architecture better suited to overseas climates, pointing out that schemes carried out as off-plan purchases (VEFA) have the drawback of a mainland-inspired architecture (from the South of France) poorly adapted to the overseas territories. They mention in particular the limited use of ground-coupled heat exchangers in humid areas, of better-insulated roofs protecting against solar radiation, of protections limiting solar gains into dwellings, and of orientations and openings allowing the trade winds to be used.
Changes in the systems installed

The landlords also plan to provide tenants with ventilation systems, louvred shutters and solar-based domestic hot water production, in order to limit the use of air conditioning and offer greater comfort while keeping charges down.

Beyond these common trends, the differences observed should be highlighted.

On Réunion Island, where air conditioning is less widespread (except in intermediate social housing), the housing organisations deliver dwellings with ceiling fans. In Martinique, the landlord interviewed prefers adjustable louvred shutters allowing natural ventilation whatever the weather (rain, storms, strong winds). According to the landlords interviewed, neither system raises usage problems for residents. In Guadeloupe, the landlords contacted focus on installing sunshades on façades and changing the openings, but without providing ceiling fans, which may partly explain tenants' greater recourse to air conditioning given the hot climate, particularly in Pointe-à-Pitre. Some Guadeloupe landlords are in fact planning to draw up a charter for the installation of air conditioners, defining the permissible types of units, with controlled consumption, installed by approved professionals.

The second type of equipment favoured by the landlords concerns domestic hot water (DHW) production systems. On Réunion Island, access to air conditioning is seen as less of a problem than access to domestic hot water, which did not exist in the older social housing stock; the demand is explained by harsher winters and a high birth rate among young couples. For some years now, Réunion landlords have been developing solar DHW systems based on heating the fluid on the roof. They have also installed electric hot water tanks to take over when the solar panels fail, the maintenance of which is not entirely satisfactory according to our interviewees (only a few companies master this skill). Both systems, tanks and solar panels, are made available to residents, who can switch from one to the other with a circuit breaker. Residents appreciate the lower charges associated with solar DHW, but regret that the system requires maintenance that is difficult to organise, leading to breakdowns. In Guadeloupe and Martinique, the landlords are also developing solar DHW, but favouring photovoltaic panels: the DHW tanks benefit from photovoltaic electricity, which can also be used for lighting in the common areas. The landlords thus hope to better control their tenants' electricity charges by developing both individual and collective uses of PV electricity. This second system appears more reliable, with fewer breakdowns according to the people interviewed. It is also worth mentioning that rainwater is not harvested and that run-off can damage façades or widen their cracks during sudden, very intense rainfall; the landlords therefore plan to improve rainwater drainage systems.

Everywhere, access to drinking water and the regulation of rainwater drainage deserve attention. In Martinique, the landlords mention a problem of access to drinking water due to network cuts; as a result, residents stock up on water and have become used to carrying out everyday tasks (washing up, showering, etc.) using little water. On Réunion Island, the issue raised is less access to water than its use: the landlords are trying to reduce water consumption for washing terraces or balconies (or even cars) and for watering plants, while avoiding inconvenience to others, since it is not uncommon for balcony washing water to end up at the downstairs neighbour's (when it is windy).

The third area of equipment improvement concerns waste management. The insular context of the overseas territories means that most waste cannot be treated locally. The landlords have installed underground, closed refuse containers, particularly on Réunion Island, to avoid attracting mosquitoes carrying dengue fever. They have also tried to put several bins in kitchens to raise awareness of selective sorting. But, in their view, these sorting and waste treatment practices have not yet become part of residents' habits and daily activities.

Changes in residents' practices

According to the landlords interviewed, waste treatment is a major area for improving practices and developing eco-friendly behaviour. In their view, the slow change in residents' practices is explained by constrained product supply chains (mostly from mainland France) and a lack of local channels for reuse and treatment. As a result, residents tend to leave "old fridges" or "old cars" in the car parks so as to have spare parts that are hard to bring in from mainland France. Moreover, the size of kitchens does not seem suited to installing several sorting bins. The actors acknowledge a shortfall in waste treatment capacity and a saturation of storage sites, and plan to improve cooperation and the pooling of management resources with local authorities. Finally, as regards green waste, its possible use for biomass is not yet very developed. According to a study by Amorce2, "in general, it is the roll-out of structural facilities for sorting and treatment that is lagging, with many units planned but not always built, because these investments are heavy for local authorities to bear."

As for natural ventilation, the landlords interviewed acknowledge that airing practices are part of residents' way of life: they live with windows and shutters open, sometimes all day long. But such practices tend to raise indoor temperatures, all the more so as night-time ventilation is limited by overlooking between buildings and by noise.

As regards energy consumption, the landlords state that residents in situations of social precarity are careful with their energy spending and sometimes complain when common areas or car parks stay lit for too long. But they also acknowledge that lifestyles are changing. Young couples tend to own a lot of household and IT equipment (American-style fridges, freezers, microwaves, internet and mobile phones, etc.). Cooking still holds a prominent place in everyday life, even if young couples spend less time cooking and tend to buy easy-to-prepare products, a trend also explained by the rise in single-parent families. It is also mentioned that families tend to leave appliances on standby, and that this multi-equipment obliges landlords to install at least four sockets per room, but also to give advice on limiting the energy consumption of these appliances.

While residents seem aware of climate change, since they can observe its manifestations in their own territory, and of energy spending, given their social precarity, the fact remains that they are caught up in more consumption-oriented lifestyles with a higher carbon footprint. According to the landlords, however, they have little room for manoeuvre to reverse this trend, as they face increasingly complex life situations: more car travel owing to the distance from employment areas and the scarcity of public transport or cycling (due to the lack of infrastructure and to the topography); and multiple IT equipment to stay connected to "the rest of the world" and to family, and to carry out job-search and welfare procedures.

Their living conditions in collective housing that was not always designed for tropical climates, or to allow the survival of island and environmental practices inherited from the elders, also work against the continuation of certain eco-friendly practices. Some landlords acknowledge that residents have kept a close relationship with, and an unfailing attachment to, the surrounding nature, whose presence is asserted by the seas, oceans and often volcanic mountains. They also stress that collective social housing hardly favours the survival of ways of life developed in harmony with this environment. They describe practices (daytime airing, opening windows, etc.) that were compatible with tropical dwellings (on the model of the "case") but no longer are in apartment buildings, as well as an ever stronger demand from residents to maintain gardening and vegetable-growing practices at the foot of buildings. Two of the six landlords interviewed say they have made plots of land available to residents for shared vegetable gardens; one of them has even signed an agreement with livestock farmers who offered to graze their animals to mow the green spaces, as living with a few animals and hens is still part of the collective culture.

The landlords are thus becoming aware of the many obstacles to the development of eco-friendly practices in social housing, but also of the opportunities open to them to adapt buildings and common areas to practices (airing, water use, energy consumption, waste treatment, etc.) that could become more eco-responsible.

Obstacles and favourable conditions for the development of eco-friendly practices

Most of the landlords interviewed acknowledge that the development of eco-friendly practices cannot be limited to adapting tenants' habits within dwellings as they are currently designed and equipped. In short, raising awareness about controlling charges and consumption (electricity, water, etc.) and about natural ventilation, without giving residents the means to do so in satisfactory conditions of comfort and cohabitation, seems futile to them.

This means that eco-friendly practices must take root within significant adaptations of dwellings and their "equipment" (solar systems, openings, etc.). It also implies that landlords consider how to overcome the obstacles to these practices (buildings closely facing one another, noise between dwellings, etc.) and provide residents with devices (double shutters or adjustable louvres, etc.) enabling them to fine-tune these interfaces with their environment, so as to reach the comfort conditions (thermal, acoustic, etc.) best suited to their situation (ageing, health, etc.).

While the interface with the nearby (human, built and natural) environment is important to consider in giving residents the means to protect themselves from it or benefit from it, so that each can find eco-friendly practices suited to their situation, it also emerges that collective systems based on renewable energy and on water and waste recycling must be envisaged. In that case, the practices mobilised are those linked to the use of such systems, offering better control of charges in a spirit of managing the "commons". This reasoning complements the previous one, since the aim is to make visible the link between residents' practices, their contribution to safeguarding the planet and their collective control of charges. The collective systems installed (solar DHW, rainwater harvesting, etc.) must be designed with this double dividend in mind (for the planet and the resilience of the building; for residents' comfort and their control of charges). According to the landlords interviewed, emphasising only one of these items at the expense of the other is likely to fail. The landlords do not believe they can raise residents' awareness of eco-friendly practices solely through the environmental argument or the charge-control argument, without addressing the problems of living together and without combining these two challenges (social and environmental) within a set of practices. They have gradually adopted a more cross-cutting approach to eco-friendly practices, taking into account:

- the possible adaptations of the built environment that can encourage them,
- the systems for producing and using electricity, water and waste that can take root in residents' daily lives and give rise to new eco-responsible practices,
- the possible actions to support tenants in changing their practices.

These changes are carried out in connection with residents' daily practices, the comfort sought and the attention paid to controlling charges.

The landlords distance themselves from approaches aimed solely at residents' individual responsibility within a constrained built, economic and natural environment. The avenues they envisage for addressing eco-friendly practices in the future are characterised by the following lines of action:
- building on the practices (airing, cooking, consumption and gardening) that are part of residents' culture, and on their propensity to be connected to a planetary footprint and to controlling (energy and other) spending, as well as to local climate change adaptation initiatives affecting the built environment and the use of local resources;
- taking into account the interactions to be built between local practices (daily life, energy consumption, waste production) and the development of territorialised circuits for using and transforming resources (energy, water, waste, etc.) in a globalised context of strong dependence on mainland France;
- taking into consideration the obstacles to changes in residents' practices, whether they stem from the layout of the built environment, changing lifestyles, protection against changing environmental conditions, or socio-family changes;
- linking the issues of energy, water, domestic hot water and waste, in order to foster new synergies between these fields of action likely to promote eco-friendly practices (harvesting rainwater and recovering waste, using photovoltaic electricity to heat domestic hot water as well as for collective electricity uses, recovering energy from household refuse, etc.);
- planning substantial adaptations of housing, common areas and building surroundings, of the landlords' local management (customer follow-up, building caretakers, etc.), as well as partnerships with local actors (associations and local authorities) to support the development of eco-friendly practices in step with the changing climate.

This means that responsibility for eco-friendly practices and their contribution to social and environmental life cannot be thought of only from an individual angle (the impact of installing air conditioning or a fridge on the energy bill, etc.), but collectively. It is a matter, on the one hand, of organising the joint contribution of landlords and residents to eco-friendly practices and, on the other, of the residents' collective contribution (through their relations with one another and around joint initiatives), as well as of actions relayed or supported by local authorities and associations. Solar DHW is a good example of a joint contribution by landlords and residents in a context of carbon-based energy supply: the landlords' contribution is to install reliable solar DHW systems that rarely break down thanks to improved maintenance, and whose impact on charges is significant through individual and collective use of the hot water or electricity produced; the residents' contribution is to use these systems daily, while informing the landlords of any operating or usage faults and of ideas for adapting them to collective uses not initially anticipated (electricity for lighting common areas, etc.). Improving rainwater drainage and using it to build up water reserves for watering the shared gardens at the foot of buildings is another example of a joint contribution: it is an opportunity for landlords to better protect façades, avoid neighbour disputes linked to run-off and recover a scarce resource, and for residents to have "free" water available for collective activities that require it. Waste is harder for landlords to get a grip on, insofar as it depends on a chain of multiple actors, from the resident to storage, recycling and reuse operators and the various levels of local government. These actors are called on to cooperate and pool initiatives in order to foster changes in residents' practices and support their initiatives (collective reuse of parts from bulky waste, development of green waste and composting channels, energy recovery from household refuse, etc.).

Support actions envisaged by the social housing organisations for the development of eco-friendly practices

The social housing organisations stress that residents have a paradoxical relationship with the environment. Their way of life in social housing, in densely populated neighbourhoods and dwellings packed tightly together, cuts them off from a relationship with the natural environment that is nonetheless part of their culture. The landlords report residents complaining that they cannot see the sea or the mountains from their dwelling, or that they cannot garden at the foot of the building or plant shrubs in their ground-floor garden. At the same time, they observe that lifestyles are changing and that young people tend to buy American-style fridges and large televisions and use frozen products so as to cook less.

For these young people, the aim is to reach a standard of living without really being aware of the environmental footprint involved. Paradoxically, residents are keenly aware of the fragility of their living environment and of the ways climate change affects it, but they are caught up in a "modernity" that drives them to consume without really being able to assess the impact of their practices on the phenomena they deplore, or on their energy consumption and level of charges. The landlords also describe a relationship to charges and energy consumption marked both by a certain detachment and by pragmatic learning, at billing time, of the economic translation of their practices: residents adjust their behaviour "after the fact" when they consider that their charges and energy bill have become too high. Finally, the landlords acknowledge that lasting change in residents' practices cannot happen without support from local actors around concrete actions that require follow-up and the mobilisation of actors with complementary skills.

Conventional information through memos and flyers, necessary though it is, does not seem suited to long-term change in practices unless it is accompanied by concrete measures to adapt buildings and infrastructure. The landlords plan to develop several types of awareness-raising, giving concrete examples that relate residents' everyday practices and use of equipment (household appliances, solar systems, etc.) to the associated level of consumption (energy, water, etc.) and charges. They also intend to emphasise the expected benefits of eco-friendly practices for occupants' well-being, comfort and health, and thus to demonstrate their environmental contribution in its many dimensions: a contribution to "living well" at home, but also to residents' health and their ability to "regulate" their dwelling so as to reach a sustainable compromise, for each family, between the comfort sought and the resulting charges.

According to the landlords, awareness-raising cannot rely solely on written "good practice guides". The culture of orality still alive in the overseas territories, and the need for concrete demonstrations of eco-friendly practices in real situations, lead the landlords to favour playful awareness-raising through videos, which can moreover be made with residents in their own homes. They plan to produce videos that could reach a diverse audience and address joint themes, from young couples to older people, and children through school, showing the supply chains and environmental footprint involved in ways of eating, cooking, regulating the temperature of one's dwelling and using household appliances, and the benefits of these practices for the well-being of residents and the planet, under social conditions made sustainable by better control of charges.

The idea is to start from specific examples (cooking, fridges, ventilation, air conditioning, etc.) to reposition these practices within existing margins for action and the possibility of "doing things differently", explaining what this brings to the resident, the landlord and the living environment (the contribution to safeguarding the environment). Front-line staff such as building caretakers can relay residents' questions, complaints and expectations and build eco-responsible practice projects with them; the aim is to create a kind of local facilitation making it possible to experiment with these practices in the field, gathering residents' direct feedback and passing it on to the rental management staff. The landlords also believe that, in social housing, calling on certain local associations can help mobilise groups of residents around concrete actions. One of them mentions the possibility of involving the staff of a community centre to check that refuse bins are properly closed, so as to avoid attracting mosquitoes carrying dengue fever, a health risk for residents; for the landlords, the on-site presence of these actors and their contribution to concrete actions whose purposes are explained is a way of triggering a change in practices and avoiding residents' use of aerosols. The landlords also insist on the need to involve a wider chain of actors to make these eco-responsible practices economically sustainable and feasible. They mention, for example, the need to review certain elements of dwelling design (larger kitchens for waste sorting, roof overhangs or sunshades, etc.) and the solar DHW systems, so that they require less maintenance or that maintenance is provided by better-equipped and more available operators. Without this chain of actors, the development of eco-responsible practices risks being limited. Finally, given the scarcity of existing resources and the importance of providing reliable conditions for the emergence of eco-friendly practices, the landlords intend to seek a pooling of local actors' contributions. They plan to draw on the skills of local authorities, which can help set up short-circuit channels for waste treatment and alternative energy production in which the resident is an essential link, but not the only one. In their view, reorienting energy-use practices and fostering control of charges requires bringing energy and waste actors together, whose cooperation can move towards the concerted and reasoned management of resources and their availability for residents' practices. The landlords maintain that youth associations can work with other associations across several towns and with several local authorities, in order to pool resources and capitalise on the experience gained from one site to another. They believe that such lines of action could be tested in the ECCODOM programme and receive support and monitoring from CSTB and EDF or ADEME, so as to draw lessons for their concrete and lasting implementation within a system of actors that needs to be put in place.
Pour synthétiser, les actions d'accompagnement envisagées par les bailleurs sont les suivantes : -Des guides de bonnes pratiques sur les écogestes en donnant des exemples concrets de la vie quotidienne (cuisine…) et des usages des équipements (frigo…), et en les rapportant au niveau associé de consommation (énergétique, d'eau…) et de charges. -Une sensibilisation autour des finalités attendues des écogestes en termes de « bien-être », de confort et de santé des occupants, afin de démontrer l'apport environnemental des écogestes et leurs conséquences sur les charges ; démonstration d'action à visées concrètes mais aussi éducatives (des jeux, des démonstrations dynamiques prenant en compte les écosystèmes…) ; Une sensibilisation par les gardiens d'immeubles autour d'une mise en pratique des écogestes et le ressenti des habitants avec des retours effectués vers la gestion de la clientèle et les directions du patrimoine des bailleurs ; -Le recours à certaines associations locales pour accompagner le changement des pratiques sur le long terme ; Dans le logement très social, les bailleurs pensent qu'il est nécessaire d'impliquer le tissu associatif local, voir des relais éducatifs, pour sensibiliser aux enjeux de santé, d'environnement et de maitrise des charges des pratiques habitantes. -Le recours à une chaine d'acteurs plus large pour rendre ces pratiques écoresponsables tenables économiquement et sur le plan de leur faisabilité ; Sensibilisation par actions concrètes impliquant des acteurs locaux : ECS solaire (maintenance et multi-usages) et tri des déchets ; -Une sensibilisation par la mobilisation d'une chaine d'acteurs en lien avec collectivités ; le recours aux compétences des collectivités locales qui peuvent contribuer à mettre en synergie des acteurs de l'énergie et des déchets, de la construction et du climat, afin de créer les conditions favorables à l'émergence des écogestes.Il ressort de cette première exploration que les conceptions des logements, leur agencement dans l'espace urbain et périurbain, la composition socio-familiale des habitants et leurs pratiques de vie constituent des facteurs influençant les possibilités comme les contraintes rencontrées dans l'émergence des écogestes et des pratiques écoresponsables. De même les équipements mis à disposition des habitants (ECS solaire, persiennes et doubles volets…) apparaissent comme des éléments essentiels favorisant ou contraignant l'émergence des écogestes. Enfin, si le taux de précarisation sociale est important et la maitrise des charges perçue comme vitale, il convient de rapporter la facture énergétique au confort recherché par les habitants : accès à l'eau chaude sanitaire et à une température limitant une surchauffe des logements, mais aussi à un mieux vivre en cherchant à limiter les nuisances sonores et visuelles. Ces deux derniers facteurs peuvent conduire à l'usage de la climatisation, si d'autres systèmes à même de les réguler ne sont pas mis en place. La maitrise des charges apparait comme un atout, surtout si elle s'accompagne d'une recherche de résilience de l'habitat au changement climatique. La mise à disposition par les bailleurs de systèmes de production énergétique et d'eau chaude fondés sur des énergies renouvelables s'inscrit dans cet esprit d'une contribution conjointe des bailleurs et des habitants à la maitrise des charges et à la résolution des problèmes structurels des bâtiments dans leur environnement. 
L'un des axes d'action issu de cette première synthèse est de rapporter les possibilités de développement des écogestes aux évolutions des conceptions des bâtiments replacés dans leur environnement et aux pratiques susceptibles d'être adoptées par les habitants en considérant les héritages et les changements dans les manières de vivre (de cuisiner, d'aérer son logement, de jardiner…). Les bailleurs ont conscience qu'il convient de prendre en compte ces évolutions sociales et culturelles (précarité, changements familiaux, modes de vie…), ainsi que les conditions sanitaires et de confort recherchées par les habitants pour concevoir un cadre bâti plus résilient et mieux adapté à des pratiques éco-responsables. Le second axe concerne les démarches et les actions d'accompagnement au développement des écogestes envisagées par les bailleurs. Au-delà d'une information sur les écogestes possibles et leur contribution à un confort et une maitrise des charges, les bailleurs pensent favoriser un accompagnement ludique et pédagogique autour d'initiatives concrètes individuelles et collectives en mobilisant les habitants et le personnel de la gestion locative (gardiens et gestionnaire d'immeubles), mais aussi des acteurs du tissu associatif local et des collectivités territoriales. L'objectif est de mettre en synergie les acteurs du cadre bâti, de l'énergie, de l'eau et des déchets, afin d'inscrire les écogestes dans des démarches plus intégrées de gestion locale des ressources et une plus grande attention portée à la pérennité des milieux ultra-marins.
Les bailleurs reconnaissent aussi que les démarches de sensibilisation requièrent des changements dans
leur organisation du suivi de leur clientèle, notamment la présence de gardiens ou de gestionnaires
Ces différentes remarques expliquent les manières dont les bailleurs envisagent les actions d'accompagnement aux écogestes et pratiques écoresponsables. Selon eux, il est important d'organiser des démarches d'accompagnement qui comportent plusieurs facettes. d'immeubles. Conclusion
-
Une sensibilisation ludique par des vidéos qui peuvent d'ailleurs être réalisées avec les habitants dans leur logement par une mise en situation de l'écogeste et ce que l'habitant peut gagner (sur charges, bâti, confort) et sur sa contribution à la sauvegarde de l'environnement ; -
See the interview reports in the Appendix.
https://www.banquedesterritoires.fr/dechets-dans-les-dom-la-solution-est-elle-dans-la-cooperation. According to this study: "For packaging and paper from separate collection, the five French overseas departments (DOM) have seven sorting centres (one in each DOM and three in Réunion). Apart from Mayotte, all have at least one green-waste composting unit. Martinique has an anaerobic digestion unit for biowaste. The 'island of flowers' also has the only household waste-to-energy plant (UIOM) in the DOM. The pressure of waste flows falls on non-hazardous waste storage facilities (ISDND): 'All the DOM are equipped, but in some territories these facilities are the only waste treatment outlets, and their approaching saturation creates a waste management problem in the very short term.'"
Keywords: Tuberculosis, Mycobacterium tuberculosis, macrophages, high throughput screening, host-directed therapy, epigenetics, sirtuin, antibiotic, resistance, host-targeting strategies

ABBREVIATIONS
1,25-D3: 1,25-dihydroxyvitamin D3
ASC: Apoptosis-associated speck-like protein containing a caspase recruitment domain
IMID: Immune-mediated inflammatory disorders
LTBI: Latent tuberculosis infection
MDR-TB: Multi-drug resistant tuberculosis
mtROS: Mitochondrial-derived reactive oxygen species
Title: Improving tuberculosis treatment through host-directed strategies.
Abstract: Tuberculosis (TB) is one of the top ten causes of death worldwide and represents a major public health threat. The disease can be cured with a 6-month treatment combining up to four antibiotics. Alarmingly, however, multidrug-resistant (MDR) strains of Mycobacterium tuberculosis (MTB) keep emerging and spreading. Their resistance to first-line antibiotics makes treatment much more difficult. The therapeutic options for curing MDR-TB are limited, expensive and associated with a lower success rate. Moreover, they rely on more toxic second- and third-line drugs taken over a longer period. Despite considerable research efforts, the vaccine is not sufficient to contain this epidemic. Recently, host-directed approaches have emerged as an innovative and promising strategy to eradicate TB. Unlike antibiotics, which specifically target the bacteria, these therapies aim to enhance the host's immune defences. Combining them with existing or future antibiotics may prove highly useful for improving TB treatment. MTB has developed various strategies to circumvent the innate and adaptive immunity of the host it colonises. For example, MTB modulates host gene expression by targeting the epigenome, which allows it to grow within immune cells. My thesis project aims to determine whether pharmacological interference with these MTB-induced epigenome modifications reduces intracellular bacterial survival in the host. To this end, we set up a screening approach for epigenetic compounds that modify host gene expression. We tested the efficacy of the compounds on macrophages infected with a GFP-expressing MTB strain using an automated high-throughput confocal microscope. This screen led us to identify a molecule that considerably reduces bacterial infection in these cells by preventing MTB from multiplying. This molecule is a putative inhibitor of Sirtuin 2 (SIRT2). SIRT2 is an NAD+-dependent deacetylase that regulates many cellular processes and plays a role in the control of bacterial infections such as listeriosis. We demonstrated that this compound is not toxic to human cells. Moreover, it does not affect bacterial growth in liquid culture, suggesting a host-dependent mechanism of action. This molecule also potentiates the activity of several anti-TB antibiotics against drug-susceptible and drug-resistant MTB strains, both in vitro in human macrophages and in vivo in mice. This thesis work could pave the way for a new treatment against tuberculosis.
LIST OF COPYRIGHTED ELEMENTS
List of all elements removed from the full version of the thesis because the rights to them are not held:
Page 10: Figure 1. Global representation of countries with at least 100,000 incident cases of TB reported in 2019. The first eight countries (Bangladesh, China, India, Indonesia, Nigeria, Pakistan, Philippines and South Africa) with the highest number of TB cases account for nearly 65% of the total cases worldwide. Picture from (World Health Organization, 2020).
Page 32: Figure 2. Cell cycle of MTB infection. MTB is transmitted from a sick individual to a new host via fine MTB-containing droplets. Once in the lower part of the lung, MTB is phagocytosed by individual macrophages. Infected macrophages recruit other macrophages, forming a macrostructure so-called granuloma, composed of macrophages and other immune cells. Inside the granuloma, MTB replication is slow-down (or even stopped depending of the site of infection) and MTB can persist there for decades. However, in some cases, granuloma can undergo into necrosis, with the release and the spreading of MTB in the extracellular environment. Figure from [START_REF] Cambier | Host Evasion and Exploitation Schemes of Mycobacterium tuberculosis[END_REF].
Page 36: Figure 3. Host-MTB interaction influences the granuloma fate. At the local infection site, MTB interacts with different immune cell types. This host-MTB interaction influences the balance of pro-inflammatory and anti-inflammatory mediators, controlling the level of inflammation and modulating the granuloma fate. Picture from [START_REF] Cadena | Heterogeneity in tuberculosis[END_REF].
Page 65: Figure 4. Host-directed therapies against MTB. MTB interferes with different host defenses pathways. HDTs are used to overcome MTB resistance to microbial killing. Each box indicates drugs or classes of drugs with proven effects on antimicrobial immunity. HDAC: histone deacetylase; IFNγ: interferon-γ; mTOR, mammalian target of rapamycin; PDE: phosphodiesterase; TB: tuberculosis; TNF: tumour necrosis factor [START_REF] Kaufmann | Host-directed therapies for bacterial and viral infections[END_REF].
Page 76: Figure 5. Schematic representation of the three different epigenetic mechanisms: DNA methylation, histone modification and non-coding RNAs. DNA methylation occurs on CpG islands, on the carbon of the cytosine ring. Histone modification, and more specifically histone acetylation, induces a remodeling of the chromatin into euchromatin, allowing gene expression in those regions. Histone deacetylation leads to heterochromatin, where the gene expression is silenced. Non-coding RNAs are composed of (i) Long non-coding RNAs (lncRNAs) that impede transcription as well as chromatin stability, and (ii) microRNAs (miRNAs) that post-transcriptionally regulate gene expression. Figure from [START_REF] Gartstein | Prenatal influences on temperament development: The role of environmental epigenetics[END_REF].

Extrapulmonary forms of TB can affect sites such as the lymph nodes or genitourinary tract, and are more common in immunocompromised individuals [START_REF] Pai | Tuberculosis[END_REF]. According to the World Health Organization (WHO) and its global report published in 2020, TB is one of the top ten causes of human death worldwide (World Health Organization, 2020), responsible for 1.4 million deaths per annum, with 10 million new TB cases every year. The majority of people who developed TB in 2019 were from South-East Asia (44%), Africa (25%) and the Western Pacific (18%). In these areas, some countries have a TB incidence higher than 500 new cases per 100,000 people per year (Figure 1) (World Health Organization, 2020).
The causative agent of human TB is mainly Mycobacterium tuberculosis (MTB). Transmission of the disease usually occurs upon inhalation of fine MTB-containing airborne droplets expelled by a symptomatic patient (generally by coughing). While large size droplets (>5 μm) are trapped in the upper respiratory system by mucus and ciliary action [START_REF] Turner | Cough and the transmission of tuberculosis[END_REF], smaller droplets enter the lower lung directly to the alveoli, where MTB is then phagocytized by tissue-resident alveolar macrophages [START_REF] Ryndak | Mycobacterium tuberculosis Primary Infection and Dissemination: A Critical Role for Alveolar Epithelial Cells[END_REF]. This primary infection results in different outcomes: active TB disease, latent TB infection (with potential reactivation), or eradication of the infection.
Clinical outcomes of tuberculosis
[Figure 1, a global representation of countries with at least 100,000 incident cases of TB reported in 2019 (World Health Organization, 2020), is copyrighted material removed from this version of the thesis.]
It is commonly stated that 5 to 10% of individuals infected with MTB develop active pulmonary TB during their lifetime [START_REF] Pai | Tuberculosis[END_REF]. However, these percentages need to be nuanced
depending on many factors such as age, living country, comorbidities or close contact with TB patients [START_REF] Trauer | Risk of active tuberculosis in the five years following infection ⋯ 15%?[END_REF]. Furthermore, several longitudinal epidemiological studies have shown that the majority of TB occurs within the first months after infection and rarely after two years [START_REF] Behr | Is Mycobacterium tuberculosis infection life long?[END_REF]. In active TB, the patient usually shows symptoms such as cough, fever, weight loss, night sweats and haemoptysis in advanced disease [START_REF] Loddenkemper | Clinical aspects of adult tuberculosis[END_REF]. At this stage of the active pulmonary TB, MTB can spread from a sick patient to a healthy individual. In some rare cases, dissemination also happens via the hematogenous and the lymphatic system to other organs, causing extrapulmonary TB.
In other cases, primary infection does not lead to active TB and remains nearly asymptomatic.
In immunocompetent individuals, the pathogen is usually eliminated by the host immune responses. However, if not fully eradicated from the host, MTB can persist in a latent state for decades, known as latent tuberculosis infection (LTBI). The latter is defined as a state of persistent immune response to stimulation by MTB antigens without evidence of clinical symptoms (S. H. [START_REF] Lee | Tuberculosis infection and latent tuberculosis[END_REF].
Many factors can influence the development of an active TB or a reactivation of LTBI: virulence of the MTB strain, immunocompetence of the host, vaccination status, environmental stresses and social factors. Worryingly, about a quarter of the world's population is estimated to be latently infected by MTB (A. [START_REF] Cohen | The global prevalence of latent tuberculosis: A systematic review and meta-analysis[END_REF], representing a major reservoir for potential active TB cases. This assumption is based on meta-analyses of two immunoreactivity diagnostic tests: the tuberculin skin test (TST) and interferon-γ release assays (IGRAs).
However, TST or IGRAs could still be positive even if the infection has been successfully eliminated [START_REF] Esmail | The ongoing challenge of latent tuberculosis[END_REF]. It also should be stated that due to the short period between infection and active TB disease, some studies prefer to mention newly acquired infection rather than reactivation of an old one [START_REF] Behr | Revisiting the timetable of tuberculosis[END_REF][START_REF] Cardona | Reactivation or reinfection in adult tuberculosis: Is that the question[END_REF].
Risk factors
Risk factors play a key role in the spreading of TB, from exposure to active disease. Among them, some are considered "exogenous" such as the infectiousness of the strain, close contact and social behaviors like smoking or overuse of alcohol. Other host related factors are rather considered as "endogenous".
Human immunodeficiency virus (HIV): TB is the leading cause of death among people living with HIV. In 2019, TB caused 1.4 million deaths, of which 208,000 occurred in people diagnosed HIV-positive. In the African region, the burden of HIV-associated TB is the highest, with 86% of TB patients being HIV-positive (World Health Organization, 2020). HIV coinfection is the risk factor most strongly associated with developing active TB disease. Compared to a healthy individual, a person with an early HIV-1 infection is 2 to 5 times more likely to develop active TB [START_REF] Bell | Pathogenesis of HIV-1 and mycobacterium tuberculosis co-infection[END_REF]. This ratio goes up to 20 times for people with advanced HIV-1 disease [START_REF] Bell | Pathogenesis of HIV-1 and mycobacterium tuberculosis co-infection[END_REF]. HIV coinfection increases the susceptibility to primary infection and the risk of reactivation of LTBI. Moreover, it potentiates the immunosuppression of the host, mainly through depletion of CD4+ T cells and reduced cytokine production (Bruchfeld et al., 2015). HIV is also linked to an inhibition of phagocytosis in macrophages [START_REF] Mazzolini | Inhibition of phagocytosis in HIV-1-infected macrophages relies on Nef-dependent alteration of focal delivery of recycling compartments[END_REF]. Furthermore, the use of antiretroviral therapy (ART) may lead to TB immune reconstitution inflammatory syndrome, an increased inflammatory pathology related to TB after ART treatment [START_REF] Bell | Paradoxical reactions and immune reconstitution inflammatory syndrome in tuberculosis[END_REF].
Young age and gender: In 2019, children under 15 years of age represented 12% of the TB cases. However, clinical and radiographic manifestations of the disease are less specific in children compared to adults, making a medical diagnosis more challenging (World Health Organization, 2020). Following primary infection, TB-related mortality is higher during infancy (< 4 years) (Narasimhan et al., 2013). Young children are also at higher risk of contracting and developing TB infection due to the age-related immature immune system, as well as being involved in the TB spreading [START_REF] Alcaïs | Tuberculosis in children and adults: Two distinct genetic diseases[END_REF].
Men and women over the age of 15 account for 56% and 32% of the patients that have developed TB in 2019, respectively. The male:female (M:F) ratio of incident TB cases goes from 1.3 in the Eastern Mediterranean region to 2.1 in the European and Western Pacific regions.
In children, the global M:F ratio is close to 1 (World Health Organization, 2020). A meta-analysis reveals that TB prevalence is significantly higher among men than women in low- and middle-income countries [START_REF] Horton | Sex Differences in Tuberculosis Burden and Notifications in Low-and Middle-Income Countries: A Systematic Review and Meta-analysis[END_REF]. Many hypotheses have been put forward to explain this gender difference, notably behavioral and physiological ones.
Men's behaviors, such as smoking and elevated alcohol consumption in low-income countries, expose them more often to the TB burden [START_REF] Nhamoyebonde | Biological differences between the sexes and susceptibility to tuberculosis[END_REF]. Case notification rates are also higher for men, and the ratio of prevalent-to-notified cases of TB (an indication of how long patients take to be diagnosed) is 1.5 times higher among men than women [START_REF] Horton | Sex Differences in Tuberculosis Burden and Notifications in Low-and Middle-Income Countries: A Systematic Review and Meta-analysis[END_REF]. This study suggests that men are disadvantaged in accessing TB care and are less likely to achieve a timely diagnosis. However, this explanation is still under debate, as other studies suggest a bias towards under-reporting and under-diagnosis of TB amongst women [START_REF] Srivastava | Tuberculosis in women: A reflection of gender inequity[END_REF].
Physiological differences may also explain the different TB outcomes between genders.
Indeed, hormones are known to have diverse effects on many immune cell types, including dendritic cells (DC), macrophages, and natural killer (NK) cells. For instance, estradiol enhances macrophage activation [START_REF] Neyrolles | Sexual inequality in tuberculosis[END_REF]. Estrogen stimulates secretion of interferon-gamma (IFN-γ), tumor necrosis factor α (TNF-α) and interleukin 12 (IL-12, a pro-inflammatory cytokine involved in immune defense against MTB) and inhibits production of IL-10 (an anti-inflammatory cytokine) [START_REF] Nhamoyebonde | Biological differences between the sexes and susceptibility to tuberculosis[END_REF].
Diabetes: Many studies suggest that diabetes is a risk factor for TB, as patients with diabetes have a threefold higher risk of developing active TB. Moreover, they have worse outcomes regarding treatment efficacy, mycobacterial clearance by the immune system, progression from LTBI to active TB, relapse and death (nearly two-fold compared to people without diabetes) [START_REF] Gautam | Diabetes among tuberculosis patients and its impact on tuberculosis treatment in South Asia: a systematic review and meta-analysis[END_REF]. In diabetes-TB cases, activation of immunity in the alveoli is reduced or delayed, with a lower proportion of activated alveolar macrophages, higher levels of the anti-inflammatory cytokine IL-10 and lower levels of the pro-inflammatory cytokine IFN-γ (Restrepo, 2016).
Finally, monocytes from diabetic individuals show significantly reduced binding and phagocytosis of MTB [START_REF] Restrepo | Understanding the Host Immune Response Against Mycobacterium tuberculosis Infection[END_REF]. A similar conclusion has been drawn from experiments using alveolar macrophages from diabetic mice [START_REF] Martinez | Diabetes and immunity to tuberculosis[END_REF].
Socioeconomic factors and malnutrition: Socioeconomic deprivation refers to the relative disadvantage in terms of the economic and social resources necessary for an individual [START_REF] Apolinário | Tuberculosis inequalities & socio-economic deprivation in Portugal[END_REF]. It is defined by numerous factors such as poor social integration, lack of education, low income, overcrowding or unemployment. These factors are partially responsible for health inequities, with poorer treatment outcomes and an increased risk of developing diseases such as TB [START_REF] Duarte | Tuberculosis, social determinants and co-morbidities (including HIV)[END_REF]. Close contact, as in overcrowded households, homeless populations or prisons, increases the risk of infection by MTB [START_REF] Story | Tuberculosis in London: The importance of homelessness. problem drug use and prison[END_REF]. Another well-established risk factor for TB is nutritional deficiency. The relation between TB and malnutrition is bi-directional: TB itself reduces appetite and alters metabolic processes, while undernutrition potentiates TB through immunodeficiency and increases the risk of LTBI reactivation and active TB, which consequently increases the risk of death [START_REF] Feleke | Nutritional status of tuberculosis patients, a comparative cross-sectional study[END_REF]. This results from an impairment of the host immune response, mostly through impaired T-lymphocyte and macrophage functions with reduced cytokine production, but also from the lack of vitamins and minerals, such as vitamins A, B12 and D and calcium, which are key mediators of the innate immune system [START_REF] Chandrasekaran | Malnutrition: Modulator of immune responses in tuberculosis[END_REF].
Genetic polymorphisms: Some case-control studies point out a link between host genetic factors and susceptibility to TB. For example, Mendelian susceptibility to mycobacterial diseases, a familial predisposition to develop disease upon infection with weakly virulent mycobacteria, is related to polymorphisms in nine characterized genes (IFNGR1-2, STAT1, IL12RB1, IL12B, IRF8, ISG15, TYK2, IKBG and CYBB) (Harishankar et al., 2018).
Other examples of genes polymorphisms and its links with TB are described below:
-Polymorphisms of human leukocyte antigens (HLA) of Class I and Class II alter the binding affinity of antigens and impair antigen presentation to CD4+ and CD8+ T cells. HLA-DRB1 polymorphisms are the ones most strongly associated with TB susceptibility and disease development in East Asian populations [START_REF] Tong | Polymorphisms in HLA-DRB1 Gene and the Risk of Tuberculosis: A Meta-analysis of 31 Studies[END_REF].
-TLRs are pattern recognition receptors critical for pathogen detection and activation of the host innate and adaptive immune responses. TLRs are found on many cell types, with several playing a role in the control of MTB infection by the immune system, including TLRs 1, 2, 6, 8 and 9 (Azad et al., 2012). As a result, polymorphisms in these TLRs modulate the outcome of TB, such as an increased or decreased risk of developing TB amongst certain ethnic populations (Y. [START_REF] Zhang | Toll-Like Receptor -1, -2, and -6 Polymorphisms and Pulmonary Tuberculosis Susceptibility: A Systematic Review and Meta-Analysis[END_REF]. Moreover, the presence of the X chromosome has a putative impact on tuberculosis immunity, with X-linked Toll-like receptor 8 (TLR8) gene polymorphisms implicated in susceptibility to pulmonary TB in male children (Dalgic et al., 2011).
-Natural Resistance-Associated Macrophage Protein 1 (NRAMP1), encoded by Solute Carrier Family 11A Member 1 gene (SCL11A1), is a transporter that influences MTB survival within the host, by regulating iron and cations homeostasis in macrophages [START_REF] Meilang | Polymorphisms in the SLC11A1 gene and tuberculosis risk: A meta-analysis update[END_REF]. NRAMP1/ SCL11A1 gene is heterogeneous across populations and the D543N single nucleotide polymorphism of SLC11A1 (NRAMP1) is significantly associated with susceptibility to infection by MTB Beijing strains in Indonesian TB patients [START_REF] Azad | Innate immune gene polymorphisms in tuberculosis[END_REF].
-The vitamin D-binding protein (VDBP) polymorphisms have been shown to impede the inflammatory profile upon MTB infection [START_REF] Coussens | Ethnic Variation in Inflammatory Profile in Tuberculosis[END_REF]. VDBP polymorphism is also related to ethnic variation and associated with susceptibility to TB [START_REF] Hawthorne | Vitamin D binding protein (DBP) levels during tuberculosis treatment are affected by DBP genotype / haplotype but not by total vitamin D levels[END_REF].
Genome-wide association studies (GWAS) have also been used to search for specific genetic variations that may be associated with susceptibility to TB. In 2018, a study described homozygosity for the TYK2 P1104A allele as a rare monogenic cause of TB [START_REF] Boisson-Dupuis | Tuberculosis and impaired IL-23-dependent IFN-γ immunity in humans homozygous for a common TYK2 missense variant[END_REF]. However, it should be kept in mind that TB is a polygenic disease which is influenced by many genes.
Bacterial load, lineage and virulence of the strains: MTB strains are divided into lineages 1 to 4 and lineages 7 and 8. These lineages are geographically spread: lineage 1 (Indo-Oceanic or East African-Indian) is disseminated around the Indian Ocean, lineage 2 (East-Asian; including Beijing family strains) and lineage 4 (Euro-American) are worldwide spread, lineage 3 (Central Asian Strain) dominates the East Africa, Central-and South-Asia, while lineage 7 is mainly present in Ethiopia and lineage 8 in African Great Lakes region [START_REF] Brites | The nature and evolution of genomic diversity in the mycobacterium tuberculosis Complex[END_REF]Ngabonziza et al., 2020).
MTB lineage is a contributing factor to the clinical outcomes of TB, and lineages are widely spread among populations. Some lineages might be more associated with and adapted to certain populations, leading to different outcomes depending on the host [START_REF] Gagneux | Variable host-pathogen compatibility in Mycobacterium tuberculosis[END_REF]. Higher virulence is usually associated with higher bacterial concentrations [START_REF] Tram | Virulence of Mycobacterium tuberculosis Clinical Isolates Is Associated With Sputum Pre-treatment Bacterial Load, Lineage, Survival in Macrophages, and Cytokine Response[END_REF]. Isolates with a high-virulence phenotype are likely to belong to the East Asian/Beijing lineage [START_REF] Ribeiro | Mycobacterium tuberculosis strains of the modern sublineage of the beijing family are more likely to display increased virulence than strains of the ancient sublineage[END_REF].
Furthermore, strains from lineages 2 and 4 are considered the most transmissible strains, with a higher virulence and a different host immune response (notably a reduced pro-inflammatory response upon infection) [START_REF] Coscolla | Consequences of genomic diversity in mycobacterium tuberculosis[END_REF][START_REF] Wiens | Global variation in bacterial strains that cause tuberculosis disease: A systematic review and metaanalysis[END_REF].
Some studies have also highlighted that lineages 1 and 2 are more often associated with extrapulmonary TB [START_REF] Caws | The influence of host and bacterial genotype on the development of disseminated disease with Mycobacterium tuberculosis[END_REF][START_REF] Click | Relationship Between Mycobacterium tuberculosis Phylogenetic Lineage and Clinical Site of Tuberculosis[END_REF].
Immunosuppressive conditions: Patients with immune-mediated inflammatory disorders (IMID) like rheumatoid arthritis also have a higher risk of developing active TB. These diseases are generally treated with anti-TNF-α to reduce inflammation. However, TNF-α has a role in MTB containment by the immune system. Moreover, an anti-TNF-α treatment increases the risk of MTB reactivation. Therefore, IMID is not only detrimental for TB outcomes due to dysregulations of the immune system, but also with the therapeutic approach used [START_REF] Petruccioli | Mycobacterium tuberculosis Immune Response in Patients With Immune-Mediated Inflammatory Disease[END_REF]. These immunosuppressive conditions are also found in transplanted patients due to their treatment [START_REF] Aguado | Tuberculosis and Transplantation[END_REF]. Patients with IMID therefore represent a high-risk group for developing TB.
Coronavirus Disease 2019 (COVID-19):
COVID-19 is a disease caused by the SARS-CoV-2 virus, which became a worldwide pandemic in 2020. Regarding its association with TB, a first meta-analysis based on six Chinese studies indicates that people with TB are not more likely to get COVID-19. However, it has been shown that these individuals have a higher risk of developing serious complications from COVID-19 [START_REF] Gao | Association between tuberculosis and COVID-19 severity and mortality: A rapid systematic review and meta-analysis[END_REF]. Modelling studies also suggest that the pandemic could lead to up to 6.3 million additional TB cases and 1.3 million additional deaths between 2020 and 2025 [START_REF] Cilloni | The potential impact of the COVID-19 pandemic on the tuberculosis epidemic a modelling analysis[END_REF]. These numbers are related to increased household contact and reduced access to healthcare due to lockdowns.
However, more studies are needed to fully understand the impact of COVID-19 on TB.
Alcohol: A meta-analysis has demonstrated that alcohol use, alcohol dosage and alcohol-related problems are strongly associated with an increased risk of MTB infection (Imtiaz et al., 2017). According to this study, alcohol consumption is a well-known contributor to the TB burden, with the most severe impacts estimated for the African region. Furthermore, alcohol is often associated with an increased rate of poor treatment outcomes, including treatment failure and death [START_REF] Ragan | The impact of alcohol use on tuberculosis treatment outcomes: A systematic review and meta-analysis[END_REF].
Tobacco smoke and indoor air pollution: Tobacco smoking is a risk factor for LTBI, progression to active TB, MTB treatment failure and higher rate of TB-related death [START_REF] Amere | Contribution of smoking to tuberculosis incidence and mortality in high-tuberculosis-burden countries[END_REF].
A recent study performed in Delhi shows significant associations between TB burden and exposure to indoor pollution (Jayachandran et al., 2007a). Indoor air pollution is composed of toxic particles (carbon monoxide, nitrogen oxide, polyaromatic hydrocarbons) from the use of solid fuels, smoke, firewood and biomass combustion. Moreover, smoking and inhaling pollutants alter the cellular and immune responses by exacerbating pathological immune responses or by attenuating the normal defensive function of the immune system [START_REF] Torres-Juarez | LL-37 immunomodulatory activity during Mycobacterium tuberculosis infection in macrophages[END_REF] (Qiu et al., 2017). Therefore, impaired mucociliary clearance or alveolar macrophage processing directly affects the ability of MTB to access the alveoli and cause damage.
Many risk factors influence the progression of TB. To reduce poor outcomes associated with certain risk factors, one of the main objectives is to prevent and diagnose TB as early as possible to limit transmission and conversion to active TB.
II. Diagnosis and prophylaxis
In 2014, the WHO adopted the "End TB Strategy". The goal of this project is to strengthen the fight against TB, with the aim of drastically reducing the TB burden worldwide by 2035. By this date, the objectives are a 90% reduction in the TB incidence rate, a 95% reduction in the annual number of TB deaths and 0% of TB-affected households facing catastrophic costs due to the disease. This program is mainly focused on TB prevention, with increased funding for care and research to better diagnose this disease and develop new TB treatments and vaccines.
Diagnosis
In 2019, 7.1 million people were newly diagnosed with TB, far below the estimated 10 million new cases per year. This gap is mainly due to under-reporting and under-diagnosis related to healthcare accessibility in some countries such as India, Nigeria and Indonesia (World Health Organization, 2020). Yet diagnosis is an important part of the fight against TB.
Rapid diagnosis and treatment are key factors to prevent deaths and limit further transmission of the disease. The choice of a diagnostic tool depends on the TB status: LTBI or active TB disease.
Latent tuberculosis infection
In a case of TB suspicion (subclinical TB with low symptoms or LTBI), two diagnostic tools are available: the tuberculin skin test and the interferon gamma release assay.
Tuberculin skin test or TST: TST is carried out by intradermally injecting a tuberculin purified protein derivative (PPD). In a TB-positive individual, cell-mediated immunity to PPD is present and a hypersensitivity reaction to the TST occurs after 48 to 72 hours. An induration reaction larger than 15 mm is considered positive. While easy to perform at a large scale, this test presents some limitations. A positive result can be due to a former mycobacterial infection. Repeated vaccination with Bacille Calmette-Guérin (BCG, the vaccine for TB) and exposure to non-tuberculous mycobacteria (NTM) may also induce a false-positive TST [START_REF] Farhat | False-positive tuberculin skin tests: what is the absolute effect of BCG and non-tuberculous mycobacteria?[END_REF]. In contrast, patients with chronic kidney disease with high levels of parathormone and vitamin D may present a negative TST due to reduced cell-mediated immunity [START_REF] Deniz | Factors affecting TST level in patients undergoing dialysis: a multicenter study[END_REF]. More generally, the TST shows lower positivity rates, notably among HIV patients and other immunocompromised individuals, depending on the degree of immunosuppression [START_REF] Richeldi | Performance of tests for latent tuberculosis in different groups of immunocompromised patients[END_REF].
Interferon gamma release assays or IGRAs: IGRAs are in vitro blood assays based on the cell-mediated immune response to two mycobacterial antigens: the early secretory antigenic target (ESAT-6) and the 10-kDa culture filtrate protein (CFP-10). These tests detect the release of IFN-γ by circulating T cells. Interestingly, IGRAs can distinguish a BCG-induced response and most environmental mycobacteria from an MTB infection, as the antigens detected by the assay are more specific to MTB [START_REF] Pai | Gamma interferon release assays for detection of Mycobacterium tuberculosis infection[END_REF]. These tests also have reduced sensitivity in immunocompromised patients [START_REF] Richeldi | Performance of tests for latent tuberculosis in different groups of immunocompromised patients[END_REF].
In both cases, TST and IGRAs cannot be used to predict whether LTBI will progress to active TB, nor whether positive test results are due to an acute or past disease, as memory lymphocytes can produce cytokines even if the infection is old. New tests are currently under development to address these issues. As an example, a skin test named C-Tb, combining the two previous approaches, has been developed. Like the TST, C-Tb is a hypersensitivity skin test, but it uses recombinant ESAT-6 and CFP-10 proteins, as IGRAs do. It can be used in endemic countries without the problem of false BCG-positive TST results [START_REF] Aggerbeck | C-Tb skin test to diagnose Mycobacterium tuberculosis infection in children and HIV-infected adults: A phase 3 trial[END_REF].
Researchers have recently identified some mycobacterial antigens specifically and naturally expressed during LTBI, useful to discriminate an active TB from a latent TB [START_REF] Meier | A systematic review on novel mycobacterium tuberculosisantigens and their discriminatory potential for the diagnosis of latent and active tuberculosis[END_REF].
Measurement of cytokines or TB biomarkers predicting the progression to active TB disease is also under investigation [START_REF] Carranza | Diagnosis for Latent Tuberculosis Infection: New Alternatives[END_REF]. However, such tests require more expensive facilities that are not always available in low-income countries.
Active tuberculosis
In active TB, several technologies are used to diagnose the disease: imaging techniques (chest X-ray, computed tomography or PET-CT scans), sputum smear positivity, culture-based methods and molecular tests [START_REF] Pai | Tuberculosis[END_REF].
Chest radiography plays a role in diagnosis and management of TB. Images of a patient with TB usually show lymphadenopathy, parenchymal disease, pleural effusion or bronchial stenosis which can help the diagnosis of TB and evaluate its impact on the lungs [START_REF] Nachiappan | Pulmonary tuberculosis: Role of radiology in diagnosis and management[END_REF]. Although imaging is a fundamental part of the diagnosis, the presence of the disease still needs to be confirmed via some microbiological investigations.
Sputum smear analysis is the most widely used test to diagnose active TB disease in low- and middle-income countries. A mycobacterial culture is performed on both solid and liquid media, followed by a direct visualization of mycobacteria using light microscopy. However, this smear analysis has some limitations among HIV-positive patients (Méndez-Samperio, 2017) and in children [START_REF] Reuter | Challenges and controversies in childhood tuberculosis[END_REF]. Some of these individuals are sputum-scarce, which makes this technique of analysis difficult to execute. In younger children, gastric lavage and bronchoalveolar lavage represent some of the main procedures to obtain a culture of MTB [START_REF] Reuter | Challenges and controversies in childhood tuberculosis[END_REF]. For sputum-scarce adults, sputum induction can be performed using ultrasonic nebulization of hypertonic saline [START_REF] Méndez-Samperio | Diagnosis of Tuberculosis in HIV Co-infected Individuals: Current Status, Challenges and Opportunities for the Future[END_REF]. However, the sputum obtained from these techniques can be paucibacillary, which might lead to misdiagnosis [START_REF] Méndez-Samperio | Diagnosis of Tuberculosis in HIV Co-infected Individuals: Current Status, Challenges and Opportunities for the Future[END_REF][START_REF] Reuter | Challenges and controversies in childhood tuberculosis[END_REF]. To overcome this problem, the lateral flow urine lipoarabinomannan assay (LF-LAM) has emerged as a novel technique to detect active TB disease amongst HIV-positive and sputum smear-negative patients [START_REF] Sabur | Diagnosing tuberculosis in hospitalized HIV-infected individuals who cannot produce sputum: Is urine lipoarabinomannan testing the answer[END_REF]. Beyond the detection of RIF and INH resistance, newer molecular assays have been designed to provide further information on additional antibiotic resistances, such as the detection of the main mutations responsible for the resistance [START_REF] Georghiou | Analytical performance of the Xpert MTB/XDR® assay for tuberculosis and expanded resistance detection[END_REF][START_REF] Ok | Rapid Molecular Diagnosis of Tuberculosis and Its Resistance to Rifampicin and Isoniazid with Automated MDR/MTB ELITe MGB® Assay[END_REF]. Nowadays, drug susceptibility testing is a major part of TB diagnosis in order to prescribe the most accurate treatment to the patient.
DNA sequencing, also known as targeted next-generation sequencing (tNGS), has been proposed to detect resistance across multiple gene regions. These assays, like the portable Deeplex®-MycTB assay (GenoScreen, Lille, France), allow early detection of multidrug resistance and the prescription of more appropriate TB treatments accordingly [START_REF] Feuerriegel | Rapid genomic first-and second-line drug resistance prediction from clinical Mycobacterium tuberculosis specimens using Deeplex-MycTB[END_REF].
Vaccines
Discovered by Calmette and Guérin in 1921 from a live-attenuated strain of Mycobacterium bovis, the BCG vaccine is currently the major vaccine used worldwide to prevent TB. Since 1974, BCG vaccine has been used as the main part of the WHO's Expanded Program on Immunization to expand and strengthen vaccination throughout the world. Nowadays, studies
show that the effects of BCG vaccination vary widely regarding the age of the patient, the endemic situation of the country, the form of TB (extra-or pulmonary) or the time of protection [START_REF] Ahmed | A century of BCG: Impact on tuberculosis control and beyond[END_REF]. As an example, its effectiveness varies from 80%, notably against meningeal TB in children, to 0% in endemic areas like India [START_REF] Mangtani | Protection by BCG vaccine against tuberculosis: A systematic review of randomized controlled trials[END_REF].
Moreover, vaccine-induced immunity diminishes over time and provides protection for a maximum of 10 to 20 years [START_REF] Rodrigues | How does the level of BCG vaccine protection against tuberculosis fall over time?[END_REF]. Several hypotheses explain these differences in effectiveness and duration of protection. Over time and due to different culture methods, many BCG strains with different genetic backgrounds, like BCG Pasteur, Danish, Glaxo or Japan, have emerged worldwide [START_REF] Bottai | The BCG strain pool: Diversity matters[END_REF]. These genetic and phenotypic differences inevitably affect the efficacy of the BCG vaccine. Genetic or nutritional differences between populations may also have an effect on BCG vaccination [START_REF] Boisson-Dupuis | Inherited and acquired immunodeficiencies underlying tuberculosis in childhood[END_REF]. Moreover, pre-exposure to other mycobacteria (NTM or environmental bacteria) provides a protective immunity against TB that is not improved by BCG vaccination [START_REF] Ahmed | A century of BCG: Impact on tuberculosis control and beyond[END_REF]. Such pre-exposure may decrease BCG vaccine efficacy in TB-endemic areas. In this context, developing a more effective vaccine against MTB that provides protection across all populations, regardless of age and the form of TB, is crucial to overcome this global health emergency.
In August 2020, the WHO Global report on TB announced 14 vaccines in Phase I, II or III trials.
Briefly, the new TB vaccine candidates are classified into different categories: preventive pre-exposure vaccines for vaccination prior to infection by MTB; preventive post-exposure vaccines to prevent LTBI from becoming active TB; and therapeutic vaccines combined with antibiotic treatment to clear the mycobacterial infection or to prevent re-infection. These vaccine candidates are either whole cell-derived vaccines or subunit vaccines [START_REF] Schrager | The status of tuberculosis vaccine development[END_REF].
Mycobacterial whole cell-derived vaccines: These vaccines are derived from MTB, BCG or NTM strains. As they are whole cell vaccines, they contain many different antigenic components that stimulate the immune response in a more diverse way. There are two types of whole cell-derived vaccines: viable or inactivated.
-Viable whole-cell vaccines are based on genetic attenuation of a live strain, like VPM1002 -a recombinant BCG strain expressing a membrane-perforating listeriolysin.
This live vaccine, VPM1002, is currently in a phase III clinical trial for the prevention of TB infection in infants.
-Inactivated whole-cell vaccines are based on killed or fractionated whole mycobacteria, like MIP/Immuvac -a heat-killed Mycobacterium indicus pranii vaccine.
They are considered safer than live vaccines since they do not contain any infectious particles but they may require multiple doses.
Subunit vaccines: Subunit vaccines target selected antigens that trigger an immune response against MTB. These vaccines are more suitable for preventing active TB among infected or latently infected individuals, as they enhance prior immunity mediated by T cells. Indeed, the use of several selected antigens may not be sufficient to induce effective and full immunity against the pathogen, but it can strengthen a pre-existing immune response, induced by BCG for example [START_REF] Sable | Tuberculosis vaccine development: Progress in clinical evaluation[END_REF]. Subunit vaccines are divided into two categories:
-Adjuvanted protein subunit vaccines: they are based on the administration of antigenic proteins along with an adjuvant in order to potentiate the host immune response, like M72 /AS01E (M72 immunogenic fusion protein with AS01E as an adjuvant system).
-Recombinant viral-vectored vaccines: These vaccines are similar to protein subunit vaccines but no adjuvant addition is needed. The use of a viral vector allows a more robust immune response to the antigens. As an example, TB/FLU-04L is based on an attenuated replication-deficient influenza virus vector expressing antigens Ag85A and ESAT-6.
Other strategies are under investigation to increase the BCG vaccination efficacy. For example, BCG revaccination was evaluated in different studies but the results are variable [START_REF] Rodrigues | Effect of BCG revaccination on incidence of tuberculosis in school-aged children in Brazil: The BCG-REVAC cluster-randomised trial[END_REF][START_REF] Nemes | Prevention of M. tuberculosis Infection with H4:IC31 Vaccine or BCG Revaccination[END_REF]. The main limitation to revaccination seems to be related to the exposure to NTM or LTBI which can alter the immune response of the host and be inefficient for an enhanced protection. Moreover, several studies question the intradermal route of immunization for the BCG vaccine. A recent study performed in rhesus macaques
shows that intravenous administration of BCG vaccine clearly increases the protection against a TB infection with an enhanced immune response [START_REF] Darrah | Prevention of tuberculosis in macaques after intravenous BCG immunization[END_REF].
Some progress has been made with regard to the development of improved or new vaccines against TB. However, a number of studies still need to be conducted to produce a new licensed effective vaccine for the future. Developing efficient therapeutic treatments remains a crucial part of the fight towards the eradication of TB.
III. Treatments
Patients developing active TB can be effectively cured with an appropriate anti-TB treatment regimen. Usually, it relies on a 6-month multidrug therapy with the main objectives of killing mycobacteria, eliminating latent infection and preventing acquisition of drug resistance.
Drug-sensitive tuberculosis
In the 1980s, a four-antibiotic regimen was endorsed to treat TB, comprising:
-Rifampicin (RIF): RIF and, more generally, rifamycins (like rifapentine or rifabutine) stall bacterial transcription by inhibiting the DNA-dependent RNA polymerase [START_REF] Wehrli | Rifampin: Mechanisms of action and resistance[END_REF].
-Isoniazid (INH): INH is a prodrug activated by the mycobacterial catalase-peroxidase KatG; its active form inhibits mycolic acid synthesis and the nucleic acid biosynthesis by targeting dihydrofolate reductase [START_REF] Timmins | Mechanisms of action of isoniazid[END_REF][START_REF] Unissa | Overview on mechanisms of isoniazid action and resistance in Mycobacterium tuberculosis[END_REF].
-Pyrazinamide (PZA): PZA is a prodrug converted to pyrazinoic acid (POA) via the bacterial pyrazinamidase PncA once inside the bacteria. Several mechanisms of action have been proposed over the years. The most accepted one is that POA disrupts the membrane potential and affects the membrane transport function at acidic pH (Y. [START_REF] Zhang | Mode of action of pyrazinamide: Disruption of Mycobacterium tuberculosis membrane transport and energetics by pyrazinoic acid[END_REF]. However, the importance of the acidic pH for the antibacterial activity of POA is still under debate [START_REF] Peterson | Uncoupling environmental pH and intrabacterial acidification from pyrazinamide susceptibility in Mycobacterium tuberculosis[END_REF]. Other recent studies suggest that POA binds the ribosomal protein S1/RpsA, the guanosine pentaphosphate synthase (GpsI) or the aspartate decarboxylase PanD [START_REF] Gopal | Pharmacological and Molecular Mechanisms Behind the Sterilizing Activity of Pyrazinamide[END_REF]. PanD is responsible for the formation of β-alanine from L-aspartate, which is essential for vitamin B5 and coenzyme A biosynthesis in MTB. Liaison between PanD and POA triggers PanD degradation by MTB, thus blocking biosynthesis of the essential Coenzyme A (Q. Sun et al., 2020).
-Ethambutol (EMB): EMB affects the synthesis of the mycobacterial cell wall by blocking the polymerization of arabinogalactan via the inhibition of arabinosyl transferases, EmbA, B and C [START_REF] Belanger | The embAB genes of Mycobacterium avium encode an arabinosyl transferase involved in cell wall arabinan biosynthesis that is the target for the antimycobacterial drug ethambutol[END_REF]. It also targets the glutamate racemase (MurI), a crucial enzyme of peptidoglycan (PG) biosynthesis pathway of MTB [START_REF] Pawar | Ethambutol targets the glutamate racemase of Mycobacterium tuberculosis-an enzyme involved in peptidoglycan biosynthesis[END_REF].
This treatment is given over six months: an intensive phase of two months with the four antibiotics, followed by a continuation phase of four months using RIF and INH only. The success rate of this multidrug therapy is up to 85% for drug-susceptible TB (World Health Organization, 2020).
In 2014, the WHO published the first guidelines on LTBI management (updated in 2020) [START_REF] Organizat | WHO consolidated guidelines on tuberculosis. Module 1: Prevention. Tuberculosis preventive treatment[END_REF]. The first recommendation concerns the identification of LTBI amongst selected populations such as HIV-positive individuals, children in household contact with TB patients, people initiating anti-TNF treatment or receiving dialysis, and populations with higher risk factors (health workers, immigrants from high-burden countries, prisoners or homeless people). The risk of progression from LTBI to active TB is higher for these people, and an early anti-TB treatment would kill the bacteria before LTBI turns into active TB. Whether the patient has LTBI or active TB, such long treatment periods generate many side effects such as hepatotoxicity, skin rash, gastrointestinal intolerance, nephrotoxicity and peripheral neuropathy [START_REF] Pai | Tuberculosis[END_REF]. These side effects and the long duration of treatment usually lead to a decrease in patient compliance (the patient's adherence to treatment and willingness to follow medical advice). Thus, another component of TB treatment is directly observed therapy (DOT), introduced to increase treatment adherence among patients [START_REF] Karumbi | Directly observed therapy for treating tuberculosis[END_REF]. DOT is based on the supervision of drug intake by health professionals or, at home, by family members or community health workers. However, this measure does not solve the problem of poor adherence to TB treatment because of its inconveniences for the patient. Other techniques to promote patient compliance have been deployed, such as video-observed therapy and the use of smartphones to monitor drug intake, which have proven more effective than DOT [START_REF] Story | Smartphone-enabled video-observed versus directly observed treatment for tuberculosis: a multicentre, analyst-blinded, randomised, controlled superiority trial[END_REF]. Patient compliance is a crucial step in preventing the spread of TB and avoiding the development of drug-resistant strains, which are much more difficult to treat.
Drug-resistant tuberculosis
Resistant TB is caused by MTB that is resistant to one of the four first-line antibiotics.
Multidrug-resistant TB (MDR-TB) occurs when MTB is resistant to at least RIF and INH. In 2019, an estimated 3.3% of new cases and 18% of previously treated cases were MDR-TB or rifampicin-resistant TB (RR-TB) (World Health Organization, 2020). Most of the global cases were in India (27%), China (14%) and Russia (8%) (World Health Organization, 2020). Alarmingly, extensively drug-resistant TB (XDR-TB) has emerged. XDR-TB is MDR-TB with additional resistance to fluoroquinolones (FQ) and at least one of the injectable drugs such as kanamycin, capreomycin and amikacin [START_REF] Seung | Multidrug-resistant tuberculosis and extensively drug-resistant tuberculosis[END_REF]. Many cases have been reported in India or in sub-Saharan Africa [START_REF] Pietersen | Long-term outcomes of patients with extensively drug-resistant tuberculosis in South Africa: A cohort study[END_REF][START_REF] Prasad | Adverse drug reactions in tuberculosis and management[END_REF]. Even more worryingly, several cases of totally resistant MTB, meaning untreatable TB, have already been notified in India [START_REF] Udwadia | Totally drug-resistant tuberculosis in India[END_REF]. Treatment of MDR/XDR-TB is longer and can last up to 20 months [START_REF] Mirzayev | World health organization recommendations on the treatment of drug-resistant tuberculosis, 2020 update[END_REF]. The difficulty of treating these types of strains represents a major public health challenge for controlling the global TB epidemic. MTB acquires antibiotic resistance through mutations in chromosomal genes, for example during suboptimal or inappropriate treatment [START_REF] Seung | Multidrug-resistant tuberculosis and extensively drug-resistant tuberculosis[END_REF]. However, patients with MDR-TB can also be directly infected with an already resistant MTB strain, which is called transmitted resistance [START_REF] Leung | Transmission of multidrug-resistant and extensively drug-resistant tuberculosis in a metropolitan city[END_REF]. The mechanisms of drug resistance in MTB are modifications/alterations or overexpression of the drug targets, like the RNA polymerase for RIF, or an impairment of pro-drug activation, related to KatG mutation for INH [START_REF] Gygli | Antimicrobial resistance in Mycobacterium tuberculosis: Mechanistic and evolutionary perspectives[END_REF]. Antibiotics are also subject to efflux. Indeed, efflux pumps are upregulated during antibiotic treatment to export antimicrobial compounds from the bacterial cell. This efflux leads to the maintenance of a sublethal intracellular antibiotic concentration that contributes to the selection of more resistant mutants. Therefore, the overexpression of efflux pump genes contributes to the resistant phenotypes observed in MTB [START_REF] Laws | Efflux pumps in Mycobacterium tuberculosis and their inhibition to tackle antimicrobial resistance[END_REF].
For MDR-TB, second-line antibiotics are combined into different regimens. Some of these antibiotics were originally used to treat other infectious diseases but have proven to be active against MDR-TB. Among them:
-Aminoglycosides: amikacin (and streptomycin) are injectable drugs used to treat MDR-TB. They inhibit protein synthesis by binding to the 16S rRNA in the 30S small ribosomal subunit [START_REF] Recht | Basis for prokaryotic specificity of action of aminoglycoside antibiotics[END_REF].
-Fluoroquinolones (FQ): levofloxacin and moxifloxacin are the most valuable second line antibiotics against MDR-TB. They inhibit DNA gyrase and topoisomerase IV which are essential for bacterial DNA replication [START_REF] Pranger | The Role of Fluoroquinolones in the Treatment of Tuberculosis in 2019[END_REF].
-Carbapenems: meropenem, ertapenem and imipenem belong to a subclass of β-lactams. Their action is based on the inhibition of PG synthesis. They are not the main second-line antibiotics used, but new evidence of their efficacy has led to clinical studies and dose optimization for their inclusion in future MDR-TB treatments (P. [START_REF] Kumar | Repurposing of carbapenems for the treatment of drug-resistant tuberculosis[END_REF].
-Others drugs class: linezolid is an oxazolidinone which binds to the bacterial ribosome (the 50S subunit) and prevents protein synthesis (B. [START_REF] Singh | Linezolid for drug-resistant pulmonary tuberculosis[END_REF]; clofazimine is an antileprosy drug which supposedly acts via the release of reactive oxygen species (ROS) damaging DNA, lipids and proteins in the bacterium [START_REF] Lechartier | Mode of action of clofazimine and combination therapy with benzothiazinones against Mycobacterium tuberculosis[END_REF]; ethionamide and prothionamide are prodrugs activated by EthA which then bind to
InhA and inhibit the cell wall synthesis (F. [START_REF] Wang | Mechanism of thioamide drug action against tuberculosis and leprosy[END_REF].
In 2012, the Food and Drug Administration (FDA) approved bedaquiline (BDQ), a diarylquinoline, as a new anti-TB drug (the first in over 40 years) [START_REF] Cox | FDA Approval of Bedaquiline -The Benefit-Risk Balance for Drug-Resistant Tuberculosis[END_REF]. In 2014, the European Medicines Agency (EMA) also approved BDQ and another antibiotic called delamanid (a nitroimidazo-oxazole derivative) for pulmonary MDR-TB (Zumla et al., 2015a).
BDQ inhibits the mycobacterial F-ATP synthase, a key respiratory chain enzyme [START_REF] Hards | Bactericidal mode of action of bedaquiline[END_REF]; delamanid blocks the mycobacterial cell wall synthesis [START_REF] Xavier | Delamanid: A new armor in combating drug-resistant tuberculosis[END_REF].
As new antibiotics, they are a major part of the treatment of MDR-TB according to WHO 2020 guidelines [START_REF] Mirzayev | World health organization recommendations on the treatment of drug-resistant tuberculosis, 2020 update[END_REF].
The choice of antibiotics used for MDR-TB treatment depends on many factors: the resistance profile of the strain, previous antibiotic use, the age of the patient and the side-effect profile (World Health Organization, 2020). Each treatment needs to be personalized depending on the case. The success rate of MDR/RR-TB treatments is around 57% (World Health Organization, 2020). Antibiotics used to treat resistant TB are summarized in Table 1.
Improving current TB treatments and developing new ones remain major challenges in the effort to eradicate TB.
Newly approved drugs and the current pipeline
According to the latest global TB report in 2020, 22 drugs for TB treatment (drug-susceptible or drug-resistant) are currently in Phase I, II or III trials. Apart from BDQ and delamanid, which are still under review for new regimens/associations and new purposes, pretomanid is the latest anti-TB drug approved by the FDA, in 2019, for adults with pulmonary XDR-TB or non-responsive MDR-TB [START_REF] Deb | Pretomanid: The latest USFDA-approved anti-tuberculosis drug[END_REF]. This antibiotic is a nitroimidazo-oxazine and has the same mechanism of action as delamanid [START_REF] Silva | Shortened tuberculosis treatment regimens: What is new?[END_REF].
Shorter treatments for drug-resistant TB are also under investigation, such as the endTB trial for MDR-TB (involving BDQ or delamanid) or the Standard Treatment Regimen of Anti-Tuberculosis Drugs for Patients with MDR-TB (STREAM), following which the WHO recommended in 2016 to shorten the treatment for uncomplicated MDR-TB (without resistance to FQ) from 20 months to 9 months (A. [START_REF] Lee | Current and future treatments for tuberculosis[END_REF]. In 2020, and in accordance with WHO guidelines, BDQ-containing regimens are now the most promising regimens for MDR-TB treatments (WHO, 2020b).
In the last few years, there has been increasing interest in enhancing the host immune response as an additional strategy to improve TB treatment. In the current pipeline, many trials are examining the efficacy and safety of host-directed therapies to shorten TB treatment or prevent lung damage (Young et al., 2020). These drugs show promise when used as adjuncts to current TB treatments. This will be discussed further in this thesis.
B. Host-pathogen interaction
I. Cycle of infection
Once inhaled, MTB penetrates deep into the lungs of the new host and reaches the alveoli. There, MTB is recognized by the host immune system, which triggers an immune response and the recruitment of effector cells. This immune response leads to the formation of a granuloma, a special structure that helps control mycobacterial infection (Figure 2).
Mycobacterium tuberculosis
Human TB is mainly caused by bacteria belonging to the MTB complex (MTBC). This group contains many closely related species (> 99% nucleotide sequence identity) including MTB, M. africanum, M. canettii and the animal-adapted species M. bovis, M. microti and M. caprae [START_REF] Gagneux | Ecology and evolution of Mycobacterium tuberculosis[END_REF]. MTB belongs to the Mycobacteriaceae family, which contains the genus Mycobacterium [START_REF] Cook | Physiology of Mycobacteria[END_REF]. In 1998, the genome of MTB H37Rv (laboratory strain) was fully sequenced, allowing a deeper understanding of the biology of this mycobacterium. Its genome is about 4.4 × 10⁶ base pairs and contains approximately 4,100 genes with a high Guanine-Cytosine (GC) content of 65.6% (S. T. [START_REF] Cole | Deciphering the biology of mycobacterium tuberculosis from the complete genome sequence[END_REF]. MTB is an obligate human pathogen without animal or environmental reservoirs. It is an aerobic, facultative intracellular and slow-growing bacterium (with a generation time of 20 hours in optimal laboratory conditions).
The dominant feature of MTB is its particular cell envelope composed of proteins, abundant lipids and carbohydrates [START_REF] Cook | Physiology of Mycobacteria[END_REF]. The thick cell wall of mycobacteria has low permeability and plays a key role in intrinsic antibiotic resistance [START_REF] Maitra | Cell wall peptidoglycan in Mycobacterium tuberculosis: An Achilles' heel for the TBcausing pathogen[END_REF].
Furthermore, many components of this cell envelope are linked to the virulence of MTB such as proteins inhibiting the antimicrobial functions of macrophages or metal-transporter proteins [START_REF] Maitra | Cell wall peptidoglycan in Mycobacterium tuberculosis: An Achilles' heel for the TBcausing pathogen[END_REF]. This mycobacterial cell envelope is composed of several layers including the cytoplasmic membrane, the cell wall (composed of PG, arabinogalactan and mycolic acids), surface lipids and the capsule:
Figure 2. Cell cycle of MTB infection (copyrighted figure, not reproduced) [START_REF] Cambier | Host Evasion and Exploitation Schemes of Mycobacterium tuberculosis[END_REF].
Cytoplasmic membrane: It contains glycerophospholipids such as phosphatidyl ethanolamine, phosphatidyl inositol and phosphatidylinositol mannosides (PIM). There are also lipomannans (LMs) and lipoarabinomannans (LAMs) which are anchored in the plasma membrane and extend towards the PG. These lipids are imperative for cell wall integrity and play an important role in the interaction with the host immune response [START_REF] Fukuda | Critical Roles for Lipomannan and Lipoarabinomannan in Cell Wall Integrity of Mycobacteria and Pathogenesis of Tuberculosis[END_REF].
Cell wall: It is the middle layer of the envelope and is composed of PG, arabinogalactan and mycolic acids (MA) (Dulberger et al., 2019).
-Peptidoglycan: PG is composed of N-acetylglucosamine (NAG) and N-acetylmuramic acid (NAM) disaccharides cross-linked by peptides, which help maintain the integrity of the membrane and the cell shape.
-Arabinogalactan: The middle layer of the cell wall core is made of arabinogalactan sugars (arabinose chains linked to galactose residues). Arabinogalactan polymers are connected to PG.
-Mycolic acids: MA are long-chain fatty acids that compose the main part of the mycomembrane. They are free or attached to trehalose sugar, forming trehalose monomycolate (TMM) or trehalose dimycolate (TDM), and contribute to the impermeability of the mycobacterial envelope [START_REF] Chiaradia | Dissecting the mycobacterial cell envelope and defining the composition of the native mycomembrane[END_REF].
Surface lipid phthiocerol dimycocerosate (PDIM): PDIMs, located at the surface of the MA layer, are glycolipids crucial for MTB infection and virulence. They are associated with immune escape and attenuation of the host immune response by masking patterns recognized by the immune system [START_REF] Cambier | Mycobacteria manipulate macrophage recruitment through coordinated use of membrane lipids[END_REF].
Capsule:
The outermost structure is a capsule composed of polysaccharides and proteins. The capsule plays a role in host-pathogen interactions and virulence (Dulberger et al., 2019).
Physiopathology
The infection begins when MTB is inhaled by the host via the respiratory tract and reaches the alveolar space. The alveolar space is composed of several cell types, including epithelial pulmonary alveolar type I cells (which cover more than 95% of the alveolar surface), type II pneumocytes and immune defense cells [START_REF] Hussell | Alveolar macrophages: Plasticity in a tissue-specific context[END_REF]. In this alveolar space, MTB encounters the dominant defense cells and the first line of the immune defense: the alveolar macrophages (AMs) (S. B. [START_REF] Cohen | Alveolar Macrophages Provide an Early Mycobacterium tuberculosis Niche and Initiate Dissemination[END_REF]. AMs are distinct from interstitial macrophages, located between the airway epithelium and the blood vessels. Indeed, AMs reside in a compartment with environmental specificities such as the oxygen level, the presence of airway mucus and permanent contact with environmental antigens [START_REF] Hussell | Alveolar macrophages: Plasticity in a tissue-specific context[END_REF]. The airspace represents a specific environment which needs to be preserved from pathogens and debris, but also from structural destruction by the inflammatory response of the immune system. Therefore, AMs have attenuated inflammatory bursts, with production of anti-inflammatory mediators such as transforming growth factor-β (TGF-β) and IL-10, to avoid an excessive pro-inflammatory response in the alveoli [START_REF] Hussell | Alveolar macrophages: Plasticity in a tissue-specific context[END_REF].
Once inhaled and in the alveoli, MTB is phagocytized by AMs but also by other immune cells such as dendritic cells. More rarely, MTB enters directly into alveolar epithelial cells [START_REF] Scordo | Alveolar Epithelial Cells in Mycobacterium tuberculosis Infection: Active Players or Innocent Bystanders?[END_REF]. Subsequently, dendritic cells transport MTB (or mycobacterial antigens) to pulmonary lymph nodes for the development of the adaptive immune response [START_REF] Pai | Tuberculosis[END_REF]. It was also shown that MTB-infected AMs relocate from the airways to the pulmonary interstitium, allowing the dissemination of MTB to other immune cell types and lymph nodes (S. B. [START_REF] Cohen | Alveolar Macrophages Provide an Early Mycobacterium tuberculosis Niche and Initiate Dissemination[END_REF]. Therefore, recognition and internalization of the pathogen by this line of defense is the first step in the activation of the protective immune response needed for controlling the infection.
Granuloma formation
The migration of MTB-infected AMs initiates the host immune response, with the production of different cytokines and chemokines. This response leads to the recruitment of additional cells to the site of infection, which form a cell mass: the granuloma [START_REF] Pagán | The Formation and Function of Granulomas[END_REF]. The granuloma architecture is complex, with several layers composed of diverse cell types. At the early stage of its development, the granuloma is highly vascularized, allowing the recruitment of monocytes, macrophages, dendritic cells, neutrophils, CD4+ and CD8+ T lymphocytes and B cells [START_REF] Ramakrishnan | Revisiting the role of the granuloma in tuberculosis[END_REF]. At a later stage, the center of the granuloma becomes a necrotic, hypoxic and nutrient-deprived area known as the caseum. The latter is surrounded by different populations of macrophages, activated or not, infected or not (such as epithelioid cells and foamy macrophages with lipid droplets), lymphocytes and a peripheral fibrotic layer (synthesized by fibroblasts) (Figure 3) [START_REF] Russell | Foamy macrophages and the progression of the human tuberculosis granuloma[END_REF][START_REF] Pagán | The Formation and Function of Granulomas[END_REF].
The granuloma is an environment of exchange between the pathogen and the host. At first, its structure was thought to be necessary and beneficial for the host to control and clear the MTB infection without excessive inflammatory damage [START_REF] Saunders | Restraining mycobacteria: Role of granulomas in mycobacterial infections[END_REF]. However, further studies showed that the granuloma also contributes to MTB proliferation and dissemination. Indeed, in order to survive inside the granuloma, MTB has developed several defense strategies, mostly demonstrated in studies using zebrafish or nonhuman primate models [START_REF] Cadena | Heterogeneity in tuberculosis[END_REF]. For example, MTB secretes the 6-kDa early secreted antigenic target (ESAT-6), which induces host production of matrix metalloproteinase 9 (MMP9) by epithelial cells [START_REF] Volkman | Tuberculous granuloma induction via interaction of a bacterial secreted protein with host epithelium[END_REF]. This MMP9 production results in an enhanced recruitment of new macrophages, which may in turn be infected by MTB, contributing to the development of the granuloma and the multiplication of the bacterium [START_REF] Volkman | Tuberculous granuloma induction via interaction of a bacterial secreted protein with host epithelium[END_REF]. Furthermore, MTB influences the polarization of macrophages by increasing the IL-10 production of infected macrophages, driving the differentiation of naïve macrophages into a more permissive anti-inflammatory population [START_REF] Mcnab | Type I IFN Induces IL-10 Production in an IL-27-Independent Manner and Blocks Responsiveness to IFN-γ for Production of IL-12 and Bacterial Killing in Mycobacterium tuberculosis -Infected Macrophages[END_REF].
MTB inside the granuloma can enter a dormant state to avoid recognition and elimination (Peddireddy et al., 2017). More recently, positron emission tomography combined with computed tomography imaging (PET/CT) has proven helpful to study granuloma dynamics and to distinguish LTBI cases that are more likely to progress to active TB, in macaques [START_REF] Lin | PET CT Identifies Reactivation Risk in Cynomolgus Macaques with Latent M. tuberculosis[END_REF] and in humans [START_REF] Esmail | Characterization of progressive HIVassociated tuberculosis using 2-deoxy-2-[18F]fluoro-D-glucose positron emission and computed tomography[END_REF]. Dysregulation of the host immune system may lead to progression of the infection inside the granuloma, collapse of the caseum and release of the pathogen into the airways (from where MTB can be transmitted by a TB-infected symptomatic individual) or to other tissues (where MTB infects new host cells) [START_REF] Ramakrishnan | Revisiting the role of the granuloma in tuberculosis[END_REF].
Another study demonstrates that the fate of the granuloma also depends on the inflammatory status inside the granuloma [START_REF] Marakalala | Inflammatory signaling in human tuberculosis granulomas is spatially organized[END_REF]. Indeed, inside the caseum, leukotriene-A4 hydrolase (LTA4H) synthesizes leukotriene B4 (LTB4), a pro-inflammatory eicosanoid that is associated with the production of TNF-α [START_REF] Marakalala | Inflammatory signaling in human tuberculosis granulomas is spatially organized[END_REF]. TNF-α is a key cytokine of the immune response that triggers granuloma formation. However, an excess of TNF-α is deleterious, as it induces mitochondrial ROS (mtROS) production, increasing the microbicidal activity of macrophages, but also inducing programmed necrosis, which then increases bacterial proliferation and spreading [START_REF] Roca | TNF dually mediates resistance and susceptibility to mycobacteria via mitochondrial reactive oxygen species[END_REF].
Figure 3. Host-MTB interaction influences the granuloma fate (copyrighted figure, not reproduced) [START_REF] Cadena | Heterogeneity in tuberculosis[END_REF].
The granuloma therefore represents an extremely dynamic structure in which MTB is targeted by a wide range of bactericidal mechanisms. However, over time, MTB has developed several strategies to counteract these bactericidal mechanisms in order to proliferate within the macrophages and, further, within the granuloma.
II. Bactericidal mechanisms of macrophages
Macrophages: an overview
There are two major types of macrophages in the lung: the AM and the interstitial macrophage (IM). AMs are resident lung macrophages derived from the fetal primitive yolk sac [START_REF] Guilliams | Alveolar macrophages develop from fetal monocytes that differentiate into long-lived cells in the first week of life via GM-CSF[END_REF], whereas IMs are thought to be recruited during MTB infection (S. [START_REF] Srivastava | Beyond macrophages: The diversity of mononuclear cells in tuberculosis[END_REF].
They are commonly divided into two separate groups, M1 and M2, derived from the polarization of naïve macrophages [START_REF] Lugo-Villarino | Macrophage polarization: Convergence point targeted by Mycobacterium tuberculosis and HIV[END_REF]. M1 macrophages, also called classically activated or pro-inflammatory macrophages (stimulated by lipopolysaccharides (LPS) or interferon-γ (IFN-γ)), are characterized by high antigen presentation and a high expression of pro-inflammatory cytokines (IL-12, IL-23 and TNF-α). M2 macrophages, also called alternatively activated or anti-inflammatory macrophages, express different patterns on their surface which trigger a different immune response from M1 (low production of IL-12 and TNF-α and higher production of IL-10). M2 macrophages are divided into the subcategories M2a, M2b, M2c and M2d depending on the origin of their stimulation (IL-4 or IL-13; TLR ligands and IL-1β; glucocorticoids, IL-10 or TGF-β; IL-10 and vascular endothelial growth factors (VEGF), respectively). They express various surface markers, secrete diverse cytokines and have different biological functions (Y. [START_REF] Wang | M1 and M2 macrophage polarization and potentially therapeutic naturally occurring compounds[END_REF]. In vitro, specific cytokines or stimulating factors are likely to produce one type of macrophages. For example, granulocyte-macrophage colony-stimulating factor (GM-CSF) tends to produce M1 macrophages ("M1-like macrophages") while macrophage colony-stimulating factor (M-CSF) produces M2 macrophages ("M2-like macrophages") [START_REF] Murray | Macrophage Activation and Polarization: Nomenclature and Experimental Guidelines[END_REF]. However, this M1 and M2
polarization must be nuanced because in vivo, macrophages possess a spectrum of different functions regarding their activation and their environmental localization [START_REF] Murray | Macrophage Activation and Polarization: Nomenclature and Experimental Guidelines[END_REF].
Macrophages possess on their surface pattern recognition receptors (PRRs) which recognize conserved microbial motifs, also called pathogen-associated molecular patterns (PAMPs), including MTB components [START_REF] Hussell | Alveolar macrophages: Plasticity in a tissue-specific context[END_REF]. Among these surface PRRs, TLRs, C-type lectin receptors (CLRs) such as the mannose receptor (MR), scavenger receptors (SR) and the C-type lectin DC-specific intercellular adhesion molecule-3 grabbing nonintegrin (DC-SIGN) are known to play a role in the host-MTB interaction [START_REF] Guirado | Macrophages in tuberculosis: Friend or foe[END_REF]. For example, TLRs (notably TLR2 and TLR4) bind the 19-kDa lipoprotein, LM and other PIMs from MTB [START_REF] Faridgohar | New findings of Toll-like receptors involved in Mycobacterium tuberculosis infection[END_REF]. These recognitions trigger an immune signaling cascade within the host, starting from the myeloid differentiation primary response protein 88 (MyD88) and leading to the translocation of nuclear factor-kappa B (NF-κB), which promotes the expression of pro-inflammatory cytokines and activates the expression of genes involved in the regulation of host protection [START_REF] Hossain | Pattern recognition receptors and cytokines in Mycobacterium tuberculosis infection -The double-edged sword?[END_REF]. However, certain PRRs are also used by the pathogen to decrease the host immune response. Indeed, MR interacts with the PAMPs mannosylated LAM (ManLAM), LAM and PIMs on the MTB surface. This binding triggers an anti-inflammatory response of the macrophage by stimulating the release of anti-inflammatory cytokines, inhibiting the production of pro-inflammatory IL-12 and preventing oxidative responses, thereby enhancing MTB survival within the host [START_REF] Killick | Receptor-mediated recognition of mycobacterial pathogens[END_REF]. Also, DC-SIGN has a dual role in MTB infection. Firstly, DC-SIGN was reported to render cells more prone to infection by MTB [START_REF] Tailleux | DC-SIGN Induction in Alveolar Macrophages Defines Privileged Target Host Cells for Mycobacteria in Patients with Tuberculosis[END_REF]. However, using DC-SIGN-depleted macrophages, it has also been shown that DC-SIGN has an anti-inflammatory role upon MTB infection [START_REF] Lugo-Villarino | The C-type lectin receptor DC-SIGN has an anti-inflammatory role in Human M(IL-4) macrophages in response to Mycobacterium tuberculosis[END_REF].
Recognition is the first interaction between macrophages and MTB. Macrophages are defense immune cells with many bactericidal mechanisms used to kill various pathogens. These mechanisms will be described below. MTB defense mechanisms against them will then be outlined.
Phagosome
Pathogen recognition by the macrophage triggers several signaling pathways that result in the uptake of the pathogen. The internalization of the pathogen, also called phagocytosis, forms an intracellular vacuole delimited by a bilayer membrane named the phagosome (Elisabeth [START_REF] Tjelle | Phagosome dynamics and function[END_REF]. The latter then undergoes a maturation process and acquires bactericidal properties and clearance processes [START_REF] Rosales | Phagocytosis: A Fundamental Process in Immunity[END_REF]. The transitions between the different stages of the phagosome and the regulation of membrane fusion are mostly mediated by Rab family GTPases (Elisabeth [START_REF] Tjelle | Phagosome dynamics and function[END_REF][START_REF] Gutierrez | Functional role(s) of phagosomal Rab GTPases[END_REF].
The newly formed phagosome merges directly with the early endosome, mainly via the small GTPase Rab5. Rab5 recruits the early endosome antigen 1 (EEA1) and the class III PI-3K human vacuolar protein-sorting 34 (VPS34). VPS34 produces phosphatidylinositol 3-phosphate (PI3P), which retains EEA1 at the cytosolic leaflet of the phagosome and promotes the recruitment of proteins involved in phagosomal maturation, including Rab7 [START_REF] Christoforidis | The rab5 effector EEA1 is a core component of endosome docking[END_REF]. The pH of the early phagosome is estimated between 6.1 and 6.5, which is too high to kill bacteria [START_REF] Rosales | Phagocytosis: A Fundamental Process in Immunity[END_REF].
As the phagosome matures, Rab5 is replaced by Rab7. This Rab GTPase regulates the fusion of the early phagosome with late endocytic compartments. The transition between the two Rabs remains elusive, but several studies suggest the implication of the VPS41 subunit of the homotypic fusion and protein sorting (HOPS) complex or the SAND1/Mon1-Ccz1 complex in the recruitment of Rab7 [START_REF] Barry | Impaired stimulation of p38α-MAPK/Vps41-hops by lps from pathogenic coxiella burnetii prevents trafficking to microbicidal phagolysosomes[END_REF][START_REF] Cabrera | The mon1-ccz1 gef activates the rab7 gtpase ypt7 via a longin-fold-rab interface and association with pi3p-positive membranes[END_REF]. In the meantime, other Rabs (Rab1 and Rab2) regulate phagosome trafficking with the endoplasmic reticulum (ER), the post-Golgi and the ER-Golgi intermediate compartment [START_REF] Gutierrez | Functional role(s) of phagosomal Rab GTPases[END_REF]. Fusion with late endosomes and elements of the Golgi complex allows the incorporation of lysosomal-associated membrane proteins and luminal proteases (cathepsins, proteases and hydrolases) within the phagosome [START_REF] Kinchen | Phagosome maturation: Going through the acid test[END_REF]. Also, the lumen of the phagosome becomes more acidic (pH 5.5-6.0) via a gradual accumulation of V-ATPases on the membrane and the translocation of protons (H+) [START_REF] Kinchen | Phagosome maturation: Going through the acid test[END_REF].
The last stage of the phagosome maturation involves fusion with lysosomes to become the phagolysosome [START_REF] Nguyen | Better Together: Current Insights Into Phagosome-Lysosome Fusion[END_REF]. The phagolysosome has a very acidic lumen (pH around 5.0 -5.5). It contains hydrolytic enzymes including cathepsins, proteases, lysozymes and lipases which have their optimal efficacy at this pH, allowing bacterial clearance [START_REF] Nguyen | Better Together: Current Insights Into Phagosome-Lysosome Fusion[END_REF].
Reactive oxygen species and reactive nitrogen species
ROS are reactive species derived from molecular oxygen. They are implicated in numerous processes in immune cells, such as the regulation of immune pathways (Nathan & Cunningham-Bussel, 2013). ROS also exert an antimicrobial activity against bacteria [START_REF] Sies | Reactive oxygen species (ROS) as pleiotropic physiological signalling agents[END_REF]. They are divided into two subclasses: (i) highly reactive free radicals such as the superoxide anion (O2•-) or the hydroxyl radical (HO•) and (ii) non-radical species including H2O2 (Nathan & Cunningham-Bussel, 2013). ROS are produced in response to various signals,
including some related to the presence of a pathogen (recognition of PAMPs) or to diverse cytokines such as TNF [START_REF] Roca | TNF dually mediates resistance and susceptibility to mycobacteria via mitochondrial reactive oxygen species[END_REF]. There are two major sources of ROS inside macrophages. First, NADPH oxidases (in phagocytes, the NOX2 isoform) produce ROS in the phagosomal lumen. This enzymatic complex is composed of two membrane-bound subunits, cytosolic components and a small G protein, Rac, which together catalyze the production of superoxide from oxygen and NADPH [START_REF] Babior | NADPH oxidase[END_REF]. Second, mitochondria-derived ROS (mtROS) are produced during the generation of ATP by the mitochondrial respiratory electron transport chain (ETC). The most abundant mtROS is O2•-, which is dismutated by superoxide dismutases (SOD) into H2O2 [START_REF] Herb | Functions of ros in macrophages and antimicrobial immunity[END_REF]. Other oxidases in subcellular compartments also produce H2O2, such as the peroxisome through long-chain fatty acid oxidation, the endoplasmic reticulum through protein oxidation, or the lipoxygenases, which generate lipid-derived ROS [START_REF] Sies | Reactive oxygen species (ROS) as pleiotropic physiological signalling agents[END_REF].
The generation of ROS within the cell has to be finely regulated. Indeed, a high intracellular concentration of ROS may be detrimental for the host. ROS toxicity is related to the impairment of the structural and functional properties of proteins, through binding and damage of amino acids such as arginine, threonine and lysine [START_REF] Sies | Reactive oxygen species (ROS) as pleiotropic physiological signalling agents[END_REF]. Moreover, ROS also directly bind and damage DNA and nucleic acids through oxidation of the nucleotide pool, and trigger lipid peroxidation via oxidation of fatty acids [START_REF] Shastri | Role of oxidative stress in the pathology and management of human tuberculosis[END_REF]. However, a certain level of intracellular ROS is crucial for many cellular processes including cell proliferation, differentiation, apoptosis and defense mechanisms against pathogens. For example, individuals with chronic granulomatous disease, a genetic disease linked to a mutation in the gene encoding NOX2, lack ROS production and have an increased susceptibility to bacterial infection [START_REF] Babior | NADPH oxidase[END_REF]. Moreover, ROS mediate immune pathways, acting as regulatory agents in the immune system. For example, ROS activate the nucleotide-binding oligomerization domain (NOD)-like receptor containing pyrin domain 3 (NLRP3) inflammasome, responsible for the maturation of pro-inflammatory cytokines such as IL-1β
and IL-18 (see section B.II.c.) [START_REF] Abais | Redox Regulation of NLRP3 Inflammasomes: ROS as Trigger or Effector?[END_REF]. Finally, mtROS promote the pro-inflammatory response of infected macrophages. They induce a covalent disulfide linkage of the NF-κB essential modulator (NEMO), leading to the activation of the NF-κB pathway and, ultimately, to the secretion of pro-inflammatory cytokines [START_REF] Herb | Mitochondrial reactive oxygen species enable proinflammatory signaling through disulfide linkage of NEMO[END_REF].
Another form of reactive species is used by the macrophage against bacterial pathogens: nitric oxide (NO) and its reactive nitrogen species (RNS) derivatives. NO is produced by nitric oxide synthases (NOS). Three isoforms of this enzyme exist, including the inducible NOS (iNOS or NOS2) [START_REF] Wink | Nitric oxide and redox mechanisms in the immune response[END_REF]. Expression of iNOS is induced by TLR ligands like LPS and by inflammatory cytokines, including IFN-γ. iNOS catalyzes NO production by converting L-arginine into L-citrulline; NO is subsequently converted into RNS [START_REF] Wink | Nitric oxide and redox mechanisms in the immune response[END_REF]. NO may also be converted into other radicals such as nitrogen dioxide (NO2•), dinitrogen trioxide (N2O3) or the very reactive oxidant peroxynitrite (ONOO-) [START_REF] Wink | Nitric oxide and redox mechanisms in the immune response[END_REF]. These radicals are known to switch certain cellular pathways on and off, but also to have a direct effect on bacteria. Interestingly, NO plays different roles in human and animal macrophages.
Indeed, NO synthase-deficient mice exhibit increased susceptibility to MTB, while human macrophages generate little NO in response to inflammatory stimuli, even though MTB-infected individuals have higher levels of NO in their lungs [START_REF] Macmicking | Identification of nitric oxide synthase as a protective locus against tuberculosis[END_REF]Choi et al., 2012). The effect of NO in human macrophages is still a matter of debate. Findings differ regarding NO production in human macrophages in vivo and ex vivo [START_REF] Nathan | Role of iNOS in Human Host Defense[END_REF], and nitrites have been suggested to be produced by the bacteria themselves [START_REF] Cunningham-Bussel | Nitrite produced by Mycobacterium tuberculosis in human macrophages in physiologic oxygen impacts bacterial ATP consumption and gene expression[END_REF].
Inflammasome
Inside macrophages, recognition of PAMPs by membrane PRRs (e.g. TLRs) or cytosolic PRRs (e.g. nucleotide-binding oligomerization domain-like receptors (NOD-like receptors or NLRs)) is at the core of the immune response. Among these responses, activation of NLRs triggers the assembly of protein complexes called inflammasomes [START_REF] Lamkanfi | Mechanisms and functions of inflammasomes[END_REF].
Inflammasomes mainly activate caspase 1, resulting in the production of the pro-inflammatory molecules IL-1β and IL-18. IL-1β and IL-18 are present in the cell as inactive precursors (pro-IL-1β, not constitutively expressed, and pro-IL-18, constitutively expressed) which are then cleaved into mature cytokines (Chong & Sullivan, 2007a). IL-1β binds to its receptor IL-1R, which stimulates the release of pro-inflammatory cytokines such as IL-6 and TNF-α [START_REF] Dinarello | A clinical perspective of IL-1β as the gatekeeper of inflammation[END_REF]. Similarly, IL-18 binds to IL-18R. Engagement of IL-1R or IL-18R results in the activation of a signaling cascade and of transcription factors, such as NF-κB, all regulating the inflammation process (P. T. [START_REF] Liu | Toll-like receptor triggering of a vitamin D-mediated human antimicrobial response[END_REF]. Inflammasomes also induce a specific death of infected cells named pyroptosis. Pyroptosis is a form of cell death which limits the replication of intracellular pathogens by eliminating infected immune cells, exposing released bacteria to other phagocytic cells and releasing inflammatory content such as IL-1β and IL-18 [START_REF] Bergsbaken | Pyroptosis: Host cell death and inflammation[END_REF].
The NOD-like receptor family, pyrin domain containing 3 (NLRP3) and absent in melanoma 2 (AIM2) inflammasomes are two inflammasomes that activate caspase 1 in response to various stimuli, notably upon MTB infection [START_REF] Briken | Mycobacterium Tuberculosis genes involved in regulation of host cell death[END_REF]. NLRP3 inflammasome activation requires a priming signal, with the up-regulation of its expression (linked to TLR or NLR stimulation of the cell), followed by an activating stimulus such as ion fluxes (K+, Ca2+, Na+ and Cl-), mtROS or lysosomal content released upon membrane damage [START_REF] Kelley | The NLRP3 inflammasome: An overview of mechanisms of activation and regulation[END_REF]. AIM2 is activated following the detection of exogenous double-stranded DNA. This binding leads to a conformational change allowing the recruitment of the apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC) and caspase 1 [START_REF] Saiga | Critical role of AIM2 in Mycobacterium tuberculosis infection[END_REF]. A study shows that Aim2-/- mice have an increased susceptibility to MTB and decreased levels of IL-1β and IL-18 in the lungs after MTB infection [START_REF] Saiga | Critical role of AIM2 in Mycobacterium tuberculosis infection[END_REF]. MTB inhibition of the AIM2 inflammasome seems to be related to the ESX-1 secretion system, as an ESX-1-deficient MTB mutant fails to inhibit AIM2 inflammasome activation [START_REF] Shah | Cutting Edge: Mycobacterium tuberculosis but Not Nonvirulent Mycobacteria Inhibits IFN-β and AIM2 Inflammasome-Dependent IL-1β Production via Its ESX-1 Secretion System[END_REF]. Other
NODs are implicated in the recognition of pathogens, notably NOD2 in the case of MTB, subsequently resulting in NF-κB activation (A. K. [START_REF] Pandey | Nod2, Rip2 and Irf5 play a critical role in the type I interferon response to Mycobacterium tuberculosis[END_REF].
Autophagy
Autophagy is a mechanism of intracellular degradation by which cytoplasmic materials are delivered to the lysosome for degradation. The main form of autophagy is macroautophagy (referred to here as autophagy) (M. Y. [START_REF] Wu | Autophagy and Macrophage Functions: Inflammatory Response and Phagocytosis[END_REF]. Autophagy maintains cell homeostasis by removing defective organelles and protein aggregates, but also plays a role in cell differentiation and development. Autophagy is also essential for immunity by triggering the degradation of intracellular pathogens, a type of autophagy called xenophagy [START_REF] Mintern | Autophagy and mechanisms of effective immunity[END_REF]; M. Y. [START_REF] Wu | Autophagy and Macrophage Functions: Inflammatory Response and Phagocytosis[END_REF]. Moreover, there is an interplay between autophagy and the inflammasome. Induction of autophagy is dependent on cytokine levels.
For example, IFN-γ, TNF-α and IL-1 induce autophagy while IL-4, IL-10 and IL-13 are inhibitory [START_REF] Harris | Autophagy and cytokines[END_REF]. Autophagy also regulates the production of cytokines, including IL-1β and IL-18, via the down-regulation of inflammasomes, thus influencing inflammation inside the cells [START_REF] Saitoh | Regulation of inflammasomes by autophagy[END_REF].
Autophagy is a process involving the formation of a double membrane structure named autophagosome. This formation is a four-step process: (i) autophagosome initiation, (ii) autophagosome elongation, (iii) autophagosome closure and (iv) autophagosome fusion with lysosomes where autophagosomal contents are degraded [START_REF] Mintern | Autophagy and mechanisms of effective immunity[END_REF].
Initiation of autophagosome formation is highly regulated by signaling pathways involving more than 30 autophagy-related proteins (ATG) (M. Y. [START_REF] Wu | Autophagy and Macrophage Functions: Inflammatory Response and Phagocytosis[END_REF]. The first step is mostly controlled by the mammalian target of rapamycin (mTOR), which binds to and inhibits the serine/threonine kinase ULK1 complex; when mTOR is inhibited, the ULK1 complex is released and activates the class III phosphoinositide 3-kinase (PI3K) complex (composed notably of VPS34, Beclin-1 and ATG14L), which generates PI3P, thus marking autophagosome formation [START_REF] Ktistakis | Digesting the Expanding Mechanisms of Autophagy[END_REF].
Elongation of this structure is directed by the covalent linkage of ATG12 with ATG5/ATG16L1 and by microtubule-associated light chain 3 (LC3), also known as ATG8. LC3-I, generated by the proteolytic cleavage of pro-LC3 by ATG4, is conjugated to phosphatidylethanolamine, becoming LC3-II, through a series of reactions that involves the ATG5/ATG12/ATG16 complex, ATG7 and ATG3 [START_REF] Mizushima | Autophagy: Renovation of cells and tissues[END_REF]. After closure, the autophagosome fuses with a lysosome to form an autolysosome, where content degradation occurs. This step is mediated by a fusion machinery including the soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) family, such as syntaxins (Y. Wang et al., 2016).
Xenophagy is a selective autophagy targeting intracellular pathogens [START_REF] Bah | Macrophage autophagy and bacterial infections[END_REF]. In macrophages, xenophagy has been characterized during MTB infection [START_REF] Gutierrez | Autophagy is a defense mechanism inhibiting BCG and Mycobacterium tuberculosis survival in infected macrophages[END_REF]. MTB has the ability to escape phagosomes and to access the cytosol [START_REF] Gutierrez | Autophagy is a defense mechanism inhibiting BCG and Mycobacterium tuberculosis survival in infected macrophages[END_REF].
Mycobacterial DNA is recognized by the cytosolic sensor cGAS (cyclic GMP-AMP synthase). cGAS triggers the ubiquitination of bacteria by Parkin and Smurf1 [START_REF] Franco | The Ubiquitin Ligase Smurf1 Functions in Selective Autophagy of Mycobacterium tuberculosis and Anti-tuberculous Host Defense[END_REF], whereas galectins bound to damaged lysosomal or phagosomal membranes recruit TRIMs (S. [START_REF] Kumar | Galectins and TRIMs directly interact and orchestrate autophagic response to endomembrane damage[END_REF]. The resulting poly-ubiquitinated proteins bind to autophagy adaptors or receptors such as p62/sequestosome, neighbor of Brca1 (NBR1) and nuclear dot protein 52 kDa (NDP52). Subsequently, these adaptors recruit LC3 via their LC3-interacting motif and start the autophagy process [START_REF] Bah | Macrophage autophagy and bacterial infections[END_REF]. A study indicated that mice with monocyte-derived cells lacking Atg5 are more susceptible to MTB and inflammation, showing the role of autophagy in the defense against MTB [START_REF] Castillo | Autophagy protects against active tuberculosis by suppressing bacterial burden and inflammation[END_REF].
More generally, cGAS is activated and catalyzes the production of cyclic GMP-AMP, leading to the activation of the downstream sensor stimulator of IFN genes (STING) [START_REF] Ablasser | CGAS in action: Expanding roles in immunity and inflammation[END_REF]. Upon MTB infection, this process activates the TANK-binding kinase 1 (TBK1) and triggers the STING/TBK-1/IRF3 signaling pathway, leading to the secretion of type I IFN (with IFN-β production), activation of NF-κB, and production of various Th1 cytokines [START_REF] Abe | Cytosolic-DNA-Mediated, STING-Dependent Proinflammatory Gene Induction Necessitates Canonical NF-B Activation through TBK1[END_REF][START_REF] Watson | The cytosolic sensor cGAS detects Mycobacterium tuberculosis DNA to induce type I interferons and activate autophagy[END_REF]. Interestingly, a recent study shows that cGAS-/- and STING-/- mice have a bacterial burden in the lung and inflammation levels comparable to those of WT mice upon MTB infection, suggesting that other antimycobacterial immune responses operate in vivo [START_REF] Marinho | The cGAS/STING Pathway Is Important for Dendritic Cell Activation but Is Not Essential to Induce Protective Immunity against Mycobacterium tuberculosis Infection[END_REF].
Apoptosis
Different forms of cell death are observed in MTB-infected macrophages. The best characterized are (i) necrosis, commonly described as uncontrolled cell lysis, and (ii) apoptosis, a programmed and structured death initiated by the cell [START_REF] Moraco | Cell death and autophagy in tuberculosis[END_REF].
Apoptosis is an important macrophage defense mechanism against pathogens. During apoptosis, the plasma membrane remains intact, allowing containment and control of the infection. It also provides an important source of bacterial antigens that stimulate other immune cells (J. [START_REF] Lee | Macrophage apoptosis in tuberculosis[END_REF]. Apoptosis of MTB-infected macrophages can be induced in different ways:
-The extrinsic pathway is activated by the binding of TNF-α to TNF receptor 1 (TNFR1) and of Fas ligand (FasL) to Fas. These bindings lead to the formation of a death-inducing signaling complex (DISC) through recruitment of the death domain, which ultimately activates caspases 8 and 10 (Behar et al., 2011).
-The intrinsic pathway is induced by intracellular stresses such as DNA damage, oxidative stress or nutrient deprivation. These stresses lead to the permeabilization of the outer mitochondrial membrane and the release of cytochrome c. The latter binds to the cytosolic apoptotic protease-activating factor (APAF1), forming the apoptosome. This complex recruits and activates caspase 9 (J. [START_REF] Lee | Macrophage apoptosis in tuberculosis[END_REF].
The cleavage/activation of these initiator caspases in turn activates the executioner caspases, leading to cell shrinkage and, ultimately, to the death of the cell [START_REF] Shi | Mechanisms of caspase activation and inhibition during apoptosis[END_REF].
Apoptosis is also induced by arachidonic acid (AA), which is the precursor of eicosanoid lipid mediators such as prostaglandin E2 (PGE2). AA is a component of the cell membrane and is obtained by the action of phospholipases on cell membrane phospholipids. Cyclooxygenases (COX 1 and 2) synthesize prostaglandin H from AA, which can be further converted into PGE2 [START_REF] Oliveira | Antimicrobial peptides as potential anti-tubercular leads: A concise review[END_REF][START_REF] Rocca | Cyclooxygenases and prostaglandins: Shaping up the immune response[END_REF]. PGE2 acts as an anti-necrosis and pro-apoptotic mediator by initiating the protection of the mitochondria and stimulating plasma membrane repair after bacteria-induced damage (via induction of synaptotagmin-7 (Syt-7)
which facilitates the fusion of the plasma membrane with lysosomal vesicles) [START_REF] Behar | Apoptosis is an innate defense function of macrophages against Mycobacterium tuberculosis[END_REF]Divangahi et al., 2009a).
Antimicrobial defenses
Antimicrobial peptides (AMPs) are small structures (50-60 amino acid residues) with different properties including antimicrobial and immunomodulatory activities [START_REF] Oliveira | Antimicrobial peptides as potential anti-tubercular leads: A concise review[END_REF].
Several AMPs have been described to play a key role regarding MTB infection such as cathelicidins, defensins, lactoferrins and hepcidins.
Cathelicidins: Cathelicidins are a family of mammalian AMPs with only one representative in humans: LL-37/hCAP-18, for human cationic antimicrobial peptide-18 [START_REF] Rivas-Santiago | Expression of cathelicidin LL-37 during Mycobacterium tuberculosis infection in human alveolar macrophages, monocytes, neutrophils, and epithelial cells[END_REF]. This peptide is produced by immune cells, including neutrophils and macrophages, in response to MTB infection. LL-37 is expressed after stimulation of TLR2 by mycobacterial peptides and upregulation of the vitamin D3 receptor [START_REF] Rivas-Santiago | Expression of cathelicidin LL-37 during Mycobacterium tuberculosis infection in human alveolar macrophages, monocytes, neutrophils, and epithelial cells[END_REF]. A study demonstrated that administration of phenylbutyrate and vitamin D3 increases the production of LL-37, resulting in a decrease of the bacterial load [START_REF] Mily | Significant effects of oral phenylbutyrate and Vitamin D3 adjunctive therapy in pulmonary tuberculosis: A randomized controlled trial[END_REF]. Besides its own antibacterial activity, LL-37 also modulates the expression of cytokines in macrophages.
Another study shows that cathelicidin induces the expression of anti-inflammatory IL-10 and TGF-β to balance the inflammatory response without reducing the antimycobacterial activity of the macrophage [START_REF] Torres-Juarez | LL-37 immunomodulatory activity during Mycobacterium tuberculosis infection in macrophages[END_REF].
Defensins: Defensins are small cationic peptides of 29-35 amino acid residues [START_REF] Shin | Antimicrobial Peptides in Innate Immunity against Mycobacteria[END_REF]. They are mainly expressed in epithelial cells and neutrophils, but their expression is also detected in monocytes and macrophages [START_REF] Duits | Expression of β-defensin 1 and 2 mRNA by human monocytes, macrophages and dendritic cells[END_REF]Kisich et al., 2001a). Defensins are produced following the recognition of PAMPs by TLRs [START_REF] Shin | Antimicrobial Peptides in Innate Immunity against Mycobacteria[END_REF]. They cause direct bacterial cell membrane disruption but also chemo-attract macrophages to the site of infection (RI et al., 1993).
Lactoferrin: Lactoferrin is an iron-binding glycoprotein which belongs to the transferrin family.
It is present in many tissues and is related to iron homeostasis [START_REF] Arranz-Trullén | Host antimicrobial peptides: The promise of new treatment strategies against tuberculosis[END_REF]. A study shows the ability of lactoferrin to decrease the MTB bacterial load in mice [START_REF] Welsh | Influence of oral lactoferrin on Mycobacterium tuberculosis induced immunopathology[END_REF]. It also enhances pro-inflammatory responses in bone marrow-derived monocytes and J774A.1 cells in response to mycobacterial infection, notably by increasing the ratio of IL-12:IL-10 cytokines [START_REF] Hwang | Lactoferrin modulation of IL-12 and IL-10 response from activated murine leukocytes[END_REF].
Hepcidin: Hepcidin is synthesized in hepatocytes following an infectious or inflammatory process [START_REF] Oliveira | Antimicrobial peptides as potential anti-tubercular leads: A concise review[END_REF]. This AMP inhibits iron absorption and macrophage iron release via the degradation of the iron export protein ferroportin 1 (Zumla et al., 2015b). As iron is an essential nutrient for all living organisms, inhibition of iron export by macrophages creates a deleterious environment for the bacteria, thus inhibiting MTB growth inside phagosomes (Sow et al., 2007). However, another recent study shows that hepcidin deficiency in mice does not alter MTB growth in vivo in the TB mouse model, questioning the role of this AMP in MTB infection [START_REF] Harrington-Kandt | Hepcidin deficiency and iron deficiency do not alter tuberculosis susceptibility in a murine M. tb infection model[END_REF].
Nutrient modulation: the example of metals
During infection, a dynamic interaction between host and pathogen is established regarding nutrients. In order to proliferate, intracellular pathogenic bacteria need to exploit host nutrients and metabolites. As a consequence, the host immune system has developed strategies to limit nutrient availability with the aim of starving the pathogen [START_REF] Núñez | Innate Nutritional Immunity[END_REF]. This process of nutrient sequestration is named nutritional immunity [START_REF] Healy | Nutritional immunity: the impact of metals on lung immune cells and the airway microbiome during chronic respiratory disease[END_REF]. The term nutritional immunity most widely refers to the sequestration of metallic nutrients such as iron, zinc or copper [START_REF] Healy | Nutritional immunity: the impact of metals on lung immune cells and the airway microbiome during chronic respiratory disease[END_REF]. Metals are essential in various biological processes, for both cells and bacteria [START_REF] Chandrangsu | Metal homeostasis and resistance in bacteria[END_REF]. Hence, nutritional immunity, especially regarding iron and zinc, is a key defense mechanism of immune cells against bacteria.
a. Iron
Tissue-resident macrophages such as AMs modulate iron availability in their environment by sequestering and/or releasing iron [START_REF] Winn | Regulation of tissue iron homeostasis: The macrophage "ferrostat[END_REF]. Although essential for many reactions, free iron is potentially toxic because it generates reactive hydroxyl radicals. In this context, iron is stored inside macrophages in order to be mobilized during infection [START_REF] Winn | Regulation of tissue iron homeostasis: The macrophage "ferrostat[END_REF].
Uptake of iron by macrophages is mediated through the iron transporters transferrin receptor and divalent metal transporter 1. Once inside the cell, iron is stored in the iron storage proteins ferritin or hemosiderin [START_REF] Healy | Nutritional immunity: the impact of metals on lung immune cells and the airway microbiome during chronic respiratory disease[END_REF]. The transmembrane protein ferroportin (FPN) exports iron from the cell into the extracellular space [START_REF] Healy | Nutritional immunity: the impact of metals on lung immune cells and the airway microbiome during chronic respiratory disease[END_REF].
Macrophages use several strategies to limit the access of pathogens to iron. In reaction to TLR stimulation by BCG, macrophages promote intracellular iron sequestration through hepcidin upregulation and FPN downregulation [START_REF] Abreu | Role of the hepcidin-ferroportin axis in pathogenmediated intracellular iron sequestration in human phagocytic cells[END_REF]. FPN is also located in the phagosomal membrane, which can be beneficial for the phagocytized pathogens. As a result, FPN is rapidly removed from early phagosomes and traffics back to the cell membrane, thus depriving the intracellular bacteria of iron [START_REF] Flannagan | Rapid removal of phagosomal ferroportin in macrophages contributes to nutritional immunity[END_REF]. Another study, performed on the phagosomes of peritoneal macrophages of C57BL/6 mice infected with M. avium and MTB, shows that the iron concentration inside the phagosome is significantly higher when bacteria are present (Wagner et al., 2005a). In this case, iron is delivered to the phagosome through the transferrin receptor, which binds the iron-transferrin complex (Wagner et al., 2005a). However, IFN-γ downregulates this transferrin receptor, thus limiting the iron pool inside the phagosome (Wagner et al., 2005a).
During infection, the natural resistance-associated macrophage protein-1 (NRAMP1 or Slc11a1) is recruited to the phagosomal membrane. Several studies demonstrate the role of this protein in host protection mechanisms against mycobacteria [START_REF] Barton | Nramp 1: A link between intracellular iron transport and innate resistance to intracellular pathogens[END_REF]; M. S. [START_REF] Gomes | NRAMP1-or cytokine-induced bacteriostasis of Mycobacterium avium by mouse macrophages is independent of the respiratory burst[END_REF]. For example, NRAMP1 polymorphisms are associated with increased susceptibility to TB in humans (Bellamy et al., 2009). Different modes of action have been proposed regarding its activity. First, NRAMP1 acts as a divalent metal efflux pump at the phagosomal membrane, depriving pathogens present in the phagosome of iron [START_REF] Forbes | Divalent-metal transport by NRAMP proteins at the interface of host-pathogen interactions[END_REF]. Another study suggests that NRAMP1 allows the influx of iron inside the phagosome, generating hydroxyl radicals via the Fenton/Haber-Weiss reaction, which are deleterious for the bacteria [START_REF] Lafuse | Regulation of Nramp1 mRNA stability by oxidants and protein kinase C in RAW264.7 macrophages expressing Nramp1(Gly169)[END_REF]. NRAMP1 also modulates NO production by increasing the expression of iNOS in the mouse RAW264.7 cell line [START_REF] Fritsche | Nramp1 Functionality Increases Inducible Nitric Oxide Synthase Transcription Via Stimulation of IFN Regulatory Factor 1 Expression[END_REF]. However, the role of NRAMP1 in TB infection might be limited, as deletion of NRAMP1 in MTB-infected mice does not result in a higher bacterial burden [START_REF] North | Consequence of Nramp1 deletion to Mycobacterium tuberculosis infection in mice[END_REF].
Other iron-related mechanisms take place during bacterial infection. Macrophages produce siderophore-binding proteins such as lipocalin-2 (LCN-2), which bind bacterial siderophores (low-molecular-weight iron chelators), preventing the bacteria from acquiring iron [START_REF] Flo | Lipocalin 2 mediates an innate immune response to bacterial infection by sequestrating iron[END_REF][START_REF] Dahl | Lipocalin-2 functions as inhibitor of innate resistance to mycobacterium tuberculosis[END_REF].
AMs utilize heme (iron coupled with protoporphyrin IX) to produce ROS and NO, generating a bactericidal effect (Müllebner et al., 2018). Iron is an important nutrient for the bacteria and its control can be used as a major defense mechanism by the cell.
b. Zinc
Besides iron, zinc is another metal essential for cell homeostasis and bacterial defenses. The zinc level inside the cell is mainly regulated by two families of transporters: the SLC39A importers (ZIP family) and the SLC30A exporters (ZnTs) [START_REF] Kambe | Current understanding of ZIP and ZnT zinc transporters in human health and diseases[END_REF]. Macrophages have several strategies related to zinc mobilization for bactericidal activity: they can either sequester zinc to starve the pathogen or use zinc to intoxicate it [START_REF] Neyrolles | Zinc and copper toxicity in host defense against pathogens: Mycobacterium tuberculosis as a model example of an emerging paradigm[END_REF]. Zinc starvation by the host is mediated by the S100 superfamily of Ca2+-binding proteins, including the S100A8/S100A9 heterotetramer named calprotectin (CP) (J. Z. [START_REF] Liu | Zinc sequestration by the neutrophil protein calprotectin enhances salmonella growth in the inflamed gut[END_REF]. Moreover, the pro-inflammatory GM-CSF stimulates zinc import into the cytosol via ZIP2 during fungal infection (Vignesh et al., 2013a). Within the cell, zinc is sequestered with high affinity by metallothioneins (MT), thus limiting zinc availability to intracellular pathogens (Baltaci et al., 2017). Cytosolic zinc is exported to the Golgi via the zinc exporters ZnT4 and ZnT7 [START_REF] Vignesh | Zinc Sequestration: Arming Phagocyte Defense against Fungal Attack[END_REF]. The zinc sequestration strategy also increases phagosomal H+ channel function and ROS generation by NOX [START_REF] Gammoh | Zinc in infection and inflammation[END_REF].
Similarly to iron, a high zinc concentration is toxic for living organisms. Therefore, an excess of zinc inside the macrophage is another efficient strategy to fight intracellular pathogens.
Mycobacterial infection causes a burst of free zinc inside the cell [START_REF] Botella | Mycobacterial P 1-Type ATPases mediate resistance to Zinc poisoning in human macrophages[END_REF]. Indeed, following MTB infection, zinc has been observed to be liberated from MT upon oxidation via NOXs [START_REF] Maret | The function of zinc metallothionein: A link between cellular zinc and redox state[END_REF], but also from intracellular zinc-storing vesicles called zincosomes [START_REF] Gammoh | Zinc in infection and inflammation[END_REF] (Niederweis et al., 2015). Moreover, during BCG infection, an increase in extracellular zinc uptake occurs via ZIP8, suggesting zinc import for intoxication (Begum et al., 2002). Recently, a study showed that early accumulation of zinc inside Escherichia coli-containing phagosomes occurs via the SLC30A1/ZnT1 transporter [START_REF] Neyrolles | Antimicrobial zinc toxicity in Mϕs: ZnT1 pays the toll[END_REF][START_REF] Stocks | Frontline Science: LPS-inducible SLC30A1 drives human macrophage-mediated zinc toxicity against intracellular Escherichia coli[END_REF]. Yet, Slc30a1 gene silencing does not impair bacterial clearance (as shown for E. coli), indicating that this mechanism probably relies on additional pathways.
Nutrient modulation by the host is one of the many strategies used against pathogens, and more specifically against MTB. However, MTB, as a well-adapted intracellular pathogen, has developed efficient mechanisms over time to counteract these host defenses, in order to survive and proliferate. The main strategies employed by MTB to fight against the host defenses are detailed below.
III. Mycobacterium tuberculosis defenses against the host
1. Phagosomal resistance
Once inside the host, MTB is phagocytosed by macrophages and trapped inside a phagosome. As the phagosome is a deleterious environment for the bacterium, MTB has developed strategies to avoid degradation, such as arresting phagosome maturation and inhibiting phagosomal acidification.
Arrest of phagosomal maturation: the MTB-containing phagosome does not display Rab7 at its surface [START_REF] Chandra | Mycobacterium tuberculosis Inhibits RAB7 Recruitment to Selectively Modulate Autophagy Flux in Macrophages[END_REF]. The absence of this marker indicates that maturation is blocked at the early phagosome stage. Hence, MTB prevents Rab7 acquisition by altering EEA1 recruitment and the subsequent activation of the PI3K VPS34 cascade that leads to PI3P production [START_REF] Fratti | Role of phosphatidylinositol 3-kinase and Rab5 effectors in phagosomal biogenesis and mycobacterial phagosome maturation arrest[END_REF]. The lack of EEA1 recruitment is related to the Ca2+-binding protein calmodulin and the cytosolic Ca2+ level [START_REF] Vergne | Tuberculosis toxin blocking phagosome maturation inhibits a novel Ca 2+/calmodulin-PI3K hVPS34 cascade[END_REF]. Vergne et al. reported that LAM exposed at the bacterial surface, more specifically ManLAM for MTB, blocks the activation of the Ca2+/calmodulin pathway [START_REF] Vergne | Tuberculosis toxin blocking phagosome maturation inhibits a novel Ca 2+/calmodulin-PI3K hVPS34 cascade[END_REF][START_REF] Vergne | Manipulation of the endocytic pathway and phagocyte functions by Mycobacterium tuberculosis lipoarabinomannan[END_REF]. Moreover, LprG binding to LAM is essential for increasing the surface expression of LAM, thereby conferring on MTB the ability to arrest phagosomal maturation [START_REF] Shukla | Mycobacterium tuberculosis Lipoprotein LprG Binds Lipoarabinomannan and Determines Its Cell Envelope Localization to Control Phagolysosomal Fusion[END_REF].
Another host protein named coronin 1 (or TACO) has been linked to the Ca 2+ / calcineurin inhibition pathway (Jayachandran et al., 2007). Indeed, the MTB-containing phagosome recruits TACO to block lysosomal delivery of the phagosome (Jayachandran et al., 2007b).
Moreover, MTB has other proteins that inhibit phagosome maturation such as secretory acid phosphatase (SapM). SapM is involved in the maturation arrest of phagosomes through the PI3P pathway [START_REF] Puri | Secreted Acid Phosphatase (SapM) of Mycobacterium tuberculosis Is Indispensable for Arresting Phagosomal Maturation and Growth of the Pathogen in Guinea Pig Tissues[END_REF]. SapM dephosphorylates PI3P thereby inhibiting its activity as a membrane trafficking regulator of lysosomes [START_REF] Fernandez-Soto | Mechanism of catalysis and inhibition of Mycobacterium tuberculosis SapM, implications for the development of novel antivirulence drugs[END_REF][START_REF] Puri | Secreted Acid Phosphatase (SapM) of Mycobacterium tuberculosis Is Indispensable for Arresting Phagosomal Maturation and Growth of the Pathogen in Guinea Pig Tissues[END_REF].
Mycobacterial protein kinase G (PknG) is a serine/threonine kinase known to block phagosome-lysosome fusion through its activity on host Rab7l1 (it blocks the transition of inactive Rab7l1-GDP to active Rab7l1-GTP, thereby preventing its recruitment to the phagosome) [START_REF] Pradhan | Mycobacterial PknG Targets the Rab7l1 Signaling Pathway To Inhibit Phagosome-Lysosome Fusion[END_REF]. A recent study also demonstrated that the putative lipoprotein LppM is involved in blocking phagosomal maturation [START_REF] Deboosère | LppM impact on the colonization of macrophages by Mycobacterium tuberculosis[END_REF].
Inhibition of the acidification: along with phagosomal maturation arrest, intracellular MTB has developed different strategies to counteract phagosomal acidification. MTB secretes the protein tyrosine phosphatase PtpA, which binds to subunit H of the macrophage V-ATPase (D. [START_REF] Wong | Protein tyrosine kinase, PtkA, is required for Mycobacterium tuberculosis growth in macrophages[END_REF]. This binding inhibits V-ATPase trafficking to the mycobacterial phagosome and thus acidification (P. [START_REF] Zhou | Phosphorylation control of protein tyrosine phosphatase A activity in Mycobacterium tuberculosis[END_REF]. Moreover, MTB produces 1-tuberculosinyladenosine (1-TbAd), which accumulates in host acidic compartments, neutralizes the pH and swells lysosomes, leading to lysosomal and phagosomal dysfunction [START_REF] Buter | Mycobacterium tuberculosis releases an antacid that remodels phagosomes[END_REF]. Additionally, the interaction of bacterial trehalose-6,6'-dimycolate (TDM) with the monocyte-inducible C-type lectin receptor (Mincle) delays phagosomal maturation and acidification [START_REF] Patin | Trehalose dimycolate interferes with FcγRmediated phagosome maturation through Mincle, SHP-1 and FcγRIIB signalling[END_REF]. Another mechanism by which MTB arrests acidic maturation is depletion of the V-ATPase by the cytokine-inducible SH2-containing protein (CISH) (Queval et al., 2017a). MTB-infected macrophages produce GM-CSF, which triggers the expression of CISH via nuclear STAT5. CISH targets the V-ATPase catalytic subunit A (ATP6V1A) for ubiquitination and subsequent proteasomal degradation (Queval et al., 2017a).
MTB interacts with the host to prevent maturation and acidification of the phagosome.
However, the pathogen itself possesses strategies to resist acidic environmental conditions.
First of all, the composition of the MTB cell wall acts as a natural barrier [START_REF] Vandal | Acid resistance in Mycobacterium tuberculosis[END_REF].
Moreover, certain proteins are overexpressed under low pH conditions, such as the functional serine protease Rv3671c, which protects MTB from acidification [START_REF] Biswas | Structural insight into serine protease Rv3671c that protects M. tuberculosis from oxidative and acidic stress[END_REF], or the pore-forming outer membrane protein OmpATb, which is required for MTB survival at low pH [START_REF] Raynaud | The functions of OmpATb, a pore-forming protein of Mycobacterium tuberculosis[END_REF].
Phagosomal escape
Inhibition of phagosomal maturation is an important defense used by MTB to survive inside the cell. However, evidence suggests that MTB, contrary to BCG, is also capable of escaping the phagosome by translocating into the cytosol [START_REF] Vanderven | The Minimal Unit of Infection: Mycobacterium tuberculosis in the Macrophage[END_REF]. Several studies show that the ESAT-6 secretion system-1 (ESX-1), a type VII secretion system, is required for the phagosomal rupture process [START_REF] Houben | ESX-1-mediated translocation to the cytosol controls virulence of mycobacteria[END_REF][START_REF] Simeone | Phagosomal rupture by Mycobacterium tuberculosis results in toxicity and host cell death[END_REF]. ESX-1 is encoded by the genetic locus region of difference 1 (RD1), which is absent from the human vaccine strain BCG [START_REF] Pym | Loss of RD1 contributed to the attenuation of the live tuberculosis vaccines Mycobacterium bovis BCG and Mycobacterium microti[END_REF]. This region comprises genes coding for the ESAT-6 and CFP-10 proteins, also known as EsxA and EsxB, respectively [START_REF] Brodin | Functional Analysis of Early Secreted Antigenic Target-6, the Dominant T-cell Antigen of Mycobacterium tuberculosis, Reveals Key Residues Involved in Secretion, Complex Formation, Virulence, and Immunogenicity[END_REF]; K.-W. [START_REF] Wong | The Role of ESX-1 in Mycobacterium tuberculosis Pathogenesis[END_REF].
It has been suggested that once EsxA is released by its putative chaperone EsxB [START_REF] De Jonge | ESAT-6 from Mycobacterium tuberculosis dissociates from its putative chaperone CFP-10 under acidic conditions and exhibits membrane-lysing activity[END_REF], it inserts into the host cell membrane and mediates membrane lysis through pore-forming activity [START_REF] Peng | Characterization of differential pore-forming activities of ESAT-6 proteins from Mycobacterium tuberculosis and Mycobacterium smegmatis[END_REF][START_REF] Smith | Evidence for pore formation in host cell membranes by ESX-1-secreted ESAT-6 and its role in Mycobacterium marinum escape from the vacuole[END_REF]. The lytic activity of EsxA was described to depend on the lipid composition of the membrane and its fluidity [START_REF] Ray | Effects of membrane lipid composition on Mycobacterium tuberculosis EsxA membrane insertion: A dual play of fluidity and charge[END_REF]. However, a recent study suggests that EsxA alone does not lyse the host cell membrane and that the observed phagosomal lysis was due to residual detergent used in the purification protocol [START_REF] Conrad | Mycobacterial ESX-1 secretion system mediates host cell lysis through bacterium contact-dependent gross membrane disruptions[END_REF]. Nevertheless, another team demonstrated that EsxA still shows membranolytic activity when purified with a different protocol [START_REF] Augenstreich | Phthiocerol Dimycocerosates From Mycobacterium tuberculosis Increase the Membrane Activity of Bacterial Effectors and Host Receptors[END_REF]. Along with ESX-1, the mycobacterial lipids phthiocerol dimycocerosates (PDIM, also abbreviated DIM) are also implicated in phagosomal rupture [START_REF] Barczak | Systematic, multiparametric analysis of Mycobacterium tuberculosis intracellular infection offers insight into coordinated virulence[END_REF][START_REF] Quigley | The cell wall lipid PDIM contributes to phagosomal escape and host cell exit of Mycobacterium tuberculosis[END_REF].
PDIM was previously suggested to prevent phagosomal maturation and acidification [START_REF] Astarie-Dequeker | Phthiocerol dimycocerosates of M. tuberculosis participate in macrophage invasion by inducing changes in the organization of plasma membrane lipids[END_REF]. Recently, it has been described that PDIM-attenuated mutants show reduced virulence and fail to promote host cell death or to induce type I IFN (associated with phagosomal damage) [START_REF] Augenstreich | ESX-1 and phthiocerol dimycocerosates of Mycobacterium tuberculosis act in concert to cause phagosomal rupture and host cell apoptosis[END_REF]. Moreover, MTB escape is increased in the presence of PDIM [START_REF] Augenstreich | Phthiocerol Dimycocerosates From Mycobacterium tuberculosis Increase the Membrane Activity of Bacterial Effectors and Host Receptors[END_REF][START_REF] Quigley | The cell wall lipid PDIM contributes to phagosomal escape and host cell exit of Mycobacterium tuberculosis[END_REF]. However, the host cell has developed mechanisms to repair such phagosomal damage. For example, the endosomal sorting complex required for transport (ESCRT) machinery is recruited directly to the damaged areas of MTB-containing phagosomes to reseal the membrane [START_REF] Jimenez | ESCRT Machinery Is Required for Plasma Membrane Repair[END_REF]. This recruitment occurs in an ESX-1-dependent manner [START_REF] Mittal | Mycobacterium tuberculosis type VII secretion system effectors differentially impact the ESCRT endomembrane damage response[END_REF].
Once in the cytosol, MTB faces host cytosolic defenses such as induction of autophagy.
Damaged phagosomes expose glycans at their surface that are then detected by galectins.
With the rupture of the phagosome, extracellular mycobacterial DNA is found in the host cell cytosol (Manzanillo et al., 2012). This cytosolic mycobacterial DNA is detected by the DNA sensors cGAS and AIM2, but no activation of the AIM2 inflammasome is reported during MTB infection (Ontiveros et al., 2019). AIM2 inhibition is presumably linked to the ESX-1 secretion system, because an ESX-1-deficient mutant fails to inhibit AIM2 inflammasome activation [START_REF] Shah | Cutting Edge: Mycobacterium tuberculosis but Not Nonvirulent Mycobacteria Inhibits IFN-β and AIM2 Inflammasome-Dependent IL-1β Production via Its ESX-1 Secretion System[END_REF]. The authors proposed that ESX-1 secretes an unknown effector that inhibits AIM2 activation [START_REF] Shah | Cutting Edge: Mycobacterium tuberculosis but Not Nonvirulent Mycobacteria Inhibits IFN-β and AIM2 Inflammasome-Dependent IL-1β Production via Its ESX-1 Secretion System[END_REF]. This inhibition results in a decrease in IL-1β secretion, which has previously been shown to promote MTB infection in mice [START_REF] Novikov | Mycobacterium tuberculosis Triggers Host Type I IFN Signaling To Regulate IL-1β Production in Human Macrophages[END_REF][START_REF] Novikov | Mycobacterium tuberculosis Triggers Host Type I IFN Signaling To Regulate IL-1β Production in Human Macrophages[END_REF]. Activation of the cGAS/STING pathway has also been related to ESX-1: BCG, which lacks the RD1 region, does not activate the STING pathway, whereas recombinant BCG expressing the ESX-1 type VII secretion system of M. marinum ruptures the phagosome and therefore induces the cGAS/STING/TBK1/IRF-3/type I interferon axis [START_REF] Gröschel | Recombinant BCG Expressing ESX-1 of Mycobacterium marinum Combines Low Virulence with Cytosolic Immune Signaling and Improved TB Protection[END_REF].
Isolates from patients with severe TB were shown to induce a lower cytokine response than isolates from moderate forms of TB [START_REF] Sousa | Mycobacterium tuberculosis associated with severe tuberculosis evades cytosolic surveillance systems and modulates IL-1β production[END_REF]. Interestingly, this reduction in IL-1β has been linked to the ability of these strains to escape cGAS and inflammasome sensing. These strains were described as carrying mutations in the ESX-1 secretion system, which consequently restrict phagosomal rupture and induce a lower release of IL-1β [START_REF] Sousa | Mycobacterium tuberculosis associated with severe tuberculosis evades cytosolic surveillance systems and modulates IL-1β production[END_REF].
Moreover, cytosolic mycobacterial DNA, detected by cGAS, triggers autophagy [START_REF] Franco | The Ubiquitin Ligase Smurf1 Functions in Selective Autophagy of Mycobacterium tuberculosis and Anti-tuberculous Host Defense[END_REF][START_REF] Kumar | Galectins and TRIMs directly interact and orchestrate autophagic response to endomembrane damage[END_REF][START_REF] Thurston | Galectin 8 targets damaged vesicles for autophagy to defend cells against bacterial invasion[END_REF]. Since autophagy, like phagosomal maturation, depends on fusion with lysosomes, MTB also regulates autophagic flux, for example by inhibiting Rab7 recruitment [START_REF] Chandra | Mycobacterium tuberculosis Inhibits RAB7 Recruitment to Selectively Modulate Autophagy Flux in Macrophages[END_REF]. MTB also inhibits autophagy via induction of microRNA-33 (miR-33) [START_REF] Ouimet | Mycobacterium tuberculosis induces the MIR-33 locus to reprogram autophagy and host lipid metabolism[END_REF]. miRNAs are small non-coding RNAs that bind to the 3ʹ untranslated region (3ʹ UTR) of mRNAs, inhibiting their translation or promoting their degradation [START_REF] O'brien | Overview of microRNA biogenesis, mechanisms of actions, and circulation[END_REF]. The presence of miR-33 notably decreases key effectors of the autophagy pathway such as ATG5, ATG12, LAMP1 and LC3, and enhances pathogen survival [START_REF] Ouimet | Mycobacterium tuberculosis induces the MIR-33 locus to reprogram autophagy and host lipid metabolism[END_REF].
Reactive oxygen and nitrogen species resistance
MTB possesses several proteins and regulatory systems to limit the damage caused by ROS production inside the cell, including:
KatG: KatG is a catalase-peroxidase produced by MTB in response to oxidative stress for bacterial protection. KatG detoxifies ROS generated by the macrophage by catalyzing the decomposition of H2O2 into O2 and H2O. A katG knock-out MTB strain survives less well than the wild type in a mouse infection model, suggesting that KatG is an important virulence factor in a host capable of a NOX oxidative burst [START_REF] Ng | Role of KatG catalase-peroxidase in mycobacterial pathogenisis: Countering the phagocyte oxidative burst[END_REF].
Superoxide dismutase (SOD): SOD is a superoxide radical scavenger essential for MTB. It detoxifies the superoxide radical O2•− into O2 and H2O2 and reduces toxic NO through the inhibition of iNOS activity [START_REF] Liao | The role of superoxide dismutase in the survival of Mycobacterium tuberculosis in macrophages[END_REF]. MTB produces two SODs: an iron-dependent enzyme (SodA or FeSOD) and a copper- and zinc-dependent enzyme (SodC or CuZnSOD) [START_REF] Liao | The role of superoxide dismutase in the survival of Mycobacterium tuberculosis in macrophages[END_REF].
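As a reminder of the underlying chemistry (textbook reactions, not drawn from the cited studies), the detoxification steps catalyzed by SOD and by the catalase activity of KatG can be written as:

\[
\mathrm{2\,O_2^{\bullet -} + 2\,H^{+} \rightarrow H_2O_2 + O_2} \qquad \text{(SOD)}
\]
\[
\mathrm{2\,H_2O_2 \rightarrow 2\,H_2O + O_2} \qquad \text{(KatG, catalase activity)}
\]

Acting in sequence, the two enzymes thus convert the superoxide radical into harmless water and molecular oxygen.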
SodA is constitutively expressed and is an essential protein of MTB. A study shows that attenuation of sodA expression resulted in attenuated virulence in mice [START_REF] Edwards | Iron-cofactored superoxide dismutase inhibits host responses to Mycobacterium tuberculosis[END_REF].
Moreover, a genome-wide genetic interaction assay has demonstrated that SodA forms a membrane-associated oxidoreductase complex with the integral membrane protein (DoxX) and a predicted thiol-oxidoreductase (SseA). This SodA-DoxX-SseA complex links radical detoxification with cytosolic thiol homeostasis; its defect leads to accumulation of cellular oxidative damage [START_REF] Nambi | The Oxidative Stress Network of Mycobacterium tuberculosis Reveals Coordination between Radical Detoxification Systems[END_REF]. Other studies have investigated the role of sodC for MTB. Deletion of this gene in MTB results in a hypersensitivity to O2•− and H2O2 in vitro, in mice but not in guinea pigs [START_REF] Dussurget | Role of Mycobacterium tuberculosis copper-zinc superoxide dismutase[END_REF][START_REF] Piddington | Cu,Zn superoxide dismutase of Mycobacterium tuberculosis contributes to survival in activated macrophages that are generating an oxidative burst[END_REF].
Peroxidase: Thiol peroxidase (Tpx) from MTB has an NADPH-linked peroxidase activity and an antioxidant activity in a thiol-dependent metal-catalyzed oxidation system. This enzyme reduces H2O2, t-butyl hydroperoxide, and cumene hydroperoxide [START_REF] Rho | Functional and Structural Characterization of a Thiol Peroxidase from Mycobacterium tuberculosis[END_REF]. A tpx knock-out MTB strain is more sensitive to H2O2 and NO than the wild-type strain [START_REF] Hu | Acute and persistent mycobacterium tuberculosis infections depend on the thiol peroxidase TPX[END_REF]. This mutant has reduced peroxidase activity and attenuated virulence, and it fails to grow and survive in bone marrow-derived macrophages.
Other peroxidases are used by MTB as part of its antioxidant defense system, such as the alkyl hydroperoxide reductase subunit C (AhpC), which is reduced by the alkyl hydroperoxide reductase subunit D (AhpD) or by the thioredoxins TrxB and TrxC (C. F. [START_REF] Wong | AhpC of the mycobacterial antioxidant defense system and its interaction with its reducing partner Thioredoxin-C[END_REF]. Strains overexpressing AhpC are more resistant to cumene hydroperoxide than wild-type strains [START_REF] Sherman | AhpC, oxidative stress and drug resistance in Mycobacterium tuberculosis[END_REF].
Mycothiol (MSH):
MSH is a low-molecular-mass thiol with antioxidant activity. It maintains the reducing environment within MTB, protects against oxidants and detoxifies thiol-reactive compounds [START_REF] Buchmeier | A mycothiol synthase mutant of Mycobacterium tuberculosis has an altered thiol-disulfide content and limited tolerance to stress[END_REF]. MSH is required for the peroxidase activity of the peroxiredoxin AhpE [START_REF] Hugo | Mycothiol/mycoredoxin 1-dependent reduction of the peroxiredoxin AhpE from mycobacterium tuberculosis[END_REF]. Its deletion in M. smegmatis leads to a higher sensitivity to acid, alkylating agents and oxidative stress, demonstrating its importance for bacterial growth [START_REF] Rawat | Comparative analysis of mutants in the mycothiol biosynthesis pathway in Mycobacterium smegmatis[END_REF].
Host cell death interference
Several studies have investigated the role of apoptosis or necrosis in MTB fate. Classically, avirulent MTB tends to induce apoptosis whereas virulent MTB induces necrosis [START_REF] Behar | Evasion of innate immunity by mycobacterium tuberculosis: Is death an exit strategy?[END_REF][START_REF] Porcelli | Tuberculosis: Unsealing the apoptotic envelope[END_REF]. Virulent MTB can also trigger apoptosis, but only under specific conditions, revealing the complexity of cell death induced by MTB [START_REF] Butler | The Balance of Apoptotic and Necrotic Cell Death in Mycobacterium tuberculosis Infected Macrophages Is Not Dependent on Bacterial Virulence[END_REF][START_REF] Danelishvili | Mycobacterium tuberculosis infection causes different levels of apoptosis and necrosis in human macrophages and alveolar epithelial cells[END_REF]. Necrotic macrophages are considered a replicative niche for MTB. Indeed, release of MTB by necrosis allows its survival and dissemination to other cells, while inhibition of necrosis by chemical inhibitors limits MTB multiplication [START_REF] Lerner | Mycobacterium tuberculosis replicates within necrotic human macrophages[END_REF]. Hence, MTB has developed over time many processes to maximize its survival within the macrophage, notably by inducing necrosis and inhibiting apoptosis.
A toxin produced by MTB, named tuberculosis necrotizing toxin (TNT), induces necrosis of infected cells. TNT is secreted by MTB into the cytosol via the ESX-1 secretion-associated proteins EsxE and EsxF [START_REF] Tak | Pore-forming Esx proteins mediate toxin secretion by Mycobacterium tuberculosis[END_REF]. TNT hydrolyzes the cofactor NAD+, depleting the host of this essential cofactor and leading to cell necrosis (J. [START_REF] Sun | The tuberculosis necrotizing toxin kills macrophages by hydrolyzing NAD[END_REF]. In parallel, MTB produces an immunity factor for TNT (IFT) that inhibits TNT toxicity within the bacterium (J. [START_REF] Sun | The tuberculosis necrotizing toxin kills macrophages by hydrolyzing NAD[END_REF].
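For orientation, the reaction catalyzed by TNT corresponds to standard NAD+ glycohydrolase chemistry (a schematic reminder, not taken from the cited work):

\[
\mathrm{NAD^{+} + H_2O \rightarrow nicotinamide + ADP\text{-}ribose}
\]

By consuming the cytosolic NAD+ pool in this way, the toxin deprives the host cell of a cofactor required for energy metabolism, consistent with the necrotic outcome described above.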
NuoG is used by MTB to inhibit apoptosis by targeting ROS production. nuoG encodes a subunit of the type I NADH dehydrogenase that neutralizes ROS produced by NOX2 inside the phagosome. Neutralization of ROS inhibits TNF-α secretion and TNF-α-induced apoptosis. MTB lacking nuoG no longer inhibits apoptosis, which significantly reduces its virulence in mice [START_REF] Velmurugan | Mycobacterium tuberculosis nuoG Is a virulence gene that inhibits apoptosis of infected host cells[END_REF].
Additionally, MTB promotes necrosis by interacting with host cell lipids. PGE2 was previously cited as a pro-apoptotic eicosanoid regulating plasma membrane repair and mitochondrial protection (see section B.II.e.). Virulent MTB suppresses COX2 expression in infected macrophages, which leads to PGE2 inhibition (M. [START_REF] Chen | Lipid mediators in innate immunity against tuberculosis: Opposing roles of PGE 2 and LXA 4 in the induction of macrophage death[END_REF]. Instead, the lipoxygenases generate lipoxins (including LXA4) and leukotrienes from arachidonic acid.
Production of LXA4 impairs apoptotic signaling by inhibiting the mitochondria-dependent activation of apoptosis mediators (caspases) and increases the expression of anti-apoptotic proteins of the Bcl-2 family [START_REF] Prieto | Lipoxin A4 impairment of apoptotic signaling in macrophages: Implication of the PI3K/Akt and the ERK/Nrf-2 defense pathways[END_REF]. The absence of PGE2 also inhibits plasma membrane repair by Syt-7, promoting cell necrosis (Divangahi et al., 2009b). Furthermore, virulent MTB disrupts the mitochondrial transmembrane potential, leading to mitochondrial degradation and necrosis (M. [START_REF] Chen | A Mechanism of Virulence: Virulent Mycobacterium tuberculosis Strain H37Rv, but Not Attenuated H37Ra, Causes Significant Mitochondrial Inner Membrane Disruption in Macrophages Leading to Necrosis[END_REF].
Cytokine modulation and macrophage polarization
MTB survival depends on the level of pro-or anti-inflammatory cytokines secreted in the environment. For example, M2 macrophages, which produce more anti-inflammatory cytokines, are more vulnerable to MTB infection than M1 macrophages [START_REF] Khan | Macrophage heterogeneity and plasticity in tuberculosis[END_REF].
Endoplasmic reticulum (ER) stress is increased in M1 polarized macrophages which leads to a more effective clearance of intracellular MTB than M2 macrophages [START_REF] Lim | Roles of endoplasmic reticulum stress-mediated apoptosis in M1-polarized macrophages during mycobacterial infections[END_REF]. In this context, MTB has developed strategies to limit M1 polarization of macrophages and therefore the production of cytokines. Avirulent MTB infection activates M1 macrophages whereas virulent MTB promotes M2 macrophages polarization [START_REF] Lim | Roles of endoplasmic reticulum stress-mediated apoptosis in M1-polarized macrophages during mycobacterial infections[END_REF]. A study also demonstrates that the virulence factor ESAT-6 participates in the transition to the M2 phenotype through the inhibition of TLR2 signaling [START_REF] Lim | Roles of endoplasmic reticulum stress-mediated apoptosis in M1-polarized macrophages during mycobacterial infections[END_REF][START_REF] Pathak | Direct extracellular interaction between the early secreted antigen ESAT-6 of Mycobacterium tuberculosis and TLR2 inhibits TLR signaling in macrophages[END_REF]. These observations are corroborated by another study showing that M2 macrophages are predominant in granuloma in vitro [START_REF] Huang | Mycobacterium tuberculosis-induced polarization of human macrophage orchestrates the formation and development of tuberculous granulomas in vitro[END_REF].
IL-10 is one of the main cytokines whose production is beneficial for MTB. Indeed, IL-10 production exacerbates TB disease in mice and promotes MTB growth inside macrophages by blocking phagosomal maturation [START_REF] Beamer | Interleukin-10 Promotes Mycobacterium tuberculosis Disease Progression in CBA/J Mice[END_REF][START_REF] O'leary | IL-10 blocks phagosome maturation in Mycobacterium tuberculosis-infected human macrophages[END_REF]. Transgenic mice overexpressing IL-10 are more susceptible to MTB infection and have a larger M2 macrophage population [START_REF] Schreiber | Autocrine IL-10 Induces Hallmarks of Alternative Activation in Macrophages and Suppresses Antituberculosis Effector Mechanisms without Compromising T Cell Immunity[END_REF]. Consistently, IL-10-deficient mice have a reduced bacterial load in their lungs [START_REF] Redford | Enhanced protection to Mycobacterium tuberculosis infection in IL-10-deficient mice is accompanied by early and enhanced Th1 responses in the lung[END_REF]. MTB increases IL-10 production via several pathways, including activation of the ERK pathway. MTB activates this pathway via expression of C-C chemokine receptor 5 (CCR5) in host macrophages or by stimulating TLR2 to induce extracellular signal-regulated kinase (ERK1/2) phosphorylation, which ultimately leads to IL-10 production [START_REF] Das | Immune subversion by Mycobacterium tuberculosis through CCR5 mediated signaling: Involvement of IL-10[END_REF][START_REF] Richardson | Toll-like receptor 2-dependent extracellular signal-regulated kinase signaling in Mycobacterium tuberculosis-infected macrophages drives anti-inflammatory responses and inhibits Th1 polarization of responding T cells[END_REF].
Dormancy
Macrophage infection with MTB triggers a robust immune response and granuloma formation in the lungs. In the granuloma, MTB is exposed to different stresses such as hypoxia, nitric oxide, carbon monoxide and nutrient starvation [START_REF] Saunders | Restraining mycobacteria: Role of granulomas in mycobacterial infections[END_REF]. These conditions limit MTB replication and promote MTB dormancy [START_REF] Lipworth | Defining dormancy in mycobacterial disease[END_REF]. This dormant state allows the bacteria to regain metabolic activity, as well as the ability to replicate, once the environment becomes more favorable [START_REF] Lipworth | Defining dormancy in mycobacterial disease[END_REF]. Dormancy is an immune-evading ability of MTB, allowing the pathogen to persist within the host for many years before a putative reactivation.
The state of dormancy and reactivation is mainly regulated by the dormancy survival regulator (DosR), which coordinates the expression of around 50 genes coding for proteins implicated in hypoxia survival [START_REF] Leistikow | The Mycobacterium tuberculosis DosR regulon assists in metabolic homeostasis and enables rapid recovery from nonrespiring dormancy[END_REF][START_REF] Sivaramakrishnan | The DosS-DosT/DosR Mycobacterial Sensor System[END_REF]. DosS and DosR form a two-component sensor system, with DosS closely related to the DosT sensor kinase [START_REF] Honaker | Unique roles of DosT and DosS in DosR regulon induction and Mycobacterium tuberculosis dormancy[END_REF]. Under deleterious environmental conditions, DosS and DosT autophosphorylate and transfer the phosphate to DosR, resulting in its induction. Induction of the DosR regulon enables the expression of genes coding for proteins important for survival during hypoxia [START_REF] Honaker | Unique roles of DosT and DosS in DosR regulon induction and Mycobacterium tuberculosis dormancy[END_REF][START_REF] Sivaramakrishnan | The DosS-DosT/DosR Mycobacterial Sensor System[END_REF]. dosR, dosS and dosT are required for long-term persistence of MTB in the rhesus macaque model [START_REF] Mehra | The DosR Regulon Modulates Adaptive Immunity and Is Essential for Mycobacterium tuberculosis Persistence[END_REF]. The phoP-phoR virulence system also has a direct impact on hypoxia-inducible gene expression in MTB, in addition to targeting the dosR regulon (P. R. [START_REF] Singh | Metabolic Switching of Mycobacterium tuberculosis during Hypoxia Is Controlled by the Virulence Regulator PhoP[END_REF].
During dormancy, the metabolic pathways of MTB are modulated in reaction to nutrient starvation. Analysis of the transcriptome, proteome and metabolome of MTB during hypoxia reveals significant alterations in lipid catabolism and cholesterol degradation [START_REF] Galagan | The Mycobacterium tuberculosis regulatory network and hypoxia[END_REF]. MTB uses fatty acids as its major source of energy. Accumulation of host triacylglycerol (TAG) in the form of intracellular lipid droplets is critical for the survival of MTB in a dormant state [START_REF] Daniel | Mycobacterium tuberculosis Uses Host Triacylglycerol to Accumulate Lipid Droplets and Acquires a Dormancy-Like Phenotype in Lipid-Loaded Macrophages[END_REF]. MTB uses host TAG but also synthesizes its own TAG, notably via triacylglycerol synthase 1 (Tgs1), whose gene expression is regulated by DosR [START_REF] Daniel | Mycobacterium tuberculosis Uses Host Triacylglycerol to Accumulate Lipid Droplets and Acquires a Dormancy-Like Phenotype in Lipid-Loaded Macrophages[END_REF]. Other lipid changes are observed during hypoxia, notably regarding TMM and TDM. These cord factors, which form the outer lipid barrier of MTB and promote inflammation, are rapidly depleted during hypoxia [START_REF] Eoh | Metabolic anticipation in Mycobacterium tuberculosis[END_REF]. These decreases are reflected in a lower induction of the pro-inflammatory cytokines TNF-α and IL-12p40 in mouse bone marrow-derived macrophages [START_REF] Eoh | Metabolic anticipation in Mycobacterium tuberculosis[END_REF]. This cell envelope remodeling renders MTB less immunostimulatory.
Several studies highlight the role of the mycobacterial isocitrate lyase (Icl) during these metabolic changes. Briefly, the degradation of lipids by β-oxidation generates acetyl-CoA and the metabolism of cholesterol generates propionyl-CoA [START_REF] Mckinney | Persistence of Mycobacterium tuberculosis in macrophages and mice requires the glyoxylate shunt enzyme isocitrate lyase[END_REF][START_REF] Pandey | Mycobacterial persistence requires the utilization of host cholesterol[END_REF]. Propionyl-CoA is toxic at high concentrations and Icl prevents its excessive accumulation (X. [START_REF] Yang | Cholesterol metabolism increases the metabolic pool of propionate in Mycobacterium tuberculosis[END_REF]. Of note, Icl also enables the glyoxylate shunt, a shortcut of the tricarboxylic acid (TCA) cycle, which is essential for MTB dormancy [START_REF] Mckinney | Persistence of Mycobacterium tuberculosis in macrophages and mice requires the glyoxylate shunt enzyme isocitrate lyase[END_REF].
In this dormant state, the energy metabolism of hypoxic MTB also shifts towards a decline in ATP production (five to six times lower in hypoxic non-replicating bacteria compared to aerobic bacteria) [START_REF] Rao | The protonmotive force is required for maintaining ATP homeostasis and viability of hypoxic, nonreplicating Mycobacterium tuberculosis[END_REF].
MTB in a dormant state slows down its metabolic activity and replication and suspends diverse physiological functions [START_REF] Lipworth | Defining dormancy in mycobacterial disease[END_REF]. As a result, MTB develops a transient resistance or tolerance to antibiotics that act on growing bacteria by targeting cell wall synthesis mechanisms (such as INH), as cell wall synthesis is decreased in dormant MTB [START_REF] Tudó | Examining the basis of isoniazid tolerance in nonreplicating Mycobacterium tuberculosis using transcriptional profiling[END_REF]. A transcriptomic analysis shows that dormant MTB acquires a phenotypic tolerance and that the expression of genes coding for proteins involved in INH metabolization is decreased [START_REF] Karakousis | Altered expression of isoniazidregulated genes in drug-treated dormant Mycobacterium tuberculosis[END_REF]. However, the mechanisms of INH tolerance remain to be fully elucidated.
Regarding BDQ, the residual ATP synthase activity of the bacteria allows the antibiotic to remain active and kill the pathogen [START_REF] Koul | Diarylquinolines Are Bactericidal for Dormant Mycobacteria as a Result of Disturbed ATP Homeostasis[END_REF]. However, this killing may be delayed due to the utilization of metabolic pathways other than ATP synthase for energy production [START_REF] Koul | Delayed bactericidal response of Mycobacterium tuberculosis to bedaquiline involves remodelling of bacterial metabolism[END_REF].
Drug tolerance has also been demonstrated in clinical strains, in an in vivo context. A study performed on 245 MTB clinical isolates shows a significant change in first-line drug resistance for 38.5% of strains under hypoxic conditions (Z. [START_REF] Liu | Impact of hypoxia on drug resistance and growth characteristics of mycobacterium tuberculosis clinical isolates[END_REF]. Consistently, work performed on rabbit caseum indicates that slowly replicating or non-replicating bacteria display an extreme drug tolerance to first- and second-line antibiotics (except RIF, which appears to remain active) [START_REF] Sarathy | Extreme drug tolerance of mycobacterium tuberculosis in Caseum[END_REF]. Another important aspect of drug tolerance is related to the accessibility of drugs to the bacteria within the granuloma. Indeed, antibiotics diffuse differentially inside the granuloma, making combinations of antibiotics necessary for a complete sterilization of the caseum [START_REF] Cicchese | Both Pharmacokinetic Variability and Granuloma Heterogeneity Impact the Ability of the First-Line Antibiotics to Sterilize Tuberculosis Granulomas[END_REF].
It has been suggested that reactivation of dormant MTB depends on five resuscitation-promoting factors (RpfA-E) [START_REF] Kana | The resuscitation-promoting factors of Mycobacterium tuberculosis are required for virulence and resuscitation from dormancy but are collectively dispensable for growth in vitro[END_REF]. In a C57BL/6 mouse model, mutants lacking several rpf genes are defective in their ability to disseminate and reactivate, even after immunosuppression of the host [START_REF] Biketov | The role of resuscitation promoting factors in pathogenesis and reactivation of Mycobacterium tuberculosis during intra-peritoneal infection in mice[END_REF]. Rpfs play a role in hydrolysis of the bacterial cell wall, an initial step of reactivation [START_REF] Kana | Resuscitation-promoting factors as lytic enzymes for bacterial growth and signaling[END_REF].
More recently, it has been shown that the Clp gene regulator is also important for MTB reactivation, under DosR control [START_REF] Mcgillivray | The Mycobacterium tuberculosis Clp Gene Regulator Is Required for in Vitro Reactivation from Hypoxia-induced Dormancy[END_REF]. Indeed, a transcriptional analysis comparing MTB gene expression during hypoxia and during reactivation reveals that the Clp gene regulator (Rv2745c) is induced in both states [START_REF] Mcgillivray | The Mycobacterium tuberculosis Clp Gene Regulator Is Required for in Vitro Reactivation from Hypoxia-induced Dormancy[END_REF]. This regulator controls the Clp protease, which is crucial for the degradation of intracellular proteins; however, its role under hypoxic conditions remains to be determined.
Nutrient capture
Lipids
Nutrients inside the phagosome are scarce. In this context, MTB has developed several strategies to survive in the phagosome. Inside the macrophage, MTB utilizes host cholesterol and fatty acids as its main sources of carbon (S. T. [START_REF] Cole | Deciphering the biology of mycobacterium tuberculosis from the complete genome sequence[END_REF]. Some studies suggest that MTB deregulates the lipid metabolism of the host cell, leading to the formation of more permissive foamy macrophages [START_REF] Russell | Foamy macrophages and the progression of the human tuberculosis granuloma[END_REF][START_REF] Singh | Mycobacterium tuberculosis-driven targeted recalibration of macrophage lipid homeostasis promotes the foamy phenotype[END_REF]. Recently, it has been demonstrated that differentiation into foamy macrophages is promoted by the IL-10 / signal transducer and activator of transcription 3 (STAT3) axis via upregulation of the enzyme acyl-CoA cholesterol acyltransferase (ACAT) [START_REF] Genoula | Formation of foamy macrophages by tuberculous pleural effusions is triggered by the interleukin-10/signal transducer and activator of transcription 3 axis through ACAT upregulation[END_REF]. These foamy macrophages contain many lipid bodies, which represent a supply for MTB. Import of cholesterol and fatty acids through the cell envelope is managed by the Mce4 complex and by Mce1 and Rv3723/LucA, respectively [START_REF] Nazarova | Rv3723/LucA coordinates fatty acid and cholesterol uptake in Mycobacterium tuberculosis[END_REF][START_REF] Singh | Mycobacterium tuberculosis-driven targeted recalibration of macrophage lipid homeostasis promotes the foamy phenotype[END_REF]. Cholesterol is degraded via β-oxidation. Degradation of the cholesterol rings and side chain results in the production of acetyl-CoA, propionyl-CoA, succinyl-CoA and pyruvate, which are needed for bacterial metabolism [START_REF] Wilburn | Cholesterol and fatty acids grease the wheels of Mycobacterium tuberculosis pathogenesis[END_REF]. Fatty acids are either directly used in biosynthetic pathways or degraded [START_REF] Wilburn | Cholesterol and fatty acids grease the wheels of Mycobacterium tuberculosis pathogenesis[END_REF]. Furthermore, MTB uses the triacylglycerol of foamy macrophages to enter a dormancy-like state [START_REF] Daniel | Mycobacterium tuberculosis Uses Host Triacylglycerol to Accumulate Lipid Droplets and Acquires a Dormancy-Like Phenotype in Lipid-Loaded Macrophages[END_REF]. Moreover, cholesterol degradation has been shown to upregulate genes involved in the entry of MTB into a persistent state [START_REF] Pawełczyk | Cholesterol-dependent transcriptome remodeling reveals new insight into the contribution of cholesterol to Mycobacterium tuberculosis pathogenesis[END_REF].
Another important aspect of lipid metabolism concerns the eicosanoids. As previously described, they are important signaling molecules, notably in apoptotic/necrotic cell death and in the modulation of inflammation (see B.I.3 and B.II.6).
Metals
Intracellular MTB faces nutrient starvation, including iron deprivation. To survive, the pathogen acquires iron by stealing it from the host through different mechanisms. First of all, MTB produces siderophores with a high affinity for iron. There are three types of mycobacterial siderophores: mycobactin, carboxymycobactin and exochelin (K. [START_REF] Patel | Mycobacterial siderophore: A review on chemistry and biology of siderophore and its potential as a target for tuberculosis[END_REF]. Mycobactin is a lipophilic siderophore anchored in the membrane and cell wall of MTB, whereas carboxymycobactin has a similar structure but is more hydrophilic (K. [START_REF] Patel | Mycobacterial siderophore: A review on chemistry and biology of siderophore and its potential as a target for tuberculosis[END_REF]. Carboxymycobactin is secreted and released into the extracellular space to chelate and internalize iron. These siderophores are also capable of scavenging and removing iron from human transferrin and lactoferrin for the bacteria [START_REF] Luo | Mycobactin-mediated iron acquisition within macrophages[END_REF]. Exochelins have been mostly
described in other mycobacteria such as M. smegmatis and M. avium [START_REF] Horwitz | The Exochelins of Pathogenic Mycobacteria: Unique, Highly Potent, Lipid-and Water-Soluble Hexadentate Iron Chelators with Multiple Potential Therapeutic Uses[END_REF].
Synthesis of mycobacterial siderophores is mediated by cytoplasmic synthases encoded by
mbt-1 and mbt-2 [START_REF] Krithika | A genetic locus required for iron acquisition in Mycobacterium tuberculosis[END_REF]. Regulation of this synthesis depends on the iron-dependent regulator (IdeR). When iron is present in sufficient amounts, IdeR represses genes associated with mycobactin synthesis and induces genes coding for proteins related to the iron-storage bacterioferritins BfrA and BfrB [START_REF] Sritharan | Iron homeostasis in Mycobacterium tuberculosis: Mechanistic insights into siderophore-mediated Iron uptake[END_REF]. Inactivation of ideR is lethal for MTB in macrophages due to its increased sensitivity to oxidative stress (R. [START_REF] Pandey | IdeR is required for iron homeostasis and virulence in Mycobacterium tuberculosis[END_REF].
Export of mycobactin and carboxymycobactin is related to a membrane protein system composed of mycobacterial membrane protein small 4 and 5 (MmpS4-5) and mycobacterial membrane protein large 4 and 5 (MmpL4-5) transporters [START_REF] Wells | Discovery of a Siderophore Export System Essential for Virulence of Mycobacterium tuberculosis[END_REF]. Some studies also suggest the role of ESX-3 type VII secretion system in iron acquisition by MTB [START_REF] Serafini | The ESX-3 Secretion System Is Necessary for Iron and Zinc Homeostasis in Mycobacterium tuberculosis[END_REF][START_REF] Tufariello | Separable roles for Mycobacterium tuberculosis ESX-3 effectors in iron acquisition and virulence[END_REF]. Indeed, ESX-3 mutant shows an overproduction of mycobactins and fails to assimilate iron-linked mycobactin [START_REF] Tufariello | Separable roles for Mycobacterium tuberculosis ESX-3 effectors in iron acquisition and virulence[END_REF]. The precise role of ESX-3 in iron absorption has yet to be fully understood. Another study indicates that microvesicles containing mycobactins are secreted in response to iron restriction; these vesicles are supposed to deliver iron and to promote growth of iron-deficient bacteria (Prados- [START_REF] Prados-Rosales | Role for mycobacterium tuberculosis membrane vesicles in iron acquisition[END_REF]. However, mechanisms of action related to these vesicles are not fully characterized yet. Import of ferric-carboxymycobactins through the inner membrane is mediated by the IrtA/IrtB transporter [START_REF] Rodriguez | Identification of an ABC Transporter Required for Iron Acquisition and Virulence in Mycobacterium tuberculosis[END_REF]. IrtA also possesses a flavin adenine dinucleotide (FAD) domain which catalyzes iron reduction [START_REF] Ryndak | The Mycobacterium tuberculosis High-Affinity Iron Importer, IrtA, Contains an FAD-Binding Domain[END_REF].
However, an irtAB mutant still shows residual carboxymycobactin uptake, which indicates the presence of another import mechanism not yet discovered [START_REF] Rodriguez | Identification of an ABC Transporter Required for Iron Acquisition and Virulence in Mycobacterium tuberculosis[END_REF].
MTB also captures iron from the host via heme and hemoglobin uptake. Heme, the iron-containing prosthetic group of hemoglobin, binds one iron ion. Bacteria are able to scavenge heme to capture iron [START_REF] Jones | Mycobacterium tuberculosis can utilize heme as an iron source[END_REF]. Two putative systems are implicated in heme uptake: the MmpL3-MmpL11-Rv0203 system [START_REF] Tullius | Discovery and characterization of a unique mycobacterial heme acquisition system[END_REF] and the system formed by proline-proline-glutamate 36 (PPE36), PPE62 and the dipeptide transporter Dpp [START_REF] Mitra | PPE surface proteins are required for heme utilization by Mycobacterium tuberculosis[END_REF][START_REF] Mitra | Heme and hemoglobin utilization by Mycobacterium tuberculosis[END_REF]. Once inside the bacteria, the bacterial heme oxygenase MhuD degrades heme into mycobilin and iron, the latter becoming available for the bacteria [START_REF] Nambu | A new way to degrade heme: The mycobacterium tuberculosis enzyme MhuD catalyzes heme degradation without generating CO[END_REF]. MhuD also acts as a heme storage protein under iron-depleted conditions [START_REF] Matthews | MhuD from Mycobacterium tuberculosis: Probing a Dual Role in Heme Storage and Degradation[END_REF].
Besides iron, intracellular MTB also scavenges zinc to survive. Zinc uptake by MTB is regulated by the zinc uptake regulator (Zur, or FurB) [START_REF] Maciaģ | Global analysis of the Mycobacterium tuberculosis Zur (FurB) regulon[END_REF]. In a zinc-limited environment, zur expression is upregulated. Moreover, under these environmental conditions, MTB also shows an adaptive response with an increased resistance to oxidative stress (with higher KatG expression) [START_REF] Dow | Zinc limitation triggers anticipatory adaptations in Mycobacterium tuberculosis[END_REF]. However, the most described host defense mechanism related to zinc is intoxication rather than starvation [START_REF] Neyrolles | Zinc and copper toxicity in host defense against pathogens: Mycobacterium tuberculosis as a model example of an emerging paradigm[END_REF] (Neyrolles et al., 2015). To counteract zinc toxicity, MTB extrudes excess zinc via the P1B-type ATPase efflux pumps CtpC and CtpV [START_REF] Botella | Mycobacterial P 1-Type ATPases mediate resistance to Zinc poisoning in human macrophages[END_REF][START_REF] Ward | CtpV: A putative copper exporter required for full virulence of Mycobacterium tuberculosis[END_REF]. The ctpC mutant is hypersensitive to physiological concentrations of zinc and exhibits impaired growth in human macrophages [START_REF] Botella | Mycobacterial P 1-Type ATPases mediate resistance to Zinc poisoning in human macrophages[END_REF].
During the host-pathogen interaction, maintaining proper metal homeostasis through acquisition and expulsion is crucial for pathogen survival. As previously described in this thesis, MTB is a well-adapted pathogen that has developed a number of mechanisms to counteract the host immune defenses. Therefore, if the host cannot manage to kill the pathogen, eradication will rely on the use of antibiotics. However, a rapid increase in MTB strains resistant to the drugs currently used to treat TB has been observed over time. As a result, new strategies have to be developed to fight both susceptible and resistant TB, with the host as the central target.
C. New strategies to fight tuberculosis
I. Host-directed therapies
Definition
The current anti-TB treatment regimen is challenging for the patient. Indeed, this treatment has to be taken over a long period of time and may cause toxicity, which often leads to non-compliance by the patient. As a consequence, a lack of compliance frequently promotes the acquisition of antibiotic resistance by MTB and the development of MDR-MTB and XDR-MTB [START_REF] Prasad | Adverse drug reactions in tuberculosis and management[END_REF]. These observations highlight the increasing need for novel or improved strategies to treat TB.
In this context, host-directed therapies (HDT) appear to be promising treatments against TB.
They aim to control an infection by modulating the host immune response [START_REF] Machelart | Host-directed therapies offer novel opportunities for the fight against tuberculosis[END_REF]. Unlike antibiotics, HDT are less likely to induce bacterial resistance, as they act solely on host cellular pathways, for instance by limiting inflammation or tissue damage [START_REF] Kaufmann | Host-directed therapies for bacterial and viral infections[END_REF]. Moreover, HDT have been shown to be efficient against MDR-/XDR-MTB [START_REF] Zumla | Host-directed therapies for multidrug resistant tuberculosis[END_REF]. Due to possible combinations and synergistic effects with antibiotics, they can also shorten TB treatment regimens and thereby increase patient compliance. HDT encompass many molecules targeting different host cellular pathways, which are detailed below (Figure 4).
Granuloma-targeted HDT
As mentioned previously, the granuloma is the site of a balance between the host and the pathogen. While it prevents MTB from spreading, the granuloma is also described as an environment where bacteria multiply and eventually propagate if not controlled [START_REF] Pagán | The Formation and Function of Granulomas[END_REF]. Granuloma-targeted therapy is a major advance in the fight against TB, designed to prevent tissue damage, enhance the protective immune response and lead to better control or resolution of the infection [START_REF] Hortle | Host-directed therapies targeting the tuberculosis granuloma stroma[END_REF].
Figure 4. Host-directed therapies against MTB [START_REF] Kaufmann | Host-directed therapies for bacterial and viral infections[END_REF]. (Copyrighted material, not reproduced here.)
Granuloma has poor vascularization, low availability of nutrients and a hypoxic center [START_REF] Pagán | The Formation and Function of Granulomas[END_REF]. In these environmental conditions, mycobacterial dormancy is promoted, limiting the impact of some antibiotics on bacterial infection [START_REF] Miranda | The tuberculous granuloma: An unsuccessful host defence mechanism providing a safety shelter for the bacteria?[END_REF]. Furthermore, drug penetration is severely impaired in necrotic lesions of granuloma [START_REF] Prideaux | Mass spectrometry imaging of levofloxacin distribution in TB-infected pulmonary lesions by MALDI-MSI and continuous liquid microjunction surface sampling[END_REF][START_REF] Sarathy | Caseum: A niche for mycobacterium tuberculosis drugtolerant persisters[END_REF].
On the one hand, granuloma-targeted therapy promotes granuloma vascularization (with the potential to improve drug diffusion), increases the access of host immune cells and promotes tissue healing [START_REF] Dartois | The path of anti-tuberculosis drugs: From blood to lesions to mycobacterial cells[END_REF]. On the other hand, promoting angiogenesis without an adequate antibiotic treatment is detrimental, as it may facilitate MTB spreading through blood vessels. Interestingly, patients with TB tend to have higher levels of angiogenic factors such as VEGF [START_REF] Alatas | Vascular Endothelial Growth Factor Levels in Active Pulmonary Tuberculosis[END_REF][START_REF] Matsuyama | Increased Serum Level of Vascular Endothelial Growth Factor in Pulmonary Tuberculosis[END_REF], as MTB increases the production of VEGF, via cis-cyclopropanation of TDM, as well as that of angiopoietin-2 (another angiogenic factor) [START_REF] Oehlers | Infection-induced vascular permeability aids mycobacterial growth[END_REF][START_REF] Polena | Mycobacterium tuberculosis exploits the formation of new blood vessels for its dissemination[END_REF][START_REF] Walton | Cyclopropane Modification of Trehalose Dimycolate Drives Granuloma Angiogenesis and Mycobacterial Growth through Vegf Signaling[END_REF]. Conversely, inhibiting angiogenesis is also an HDT strategy to limit MTB growth, by blocking VEGF production or its receptor VEGFR. Anti-VEGF antibodies such as bevacizumab inhibit the growth and dissemination of MTB in rabbits [START_REF] Datta | Anti-vascular endothelial growth factor treatment normalizes tuberculosis granuloma vasculature and improves small molecule delivery[END_REF] and in mice [START_REF] Polena | Mycobacterium tuberculosis exploits the formation of new blood vessels for its dissemination[END_REF]. Small molecules such as pazopanib, which interfere with the kinase activity downstream of VEGFR activation, and antibodies blocking VEGFR reduce bacterial burden in a zebrafish model [START_REF] Oehlers | Infection-induced vascular permeability aids mycobacterial growth[END_REF]. Some studies also report the use of anti-VEGF therapy in ocular TB with good results, showing a potential future for this HDT to treat different types of TB [START_REF] Invernizzi | Optic Nerve Head Tubercular Granuloma Successfully Treated with Anti-VEGF Intravitreal Injections in Addition to Systemic Therapy[END_REF] (Jain et al., 2019).
Fibrosis surrounding the granuloma is one of the main characteristics of TB disease. Matrix metalloproteinases (MMPs) are key mediators of TB pathology with respect to granuloma degradation and are associated with clinical markers of tissue damage [START_REF] Elkington | MMP-1 drives immunopathology in human tuberculosis and transgenic mice[END_REF].
During the disease, MTB upregulates mmp9 expression which leads to recruitment of uninfected macrophages, thus favoring the bacterial spreading [START_REF] Volkman | Tuberculous granuloma induction via interaction of a bacterial secreted protein with host epithelium[END_REF]. In the MTB-infected human tissue model, global inhibition of MMPs results in smaller and fewer granuloma and reduced bacterial load [START_REF] Parasa | Inhibition of tissue matrix metalloproteinases interferes with Mycobacterium tuberculosis-induced granuloma formation and reduces bacterial load in a human lung tissue model[END_REF]. In addition, a study has demonstrated that MMP inhibition stabilizes the vascular integrity of the granuloma and increases the delivery of anti-TB drugs such as RIF and INH [START_REF] Xu | Matrix metalloproteinase inhibitors enhance the efficacy of frontline drugs against Mycobacterium tuberculosis[END_REF]. A clinical trial has been conducted with doxycycline, a MMP9 inhibitor, to determine the role of MMP inhibition in patients. This trial reveals that doxycycline inhibits matrix destruction and reduces pulmonary cavity volume [START_REF] Miow | Doxycycline host-directed therapy in human pulmonary tuberculosis[END_REF]. While MMP inhibitors are valuable and promising candidates as HDT for TB treatment, deeper investigations are needed to validate the benefits of this HDT. Indeed, MMP are valuable enzymes with many functions in the host.
Therefore, inhibiting these proteases could lead to diverse side effects, potentially toxic for humans.
Immune cell functions-targeted HDT
Besides granulomas, HDT can directly target immune cell functions in order to prevent bacterial growth and dissemination, by limiting MTB entry into host cells, modulating ROS production or enhancing autophagy.
Host trafficking
One way to interfere with the MTB infection cycle is to target the entry of mycobacteria into macrophages. Tyrosine kinases are known to be activated during the internalization of MTB and are therefore prime targets for HDT. The Abelson (Abl) tyrosine kinase is involved in bacterial uptake and modulates the acidification of the phagosome via V-ATPase expression [START_REF] Bruns | Abelson Tyrosine Kinase Controls Phagosomal Acidification Required for Killing of Mycobacterium tuberculosis in Human Macrophages[END_REF]. Imatinib, an inhibitor of the Abl tyrosine kinase, reduces the number of granulomatous lesions and the bacterial load in infected organs [START_REF] Napier | Imatinib-Sensitive tyrosine kinases regulate mycobacterial pathogenesis and represent therapeutic targets against tuberculosis[END_REF]. This inhibitor also triggers phagolysosomal acidification and has an antimicrobial activity against BCG [START_REF] Steiger | Imatinib Triggers Phagolysosome Acidification and Antimicrobial Activity against Mycobacterium bovis Bacille Calmette-Guérin in Glucocorticoid-Treated Human Macrophages[END_REF]. Furthermore, a synergistic effect between imatinib and the first-line anti-TB drug RIF has been shown in murine macrophage cell lines infected by M. marinum [START_REF] Napier | Imatinib-Sensitive tyrosine kinases regulate mycobacterial pathogenesis and represent therapeutic targets against tuberculosis[END_REF]. The Src tyrosine kinase also regulates phagosomal maturation, acidification and autophagy [START_REF] Karim | Express path analysis identifies a tyrosine kinase Src-centric network regulating divergent host responses to Mycobacterium tuberculosis infection[END_REF]. Its inhibition by AZD0530 decreases bacterial survival in human THP-1 macrophages and in guinea pigs [START_REF] Chandra | Targeting Drug-Sensitive and -Resistant Strains of Mycobacterium tuberculosis by Inhibition of Src Family Kinases Lowers Disease Burden and Pathology[END_REF]. A larger in silico study with repurposed drugs reveals that dovitinib, AT9283 and ENMD-2076 target receptor tyrosine kinases. Tyrosine kinase inhibition mediated by these drugs decreases the survival of intracellular MTB [START_REF] Korbee | Combined chemical genetics and data-driven bioinformatics approach identifies receptor tyrosine kinase inhibitors as host-directed antimicrobials[END_REF]. All of these studies identify the tyrosine kinase signaling pathway as a major target for HDT against MTB.
Reactive oxygen species
While ROS production is an important defense mechanism of macrophages against pathogens, an excess of ROS and low levels of antioxidants lead to uncontrolled oxidative stress and induce necrosis of the cell [START_REF] Morgan | TNFα and reactive oxygen species in necrotic cell death[END_REF]. In this context, using HDT to finely tune ROS production allows the enhancement of antimicrobial activity while maintaining the integrity of the cell. N-acetylcysteine (NAC) provides cysteine, necessary for the synthesis of glutathione (GSH), an intracellular antioxidant [START_REF] Ejigu | N-Acetyl Cysteine as an Adjunct in the Treatment of Tuberculosis[END_REF]. GSH maintains redox homeostasis, protecting cells and tissues from oxidative damage [START_REF] Ejigu | N-Acetyl Cysteine as an Adjunct in the Treatment of Tuberculosis[END_REF]. In addition, S-nitrosoglutathione (GSNO; the product of the reaction of NO with GSH) exerts an anti-mycobacterial effect [START_REF] Venketaraman | Glutathione and nitrosoglutathione in macrophage defense against Mycobacterium tuberculosis[END_REF]. Moreover, NAC itself has an anti-mycobacterial activity [START_REF] Amaral | N-acetyl-cysteine exhibits potent anti-mycobacterial activity in addition to its known anti-oxidative functions[END_REF]. Indeed, a study performed on MTB-infected guinea pigs demonstrates that adjunction of NAC decreases the spleen bacterial load as well as the severity of lung lesions by limiting oxidative conditions [START_REF] Palanisamy | Evidence for oxidative stress and defective antioxidant response in guinea pigs with tuberculosis[END_REF]. The effect and safety of NAC have been tested in two randomized controlled trials. The first one shows that patients in the intensive phase of the anti-TB regimen supplemented with NAC have improved lung pathology with faster sputum conversion [START_REF] Mahakalkar | Nacetylcysteine as an add-on to Directly Observed Therapy Short-I therapy in fresh pulmonary tuberculosis patients: A randomized, placebo-controlled, double-blinded study[END_REF]. The second one, performed on HIV-positive patients, concludes that the use of NAC is safe for TB patients with HIV coinfection [START_REF] Safe | Safety and efficacy of Nacetylcysteine in hospitalized patients with HIV-associated tuberculosis: An open-label, randomized, phase II trial (RIPENACTB Study)[END_REF]. However, the effect on sputum conversion was not assessed in this study. Further confirmation will be needed to clarify the potential role of NAC as an HDT for TB treatment.
Autophagy
Autophagy is a common mechanism used by the cell to control MTB infection. Enhancement of autophagy is often associated with better bacterial clearance. Several drugs have been identified that promote autophagic pathways at different levels.
During MTB infection, the PI3K/Akt/mTOR pathway is activated, facilitating the intracellular survival of MTB (P. [START_REF] Singh | Harnessing the mTOR Pathway for Tuberculosis Treatment[END_REF]. PI3K/Akt/mTOR is a signaling pathway that plays a key role in cell homeostasis by regulating apoptosis, the cell cycle and autophagy. When activated, mTOR blocks the initiation of the complex that starts autophagosome formation (P. [START_REF] Singh | Harnessing the mTOR Pathway for Tuberculosis Treatment[END_REF]. Therefore, inhibition of mTOR appears to be a potential target for HDT.
Everolimus, a derivative of the mTOR inhibitor rapamycin, is efficient against MTB infection inside granulomas and displays additive effects with first-line anti-TB drugs [START_REF] Ashley | Antimycobacterial effects of everolimus in a human granuloma model[END_REF]. Bruton's tyrosine kinase (BTK) acts on the PI3K/Akt/mTOR pathway. Ibrutinib, a BTK inhibitor, reduces intracellular MTB in human macrophages and in mice, and restores the autophagic flux towards MTB-containing autophagosomes [START_REF] Hu | Ibrutinib suppresses intracellular mycobacterium tuberculosis growth by inducing macrophage autophagy[END_REF].
Vitamins are necessary for the regulation of many host cellular functions. In particular, vitamin D, whose active form is calcitriol, plays a role in controlling TB infection. Vitamin D binds to its receptor VDR, which acts as a transcription factor regulating numerous genes coding for cathelicidin, defensins and MMP enzymes [START_REF] Periyasamy | Vitamin D -A host directed autophagy mediated therapy for tuberculosis[END_REF]. Cathelicidin (LL-37), previously described in this thesis, is an antimicrobial peptide produced following MTB infection and TLR stimulation (P. T. [START_REF] Liu | Toll-like receptor triggering of a vitamin D-mediated human antimicrobial response[END_REF]; it increases autophagy and the production of protective cytokines [START_REF] Periyasamy | Vitamin D -A host directed autophagy mediated therapy for tuberculosis[END_REF]. Vitamin D adjunctive therapy has been widely tested in trials, with no clear conclusion drawn. One trial suggested that vitamin D supplementation promotes radiographic improvement and increases host immune activation, but only among patients with a previous vitamin D deficiency [START_REF] Salahuddin | Vitamin D accelerates clinical recovery from tuberculosis: results of the SUCCINCT Study [Supplementary Cholecalciferol in recovery from tuberculosis]. A randomized, placebocontrolled, clinical trial of vitamin D supplementation in patients with pulmonary tuberculosis[END_REF]. A recent meta-analysis on vitamin D supplementation concludes that this vitamin has no overall beneficial effect on anti-TB treatment but improves sputum conversion only in patients carrying a certain VDR polymorphism (J. [START_REF] Zhang | Effectiveness of vitamin D supplementation on the outcome of pulmonary tuberculosis treatment in adults: A meta-analysis of randomized controlled trials[END_REF]. The observed differences between trials might be linked to the dosage, the route of administration or the endogenous level of vitamin D among patients. Still, anti-TB treatment combined with vitamin D could be beneficial, but only in certain populations.
Metformin is an anti-hyperglycemic agent with immunomodulatory effects that promotes autophagy and phagosome-lysosome fusion via the 5'AMP-activated protein kinase (AMPK) pathway (G. [START_REF] Zhou | Role of AMPactivated protein kinase in mechanism of metformin action[END_REF]. Furthermore, metformin induces mtROS production and promotes phagosome maturation in vitro and in vivo in MTB-infected models [START_REF] Singhal | Metformin as adjunct antituberculosis therapy[END_REF]. Importantly, this drug also enhances the anti-TB activity of INH and ethionamide in a mouse model [START_REF] Singhal | Metformin as adjunct antituberculosis therapy[END_REF]. Regarding clinical data, several studies have repeatedly demonstrated that metformin is beneficial for diabetic patients by reducing the risk of TB ([START_REF] Yu | Impact of metformin on the risk and treatment outcomes of tuberculosis in diabetics: A systematic review[END_REF]; M. Zhang & He, 2019). Nowadays, several trials are ongoing to evaluate the use of metformin with other anti-TB treatments among patients with [START_REF] Padmapriyadarsini | Evaluation of metformin in combination with rifampicin containing antituberculosis therapy in patients with new, smear-positive pulmonary tuberculosis (METRIF): Study protocol for a randomised clinical trial[END_REF] or without HIV (phase II clinical trial). A study on blood and immune cells from healthy individuals shows that metformin decreases the host inflammatory response upon stimulation with MTB [START_REF] Lachmandas | Metformin Alters Human Host Responses to Mycobacterium tuberculosis in Healthy Subjects[END_REF]. However, disparate results have been obtained in non-diabetic versus diabetic mice [START_REF] Sathkumara | Disparate effects of metformin on mycobacterium tuberculosis infection in diabetic and nondiabetic mice[END_REF]. Further non-retrospective studies in non-diabetic patients are needed to confirm the benefits of metformin in the treatment of TB.
Numerous drugs have been shown to induce autophagy and reduce intracellular MTB growth in macrophages, such as loperamide, verapamil or valproic acid [START_REF] Abate | New verapamil analogs inhibit intracellular mycobacteria without affecting the functions of Mycobacterium-specific T cells[END_REF][START_REF] Juárez | Loperamide restricts intracellular growth of Mycobacterium tuberculosis in lung macrophages[END_REF]. More recently, BDQ has been found to activate autophagy and improve bacterial clearance [START_REF] Giraud-Gatineau | The antibiotic bedaquiline activates host macrophage innate immune resistance to bacterial infection[END_REF]. Together, these studies suggest that targeting autophagy pathways by HDT could be a valuable strategy to improve TB treatment.
Inflammatory modulators and cytokine modulation by HDT
Corticosteroids
While the inflammatory response related to MTB infection is crucial for granuloma formation, it may also be detrimental for the host by creating tissue damage if not controlled.
Corticosteroids have anti-inflammatory and immunosuppressive actions and have been extensively employed to treat infections and autoimmune diseases [START_REF] Coutinho | The anti-inflammatory and immunosuppressive effects of glucocorticoids, recent developments and mechanistic insights[END_REF]. The use of corticosteroids as an adjunctive treatment for TB is based on their anti-inflammatory capacity, preventing chronic and unproductive inflammation. The mechanisms of action of corticosteroids rely on the regulation of anti-inflammatory genes such as IL-10, via their interaction with the glucocorticoid-responsive element after binding to glucocorticoid receptors [START_REF] Schutz | Corticosteroids as an adjunct to tuberculosis therapy[END_REF]. Moreover, corticosteroids interact with NF-κB pathways, inhibiting the transcription of inflammatory mediators [START_REF] Scheinman | Characterization of mechanisms involved in transrepression of NF-κB by activated glucocorticoid receptors[END_REF]. They also mediate several signal transduction cascades via their binding to G protein-coupled receptors [START_REF] Schutz | Corticosteroids as an adjunct to tuberculosis therapy[END_REF]. Recently, corticosteroids, notably dexamethasone, have also been shown to inhibit MTB-induced necrotic host cell death by abrogating the mitochondrial membrane permeability transition through inhibition of the p38 mitogen-activated protein kinase (MAPK) pathway [START_REF] Gräb | Corticosteroids inhibit Mycobacterium tuberculosis-induced necrotic host cell death by abrogating mitochondrial membrane permeability transition[END_REF].
Many trials have been conducted to evaluate the role of corticosteroids, like dexamethasone and prednisolone, as adjuvants for TB treatment, with very different outcomes [START_REF] Schutz | Corticosteroids as an adjunct to tuberculosis therapy[END_REF]. For instance, a meta-analysis covering all forms of TB concludes that corticosteroids are effective in reducing mortality [START_REF] Critchley | Corticosteroids for prevention of mortality in people with tuberculosis: A systematic review and meta-analysis[END_REF]. However, another meta-analysis focusing only on pulmonary TB suggests that adjunctive corticosteroids do not improve long-term treatment efficacy but induce more toxic side effects [START_REF] Xie | The efficacy and safety of adjunctive corticosteroids in the treatment of tuberculous pleurisy: A systematic review and meta-analysis[END_REF]. Together, these two analyses point out that the localization of TB (pulmonary or extrapulmonary) as well as the patient profile should be considered before using this adjunct therapy. Finally, it is important to bear in mind that corticosteroids reduce the cytokine responses to TB antigens used in IGRA assays and may interfere with the results [START_REF] Clifford | The impact of anti-tuberculous antibiotics and corticosteroids on cytokine production in QuantiFERON-TB Gold In Tube assays[END_REF]. Dexamethasone is currently the only drug in phase IV trials for the therapy of meningeal TB, and further investigations are required to clarify the effect of corticosteroids on TB outcomes.
Phosphodiesterase inhibitors
Phosphodiesterases (PDE) form a large family of enzymes with different structures, substrate specificities and tissue distributions [START_REF] Page | Phosphodiesterase inhibitors in the treatment of inflammatory diseases[END_REF]. Host PDE hydrolyze two second messengers: cyclic adenosine monophosphate (cAMP) and cyclic guanosine monophosphate (cGMP). cAMP has an anti-inflammatory and tissue-protective effect when highly concentrated in the cell [START_REF] Page | Phosphodiesterase inhibitors in the treatment of inflammatory diseases[END_REF]. In this context, PDE are suitable candidates for HDT, as their inhibition should result in a reduction of inflammation and an improvement of antimycobacterial responses. A study performed with the PDE-3 inhibitor cilostazol and the PDE-5 inhibitor sildenafil shows that these molecules reduce tissue damage and TNF-α, promote bacterial clearance and accelerate lung sterilization in mice [START_REF] Maiga | Successful shortening of tuberculosis treatment using adjuvant hostdirected therapy with FDA-approved phosphodiesterase inhibitors in the mouse model[END_REF]. Other studies on PDE-4 inhibitors such as CC-11050, CC-3052 and roflumilast demonstrate similar properties in rabbits and in mice [START_REF] Koo | Phosphodiesterase 4 inhibition reduces innate immunity and improves isoniazid clearance of Mycobacterium tuberculosis in the lungs of infected mice[END_REF][START_REF] Subbian | Adjunctive Phosphodiesterase-4 Inhibitor Therapy Improves Antibiotic Response to Pulmonary Tuberculosis in a Rabbit Model[END_REF].
More interestingly, these PDE inhibitors combined with INH lead to a greater reduction in bacterial load and improved lung pathology [START_REF] Koo | Phosphodiesterase 4 inhibition reduces innate immunity and improves isoniazid clearance of Mycobacterium tuberculosis in the lungs of infected mice[END_REF][START_REF] Subbian | Phosphodiesterase-4 inhibition combined with isoniazid treatment of rabbits with pulmonary tuberculosis reduces macrophage activation and lung pathology[END_REF][START_REF] Subbian | Adjunctive Phosphodiesterase-4 Inhibitor Therapy Improves Antibiotic Response to Pulmonary Tuberculosis in a Rabbit Model[END_REF]. A phase II clinical trial in humans is ongoing with CC-11050 to assess its potential to shorten TB treatment.
Cytokines
Cytokines play a key role in TB outcomes. Depending on their pro- or anti-inflammatory properties, they modulate host immunity as well as the cell response to the pathogen.
HDT targeting cytokines aim to reduce excessive or inappropriate cytokine responses, for instance with anti-TNF-α in certain cases [START_REF] Mootoo | TNF-α in tuberculosis: A cytokine with a split personality[END_REF]. These HDT also promote Th1 cytokines such as IL-2, IL-12, IFN-γ and GM-CSF, which are important for the host defense against pathogens (Zeng et al., 2017). TNF-α is a cytokine with different effects on the host: an appropriate TNF-α level preserves granuloma integrity, whereas high levels worsen the outcome of TB [START_REF] Mootoo | TNF-α in tuberculosis: A cytokine with a split personality[END_REF]. Therefore, TNF-α-targeted therapies preferentially focus on reducing the level of this cytokine [START_REF] Morgan | TNFα and reactive oxygen species in necrotic cell death[END_REF]. The use of TNF-α antibodies (adalimumab and infliximab) during specific paradoxical reactions (exacerbation of the disease and worsened inflammation after initiation of anti-TB therapy) has been highly effective for treating this type of patient [START_REF] Blackmore | Therapeutic use of infliximab in tuberculosis to control severe paradoxical reaction of the brain and lymph nodes[END_REF][START_REF] Wallis | Drug tolerance in Mycobacterium tuberculosis[END_REF]. Anti-TNF-α therapy should therefore be carefully considered and used only in certain cases (such as paradoxical reactions).
IFN-γ is one of the main cytokines involved in the protective response against MTB and is considered a good candidate for cytokine therapy [START_REF] Cavalcanti | Role of TNF-Alpha, IFN-Gamma, and IL-10 in the Development of Pulmonary Tuberculosis[END_REF]. IFN-γ activates macrophages and promotes cell proliferation, apoptosis and bacterial killing [START_REF] Cavalcanti | Role of TNF-Alpha, IFN-Gamma, and IL-10 in the Development of Pulmonary Tuberculosis[END_REF]. IFN-γ-deficient mice are highly susceptible to MTB infection [START_REF] Flynn | An essential role for interferon γ in resistance to mycobacterium tuberculosis infection[END_REF]. Given the importance of this cytokine, several human trials have been implemented to assess the efficacy of HDT targeting IFN-γ. In one study, patients treated with aerosolized human recombinant IFN-γ1b show an improvement in MTB sputum clearance and a decrease in inflammatory cytokine levels at the site of disease [START_REF] Condos | Recombinant gamma interferon stimulates signal transduction and gene expression in alveolar macrophages in vitro and in tuberculosis patients[END_REF]. Another study was performed on TB patients receiving nebulized IFN-γ1b or subcutaneous injections of IFN-γ1b. All patients treated with IFN-γ (nebulized or injected) have better TB outcomes and increased sputum conversion [START_REF] Dawson | Immunomodulation with recombinant interferon-γ1b in pulmonary tuberculosis[END_REF]. Despite the effectiveness of IFN-γ-targeting HDT, these therapies are expensive and difficult to produce [START_REF] Hawn | Host-Directed Therapeutics for Tuberculosis: Can We Harness the Host?[END_REF].
Several other trials should be performed to confirm the benefit of this cytokine.
Regarding other cytokines, a study has revealed that overexpression of GM-CSF in acute TB contributes to an efficient M1 polarization and an effective immune response (Benmerzoug et al., 2018). On one hand, TB patients treated with adjunct subcutaneous GM-CSF show faster sputum clearance without any adverse effect, highlighting its potential role as an HDT [START_REF] Pedral-Sampaio | Use of Rhu-GM-CSF in pulmonary tuberculosis patients: results of a randomized clinical trial[END_REF]. On the other hand, a trial using adjunctive recombinant IL-2 failed to demonstrate a difference in sputum conversion or bacterial clearance after a couple of months, compared to untreated patients [START_REF] Johnson | Randomized trial of adjunctive interleukin-2 in adults with pulmonary tuberculosis[END_REF].
In conclusion, targeting cytokines as an HDT could have unpredictable outcomes. These therapies should therefore be used with caution to avoid exacerbation or an inappropriate host immune response.
Host lipid metabolism
Eicosanoids: Eicosanoids, previously described in this thesis, are important signaling molecules that modulate inflammation and cell death (M. [START_REF] Chen | Lipid mediators in innate immunity against tuberculosis: Opposing roles of PGE 2 and LXA 4 in the induction of macrophage death[END_REF][START_REF] Rocca | Cyclooxygenases and prostaglandins: Shaping up the immune response[END_REF].
The balance between PGE2 and LXA4/LXB4 is crucial: depending on the predominant eicosanoid, it can lead to apoptosis or necrosis of the cell (M. [START_REF] Chen | Lipid mediators in innate immunity against tuberculosis: Opposing roles of PGE 2 and LXA 4 in the induction of macrophage death[END_REF]. To tilt the balance in favor of PGE2, HDT could inhibit the 5-lipoxygenase (5-LOX), responsible for LXA4 synthesis (which can be hydrolyzed into LXB4). Indeed, lipoxins negatively regulate the production of major protective cytokines in TB infection, like IL-12 or IFN-γ, but also the enzyme NOX2 [START_REF] Bafica | Host control of Mycobacterium tuberculosis is regulated by 5-lipoxygenase-dependent lipoxin production[END_REF]. Inhibition of LXA4 should therefore be beneficial for the host. In this respect, a study has shown that mice treated with zileuton, a 5-LOX inhibitor, combined with PGE2 supplementation, display higher IL-1β production through the limitation of type I IFN, and a decrease of MTB growth [START_REF] Mayer-Barber | Host-directed therapy of tuberculosis based on interleukin-1 and type i interferon crosstalk[END_REF]. Besides being considered a potential HDT candidate to treat TB, zileuton is also used to treat asthma [START_REF] Thalanayar Muthukrishnan | Zileuton use and phenotypic features in asthma[END_REF].
Despite the apparent benefit of PGE2 in TB, its role remains controversial in MTB infection [START_REF] Moreno | The role of prostaglandin E2 in the immunopathogenesis of experimental pulmonary tuberculosis[END_REF][START_REF] Sorgi | Eicosanoid pathway on host resistance and inflammation during Mycobacterium tuberculosis infection is comprised by LTB4 reduction but not PGE2 increment[END_REF]. Even if PGE2 is important at an early stage of infection, later overexpression of PGE2 downregulates cell-mediated immunity, especially IFN-γ and TNF-α [START_REF] Moreno | The role of prostaglandin E2 in the immunopathogenesis of experimental pulmonary tuberculosis[END_REF][START_REF] Sorgi | Eicosanoid pathway on host resistance and inflammation during Mycobacterium tuberculosis infection is comprised by LTB4 reduction but not PGE2 increment[END_REF]. Non-steroidal anti-inflammatory drugs (NSAID) counteract the overproduction of PGE2. NSAID inhibit COX activity, and therefore the production of prostaglandins and other prostanoids with pro-inflammatory activity [START_REF] Maitra | Repurposing drugs for treatment of tuberculosis: A role for nonsteroidal anti-inflammatory drugs[END_REF]. Ibuprofen, aspirin, celecoxib and diclofenac significantly increase host cell survival, protect the lungs and decrease bacterial loads in mice [START_REF] Kroesen | Non-steroidal anti-inflammatory drugs as host-directed therapy for tuberculosis: A systematic review[END_REF]. Furthermore, combining anti-TB drugs with NSAID leads to a synergistic effect on bacterial load [START_REF] Dutta | Activity of diclofenac used alone and in combination with streptomycin against Mycobacterium tuberculosis in mice[END_REF]. Ibuprofen alone is efficient in reducing bacillary loads and improving survival in mice [START_REF] Vilaplana | Ibuprofen therapy resulted in significantly decreased tissue bacillary loads and increased survival in a new murine experimental model of active tuberculosis[END_REF]. However, a study of patients taking celecoxib and anti-TB drugs together does not highlight a change in bactericidal activity [START_REF] Naftalin | Adjunctive use of celecoxib with antituberculosis drugs: evaluation in a whole-blood bactericidal activity model[END_REF]. Another large population-based study demonstrates that the use of NSAID is associated with an increased risk of active TB, emphasizing the divergence between studies (C. W. [START_REF] Wu | Risk of incident active tuberculosis disease in patients treated with non-steroidal anti-inflammatory drugs: A population-based study[END_REF]. Several trials are currently ongoing to assess the benefit of NSAID for TB treatment and should settle whether or not they should be used.
Statins: Statins are lipid-lowering drugs that inhibit the 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA) reductase [START_REF] Parihar | Statin Therapy Reduces the Mycobacterium tuberculosis Burden in Human Macrophages and in Mice by Enhancing Autophagy and Phagosome Maturation[END_REF]. This enzyme is essential in lipid metabolism and acts on inflammatory pathways [START_REF] Lobato | Statins increase rifampin Mycobactericidal effect[END_REF][START_REF] Skerry | Simvastatin increases the in vivo activity of the first-line tuberculosis regimen[END_REF]. Many studies confirm that statins enhance the in vitro activity of first-line anti-TB drugs like RIF or INH [START_REF] Lobato | Statins increase rifampin Mycobactericidal effect[END_REF][START_REF] Skerry | Simvastatin increases the in vivo activity of the first-line tuberculosis regimen[END_REF]. Statins contribute to the macrophage bactericidal activity against MTB by increasing phagosomal maturation and by promoting host-induced autophagy in murine macrophages [START_REF] Parihar | Statin Therapy Reduces the Mycobacterium tuberculosis Burden in Human Macrophages and in Mice by Enhancing Autophagy and Phagosome Maturation[END_REF]. These effects are thought to be linked to the reduction of total cholesterol in host macrophages, thus limiting the lipid environment favorable to MTB [START_REF] Parihar | Statin Therapy Reduces the Mycobacterium tuberculosis Burden in Human Macrophages and in Mice by Enhancing Autophagy and Phagosome Maturation[END_REF]. A study reports that hospitalized patients treated with anti-TB drugs and statins at the same time display a reduction in mortality after admission [START_REF] Jeeraaumponwat | Statin with standard antituberculosis regimen and mortality in hospitalized pulmonary tuberculosis[END_REF]. A nationally representative cohort study has confirmed that these drugs decrease the risk of active TB, suggesting a real benefit of these lipid-lowering agents [START_REF] Lai | Statin treatment is associated with a decreased risk of active tuberculosis: an analysis of a nationally representative cohort[END_REF]. However, a recent population-based cohort study states that statins do not improve anti-TB treatment outcomes (Y.-T. [START_REF] Chen | Use of lipid-lowering agents is not associated with improved outcomes for tuberculosis patients on standard-course therapy: A population-based cohort study[END_REF]. These inconsistencies may be attributed to differences between cohorts. Thus, further trials are needed to conclude on the effects of statins.
HDT can be a useful tool in the fight against TB, including MDR-TB and XDR-TB. Used in combination with anti-TB drugs, they could shorten TB treatment, reduce side effects and toxicity, and improve patient compliance. While the impact of HDT on TB treatment could be major, further studies and trials are needed to fully include HDT as part of the TB regimen.
Recently, many studies have focused on the ability of pathogens to modify host epigenetics.
Thus, it would be interesting to investigate whether targeting the host epigenome as a novel HDT may give the host an advantage in the fight against MTB.
II.
Targeting epigenetics
Definition
The concept of epigenetics was first introduced in 1942 by Conrad Waddington [START_REF] Deichmann | Epigenetics: The origins and evolution of a fashionable topic[END_REF]. He defined epigenetics as "the complex of developmental processes between the genotype and phenotype". Over the years, this concept has evolved and now refers to research on chromatin modifications, such as DNA or histone modifications and RNA-related changes, occurring without a change in the DNA base sequence [START_REF] Deichmann | Epigenetics: The origins and evolution of a fashionable topic[END_REF]. Recently, it has been reported that pathogens target and modulate the host epigenome (the epigenetic status of the host cell) to their advantage to enhance their intracellular survival [START_REF] Fatima | Epigenetic code during mycobacterial infections: therapeutic implications for tuberculosis[END_REF]. Three main categories of epigenetic changes are modulated by pathogens: DNA methylation, histone modifications and RNA modifications (M. Singh et al., 2018) (Figure 5).
DNA methylation: DNA methylation is a chemical process in which a methyl group (CH3) is added to a cytosine residue [START_REF] Adhikari | DNA methyltransferases and epigenetic regulation in bacteriaa[END_REF]. This methylation mark is most commonly found on CpG sequences in the genome and is regulated by various DNA methyltransferases (DNMTs): DNMT1, DNMT2, DNMT3a and DNMT3b [START_REF] Adhikari | DNA methyltransferases and epigenetic regulation in bacteriaa[END_REF][START_REF] Bird | DNA methylation patterns and epigenetic memory[END_REF]. Among them, the isoform DNMT1 is the most abundant and methylates already hemimethylated CpG dinucleotides, whereas DNMT3a/b performs "de novo" methylation or hemi-methylation [START_REF] Bird | DNA methylation patterns and epigenetic memory[END_REF]. DNA methylation leads to gene silencing by preventing the recruitment of transcription factors or by recruiting methyl-binding proteins that bind transcriptional corepressor complexes [START_REF] Miller | The role of DNA methylation and histone modifications in transcriptional regulation in humans[END_REF]. DNA methylation is stable and usually influences the development, the differentiation and the gene expression of the host cell [START_REF] Miller | The role of DNA methylation and histone modifications in transcriptional regulation in humans[END_REF].
[Copyrighted figure not reproduced; source: [START_REF] Gartstein | Prenatal influences on temperament development: The role of environmental epigenetics[END_REF]]
Histone modifications: Chromatin is formed, at its basic unit, by the association of proteins called histones with eukaryotic DNA. This unit, called the nucleosome, is composed of approximately 147 base pairs of DNA wrapped around an octamer of the four core histone proteins (H2A, H2B, H3 and H4) (Tollefsbol, 2011). The linker histone H1, for its part, connects nucleosomes to one another and to the DNA. The structure of chromatin is dynamic and is divided into euchromatin and heterochromatin, which correspond to transcriptionally active and inactive states, respectively [START_REF] Tjelle | Phagosome dynamics and function[END_REF]. In the euchromatin region, DNA is accessible and contains genes that can be transcribed; DNA in heterochromatin is highly condensed and inaccessible.
Histone modifications change the chromatin status and therefore modulate various processes such as gene expression, DNA repair and DNA replication [START_REF] Miller | The role of DNA methylation and histone modifications in transcriptional regulation in humans[END_REF]. Histones are positively charged and interact with the negatively charged DNA. Histone modifications neutralize these positive charges, exposing the DNA to transcription factors [START_REF] Miller | The role of DNA methylation and histone modifications in transcriptional regulation in humans[END_REF]. Histone modifications are post-translational modifications and include methylation, acetylation, phosphorylation, ubiquitination and sumoylation (Bowman & Poirier, 2014). They are performed by different enzymes such as histone methyltransferases/demethylases, histone acetyltransferases/deacetylases, kinases or phosphatases (Bowman & Poirier, 2014). Histone acetylation leads to transcriptional activation, whereas deacetylation leads to the opposite. Depending on the amino acid that is methylated and the degree of methylation, histone methylation activates or silences gene expression. In conclusion, epigenetic modifications on histones are wide-ranging, diverse and have an important impact on gene expression.
RNA modifications: MicroRNA (miRNA) are small noncoding RNA (21-24 nucleotides) that regulate gene expression at the post-transcriptional level (Treiber et al., 2018). These noncoding RNA have an important role in cell proliferation, differentiation and apoptosis (Treiber et al., 2018). They are highly conserved endogenous gene silencers which target, together with the RNA-induced silencing complex (RISC), protein-coding mRNA [START_REF] Tjelle | Phagosome dynamics and function[END_REF]. This binding leads to either mRNA degradation or the inhibition of its translation. Thus, miRNA are major regulators of many cellular pathways, including those related to immunity. Recent studies have also focused on long non-coding RNA (lncRNA) and their role in the host response to mycobacterial infection. They are transcripts of more than 200 nucleotides which do not code for proteins but act as chromatin remodelers and regulators of innate immunity [START_REF] Hadjicharalambous | Long Non-Coding RNAs and the Innate Immune Response[END_REF].
Epigenetics during tuberculosis infection
MTB, like other bacterial pathogens, modulates the host epigenome to its advantage, promoting its adaptation to the host environment, latency or growth inside the host cells.
Alteration of the host DNA methylation profile, as well as histone modifications and RNA interference induced by MTB, and their impact on mycobacterial virulence will be described in this thesis.
DNA methylation
A few years ago, it was demonstrated that the host DNA methylation profile is modified by MTB during infection of macrophages and DCs. DNA methylation is an important epigenetic modification whose roles include the regulation of chromosome stability, DNA mismatch repair and gene transcription [START_REF] Marinus | Roles of DNA adenine methylation in host-pathogen interactions: Mismatch repair, transcriptional regulation, and more[END_REF]. These methylations are involved in the regulation of transcriptional responses to the infection [START_REF] Pacis | Bacterial infection remodels the DNA methylation landscape of human dendritic cells[END_REF][START_REF] Zheng | Unraveling methylation changes of host macrophages in Mycobacterium tuberculosis infection[END_REF]. MTB possesses several methyltransferases, including Rv2966c, which localizes in the host cell nucleus and modifies host DNA methylation in non-CpG regions [START_REF] Sharma | The interaction of mycobacterial protein Rv2966c with host chromatin is mediated through non-CpG methylation and histone H3/H4 binding[END_REF]. By comparing genome-wide DNA methylation profiles between THP-1 macrophages infected or not with MTB, a study demonstrates an enrichment of DNA hypermethylation hotspots (associated with gene repression) in MTB-infected macrophages for specific gene families, including the HLA complex, cytokines and inflammation-associated genes [START_REF] Sharma | Genome-wide non-CpG methylation of the host genome during M. tuberculosis infection[END_REF]. Moreover, a study performed on TB patients shows a higher methylation level in the TLR2 promoter region and a lower TLR2 expression in monocytes, compared to healthy individuals [START_REF] Kisich | Antimycobacterial agent based on mRNA encoding human β-defensin 2 enables primary macrophages to restrict growth of Mycobacterium tuberculosis[END_REF]. In this study, TLR2 hypermethylation is associated with active pulmonary TB. Interestingly, TLR2 gene expression is restored after a six-month anti-TB treatment, suggesting that this hypermethylation is somehow important for the infection. Similarly, hypermethylation in the CpG island of the VDR gene contributes to the risk of TB for African and European patients [START_REF] Jiang | The methylation state of VDR gene in pulmonary tuberculosis patients[END_REF]. Another study describes that patients with TB have DNA hypermethylation in several genes related to the IL-2, NF-κB and IFN-γ signaling pathways [START_REF] Dinardo | DNA hypermethylation during tuberculosis dampens host immune responsiveness[END_REF]. These methylations are associated with a decreased production of major protective cytokines including TNF-α, IL-12 and IFN-γ (DiNardo et al., 2020). In addition, macrophages isolated from TB patients present DNA hypermethylation in the promoter region of the gene encoding the protective inflammatory cytokine IL-17 [START_REF] Zheng | Unraveling methylation changes of host macrophages in Mycobacterium tuberculosis infection[END_REF]. Conversely, the host modulates its own DNA methylation profile to fight bacterial infection. For instance, demethylation of the promoter region of NLRP3 during infection activates the NLRP3 inflammasome and increases the levels of the pro-inflammatory cytokines IL-1β and IL-18 to control the infection (M. [START_REF] Wei | NLRP3 Activation Was Regulated by DNA Methylation Modification during Mycobacterium tuberculosis Infection[END_REF].
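As a purely illustrative aside on how such genome-wide methylation comparisons can be summarized, the minimal Python sketch below uses hypothetical data, CpG identifiers and a hypothetical threshold (it does not reproduce the pipeline of the studies cited above): it computes, for each CpG site, the difference in mean methylation (beta values) between infected and uninfected samples and flags sites exceeding an arbitrary cut-off as hypermethylation hotspots.

import numpy as np

# Hypothetical beta values (0 = unmethylated, 1 = fully methylated); one row per CpG site, three replicates per condition
cpg_ids = ["cg0001_TLR2_promoter", "cg0002_IL12B_promoter", "cg0003_control_site"]
uninfected = np.array([[0.10, 0.12, 0.08],
                       [0.20, 0.18, 0.22],
                       [0.55, 0.53, 0.57]])
infected = np.array([[0.45, 0.50, 0.48],
                     [0.52, 0.49, 0.55],
                     [0.56, 0.54, 0.58]])

DELTA_CUTOFF = 0.2   # arbitrary threshold for calling a hypermethylation hotspot

delta_beta = infected.mean(axis=1) - uninfected.mean(axis=1)
for cpg, d in zip(cpg_ids, delta_beta):
    call = "hypermethylated upon infection" if d > DELTA_CUTOFF else "unchanged"
    print(f"{cpg}: delta-beta = {d:+.2f} ({call})")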
In conclusion, maintaining the host DNA methylation and demethylation profile is crucial in order to mount proper immune responses and to fight infections. Modifying these methylation/demethylation patterns could modulate TB outcomes in favor of either the host or MTB.
Histone modifications
Histone modifications, through chromatin remodeling, are responsible for the accessibility of DNA to transcription factors, thereby modulating gene expression. These modifications are targeted by pathogens to modulate the host immune machinery for their own survival [START_REF] Fatima | Epigenetic code during mycobacterial infections: therapeutic implications for tuberculosis[END_REF]. For instance, MTB secretes ESAT-6 within the host, which then inhibits the IFN-γ-induced methylation of histone 3 at lysine 4 (H3K4). This hypomethylation results in the suppression of Class II transactivator (CIITA) protein expression and thus a diminution of antigen presentation by macrophages (P. [START_REF] Kumar | ESAT6 differentially inhibits IFN-γ-inducible class II transactivator isoforms in both a TLR2-dependent andindependent manner[END_REF]. Apart from ESAT-6, the enhanced intracellular survival (Eis) acetyltransferase of MTB induces a high level of IL-10 expression via acetylation of histone H3 [START_REF] Duan | Mycobacterium tuberculosis EIS gene inhibits macrophage autophagy through up-regulation of IL-10 by increasing the acetylation of histone H3[END_REF]. This mechanism was presented by the authors as part of the MTB escape from autophagy [START_REF] Duan | Mycobacterium tuberculosis EIS gene inhibits macrophage autophagy through up-regulation of IL-10 by increasing the acetylation of histone H3[END_REF]. The secreted methyltransferase Rv1988 is also implicated in host histone modulation by MTB [START_REF] Yaseen | Mycobacteria modulate host epigenetic machinery by Rv1988 methylation of a non-tail arginine of histone H3[END_REF]. Rv1988 di-methylates the arginine at position 42 of H3 (H3R42me2), which in turn represses the expression of genes involved in ROS production (NOX1, NOX4, NOS2) and the tumor necrosis factor receptor-associated factor 3 (TRAF3) controlling type I IFN production [START_REF] Häcker | Expanding TRAF function: TRAF3 as a tri-faced immune regulator[END_REF][START_REF] Yaseen | Mycobacteria modulate host epigenetic machinery by Rv1988 methylation of a non-tail arginine of histone H3[END_REF]. Deletion of the Rv1988 gene in MTB results in reduced survival in the host, highlighting the major role of this methyltransferase during the infection process [START_REF] Yaseen | Mycobacteria modulate host epigenetic machinery by Rv1988 methylation of a non-tail arginine of histone H3[END_REF].
In addition, MTB also regulates host histone deacetylases (HDAC) to reduce cytokine expression. Indeed, a study shows that MTB infection induces HDAC1 expression, with a concomitant decrease of the H3 acetylation level [START_REF] Chandran | Mycobacterium tuberculosis infection induces HDAC1-mediated suppression of IL-12B gene expression in macrophages[END_REF]. HDAC1 is subsequently recruited to the IL-12B promoter and represses the expression of this cytokine [START_REF] Chandran | Mycobacterium tuberculosis infection induces HDAC1-mediated suppression of IL-12B gene expression in macrophages[END_REF]. IL-12B plays a role in the initiation of the Th1 immune response of the host [START_REF] Cooper | Interleukin 12 (IL-12) is crucial to the development of protective immunity in mice intravenously infected with mycobacterium tuberculosis[END_REF]. Consistently, knocking down the HDAC1 gene in macrophages reduces MTB survival and promotes the colocalization of MTB and LC3 in autophagosomes [START_REF] Madhavan | Transcription Repressor Protein ZBTB25 Associates with HDAC1-Sin3a Complex in Mycobacterium tuberculosis-Infected Macrophages, and Its Inhibition Clears Pathogen by Autophagy[END_REF]. Furthermore, a study demonstrates that MMP-1, responsible for matrix degradation upon MTB infection, is regulated by histone acetylation and deacetylation.
In MTB-infected macrophages, MMP-1 is upregulated, and chemical inhibition of histone acetyltransferases suppresses the MTB-driven induction of MMP-1 (and MMP-3) [START_REF] Moores | Epigenetic regulation of matrix metalloproteinase-1 and -3 expression in Mycobacterium tuberculosis infection[END_REF]. Together, these observations demonstrate that MTB has the ability to induce changes in the host histone acetylation and deacetylation machinery for its own benefit.
Therefore, the previous studies demonstrate that epigenetic mechanisms regulating histone modifications contribute broadly to the MTB arsenal against the host immune response.
miRNA
miRNA are important regulators of gene expression, including cytokine-related genes. They act as key players in the host-MTB interaction during infection. It has been shown that many miRNA are manipulated by MTB to alter cytokine expression. Some miRNA implicated in TB infection are listed in Table 3.
Table 3. Examples of miRNA implicated in TB infection (miRNA; host target; effect):
- miRNA-125a; UV radiation resistance-associated gene; inhibition of autophagy (J. K. [START_REF] Kim | MicroRNA-125a Inhibits Autophagy Activation and Antimicrobial Responses during Mycobacterial Infection[END_REF]
- miRNA-125b; TNF-α; inhibition of TNF mRNA and reduction of the host inflammatory response [START_REF] Rajaram | Mycobacterium tuberculosis lipomannan blocks TNF biosynthesis by regulating macrophage MAPK-activated protein kinase 2 (MK2) and microRNA miR-125b[END_REF]
- miRNA-140; TNF receptor-associated factor 6 (TRAF6); reduction of the levels of the pro-inflammatory cytokines IL-6, TNF-α and IL-1β (X. [START_REF] Li | MiR-140 modulates the inflammatory responses of Mycobacterium tuberculosis-infected macrophages by targeting TRAF6[END_REF]
- miRNA-144; DNA damage-regulated autophagy modulator 2; inhibition of autophagosome formation (J. K. [START_REF] Kim | MIR144* inhibits antimicrobial responses against Mycobacterium tuberculosis in human monocytes and macrophages by targeting the autophagy protein DRAM2[END_REF]
- miRNA-146a
miRNA are important actors during MTB infection and may represent potential therapeutic targets to fight TB. Furthermore, circulating miRNA in serum are being evaluated as prognostic biomarkers for TB infection [START_REF] Chakrabarty | Host and MTB genome encoded miRNA markers for diagnosis of tuberculosis[END_REF]. Altogether, miRNA studies represent a diverse and promising field regarding MTB infection.
Modulation of the host epigenome is a mechanism used by MTB to promote its chances of survival within the host cell. Therefore, inhibiting these mechanisms could be valuable for HDT to fight the mycobacterial infection. For example, several HDAC inhibitors are in clinical trials for the treatment of breast cancer [START_REF] Hontecillas-Prieto | Synergistic Enhancement of Cancer Therapy Using HDAC Inhibitors: Opportunity for Clinical Trials[END_REF]. Since these molecules are already being tested in humans, their benefits could be extended to patients with TB. Recently, a study performed in vitro in human macrophages and in vivo in zebrafish revealed that inhibition of HDACs by trichostatin A enhances the antimicrobial response of M1 and M2 macrophages [START_REF] Moreira | Functional Inhibition of Host Histone Deacetylases (HDACs) Enhances in vitro and in vivo Anti-mycobacterial Activity in Human Macrophages and in Zebrafish[END_REF]. Furthermore, selective inhibition of class IIa HDACs by TMP195 and TMP26 decreases bacterial growth in M2 macrophages, showing the potential of molecules targeting HDAC as HDT against MTB [START_REF] Moreira | Functional Inhibition of Host Histone Deacetylases (HDACs) Enhances in vitro and in vivo Anti-mycobacterial Activity in Human Macrophages and in Zebrafish[END_REF]. Recently, a growing number of studies has focused on a specific class of HDAC in the context of MTB infection: the sirtuin family.
Example of sirtuins
The sirtuin family is composed of seven proteins, called sirtuin 1 to sirtuin 7 (SIRT1-7), with different modes of action and targets [START_REF] Houtkooper | Sirtuins as regulators of metabolism and healthspan[END_REF]. They are divided into four main classes: SIRT1-3 belong to class I, SIRT4 to class II, SIRT5 to class III and SIRT6-7 to class IV [START_REF] Michan | Sirtuins in mammals: Insights into their biological function[END_REF]. The sirtuins have different cellular localizations: SIRT1 is localized in the nucleus and cytosol, SIRT2 in the cytosol, SIRT3-5 in mitochondria, SIRT6 in the nucleus and SIRT7 in the nucleolus [START_REF] Michan | Sirtuins in mammals: Insights into their biological function[END_REF]. SIRT1, 2 and 3 are nicotinamide adenine dinucleotide (NAD+)-dependent class III histone deacetylases. They target histone and non-histone proteins [START_REF] Houtkooper | Sirtuins as regulators of metabolism and healthspan[END_REF]. The sirtuins most frequently described in the context of MTB infection are SIRT1, 2 and 3; they will be described further in this thesis.
SIRT1: This sirtuin is involved in different cellular processes such as host metabolism and stress responses [START_REF] Dang | The controversial world of sirtuins[END_REF]. SIRT1 targets transcription factors including p53, NF-κB and FOXO1, but also histones, thereby modulating gene expression [START_REF] Houtkooper | Sirtuins as regulators of metabolism and healthspan[END_REF]. For example, deacetylation of p53 impairs its activity as an inducer of apoptosis and autophagy [START_REF] Kitada | Chapter 3 -Role of Sirt1 as a Regulator of Autophagy[END_REF]. The importance of SIRT1 in gene regulation has been investigated upon bacterial infection. First of all, disruption of the SIRT1 pathway leads to an impairment of autophagy in cells infected by Salmonella Typhimurium [START_REF] Ganesan | Salmonella Typhimurium disrupts Sirt1/AMPK checkpoint control of mTOR to impair autophagy[END_REF]. Consistently, it has been observed that MTB infection down-regulates SIRT1 expression and thus impairs SIRT1-mediated autophagy [START_REF] Cheng | Host sirtuin 1 regulates mycobacterial immunopathogenesis and represents a therapeutic target against tuberculosis[END_REF]. Macrophages treated with resveratrol, an activator of SIRT1, better control the intracellular bacterial load through an increase in the autophagy pathway. Moreover, Sirt1+/- mice exhibit higher inflammation, with an enhancement of pro-inflammatory cytokines, and are more susceptible to MTB infection (H. [START_REF] Yang | Role of Sirt1 in innate immune mechanisms against Mycobacterium tuberculosis via the inhibition of TAK1 activation[END_REF]. SIRT1 activation allows the regulation of this overt inflammation through the inhibition of the transforming growth factor-β-activated kinase 1 (TAK1) pathway (which leads to subsequent NF-κB activation) (H. [START_REF] Yang | Role of Sirt1 in innate immune mechanisms against Mycobacterium tuberculosis via the inhibition of TAK1 activation[END_REF]. SIRT1 activation also normalizes the MTB-induced inflammation via RelA/p65 deacetylation [START_REF] Cheng | Host sirtuin 1 regulates mycobacterial immunopathogenesis and represents a therapeutic target against tuberculosis[END_REF]. Moreover, a study has recently demonstrated that SIRT1 prevents the activation of the B-cell lymphoma 2-associated X protein (Bax) pro-apoptotic factor, which is normally increased upon MTB infection, and thus limits excessive apoptosis (H. [START_REF] Yang | Sirt1 activation negatively regulates overt apoptosis in Mtb-infected macrophage through Bax[END_REF]. Another team suggests that SIRT1 directly promotes glycogen synthase kinase 3 beta (GSK3β) phosphorylation through its deacetylation, which also prevents MTB-induced apoptosis in macrophages (H. [START_REF] Yang | Sirtuin inhibits M. tuberculosis -induced apoptosis in macrophage through glycogen synthase kinase-3β[END_REF]. Together, these studies demonstrate that SIRT1 has an important role in MTB pathogenesis and could be a potential target for HDT.
SIRT2: SIRT2 targets proteins (histone or non-histone) according to their subcellular localization and influences different biological processes. For example, this sirtuin deacetylates tubulin and hence regulates cytoskeleton dynamics, modulates metabolic pathways and regulates skeletal muscle differentiation [START_REF] Houtkooper | Sirtuins as regulators of metabolism and healthspan[END_REF]. SIRT2 has been implicated in various processes such as the inhibition of autophagy or the regulation of NF-κB signaling via p65 deacetylation (P. [START_REF] Gomes | Emerging Role of Sirtuin 2 in the Regulation of Mammalian Metabolism[END_REF]. SIRT2 is highly expressed in myeloid cells, including macrophages, and its role in bacterial infection has been investigated. Listeria monocytogenes induces deacetylation of H3K18 by regulating SIRT2 function and localization [START_REF] Eskandarian | A role for SIRT2-dependent histone H3K18 deacetylation in bacterial infection[END_REF][START_REF] Pereira | Infection Reveals a Modification of SIRT2 Critical for Chromatin Association[END_REF]. Inactivation of SIRT2 results in a decrease of bacterial infection in vivo and in vitro, showing that SIRT2 activity is necessary for bacterial infection [START_REF] Eskandarian | A role for SIRT2-dependent histone H3K18 deacetylation in bacterial infection[END_REF]. Moreover, enhanced bacterial phagocytosis upon staphylococcal infection is observed in mice deficient for SIRT2 [START_REF] Ciarlo | Sirtuin 2 Deficiency Increases Bacterial Phagocytosis by Macrophages and Protects from Chronic Staphylococcal Infection[END_REF].
Similarly, SIRT2-/- mice show a better clearance of S. Typhimurium after infection, compared to wild-type mice [START_REF] Gogoi | Salmonella escapes adaptive immune response via SIRT2 mediated modulation of innate immune response in dendritic cells[END_REF]. A study suggests that MTB induces SIRT2 translocation into the nucleus and deacetylation of H3K18 [START_REF] Bhaskar | Host sirtuin 2 as an immunotherapeutic target against tuberculosis[END_REF]. Moreover, inhibition of SIRT2 leads to a decrease of intracellular MTB growth and significantly reduces MTB pathogenicity in mice [START_REF] Bhaskar | Host sirtuin 2 as an immunotherapeutic target against tuberculosis[END_REF]. However, an opposite result has been published by another team, in which SIRT2 deletion in the myeloid lineage increases the MTB load in the lungs and liver of mice [START_REF] Cardoso | Myeloid sirtuin 2 expression does not impact long-term Mycobacterium tuberculosis control[END_REF].
SIRT2 also modulates the expression of inflammatory genes via the deacetylation and inhibition of NF-κB (described in a mouse model of sepsis) (X. Wang et al., 2016). Hsp90 is also deacetylated by SIRT2, resulting in the dissociation of Hsp90 from the glucocorticoid receptor (GR). This dissociation allows the translocation of GR into the nucleus and the repression of pro-inflammatory cytokine expression through its binding to NF-κB (K. Sun et al., 2020).
SIRT2 is thus an important sirtuin involved in the course of many bacterial infections. Its modulation by HDT could therefore help to better control bacterial infection outcomes, including MTB infection.
SIRT3: SIRT3 is the major mitochondrial sirtuin deacetylase. It regulates various processes such as amino acid metabolism, fatty acid oxidation, energy balance, ETC activity and cellular redox homeostasis [START_REF] Houtkooper | Sirtuins as regulators of metabolism and healthspan[END_REF]. In MTB infection, SIRT3 deficiency is associated with an enhanced inflammatory response and mitochondrial dysfunction (T. S. [START_REF] Kim | SIRT3 promotes antimycobacterial defenses by coordinating mitochondrial and autophagic functions[END_REF].
sirt3-/- mice present an increased susceptibility to MTB infection and an exaggerated lung inflammatory response, with damaged mitochondria in macrophages (T. S. [START_REF] Kim | SIRT3 promotes antimycobacterial defenses by coordinating mitochondrial and autophagic functions[END_REF].
MTB down-regulates SIRT3 in a TLR2-dependent manner [START_REF] Smulan | Sirtuin 3 downregulation in mycobacterium tuberculosis-infected macrophages reprograms mitochondrial metabolism and promotes cell death[END_REF]. This decrease results in the destabilization of the TCA cycle and the ETC, which then increases mtROS and pro-inflammatory cytokines and may lead to cell death [START_REF] Smulan | Sirtuin 3 downregulation in mycobacterium tuberculosis-infected macrophages reprograms mitochondrial metabolism and promotes cell death[END_REF]. Activation of SIRT3 also enhances bacterial autophagy during MTB infection (T. S. [START_REF] Kim | SIRT3 promotes antimycobacterial defenses by coordinating mitochondrial and autophagic functions[END_REF]. Finally, a recent study shows that the single-nucleotide polymorphism rs3782118 in sirt3 is associated with an increased risk of TB, underlining the implication of SIRT3 in TB pathogenesis (T. [START_REF] Wu | The dominant model analysis of Sirt3 genetic variants is associated with susceptibility to tuberculosis in a Chinese Han population[END_REF]. Altogether, SIRT3 appears to be a modulator of mycobacterial infection outcomes. Activation of SIRT3 leads to better bacterial control and can therefore be considered as a new target for HDT. Of note, it seems that SIRT3 deficiency does not affect immune cell development or defenses against other pathogens like Escherichia coli, Klebsiella pneumoniae or L. monocytogenes (Ciarlo, Heinonen, Lugrin, et al., 2017).
Sirtuins are key factors in many cellular processes, including during MTB infection. Small molecules targeting sirtuins could therefore prevent intracellular mycobacterial growth. In this context, sirtuin modulators could be valuable new drugs for the treatment of MTB infection.
III. New drugs or regimens for tuberculosis treatment
In fifty years, only three new drugs have been approved for treating TB [START_REF] Cox | FDA Approval of Bedaquiline -The Benefit-Risk Balance for Drug-Resistant Tuberculosis[END_REF][START_REF] Ryan | Delamanid: First global approval[END_REF][START_REF] Keam | Pretomanid: First Approval[END_REF]. The emergence of antibiotic resistance to first- and second-line anti-TB drugs urges the development or discovery of new treatments. In this context, two main approaches can be used to address this problem: the repurposing of already approved drugs (which could shorten the approval of TB treatment), or the discovery of new drugs such as HDT or new antibiotics. In addition, given the difficulty of treating TB and MDR-TB, studying the combination of old and new drugs may be another promising approach.
Drug screening
Bacterial genome sequencing has enhanced the understanding of bacterial evolution and physiology. Indeed, it has allowed the identification of new targets essential for pathogen survival inside the host and has paved the way for the discovery of novel antibacterial drugs.
Following these findings, large high-throughput screening (HTS) assays were designed to assess the "druggable potential" of these targets (Payne et al., 2006).
A screening for drug discovery is the search of interaction between a chemical (natural or synthetic) and a defined biological target [START_REF] Entzeroth | Overview of high-throughput screening[END_REF]. For example, classical HTS assay evaluates the drug binding to a receptor or monitors enzyme activities by measuring fluorescence intensity modifications or by using reporter gene expression assays [START_REF] Entzeroth | Overview of high-throughput screening[END_REF]. Introduction of automated devices over the years has enabled the automatization of screening and a high throughput testing of compounds, compared to manual methods. The tested compounds are often grouped in compound libraries, some of which contain several hundred thousand compounds [START_REF] Entzeroth | Overview of high-throughput screening[END_REF]. This method also allows the multiplication of screening conditions with different parameters (media, nutrients, pH, infection time) and the use of different MTB strains with a significant time saving [START_REF] Ollinger | A highthroughput whole cell screen to identify inhibitors of Mycobacterium tuberculosis[END_REF].
In parallel, automatization of cellular imaging techniques has enabled the development of high-content screening (HCS). This approach combines automated images coupled with data analysis in a high-throughput format (Wagner et al., 2005b). HCS, for instance, allows the visualization of intracellular bacterial growth, the exclusion of cytotoxic compounds and the quantification of the effect of compounds via computer analysis of the data [START_REF] Queval | A microscopic phenotypic assay for the quantification of intracellular mycobacteria adapted for high-throughput/high-content screening[END_REF][START_REF] White | Mycobacterium tuberculosis highthroughput screening[END_REF]. The purpose of HTS/HCS screening is to find active compounds/molecules, also called "hits", for which the activity/potential has to be confirmed with ulterior assays to be considered as a "lead". This hit-to-lead phase is based on the potential of the molecule in terms of toxicology, pharmacology and structure-activity relationships (SAR) [START_REF] Temml | Structure-based molecular modeling in SAR analysis and lead optimization[END_REF]. The selected lead(s) will then be optimized, possibly with the synthesis of more potent or less toxic analogues. Target identification or validation could be done in parallel with this optimization. The efficacy of the molecule can be evaluated in an in vivo model for further development in the TB drug pipeline.
HCS can also be associated with RNA interference (RNAi) genetic screening. In this technique, short interfering RNA (siRNA) are employed to silence specific genes during the experiment [START_REF] Krausz | High-content siRNA screening for target identification and validation[END_REF]. It can be used to assess the importance of certain genes during the infection and increases the understanding of factors involved in host-pathogen interaction [START_REF] Queval | Mycobacterium tuberculosis Controls Phagosomal Acidification by Targeting CISH-Mediated Signaling[END_REF]Wagner et al., 2005b).
High-throughput approaches on MTB have been conducted in different conditions such as nutrient starvation, stress, infection or in live cells and have provided a wealth of information (intracellular growth, phagosomal modification, oxidative stress production, lipid accumulation) [START_REF] Christophe | High-content imaging of Mycobacterium tuberculosis-infected macrophages: An in vitro model for tuberculosis drug discovery[END_REF][START_REF] Deboosere | High-Content Analysis Monitoring Intracellular Trafficking and Replication of Mycobacterium tuberculosis Inside Host Cells[END_REF]. They have revealed many natural or synthetic compounds that enhance host defenses against MTB [START_REF] Ollinger | A highthroughput whole cell screen to identify inhibitors of Mycobacterium tuberculosis[END_REF]. Given these results, high-throughput techniques can serve as a starting point for the discovery of new molecules against MTB. However, the major limitations of new drug development are the duration of the optimization phase and the resources devoted to the project.
Drug repurposing
The discovery of new drugs takes time, since candidates have to go through different clinical phases before they are approved by the competent authority. In this context, alternative approaches such as drug repurposing are valuable strategies for implementing a new TB regimen.
Drug repurposing concerns drugs that have already been approved for one pathology and turn out to be beneficial for another disease [START_REF] Maitra | Repurposing-a ray of hope in tackling extensively drug resistance in tuberculosis[END_REF]. Those drugs have the advantage of already having known safety and toxicity profiles owing to their previous approval.
In this context, drug repurposing allows a faster evaluation of drugs for treating other diseases (an average of 14 years of clinical development for a new drug versus 5 years for a repurposed drug), as well as a reduced development time and a lower cost (Chong & Sullivan, 2007b).
Before a drug is repurposed to treat MTB, various evaluations of the drug have to be carried out on MTB in vitro and in vivo (An et al., 2020a). Antibiotics but also antifungal or antiviral drugs are now under evaluation for TB treatment repurposing (An et al., 2020a). This is the case for mefloquine, normally used as an antimalarial drug, which has been shown to be an alternative for treating MDR-TB clinical strains in vitro and in vivo in mice, making it a potential candidate for drug repurposing (An et al., 2020a).
Facing the current low number of new drugs approved, tools for rapid identification of a potential drug to be repurposed have emerged. One of these tools is the in silico screening of already approved drugs. Computational analysis provides information on putative interaction(s) with new or old cellular targets, essential for MTB survival [START_REF] Passi | RepTB: a gene ontology based drug repurposing approach for tuberculosis[END_REF]. In silico screening is a structure-based phenotypic screening that assesses the potential binding between a known drug with a defined target (Battah et al., 2019). This approach is highly valuable to find potential new candidates. Moreover, it is also an interesting method to predict the target of the molecules and possibly their mechanisms of action [START_REF] Passi | RepTB: a gene ontology based drug repurposing approach for tuberculosis[END_REF].
However, a major problem with drug repurposing is that even if these drugs target MTB components, the bacteria can still develop resistance.
To tackle this issue, drugs that directly target the host is an alternative solution. Indeed, drug repurposing can be closely linked to HDT. It relies on the fact that a drug developed originally for a disease may exhibit secondary beneficial biological effects on the host [START_REF] Maitra | Repurposing-a ray of hope in tackling extensively drug resistance in tuberculosis[END_REF]. Utilization of those drugs on the host could result in a decrease of bacterial load or a better control of the infection. Several previously cited drugs are under evaluation for TB drug repurposing including metformin, NSAID drugs or statins (see section C.I.). Repurposed drugs
for HDT could be a potential way to include old drugs in a new TB regimen.
Despite the benefits of drug repurposing, repurposed drugs still require evaluation and clinical trials before being approved for the treatment of TB. In parallel, several investigations are being performed to expand TB treatment options by combining already approved drugs.
Drug combinations
New techniques, such as high-throughput approaches, combined with known chemical libraries, have allowed the testing of drug combinations for the approval of new anti-TB treatment regimens [START_REF] Ramón-García | Synergistic drug combinations for tuberculosis therapy identified by a novel high-throughput screen[END_REF]. Using already described drugs in combination may allow a better control of bacterial infection. Firstly, drugs can target different pathways, as in the regimen used to treat TB with RIF, INH, EMB and PZA [START_REF] Fischbach | Combination therapies for combating antimicrobial resistance[END_REF]. Secondly, drugs may have different targets within the same pathway, for example different steps of cell wall synthesis [START_REF] Worthington | Combination approaches to combat multidrugresistant bacteria[END_REF]. Thirdly, drugs may have the same target but act through different mechanisms (for example, the association of the streptogramin components of virginiamycin, which alone are bacteriostatic but together have bactericidal activity; not used in TB treatment) [START_REF] Mast | Streptogramins -Two are better than one![END_REF]. The results of compound combinations are usually either additive, synergistic or antagonistic. An additive effect is the combination of the effects of each drug individually; synergy occurs when the combined effect is greater than the additive effect of the molecules, and antagonism is the opposite of synergy [START_REF] Bulusu | Modelling of compound combination effects and applications to efficacy and toxicity: State-of-the-art, challenges and perspectives[END_REF].
Combinations of drugs are especially needed against MDR-TB because of pre-existing antibiotic resistance(s) and the risk of developing/acquiring new ones [START_REF] Brooks | Therapeutic strategies to combat antibiotic resistance[END_REF]. Interestingly, some drug combinations restore the efficacy of certain antibiotics in drug-resistant strains. A study has shown that the addition of spectinomycin to the first-line anti-TB drugs INH and RIF restores their activities in vitro against resistant MTB [START_REF] Omollo | Developing synergistic drug combinations to restore antibiotic sensitivity in drug-resistant mycobacterium tuberculosis[END_REF].
Recently, drug combinations with adjuvants, such as small molecules, have been investigated
for the design of new drug regimens against sensible and resistant MTB. For example, boosters were designed and tested for their ability to potentiate the activity of ethionamide (ETH) [START_REF] Willand | Synthetic EthR inhibitors boost antituberculous activity of ethionamide[END_REF]Flipo et al., 2012;[START_REF] Blondiaux | Reversion of antibiotic resistance in Mycobacterium tuberculosis by spiroisoxazoline SMARt-420[END_REF]. Furthermore, drug combinations allow an increased efficacy of drugs without the need of a high dose (and the significant toxicity usually associated with a high drug concentration) [START_REF] Brooks | Therapeutic strategies to combat antibiotic resistance[END_REF].
Finally, drug combinations may include antibiotics that are ineffective against the pathogen on their own, such as β-lactams for MTB. Those antibiotics may display antimicrobial activity once combined with another drug. For example, β-lactams are rapidly hydrolyzed by MTB, but meropenem, belonging to the carbapenem family, combined with clavulanate, a β-lactamase inhibitor, shows potent activity against laboratory MTB strains [START_REF] Hugonnet | Meropenem-clavulanate is effective against extensively drug-resistant Mycobacterium tuberculosis[END_REF].
It is worth mentioning that there is a risk that drug combinations promote the evolution of drug resistance by causing higher sensitivity to other antibiotics or by blocking their activities [START_REF] Brooks | Therapeutic strategies to combat antibiotic resistance[END_REF]. Depending on the drug combinations as well as the drug concentrations and duration of the treatment, this strategy may also be toxic for the host cell.
Therefore, toxicity assays should be performed to find the best drug combinations and concentrations [START_REF] Yilancioglu | Design of high-order antibiotic combinations against M. tuberculosis by ranking and exclusion[END_REF] .
Several trials are ongoing to find new treatment regimens combining old and new drugs (An et al., 2020a;[START_REF] Brooks | Therapeutic strategies to combat antibiotic resistance[END_REF]. The treatment of TB is complex and becomes even more difficult with the emergence of MDR/XDR-TB strains [START_REF] Seung | Multidrug-resistant tuberculosis and extensively drug-resistant tuberculosis[END_REF]. Other strategies can be used to improve the efficacy of current TB treatment, notably the use of nanoparticles.
Nanoparticles
Nanoparticles (NPs) are small particles made, for example, of lipids, liposomes, polymers or polysaccharides [START_REF] Costa-Gouveia | How can nanoparticles contribute to antituberculosis therapy?[END_REF]. There is increasing interest in the use of NPs in anti-TB therapy due to their ability either to exert a direct antimycobacterial activity or to act as a vehicle enhancing the delivery of already known anti-TB drugs [START_REF] Costa-Gouveia | How can nanoparticles contribute to antituberculosis therapy?[END_REF]. For instance, booster molecules, which potentiate ETH, have limited solubility in aqueous media. Moreover, ETH tends to rapidly crystallize, limiting the effect of this combination. To solve this issue, poly-β-cyclodextrin (pβCD) NPs were used to co-encapsulate these two compounds. Administration of this suspension shows no toxicity in mice and efficiently decreases the pulmonary mycobacterial load, compared to untreated mice [START_REF] Costa-Gouveia | Combination therapy for tuberculosis treatment: Pulmonary administration of ethionamide and booster coloaded nanoparticles[END_REF]. Another study has assessed the intrinsic antibacterial properties of the unloaded pβCD NPs. These NPs impair MTB invasion of macrophages by disrupting cell surface lipid rafts and contribute to the depletion of alveolar macrophages, diminishing the pool of potential reservoirs for MTB [START_REF] Pastor | A novel codrug made of the combination of ethionamide and its potentiating booster: Synthesis, self-assembly into nanoparticles and antimycobacterial evaluation[END_REF]. More recently, ETH and its booster BDM43266 were synthesized as a codrug (association of both molecules via a glutaric linker) designed to self-assemble into NPs [START_REF] Pastor | A novel codrug made of the combination of ethionamide and its potentiating booster: Synthesis, self-assembly into nanoparticles and antimycobacterial evaluation[END_REF]. The resulting NPs can then be administered intranasally. Finally, NP encapsulation of a booster or an anti-TB drug is thought to be very interesting for the treatment of drug-resistant TB. Indeed, the improved delivery of the drug or the ability to combine it with a booster may lead to a decrease of the dose needed for treatment, diminishing the risk of toxicity and side effects [START_REF] Pelgrift | Nanotechnology as a therapeutic tool to combat microbial resistance[END_REF].
Taken together, these findings highlight the potential of using NPs as a strategy to improve current TB treatment and to fight MDR-TB more efficiently. In addition to NPs, the discovery of new drugs, including HDT, drug repurposing and drug combinations represent valuable strategies to overcome MDR-TB and XDR-TB and to finally eradicate TB.
TB is one of the top ten causes of human death worldwide and remains a major public health threat. An effective vaccine against TB is still not available. Nowadays, people developing TB can be cured with a 6-month anti-TB treatment regimen, involving 4 antibiotics. This antibiotic treatment has been introduced 40 years ago, and multidrug resistant strains of MTB are continually emerging. Since then, and despite important investments by scientists and funding agencies, only a few new anti-TB drugs have been approved and released recently. Although several antibiotics are currently under clinical trials, it is most likely that MTB will develop resistance against these new molecules. In this context, innovative strategies are urgently needed to eradicate TB.
In addition to the development of "classical" anti-TB drugs targeting mycobacterial key factors necessary for MTB survival, host-directed therapies (HDT) are considered as a promising strategy to control MTB infection. HDT are drugs that target the host instead of the bacteria by either (i) modulating host anti-mycobacterial pathways that are blocked or altered by MTB to promote its survival, or (ii) potentiating antimicrobial host immune defense mechanisms.
Because HDT do not target mycobacteria directly, the development of resistance to HDTs by MTB is expected to be more difficult than for antibiotics [START_REF] Kaufmann | Host-directed therapies for bacterial and viral infections[END_REF]. Furthermore, HDTs are predicted to be effective on drug-sensitive and drug-resistant TB strains. Taken in combination with antibiotics, HDT may also reduce the duration of current anti-TB treatments [START_REF] Kilinç | Host-directed therapy to combat mycobacterial infections*[END_REF]. Indeed, host factors may influence the pharmacokinetics and pharmacodynamics of antimicrobials, impacting drug efficacy and/or toxicity. Therefore, combining antibiotics with host-targeted molecules to improve their efficacy, holds great promise for improving the treatment against TB. Taken together, these evidences suggest that HDT could be one of the future strategies used to eradicate TB.
MTB modulates host signaling and effector pathways to its own advantage [START_REF] Koul | Interplay between mycobacteria and host signalling pathways[END_REF].
In particular, MTB modifies the host epigenome to evade the immune response (M. Singh et al., 2018). Thus, by reprogramming the host immune system epigenome using HDT, the host cell may be able to better control intracellular MTB growth. This hypothesis is the starting point of this thesis, where I screened new compounds targeting the host epigenome that could render human macrophages more resistant to MTB. Macrophages are important innate immune cells that detect and eliminate invading microbes, including MTB, and instruct the immune system for appropriate adaptive responses. Macrophages are the main cellular target of MTB, which has developed numerous strategies to survive and replicate inside them.
The main objectives of my thesis were the following:
-Identification of a molecule that directly enhances the capacity of macrophages to control MTB infection.
-In vitro characterization of the host target(s) of the molecule and its mechanism of action.
-Optimization of the hit by SAR studies.
-Investigation of the impact of the molecule or its analogues on the efficacy of TB drugs.
-Evaluation of the efficacy of the molecules in vivo, in a mouse model of MTB infection.
Previous studies have identified novel HDTs in TB through intracellular high-throughput screening methods [START_REF] Jayaswal | Identification of host-dependent survival factors for intracellular Mycobacterium tuberculosis through an siRNA screen[END_REF][START_REF] Korbee | Combined chemical genetics and data-driven bioinformatics approach identifies receptor tyrosine kinase inhibitors as host-directed antimicrobials[END_REF][START_REF] Stanley | Identification of Host-Targeted Small Molecules That Restrict Intracellular Mycobacterium tuberculosis Growth[END_REF], and led to the identification of host factors impacting MTB intracellular replication. Here we used primary cells, namely human monocyte-derived macrophages. HCS coupled to image-based analysis represent a rapid and reliable way to test molecules in an intracellular model of infection. Based on this assumption, the beginning of my thesis was devoted to acquiring knowledge and skills to use this technique. I was hosted in Priscille Brodin's team at the Pasteur Institute in Lille to be trained in the use of HCS. She has developed a phenotypic assay relying on the quantification of the growth of GFP-expressing MTB within cells using automated confocal microscopy. Back in my unit, I adapted this technique using human macrophages infected with MTB. After some adjustments, I was able to screen a library of non-commercialized molecules provided by a team of chemists from the University of Sapienza in Rome (Dante Rotili and Antonello Mai). The number of molecules in this library was relatively modest compared to others (<300 compounds), but they were developed to inhibit or activate enzymes regulating the host epigenome. Another advantage, these molecules have never been tested on MTB.
From this screening, I was able to identify a promising hit on which I based the following results of my thesis.
GlutaMAX; Gibco) and 10% FBS. The cells were incubated at 37°C, 5% CO2, and the culture medium was changed every two days.
Bacterial strains
Listeria monocytogenes strain wildtype EGD, number BUG600 was provided by Melanie
Hamon and grown in brain-heart infusion medium (BHI; Difco Laboratories) at 37°C. The strain E. coli MG1655 was cultivated in Luria Bertani (LB) medium at 37°C. Laboratory mycobacterial strains used in this study include MTB strain H37Rv (ATCC 27294) transformed with an Ms6based integrative plasmid pNIP48 harboring GFP (MTB H37Rv-GFP) provided by Priscille
Brodin [START_REF] Christophe | High-content imaging of Mycobacterium tuberculosis-infected macrophages: An in vitro model for tuberculosis drug discovery[END_REF], M. bovis BCG Pasteur 1173P2, M. abscessus (ATCC 19977) and BDQ-resistant MTB strain H37Rv (Giraud-Gatineau et al., 2020). Clinical resistant strains were provided by Nathalie Grall, Anne-Laure Roux and Jean-Louis Herrmann from the "Assistance Publique Hôpitaux de Paris". Mycobacteria were grown at 37°C, 5% CO2 in Middlebrook 7H9 medium (Becton-Dickinson) supplemented with 10% albumin dextrose catalase (ADC, Difco Laboratories), and 0.05% Tween 80. Hygromycin B was added to the culture medium of strains containing a plasmid encoding GFP.
Infections
L. monocytogenes was grown to exponential phase at an optical density 600 nm (OD600 nm) of 1. Bacteria were washed twice in PBS and added to cells at a multiplicity of infection (MOI) of 50:1. After 1 hour, cells were washed with medium, and 10 µg/mL of gentamycin was added to kill extracellular bacteria. Mycobacterial cultures were washed and aggregates were removed by filtration through a 10 μm filter. Number of bacteria was estimated by measuring OD600nm. Cells were infected at a MOI of 0.5:1, 2:1 or 5:1 for MTB, BCG and M.abscessus respectively for two hours in complete medium. The cells were washed and incubated at 37°C
for one hour in complete medium containing amikacin (50 μg/mL) to remove extracellular bacteria. Infected cells were then incubated in fresh complete medium and were ready to use.
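For clarity, a minimal sketch of the inoculum calculation underlying these infections is given below (in Python). The OD600-to-bacteria conversion factor is a generic assumption used for illustration only; in practice it is calibrated for each strain by plating serial dilutions and counting CFUs.

```python
# Minimal sketch: estimating the inoculum volume for a target MOI from an OD600 reading.
# The conversion factor (bacteria/mL at OD600 = 1) is an assumption for illustration only.

def inoculum_volume_ml(od600: float, n_cells: float, moi: float,
                       bacteria_per_ml_at_od1: float = 3e8) -> float:
    """Volume of bacterial suspension (mL) needed to infect n_cells at the given MOI."""
    bacteria_per_ml = od600 * bacteria_per_ml_at_od1  # assumes linearity of OD600
    bacteria_needed = n_cells * moi
    return bacteria_needed / bacteria_per_ml

# Example: 4e5 macrophages infected with MTB at an MOI of 0.5:1, culture at OD600 = 0.8
print(f"{inoculum_volume_ml(0.8, 4e5, 0.5):.4f} mL")
```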
System (Perkin Elmer Technology), with a 20x air objective NA 0.4. Images were analyzed using the software Columbus Conductor™ Database (Perkin Elmer Technologies).
RNA isolation, library preparation and sequencing
Total RNA from macrophages was extracted using QIAzol lysis reagent (Qiagen) and purified by RNeasy columns (Qiagen). RNA integrity of the samples was assessed with an Agilent 2100 bioanalyzer (Agilent Technologies). Only RNA with an integrity number >9 were selected for further experiments. cDNA libraries were prepared with the Illumina TruSeq Stranded mRNA kit and were sequenced on an Illumina NovaSeq 6000 system at the CHU Sainte-Justine Integrated Centre for Pediatric Clinical Genomics (Montreal, Canada).
Gene ontology (GO) enrichment analyses were performed using the Cytoscape app ClueGO (version 3.8.2) [START_REF] Bindea | ClueGO: A Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks[END_REF]. The following parameters were used: only pathways with p-value ≤ 0.01, Minimum GO level = 3, Maximum GO level = 8, Min GO family > 1, minimum number of genes associated with a GO term = 3, and minimum percentage of genes associated with a GO term = 4. Enrichment p-values were calculated using a hypergeometric test (p-value < 0.05, Bonferroni corrected).
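For illustration, a minimal sketch of the hypergeometric test underlying this enrichment analysis is given below (in Python); the gene counts are illustrative placeholders, not values from this dataset.

```python
# Minimal sketch of the hypergeometric enrichment test behind GO term analysis.
from scipy.stats import hypergeom

M = 20000   # background: total annotated genes
n = 150     # genes annotated to the GO term of interest
N = 650     # genes in the differentially expressed list
k = 12      # overlap: DE genes annotated to the term

# P(X >= k): probability of observing at least k term genes in the DE list by chance
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value = {p_value:.3e}")  # would then be Bonferroni-corrected
```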
Quantitative reverse transcription PCR (RT-qPCR)
mRNA was reverse transcribed into cDNA using SuperScript III Reverse Transcriptase (Thermo Fisher). Each standard dilution and each sample were tested in duplicate/triplicate and negative controls were included in each experiment. Amplification of cDNA was performed using Power SYBR Green PCR Master Mix (Thermo Fisher) in a StepOnePlus Real-Time PCR System Thermal Cycling block (Applied Biosystems). To perform RT-qPCR expression analyses, 2-fold dilutions of genomic DNA were used as PCR templates for the construction of a standard curve. Based on the Ct values obtained, a linear regression line was plotted and the resulting equation was used to calculate the amplification efficiency. Results were normalized using a reference gene. The expression level of each gene was reported as a ratio of expression (fold induction) according to the 2^-ΔCt method [START_REF] Pfaffl | A new mathematical model for relative quantification in real-time RT-PCR[END_REF].
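As an illustration, the sketch below (in Python) reproduces the two calculations described above: the amplification efficiency derived from the standard curve, and the 2^-ΔCt relative expression. All Ct values are illustrative placeholders.

```python
# Minimal sketch of the RT-qPCR calculations: amplification efficiency from a genomic-DNA
# standard curve, and relative expression with the 2^-ΔCt method. Ct values are placeholders.
import numpy as np

# Standard curve: 2-fold dilutions of genomic DNA and their measured Ct values
log2_dilution = np.array([0, -1, -2, -3, -4])          # log2(relative template amount)
ct_standard   = np.array([18.1, 19.2, 20.2, 21.3, 22.4])
slope, _ = np.polyfit(log2_dilution, ct_standard, 1)    # linear regression of Ct vs log2(amount)
efficiency = 2 ** (-1 / slope) - 1                      # 1.0 corresponds to 100% efficiency
print(f"amplification efficiency ≈ {efficiency * 100:.0f}%")

# 2^-ΔCt relative expression: target gene normalized to a reference gene
ct_target, ct_reference = 24.6, 17.9
fold_induction = 2 ** -(ct_target - ct_reference)
print(f"relative expression (2^-ΔCt) = {fold_induction:.4f}")
```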
RNA silencing of SIRT2
siRNA silencing was performed using a previously described protocol [START_REF] Troegeler | An efficient siRNA-mediated gene silencing in primary human monocytes, dendritic cells and macrophages[END_REF].
Briefly, macrophages were transfected using the lipid-based HiPerfect system (Qiagen), siRNA targeting SIRT2 (ON-TARGETplus SMARTpool at 50 nM) and a scramble siRNA (ON-TARGETplus Non-targeting Pool) (Dharmacon). After 6 hours, complete medium supplemented with M-CSF (final concentration of 10ng/mL) was added to the cells, before the cells were allowed to recuperate for an additional 2 days. The inactivation of SIRT2 was confirmed by RT-qPCR 48 hours after the transfection.
Western blot analysis
Cells were lysed with RIPA buffer (Thermo Fisher) supplemented with protease inhibitor cocktails (Complete Protease Inhibitor Cocktail; Sigma) and phosphatase inhibitors cocktails (Thermo Scientific). Protein concentration was determined using the BCA protein assay kit (Thermo Fisher) according to the manufacturer instructions. 20 µg of total proteins were loaded and separated by SDS-PAGE on a NuPage 4-12% Bis-Tris polyacrylamide gel (Thermo Fisher). Proteins were transferred into a nitrocellulose membrane using the iBlot dry blotting system (Invitrogen). The membranes were blocked with TBS-0.1% Tween20, 5% non-fat dry milk for 30 min at RT and incubated overnight with primary antibodies against α-β-Tubulin, anti-acetyl-histone H3K18 (Cell Signaling) and anti-histone H3 antibody (Abcam). Membranes were washed with TBS and incubated with the appropriate secondary antibody coupled to horseradish peroxidase-conjugated antibody (GE Healthcare) at RT for 1 hour. Membranes were washed and exposed to SuperSignal West Femto Maximum Sensitivity Substrate (Thermo Fisher). Detection and quantification of band intensities were performed using Azure Imager C400 (Azure Biosystems) and ImageJ software.
Indirect immunofluorescence
Macrophages (4 × 10^5 cells/mL) were grown on 12 mm circular coverslips in 24-well tissue culture plates for 24 hours in cell culture medium. Cells were infected and treated for the desired amount of time. Cells were fixed with 4% paraformaldehyde (PFA) for 1 hour at RT, and were then incubated for 30 min in 1% BSA (Sigma-Aldrich) and 0.075% saponin (Sigma-Aldrich) in PBS, to block nonspecific binding and to permeabilize the cells, respectively. Cells were incubated with anti-LC3 (MBL) for 2 hours at RT. Cells were washed and incubated with an Alexa Fluor 555 secondary antibody (Thermo Fisher) for 2 hours. Nuclei were stained with DAPI (1 µg/mL) for 10 min. After labeling, coverslips were set in Fluoromount G medium (SouthernBiotech) on microscope slides. LC3B puncta were analyzed using a Leica TCS SP8 confocal microscope and quantified using ImageJ. Dot plots represent the mean values of at least 100 cells. Error bars depict the standard deviation.
Degree of synergy of drug combinations
The Bliss independence model was used to calculate the excess over Bliss score for growth inhibition (Q. Liu et al., 2018). The Bliss model has been suggested as the model of choice for drugs with presumably different modes of action and targets [START_REF] Baeder | Antimicrobial combinations: Bliss independence and loewe additivity derived from mechanistic multi-hit models[END_REF].
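For illustration, a minimal sketch of the excess over Bliss calculation is given below (in Python); the inhibition values are illustrative placeholders, not measurements from this work.

```python
# Minimal sketch of the Bliss independence calculation used to score drug combinations.
# Inhibition values are fractions (0-1) of growth inhibition relative to the untreated control.

def excess_over_bliss(inhib_a: float, inhib_b: float, inhib_combo: float) -> float:
    """Positive values indicate synergy, ~0 additivity, negative values antagonism."""
    expected = inhib_a + inhib_b - inhib_a * inhib_b  # Bliss expected inhibition
    return inhib_combo - expected

# Example: drug A alone 40% inhibition, drug B alone 30%, combination 75%
print(f"excess over Bliss = {excess_over_bliss(0.40, 0.30, 0.75):.2f}")  # 0.17 -> synergy
```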
In vivo experiments
Compounds were solubilized daily in 0.5% DMSO + 4.5% Cremophor EL® + 90% NaCl. Eight-week-old female C57BL/6J mice (Charles River Laboratories) were treated by intraperitoneal injections for 2 weeks, 6 days per week. Aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were quantified using Reflover®Plus. Mice were infected via an aerosol generated from a suspension containing 5 × 10^6 CFUs/mL of H37Rv to obtain an expected inhaled dose of 50-100 bacilli/lungs. After 7 days, mice were treated with compounds by intraperitoneal injection and/or with antibiotic by gavage for 2 weeks, 6 days per week.
Colony forming units (CFUs) in lungs and spleen of infected animals were determined after 21 days post-infection.
Identification of a compound by HCS that inhibits intracellular MTB growth.
In order to identify new compound(s) that inhibit(s) the intracellular MTB growth, we performed HCS, a technique that has already been proven successful for the discovery of compounds with a potent intracellular anti-mycobacterial efficacy [START_REF] Christophe | High-content imaging of Mycobacterium tuberculosis-infected macrophages: An in vitro model for tuberculosis drug discovery[END_REF][START_REF] Ollinger | A highthroughput whole cell screen to identify inhibitors of Mycobacterium tuberculosis[END_REF][START_REF] Shapira | High-Content Screening of Eukaryotic Kinase Inhibitors Identify CHK2 Inhibitor Activity Against Mycobacterium tuberculosis[END_REF]. Because MTB manipulates epigenetic host-signaling pathways to subvert host immunity (M. Singh et al., 2018), we hypothesized that reprogramming the host immune system by a compound targeting the host epigenome may lead to a better control of the bacterial infection (J. [START_REF] Cole | The therapeutic potential of epigenetic manipulation during infectious diseases[END_REF]. To tackle this hypothesis, we screened a library of 157 compounds with putative epigenetic targets to identify a hit.
Human monocyte-derived macrophages were first infected with MTB expressing GFP before being seeded in a 384-well plate; each well contained a different compound at a final concentration of 10 µM. Using automated confocal microscopy, cell toxicity was determined by comparing the number of nuclei stained with Hoechst 33342 in treated wells to that in the DMSO control wells. Compounds associated with a cell viability above 75% were considered non-toxic (Figure 6A). Following this viability analysis, 106 compounds were selected for further testing.
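For illustration, the sketch below (in Python) reproduces the viability filter described above; the nuclei counts and compound names are illustrative placeholders.

```python
# Minimal sketch of the screening viability filter: viability in a compound-treated well is
# the nuclei count relative to the mean of the DMSO control wells; compounds below 75% are
# discarded. All counts and names are illustrative placeholders.
import numpy as np

dmso_nuclei = np.array([1510, 1475, 1532, 1498])        # Hoechst-stained nuclei, control wells
compound_nuclei = {"cmpd_001": 1402, "cmpd_002": 610, "cmpd_003": 1450}

dmso_mean = dmso_nuclei.mean()
non_toxic = {name: round(count / dmso_mean * 100, 1)    # viability as % of DMSO control
             for name, count in compound_nuclei.items()
             if count / dmso_mean >= 0.75}               # 75% viability threshold
print(non_toxic)  # compounds retained for the intracellular growth analysis
```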
Inhibition of intracellular MTB growth in presence of the remaining compounds was assessed by determining the spot area within the cells. The spot area is representative of the GFP area per cell, which is correlated to the amount of intracellular MTB expressing GFP. By comparing the spot area intensity of cells in presence of different compounds to the DMSO condition, we selected the most promising hit, MC3465, for its ability to reduce the bacterial infection by 60% compared to the control (Figures 6A-B). To validate these initial screening results, MC3465 efficacy was then confirmed by enumerating the number of bacteria inside macrophages (measured by colony forming units (CFUs) assay). A decrease of 25% and 55% of CFUs is obtained when MTB-infected cells are cultivated in presence of MC3465 for 1 or 4 days, respectively (Figure 6C). Moreover, MC3465 seems to have a bacteriostatic effect as the number of CFUs remains constant over time once MTB is treated with this compound, compared to a bactericidal INH treatment, where the number of CFU rapidly drops after 24 hours (Figure S1). To extrapolate the potential applicability of this compound to a larger scale, we then determined if MC3465 also exhibits an intracellular bacterial growth inhibitory activity against other bacteria. Our results show that MC3465 does not impede the intracellular growth of L. monocytogenes, S. Typhimurium and E. coli (Figures 6D andS2).
However, the intracellular growth of BCG and M. abscessus, a slow and a fast-growing mycobacterium respectively, is affected in presence of MC3465 (Figures 6D andS2).
To verify the host-directed effect of this molecule, we then monitored MTB growth in liquid medium, with the same concentration used in the cell-based infection model (10 µM) but also at a higher concentration (50µM). In any case, MC3465 does not affect MTB growth (Figure 6E). This observation suggests that MC3465 does not directly act on bacteria, but on the host cell. Finally, to confirm MC3465 targets only intracellular MTB, we performed a transwell experiment, where MTB was cultivated in presence of macrophages, without any direct contact with the host cells (Figure 6F). Our data shows that bacterial growth is not affected by MC3465 when bacteria are cultivated in the transwell condition, compared to the control condition, where bacteria are able to infect the host cell (Figure 6F). This observation suggests that MC3465 is not metabolized by macrophages in an active form that could target extracellular bacteria.
SIRT2 is not the target of MC3465.
MC3465 belongs to an uncommercialized library of compounds designed to target the host epigenome. This molecule has been described to specifically inhibit sirtuin 2 (SIRT2) (Gu et al., 2009a). In a recent study, it has been shown that inhibition of SIRT2 by AGK2 leads to a decrease in intracellular MTB survival [START_REF] Bhaskar | Host sirtuin 2 as an immunotherapeutic target against tuberculosis[END_REF]. To determine if MC3465 acts in a similar way as SIRT2 inhibitors, we incubated MTB-infected macrophages with MC3465, but also with two well-known SIRT2 inhibitors: AGK2 and SirReal2. Our results show that only MC3465 strongly reduces intracellular MTB growth inside human macrophages, while no effect on bacterial growth is observed in the tested conditions for either AGK2 or SirReal2 (Figure 7A). This experiment suggests that MC3465 probably has a different mechanism of action from AGK2 and SirReal2. To further investigate whether SIRT2 is the putative target of MC3465, we then silenced the SIRT2 gene by using a siRNA and managed to reduce its expression by 70% in human macrophages (Figure 7B). As a control, the relative expression of SIRT2 when cells are incubated with a scramble RNA (scRNA) is similar to the control condition (Figure 7B). We then infected the SIRT2-silenced macrophages with MTB, with or without MC3465, and counted the number of intracellular bacteria after 4 days of infection. Even when SIRT2 gene expression is silenced by 70%, MC3465 has a similar effect on intracellular MTB, compared to when cells express SIRT2
(control and scRNA conditions) (Figure 7C). Moreover, MC3465 is still active on intracellular MTB in mouse embryonic fibroblasts (MEFs) SIRT2 -/-cell line, to a similar level as the wild-type cells (Figure 7D).
Finally, we tested the impact of MC3465 on macrophages infected with L. monocytogenes.
Indeed, it has been shown that this bacterium uses the host SIRT2 to its advantage during infection, by deacetylating lysine 18 on histone H3 [START_REF] Eskandarian | A role for SIRT2-dependent histone H3K18 deacetylation in bacterial infection[END_REF]. However, upon L. monocytogenes infection, MC3465 does not inhibit the deacetylation of H3K18, in contrast to AGK2 or nicotinamide, another sirtuin inhibitor (Figures 7E-F).
Taken altogether, these results demonstrate that SIRT2 is not the target of MC3465.
Transcriptome analysis of uninfected or MTB-infected macrophages reveals host pathways modulated by MC3465.
As SIRT2 is not the target of MC3465, we decided to use an unbiased approach to investigate which host pathways are differentially regulated when macrophages are cultivated in presence or in absence of MC3465. Concretely, macrophages from four individual donors were infected by MTB or left uninfected before being treated with either DMSO (control condition) or MC3465 at a final concentration of 10 µM. We then characterized the genomewide gene expression profiles by RNAseq after 4 hours or 24 hours of treatment. The analysis revealed that the expression of 1,383 genes is affected by MC3465 (p-value < 0.05). We focused the analysis on the most differentially expressed genes (with log FC ≥ 0.5 and ≤ -0.5
for upregulated and downregulated genes, respectively). 53 genes were upregulated and 128 genes downregulated at 4 hours post-infection (Figure 8A). Moreover, 3,702 genes are differentially expressed at 24 hours post-infection, including 224 upregulated and 652 downregulated genes (Figure 8A). Interestingly, MC3465 also modulates gene expression in uninfected cells, with a total of 1,346 genes differentially expressed (103 upregulated and 97 downregulated) at 4 hours post-treatment, and 3,364 genes differentially expressed (147 upregulated and 240 downregulated) at 24 hours post-treatment (Figure S3). We classified those differentially expressed genes by performing gene-set enrichment analysis using ClueGO cluster analysis [START_REF] Bindea | ClueGO: A Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks[END_REF].
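For illustration, a minimal sketch of the filtering applied to the differential expression results is given below (in Python); the table layout (columns 'gene', 'logFC', 'pvalue') and the gene values are assumptions used for illustration only.

```python
# Minimal sketch of the RNAseq differential expression filtering: keep genes with
# p-value < 0.05, then split by log fold-change thresholds of +/-0.5.
import pandas as pd

de = pd.DataFrame({
    "gene":   ["MT1X", "IFIT1", "CCL2", "ACTB"],
    "logFC":  [-1.20, -0.85, -0.55, 0.05],
    "pvalue": [0.001, 0.004, 0.030, 0.700],
})

significant = de[de["pvalue"] < 0.05]
upregulated = significant[significant["logFC"] >= 0.5]
downregulated = significant[significant["logFC"] <= -0.5]
print(len(upregulated), "up,", len(downregulated), "down")
```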
The gene set downregulated by MC3465 in MTB-infected cells is mainly composed of genes associated with cellular zinc ion homeostasis and cell chemotaxis at 4 hours (Figure 8B). At 24 hours, most of the downregulated genes for MTB-infected cells treated with MC3465 belongs to the innate immune response and cytokine response (Figure 8C). At 4 hours, the expression of genes belonging to the metallothionein (MT) family is downregulated in presence of MC3465 (for both uninfected and MTB-infected cells), compared to DMSO, as shown on the heatmap (Figure 8D). At 24 hours, genes related to the type I IFN pathways are also downregulated when the molecule is added (for both uninfected and infected cells), compared to the control condition (Figure 8D).
MC3465 effect on intracellular MTB growth is not related to autophagy.
To further investigate the activity of MC3465 on the inhibition of intracellular MTB growth, we explored whether the compound has an effect on host defense mechanisms, and more specifically on autophagy. Indeed, autophagy is a well-established key factor of host defense against MTB [START_REF] Gutierrez | Autophagy is a defense mechanism inhibiting BCG and Mycobacterium tuberculosis survival in infected macrophages[END_REF]. Furthermore, several drugs, such as PZA, INH and more recently BDQ, are known to activate autophagy upon MTB infection [START_REF] Giraud-Gatineau | The antibiotic bedaquiline activates host macrophage innate immune resistance to bacterial infection[END_REF][START_REF] Kim | Host cell autophagy activated by antibiotics is required for their effective antimycobacterial drug action[END_REF]. To test whether MC3465 somehow triggers autophagy in MTB-infected macrophages, we performed confocal microscopy to evaluate the presence of LC3B puncta, a cellular marker of the formation of autophagosomes and autolysosomes.
The presence of LC3B puncta was detected using an Alexa Fluor 555 secondary antibody bound to the anti-LC3 antibody. Our results show a slight decrease in LC3B puncta per cell at 24 hours and 48 hours post-MC3465 treatment (only significant at 24 hours) in MTB-infected macrophages, suggesting that the MC3465 effect is not related to autophagy (Figures 9A-B). Structure-activity relationship (SAR) studies are a key aspect of drug design and optimization [START_REF] Temml | Structure-based molecular modeling in SAR analysis and lead optimization[END_REF]. In our study, we screened 29 analogues of MC3465 with the aim of finding a compound with a higher activity than MC3465. Using automated confocal microscopy, we showed that MC3466 inhibits 70% of intracellular MTB growth, compared to the control condition (Figure 10A). The effect of MC3466 was confirmed by a viability assay, with a decrease of one log of CFUs, compared to the control, when MTB-infected macrophages are treated with MC3466 for 4 days (Figure 10B). Similar to MC3465, the analogue has no effect on MTB growth in liquid culture medium at 10 µM or at a higher concentration (50 µM) (Figure 10C).
We next assessed the IC50 of MC3466. IC50 represents the concentration required to inhibit 50% of the bacterial colonization [START_REF] Christophe | High content screening identifies decaprenyl-phosphoribose 2ʹ epimerase as a target for intracellular antimycobacterial inhibitors[END_REF]. We used two different techniques to estimate the IC50: (i) an image-based analysis with the dose-response curve (DRC) (Figure 10D) and (ii) a CFU-based assay (Figure 10E). Both techniques gave us similar IC50 for MC3465 and MC3466, with an estimated IC50 for MC3465 between 4.5-5.9 µM and 2.5-3.2 µM for MC3466 (Figure 10F).
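For illustration, a minimal sketch of an IC50 estimation from a dose-response curve is given below (in Python), assuming a four-parameter logistic model; the concentrations and inhibition values are illustrative placeholders, not the data used in this work.

```python
# Minimal sketch of an IC50 estimation from a dose-response curve using a
# four-parameter logistic fit. All values are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1 + (ic50 / conc) ** hill)

conc = np.array([0.3, 1, 3, 10, 30])          # compound concentration (µM)
inhibition = np.array([5, 15, 45, 80, 95])    # % inhibition of intracellular growth

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0, 100, 3, 1], maxfev=10000)
print(f"estimated IC50 ≈ {params[2]:.1f} µM")
```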
Moreover, the SAR experiments allowed us to define the 5-(3-bromopropyl)-3-(4-R-phenyl)-1,2,4-oxadiazole moiety (R being bromine for MC3465 and a methoxy group for MC3466) as the active site of the molecule (marked in red) (Figure 10F).
As HDT drugs act on the host rather than on bacterial targets, our compounds are expected to inhibit the growth of different MTB strains in a similar manner. To test this assumption, we determined the effect of MC3465 and MC3466 on human macrophages infected with hypervirulent clinical MTB strains belonging to the Beijing family, namely GC1237, CDC1551 and Myc5750 [START_REF] Millán-Lou | Rapid test for identification of a highly transmissible Mycobacterium tuberculosis beijing strain of sub-Saharan origin[END_REF][START_REF] Polena | Mycobacterium tuberculosis exploits the formation of new blood vessels for its dissemination[END_REF][START_REF] Valway | An Outbreak Involving Extensive Transmission of a Virulent Strain of Mycobacterium tuberculosis[END_REF]. Our data show that both compounds are effective in inhibiting the intracellular growth of each MTB strains (Figure 10G).
Both MC3465 and MC3466 protect MTB-infected cells from cell death.
MTB is known to induce macrophage death in order to proliferate [START_REF] Moraco | Cell death and autophagy in tuberculosis[END_REF].
To evaluate the role of MC3465 and MC3466 in the cell death of MTB-infected macrophages, we treated MTB-infected cells with both molecules and stained the cells with a LIVE/DEAD viability marker (Figure 11). In addition, fewer GFP-positive MTB-infected cells are PB450-positive when treated with the compounds.
Taken together, these results show that both MC3465 and MC3466 have a protective effect on the cells.
MC3466 potentiates the activity of antibiotics on susceptible and mono-resistant strains.
HDT are not stand-alone drugs and are mostly considered as adjunctive drugs to standard antibiotic treatments for TB [START_REF] Kaufmann | Host-directed therapies for bacterial and viral infections[END_REF][START_REF] Machelart | Host-directed therapies offer novel opportunities for the fight against tuberculosis[END_REF]. We therefore investigated whether there is a synergistic or additive effect of MC3466 and antibiotics on a drug susceptible MTB. To do so, we combined MC3466 at different concentrations with RIF, BDQ, MOX or INH and we analyzed the bacterial growth by image-based analysis (Figures 12A andS4). The synergy or antagonist effect of the drug combination was determined using the excess over Bliss score and Bliss independence model (Q. Liu et al., 2018). These effects were represented by a heatmap ranging from antagonism (blue) to synergy (red). For the RIF condition, we observe a strong synergy at low concentrations (below 0.031 µg/mL) with MC3466 at concentrations superior to 2.5 µM (Figure 12A). A synergy is also observed with MC3466 (≥ 2.5µM) in presence of MOX and INH but at lower concentrations (≤ 0.016 µg/mL for MOX and ≤ 0.002 µg/mL for INH) (Figure S4). For BDQ, lower concentrations of MC3466
show a higher synergy with BDQ (Figure 12A). These results may be explained by the fact that, at a high concentration, each drug is very effective on its own, so the combination with MC3466 does not increase the antibiotic effect compared to when the antibiotic is used alone.
After having shown that there is a synergy between MC3466 and anti-TB drugs on drug-susceptible MTB, we then investigated the impact of these drug combinations on mono-resistant clinical strains. Macrophages were infected with a RIF-resistant MTB clinical strain, before evaluating the effect of a MC3466-RIF treatment on the bacterial infection. Our data show that the highest synergy occurs when MC3466 is used at a concentration of 2.5 µM (Figure 12B). The effect of MC3466 alone was then evaluated on this clinical RIF-resistant strain by confocal microscopy and image analysis. We observed that the compound has a direct dose-dependent efficacy on this resistant strain (Figure 12B). We confirmed the image-based synergy result by performing a CFU assay on MTB-infected macrophages. The number of CFUs remains constant when MTB-infected macrophages are cultivated in the presence of RIF concentrations that are suboptimal for a RIF-resistant strain (from 0.1 to 20 µg/mL). However, the number of CFUs decreases when cells are cultivated in the presence of both MC3466 (10 µM) and RIF, with a higher impact on intracellular RIF-resistant MTB survival when the RIF concentration is increased (Figure 12C). In parallel, we have also tested the effect of MC3466 on a BDQ-resistant MTB strain [START_REF] Giraud-Gatineau | The antibiotic bedaquiline activates host macrophage innate immune resistance to bacterial infection[END_REF]. A synergistic effect on this strain is also observed with MC3466 (at a concentration above 5 µM) and BDQ (at a concentration above 0.31 µg/mL) (Figure 12D). However, an antagonistic effect of the combination is observed at lower concentrations (Figure 12D). We tested the effect of MC3466 at a concentration of 10 µM in combination with different concentrations of BDQ by a CFU assay (Figure 12E).
MC3466 was also shown to impede the intracellular growth of the BDQ-resistant strain, in a dose-dependent manner (Figure 12E).
All these results show that MC3466 can potentiate the effect of anti-TB drugs when it is used at a concentration ≥ 5 µM. MC3466 inhibits MTB growth in mice and potentiates the effect of RIF in vivo.
After having shown that both MC3465 and MC3466 inhibit the intracellular MTB growth, without any toxicity for the cell, we then performed an in vivo experiment on C57BL/6J mice.
We first assessed if the compounds are toxic or not for the mice. To do so, we intraperitoneally injected MC3465 and MC3466 at different concentrations, six days a week for two weeks in uninfected mice. No loss of weight was observed for the mice treated with the molecules, as well as for the control group that was treated with the vector alone (Figure 13A). After the two weeks treatment, total blood of each mouse was collected to quantify the level of alanine aminotransferase (ALT) and the aspartate aminotransferase (AST), which could reveal a hepatic dysfunction. No difference between the groups treated with the compounds and the control group is observed for the level of ALT (Figure 13B) and AST (Figure 13C), suggesting there is no toxicity associated to both MC3465 and MC3466 treatments.
From these results, we then decided to test the compounds on MTB-infected mice in an acute model of infection. One week after the mycobacterial infection, mice were treated for two weeks, six days a week. Lungs and spleens were harvested, and the number of bacteria was estimated by CFUs. No significant effect on the bacterial load in the lungs was observed for the MC3465 treatment (Figure 13D). However, the number of bacteria in the lungs after the MC3466 treatment drops 5-fold, compared to the control condition (Figure 13D). No significant difference between the three conditions was observed in the spleen of the mice (Figure 13E).
As we have previously shown that MC3466 potentiates the effect of RIF on intracellular MTB in vitro, we then investigated whether we could observe a similar effect in vivo. To do so, mice were treated for two weeks with RIF, with or without MC3465 and MC3466. MC3465 does not potentiate the effect of RIF, contrarily to MC3466 in the lung (Figure 13F). However, no effect is observed on the number of bacteria in the spleen (Figure 13G).
Taken together, our results demonstrate that both MC3465 and MC3466 are non-toxic to mice over a 15-day period. Furthermore, MC3466 exerts an effect on intracellular MTB growth in vivo and potentiates the effect of RIF on the bacterial load within the lungs. The aim of this thesis was to discover new compounds that enhance TB treatment by targeting the host epigenome. To do so, we have performed a high-content screening to evaluate the impact of a non-commercialized library of molecules on the intracellular growth of MTB.
HTS has been extensively used to identify potential targets for antimycobacterial inhibitors [START_REF] Christophe | High content screening identifies decaprenyl-phosphoribose 2ʹ epimerase as a target for intracellular antimycobacterial inhibitors[END_REF] or drug candidates that inhibit MTB growth [START_REF] Ollinger | A highthroughput whole cell screen to identify inhibitors of Mycobacterium tuberculosis[END_REF][START_REF] Shapira | High-Content Screening of Eukaryotic Kinase Inhibitors Identify CHK2 Inhibitor Activity Against Mycobacterium tuberculosis[END_REF]. Phenotypic-based approaches were first used to screen a large number of compounds for their ability to kill MTB in culture medium [START_REF] Entzeroth | Overview of high-throughput screening[END_REF]. However, this approach has rapidly showed some limitations as drugs inhibiting MTB growth in culture medium may be ineffective on intracellular MTB. For instance, those drugs may be inactivated by some host cell enzymes or they may not reach their bacterial target in those conditions. In this context, intracellular MTB drug screening models were developed to address these limitations, notably with the widespread use of image-based HCS. Intracellular screens are used to select antimicrobial drugs that are effective on intracellular bacteria, with no toxicity for the cells. Furthermore, MTB being known to interfere with many host cell processes, the use of HCS on intracellular MTB has revealed number of novel targets for the development of anti-TB drugs related to host-pathogen interaction [START_REF] Sorrentino | Development of an intracellular screen for new compounds able to inhibit Mycobacterium tuberculosis growth in human macrophages[END_REF].
In this thesis, we performed an HCS with human-monocyte derived macrophages. This model is closer to the human host, more physiological than immortalized tumor cells and allows to evaluate the efficacy of molecules on the host with inter-individual variability. Because MTB subverts host functions through epigenetic modifications (M. Singh et al., 2018), we performed HCS with a library of molecules that presumably target the host epigenome to promote the discovery of a host-directed compound. Our research hypothesis was that by reprogramming the host epigenetic system, the immune cells may better control a mycobacterial infection. This hypothesis is supported by several studies showing that epigenetic modifications of histones modulate the anti-mycobacterial activity of the cells [START_REF] Cheng | Host sirtuin 1 regulates mycobacterial immunopathogenesis and represents a therapeutic target against tuberculosis[END_REF][START_REF] Moreira | Functional Inhibition of Host Histone Deacetylases (HDACs) Enhances in vitro and in vivo Anti-mycobacterial Activity in Human Macrophages and in Zebrafish[END_REF]. Through our screening, we discovered a new molecule, MC3465, which prevents MTB growth inside macrophages, without any effect on MTB growth in culture medium. This compound is active against the laboratory strain H37Rv as well as against clinical isolates belonging to the Beijing lineage and mono-resistant strains.
To determine the effect of MC3465 on MTB intracellular growth, we have performed a kinetic experiment on MTB-infected macrophages. Our results have shown that MC3465 seems to be bacteriostatic rather than bactericidal. Interestingly, the Live/Dead flow cytometry experiment revealed that this molecule decreases the death of infected and uninfected bystander cells, subsequently preventing the spread of the infection. The efficacy of this molecule on MTB-infected macrophages prompted us to further investigate its mechanism of action.
MC3465 was described to be a potent SIRT2 inhibitor (Moniot et al., 2017b). However, we have shown that MC3465 mechanism of action is independent of the host SIRT2. SIRT2
inhibition by AGK2, a commercialized inhibitor of SIRT2, has been described to reduce the intracellular survival of MTB [START_REF] Bhaskar | Host sirtuin 2 as an immunotherapeutic target against tuberculosis[END_REF]. However, in our experiments with human macrophages, AGK2 did not show any inhibitory effect on intracellular MTB growth. This difference between the study of Bhaskar et al. and ours could be explained by the use of different cell types. Unlike us, they also pre-treated their cells with AGK2 for 2 hours before infection.
Furthermore, upon L. monocytogenes infection, we did not see any inhibition of H3K18 deacetylation following the addition of MC3465, nor a decrease of the bacterial load, compared to an AGK2 treatment [START_REF] Eskandarian | A role for SIRT2-dependent histone H3K18 deacetylation in bacterial infection[END_REF][START_REF] Pereira | Infection Reveals a Modification of SIRT2 Critical for Chromatin Association[END_REF]. A recent study used subcellular imaging to reveal the localization of BDQ, first in infected macrophages [START_REF] Greenwood | Subcellular antibiotic visualization reveals a dynamic drug reservoir in infected macrophages[END_REF] and then in vivo [START_REF] Fearns | Correlative light electron ion microscopy reveals in vivo localisation of bedaquiline in Mycobacterium tuberculosis-infected lungs[END_REF]. The authors also showed that lipid droplets represent a reservoir of BDQ.
Subsequently, MTB interacts with these droplets, resulting in an increase of the antibacterial efficacy of the antibiotic. MC3465 also possesses a bromine atom and a similar approach could be used to determine where the molecule accumulates within the cells. In a synergic context, it may also be interesting to investigate a co-localization of MC3465 with anti-TB drugs, like it was recently assessed with PZA and BDQ [START_REF] Santucci | Intracellular localisation of Mycobacterium tuberculosis affects efficacy of the antibiotic pyrazinamide[END_REF].
Based on our data, the host SIRT2 inhibition is not the mechanism of action of MC3465. The non-validation of the target made us question whether the molecule was indeed an HDT. To prove that MC3465 only targets intracellular bacteria, we have performed a transwell experiment, showing that extracellular MTB are not affected by the compound nor by a potential metabolite secreted by the cells following the treatment. However, at this stage, we cannot rule out that MC3465 is metabolized within the cells. A quantification of the compound within the cells by liquid chromatography with tandem mass spectrometry might allow us to determine if macrophages metabolize the compound [START_REF] Xiao | Metabolite identification and quantitation in LC-MS/MS-based metabolomics[END_REF]. Meanwhile, the transwell experiment allows us to conclude that MTB has to be located inside the cell for MC3465 to exert its effect. From this, we then investigated the efficacy of MC3465 on other bacteria, to see if the molecule also prevents the growth of other intracellular pathogens with different lifestyles. Our compound has shown no activity on S. Typhimurium, L. monocytogenes and E. coli. However, MC3465 limits the intracellular growth of BCG and M. abscessus, a rapidly growing mycobacterium which is also an opportunistic pathogen [START_REF] Johansen | Non-tuberculous mycobacteria and the rise of Mycobacterium abscessus[END_REF].
These results suggest that MC3465 efficacy is related to mycobacteria.
Mycobacteria have a gene coding for a Sir2 homolog, named Rv1151c. The latter acts as an NAD+-dependent protein deacetylase (Gu et al., 2009b). Rv1151c does not seem to be important for MTB infection and is a non-essential gene for growth in vitro (DeJesus et al.). However, it has been shown that nicotinamide, a SIRT inhibitor, also inhibits the bacterial Sir2 (Gu et al., 2009b). From these observations, we hypothesized that MC3465 may target Rv1151c. We have generated a strain lacking Rv1151c and are planning to perform growth and virulence assays on this mutant, compared to the WT strain, in the presence of MC3465. These experiments will help us determine whether the activity of the compound depends on the bacterial Sir2. More generally, it might be interesting to perform a transcriptomic analysis of intracellular MTB, with or without MC3465. This would help us determine whether MTB expresses certain genes differently in response to the presence of the molecule (Benjak et al.). For example, because MC3465 has a bacteriostatic effect on the bacteria, we could hypothesize that the molecule somehow triggers a dormancy state in MTB. Assessing overall gene expression in MTB would give us valuable knowledge on the state and response of intracellular MTB to MC3465.
Furthermore, macrophages express other sirtuins that may be targets of MC3465. Interestingly, SIRT1 and SIRT3 have been shown to be regulated during MTB infection. More specifically, activation of host SIRT1 reduces the intracellular growth of MTB, which is similar to what we observed with MC3465 (Cheng et al.). SIRT1 reduces MTB intracellular growth by inducing autophagy and phagosome-lysosome fusion. Moreover, several drugs also modulate host immune defenses and activate autophagy, such as PZA, INH and, more recently, BDQ (Giraud-Gatineau et al., 2020; J. J. Kim et al.). In this context, we assessed whether the addition of MC3465 leads to an increase in autophagy activity. Our experiments did not show that this pathway is involved in the inhibition of MTB growth, which argues against a role of host SIRT1 in our case. One way to assess whether MC3465 acts on SIRT3 is to measure the ROS level inside the cell. Indeed, ROS are one of the main defense mechanisms of the cell against pathogens, and SIRT3 is an important regulator of cellular redox homeostasis during MTB infection (Herb et al.; Smulan et al.). Thus, quantification of the total ROS level with CellROX™, or of mtROS with MitoSOX™ dye, by flow cytometry (Giraud-Gatineau et al., 2020) would also be relevant to determine whether ROS production is linked to the mechanism of action of MC3465. Moreover, as MC3465 is supposed to target a histone deacetylase, it would also be interesting to analyze histone modifications by chromatin immunoprecipitation followed by sequencing (ChIP-seq), to highlight possible histone-related modifications (O'Geen et al.).
At this stage, we cannot conclude whether the molecule has a direct effect on the host or on a bacterial component used by mycobacteria to grow intracellularly. Identification of its target(s) would provide valuable insights for the characterization of this molecule. Through SAR studies, in which we tested several analogues of MC3465, we identified the active group of MC3465. With this knowledge, we can modify the molecule or its analogues in order to identify its target. Activity-based protein profiling (ABPP) and compound-centric chemical proteomics (CCCP) are chemical proteomic approaches that use small-molecule chemical probes to evaluate interaction mechanisms between compounds and targets (S. Wang et al.). ABPP focuses on analyzing enzymatic functions, while CCCP identifies interacting components via drug affinity chromatography and mass spectrometry techniques (S. Wang et al.). In principle, a small group is added to the molecule of interest, which then allows the isolation of the molecule together with its target, an approach also called "target fishing". Click chemistry regroups the reactions by which such groups are added to the molecules (Hein et al.). Target fishing is a valuable technique that subsequently allows the design of drug inhibitors (Sharpless et al.). Reporter groups are commonly biotin, fluorescent groups, alkynes or azides. For example, biotin is used for biotin-streptavidin pull-down assays (capture of the tagged protein with its target) followed by identification of the target by mass spectrometry (S. Wang et al.). Identification of the target with fluorescent reporters is performed via imaging-based detection (S. Wang et al.).
However, these techniques have some limitations. First of all, the efficacy of the modified molecules has to be assessed before use, to verify that they are still active. In addition, non-specific binding may lead to the "fishing" of many off-targets, preventing the identification of the true target(s). False-positive targets can be discriminated from the real target(s) by comparing the target pool obtained with the molecule of interest versus an inactive, structurally similar analogue (Wright et al.). Another issue associated with these techniques is that the probe-labeled molecule may be metabolized by the cells, leading to the loss of binding and of efficacy. Despite these limitations, these approaches have proven to be valuable tools for identifying possible targets of small molecules.
Moreover, potential targets identified in this way then have to be confirmed by other techniques: using recombinant proteins to verify the interaction of the compound with the target ex vivo; performing co-localization or co-immunoprecipitation, analyzed by microscopy or western blot, respectively (if an antibody against the target is available); or silencing the gene by siRNA and determining whether the compound is still active. Identification of a drug target remains challenging. For example, there is still no strong consensus on the target(s) of PZA, which was discovered more than 65 years ago (Gopal et al.).
Besides finding the target of MC3465, it is also important to investigate its mechanism of action. In order to unravel the mode of action of this molecule, we used an indirect approach to map the host pathways differentially expressed after the addition of MC3465. We performed RNAseq on uninfected and MTB-infected cells at different time points, with or without the molecule. We found that several pathways are downregulated in the presence of MC3465 in uninfected and infected cells, including zinc-related pathways and cytokine regulation, with notably a modulation of IFN signaling pathways. Metals are essential for all living organisms and thereby represent highly important elements in host-pathogen interactions (Hood et al.). The host cell has deployed many mechanisms to limit the availability of essential nutrients in order to starve the pathogen or, on the contrary, to intoxicate it with an excess of metals (Neyrolles et al., 2015b). Upon TB infection, the mechanisms controlling the intracellular zinc concentration are tightly regulated by the host (Lonergan et al.).
In the host, zinc is bound to metallothioneins (MT). During infection, zinc is released and accumulates in the MTB-containing vacuole (Botella et al.). Our RNAseq results showed a downregulation of several MT genes at 4 h in both MTB-infected and uninfected cells. Based on these results, two hypotheses for a zinc-related mechanism of action of the molecule can be considered. First, MC3465 may generate a burst of zinc that is then mobilized into the phagosome to intoxicate the bacteria; if zinc is redirected to the vacuole, the level of zinc in the cytosol should subsequently be low, which would lead to the downregulation of MT. Secondly, MC3465 could act as a zinc binder, making the MT less required by the host. Zinc binding by MC3465 may then lead to zinc starvation for the bacteria, thereby affecting bacterial growth inside the cell. These two opposing hypotheses could be tested by different methods. For example, Botella et al. used the FluoZin™-3 probe and confocal microscopy to demonstrate a higher zinc concentration inside the phagocytic vacuole (Botella et al.).
A similar technique could be used on MTB-infected macrophages, with or without MC3465.
Furthermore, MTB undergoing zinc deprivation or intoxication shows distinct changes in its transcriptome (Dow et al.). We could therefore assess the expression of different mycobacterial genes by qRT-PCR to determine whether MTB is intoxicated (e.g. via the expression of the metal efflux pump encoded by the ctpC gene (Botella et al.)) or starved (via Zur-regulated genes (Dow et al.)).
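As an illustration of how such qRT-PCR readouts are commonly quantified, the short sketch below applies the standard 2^-ΔΔCt (Livak) method in Python; the gene choice, sample layout and Ct values are hypothetical placeholders and not data from this work.

# Minimal sketch: relative expression by the 2^-DeltaDeltaCt (Livak) method.
# All Ct values below are invented placeholders, not experimental data.
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    # Normalise each sample to its housekeeping (reference) gene...
    delta_treated = ct_target - ct_reference
    delta_control = ct_target_ctrl - ct_reference_ctrl
    # ...then compare treated vs control and convert to a fold change.
    return 2 ** (-(delta_treated - delta_control))

# Example: a hypothetical Zur-regulated gene, MC3465-treated vs untreated bacteria
fold = relative_expression(ct_target=24.1, ct_reference=17.9,
                           ct_target_ctrl=26.3, ct_reference_ctrl=18.1)
print(f"fold change vs control: {fold:.2f}")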
The host transcriptomic analysis has also pointed out a differential regulation of several pathways that are implicated in the modulation of cytokines at 24 hours in uninfected and infected cells. Particularly, our data have shown that genes involved in type I IFN pathway are downregulated. Several studies have demonstrated that Type I IFN overexpression is deleterious for the host during MTB infection [START_REF] Dorhoi | Type I IFN signaling triggers immunopathology in tuberculosis-susceptible mice by modulating lung phagocyte dynamics[END_REF][START_REF] Mcnab | Type I IFN Induces IL-10 Production in an IL-27-Independent Manner and Blocks Responsiveness to IFN-γ for Production of IL-12 and Bacterial Killing in Mycobacterium tuberculosis -Infected Macrophages[END_REF][START_REF] Moreira-Teixeira | Type I interferons in tuberculosis: Foe and occasionally friend[END_REF]. In patients with active TB, blood transcriptional gene signature with overexpression of type I IFN-related genes have been correlated with disease severity and is downregulated following successful treatment [START_REF] Moreira-Teixeira | Type I interferons in tuberculosis: Foe and occasionally friend[END_REF]. MTB induces IFN-related gene expression to promote its survival [START_REF] Novikov | Mycobacterium tuberculosis Triggers Host Type I IFN Signaling To Regulate IL-1β Production in Human Macrophages[END_REF][START_REF] Stanley | The Type I IFN Response to Infection with Mycobacterium tuberculosis Requires ESX-1-Mediated Secretion and Contributes to Pathogenesis[END_REF]. Indeed, type I IFN cytokines are linked to the inhibition of TNF-α, IL-12, and IL-1β while it promotes the production of IL-10 (McNab et al., 2014;[START_REF] Novikov | Mycobacterium tuberculosis Triggers Host Type I IFN Signaling To Regulate IL-1β Production in Human Macrophages[END_REF]. However, type I IFN may be occasionally beneficial for the host in absence of IFN-γ [START_REF] Moreira-Teixeira | Type I IFN Inhibits Alternative Macrophage Activation during Mycobacterium tuberculosis Infection and Leads to Enhanced Protection in the Absence of IFN-γ Signaling[END_REF]. Some host-directed therapies (using zileuton) manipulate host eicosanoid by increasing PGE2 levels which negatively regulates type I IFN in vivo [START_REF] Mayer-Barber | Host-directed therapy of tuberculosis based on interleukin-1 and type i interferon crosstalk[END_REF].
Thus, it might be interesting to measure type I IFN cytokine levels, PGE2 and the related cytokine IL-1β, to determine whether macrophages treated with MC3465 better control the infection through cytokine or eicosanoid modulation.
While the mechanism of action of MC3465 remains elusive and needs further investigation, we continued to focus on the efficacy of the molecule and of its analogue MC3466. In addition to their own growth-inhibitory activity on MTB, our results underline important observations regarding the combination of the analogue MC3466 with antibiotics used to treat TB. Suboptimal concentrations of RIF, INH, MOX or BDQ in combination with MC3466 were more effective at preventing MTB intracellular growth than the antibiotics used alone. These effects were also demonstrated in vivo for RIF. In the fight against TB, there is growing interest in the development of effective antibacterial drug combinations for better therapeutic results (Ramón-García et al.; Yilancioglu et al.). Experimental drug regimens have been tested in vitro and optimized for their use in vivo (B.-Y. Lee et al.). Many in vivo studies have led to trials assessing the efficacy of different combinations of molecules to shorten current treatments (World Health Organization, 2020). Recently, drug combinations using an already approved drug together with an adjuvant, such as a small molecule, have been integrated into the design of new drug regimens. These molecules are also called boosters, activators or enhancers, depending on the study (Guieu et al.). For example, several 1,2,4-oxadiazoles were tested for their potency to boost the antibacterial activity of ETH (Flipo, Desroses, et al., 2012; Flipo et al., 2012; Flipo, Willand, et al., 2012). The efficacy of such small molecules relies on the enhancement of the pharmacokinetic profile of the approved drug, the increase of its activity, or the reversion of the resistance of a strain (Guieu et al.; Wambaugh et al.). Reversion of antibiotic resistance represents a breakthrough for the fight against MDR-TB. As an example, the spiroisoxazoline small molecules aborting resistance (SMARt)-420 was described to boost the effect of ETH but also to revert ETH-acquired resistance via its interaction with the transcriptional regulator EthR2 (SMARt-420 inhibits the EthR2-mediated repression of EthA2, leading to the activation of ETH) (Blondiaux et al.). More interestingly, in our study, the anti-TB drugs used with our small molecule MC3466 also showed a better effect on mono-resistant strains.
Our results support the fact that MC3466 may be a potential molecule for adjunctive therapy for TB treatment. Pre-clinical studies, related to the pharmacodynamic and pharmacokinetic of the molecule, are very important for the development of a potential drug. In this context, it might be relevant to test the genotoxicity of MC3466, the concentration of the molecule found in the serum and the safer dose for long-term administration, the half-life of the molecule and its clearance, the possible sign of toxicity with a higher dose on the liver or the induction of cytochrome P450 (CYP450) especially if this drug is used in combination with other anti-TB drug including RIF (which is a known inducer of CYP450 [START_REF] Kanebratt | Cytochrome P450 induction by rifampicin in healthy subjects: Determination using the Karolinska cocktail and the endogenous CYP3a4 marker 4β-hydroxycholesterol[END_REF]).
In this thesis, we have tested the impact of MC3465 and MC3466 on MTB in an acute model of infection in vivo. Therefore, testing these compounds in a chronic infection model should determine the possible interaction with the development of the adaptive immune response.
Several in vivo tests have to be performed before properly considering this molecule as an adjuvant for anti-TB treatment.
HCS has allowed the discovery of promising new drugs that inhibit the intracellular growth of MTB in vitro, on both susceptible and resistant strains, and also in vivo. So far, we have not identified the target of MC3465 or its mechanism of action, but we have carried out many experiments that provide interesting leads. An important result of our study is the efficacy of MC3466 combined with known anti-TB drugs on both susceptible and resistant strains. Recently, drug combinations and adjunctive therapy have been extensively studied for the development of new treatment regimens against MTB and MDR-TB (Blondiaux et al.; Guieu et al.). However, many drugs have additional effects on the host. Some of these effects may be deleterious and cause toxic side effects, whereas others may be beneficial and make the drugs candidates for repurposing. Drug repurposing represents one of the main elements of HDT (An et al., 2020b). HDT are not stand-alone drugs.
They act through various pathways inside the cell, in combination with already known and characterized antibiotics. The use of HDT leads to very different outcomes, especially among individuals. Indeed, as described in the introduction, results from human trials that include HDT are often contradictory. These inconsistencies among patients represent the major limitation for the use of HDT, and these variable responses underline the importance of drug combination studies. Nevertheless, HDT still represent a very interesting alternative for overcoming antimicrobial resistance. The use of HDT for TB treatment is clearly at an early stage and requires deeper studies and analyses to become a major part of TB treatment. In the meantime, the development of new TB treatment regimens based on adjunctive therapy, drug repurposing or new molecules represents one of the best ways to eradicate TB.
Title: Enhancing tuberculosis treatment with host-targeting strategies.
Abstract: Tuberculosis (TB) is one of the top ten causes of human death worldwide and remains a major public health threat. This disease can be cured with a 6-month treatment regimen, combining up to four antibiotics. Alarmingly, multidrug resistant (MDR) strains of Mycobacterium tuberculosis (MTB), resistant to the first-line anti-TB drugs, are constantly emerging and spreading. Treatment options to cure MDR-TB are limited, expensive and associated with a lower success rate. In addition, they are based on the use of more toxic second-and third-line drugs for a longer duration. Despite considerable research efforts, the vaccine is not sufficient to control the epidemic. Recently, host-directed approaches have emerged as an innovative and promising strategy to eradicate TB. These therapies, unlike antibiotics which specifically target bacteria, aim to improve the host defenses. Their combination with existing or future antibiotics can be a major improvement for eradicating TB. MTB manipulates some host signaling pathways to subvert innate and adaptive immunity.
For instance, MTB modulates host gene expression by targeting the host epigenome to allow immune evasion. My thesis project aims to determine whether pharmacological interference with these MTB-driven changes of the epigenome decreases bacterial survival within the host. Using automated confocal microscopy, we have developed a high-throughput screen to evaluate the impact of compounds that target the host epigenome. With this technique, we have identified a new molecule that prevents the replication of GFP-expressing MTB within macrophages. This molecule is a putative inhibitor of Sirtuin 2 (SIRT2). SIRT2 is an NAD+-dependent deacetylase that regulates many cellular processes and plays a role in controlling bacterial infections such as listeriosis. We have demonstrated that this compound is not toxic to human cells. Moreover, it does not affect bacterial growth in liquid medium, suggesting a host-dependent mechanism of action. This molecule potentiates the activity of several anti-TB antibiotics, on both drug-susceptible and drug-resistant MTB strains, in human macrophages and in mice. These promising results pave the way for future research evaluating the potential of molecules targeting the host epigenome as a new weapon in the arsenal against TB.
Key words: Tuberculosis, Mycobacterium tuberculosis, macrophages, high throughput screening, host-directed therapy, epigenetics, sirtuin, antibiotic, resistance
Titre : Améliorer le traitement de la tuberculose grâce à des stratégies dirigées vers l'hôte.
Résumé : La tuberculose (TB) est l'une des dix premières causes de mortalité dans le monde et représente une menace majeure pour la santé publique. Cette maladie peut être guérie grâce à un traitement de 6 mois associant jusqu'à quatre antibiotiques. Cependant, il est alarmant de constater que des souches multirésistantes (MDR) de Mycobacterium tuberculosis (MTB) ne cessent d'apparaître et de se propager. Leur résistance aux antibiotiques de première ligne rende les traitements beaucoup plus difficiles. Les options thérapeutiques pour guérir la TB MDR sont limitées, coûteuses et associées à un taux de réussite plus faible. De plus, elles sont basées sur l'utilisation de médicaments de deuxième et troisième intention plus toxiques et sur une durée plus longue. Malgré des efforts de recherche considérables, le vaccin quant à lui, ne suffit pas à enrayer cette épidémie. Récemment, les approches dirigées vers l'hôte sont apparues comme une stratégie innovante et prometteuse pour éradiquer la tuberculose. Ces thérapies, contrairement aux antibiotiques qui ciblent spécifiquement les bactéries, visent à améliorer les défenses immunitaires de l'hôte. Leur combinaison avec les antibiotiques existants ou futurs peut se révéler d'une utilité majeure pour améliorer le traitement de la TB. MTB a su développer différentes stratégies pour contourner l'immunité innée et adaptative de l'hôte qu'elle colonise. Pour exemple, MTB module l'expression des gènes de l'hôte en ciblant l'épigénome ce qui lui permet de se développer dans les cellules immunitaires. Mon projet de thèse vise à déterminer si l'interférence pharmacologique de ces modifications de l'épigénome induites par MTB diminue la survie bactérienne intracellulaire chez l'hôte. Dans cette optique, nous avons mis au point une technique de criblage de composés épigénétiques qui modifient l'expression des gènes de la cellule. Nous avons testé l'efficacité des composés sur des macrophages infectés par une bactérie MTB exprimant la GFP grâce à l'utilisation d'un microscope confocal automatisé à haut débit. Ce criblage nous a permis de découvrir une molécule qui réduit considérablement l'infection bactérienne dans ces cellules en empêchant MTB de se multiplier. Cette molécule est un inhibiteur putatif de la Sirtuine 2 (SIRT2). La SIRT2 est une désacétylase dépendante du NAD+ qui régule de nombreux processus cellulaires et joue un rôle dans le contrôle des infections bactériennes telle que la listériose. Nous avons démontré que ce composé n'était pas toxique pour les cellules humaines. De plus, il n'affecte pas la croissance bactérienne en milieu liquide, suggérant un mécanisme d'action dépendant de l'hôte. Aussi, cette molécule potentialise l'activité de plusieurs antibiotiques antituberculeux sur des souches de MTB sensibles et résistantes aux médicaments, aussi bien in vitro dans des macrophages humains qu'in vivo chez la souris. Ces travaux de thèse pourraient permettre la création d'un nouveau traitement contre la tuberculose.
Mots clés : Tuberculose, Mycobacterium tuberculosis, macrophages, criblage à haut débit, thérapie dirigée vers l'hôte, épigénétique, sirtuine, antibiotique, résistance
LIST OF FIGURES
Figure 1. Global representation of countries with at least 100,000 incident cases of TB reported in 2019.
Figure 2. Cell cycle of MTB infection.
Figure 3. Host-MTB interaction influences the granuloma fate.
Figure 4. Host-directed therapies against MTB.
Figure 5. Schematic representation of the three different epigenetic mechanisms: DNA methylation, histone modification and non-coding RNAs.
Figure 6. Identification of new compounds inhibiting the intracellular growth of MTB.
Figure 7. MC3465 constrained the intracellular growth of MTB independently of SIRT2.
Figure 8. Differentially expressed genes upon MC3465 treatment.
Figure 9. MC3465 did not induce autophagy in MTB-infected macrophages.
Figure 10. Structure-Activity Relationship (SAR) analysis of MC3465 and identification of MC3466 as a more efficient analogue.
Figure 11. MC3465 and MC3466 protected MTB-infected cells from death.
Figure 12. MC3466 potentialized the efficacy of anti-TB drugs on sensible- and resistant-MTB strains.
Figure 13. MC3466 increased the efficiency of rifampicin in the mouse model of TB.
Figure S1. MC3465 blocked the intracellular growth of MTB.
Figure S2. Effect of MC3465 on other bacteria.
Figure S3. Differentially expressed genes upon MC3465 treatment.
Figure S4. MC3466 increased the efficiency of anti-TB drugs.
is based on the antigenic detection of LAM, an immunogenic glycolipid component of the mycobacterial cell wall excreted in the urine. According to the WHO, this technic is implemented in 13 countries for the diagnosis of TB(World Health Organization, 2020) In 2010, the WHO endorsed a novel molecular technique for TB detection: the Xpert® MTB/RIF assay (World Health Organization, 2020a).This assay is based on an automatized nucleic acid amplification for rapid TB diagnosis and simultaneous identification of rifampicin (RIF -an antibiotic routinely used in the TB treatment) resistance. It can be used on bronchoalveolar lavage, sputum or induced sputum, gastric or nasopharyngeal aspirates. Since then, nextgeneration versions of this system have been developed like the Xpert® MTB/RIF Ultra with a higher sensitivity. Other tests, like the Xpert® XDR cartridge or the MDR/MTB ELITe MGB ® Kit
): INH is a prodrug that first needs to be activated by KatG, a mycobacterial enzyme, to be active. Once activated, INH produces intracellular INHderived damaging species including superoxide, hydrogen peroxide and nitric oxide (NO), deleterious for MTB. It also affects several key cellular pathways such as the bacterial cell wall synthesis by inhibiting InhA (enoyl acyl carrier protein reductase),
TB preventive treatments recommended by the WHO are: six to nine months of daily INH; four months of daily RIF; three months of weekly rifapentine plus INH; three months of daily INH plus RIF or a one-month regimen of daily rifapentine plus INH.
3 and 7. They then initiate the degradation phase of apoptosis with proteolytic cleavage of different proteins, such as structural components of the cell like actin or DNA-dependent protein. Their action leads to DNA fragmentation, mitochondrial remodeling, ROS production
Figure 5. Schematic representation of the three different epigenetic mechanisms: DNA methylation, histone modification and non-coding RNAs (Gartstein et al.).
The formula ŷab = ya + yb - ya·yb was used to determine the Bliss predicted inhibition rate ŷab, where ya and yb are the observed inhibition rates with drug A alone at dose a and drug B alone at dose b. The excess over Bliss score, I = yab - ŷab, was used to determine the difference between the observed response (yab) and the Bliss predicted response (ŷab) at the same combination dose. If I = 0, the two drugs act independently; I < 0 indicates antagonistic drug interactions, while I > 0 indicates synergy.
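For clarity, the following minimal sketch implements the Bliss computation described above; the inhibition values are illustrative only and do not come from the experiments reported here.

# Bliss independence: predicted combined inhibition and excess-over-Bliss score.
def bliss_excess(y_a, y_b, y_ab):
    # y_a, y_b: observed inhibition rates of each drug alone (0-1 scale)
    # y_ab: observed inhibition of the combination at the same doses
    y_ab_pred = y_a + y_b - y_a * y_b   # Bliss predicted inhibition
    excess = y_ab - y_ab_pred           # 0: independence, >0: synergy, <0: antagonism
    return y_ab_pred, excess

# Illustrative values only
pred, score = bliss_excess(y_a=0.30, y_b=0.40, y_ab=0.65)
print(f"Bliss prediction: {pred:.2f}, excess over Bliss: {score:+.2f}")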
6 Figure 6. Identification of new compounds inhibiting the intracellular growth of MTB. (A) Human macrophages were infected with GFP-MTB at a MOI 0.5:1 and were incubated with the epigenetic library at 10 µM. Four days post-infection, images were acquired by automated confocal microscopy followed by image analysis. Only compounds which were not toxic to the cell (cell viability>75%, upper panel) were kept for further analysis. The spots area per cell (lower panel) was expressed as the percentage of GFP area in compound-treated cells compared to cells incubated with DMSO. (B) Representative confocal images of macrophages infected with MTB (green) and treated with MC3465. Hoechst and HCS CellMask (blue) were used to visualize respectively nuclei and cytoplasm. Scale bar: 10 μm (C) MTB-infected cells were treated with different concentrations of MC3465. After one or four days, cells were lysed and bacteria were enumerated by CFU. (D) Macrophages were infected with Listeria monocytogenes, Salmonella Typhimurium or Bacille Calmette Guerin (BCG). Intracellular bacteria were enumerated at the indicated time post-infection. (E) Growth of MTB in culture liquid medium in the presence of MC3465 at different concentrations. (F) On the left panel, illustration of the Transwell® assay. Briefly, MTB were plated in the upper chamber of the Transwell plate with or without macrophages in the lower chamber. The co-culture system was then treated with MC3465 or vehicle (DMSO) for four days and the number of bacteria was enumerated. MTB-infected macrophages were seeded as control. One representative experiment (of three) is shown. Error bars represent the mean ± SD. One-way ANOVA test was used. * p < 0.05, ** p < 0.01, *** p < 0.001, ****p<0.0001.
7 Figure 7. MC3465 constrained the intracellular growth of MTB independently of SIRT2. (A) MTB-infected macrophages were treated with different concentrations of MC3465 and of SIRT2 inhibitors (namely AGK2 and SirReal2). After four days, cells were lysed and bacteria were enumerated by CFU. (B) siRNA-mediated Silencing of SIRT2. The relative gene expression of SIRT2 was measured by RT-qPCR and normalized to the GADPH gene. (C) SIRT2-silenced macrophages were infected with MTB and treated with MC3465. After 4 days, the number of intracellular bacteria was counted. (D) WT MEFs or Sirt2 -/-MEFs were infected with MTB. After 48 hr, the number of intracellular bacteria was enumerated. (E) Acetylation levels in Hela cells infected with L. monocytogenes and treated with MC3465 or with sirtuin inhibitors for 24 hr, as detected by immunoblotting. Uninf: uninfected, NAM: nicotinamide. (F) Quantification of acetylated H3K18 immunoblots in L. monocytogenes-infected Hela cells treated with MC3465 or with sirtuin inhibitors. Error bars represent the mean ± SD. *p<0.05, **p<0.01, ***p<0.001, ****p<0.0001.
8 Figure 8. Differentially expressed genes upon MC3465 treatment. Uninfected and MTB-infected macrophages derived from four individual donors were treated with MC3465 (10µM) for 4 hr and 24 hr. Differentially expressed genes were identified by mRNAseq. (A) Volcano plot showing differentially expressed genes due to MC3465 treatment (p-value <0.05, fold change <-0.5 and >0.5). (B-C) Gene ontology enrichment analysis using the Cytoscape app ClueGO, of genes whose expression is downregulated by MC3465 treatment at 4 hr (B) and at 24 hr (C) (p-value ≤ 0.05; LogFC ≤ -0.5). (D) Heatmap showing differential expression of genes differentially expressed by MC3465 in naive and MTB-infected cells. Genes related to cellular zinc ion homeostasis and to type I interferon signaling pathway were respectively represented at 4 hr and 24 hr. Genes which were not differentially expressed were represented by a grey square.
9 Figure 9. MC3465 did not induce autophagy in MTB-infected macrophages. (A) Detection by indirect immunofluorescence of LC3 (red) in MTB (green) infected macrophages, treated with MC3465 for 24 hr and 48 hr (scale bar: 10 μm). DAPI (blue) was used to visualize nuclei. (B) Determination of the number of LC3-positive puncta per cell (unpaired t-test). (C) MTB-infected macrophages were left untreated or incubated with MC3465 plus bafilomycin (BAF), 3-methyladenine (3-MA) or chloroquine (CQ). After four days, the number of intracellular bacteria was enumerated. Error bars represent the mean ± SD. One-way ANOVA test was used. * p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 10. Structure-Activity Relationship (SAR) analysis of MC3465 and identification of MC3466 as a more efficient analogue. (A) Macrophages infected with GFP-MTB were incubated with 29 analogues at 10 µM. Four days post-infection, images were acquired by automated confocal microscopy followed by image analysis. The spots area per cell was expressed as the percentage of GFP area in compound-treated cells compared to cells incubated with DMSO. (B) MTB-infected macrophages were treated with MC3465 or MC3466 (10 µM). After four days, the number of intracellular bacteria was enumerated. One representative experiment (of at least three) is shown. (C) MTB growth in liquid culture medium in the presence of the analogue MC3466 at different concentrations, determined by OD600. (D) The dose-response curves (DRC) for MC3465 and MC3466 were calculated by automated confocal microscopy followed by image analysis. The ratio of GFP area in compound-treated macrophages compared to cells incubated with DMSO was normalized with the negative control DMSO (0% inhibition) and the positive control RIF (100% inhibition). (E) MTB-infected cells were treated with different concentrations of MC3465 or MC3466. After four days, bacteria were enumerated by CFU. (F) The IC50 of each compound was determined by automated confocal microscopy (as described in (D)) and by counting the CFU. Representation of the molecules. (G) Macrophages were infected with a panel of clinical isolates of MTB (GC1237, CDC1551 and Myc5750) and were treated with MC3465 and MC3466. The number of intracellular bacteria was determined after four days of treatment. Error bars represent the mean ± SD. *p<0.05, **p<0.01, ***p<0.001.
Figure 11. MC3465 and MC3466 protected MTB-infected cells from death. Macrophages were infected with GFP-MTB at a MOI 5:1 and were treated with DMSO, MC3465 and MC3466. After four days, cells were labeled with Dead/Live fixable violet dead cell dye. The fluorescence intensity was quantified by flow cytometry.
12 Figure 12. MC3466 potentialized the efficacy of anti-TB drugs on sensible-and resistant-MTB strains. (A) Heatmaps representing variation in drug combination, using the Bliss independence model, ranging from antagonism (blue) to synergy (red). Cells were infected with GFP-MTB and were treated with a range of concentration of MC3466 and RIF or BDQ. Four days postinfection, images were acquired by automated confocal microscopy followed by image analysis as described in Figure 6. (B-C) Cells were infected with a rifampicin-resistant MTB strain and treated with MC3466 during four days. (C) Heatmaps of the combination of RIF and MC3466, using the Bliss independence model (D) Macrophages were infected with a rifampicin-resistant MTB strain. RIF was added at different concentrations with or without MC3466 at 10µM. The number of intracellular bacteria was enumerated 4 days post-infection. (D-E) Cells were infected with a bedaquiline-resistant MTB and treated with MC3466 during four days. The effects of MC3466 were analyzed as in (B-C). Error bars represent the mean ± SD.*p<0.05, **p<0.01, ***p<0.001, ****p<0.0001.
13 Figure 13. MC3466 increased the efficiency of rifampicin in the mouse model of TB. (A) Eight-week-old female C57BL/6J mice received different concentration of MC3465 and MC3466 by intraperitoneal injection six days per week during two weeks. Weight was control before every injection. (B-C) Estimation of the enzymatic activity of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) using total blood of mice treated during two weeks. (D-E) Mice were infected via the aerosol route with <50 CFU of MTB H37Rv. After a week of infection, mice were treated six times per week with MC3465 or MC3466. Bacterial loads in the lungs (D) and spleen (E) were measured after two weeks of treatment. Five mice per group were included. (F-G) Seven days after infection, mice were treated during two weeks with MC3466 in combination with RIF. Lungs and spleens harvested were and the number of bacteria were determined by CFUs. Error bars represent the mean ± SD.*p<0.05, **p<0.01, ***p<0.001
Figure S4. MC3466 increased the efficiency of anti-TB drugs. Heatmaps of the combination of MC3466 and MOX or INH, using the Bliss independence model, ranging from antagonism (blue) to synergy (red).
Drugs | Targets | Modes of action
Group A:
Levofloxacin or moxifloxacin | DNA gyrase and topoisomerase IV | Inhibition of DNA replication, transcription and repair
Bedaquiline | F-ATP synthase | ATP synthase inhibition
Linezolid | 23S ribosomal RNA of the 50S subunit | Inhibition of bacterial protein synthesis
Group B:
Clofazimine | Generation of intracellular reactive oxygen species | DNA damages
Cycloserine or terizidone | L-alanine racemase and D-alanine:D-alanine ligase | Inhibition of cell-wall synthesis
Group C:
Ethambutol | Arabinosyl transferases | Inhibition of cell-wall formation
Delamanid | Formation of radical intermediates which inhibit mycolic acid synthesis | Inhibition of cell-wall synthesis
Pyrazinamide | S1/RpsA, GpsI, PanD | Inhibition of the protein translation process, nucleic acid metabolism, coenzyme A synthesis
Imipenem-cilastatin or meropenem | L,D-transpeptidase | Inhibition of cell-wall synthesis
Amikacin | 30S ribosomal subunits | Inhibition of protein synthesis
Ethionamide or prothionamide | InhA | Inhibition of synthesis of mycolic acids
p-aminosalicylic acid | Pteridine synthetase | Inhibition of folic acid synthesis
Table 1. Antibiotics used for MDR-TB treatment. MDR-TB regimens are usually composed of
Compound | Clinical trial phase | Mode of action
BTZ-043 | Ib/IIa | Inhibition of the DprE1 enzyme
Delpazolid | II | Inhibition of bacterial protein synthesis
GSK-3036656 | IIa | Inhibition of protein synthesis
Macozinone | I/II | Inhibition of DprE1 enzyme
OPC-167832 | I/II | Inhibition of DprE1 enzyme
Telacebec (Q203) | I/IIa | Inhibition of cytochrome bc1 complex
SPR720 | I/IIa | Inhibition of gyrase B/ParE
SQ109 | IIb/III | Inhibition of cell wall synthesis
Sutezolid | IIb | Inhibition of protein synthesis
TBA-7371 | I/II | Inhibition of DprE1 enzyme
TBI-166 | I | Not fully elucidated
TBI-223 | I | Inhibition of protein synthesis
TBAJ-876 | I | Inhibition of F-ATP synthase
Table 2. New compounds in the pipeline for the treatment of TB (Working Group on New TB Drugs, 2021).
Among these 22 drugs, 13 are new compounds including: BTZ-043, Delpazolid, GSK-3036656,
Macozinone, OPC-167832, Q203, SQ109, SPR720, Sutezolid, TBAJ-876, TBA-7371, TBI-166 and
TBI-223 (Table 2) (Working Group on New TB Drugs, 2021). Furthermore, six previously
approved antibiotics are currently being tested for their application in association with MDR-
TB treatment: clofazimine, levofloxacin, linezolid, moxifloxacin, rifampicin (high dose) and
rifapentine.
In the TB pipeline, several combinations of new and/or repurposed drugs are being tested in trials, either for drug-sensitive or drug-resistant TB. For drug-sensitive TB, most trials aim to shorten the current TB treatment. The RIFASHORT trial is based on increased doses of rifamycins (i.e. rifampicin, rifampin or rifapentine), while the TRUNCATE-TB trial evaluates the efficacy of 2-month regimens combining multiple antibiotics (INH, PZA, EMB, RIF, linezolid, levofloxacin and BDQ) (Working Group on New TB Drugs, 2021).
Table 3. miRNAs, targets and effects on genes involved in immunity against mycobacteria.
miRNA | Targets | Effects | References
miRNA-21 | IL-12 | Th1 response inhibition by targeting IL-12 production | (Z. Wu et al., 2012)
miRNA-21 | Phosphofructokinase muscle (PFK-M) | Limitation of the host glycolysis and IL-1β production | (Hackett et al., 2020)
miRNA-21-5p | Bcl-2, TLR-4 | Attenuation of the secretion of IL-1β, IL-6, and TNF-α | (Zhao et al., 2019)
miRNA-27 | Ca2+ channel, voltage-dependent, a2/D subunit 3 | Downregulation of Ca2+ signaling, inhibition of autophagosome formation | (F. Liu et al., 2018)
miRNA-27b | Bcl-2-associated athanogene, TLR-2/MyD88 pathway | Decrease of pro-inflammatory cytokines, NF-κB activation, apoptosis, ROS production | (Liang et al., 2018)
miRNA-33 | ATG5, ATG12, LC3B, and LAMP1 | Autophagy inhibition and host lipid metabolism reprogramming | (Ouimet et al., 2016)
miRNA-99b | TNF-α, TNF Receptor Superfamily Member 4 | Suppression of cytokine overexpression (IL-6, IL-12, IL-1β) | (Y. Singh et al., 2013)
LIST OF TABLES
Table 1: Antibiotics used for MDR-TB treatment
Table 2: New compounds in the pipeline for the treatment of TB
Table 3: miRNAs, targets and effects on genes involved in immunity against mycobacteria
THESIS OBJECTIVES
Ethics statement
Buffy coats were obtained from anonymous healthy donors who signed a consent to donate their blood for research purposes.
Macrophages and cell lines
Peripheral blood mononuclear cells (PBMC) were isolated from buffy coats by centrifugation with lymphocyte separation medium (Eurobio). CD14 + monocytes were purified by positive selection using microbeads coupled to an anti-CD14 antibody (Miltenyi Biotec) and magnetic columns. Monocytes were cultured at 37°C and 5% CO2 in RPMI-1640 medium (Gibco) supplemented with 10% heat-inactivated fetal bovine serum (FBS; Dutscher), and 2 mM Lglutamine (Gibco) (hereafter defined as complete medium) with macrophage colony stimulating factor (M-CSF, 20 ng/mL; Miltenyi Biotec). After six days of differentiation, the resulting macrophages were incubated in a buffer solution containing PBS, 2 mM ethylenediaminetetraacetic acid (EDTA) and 10% FBS, at 37°C and 5% CO2 for 15 minutes before being harvested and counted.
THP-1 cells derived from an acute monocytic leukaemia patient were cultured in complete medium. The cells are incubated at 37°C, with 5% CO2, and the culture medium was changed every 48 hours, before confluence was reached. THP-1 cells were differentiated into macrophages by treatment with phorbol 12-myristate 13-acetate (PMA) at 50 ng/mL for 48 hours.
Mouse embryonic fibroblasts (MEF) and Henrietta Lacks (HeLa, ATCC, CCL-2) cells were cultured in Dulbecco's Modified Essential Medium supplemented with L-glutamine (DMEM-
Determination of bacterial counts
Cells were lysed in water with 0.05% Triton X-100. Number of MTB was enumerated as previously described and plated on 7H11 [START_REF] Tailleux | Constrained Intracellular Survival of Mycobacterium tuberculosis in Human Dendritic Cells[END_REF]. CFUs were counted after three weeks at 37°C. L. monocytogenes was plated on BHI agar and CFUs were counted after 1 day at 37°C.
Measurement of cell death
Necrotic cell death was evaluated by staining cells with Live/Dead fixable violet dead cell stain kit (Invitrogen) as previously described [START_REF] Amaral | A major role for ferroptosis in Mycobacterium tuberculosis-induced cell death and tissue necrosis[END_REF]. Briefly, macrophages were incubated with Live/Dead staining solution (1:1000 diluted in PBS) at room temperature (RT) for 15 min in the dark. The staining reaction was stopped by washing the cells with PBS and 10% FBS. Cells were fixed with 4% paraformaldehyde (PAF) during 1 hour at RT and detached.
Analysis was performed using a CytoFLEX Flow Cytometer (Beckman Coulter, Brea, California).
More than 10,000 events per sample were recorded. The analysis was performed using the FlowJo software.
Compounds library for screening
Compounds library was obtained from Dante Rotili and Antonello Mai (Sapienza University, Rome), with each compound resuspended in DMSO. 384-well polypropylene plates (Greiner Bio-One) were prepared with compounds at final concentration of 10 μM in 0.5% DMSO (v/v).
High-content screening
Infected macrophages were plated at 2x10^4 cells/well in 384-well tissue culture plates, with each well containing a different compound. Cells were fixed with 4% paraformaldehyde (PAF) for 1 hour at RT and were stained with HCS CellMask Blue stain (HCS; Invitrogen Molecular Probes; 2 µg/mL) and Hoechst 33342 (5 µg/mL) in PBS for 30 minutes. Confocal images were acquired using the automated fluorescence microscope Opera Phenix High-Content Screening
Quantification and statistical analysis
Quantitative data were expressed as mean ± standard deviation (shown as error bar).
Statistical analyses were performed with Prism software (GraphPad Software Inc), using the t test and one-way analysis of variance (ANOVA) as indicated in the figure legends. Differences between groups were examined statistically as indicated (*p < 0.05, **p < 0.01, ***p < 0.001 and ****p<0.0001). Results were considered statistically significant with a p-value < 0.05.
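As a minimal illustration of the analysis described above, the sketch below runs a one-way ANOVA across three treatment groups with SciPy; the CFU values are invented placeholders, and in practice the tests were run in Prism as stated.

# Minimal sketch of a one-way ANOVA across treatment groups (illustrative data only).
from scipy import stats

cfu_dmso   = [5.2e5, 4.8e5, 5.5e5]   # hypothetical CFU counts, vehicle control
cfu_mc3465 = [2.1e5, 1.9e5, 2.4e5]   # hypothetical CFU counts, MC3465-treated
cfu_rif    = [0.4e5, 0.6e5, 0.5e5]   # hypothetical CFU counts, rifampicin-treated

f_stat, p_value = stats.f_oneway(cfu_dmso, cfu_mc3465, cfu_rif)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")   # considered significant if p < 0.05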
RESULTS
Supplemental
Keywords: Condition monitoring, Milling tool, Wear, Correlation, SVD.
In line with the trend of Industry 4.0, this thesis targets tool condition monitoring (TCM), the terminal process of flexible production. The aim is to detect abnormal tool behavior as early as possible, to improve the surface quality of workpieces and to prevent subsequent losses from serious tool failures.
In this context, a methodology for monitoring the wear of end mills in real-time production, based on inter-insert correlation, is presented. The approach takes advantage of angular-domain characteristics to segment the signal into periodic cycles of the same angular duration, which are then amenable to correlation analysis.
Under high rotational speeds, the external working environment experienced by the individual teeth can be considered quasi-equivalent. Through the correlation analysis, the impact of non-stationary operation on the monitored signal is effectively reduced. From a wide range of correlation algorithms, singular value decomposition (SVD) is selected for the analysis, and an ordered separability index with latent correlation characteristics is extracted to assess the current condition of the tool. The feasibility of the proposed indicator was validated and evaluated on a simulated signal and on a series of experimental data collected with designed milling patterns.
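As a rough, illustrative sketch of the kind of computation involved (not the exact indicator developed in this work), the snippet below stacks angularly resampled tooth segments into a matrix and inspects its singular values with NumPy; the synthetic signal, segment length and index definition are assumptions made for the example only.

# Rough sketch: inter-insert correlation via SVD of tooth-wise signal segments.
# The synthetic signal and the index below are illustrative assumptions.
import numpy as np

n_teeth, n_samples = 4, 256                      # 4 inserts, 256 angular samples per tooth
rng = np.random.default_rng(0)
profile = np.sin(np.linspace(0, np.pi, n_samples))          # common cutting profile
segments = np.vstack([profile + 0.05 * rng.standard_normal(n_samples)
                      for _ in range(n_teeth)])              # one row per tooth passage

# Singular values of the (teeth x samples) matrix: when all inserts behave alike,
# the energy concentrates in the first singular value.
sigma = np.linalg.svd(segments, compute_uv=False)
separability = sigma[0] / sigma.sum()            # close to 1 for healthy, similar teeth
print(f"separability-like index: {separability:.3f}")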
The results demonstrate the promise of this method for building an efficient TCM system. The proposed approach is more independent of the cutting conditions (changes in speed or direction) than the traditional teach-in method and does not require a trial run. It partially fills the gap in tool monitoring for flexible manufacturing of customized small-batch production. At the same time, inter-insert correlation is also seen as part of a broader framework for monitoring and maintaining rotating machinery. It has great potential to be applied to the analysis
Résumé Chapitre 1 -Introduction
Dans le contexte de l'industrie 4.0, les machines-outils à commande numérique sont, d'une part, considérées comme un élément vital dans l'établissement de systèmes de production intelligents et flexibles, en raison de leur bonne base industrielle et de leurs caractéristiques automatiques. D'autre part, la surveillance de l'état des outils (TCM) est restée un obstacle au cours de la dernière décennie.
Puisque l'outil se trouve à la fin de la chaîne de production, son état a un impact direct sur la qualité de la surface de la pièce. Il peut entraîner un temps d'arrêt involontaire avec une diminution de l'efficacité ou, pire encore, endommager la broche en raison des contraintes supplémentaires. Par conséquent, la TCM en temps réel est devenue une nécessité.
Les difficultés de la TCM résident principalement dans le fait que :
• l'outil est un article consommable de taille relativement petite et aucun capteur ne peut être attaché directement ;
• l'environnement de travail est compliqué et présente une visibilité/accessibilité limitée en raison de la présence de liquide de refroidissement et de copeaux métalliques ;
• l'opération subit toujours des changements rapides à une vitesse de rotation élevée ;
• les trajectoires de fraisage varient selon les différentes tâches et ne présentent pas de schéma fixe.
Cette recherche se situe dans ce contexte et vise à développer une nouvelle méthode basée sur la corrélation inter-insert utilisant la décomposition en valeur singulière ix (SVD) pour surmonter les difficultés mentionnées ci-dessus. Les innovations de l'approche proposée portent sur les aspects suivants :
• Elle est fondée sur l'échantill-onnage ou le rééchantillonnage du signal dans le domaine angulaire, ce qui permet de segmenter le signal en unités de dent de coupe.
• Les multiples inserts d'un même outil sont considérés comme des individus en interaction. Sous des vitesses de rotation élevées, l'environnement de travail externe subi par les dents individuelles peut être considéré comme quasiéquivalent. Grâce à l'analyse de corrélation, l'impact du fonctionnement non stationnaire sur le signal surveillé est réduit de manière efficace.
• Il n'est pas nécessaire d'organiser des essais ou de former un grand nombre de données pour obtenir des signaux de référence.
• Cette méthode peut être considérée comme versatile, et peut être appliquée à une large gamme de sources de signaux provenant de machines tournantes.
À la connaissance des auteurs, aucune recherche similaire n'existe et la méthode proposée est originale ainsi qu'innovante.
La thèse est organisée dans les 5 chapitres suivants. Le contexte théorique nécessaire à l'étude est passé en revue dans Chapitre 2. Chapitre 3 construit le modèle général et discute des questions spécifiques. Chapitre 4 présente le dispositif expérimental et les conditions opérationnelles. Les résultats d'analyse les plus importants sont présentés dans Chapitre 5. Enfin, Chapitre 6 clôt la thèse par une conclusion et quelques perspectives.
Chapitre 2 -L'état de l'art L'histoire du développement de la TCM est d'abord retracée et les efforts qui ont déjà été faits par la communauté académique sur ce sujet important sont explorés.
Ensuite, une revue de la littérature divisée en trois sections spécifiques : les caractéristiques des opérations de fraisage, la construction du système de surveillance de l'état des outils et l'algorithme de corrélation pertinent.
Le type d'usinage contrôlé visé par ce travail est précisé comme étant le fraisage en bout et ses caractéristiques opérationnelles correspondantes, les types d'usure et
x la durée de vie de l'outil sont analysés. La revue montre que l'usure en dépouille est le type d'usure le plus dominant sur les fraises en bout. Après l'usure initiale, l'usure augmente à un taux assez constant pendant la majeure partie de sa durée de vie. Finalement, elle atteint une zone d'usure exponentiellement accélérée, qui marque la fin de sa vie opérationnelle. Ensuite, le processus de formation du copeau est discuté plus en détail dans des conditions normales et d'usure. Le modèle de force de fraisage est construit sur la base de la formation de copeaux. En combinant les caractéristiques de durée de vie de l'outil mentionnées ci-dessus avec le modèle de force de coupe en cas d'usure, l'interaction entre les dents de l'outil a été saisie comme base de la corrélation inter-insert.
On this basis, the three elements that constitute the TCM system are configured: sensory perception, feature extraction and decision-making. After considering the cost-effectiveness and applications of existing sensors, as well as the availability of the equipment in the laboratory, the cutting forces were selected as the target signal for the subsequent analysis.
The review indicates that the currently available methods generally start by extracting the relevant features and then analyze the tool condition based on the comparison between real-time monitoring data and a predetermined reference. This standard reference can be obtained through a simple trial run or through cognitive paradigms. However, apart from wasting material, the results obtained from trial runs are not authoritative. Especially in non-stationary operations (milling with variable load and complex trajectory), an unreliable reference can lead to false alarms and unwanted stoppages. On the other hand, the reference obtained through the cognitive paradigm, although relatively stable, runs into difficulties in data training because of its high data acquisition costs and the insufficient development of storage and transmission technology.
In addition, the review noted the gradual emergence of signal processing methods in the angular domain. The milling cutter, as a rotating mechanism, naturally exhibits sampling regularity in the angular domain. This advantage of the angular signal is exploited to reduce the instability due to speed variations and to achieve a better tooth-based segmentation of the signal.
Thus, the concept of monitoring the tool in the angular domain using inter-insert correlations is introduced. Finally, the ROC (receiver operating characteristic) curve is presented as a decision-support tool.
After an extensive literature search, a few relevant correlation-based studies were found, but they still differ from the ideas presented in this thesis. Once the originality of the concept was confirmed, specific methods for performing a correlation analysis on the segments were discussed. Since the number of teeth of a tool is generally greater than 2, the correlation analysis is considered in conjunction with multivariate analysis. Four requirements were proposed as criteria for finding the tooth-based segment correlation algorithm:
• the model must be able to analyze several variables simultaneously and to produce a comprehensive result;
• the model must be able to extract the features that reflect the tool condition, while providing the appropriate interpretation in a physical sense;
• the model should reduce the size of the original information while capturing the main features;
• the model is not required to distinguish between independent and dependent variables.
Finally, after a thorough discussion of principal component analysis (PCA), SVD was identified as the algorithm for inter-insert correlation based TCM.
Chapter 3 - General modeling of the behavior of rotating mechanisms
Having reviewed the literature and identified the specific algorithm for the inter-insert correlation, this chapter focuses on the applicability of the proposed method. The type of signal targeted by the inter-insert correlation must exhibit the following characteristics:
• the signal must contain recurring events;
• these events must correspond to the same number of sampling points, allowing truncation;
• there must be an interaction between these segments.
On this basis, a general model of the behavior of rotating machines is developed for the future extension of the methodology. Regarding the end-milling process studied here, a trajectory angle was introduced to correct the problem of trajectory variation during non-stationary machining tasks, so as to better fit this general model. The cutting force model mentioned in the previous chapter is incorporated into this general process as the theoretical basis for the experimental validation. In parallel, the instantaneous angular speed (IAS) is included as a more convenient simulation signal for the preliminary feasibility analysis. It complies with the three required characteristics mentioned above and demonstrates that the inter-insert correlation can be widely used for signals generated by rotating machines.
On the other hand, signal segmentation, which is an important prerequisite of the inter-insert correlation, is presented in detail. On this basis, the strategy for carrying out the correlation analysis between the signal segments is specifically described.
Chapter 4 - Experimental setup and operating conditions
After determining the experimental setup and the cutting condition parameters for the operation, four different milling trajectories were designed.
The square contour contains four straight-line cuts and is used to establish the stationary working condition. The diamond-shaped trajectory aims to verify the compatibility between the angular coordinates used in the proposed method and the Cartesian coordinates of the machine-tool system. The rounded-square trajectory contains curved paths, designed to test non-stationary regimes. The objective of the designed curve is to observe how the inter-insert correlation is affected by the wrapping of material around the tool, when the cutter passes through the inner and outer contours respectively. On this basis, by adding holes to the designed curve, it is used to test the behavior of the indicator when the tool encounters such a sudden change.
The pre-processing prior to the inter-insert correlation includes resampling of the signal in the angular domain, correction of the milling direction, truncation, segmentation, zero-centering, etc. The results obtained from pre-processing the experimental data show that the angular-domain resampling and direction-correction steps effectively clarify the signal components corresponding to each tooth, which improves the operability of the segmentation and provides a promising basis for the inter-insert correlation.
Chapter 5 - Characterization of the tool condition by SVD
After the pre-processing steps described in the previous chapter, the signals obtained under different cutting conditions are ready for the inter-insert correlation. This chapter details the results of the SVD processing of the target matrices, which is the core content of the inter-insert correlation for TCM. Depending on the availability of the equipment, the appropriate strategy can be selected flexibly. The signal of the worn tool was used to test these two indicators. Finally, the ROC (receiver operating characteristic) curves give AUC (area under the curve) values of 0.87 and 0.96, respectively.
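As an illustration of the revolution-wise SVD processing summarized above, the short Python sketch below decomposes one revolution-based matrix of tooth segments into normalized singular values. The exact definition of the separability index used in the thesis is given in Chapter 5; the normalized first singular value used here is only one plausible choice, shown for intuition: it stays close to 1 when the inserts behave identically and drops as they wear differently. All signal shapes and noise levels in the example are invented for illustration.

```python
import numpy as np

def separability_indices(segment_matrix):
    """Normalized singular values of one revolution-based data matrix
    (rows: the zero-centered tooth segments of a single revolution,
    columns: the angular samples). For nearly identical (healthy) inserts
    the matrix is close to rank one and the first value is close to 1;
    it decreases when the inserts start to wear differently."""
    s = np.linalg.svd(segment_matrix, compute_uv=False)
    return s / s.sum()

# Toy revolution of a 3-tooth cutter: the 2nd segment is distorted as if worn.
m = 120
base = np.sin(np.linspace(0.0, np.pi, m))           # nominal per-tooth force shape
worn = 0.6 * np.roll(base, 15)                      # attenuated and shifted shape
rev = np.vstack([base, worn, base]) + 0.01 * np.random.randn(3, m)
rev -= rev.mean(axis=1, keepdims=True)              # zero-centering of each segment
print(separability_indices(rev))                    # first value drops below 1 with wear
```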
Chapter 6 - Conclusion and perspectives
Inter-insert correlation for TCM can effectively reduce the influence of external factors on the analysis process and is therefore independent of changes in depth of cut and changes in trajectory. This reference-free method partly meets the tool monitoring needs of customized small-batch production. Alongside the promising perspectives, some limitations are also identified. The proposed method struggles to maintain a stable state in certain situations that can cause an imbalance of the cutting quantity between the inserts, such as passing over a hole, milling an outer contour with a small radius, etc. This poses a new challenge for future developments. The main contributions can be summarized as follows:
• The behavior of rotating signals is further explored in the angular domain, which creates a stable basis for segmenting the signals corresponding to each insert of the tool.
• The concept of exploiting the inter-insert correlation to monitor the tool condition is proposed. The theoretical derivation and the experimental analysis provide a preliminary justification, with very promising results.
• Among the many correlation methods, a specific treatment using SVD is identified. Each of its decomposition components has a corresponding physical meaning, while remaining computationally efficient. On this basis, the order separability index for assessing the current operating state of the tool is introduced, and two derived strategies for fault detection are proposed.
• Compared with the Teach-in & Comparison decision-making used in most monitoring systems, the method proposed in this manuscript offers the advantages of convenience, intuitiveness and the flexibility to adapt to different milling trajectories (cyclostationary and cyclo-non-stationary conditions). Since there is no need to carry out trial runs or large-scale data training to obtain a standard reference threshold for each milling task, it is very easy to customize for on-demand production as well as small-batch production.
The future perspectives of this research lie in two aspects. The first is to establish a more precise correspondence between the accuracy requirements and the proposed indicator. This will require a series of experiments to determine the specific criteria to be applied, but it only concerns the characteristics of the tool (material, helix angle, etc.), independently of the cutting conditions. Since its working principle is based on the correlation analysis of events repeated on a revolution basis, it can be regarded as part of a broader framework for the monitoring and maintenance of rotating machines. Another direction can therefore be expected, namely exploring its potential for the diagnosis of other periodic structures, such as gears and bearings.
Glossary
Scalars are written as italic letters, such as a.
Vectors are written as bold italic letters, such as a.
Matrices are written as hollow upper-case letters, such as A.
Notations relating to general cutting conditions and to the revolution-based analysis:
V_c : cutting speed (m/min)
revolution-based data matrix
σ, Σ : singular value and corresponding matrix
u, U : left singular vector and corresponding matrix
v, V : right singular vector and corresponding matrix
α_ki : separability index of the i-th order for the k-th revolution
m : constant presenting the number of data points in one segment
n : constant presenting the total number of segments
i : subscript indicating the tooth count before SVD and the order count after SVD (i = 1, 2, ..., n_z)
j : subscript indicating the sample count (j = 1, 2, ..., m)
k : subscript indicating the revolution count (k = 1, 2, ..., n_nz)
e : subscript indicating the extraction count (e = 1, 2, ..., final extraction)
L : execution number of shifts
l : overlap number in the shifting processing
Δ_WS : sensitivity range
μ_qp : average value of α_k1 from the (q-p)-th to the q-th revolution
σ_qp : standard deviation of α_k1 from the (q-p)-th to the q-th revolution
Introduction
General context
With globalization, a clear international division of labor has been formed between developed and newly industrialized countries. More than 80% of low-valueadded production lines have been transferred to developing regions. In the 1980s, the United States had overemphasized the importance of the tertiary industry, further contributing to the decline of its manufacturing sector for a time. But in the post-COVID19 era, the significance of domestic manufacturing to the health of the national economy has been reconfirmed, in which traditional machining occupies a very essential position.
After entering the 20 th century, the means of production and productivity held by mankind have grown exponentially, and the pace of technological development continues to accelerate [START_REF] Hanson | Long-term growth as a sequence of exponential modes[END_REF]. To meet the surging consumer demand stimulated by social development, the machining process is streamlined, standardized, and globalized. Fordism is one of the typical successful cases with high efficiency and low cost and provided numerous products for the seller's market society at that time.
However, this mode of production came at the expense of a reduction in product diversification. As the economy developed, the competitive advantage of companies evolved, and it no longer manifested itself in simple rivalries on quality, price, and quantity, but rather in a demand for innovative and even customized production. At the same time, with the high rate of technological iteration, the fixed processes of mass production, represented by Fordism, cannot be adapted swiftly. It is therefore difficult to keep up with the shifting trends of the market, which ultimately leads to redundancy and waste of large amounts of inventory. To survive in the current ever-changing and unpredictable market environment, companies have to face three major challenges in the production of customized on-demand goods:
• How to target market fronts and meet customized needs?
• How to minimize the prohibitive price of one-off products made on order?
• How to quickly restructure production lines for early delivery?
Flexible Manufacturing System (FMS) is seen as the main answer to these 1.1 General context questions. In general, an FMS system consists of three main units: a central control computer to govern the manufacturing parameters, a programmed working center (usually an automated computer numerical controlled (CNC) machine, robot or 3D printer), and a material handling system to optimize the production workflow [START_REF] Kosky | Chapter 12 -Manufacturing Engineering[END_REF]. It provides firms a proactive and strategic manufacturing foundation to adapt to the dynamic trend of consummation. Its flexibility involves following three main aspects [START_REF] Tolio | Design of Flexible Production Systems: Methodologies and Tools[END_REF]:
• Design flexibility
The conception, modification, and development of the product could be modeled and simulated before manufacturing through computer-aided design (CAD) software. Customers can participate in the early design stage to ensure the final product will meet their requirements.
• Machine flexibility
The machining process is divided into multiple independent operations, whose sequence and type are allowed to be changed and combined by pre-programmed codes to produce new product categories without prohibitive expense.
• Routing flexibility
The process routing is composed of a variety of machines to achieve a one-stop production mode from raw material to packaging. A wide range of machinery options and high interface compatibility between machinery can adapt to production scale changes, such as in volume, capacity, or capability, and enable faster implementation of a product transition.
Flexibility is a latent competence of a system rather than an actual behavior. It is usually interpreted as reactive sensitivity to three factors:
• Scope
Long-term flexible production involves process improvements and innovations.
The system is structurally reconfigurable, reusable, and scalable. The larger the range of adaptation, the greater the flexibility.
• Time
The time required to switch between different working states. The shorter the time, the greater the flexibility.
• Cost
The cost required to switch between different working states. The lower the cost, the greater the flexibility.
FMS works on an on-demand basis following the market trends, thus reducing inventory backlogs, increasing machine efficiency, improving labor productivity, minimizing manufacturing costs, and making companies more competitive. However, the development of FMS is slower than anticipated, despite all the advantages mentioned above. The effectiveness of an FMS depends to a large extent on the automation level in other areas of the entire organization. It can operate with limited manpower, but various support equipment is needed. Therefore, some companies are hesitant to introduce such a sophisticated system because of the heavy initial investment required and the subsequent maintenance expenses associated with it [START_REF] Terkaj | A Review on Manufacturing Flexibility[END_REF].
However, in the context of fierce global competition, companies must constantly make technological advances in order to maintain the competitiveness of their production systems. For this reason, many companies make a compromise choice by building FMS systems on top of their existing equipment -CNC machining centers.
Since the CNC machining center was one of the first automation technologies to be introduced in the business, it already has a good industrial base with low subsequent retrofitting costs compared to implementing emerging technologies.
The CNC machining center is a computer-controlled motorized platform, where high-performance microprocessors and programmable logical controllers work in a parallel and coordinated manner. It is equipped with a range of tools in different sizes and materials, such as drills, flat end mills, ball nose mills, etc., which can be called up in accordance with CAD/CAM codes for different operations and automatically perform subtractive manufacturing. At the same time, in contrast to additive manufacturing (3D printer), the high-speed operation of CNC machining dictates a high productivity ceiling, which allows the system to shift freely from small customized batches to efficient mass productions [START_REF] Bilalis | The flexible manufacturing systems (FMS) in metal removal processing: An overview[END_REF].
Alongside the evolution of techniques, technologies, and strategies for industrial production, other aspects are also advancing in parallel. Materials now reach hardness levels that were previously unthinkable. In addition to the need for diversification, social demands are at the same time placing higher requirements on product precision and structural complexity. In order to pass strict quality control, companies need to pay a high fee to safeguard the FMS services. According to the AFNOR standard NF X60-000 (April 2016), maintenance is a collection of actions including all technical, administrative, and management behaviors within the life cycle of an asset, intended to maintain or restore it to a state in which it can perform the required function [START_REF] Afnor | Maintenance industrielle -Fonction maintenance[END_REF]. Basically, there are two different maintenance strategies available, depending on the circumstances and level of maintenance, as shown in Fig. 1.1. Corrective maintenance (reactive maintenance) is acceptable when dealing with simple situations, such as changing a light bulb. Equipment can be repaired or replaced after it has worn out, malfunctioned, or broken down, as this has no serious subsequent effects. However, once a sophisticated system with certain expensive components is involved, it is very risky to simply leave the machine running until it fails completely. The consequences are not only high repair costs, but also significant safety issues. At present, preventive maintenance (proactive maintenance) is implemented in most actual manufacturing, which means that the parts are maintained or replaced according to a predetermined plan based on the standards provided by the supplier (operating hours, number of units produced, number of movements carried out, etc.), thereby reducing the probability of system failure. Compared with traditional corrective maintenance, regular and routine checks are more systematic and reassuring. However, this does not fully exploit the service life of the components, which can be considered redundancy and waste.
Facing challenges and competition, maintenance is not stagnated. It has also evolved over the years to support the transformation of the industrial world. A more advanced method based on condition monitoring, called the predictive maintenance, has been derived from preventive maintenance [START_REF] Forsthoffer | 11 -Preventive and Predictive Maintenance Best Practices[END_REF]. It refers to a sensorbased system that will determine the health of equipment by observing real-time data collection and will only take action when maintenance is truly required. In contrast to simply planned maintenance, predictive maintenance enables a significant degree of prioritization and optimization over maintenance resources. Ideally, predictive maintenance could minimize spare parts costs and increase productivity, as well as ensure product quality.
Extensive instrumentation of equipment with specialized analysis tools has become a development trend in recent years. For a discrete-event system such as a box packing line, the execution of predictive maintenance is simple. The next task will only proceed when the sensors confirm that the previous task has been completed correctly. Due to the independence between each task and the slow operation speed of the work-line, this treatment will not cause excessive subsequent losses.
However, the predictive maintenance for material removal manufacturing on CNC machines is a totally different story.
The sensor-assisted monitoring system is already installed in most CNC machining centers to proceed with in-process milling monitoring during unmanned machining operations as much as possible [START_REF] Altintas | Manufacturing Automation: Metal Cutting Mechanics, Machine Tool Vibrations, and CNC Design[END_REF]. By integrating different sensors, such as position sensors, dynamometers, rotary encoders, etc., current CNC centers allow simultaneous servo-position, velocity control of all the axis, monitoring of the controller, and vibration performance. However, the prognosis of tool conditions in predictive systems is a bottleneck that has plagued the entire manufacturing industry for the past decade.
1.2 Problematic and objectives
With steady developments on the other side, difficult-to-monitor tool wear, as a weak point in machining, has limited the industry's progress. According to statistics, among all factors, cutter breakage has become one of the main causes of downtime, accounting for approximately 6.8% of the causes of unwanted stoppage [START_REF] Adam G Rehorn | State-of-the-art methods and results in tool condition monitoring: a review[END_REF].
Although most CNC machining centers are equipped with sensor-based monitoring systems, they can only monitor the general condition of the machine. Due to the consumable nature of the cutter and its overly small size, there are currently no sensors that can be directly attached to it. On the other hand, remote sensors do not have good accessibility and visibility to monitor the tool condition, because of the coolant and the constant chip generation during the operation. The cutting tool is like a fingernail, working hard without any painful sensation even when it is already injured, which unfortunately is a fairly common case.
In fact, the aforementioned 6.8% only refers to situations where the tool is seriously damaged. In most cases, the tool has no visible cracks but is already excessively worn. During steady production, the excessive wear that appeared prior to scheduled tool replacement will introduce cutting conditions change including enormous chip heating, cutting speed change, chattering, etc. Using a dull cutter will produce ripples on the machined surface, and the quality of the workpiece will be immeasurably reduced. Especially when the machined products are high added value parts, if the problem cannot be discovered in time, the consequences and losses will be extremely significant.
Excessive tool wear also has a high risk of triggering periodic overloads at the cutting edge, which might accelerate the tool damage as chipping or accidental breakage of the insert and even put dangerous stresses on the spindle and other parts of the machine. In the event of an incident, sufficient stoppage must be taken quickly to limit the damage and the resulting additional costs. Such a shutdown accident not only wastes time and diminishes efficiency, but also may cause irreparable damage to the machine (mostly to the spindle).
Therefore, under the trend of a lightly staffed or fully automated machining environment, the monitoring of the real-time status of the tool has become an essential issue. Tool condition monitoring has therefore been abbreviated to TCM as a term specifically for the study of tool diagnosis and prognosis. Available data suggest that an accurate and reliable TCM system can allow scheduled maintenance, which may save 10% to 40% of the cost of production projects [START_REF] Lamraoui | Indicators for monitoring chatter in milling based on instantaneous angular speeds[END_REF].
As mentioned earlier, in order to adapt to the diverse social requirements, the generated milling paths and selected operating parameters vary according to the customer's needs. In the context of flexible manufacturing, the main difficulties for real-time TCM can be summarized as follows:
(i) The cutter is a consumable item with a relatively small size, to which no sensors can be directly attached.
(ii) The working environment has limited visibility and accessibility due to the presence of coolant and metal chips.
(iii) The condition of the tool may undergo rapid changes at a high rotational speed.
(iv) The milling trajectories are complex and vary from the different tasks with no fixed pattern.
(v) The sources of vibration signals are extremely sophisticated.
Among these obstacles, difficulties (i) and (ii) place demands on signal acquisition.
This thesis comes into this context, aiming to design and demonstrate a new TCM method to overcome difficulties (iii) and (iv) and reliably detect tool states during real-time operations. To accomplish this overall aim, several objectives are listed as follows:
• The first objective is to identify the suitable indirect signal characteristics that can be used to describe the condition of the machining process.
• The second objective is to consider non-stationarity issues of the regime introduced by factors such as trajectory changes, variable loads, etc.
• The third objective is to confirm that the diagnostic tool can handle the massive volume of data that is generated by rapid machining.
• The fourth objective is to ensure that the method can identify fault conditions from sensor data without the need for data training.
Delivering these goals contributes to minimizing the frequency and severity of downtime, and reducing the manual interventions that are common in today's manufacturing.
General strategy and innovations
The kinematic excitation of rotating mechanisms is often synchronized with the shaft. This kind of physical nature determines that most of their output signals are cyclostationary (CS), i.e., they exhibit periodic statistical characteristics [START_REF] Antoni | Cyclostationary modelling of rotating machine vibration signals[END_REF].
Instead of comparing the real-time signal with a rigid reference object, the similarities and the differences inside the signal are explored. Combining the several stages of the typical tool life curve, the inserts will gradually show different degrees of wear from the initial similar state and the correlation between them will continuously decrease. Therefore, this research proposed to take advantage of the inter-insert correlation as the general strategy.
Milling is a continuous long-term assignment in a high-speed and high-temperature working environment. The process involved multiple external interferences including ambient vibration noises, coolant influx, etc., as well as the internal condition changes, such as a tool passing through curved paths, etc. Considering the fact that the disturbance is formed much slower than the high-speed rotation of the tool, therefore, the different teeth can be seen as subject to quasi-equivalent influences in one revolution. After correlation analysis, the results focus on the differences between the tool inserts. This method is suitable to reveal weak fault signatures from strong and sophisticated environmental noises. It thus can analyze the end mill state under both cyclostationary (operation with constant speed, load, and straight trajectory) and cyclo-non-stationary conditions (operation with variable speed, load, and complex trajectory).
The innovations of the proposed approach are embodied in the following aspects:
(i) It is founded on sampling or resampling of the signal in the angular domain, which allows the signal to be segmented in units of the cutting tooth; few works in the TCM literature exploit this tooth-based, angular-domain segmentation (a minimal sketch of this step is given after this list);
(iii) The inter-insert correlation analysis can eliminate the external cyclo-non-stationary factors and focus only on the state of the tool. The high rotational speed characteristic, which is originally seen as monitoring difficulty (iii), reversely helps to approximate the external working environment of each tooth as quasi-equivalent, and thus contributes to a better correlation result;
(iv) There is no need to arrange the trial runs or a large amount of data training to obtain reference signals;
(v) More importantly, this method can be considered versatile, which has the potential to be applicable to a wide range of signal sources from rotating machinery. As far as the authors know, there is no similar research and the proposed method is original as well as innovative.
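To make innovation (i) concrete, the following minimal Python sketch resamples a time-domain signal onto a uniform angular grid using the measured shaft angle and then splits it into tooth-based segments. The signal, speed profile and parameter values are invented for illustration and are not taken from the experiments of this thesis.

```python
import numpy as np

def resample_to_angle(t, x, shaft_angle, samples_per_rev=360):
    """Resample a time-domain signal x(t) onto a uniform angular grid, using the
    instantaneous shaft angle (rad) measured at the same instants, e.g. from a
    rotary encoder. Returns the angle grid and the angle-domain signal."""
    theta = np.arange(shaft_angle[0], shaft_angle[-1], 2.0 * np.pi / samples_per_rev)
    return theta, np.interp(theta, shaft_angle, x)

def tooth_segments(x_angle, samples_per_rev, nz):
    """Split the angle-domain signal into one segment per tooth passage:
    each row holds m = samples_per_rev // nz consecutive angular samples."""
    m = samples_per_rev // nz
    n_seg = len(x_angle) // m
    return x_angle[: n_seg * m].reshape(n_seg, m)

# Toy example: a 3-tooth force-like pattern recorded in time under varying speed.
t = np.linspace(0.0, 1.0, 20000)
shaft_angle = 2.0 * np.pi * (50.0 * t + 2.0 * t ** 2)   # assumed non-constant speed profile
signal = np.sin(3.0 * shaft_angle) + 0.05 * np.random.randn(t.size)
theta, sig_angle = resample_to_angle(t, signal, shaft_angle)
segments = tooth_segments(sig_angle, samples_per_rev=360, nz=3)
print(segments.shape)   # (number of tooth segments, samples per segment)
```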
Organization of the thesis
This thesis is divided into 6 chapters.
In Chapter 1, the thesis is started with a general introduction, including current trends in machining and existing maintenance strategies. We locate the position of this work and delineate its problem and objective. A general strategy is equally set with the declaration of innovation of this study.
In Chapter 2, the theoretical background required for the study is reviewed through three aspects: the milling operation, the construction of the tool condition monitoring system, and the correlation-relevant algorithm. Through the analysis, the source and processing method used to analyze the signals are selected.
In Chapter 3, specific problems are modeled and strategies for correlation analysis are discussed in detail as well as the scope of application.
In Chapter 4, we state the experimental design and implementation details, containing multiple milling trajectories. The data collected through the experiments are pre-processed.
In Chapter 5, the processing results of simulated and experimental data are presented, including the physical significance corresponding to each decomposed component, the validity, and sensitivity analysis of the indicator proposed, and the workflow for fault detection.
In Chapter 6, we seal the thesis with a conclusion and some perspectives.
Introduction
A brief history of machining monitoring
The late 18 th century was a historical turning point in the modernization of human society. Since the first Industrial Revolution (IR), marked by James Watt's invention of the steam engine in 1765, production methods were gradually mechanized. The weavers' looms greatly increased the efficiency of their labor and can be seen as representative of the embryonic industrialization of manufacturing [START_REF] Kosky | Chapter 12 -Manufacturing Engineering[END_REF].
In this period, the concept of monitoring did not yet exist. Because the machines were elementary in construction, no specialist thus was required to take care of the maintenance. It can be said that in those days it took less time and money to do nothing than to schedule a detailed maintenance program. People naturally used the machines until they broke down and then repaired them.
Following the first wave of the IR, society entered a phase of rapid standardization. With the emergence and spread of many important inventions, such as electrification, motors, turbines, and large assembly lines, manual production began to be gradually replaced by mass mechanized production in the late 1870s. In turn, the increased mechanization required more metallic parts (made of cast or wrought iron), which boosted the development of machine tools, creating a healthy cycle. Among enough builders, it is difficult to say exactly who 'invented' the milling machine, but Henry Maudslay was the unquestioned leader in the development of machine tools [START_REF] Roe | English and American Tool Builders[END_REF]. He is the first to build a functional lathe using an innovative combination of known screws, sliding frames, and variable speed gears. The workshops established by him trained a generation of people dedicated to the design and manufacture of machine tools based on his works. Machine tools, including milling machines, turning machines, etc., were developed from the early 20 th century as an important manufacturing tool in workshops.
As technology continues to evolve, the range of products expands, which brings an unparalleled level of complexity to the supply chain. The APT language was developed as an answer to the problems of whether NC could be economically viable given the cost of programming [START_REF] Reintjes | Numerical control: making a new technology[END_REF], and it was adopted as the international standard for CNC machine programming in 1978. During the 1980s, commercial CAM/CAD systems gained in maturity and graphics-based 3D modeling systems were gradually integrated. The main feature of this period was the boom in electronics and information technology, which was brought into production to drive machine automation.
During the same time frame (1967)(1968)(1969), the British company Molins developed the System 24 automated workshop for the first time based on the basic concept of FMS proposed by D.T.N. Williamson (US Patent 4621410). With six multiprocess machine tools working in a modular structure, the tentative goal is to achieve 24/7 continuous processing services under unattended conditions [START_REF]A Competitive assessment of the U.S. flexible manufacturing systems industry[END_REF]. Although it was not completed due to economic and technical difficulties, it brings together ideas such as high-speed milling, cell-based production, computer control (but not CNC in the modern sense), transfer, and storage of workpieces in an attempt to achieve efficient production under automation. By this time, downtime was already having a major impact on production. In the 1970s, Total Productive Maintenance (TPM) has been developed by Seiichi Nakajima in Japan, as a method of physical asset management focused on optimizing equipment performance with the aim of increasing the economic efficiency of production. It emphasized the higher priority of condition monitoring, integrated maintenance into the company's basic strategy, and involved system-wide employees in maintenance activities [START_REF] Prabhuswamy | Statistical Analysis and Reliability Estimation of Total Productive Maintenance[END_REF].
Over the same period, numerous studies on the use of sensors to monitor manufacturing activity state began to be carried out by the academic community [START_REF] Micheletti | In Process Tool Wear Sensors for Cutting Operations[END_REF].
As transistors continued to shrink as predicted by Moore's Law, computer technology gradually possessed the hardware capability to implement online analysis of production data. Sensors are progressively being exploited in laboratory environments to measure and collect data. Real-time monitoring became possible by combining the experience gained from previous conventional maintenance with data-driven analysis statistical models. This allowed asset failure to be predicted/prevented before problems occurred and helped to enable process optimization at a later stage.
Characteristics of operation
Among the many types of machining, the studied case of this thesis is based on conventional milling, which is one of the most important branches in the family tree of the material removal process. Milling is usually performed as a secondary process, following the basic process, such as casting or batch deformation (forging,
drawing, etc.), to determine the final parameters of the geometry and dimensions of the workpiece. The tool is in direct contact with the workpiece to complete the finishing process. Therefore, the condition of the tool is critical to the final quality of the product.
Tool geometry
For a more precise description to be followed, some terms of tool geometry are introduced here first. The corresponding illustrations are in Figure 2.1.
The cutting tool in milling is called a milling cutter. The milling cutter interacts directly with the material, but it does not form a single piece with the machine spindle.
Between these two components, the tool holder provides a critical interface, which mounts and tightens the cutter to ensure that it moves or vibrates as little as possible during the milling process. The cutting edges are known as the teeth in general conversation, whereas the real acting part is the screw-tightened replaceable coated insert. Cutting tool inserts are widely used in practical production due to their significant economical nature. Each insert contains several cutting edges. It allows the tool to be equipped with a brand new milling edge by following three simple steps: un-clamp, rotate the insert to the next available edge, and re-clamp. When all the edges are worn out, the insert has then reached its full potential and is ready to be discarded and replaced. However, it is always a difficult task to determine if the cutting edge is in a sufficient state to continue the productive work, which is also the targeted subject of this research.
Most milling tools are multi-edge tools (single-edge tools are mostly found in turning; and in a very rare case, a single-edge milling tool called fly-cutter might be used). During the milling process, the tool performs a discontinuous cutting operation in a rotational manner. This implies that the teeth of the tool enter and exit the workpiece during each revolution. Such an intermittent action is surely accompanied by significant force impacts and thermal shocks on the rotational basis. Therefore, the tool must be made of a material that is harder than the workpiece and designed by geometry to ensure that it can withstand the harsh working conditions.
Tool cutting condition
During the milling process, a desired relative motion is achieved between the workpiece and the tool to form chips and remove them from the workpiece. In most circumstances, this relative motion is generated by a motion known as the cutting speed V c in conjunction with an operation known as the feed f z . The rotation axis of the cutting tool is perpendicular to the direction of the feed. Figure 2.2 illustrates a basic milling operation. The rotation frequency N (rpm) in milling is defined from V c (m/min) and the radius of the cutter R (mm) as
N = \frac{1000 V_c}{2\pi R}. (2.1)
The instantaneous angular speed ω (rad/s) is also equivalent to N as well as V c ; it can be expressed by converting the units as
\omega = \frac{2\pi N}{60} = \frac{1000 V_c}{60 R}. (2.2)
The feed rate V_f (mm/min) of the tool is the speed of the relative motion between the cutter and the workpiece. It can be calculated from the feed per tooth f_z (mm/tooth), the number of teeth n_z and the rotation frequency N as
V_f = f_z \cdot n_z \cdot N. (2.3)
The material removal rate MRR (mm^3/min) in milling is determined by multiplying the cross-sectional area of the cut and the feed rate V_f. Hence, if the milling operation involves a radial depth of cut a_e (mm) and an axial depth of cut a_p (mm), then the material removal rate is
MRR = a_e \cdot a_p \cdot V_f. (2.4)
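As a quick numerical check of Equations (2.1) to (2.4), the following Python sketch computes the rotation frequency, angular speed, feed rate and material removal rate; the cutting conditions in the example call are purely illustrative and are not the experimental ones.

```python
import math

def milling_kinematics(Vc, R, fz, nz, ae, ap):
    """Basic milling quantities from the cutting conditions.
    Vc: cutting speed (m/min), R: cutter radius (mm), fz: feed per tooth (mm/tooth),
    nz: number of teeth, ae: radial depth of cut (mm), ap: axial depth of cut (mm)."""
    N = 1000.0 * Vc / (2.0 * math.pi * R)   # rotation frequency (rpm), Eq. (2.1)
    omega = 2.0 * math.pi * N / 60.0        # instantaneous angular speed (rad/s), Eq. (2.2)
    Vf = fz * nz * N                        # feed rate (mm/min), Eq. (2.3)
    MRR = ae * ap * Vf                      # material removal rate (mm^3/min), Eq. (2.4)
    return N, omega, Vf, MRR

# Illustrative call: 25 mm diameter, 3-tooth end mill at Vc = 150 m/min, fz = 0.1 mm/tooth.
print(milling_kinematics(Vc=150.0, R=12.5, fz=0.1, nz=3, ae=6.0, ap=2.0))
```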
Tool wear and tool life
The tool can only be put into operation when its cutting edges are able to produce the part with specified surface finish and dimensional tolerances. Tool life is defined as the duration of cutting time after which the tool is no longer usable. In actual milling, there are several modes of tool failure, as listed below [START_REF] Groover | Fundamentals of Modern Manufacturing: Materials, Processes, and Systems[END_REF]:
(i) Fracture failure
This failure mode commonly behaves in the form of brittle fracture and is most frequently seen at the tool point where the cutting force is overly high. Possible solutions are switching to a more ductile tool material or adjusting the milling parameters, For example, reducing the feed rate or the depth of cut, increasing the nose radius, etc.
(ii) Temperature failure
Temperature failure occurs often when the cutting temperature is too high. The material of the tool point is softened by the heat, which leads to the plastic deformation of the originally sharp cutting edge. The common way to prevent this kind of failure is to reduce the cutting speed and use coolant appropriately.
(iii) Progressive wear
During the milling process, the tool works intermittently, and the teeth periodically interact with the workpiece. As the cutting edge enters the workpiece, it is heated; and as it exits the workpiece, it begins to cool. At the same time, the cutting force on the tool varies periodically with the thickness of the chip.
Such cycling of temperature and stresses produces alternating compression and tension on the tool, which leads to gradual fatigue and progressive wear.
If neither of the above two cases leads to a tool failure, then the cutting edge will eventually come to the end of its life due to continuous wear. Although some measures, such as proper use of lubricants, development of higher wearresistant materials, etc., can be taken to slow down this failing process, the tool wear is always an inevitable phenomenon, as a gradual loss of tool material in the contact zone between the tool and the workpiece.
Of the three failure modes mentioned above, the first two need to be avoided as far as possible. In addition to significantly reducing the tool life, they can also cause sudden tool failure. As a result, the surface quality of the workpiece may be damaged, which might lead to the requirement of further rework or even possible scrapping of the whole workpiece. Ideally, therefore, tools should fail in a progressive wear pattern to achieve the theoretical maximum service life and the corresponding economic advantages.
The two main locations where progressive wear occurs on the cutter are the flank and the top rake face. The two common types of wear accordingly are the flank wear, as shown in Figure 2.3(a), and the crater wear, as shown in Figure 2.3(b) [START_REF] Anon | Tool life testing in machining -Part 2[END_REF]. It is necessary to note that if the insert is not replaced in time when these two types of wear reach a certain level, catastrophic deterioration will happen, as also illustrated in Figure 2.3. Crater wear is a cavity in the leading edge surface caused by the friction of the chip flowing along the rake face. Flank wear occurs at the expense of losing a portion of the sharp cutting edge, which greatly influences the quality of the product and is also the most frequent situation in practice. Therefore, it is more important to control the flank wear than the crater wear.
Through extensive experience, a typical tool life curve has been concluded, as illustrated in Figure 2.4. Here, the relationship shown is concerned with flank wear, but the same trend holds for the crater wear as well.
A typical flank wear growth curve with milling time can be classified into three regions [Alt12]:
(i) Initial wear zone
The first zone occurs at the very beginning of the cut and is a break-in period that may last only a few minutes. The brand new, extremely sharp cutting edge will undergo rapid wear during this period until the friction between the tool and the workpiece reaches a relatively steady balance.
(ii) Steady state wear zone
Following the initial wear region, the flank wear on the second stage behaves at a fairly constant rate. Although in practice, the tool wear sometimes might deviate from the theoretical line shape, in approximation, the steady state wear phase can be depicted as a linear function of time for most cases. The slope of this function will be influenced by the workpiece material (e.g. hardness, ductility, etc.) and the cutting conditions (e.g. cutting speed, feed rate and depth of cut, etc.).
(iii) Accelerated wear zone
As the flank wear accumulates to a certain level, the slope of the curve increases exponentially. This marks the tool service has entered the failure zone. At this point, both the overall efficiency and quality of the milling process begin to decrease, accompanied by a significant increase in cutting temperature. The tool must be replaced before it reaches its critical limit to prevent catastrophic
tool deterioration.
On this basis, F.W. Taylor was the first to propose a calculation method for modelling tool life, as
T_t = C_t \cdot V_c^{-p} \cdot (n_z \cdot f_z)^{-q}, (2.5)
where T t (min) presents the estimation of tool life; C t , p and q are constants depending on the given tool-workpiece material pair, the preset milling operation condition (e.g. feed rate, cutting speed, depth of cut, radial engagement of the cut, etc.) and the tool life criterion required.
Since the above constants rely on numerous conditions and immeasurable realistic factors may arise in practice, the industries cannot afford to take such a huge risk and apply the T t simply as the time point for tool replacement. Hence, the Taylor equation can only be used as a reference. It is more important to monitor the current state of the tool in real-time, ensuring the quality of the machined piece as well as the full exploration of the tool life, which is also the topic of ongoing research in this thesis.
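For illustration only, the Taylor relation of Equation (2.5) can be evaluated as follows; the constants C_t, p and q below are hypothetical values, not calibrated ones, which is precisely why the result should be read as a rough reference rather than a replacement time.

```python
def taylor_tool_life(Ct, Vc, nz, fz, p, q):
    """Estimated tool life T_t (min) from Eq. (2.5): T_t = Ct * Vc**(-p) * (nz*fz)**(-q)."""
    return Ct * Vc ** (-p) * (nz * fz) ** (-q)

# Hypothetical constants for some tool-workpiece pair (illustration only):
print(taylor_tool_life(Ct=4.0e7, Vc=150.0, nz=3, fz=0.1, p=3.0, q=0.8))
```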
Chip formation behaviors
2.3.4.1 General case
The most direct and obvious reflection of the current tool condition is the corresponding chip formation on the workpiece. In the general case, it is assumed that each tooth produces a similar cut thickness. The wedge-shaped cutting edge is perpendicular to the direction of the cutting speed V_c. When the tool is forced into the material by the spindle, the workpiece material at the tip is compressed and, under the further forces of shear, the failed partial material is completely separated from its parent material and is discharged as chips. In general, the typical geometric shape produced by milling is a plane surface. However, as can be seen from the bottom of the curves, the resulting plane is actually formed by the tangents of multiple curves.
Therefore, it is necessary to be careful about the match between the feed rate and cutting speed in order to avoid overly sparse curves, which might lead to ribbed leftover material on the surface of the workpiece.
In Figure 2.5, four trochoid-like curves present the paths traversed by the four teeth during the feeding. The painted areas with the corresponding colors on the workpiece are the chips produced as the teeth pass by. It can be observed that the cut thickness is not a constant, but varies in the radial direction of the tool. It will be designated as h c . The cut thickness h c appears to be thickest at the beginning and then gradually thins out.
Figure 2.6 specifies which distance geometrically corresponds to the cut thickness and the scenarios for points generation are summarized in Table 2.2. The exact value of the varying h c can be solved by establishing a system of curve equations.
Assuming that the tool is rotating at a constant speed ω (if not, then integrate to calculate θ), the rotational angle of the tool at a given moment t is
\theta = \omega t. (2.6)
Fig. 2.6: Tool cut thickness
Table 2.2: Scenarios for points generation
Point | Subject     | Time
A     | Tooth (i-1) | t_1
C     | Tooth (i-1) | t'
B     | Tooth i     | t_1 + T
D     | Tooth i     | t_c
O     | Shaft       | t_1
O'    | Shaft       | t'
O''   | Shaft       | t_c
(with t_1 < t' < t_1 + T < t_c)
Taking the vertical upward direction as the standard rotation start, then the instantaneous immersion angle of the i th teeth can be presented as
\theta_i(t) = \theta - \frac{2\pi}{n_z}(i - 1), (2.7)
where i is the subscript indicating the tooth counts (i = 1, 2, . . . , n z ). By splitting the tool movement into the superposition of its own rotation and feed motion, and with the reference of the cycloid or trochoid modeling, the path of teeth can be represented
as
x_i(t) = \frac{n_z f_z \theta(t)}{2\pi} + R \sin\theta_i(t), \qquad y_i(t) = R \cos\theta_i(t). (2.8)
The blue solid curve and the red solid curve in Figure 2.6 were created in MATLAB from the above equations, representing the paths of the (i -1) th and i th tooth, respectively. The blue dashed circle stands for the position of the cutter at the moment t = t 1 , while the (i -1) th insert is at point A. As for the red dashed circle, it represents the position of the tool when t = t 1 + T , where T corresponds to the time of one tooth passage, defined as
T = \frac{2\pi}{n_z \omega}. (2.9)
At this moment (t = t 1 + T ), the i th tooth is in the position of point B, which is the same location of the (i -1) th tooth on the tool circumference at t = t 1 , but superimposed with the translation of the feed. When the process continues to a certain time t c (t c > t 1 + T ), the i th insert follows the red solid curve passing from point B to point D. The corresponding tool position at t = t c is the green dashed circle, whose circle center is marked as point O". The intersection of the line O"D to the path of the (i -1) th insert is noted as point C, which corresponds to the position of the (i -1) th tooth at the moment t , and the center of the circle of the tool at t is noted as O'. The distance of CD is the cut thickness formed by the i th tooth at t = t c . The coordinates of the point O", point D and point C can be express as
x_{O''} = \frac{n_z f_z \theta(t_c)}{2\pi}, \qquad y_{O''} = 0,
x_D = \frac{n_z f_z \theta(t_c)}{2\pi} + R \sin(\theta_i(t_c)), \qquad y_D = R \cos(\theta_i(t_c)),
x_C = \frac{n_z f_z \theta(t')}{2\pi} + R \sin(\theta_{i-1}(t')), \qquad y_C = R \cos(\theta_{i-1}(t')). (2.10)
Since the point O", D and C are in the same straight line, the corresponding equation can be formulated as
\frac{x_C - x_{O''}}{y_C - y_{O''}} = \frac{x_D - x_{O''}}{y_D - y_{O''}}. (2.11)
Substituting Equation 2.10 into Equation 2.11, it can be simplified as
\frac{n_z f_z (\theta(t') - \theta(t_c))}{2\pi} \cos(\theta_i(t_c)) + R \sin(\theta_{i-1}(t') - \theta_i(t_c)) = 0. (2.12)
Since Equation 2.12 contains both trigonometric and algebraic terms (a transcendental equation), the only unknown t' in the equation can be solved with high precision by approximation algorithms, such as the bisection method or the Newton-Raphson method. This enables the calculation of the cut thickness as
h_c = \sqrt{(x_D - x_C)^2 + (y_D - y_C)^2}. (2.13)
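The numerical route described above can be sketched as follows: the transcendental Equation (2.12) is solved for t' by plain bisection within one tooth-passing period, and the cut thickness then follows from Equation (2.13). The cutting conditions in the example are illustrative, and the bracket assumes the tooth is engaged at an immersion angle for which the approximation of Equation (2.18) is valid.

```python
import math

def cut_thickness(tc, i, R, fz, nz, omega, iters=60):
    """Exact cut thickness h_c (Eq. 2.13) of tooth i at time tc, obtained by
    solving the transcendental Equation (2.12) for t' with plain bisection
    (Newton-Raphson would work equally well). Units: mm, rad/s and s."""
    theta = lambda t: omega * t                                     # Eq. (2.6)
    theta_k = lambda t, k: theta(t) - 2.0 * math.pi / nz * (k - 1)  # Eq. (2.7)

    def path(t, k):                                                 # Eq. (2.8)
        x = nz * fz * theta(t) / (2.0 * math.pi) + R * math.sin(theta_k(t, k))
        y = R * math.cos(theta_k(t, k))
        return x, y

    def g(tp):                                                      # Eq. (2.12)
        return (nz * fz * (theta(tp) - theta(tc)) / (2.0 * math.pi) * math.cos(theta_k(tc, i))
                + R * math.sin(theta_k(tp, i - 1) - theta_k(tc, i)))

    T = 2.0 * math.pi / (nz * omega)        # tooth passing period, Eq. (2.9)
    lo, hi = tc - T, tc                     # t' lies within one tooth period (engaged tooth assumed)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    tp = 0.5 * (lo + hi)

    xD, yD = path(tc, i)                    # point D: tooth i at tc
    xC, yC = path(tp, i - 1)                # point C: tooth (i-1) at t'
    return math.hypot(xD - xC, yD - yC)     # Eq. (2.13)

# Illustrative conditions (not the experimental ones): R = 12.5 mm, fz = 0.1 mm/tooth,
# 3 teeth at 1900 rpm; tooth 2 is evaluated at an immersion angle of roughly 68 degrees.
R, fz, nz, omega = 12.5, 0.1, 3, 2.0 * math.pi * 1900.0 / 60.0
tc = 0.0165
print(cut_thickness(tc, 2, R, fz, nz, omega))              # exact value from Eq. (2.13)
print(fz * math.sin(omega * tc - 2.0 * math.pi / nz))      # approximation of Eq. (2.18)
```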
Although the numerical method is accurate, the closed-form cut thickness model provides an approximate solution for ease of use and better physical understanding of the milling process. By applying the cosine law to the triangle OO"C, the cut thickness can be geometrically presented as
h_c = \overline{O''D} - \overline{O''C} = \overline{O''D} + \overline{O'O''} \sin\theta_i - \sqrt{\overline{O'C}^2 - \overline{O'O''}^2 \cos^2\theta_i}, (2.14)
where
\overline{O''D} = \overline{O'C} = R; (2.15)
\overline{O'O''} \approx \overline{OO''} = f_z. (2.16)
For traditional milling, the tool radius is usually much larger than the feed (2\pi R \gg n_z f_z), so the distance OO' can be neglected and the actual distance \overline{OO''}, noted as f_{real}, can simply be replaced by f_z [MAR41; CC13].
Then the cut thickness can be approximated as [MAR45]
h_c \approx f_z \sin\theta_i + R - \sqrt{R^2 - f_z^2 \cos^2\theta_i} (2.17)
\approx f_z \sin\theta_i \quad \text{if} \; -\tfrac{17}{36}\pi < \theta_i(t) < \tfrac{17}{36}\pi. (2.18)
For micro-end-milling, a similar equation using precise feed per tooth f real can be rewritten as [KZ13]
h_c = f_{real} \sin\theta_i + R - \sqrt{R^2 - f_{real}^2 \cos^2\theta_i}
\approx f_z \sin\theta_i - \frac{n_z f_z^2 \cos\theta_i \sin\theta_i}{2\pi R + n_z f_z \cos\theta_i} + R - \sqrt{R^2 - \left(\frac{2\pi R f_z \cos\theta_i}{2\pi R + n_z f_z \cos\theta_i}\right)^2}. (2.19)
The chip formation changes once the tool has become worn. It can be seen that, due to the flank wear, the chips of the 2nd tooth (red-colored zone) become thinner and the part of the undone workload that it leaves behind (green-colored zone) is piled up onto the next tooth. The 3rd insert carries the extra burden of the additional work, but it cannot fully compensate for the error caused by the 2nd tooth. For each revolution afterward, a small strip of ribbed material (black-colored zone) will be left on the machined surface.
Case including consideration of the tool defects
The defect of the cutter might refer to the tool's runout, whose teeth lengths are unequal in the first place. Another possible defect is that the teeth are distributed in an uneven angular position on the circumference of the tool. All of the above are inborn defects brought about by limited manufacturing precision, which are considered minor and controllable compared to the subsequent flank wear or even the break of the insert.
When considering the flank wear phenomenon of the tool, the wear amount of the current i th tooth V b i and the previous (i-1) th tooth V b i-1 needs to be aggregated into the cut thickness model. The condition mentioned in Equation 2.15 is changed to
\overline{O''D} = R - V_{b_i} \quad \text{and} \quad \overline{O'C} = R - V_{b_{i-1}}. (2.20)
With the expansion using a Taylor series, the cut thickness including the wear condition is
h_c^* = f_{real} \sin\theta_i + (R - V_{b_i}) - (R - V_{b_{i-1}}) \sqrt{1 - \left(\frac{f_{real} \cos\theta_i}{R - V_{b_{i-1}}}\right)^2} (2.21)
\approx f_{real} \sin\theta_i + V_{b_{i-1}} - V_{b_i} + \frac{f_{real}^2 \cos^2\theta_i}{2(R - V_{b_{i-1}})}. (2.22)
In some severely worn cases, if V_{b_i} > h_c + V_{b_{i-1}}, then h_c^* is numerically negative, which means that tooth i does not experience the chip load at all. Depending on the scenarios and the requirements, the feed with diverse accuracy (Equations 2.13 to 2.19) can be chosen to solve the equation. In the studied case, Equation 2.21 can be approximated as
h_c^* \approx f_z \sin\theta_i + V_{b_{i-1}} - V_{b_i}. (2.23)
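The load redistribution implied by Equation (2.23) can be illustrated with a few lines of Python; the flank wear values below are hypothetical and only serve to show how a worn insert sheds part of its chip load onto the following one.

```python
import math

def chip_load_with_wear(fz, theta_i, Vb):
    """Per-tooth cut thickness h_c* from the approximation of Eq. (2.23):
    h_c*(i) = fz*sin(theta_i) + Vb[i-1] - Vb[i], clipped at zero when the wear
    difference removes the whole chip load. Vb is ordered tooth 1..nz and the
    predecessor of tooth 1 is tooth nz."""
    nz = len(Vb)
    loads = []
    for i in range(nz):
        h = fz * math.sin(theta_i) + Vb[i - 1] - Vb[i]   # Vb[-1] is tooth nz
        loads.append(max(h, 0.0))                        # negative value: no chip load
    return loads

# Hypothetical flank wear values (mm) for a 3-tooth cutter, all teeth evaluated
# at a common immersion angle of 60 degrees:
print(chip_load_with_wear(fz=0.1, theta_i=math.radians(60), Vb=[0.02, 0.08, 0.03]))
```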
Milling force model
It is already known that different flank wears make the material to be removed unevenly distributed between the teeth. From the inverse deduction method, the parameters related to the cut thickness variation can be considered as logical external expressions for the cutting state. Since it is difficult to detect the cut thickness in real-time, the signals closely related to the chip formation, such as cutting forces and instantaneous angular velocities, etc., can be used as a reasonable source for the analysis of the tool condition. It possesses the potential to be a versatile method for related mechanisms, and the total cutting force is chosen here as an example.
Merchant established an orthogonal cutting model by assuming the shear zone is a very thin plane [START_REF] Merchant | Mechanics of the Metal Cutting Process. II. Plasticity Conditions in Orthogonal Cutting[END_REF]. The resulting force of the tool on the chip is noted as F , whose vector composition can be interpreted into three deconstructions from different aspects, as listed below [START_REF] Groover | Fundamentals of Modern Manufacturing: Materials, Processes, and Systems[END_REF]:
(i) F = F s + F n
F can be decomposed into a shear force F s along the shear plane and a normal force to shear F n . These two explain how the material is failed and discards the chips under the shear deformation. The angle between the shear plane and the work plane is designated as the shear angle φ c .
(ii) F = F_u + F_v
F can also be decomposed into a friction force F u along the rake face of the tool and a normal force to friction F v , both of which describe the process of chip flow along the rake face causing the crater wear. The angle between the rake face and the normal to the reference plane is α r , and the angle formed by the vector triangle of F , F u , F v is denoted as the friction angle β a .
(iii) F = F_r + F_t
F can likewise be decomposed into a tangential force F_t along the direction of the instantaneous cutting speed and a radial force F_r perpendicular to F_t.
Both of these can be seen as the tooth tip rubbing against the newly machined surface, which has a significant effect on flank wear.
As illustrated by the cutting forces diagram in Figure 2.8(b), the resulting force F can be numerically expressed as
F = \frac{F_s}{\cos(\phi_c + \beta_a - \alpha_r)} = \frac{\tau_s \cdot a_p \cdot h_c}{\sin\phi_c \cdot \cos(\phi_c + \beta_a - \alpha_r)}, (2.24)
where τ s is the corresponding shear stress on the shear plane.
Since only the geometry of the tool is known beforehand, the friction angle and the shear angle cannot be predicted accurately enough. Hence, it is customary to define the cutting forces mechanistically as a function of the cutting conditions and of calibrated cutting constants:
F t = K tc • a p • h c + K te • a p ; F r = K r • F t , (2.25)
where
K_{tc} = \frac{\tau_s \cos(\beta_a - \alpha_r)}{\sin\phi_c \cos(\phi_c + \beta_a - \alpha_r)}; \qquad K_r = \tan(\beta_a - \alpha_r). (2.26)
The radial force F r is proportional to the tangential force F t . K tc and K r are the cutting constants contributed by the shearing action in tangential and radial directions, respectively, and K te is the edge coefficient that does not contribute to the shearing and is directly calibrated from metal cutting experiments for a tool-workpiece pair
[Alt12; FDK84].
Taking the flank wear into account on this basis, by substituting the above equations, the total cutting force corresponding to the i th tooth can be derived as
F(\theta_i) = \sqrt{[F_r(\theta_i)]^2 + [F_t(\theta_i)]^2}
= a_p \cdot h_c^* \cdot K_{tc} \cdot \sqrt{K_r^2 + 1} + a_p \cdot K_{te} \cdot \sqrt{K_r^2 + 1}
= a_p \cdot h_c^* \cdot K_c + a_p \cdot K_e
= a_p \cdot (f_z \cdot K_c \cdot \sin\theta_i + K_e) + a_p \cdot K_c \cdot (V_{b_{i-1}} - V_{b_i})
= F_{basic}(\theta_i) + \Delta F(\theta_i), (2.27)
where
F_{basic}(\theta_i) = \bar{F} + \Psi_F(\theta_i) = a_p \cdot K_e + a_p \cdot K_c \cdot f_z \cdot \sin\theta_i (2.28)
and
\Delta F(\theta_i) = a_p \cdot K_c \cdot (V_{b_{i-1}} - V_{b_i}). (2.29)
The term F_{basic}(\theta_i) stands for the standard milling force, with \bar{F} corresponding to the constant force and \Psi_F(\theta_i) representing the shape of the force during the cutting.
∆F (θ i ) embodies the part of the cutting force increased or decreased due to wear. The term (Vb i-1 - Vb i ) emphasizes the similarities and differences between two adjacent teeth, which makes the subsequent processing easier and the results more intuitive.
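To make the decomposition concrete, the following Python sketch evaluates Equations (2.27)-(2.29) for a four-tooth cutter. All coefficient values and the per-tooth wear sequence are illustrative assumptions, not calibrated data, and the function name is only a shorthand introduced for this example.

```python
import numpy as np

def tooth_force(theta, a_p, f_z, K_c, K_e, vb_prev, vb_curr):
    """Total cutting force of one tooth at immersion angle theta (Eq. 2.27).

    F_basic follows Eq. (2.28); the wear-induced deviation Delta_F follows
    Eq. (2.29), driven by the flank-wear difference between adjacent teeth.
    """
    f_basic = a_p * K_e + a_p * K_c * f_z * np.sin(theta)   # Eq. (2.28)
    delta_f = a_p * K_c * (vb_prev - vb_curr)                # Eq. (2.29)
    return f_basic + delta_f

# Illustrative (uncalibrated) values for a 4-tooth end mill.
a_p, f_z = 2.0e-3, 0.1e-3          # axial depth of cut [m], feed per tooth [m]
K_c, K_e = 1.8e9, 3.0e4            # lumped cutting / edge coefficients (assumed)
vb = [0.05e-3, 0.20e-3, 0.06e-3, 0.05e-3]   # hypothetical flank wear per tooth [m]

theta = np.linspace(0.0, np.pi / 2, 50)      # immersion angles of the engaged tooth
for i in range(4):
    # vb[i - 1] wraps around: the "previous" tooth of tooth 1 is the last tooth.
    F = tooth_force(theta, a_p, f_z, K_c, K_e, vb[i - 1], vb[i])
    print(f"tooth {i + 1}: peak force = {F.max():.1f} N")
```

With these assumed values, the heavily worn second tooth removes less material and its neighbour picks up the difference, which is exactly the inter-insert interaction exploited later.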
To observe the state and corresponding behavior of the tool in milling, two periods of signal, presenting the 1 st and the 98 th rotation cycles respectively, are extracted and displayed in Figure 2.9. Since the tool used to collect the data was not a brand-new cutter, there were already varying degrees of slight wear between the cutting inserts at the beginning.
Combined with the typical tool life curve in Section 2.3.3, the tool in the period presented by Figure 2.9(a) has passed the initial wear stage and is in the stable wear state. Besides, Figure 2.9(b) shows significant degradation of the 1 st tooth. Although it still maintains the characteristic waveform of down milling (the tooth experiences the maximum chip load at entry, and then the load gradually decreases), its value drops significantly compared to Figure 2.9(a). In contrast, the cutting force of the 2 nd insert increased remarkably, and in the meantime, there was also a small rise for the 3 rd tooth. This is due to the fact that when the previous tooth wears or breaks, the cutting conditions of the corresponding process often tend to be modified. The next insert will be subjected to compensatory overload. The most intuitive change is between two consecutive teeth, but it may also affect a third or other subsequent teeth. Such a performance matches both the modeling of chip generation in Section 2.3.4 and the cutting forces in Section 2.3.5, establishing a good basis for the subsequent analysis.
Constructive elements of TCM system
The three basic elements for tool state supervision [START_REF] Garcia-Plaza | Surface finish monitoring in taper turning CNC using artificial neural network and multiple regression methods[END_REF] are: the correct choice of sensors for recording the signal in the monitoring environment, the accurate data manipulation and characterization, and reliable decision-making models with low prediction errors. They will be discussed in the following three subsections.
Selection of the sensor system
According to Teti et al. [START_REF] Teti | Advanced monitoring of machining operations[END_REF], measurement techniques used for TCM are traditionally divided into two types:
• Direct measurement: where laser [RMK98; TG08] and optical [START_REF] Lauro | Monitoring the temperature of the milling process using infrared camera[END_REF][START_REF] Dutta | Progressive tool condition monitoring of end milling from machined surface images[END_REF] sensors are often applied to capture data with high accuracy on the machining surface, the cutting edge, or even the geometry of the chips. These methods are commonly only implemented in laboratories due to environmental constraints such as illumination and coolant usage in practical production and their high expense.
• Indirect measurement: where physical quantities correlated with the tool condition, such as cutting forces, vibrations, acoustic emission, or spindle current, are acquired and interpreted. The corresponding sensors are reviewed below.
Dynamometers
The cutting force measured by a dynamometer can be divided into two parts:
• Static part: which presents the average value of the cutting force.
• Dynamic part: which corresponds to the superimposed fluctuation of the cutting force.
Fig. 2.11: Kistler 3-component force dynamometer [Kisb]
The measurement of the static part already provides effective feedback on tool conditions during milling. Tansel et al. [START_REF] Tansel | Tool wear estimation in micro-machining.: Part I: tool usage-cutting force relationship[END_REF] reported that when cutting tools continuously lose their sharp edges, the wear of the tool flank leads to an increasing contact area between the cutting edge and the workpiece, thus causing a gradual rise in the average values of friction and cutting forces. The dynamic part, on the other hand, can further satisfy the need for higher monitoring accuracy. The measurement accuracy has made multi-axis dynamometers desirable for laboratories, but their mounting characteristics, which limit the size of the workpiece, as well as their high cost, make them uncommon in commercial applications [START_REF] Stavropoulos | Tool wear predictability estimation in milling based on multi-sensorial data[END_REF][START_REF] Ghani | Monitoring online cutting tool wear using low-cost technique and user-friendly GUI[END_REF].
Accelerometers
Vibration measurements acquired by accelerometers are often used for chatter detection and are also widely reported in the TCM literature. This is because the vibration signatures satisfy the conditions of robustness, reliability, and applicability. Meanwhile, the accelerometers are inexpensive and easy to install [START_REF] Dimla Snr | The Correlation of Vibration Signal Features to Cutting Tool Wear in a Metal Turning Operation[END_REF]. Since vibration is originally caused by the cutting forces, under ideal conditions, it can provide a periodic signal shape similar to the cutting forces for the diagnosis of tool wear [START_REF] Sevilla-Camacho | Tool failure detection method for highspeed milling using vibration signal and reconfigurable bandpass digital filtering[END_REF].
During machining, the wear of the insert initially increases the elastic deformation of the workpiece material, causing frictional damping that reduces the vibration to a small extent. When the flank wear exceeds a certain threshold, the increased cutting force due to strong friction on the contact surface becomes dominant and excites an increase in vibration amplitude [START_REF] Yalcin M Ertekin | Identification of common sensory features for the control of CNC milling operations under varying cutting conditions[END_REF]. In fact, there are indeed many issues in the practical application of monitoring vibration levels to assess tool conditions. In particular, forced vibrations caused by machine components, such as unbalanced rotating parts, inertial forces in reciprocating parts, etc., are not related to the state of the cutter [START_REF] Ch Lauro | Monitoring and processing signal applied in machining processes-A review[END_REF]. Despite the advantages of accelerometers such as cheapness and ease of installation, the vibration source diversity, complexity, and difficulty of filtering make it hard to target the desirable measurements [START_REF] Cuka | Fuzzy logic based tool condition monitoring for end-milling[END_REF].
AE sensors
Acoustic emission is the phenomenon of radiation of elastic waves in solids. The accumulated elastic energy is rapidly released when the internal structure of material undergoes irreversible changes due to aging, temperature gradients, or external mechanical forces [START_REF] Vicente | A review of machining monitoring systems based on artificial intelligence process models[END_REF]. The released energy is generally in the range of 1 kHz to 1 MHz and creates acoustic emission in the form of mechanical waves [START_REF] Thomas | Metal machining: theory and applications[END_REF].
Much research has shown interest in the application of AE measurement to TCM during milling processes. On the one hand, the AE signal can take into account the primary (due to chip formation), secondary (due to friction between tool and chip), and tertiary (due to friction between tool flanks and workpiece) cutting zones [START_REF] Lu | Study on prediction of surface quality in machining process[END_REF].
On the other hand, the AE technique has a superior signal-to-noise ratio and higher sensitivity compared to other sensors as shown in Figure 2.12. Marinescu et al.
[MA08] reported the possibility of using AE sensory measures to monitor the surface integrity of tools and workpieces, and compared it with conventional monitoring methods. In reference [START_REF] Li | A brief review: acoustic emission method for tool wear monitoring during turning[END_REF], Li inventories and reviews a range of issues concerning AE generation, classification, signal processing, and several methods for estimating tool wear using AE sensors.
Although AE sensors are inexpensive and easy to install, Jemielniak [START_REF] Jemielniak | Some aspects of AE application in tool condition monitoring[END_REF] noted that the range of cutting operations must be tested in advance in order to adjust the amplifier gain and avoid signal distortion due to sensor overload. According to Zhou et al. [START_REF] Zhou | Review of tool condition monitoring methods in milling processes[END_REF], intermittent cutting leads to the appearance of AE signal spikes, which complicates the analysis. Other studies have argued that the use of AE sensors for tool wear is controversial and that they are better suited as additional sensors for increased reliability, as they are more sensitive to noise and changes in cutting conditions than to the condition of the tool itself [START_REF] Zhu | Sparse representation and its applications in micro-milling condition monitoring: noise separation and tool condition monitoring[END_REF].
Current sensors
Current sensors can sense changes in cutting force indirectly through the torque output from the spindle motor. Compared to dynamometers, current sensors do not require extensive wiring and have no limitations on the size of the workpieces, making them simpler and more economical to apply [START_REF] Ch Lauro | Monitoring and processing signal applied in machining processes-A review[END_REF]. However, due to the inertia of the motor rotor, the sensing bandwidth of current sensors is limited; they cannot observe the high-frequency components of the cutting force and are only suitable for monitoring low-speed events. With the development of high-speed machining, current sensors are more and more considered as complementary information for diagnosing tool wear [Gho+07; AR10].
Fig. 2.12: Sensor application versus level of precision
Combining the results of Ertekin et al. [EKT03], who used three different types of materials to test a dynamometer, an AE sensor, and an accelerometer, with the information above, the application and robustness of the different sensor signal characteristics are summarized in Figure 2.12. Considering that the focus of this thesis is on the analysis of tool wear/breakage, that the data acquisition is performed in the laboratory, and the accuracy, reliability, and response speed of the signal sources, the cutting force measured by a dynamometer is finally selected for taking the measurements. As a future step, it would be appropriate to use signals closer to actual production to validate the indicators.
Feature extraction
After the sensor is selected, the acquired analog signal is normally preprocessed (digital conversion, low-pass/band-pass filtering, etc.) in preparation for feature extraction.
Time domain analysis
Time domain analysis refers to the display of response parameters as a function of time. The signal obtained by equal-time measurements is univariate and can present an overall picture of the time series under study. A measuring signal x(t) containing N samples can be generally described as
$$x_j = x(t_0 + (j-1)\Delta t), \qquad (2.30)$$
where j = 1, 2, ..., N, t 0 is the beginning time of the record, and ∆t is the sampling interval. The commonly used statistical features are listed in Table 2.3. The authors of [Moh+19] adopted vibration signatures in the time domain as representatives to analyze the tool condition under optimum process parameters utilizing various coolants.
The results show that the arithmetic mean and the skewness rise continuously with increasing flank wear.
Besides the statistical features, the more advanced feature generating approach in the time domain is time series modeling. The main techniques of time series analysis frequently used in TCM are the auto-regressive (AR), moving average (MA), and auto-regressive moving average (ARMA) models [START_REF] Bhattacharyya | Cutting forcebased real-time estimation of tool wear in face milling using a combination of signal processing techniques[END_REF]. Due to the high computing load inadequate for online process monitoring, the 1 st or the 1 st and the 2 nd orders of AR, MA or ARMA coefficients are often used as the feature representatives [START_REF] Dong | Bayesian-inference-based neural networks for tool wear estimation[END_REF]. Nouri et al. [START_REF] Nouri | Real-time tool wear monitoring in milling using a cutting condition independent method[END_REF] proposed an effective algorithm based on MA from cutting force data to estimate the tool wear in face milling. Chelladurai et al.
[CJV08] applied ARMA analysis to validate the emulation differences between actual and created artificial flank wears.
The features extracted in the time domain have corresponding well-defined physical meanings and generally present linear characteristics, which are sufficient for stationary machining like turning [START_REF] Zhu | Wavelet analysis of sensor signals for tool condition monitoring: A review and some new results[END_REF]. Yet they are not suitable for non-stationary machining, which is however the most common situation in industrial plants, including end milling, as discussed in this thesis.
Mean: $\bar{x} = \frac{1}{N}\sum_{j=1}^{N} x_j$. Arithmetic mean of the signal.
Root mean square: $x_{RMS} = \sqrt{\frac{1}{N}\sum_{j=1}^{N} x_j^2}$. Statistical measure of the intensity of the signal.
Variance: $\sigma^2 = \frac{1}{N-1}\sum_{j=1}^{N} (x_j - \bar{x})^2$. Non-negative value indicating the spread of the signal data around its mean value; equal to the standard deviation squared.
Skewness: $Sk = \frac{1}{N}\sum_{j=1}^{N} \frac{(x_j - \bar{x})^3}{\sigma^3}$. A measure of the asymmetry of the probability distribution of the signal.
Kurtosis: $Ku = \frac{1}{N}\sum_{j=1}^{N} \frac{(x_j - \bar{x})^4}{\sigma^4}$. A measure of the "peakedness" of the probability distribution of the signal; higher kurtosis means more of the variance is the result of infrequent extreme deviations, as opposed to frequent modestly sized deviations.
Signal power: $P = \frac{1}{N}\sum_{j=1}^{N} x_j^2$. Equivalent to the square of the RMS value.
Peak-to-peak amplitude: $x_{pk\text{-}pk} = \max(x_j) - \min(x_j)$. The difference between the maximum amplitude and minimum amplitude of the signal.
Crest factor: $x_{CF} = \max(x_j)/x_{RMS}$. The peak amplitude relative to the signal RMS.
Table 2.3: Time domain features and descriptions
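As a quick illustration of Table 2.3, the sketch below computes these statistics for a single segment with NumPy. The helper name and the synthetic segment are assumptions made only for the example.

```python
import numpy as np

def time_domain_features(x):
    """Statistical features of Table 2.3 for a 1-D signal segment x."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    rms = np.sqrt(np.mean(x ** 2))
    var = x.var(ddof=1)                      # 1/(N-1) normalisation
    sigma = np.sqrt(var)
    return {
        "mean": mean,
        "rms": rms,
        "variance": var,
        "skewness": np.mean((x - mean) ** 3) / sigma ** 3,
        "kurtosis": np.mean((x - mean) ** 4) / sigma ** 4,
        "power": np.mean(x ** 2),            # equals rms**2
        "peak_to_peak": x.max() - x.min(),
        "crest_factor": x.max() / rms,
    }

# Example on a synthetic tooth-passage segment.
segment = np.sin(np.linspace(0, np.pi, 256)) + 0.05 * np.random.randn(256)
print(time_domain_features(segment))
```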
Frequency and time-frequency domain analysis
Through fast Fourier transform (FFT), the time domain data can be transferred into the frequency domain for the investigation of frequency structure and harmonic
components of the signal. The main advantage of carrying out signal processing in the frequency domain rather than the time domain is its capacity for spectrum display, which allows easy reorganization and segregation of the frequency components of interest.
Similar to the study in the time domain, data that have been converted to the frequency domain can also provide frequential statistics. A power spectrum s(f ) calculated by the square of FFT magnitude of x(t) can be written as
s j = s(f 0 + (j -1)∆f ), (2.31)
where j = 1, 2, ..., N ; f 0 is the beginning frequency of the corresponding spectrum;
∆f is the sampling spacing. The general statistical features are listed in Table 2.4.
Mean of total band power: $\bar{s} = \frac{1}{N}\sum_{j=1}^{N} s_j$. Arithmetic mean of the frequency power for a selected band of the frequency spectrum.
Variance of band power: $\sigma_s^2 = \frac{1}{N-1}\sum_{j=1}^{N} (s_j - \bar{s})^2$. Non-negative value indicating the spread of the frequency magnitude data.
Skewness of band power: $Sk_s = \frac{1}{N}\sum_{j=1}^{N} \frac{(s_j - \bar{s})^3}{\sigma_s^3}$. A measure of the asymmetry of the probability distribution of the spectra; also termed the relative spectral peak.
Kurtosis of band power: $Ku_s = \frac{1}{N}\sum_{j=1}^{N} \frac{(s_j - \bar{s})^4}{\sigma_s^4}$. A measure of the "peakedness" of the probability distribution of the spectra.
Table 2.4: Frequency domain features and descriptions
In the angular domain, the signal is defined on the basis of constant angular intervals ∆θ and can be described in the same format as the time domain signal as
$$x_j = x(\theta_0 + (j-1)\Delta\theta), \qquad (2.32)$$
where j = 1, 2, ..., N and θ 0 is the initial angle. Angle-based recording brings the possibility to yield stationarity in the angular view regardless of the speed variation.
It also provides the potential to detect angular periodicity by involving kinematic relationships on the rotating machine under consideration [START_REF] Renaudin | Natural roller bearing fault detection by angular measurement of true instantaneous angular speed[END_REF].
In order to acquire such an angular signal, two classes of solution are distinguished:
(i) Angular sampling
Angular sampling is a direct measurement, whose signal acquisition is automatically synchronized with the impulses of a tachometer/encoder [START_REF] Antoni | Effective vibration analysis of IC engines using cyclostationarity. Part IA methodology for condition monitoring[END_REF].
But it is currently considered difficult to set up, both in terms of financial and technical concerns. Therefore, the scientific community often chooses to resample the time-sampled signal into the angular domain a posteriori, using the shaft position information; a minimal sketch of this resampling is given below.
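The snippet maps a time-sampled signal onto a uniform angle grid by interpolation, assuming the cumulative shaft angle is available from an encoder or from integration of the speed; the speed profile used here is synthetic.

```python
import numpy as np

def resample_to_angle(t, x, theta, d_theta):
    """Resample a time-sampled signal x(t) onto a uniform angular grid.

    t, x    : time stamps and signal samples (same length)
    theta   : cumulative shaft angle at each time stamp [rad], monotonically increasing
    d_theta : desired constant angular step [rad]
    """
    theta_uniform = np.arange(theta[0], theta[-1], d_theta)
    # Invert theta(t) by interpolation, then read x at those time instants.
    t_at_angle = np.interp(theta_uniform, theta, t)
    x_angle = np.interp(t_at_angle, t, x)
    return theta_uniform, x_angle

# Example with a slightly fluctuating spindle speed.
fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
omega = 2 * np.pi * 50 * (1 + 0.02 * np.sin(2 * np.pi * 2 * t))   # rad/s
theta = np.cumsum(omega) / fs                                      # integrated angle
x = np.cos(4 * theta)                                              # 4 events per revolution
theta_u, x_u = resample_to_angle(t, x, theta, d_theta=2 * np.pi / 512)
```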
Cyclo-non-stationarity
Since the CS paradigm was adopted in the exploration of TCM, its application has always been limited to stationary machine regimes. Although angle-CS can relax these restrictions to some extent, it is still not sufficient to handle the large fluctuations in the operating conditions [START_REF] Antoni | Cyclostationarity in condition monitoring: 10 years after[END_REF]. Therefore, cyclo-non-stationary (CNS)
signals have been recently introduced to extend the cyclostationarity to cases where a signal still conserves short-term cyclic patterns related to the angular rotation, but at the same time is (strongly) non-stationary on a long-term basis [START_REF] Abboud | The spectral analysis of cyclo-non-stationary signals[END_REF].
In the milling context, the typical situation can occur in the following machine regimes:
• Start-up and shut-down periods, where the tool experiences dramatic rotational speed variation;
• Direction changing periods, where the tool steers according to preset non-linear trajectory and encounters the change in chip volume.
The signal generated at this time is still based on the periodic rotation, but the former is subject to amplitude modulation, while the latter is affected by phase modulation.
Such distortions can jeopardize the effectiveness of numerous processing methods, and how far the existing tools can be extended to such cases remains controversial [START_REF] Abboud | The spectral analysis of cyclo-non-stationary signals[END_REF]. But there is no denying that angular domain analysis has an attractive potential. This thesis also seeks to follow this direction, drawing on the strengths of angular analysis while simplifying the feature extraction process to facilitate industrial implementation.
Action for decision making
Once the features are extracted, appropriate criteria need to be set to determine the condition of the tool and whether to make an alert. A large number of schemes, techniques, and paradigms are used to develop decision-making support systems. The objective is to find a balance between good detection probability and false alarms based on the data features.
Receiver operating characteristic (ROC) curve is introduced for the purpose of assessing the developed feature as well as locating an adequate threshold for actions.
It works based on the probabilities calculated from the confusion matrix. By plotting the true positive rate (TPR) versus the false positive rate (FPR), the curve shows the trade-off between the TPR and the FPR for the varying value of that threshold. There will always be a point at (0,0) where every case is classified as normal; similarly, every case is classified as abnormal at point (1,1). The area under the curve (AUC) is a metric for the evaluation of the feature classifier. Figure 2.14 demonstrates three types of performance for ROC curves. An ideal feature for a classifier is like Figure 2.14(a), with AUC = 1. It may imply the presence of a threshold perfectly labelling each detection in the test data [START_REF] Matlab | Evaluate the performance of machine learning classification models[END_REF]. At the same time, it is also important to check whether the amount of data is insufficient or the classifier is over-fitted. In most cases, the ROC curve does not reach such an ideal state, but rather looks like the shape of Figure 2.14(b).
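For illustration, a threshold sweep over a scalar wear feature can be turned into a ROC curve and an AUC value as sketched below; labels and feature values are synthetic, and the convention that a larger feature value indicates a worn tool is an assumption made for the example.

```python
import numpy as np

def roc_curve(feature, label):
    """TPR/FPR pairs obtained by sweeping a threshold over a scalar feature.

    label = 1 marks the abnormal (worn) class, 0 the normal class.
    A case is declared abnormal when feature >= threshold.
    """
    thresholds = np.sort(np.unique(feature))[::-1]
    tpr, fpr = [], []
    pos, neg = np.sum(label == 1), np.sum(label == 0)
    for th in thresholds:
        pred = feature >= th
        tpr.append(np.sum(pred & (label == 1)) / pos)
        fpr.append(np.sum(pred & (label == 0)) / neg)
    return np.array([0.0] + fpr + [1.0]), np.array([0.0] + tpr + [1.0])

# Synthetic example: worn cases tend to have larger feature values.
rng = np.random.default_rng(0)
feature = np.concatenate([rng.normal(0.3, 0.1, 200), rng.normal(0.6, 0.1, 50)])
label = np.concatenate([np.zeros(200, dtype=int), np.ones(50, dtype=int)])
fpr, tpr = roc_curve(feature, label)
auc = np.trapz(tpr, fpr)          # area under the curve
print(f"AUC = {auc:.3f}")
```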
Correlation-based monitoring
Through the analysis of the end milling operation behaviors in Section 2.3 and the study of the TCM process in Section 2.4, the following facts were brought to our attention:
• The structure of the tool generally contains multiple teeth with interactive characteristics.
• The most concerning wear type for end mill is flank wear. The wear value V b exhibits linear under the normal conditions and increases exponentially as the tool approaches the service life (referring back to Figure 2.4).
• The condition of the teeth affects the chips formation, which in turn is reflected in the indirect signals.
• The end mill, as a tool functioning in a rotary manner, has the advantage of inherent periodicity. Carrying out the study in the angular domain can avoid, to a certain extent, the irregularities in the time domain due to speed variation. However, few studies are devoted to this direction.
• Comparison principle to diagnose the tool status: the assessment is based on the correlation between the real-time monitoring signal and a pre-defined standard. A common method of obtaining this reference in current manufacturing is to perform a trial cut at the same working parameters. However, as described in Chapter 1, this is not financially acceptable for the trend towards flexible, customized production in small quantities, and it is not robust either, especially when non-stationary operations are involved. The cognitive paradigm is another way to pre-acquire the reference; although relatively stable, it has encountered challenges in data training due to the high costs of data acquisition and insufficient development of storage and transmission technology [START_REF] Zhou | A new tool wear condition monitoring method based on deep learning under small samples[END_REF].
Prasad and Babu [START_REF] Srinivasa | Correlation between vibration amplitude and tool wear in turning: Numerical and experimental analysis[END_REF] tried to find a better solution in terms of cost. They proposed a 3D finite element simulation model in their study of dry turning to replace the actual test cut as a standard for correlation analysis. However, compared to turning, milling follows a more complex trajectory; its simulation is more challenging and its reliability is yet to be verified. Furthermore, it is also a complicated undertaking to re-simulate different milling conditions (speed and trajectory variations) for each machining task.
With the purpose of excluding the influence of external operating factors on the TCM, Fong et al. [START_REF] Mun | Investigation on universal tool wear measurement technique using image-based cross-correlation analysis[END_REF] used the original tool image as a benchmark and developed an offline monitoring system by cross-correlating the worn tool image with the initial one. Such an intuitive approach is indeed independent of different milling conditions, but the accessibility of the machine is always a restriction, limiting its development from offline to online monitoring.
The methods mentioned above are all correlation-based but still differ from the ideas presented in this thesis. To the authors' knowledge, a study using inter-insert correlation as the detection feature, as proposed in this work, has not been published in the field of TCM.
For the concrete implementations of the proposed simple concept, the segmentation previously used in the angle SA is drawn upon for separating the signal corresponding to its unit of teeth. The specific way in which these segments are correlated is worth exploring.
Relationships between variables
Relevant relationships between variables can be subdivided into parallel and dependent relationships. Correlation analysis is appropriate for the former, while the latter needs to be processed by regression analysis [START_REF] Chatfield | Introduction to multivariate analysis[END_REF]. In the context of the present discussion, it is clear that the teeth of the tool are considered as being on an equal footing. Therefore, the correlation method to be employed should be free of any distinction between dependent and independent variables.
In order to measure the degree of correlation, several correlation coefficients
were derived and defined to numerically assess the relevance between two variables.
Depending on the type of variable requiring comparison (such as continuous variables, nominal variables indicating on/off state, etc.), an appropriate correlation coefficient is suggested for each combination. The Pearson correlation coefficient (PCC) is the most frequently used correlation calculation. It is a measure of the linear dependence between two sets of random variables A and B, and can be defined as
R(A, B) = Cov(A, B) σ A σ B , (2.33)
where Cov(A, B) represents the covariance of A and B, and σ A and σ B are relatively the standard deviation of A and B. As its foundation is the normalization of the covariance of A and B, the correlation result will always be between -1 and 1, where 1 is total positive linear correlation, 0 is no linear correlation, and -1 is total negative linear correlation [START_REF] Rodgers | Thirteen Ways to Look at the Correlation Coefficient[END_REF].
However, the traditional correlation analysis like Pearson coefficient is always a pairwise correlation method, which can only provide a statistical relationship between two random variables [START_REF] Asuero | The Correlation Coefficient: An Overview[END_REF]. The tooth number n z of the common end-mill is often 2 to 6 or more. The study first attempted to extend the Pearson coefficient to correlate with multiple sets of variables at the same time. But the results showed obvious limitations. The concerned processing with specific experimental data can be found in Appendix B.
Correlation matrix
The correlation matrix is the first method that comes to mind as it is the nature transition for correlation analysis from two variables to multiple variables. Suppose there are n segments, then the correlation matrix is
$$\mathbf{R} = \begin{bmatrix} R_{1,1} & R_{1,2} & \cdots & R_{1,n} \\ R_{2,1} & R_{2,2} & \cdots & R_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ R_{n,1} & R_{n,2} & \cdots & R_{n,n} \end{bmatrix}, \qquad (2.34)$$
containing the Pearson correlation coefficients calculated from every combination of two segments. R is a symmetric matrix and the elements on the diagonal are equal to 1, representing self-correlation. Corresponding to the combinations of segments, the matrix has $C_n^2$ coefficients that do reflect the inter-insert relationships, but they are too fragmented to provide a simple and comprehensive expression. The concerned processing with specific experimental data can be found in Appendix B.
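For completeness, the correlation matrix of Equation (2.34) can be computed in a few lines; the segment matrix below is synthetic, and the distorted column only mimics a worn tooth.

```python
import numpy as np

# D_seg: one column per tooth segment (m samples x n segments), synthetic here.
rng = np.random.default_rng(1)
m, n = 360, 4
base = np.sin(np.linspace(0, np.pi, m))[:, None]            # common tooth waveform
D_seg = base + 0.02 * rng.standard_normal((m, n))
D_seg[:, 1] += 0.1 * np.sin(np.linspace(0, 6 * np.pi, m))    # distorted "worn" segment

# Pearson correlation matrix between the n segments (Eq. 2.34).
R = np.corrcoef(D_seg, rowvar=False)
print(np.round(R, 3))          # n x n, symmetric, ones on the diagonal
```

Because the PCC is scale invariant, a pure amplitude loss on one tooth barely changes R, which is one more reason why the pairwise matrix alone is not a convenient indicator.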
Therefore, the direction of the literature review gradually tapped into multivariate analysis, trying to combine it with correlation.
Multivariate correlation analysis
A wide variety of multivariate analysis methods are available in scientific research. The choice of the most appropriate method depends on the type of data and problem as well as the objectives that the analysis is expected to achieve. As a result, four requirements were proposed as criteria for finding a suitable multivariate analysis model for correlating the tooth-based segments:
• The model needs to be able to analyze multiple variables simultaneously and produce a comprehensive result.
• The model should be capable of extracting features that reflect the tool condition while providing the appropriate interpretation in a physical sense.
• The model could reduce the size of the original information while capturing the main characteristics.
• The model is not required to distinguish the independent and dependent variables.
Among the common models, cluster analysis and discriminant analysis are targeted at classification problems. Multivariate analysis of variance (MANOVA) requires a distinction between the independent and dependent variables. And canonical correlation analysis (CCA) is an extension of multiple regression analysis, for summarising the joint variation in two sets of variables [START_REF] Chatfield | Introduction to multivariate analysis[END_REF]. Based on the evaluation of the above criteria, principal component analysis (PCA) is considered to be the more appropriate model for current needs.
It is worth noting that factor analysis is often mentioned alongside principal component analysis, as they both simplify the function of the data by analyzing correlations between variables. However, FA is more oriented towards identifying the latent structure among the variables and estimating the loading on each factor of individual variables. According to [START_REF] Chatfield | Introduction to multivariate analysis[END_REF], it has several limitations. A large number of assumptions have to be made in setting up the FA model. Besides, the number of factors is unknown and needs to be obtained by complex sequential tests or external consideration based on the proposed assumptions. The loading of the factors will change as the number of factors changes. PCA, on the other hand, is dedicated to finding the principal components of the best linear combination of variables. It requires neither the specification of any underlying statistical model to account for the residual error nor any assumptions about the probability distribution of the original variables. The components derived in PCA are unique (except where there are equal eigenvalues) and so stay the same as one varies the number of components that are thought to be worth including.
Principal component analysis (PCA)
PCA can be traced back as far as 1901 to the work of Pearson. It was gradually popularised during the 20 th century and is widely applied since then for describing complex multivariate problems [START_REF] Jolicoeur | Size and shape variation in the painted turtle. A principal component analysis[END_REF].
PCA is a variable-oriented technique, with no distinction between independent and dependent variables, and is suitable for situations where variables are generated on an equal footing. The general aim of PCA is to construct overall indicators after
analyzing correlations between multiple variables, uncovering the dominant combinations of features that describe as much of the data as possible.
To achieve the aforementioned objectives, its main technical thrust lies in transforming the original, possibly correlated variables into a new set of uncorrelated components ordered by the variance they capture. When n = 2, the principle of PCA can be interpreted in a geometric way, as illustrated in Figure 2.15 (schematic diagram of the PCA principle). All data points are plotted using variable 1 and variable 2 as coordinates. Then the averages of the variable 1 and variable 2 sequences are calculated as the center of the data. To facilitate the computation, all data points are shifted so that the center lies on the origin of the graph, as shown in Figure 2.15. If it is desired to represent two-dimensional data points in one dimension, the work required is to locate a line going through the origin that minimizes the sum of distances from the data to that line. Since the distance from each data point to the origin remains constant, according to the Pythagorean theorem, requiring the minimum sum of distances from the data to the line is equivalent to requiring the projections of the sample points on this line to be as separated as possible. This level of dispersion can be expressed mathematically as the variance of the sample points.
Suppose the mean-subtracted sample matrix is
$$\mathbf{X} = \begin{bmatrix} a_1 & a_2 & \cdots & a_m \\ b_1 & b_2 & \cdots & b_m \end{bmatrix}. \qquad (2.37)$$
The covariance matrix of the data
$$\mathbf{C} = \frac{1}{m}\mathbf{X}\mathbf{X}^{\top} = \begin{bmatrix} \frac{1}{m}\sum_{j=1}^{m} a_j^2 & \frac{1}{m}\sum_{j=1}^{m} a_j b_j \\ \frac{1}{m}\sum_{j=1}^{m} a_j b_j & \frac{1}{m}\sum_{j=1}^{m} b_j^2 \end{bmatrix} \qquad (2.38)$$
can be employed to conclude these two requirements. In order to satisfy the constraints, this covariance matrix needs to be diagonalised. Let Y = PX, then
$$\mathbf{Z} = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix} = \frac{1}{m}\mathbf{Y}\mathbf{Y}^{\top} = \frac{1}{m}(\mathbf{P}\mathbf{X})(\mathbf{P}\mathbf{X})^{\top} = \mathbf{P}\left(\frac{1}{m}\mathbf{X}\mathbf{X}^{\top}\right)\mathbf{P}^{\top} = \mathbf{P}\mathbf{C}\mathbf{P}^{\top}, \qquad (2.39)$$
where C is the original covariance matrix, and the matrix Z obtained after diagonalization has all off-diagonal elements equal to zero. The optimisation objective of PCA now becomes finding a matrix P such that $\mathbf{P}\mathbf{C}\mathbf{P}^{\top}$ is a diagonal matrix with its diagonal elements arranged in descending order. Each row of P is an eigenvector of C, corresponding to an eigenvalue on the diagonal of Z arranged in descending order. Depending on the required precision, the percentage of variance
$$\alpha = \frac{\sum_{j=1}^{z} \lambda_j}{\sum_{j=1}^{n} \lambda_j} \qquad (2.40)$$
can be used to determine the first z dimensions that need to be retained from the n-dimensional data volume.
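A compact numerical sketch of this procedure (mean-centring, covariance, eigendecomposition, and retention of z components according to Eq. (2.40)) is given below; the 95 % threshold and the test data are arbitrary choices made only for the example.

```python
import numpy as np

def pca(X, alpha_min=0.95):
    """Basic PCA on X (m samples x n variables) via the covariance matrix.

    Returns the projection matrix P (rows = eigenvectors, sorted by eigenvalue)
    and the number z of components needed to reach the variance ratio alpha_min.
    """
    Xc = X - X.mean(axis=0)                      # zero-centre each variable
    C = (Xc.T @ Xc) / Xc.shape[0]                # covariance matrix (cf. Eq. 2.38)
    eigval, eigvec = np.linalg.eigh(C)           # symmetric matrix, ascending order
    order = np.argsort(eigval)[::-1]
    eigval, P = eigval[order], eigvec[:, order].T
    alpha = np.cumsum(eigval) / eigval.sum()     # percentage variance (Eq. 2.40)
    z = int(np.searchsorted(alpha, alpha_min) + 1)
    return P, eigval, z

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 4)) @ np.diag([3.0, 1.0, 0.3, 0.1])
P, lam, z = pca(X)
print("eigenvalues:", np.round(lam, 3), " retained components:", z)
```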
The traditional calculation of PCA described above is based on the eigendecomposition of the covariance matrix, which squares the condition number, i.e. the digits lost by the roundoff errors will be doubled. When confronted with the continuously updating large input matrix, it could cause problems. Therefore, singular value decomposition (SVD) is introduced in this context. The SVD algorithm typically works by bidiagonalization or similar methods that avoid forming the covariance matrix, and thus provides higher numerical precision [START_REF] David C Lay | Linear algebra and its applications[END_REF].
Singular value decomposition (SVD)
SVD is one of the most well used and general-purpose practical tools in numerical linear algebra for data reduction processing. It can help reducing high dimensional data into the key features that are necessary for analyzing, understanding, and describing [START_REF] Wall | Singular Value Decomposition and Principal Component Analysis[END_REF].
A matrix data set
$$\mathbf{X} = \begin{bmatrix} | & | & & | \\ \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_n \\ | & | & & | \end{bmatrix} \in \mathbb{R}^{m\times n}, \qquad (2.41)$$
can be seen as a collection of a certain set of experiments. Each column vector
x i ∈ R m×1 (i = 1, 2, ..., n) presents a group of measurements of the physical system state that is evolving in time from experiments with m data points. Whether n is greater than m or m is greater than n (assuming n < m in the following discussion),
the SVD always has a unique matrix decomposition that exists for every input matrix X. The data set can be decomposed into three parts as
$$\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top} = \begin{bmatrix} | & | & & | \\ \mathbf{u}_1 & \mathbf{u}_2 & \cdots & \mathbf{u}_m \\ | & | & & | \end{bmatrix} \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ & \mathbf{0} & \end{bmatrix} \begin{bmatrix} - & \mathbf{v}_1^{\top} & - \\ - & \mathbf{v}_2^{\top} & - \\ & \vdots & \\ - & \mathbf{v}_n^{\top} & - \end{bmatrix}, \qquad (2.42)$$
where U ∈ R m×m and V ∈ R n×n are unitary matrices, and Σ ∈ R m×n is a matrix with real, non-negative entries on the diagonal and zeros off the diagonal. The twodimensional geometric interpretation of the SVD operation is illustrated in Figure 2.16.
Fig. 2.16: Visualization of the SVD [START_REF] Strang | The fundamental theorem of linear algebra[END_REF]
SVD can provide systematic interpretations in terms of correlations among the columns of the object matrix X and correlations among the rows of X. When the decomposed components are substituted into the correlation matrix among the columns of X, the result corresponds to the definition of an eigendecomposition, as expressed below:
$$\mathbf{X}^{\top}\mathbf{X} = \mathbf{V}\boldsymbol{\Sigma}^{\top}\mathbf{U}^{\top}\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top} = \mathbf{V}\boldsymbol{\Sigma}^{2}\mathbf{V}^{\top} \;\Rightarrow\; (\mathbf{X}^{\top}\mathbf{X})\,\mathbf{V} = \mathbf{V}\boldsymbol{\Sigma}^{2}. \qquad (2.43)$$
It means that V and Σ 2 contain the eigenvectors and eigenvalues of the column-wise correlation matrix $\mathbf{X}^{\top}\mathbf{X}$. Similarly, U and Σ 2 contain the eigenvectors and eigenvalues of the row-wise correlation matrix $\mathbf{X}\mathbf{X}^{\top}$.
If taking this decomposition into a physical explanation without looking at the specific application scenarios, then U is the hierarchically arranged "eigen" information about the measurements, and V is essentially the "eigen" time series that stands for how each mode U evolves in the duration of the process. The eigenvalues in Σ represent the amount of energy that each of these column vectors captures, hierarchically arranged in order of importance [START_REF] Brunton | Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control[END_REF].
For a better understanding, one can simplify Equation 2.42 in the following way:
$$\mathbf{X} = \sigma_1 \mathbf{u}_1 \mathbf{v}_1^{\top} + \sigma_2 \mathbf{u}_2 \mathbf{v}_2^{\top} + \ldots + \sigma_n \mathbf{u}_n \mathbf{v}_n^{\top}. \qquad (2.44)$$
Here, σ i are the ordered singular values, u i is an m-dimensional column vector, and v i is an n-dimensional column vector. The matrix X is decomposed by SVD into a weighted, ordered sum of separable matrices $\sigma_i \mathbf{u}_i \mathbf{v}_i^{\top}$. These matrices can be considered as an n-order superposition arranged by their ability to approximate the original input matrix. This is the so-called SVD separable model.
Although the first singular value occupies the dominant position, the second, third, and even higher order singular values, as long as they are non-zero, also have a certain meaningful value, such as the representation of noise in the measurement, etc.
For estimating the ability of the i th order singular value to approximate the original matrix, the following separability index
$$\alpha_{ki} = \frac{\sigma_{ki}^2}{\sum_{j=1}^{n} \sigma_{kj}^2}, \qquad (2.45)$$
is put forward to precisely indicate the nature of its separability [START_REF] Wall | Singular Value Decomposition and Principal Component Analysis[END_REF].
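The following sketch decomposes a synthetic segment matrix with an economy-size SVD and evaluates the separability index of Equation (2.45); it only illustrates the mechanics of the decomposition, not the experimental processing chain.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 360, 4
waveform = np.sin(np.linspace(0, np.pi, m))
X = np.column_stack([waveform + 0.02 * rng.standard_normal(m) for _ in range(n)])

# Economy-size SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Separability index of each singular value (Eq. 2.45).
alpha = s ** 2 / np.sum(s ** 2)
print("singular values:", np.round(s, 3))
print("separability   :", np.round(alpha, 4))   # alpha_1 close to 1 for similar segments
```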
In the TCM field, the application of SVD commonly serves the purpose of data reduction for already obtained features. Zhou et al. [START_REF] Zhou | Intelligent diagnosis and prognosis of tool wear using dominant feature identification[END_REF] used SVD to perform dominant feature identification from 16 features calculated from the measured force data. The selected sets of dominant features were then passed through a regression model for mechanical monitoring in industry. Samraj et al. [START_REF] Samraj | Dynamic clustering estimation of tool flank wear in turning process using SVD models of the emitted sound signals[END_REF] proposed an online turning measurement system using emitted sound as the data source. The research employed a fuzzy clustering algorithm to classify the oriented energy obtained from SVD to monitor the tool wear. Cempel [START_REF] Cempel | Generalized singular value decomposition in multidimensional condition monitoring of machines-a proposal of comparative diagnostics[END_REF] applied generalized SVD to symptom observation matrices to discuss machine monitoring, using rolling bearings and diesel engines as examples. These methods are all based on observing similarities in machine wear processes but are still limited to using external objects as a reference for comparison.
To the authors' knowledge, there is no similar research using SVD or PCA to explore inter-insert correlation in the angular domain for TCM. The proposed method is original and innovative. The specific physical meaning corresponding to the three obtained parts U, Σ, V will be explained in detail in Chapter 5.
The main findings of the literature review, which covered many aspects of TCM systems, are listed below.
The review shows that flank wear is the dominant type of wear on end mills.
After the initial wear, flank wear increases at a fairly constant rate for the majority of its service life. Finally, it reaches an exponentially accelerated wear zone, which marks the end of the tool's operational life. Combining the above-mentioned tool life characteristics with the model of cutting forces in case of wear proposed after the chip forming analysis, the interaction between the tool teeth was captured as the basis for inter-insert correlation.
After considering the cost-effectiveness and applications of the available sensors, as well as the availability of equipment in the laboratory, the cutting forces were selected as the target signal for the subsequent analysis. Based on the characteristics of rotating mechanisms, the advantage of the angular signal is exploited to reduce instability due to speed variations and to perform better tooth-based segmentation of signal. Thereby the concept of tool monitoring in the angular domain using interinsert correlations was presented.
Finally, after the study based on correlation and multivariate analysis with a deepening discussion of PCA, SVD was identified as the specific algorithm for TCM based on inter-insert correlation.
Introduction
After reading the literature and identifying the inter-insert correlation as the research direction, this chapter focuses on the applicability of the proposed method.
The characteristics of the processing-oriented signal are outlined in Section 3.2 and a correction of the trajectory angle is introduced in Section 3.3 to optimize the problem of path variation in the milling process.
A general model is formulated in Section 3.4 to facilitate the expansion of the method for future applications. The cutting force model proposed in the previous chapter is incorporated into this general process as a theoretical basis for experimental validation. Meanwhile, the instantaneous angular velocity is presented as a more convenient simulation signal for the preliminary feasibility test. On this basis, specific strategies for performing inter-insert correlation analysis are discussed in Section 3.5.
Required signal features for inter-insert correlation
The methodology of obtaining the state of the studied object by constructing correlations between signal segments is theoretically versatile for different practical cases; rotating mechanical structures with periodic movements are particularly suitable for it. In this thesis, the subject is directed at the monitoring of milling tools, while the same principle can be applied to the diagnosis of mechanical parts such as gears and bearings. In addition to the versatility of the objects under monitoring, the signal sources associated with them are also diverse, as in the examples given in Section 2.4.1. In order to define the application scope of the inter-insert correlation method, the features that an applicable signal needs to possess are summarized here as follows:
(i) Recurrence of events
The entire diagnostic method of inter-insert correlation has a cornerstone, which
is the recurrence of events. It is certainly possible to pick a particular point in time to analyze an individual event. But if one wants to see the global trend over the timeline, the signal being processed should contain multiple consecutive events.
The definition of the event is based on the object for which the correlation analysis is performed. For example, in this thesis, each rotation of the milling tool can be considered as one event, or the entry-exit of the material by each tooth can be considered as one event. Regardless of the choice, however, it is worth emphasizing that the recurrence of events is not equivalent to a periodic signal. During the continuous repetition of events, intentional artificially set variations and effects due to changes in the state of the detected object are allowed to occur. Therefore, the signal does not have to be cyclical in the strict sense of the word, but it needs to have a physically continuous working regime.
(ii) Segmentability and matchability
The second requirement is that the signal needs to possess the properties of segmentability and matchability.
Segmentability means that the signal consists of discrete data and can be segmented into equal-length observation objects based on recurring events. Matchability, on the other hand, refers to the fact that the segmented signal pieces can correspond well to the customized events in a physical sense.
Both two characteristics are aimed at improving the accuracy of multivariate correlation analysis between the segments.
(iii) Similarity and interaction between the segments
Another important prerequisite for inter-insert correlation is the similarity between neighboring segments. The principle of this method is to distinguish anomalies from events that are supposed to be in the same condition. If, even in an ideal condition (e.g., using a brand-new tool without any wear), the segments used for correlation analysis already differ greatly, the upper limit of the indicator will stay below 1. In such a case, the analysis of the condition of the workpiece purely from the index loses its standard reference state, and therefore the use of this method is not recommended.
It is also worth noting that in comparison to the continuous recurrence of the events, similarity refers rather to the short term. During the overall period, it is possible that the form of the signal may change gradually. Therefore, a signal with both recurrence and similarity is not necessarily a cyclostationary signal.
On the basis of similarity, there may also be extra interactions between segments. For example, when the previous tooth wears or breaks, the cutting conditions of the corresponding process often tend to be modified, and the next insert will be subjected to a compensatory overload. Due to this compensatory work, the fluctuations of the correlation indicators will be more pronounced in case of anomalies. The most intuitive change is between two consecutive teeth, but it may also affect a third or other subsequent teeth.
The above properties are the criteria to determine whether the signal can be processed by the inter-insert correlation method discussed in this research. From a practical point of view, this method is well suited for a rotating mechanism with cyclic behavior. It has very promising development prospects and application extensions.
Trajectory change and unification of the angle
In most cases of milling operations, the cutting speed and feed rate are generally set to certain constant values. However, the milling trajectory is not always as simple as a straight line. Since the research is carried out in the angular domain, changes in the trajectory will naturally introduce phase changes to the measured signal. Therefore, before proposing a general model to describe the system, it is necessary to introduce the concept of the trajectory change and the corresponding unification of the angle.
The spindle rotation behavior and the trajectory change behavior are parameterized respectively into θ and Θ. The geometric relationship is used to establish the unified angle ϑ, which expects to combine the two rotational behaviors under the same frame of reference.
It is assumed here that the preset trajectory is a straight line followed by a circular arc. When the tool rotates, the instantaneous immersion angle θ, defined in Equation 2.7, increases along with the rotational frequency ω. Taking clockwise rotation as the positive direction, the angle of trajectory Θ is superposed on this basis. The unified angle ϑ is given by
$$\vartheta_i(t) = \theta_i(t) + \Theta(t) = \int_{0}^{t} \omega(\tau)\,d\tau - \frac{2\pi}{n_z}\,(i-1) + \Theta(t), \qquad (3.1)$$
where Θ is the angle that the tangent of the path makes with the axis x and is continuously updated by superposing the variable ∆Θ multiple times during milling process as indicated in Figure 3.1. The trajectory change angle Θ can be obtained from the information recorded by the axis encoder or can be extracted from the initial program input to the CNC machine-tool.
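As a small numerical illustration of Equation (3.1), the sketch below accumulates the unified angle of one tooth from sampled values of ω(t) and Θ(t); the speed profile and the corner instant are invented for the example.

```python
import numpy as np

def unified_angle(omega, Theta, dt, n_z, tooth_index):
    """Unified angle of tooth i (Eq. 3.1) from sampled spindle speed and path angle.

    omega       : spindle speed samples [rad/s]
    Theta       : trajectory angle samples [rad], same length as omega
    dt          : sampling period [s]
    tooth_index : i = 1 .. n_z
    """
    theta_spindle = np.cumsum(omega) * dt                  # discrete integral of omega dt
    return theta_spindle - 2 * np.pi / n_z * (tooth_index - 1) + Theta

dt, n_z = 1e-4, 4
t = np.arange(0, 2.0, dt)
omega = 2 * np.pi * 30 * np.ones_like(t)                   # nominally constant speed
Theta = np.where(t > 1.0, np.pi / 2 * (t - 1.0), 0.0)      # corner entered at t = 1 s
vartheta_1 = unified_angle(omega, Theta, dt, n_z, tooth_index=1)
```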
General modeling of rotational machinery behavior
As mentioned earlier, although the research in this thesis is targeted at the condition monitoring of end mills, the proposed method has the potential to be adapted to a broader frame of rotating machinery maintenance. Any signal that satisfies the three characteristics presented in Section 3.2, including the signal of the cutting operation, can be regarded as a superposition of the analyzable general event and external interferences. With the angular parameters defined above, the actually collected signal can be decomposed as
d(θ, Θ, t) = S(θ) + D(θ, Θ, t) + I(θ, Θ, t) + η(θ), (3.2)
where the decomposition components are:
• S(θ): The interference driven by the motors, bearings, and joints in the spindle.
Within the reference frame of the main shaft, the function S is therefore linked with θ. It can generally be corrected by comparison with the idling condition. In normal working states, this interference is so small that it can be ignored after correction.
• D(θ, Θ, t): The influence of the resistance generated by the material removal, which is the main component that interests the analysis. It not only depends on θ but also associates with the deterministic trajectory Θ. The unified angle ϑ might be used to summarize the total phase changes. The nature of this component relies upon the shape of the trajectory. In most milling cases, the cutting path is very variable depending on the manufacturing requirements.
Hence, this component is always included in the analysis of the cyclo-nonstationarity category.
• I(θ, Θ, t): The vibration impulse response in relation to time frequencies and machine-tool mode. According to [START_REF] Lamraoui | Indicators for monitoring chatter in milling based on instantaneous angular speeds[END_REF], the chatter generated by rotating machinery is deterministic and can be summarized with second-order cyclostationarity (CS2).
• η(θ): The residual part due to different sources of noise, such as lubrication impact and chip evacuation, etc., which could be refined to a great extent by the filters.
Depending on the actual experimental environment and the condition of the collected signals, one can choose filtering, CS2 analysis, and correction with the idling rotation to reduce, respectively, the effects of η, I, and S. In general, it is rational to consider that η, I, and S are so small that they are overwhelmed by the valuable component D and can be appropriately ignored. The remaining term D is then the purest reflection of the state of the end mill.
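A possible pre-processing sketch is given below, assuming an idle-run recording aligned sample-by-sample with the cutting record is available for removing S, and that a simple low-pass filter is sufficient to attenuate η; the filter order and cut-off are arbitrary values chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def isolate_cutting_component(d, d_idle, fs, cutoff_hz=2000.0):
    """Approximate the component D by removing the idle-run contribution S
    and low-pass filtering the noise term eta (cf. the decomposition of Eq. 3.2).

    d       : measured signal during cutting
    d_idle  : signal recorded at the same speed without material removal,
              assumed aligned with d (same length and phase)
    fs      : sampling frequency [Hz]
    """
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    d_clean = filtfilt(b, a, d - d_idle)       # correction by idling, then filtering
    return d_clean
```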
In the following Section 3.4.2 and Section 3.4.3 of this chapter, the cutting force, and the instantaneous angular speed are used as two practical cases to represent the general model, respectively.
Segmentation
The purpose of segmentation is to extract relevant samples of the continuous sensor signal data set for further processing. Since the object of study is the angular signal, it avoids the sampling irregularity caused by velocity variation, satisfying the second feature required in Section 3.2. An appropriate segmentation can significantly improve the resolution of fault diagnosis [START_REF] Rehorn | Fault diagnosis in machine tools using selective regional correlation[END_REF].
In this research, each revolution of the tool is considered as a sequence of n z events. For example, Figure 3.2(a) shows a revolution of data divided into n z = 4 equal segments, where the part between the two red dashed lines includes the mechanism of the present tooth entering and then leaving the material, followed by an empty gap until the next tooth enters again. It can be seen that these segments share a similar waveform trend, which corresponds to the third feature required in Section 3.2.
It is worth emphasizing that the corresponding radial depth of cut a e of the signal shown in Figure 3.2(a) has been controlled to ensure that only one tooth is in contact with the material at a time to clearly illustrate the processing results.
However, it is envisioned that, under the good condition of the tool, even if more than one tooth is in contact with the material at the same time, the consecutive n z segments will also be identical and correlated. Because the periodicity of the signal still exists, and one revolution always contains n z segments.
The original one-column data sequence is reshaped into a matrix
$$\mathbf{D}_{seg} = \begin{bmatrix} | & | & & | & & | \\ \mathbf{d}_{1,1} & \mathbf{d}_{1,2} & \cdots & \mathbf{d}_{k,i} & \cdots & \mathbf{d}_{n/n_z,\,n_z} \\ | & | & & | & & | \end{bmatrix} \in \mathbb{R}^{m\times n}, \qquad (3.3)$$
where n is a constant presenting the total number of segments, k is the subscript indicating the revolution count (k = 1, 2, ..., n/n z ), and i is the subscript indicating the tooth count (i = 1, 2, ..., n z ). The visualization of this step is illustrated in Figure 3.2(b). Each column corresponds to a segment, and d k,i presents the i th (i = 1, 2, ..., n z ) segment sequence in the k th revolution. n z consecutive segments contain the data collected in one complete revolution.
After separating the finer component D, the signal will be split into segments corresponding to customized events for subsequent correlation analysis. This process of segmentation could be programmed by the user with the milling preset parameters in the CNC machine tool [START_REF] Thomas E Mcleay | Unsupervised monitoring of machining processes[END_REF].
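A minimal sketch of this reshaping step is shown below for an angle-sampled signal with a constant number of samples per tooth passage; the sizes are illustrative.

```python
import numpy as np

def segment_by_tooth(x_angle, m, n_z):
    """Reshape an angle-sampled signal into the segment matrix D_seg (Eq. 3.3).

    x_angle : 1-D angular-domain signal
    m       : samples per tooth segment (constant thanks to angular sampling)
    n_z     : number of teeth; n_z consecutive columns form one revolution
    """
    n = x_angle.size // m                      # total number of complete segments
    D_seg = x_angle[: n * m].reshape(n, m).T   # one column per segment, shape (m, n)
    return D_seg

# Example: 50 revolutions of a 4-tooth cutter, 128 samples per tooth.
m, n_z, revs = 128, 4, 50
x = np.tile(np.sin(np.linspace(0, np.pi, m)), n_z * revs)
D_seg = segment_by_tooth(x, m, n_z)
print(D_seg.shape)        # (128, 200)
```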
In case of milling force
Combining the general model of rotating machinery in Section 3.4 and the expression (Equation 2.27) for the cutting force in Section 2.3.5, the column vector d k,i in the matrix D seg can be further expressed as
$$\mathbf{d}_{k,i} = \bar{d}_{k,i} \cdot \mathbf{1}_{m,1} + \Psi_{F_{k,i}} + \Delta F_{k,i}, \qquad (3.4)$$
In the ideal case, the segment matrix becomes
$$\mathbf{D}_{seg} = \bar{d}_{k,i} \cdot \mathbf{1}_{m,n} + \left(\Psi_{F_{k,i}} + \Delta F_{k,i}\right) \cdot \mathbf{1}_{1,n}. \qquad (3.5)$$
However, with real-world external disturbances that are difficult to exclude, the segments cannot be exactly the same. At this time, the matrix performs differently, as
$$\mathbf{D}_{seg} = \bar{\mathbf{D}}_{seg} \cdot \mathbf{1}_{m,n} + (\Psi_{F_k} + \Delta \mathbf{F}_k) \cdot \mathbf{1}_{1,n} = \bar{\mathbf{D}}_{seg} \cdot \mathbf{J} + \Psi_F + \Delta \mathbf{F}. \qquad (3.6)$$
If the term $\bar{\mathbf{D}}_{seg} \cdot \mathbf{J}$ is removed from the data to reach zero-centering, the contribution of ∆F becomes more prominent, and the subsequent analysis results are more evident.
In case of instantaneous angular speed (IAS)
Before the experimental data are available, the preliminary feasibility analysis of the proposed method is based on simulated data. Since the physical behavior of the cutting edge entering and exiting the material naturally makes the cutting forces intermittent and numerically complex (Figure 3.2(a)), the simulated signals are built based on the instantaneous angular speed (IAS).
IAS is also a signal parameter that is related to the tool state and at the same time fits the applicable range presented in Section 3.2. It always runs around the set cutting speed with small fluctuations, which facilitates the simulation and at the same time demonstrates side-by-side that the method can be applied to a wide range of signals generated by rotating machinery.
Straight-line simulation signal
The dynamic behavior of such a continuous and stable rotational mechanism follows a law of cyclic motion that is consistent with the physical meaning implied by the trigonometric functions. Therefore, a cosine function multiplied by a reasonable amplitude value K can be used to simulate the milling IAS signal, which can be written as
$$D = K \cdot \cos(n_z \cdot \omega t) + D_0, \qquad (3.7)$$
where D 0 is the set rotation speed around which the IAS fluctuates.
Curve-line simulation signal
On the basis of straight cutting, the cutting curvature can be taken into account to simulate a cyclo-non-stationary case with a curved trajectory. In this work, it is assumed that the depth of cut remains stable and only the direction changes along the curved path. For the simulated signal, that is to say, the amplitude stays steady while the phase undergoes a corresponding change.
Suppose the milling path is as shown in Figure 3: when the tool passes through the curve, the feed component in the x-direction increases gradually while the one in the y-direction decreases gradually, which is usually self-adapted by the machine rather than artificially set. After that, f y becomes zero entirely, and transverse milling is performed.
The rotational speed passing the curve is defined as Ω. It can be expressed using the rotation speed of the tool ω, the number of teeth n z , the feed f z , and the radius of curvature R c of the cutting curve. The expression is
$$\Omega = \frac{\Theta}{t_{curve}} = \frac{\Theta}{\dfrac{2\pi R_c \cdot \frac{\Theta}{2\pi}}{f_z \cdot n_z} \cdot \dfrac{2\pi}{\omega}} = \frac{n_z \cdot f_z \cdot \omega}{2\pi \cdot R_c} = \frac{\omega}{a}, \qquad (3.8)$$
where
$$a = \frac{2\pi R_c}{f_z \cdot n_z}, \qquad (3.9)$$
is the number of revolutions required to pass through the curved trajectory with R c as the radius.
On the basis of Equation (3.7), combined with the geometrical modeling of milling, the curve cutting part can be defined as
$$D = K \cdot \cos\!\left(n_z \cdot \omega t + \Omega \cdot (t - t')\right) + D_0, \qquad (3.10)$$
where t' is the time point of entering the curve. Here, 30 revolutions of the curved path are inserted at time point t', after 30 revolutions of straight-line cutting.
Bringing reasonable parameters selected based on experience into Equation 3.8, the simulation results with segmentation by the revolution in case of IAS are presented in Figure 3.5. From k = 1 to k = 30, the yellow stripes (peak value of the signal) appear vertical, while from k = 31 until the end, the yellow stripes gradually tilt to the left side. This is exactly the phase shift that naturally occurs due to the trajectory angle Θ when the tool passes the corner at a constant feeding speed.
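The simulated IAS signals of Equations (3.7) and (3.10) can be generated as in the sketch below; every numerical value is an illustrative assumption, not an experimental setting.

```python
import numpy as np

# Illustrative parameters (assumptions, not the experimental settings).
n_z, omega = 4, 2 * np.pi * 30.0        # teeth, spindle speed [rad/s]
f_z, R_c = 0.1e-3, 5e-3                 # feed per tooth [m], curve radius [m]
K, D0 = 0.5, omega                      # fluctuation amplitude, set speed
fs = 20_000.0
t = np.arange(0, 4.0, 1 / fs)
t_curve_start = 2.0                     # instant t' at which the tool enters the curve

a = 2 * np.pi * R_c / (f_z * n_z)       # revolutions needed for the curve (Eq. 3.9)
Omega = omega / a                       # trajectory rotation speed (Eq. 3.8)

ias = np.where(
    t < t_curve_start,
    K * np.cos(n_z * omega * t) + D0,                                   # Eq. (3.7)
    K * np.cos(n_z * omega * t + Omega * (t - t_curve_start)) + D0,     # Eq. (3.10)
)
```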
Correlation strategies
The signal that has been reshaped in Section 3.4.1 is presented as a matrix D seg .
But for further correlation analysis, the specific strategies need to be discussed. That is to say, the principle is always the same, comparing the segments corresponding to different events, but which segments are selected to form the initial input matrix is still open to discussion.
Several attempts have been proposed here for discussion, listed as follows:
(A) Correlation analysis in unit of all segments
The first proposal is to analyze the data as a whole. In other words, the matrix D seg is directly used as the object matrix
$$D_e = D_{seg} = \begin{bmatrix} \mathbf{d}_{1,1} & \mathbf{d}_{1,2} & \cdots & \mathbf{d}_{k,i} & \cdots & \mathbf{d}_{n_{n_z},\, n_z} \end{bmatrix}. \qquad (3.11)$$
It does not actually require the extraction step; hence, the extraction number e = 1.
As a matter of fact, such an initial input matrix is relatively simple, which requires only a single treatment of SVD to process the entire data. Consequently, this strategy is applied in Section 5.2 as a way to display the results of signal processing. The downside, however, is that it has no way of tracking the dynamic changes of the tool in real-time. Since segments of normal operating conditions constitute the majority of the data, the first hints of signal changes due to tool wear are easily drowned out, resulting in insignificant changes of the indicator value. Even if the results of the correlation analysis indicate that the tool is in a defective condition, it is hard to locate the exact time when the tool was worn. If all data from the start to the present during processing is used as the input matrix, then in practice, the amount of data will increase over time and the load on the SVD calculation will eventually exceed expectations.
(B) Correlation analysis in the unit of individual tooth
The second conception is to extract the segments corresponding to each tooth.
For example, the object matrix of the first tooth is
$$D_e = D_{k,1} = \begin{bmatrix} \mathbf{d}_{1,1} & \mathbf{d}_{2,1} & \mathbf{d}_{3,1} & \cdots & \mathbf{d}_{n_{n_z},1} \end{bmatrix}. \qquad (3.12)$$
The data within D seg can form n z object matrices D k,i (i = 1, 2, ..., n z ) for the subsequent correlation analysis.
The original intent of this analysis is to compare the signal of an individual tooth only with its own changes over time, while controlling the other variables. However, the behavior of the teeth affects each other. When one tooth wears out, the remaining teeth do compensatory work for it to varying degrees. Therefore, even if the correlation result of the object matrix for an individual tooth fluctuates, it does not necessarily mean that the corresponding tooth is in bad condition.
This attempt produces less clear results while increasing the computational load by a factor of n_z. In addition, such a method has the same problem as case (A) and cannot achieve the target of real-time monitoring. Consequently, this strategy was discarded.
(C) Correlation analysis in unit of revolution
Dong et al. [START_REF] Dong | Bayesian-inference-based neural networks for tool wear estimation[END_REF] pointed out that in order to reduce the effect of runout, it is better to analyze a signal within a spindle rotation range rather than a tooth period. Therefore, on the basis of the above considerations, case (C) tends to comply with the need for real-time monitoring and tries to decompose the data in a more dynamic way. The idea is to extract the segments belonging to one revolution as the object matrix
$$D_e = D_k = \begin{bmatrix} \mathbf{d}_{k,1} & \mathbf{d}_{k,2} & \cdots & \mathbf{d}_{k,i} & \cdots & \mathbf{d}_{k,n_z} \end{bmatrix}, \qquad (3.13)$$
where the extraction number e = k = 1, 2, ..., n nz . In other words, this matrix will be updated by each revolution continuously, and it always contains the relevant data to differentiate and compare multiple teeth passes of each cycle. The process will be repeated n nz times from the beginning until the end of the machining.
The advantage of such a strategy is that the target of each analysis is fixed to n z segments, which could make the presentation of results more concise. At the same time, it is capable of taking into account both the periodic change of the rotational angle θ of the tool and the deterioration process of the tool state over time, which enables a better correspondence between the angular domain and the time domain.
In actual production, the data of consecutive multiple revolutions can be selected as the object matrix according to the specific situation. For the same machining operation, this processing can reduce the number of calculations e.
Certainly, while reducing the computational burden, such an approach also lowers the system's refresh rate for real-time tracking feedback of the tool state. If the physical equipment enables it, what should be done is to select a parameter with a higher update ratio to better monitor the condition of the cutting tool.
In addition to these three cases, it is of course possible to choose another extraction to form the study unit D_e. But regardless of the combination, the segments in D_e always share similar (though perhaps not completely identical) characteristics. Similar to Equation (3.6), D_e can be further expressed in a unified way, with the weights under the identity unit, as
$$D_e = \bar{D}_e \cdot J + K \cdot W + \varepsilon. \qquad (3.14)$$
If the current equipment has difficulty keeping up with the required calculation speed, the multi-revolution extraction method can be selected as appropriate according to the situation. In all the results presented hereafter, unless otherwise indicated, case (C) is used, and the selected object matrix always contains n_z consecutive segments belonging to one revolution.
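For strategy (C), a minimal sketch of the revolution-based extraction is given below; it reuses a segment matrix D_seg such as the one built in the earlier simulation sketch (a placeholder is defined here for self-containedness), and the actual correlation analysis is only indicated by a comment, since it is developed in Chapter 5.

```matlab
% Sketch of strategy (C): one object matrix D_k per revolution (Eq. 3.13).
Dseg = randn(250, 120);            % placeholder segment matrix (or reuse Dseg from the earlier sketch)
nz   = 4;                          % number of teeth
nRev = size(Dseg, 2) / nz;         % number of complete revolutions contained in D_seg
for k = 1:nRev
    Dk = Dseg(:, (k-1)*nz+1 : k*nz);   % the n_z consecutive segments of revolution k
    % ... correlation analysis of Dk (e.g. SVD, Chapter 5) goes here ...
end
```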
Conclusion
Chapter 3 begins with a summary of the applicable scope of the inter-insert correlation method for condition monitoring. Three characteristics of the oriented signal are recurrence of events, segmentability & matchability, and similarity & interaction between these segments. Based on these, a general model is developed for the future application expansion of the methodology.
Regarding the end milling process studied in this case, a trajectory angle was introduced to rectify the issue of path variation during non-stationary machining tasks, in order to better accommodate this general model. The previously mentioned milling force model was adapted to the general process as a theoretical basis for the subsequent experimental validation. Meanwhile, the instantaneous angular speed is included as a more convenient simulation signal for the preliminary feasibility analysis. It is consistent with the three required characteristics mentioned above and further demonstrates that inter-insert correlation can be widely employed for signals generated by rotating machines.
On the other hand, signal segmentation, as an important pre-step of inter-insert correlation, is presented in detail. On this basis, the strategy of how to perform the correlation analysis among the signal segments is specifically discussed. Three methods, including the overall-based, tooth-based, and revolution-based extraction, were analyzed. Considering the effectiveness of the update rate for online monitoring and the volume of data calculated in real-time, the target matrix was determined to be generated by the revolution-based method.
Introduction
Within the framework of the study of tool conditions, this chapter will describe the acquisition and pre-processing of experimental signals, organized in three sections.
Experimental setup
The DECKEL MAHO DMC 635 V milling machine was used for the experimental tests. By using the 4 holes in the bottom plate and matching types of screws-nuts, a Kistler 9257A 3-axes dynamometer could be tightened on the operation bench. It is worth noting that the support surface must be cleaned before installation (e.g. residual chips). Otherwise, the uneven supporting surface may set up internal stresses, which will impose severe additional loads on the individual measuring elements and may also increase cross talk [Kisa].
To avoid drilling holes in the workpiece, a compact self-centering vise V75100 is installed on the dynamometer, which serves to hold the workpiece and acts as a bridge to transfer the milling forces from the block to the dynamometer. After coupling with Kistler 5015B charge amplifiers, the cutting forces are measured in three orthogonal directions (x, y, z). Simultaneously, two axis encoders for the x and y directions integrated into the bench and the rotational encoder combined in the spindle were linked to the acquisition system (the OROS analyzer). It should be noted that the inserts used for the experiment were not brand new due to the limitations of the conditions, and one of the teeth was 0.06 mm larger than the others, as shown in Figure 4.2.
The specific parameters of the experimental materials can be found in Appendix A. The following machining tests used the same series of device setups.
Milling parameters and trajectories
In most circumstances, for a given machining operation, the cutting speed V c will be set to an appropriate constant to complete the entire machining step at a theoretical unchanging cutting speed. For the inter-insert correlation studies, the variation of preset cutting speed has a uniform effect on all the teeth due to the integral nature of the tool. Therefore, this experiment does not make multiple comparisons of the preset spindle frequency. The cutting speed is always set to 140 m/min. The depth of milling a p is kept at 3 mm.
The general parameters are summarized in the corresponding parameter table. Although the cutting speed is theoretically always constant, in reality the spindle speed is subject to fluctuations as the tool advances over the surface of the workpiece, especially when the operation involves machining a continuously varying diameter.
The focus of the study is to demonstrate that the proposed inter-insert correlation indicator is only related to the state of the tool itself. In other words, their correlation remained at a high level even though each tooth was subjected to nonstationary external influences as the tool passed through the non-linear trajectories.
Multiple running trajectories were tested in Section 4.3.1 to demonstrate that the proposed method is equally diagnostic for cyclo-non-stationary working conditions.
Different trajectories cases
The experiment has proposed five different cutting trajectories to verify the subject mentioned above.
The tests are performed layer by layer on the same workpiece. At the end of each layer of the milling process, some leftover material of varying shape remains. Therefore, a surface milling of a_p = 1 mm is carried out after each test to ensure that the initial conditions for the next test remain the same as before. At the beginning of each cut, a reference cut of a_e = 3.2 mm is first conducted to verify and validate the initial cutting condition. The milling mode is always down-milling.
The cutting path of a rhombus shares a close resemblance to that of a square.
They both contain straight paths on 4 sides. The difference, however, is that the square contour acts in one direction at a time (x or y axis), while the rhombus contour makes demands on both the x and y directions. According to the advice of experienced professionals, this is one of the factors to be considered in actual production.
After the reference cut, the excess material is removed in the pre-cut step, thereby obtaining a rhombus-shaped outline. The starting point is always at the bottom. The tool is milled in a clockwise direction around the workpiece, experiencing the four stages in this order: the decrease of both x and y; the increase of x with the decrease of y; the increase of both x and y; the decrease of x with the increase of y. With the limitations of the size of the workpiece, only two tests were planned, whose radial engagements are 3.2 mm (Test #1 17) and 8 mm (Test #1 18), correspondingly, as illustrated in Figure 4.4.
The rounded square contour shows the gradual transition of the tool from a straight line through a circular arc. During this phase, the angle of the tool immersed in the material is in a varying state. In other words, there may be inconsistencies in the material cut by each tooth at this point. Therefore, this designed contour is meant to examine whether the proposed method is executable on curved trajectories (cyclo-non-stationary). Since in practice end milling operations are often accompanied by a cycloidal path, this verification is of great interest. Three tests gave similar results, and the test with a_e = 3.2 mm was employed for the presentation of the subsequent treatments.
(D) Contour of designed curve
The curve of this designed path has some similarity to the rounded square, consisting of a straight line and a quartered arc as the base elements. Their loose ends are connected as shown in Figure 4.6. The difference is that all four corners of the rounded square are milled as an outer contour, whereas a contrast between inner and outer contour milling is introduced here.
The start point of infeed is always at the upper edge of the workpiece. The tool first proceeds along a straight segment and passes through an outer quarter arc.
The outer quarter arc is linked to an inner quarter arc whose radius is greater than the radius of the tool. And then, the tool finally enters the second straight section to finish the test. The radial depth of the cut is a e = 11 mm (Test #2 6), 8 mm (Test #2 7), and 3.2 mm (Test #2 8), respectively.
Compared to outer contour milling, the material has a stronger tendency to enclose the tool when milling the inner contour. For comparison with other trajectory cases, the test with a e = 3.2 mm has been selected for the subsequent interpretation.
(E) Contour of designed curve with holes
In this design, the tool runs on exactly the same trajectory as in case (D).
The only difference is that prior to the experiment, holes were drilled in the workpiece at the locations shown in Figure 4.7, whose diameter is 10 mm.
The positions of the holes and the cutting trajectory partially overlap. This is considered as an unexpected event during the operation and is used to evaluate whether the proposed correlation method can cope with sudden changes. The operation with a_e = 3.2 mm (Test #3 19) is taken as the representative to display the results in the subsequent sections.
Data pre-processing
The study proposes a methodology for assessing the tool condition by exploiting the correlation between tooth signatures. The useful information is hidden fairly deep in the raw signal, which also includes the preceding and following idle rotations of the end-mill at the set spindle speed. The key steps of the signal pre-processing are described in the following sections.
The data of Test #1 10 from case (C) is employed here to show the processing results.
Hilbert transformation
The spindle signal received from Input 6 needs to be demodulated to obtain an intuitive angular signal. The Hilbert transform serves as the demodulator here.
The Hilbert transform plays an important role in signal processing. Its physical meaning is to impart a phase shift of ±90° to every frequency component of a signal, constructing the analytic representation of the initial real data sequence. The analytic signal x can be described as
$$x = x_r + j\,x_i. \qquad (4.1)$$
The sign of the shift depends on the sign of the frequency ω, that is:
$$\mathcal{F}(x_i(\omega)) = \begin{cases} e^{+i\frac{\pi}{2}} \cdot \mathcal{F}(x_r(\omega)) & \text{for } \omega < 0, \\ 0 & \text{for } \omega = 0, \\ e^{-i\frac{\pi}{2}} \cdot \mathcal{F}(x_r(\omega)) & \text{for } \omega > 0. \end{cases} \qquad (4.2)$$
By introducing the imaginary part of the signal, the Hilbert transform returns a helical sequence, including the phase information that depends on the phase of the original, as illustrated in Figure 4.8(a).
The projection of the analytic signal onto the plane formed by the real and imaginary axes can be thought of as a vector that rotates over time (Figure 4.8(b)). The instantaneous phase
$$\theta = \arctan\frac{x_i}{x_r} \qquad (4.3)$$
corresponds to the intuitive angular information of the rotational spindle, and the instantaneous amplitude
$$|x| = \sqrt{x_r^{2} + x_i^{2}} \qquad (4.4)$$
is the envelope of the signal.
Consequently, using the hilbert and angle commands in Matlab, the angular information of the spindle can be extracted from the encoder signal.
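A minimal Matlab sketch of this demodulation step is given below; the encoder signal `enc` is a synthetic placeholder, and `hilbert` requires the Signal Processing Toolbox.

```matlab
% Sketch: recover the instantaneous spindle angle from the encoder signal (Eqs. 4.1-4.4).
fs  = 10e3; t = 0:1/fs:2;
enc = cos(2*pi*25*t + 0.3*sin(2*pi*1*t));   % placeholder encoder signal (~25 Hz spindle)
xa     = hilbert(enc);          % analytic signal x = x_r + j*x_i
theta  = angle(xa);             % instantaneous phase, wrapped to (-pi, pi]
thetaU = unwrap(theta);         % monotonically increasing spindle angle
env    = abs(xa);               % instantaneous amplitude (envelope)
```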
Resampling
Since all kinematic variables in the operation of rotating machinery are related to certain angles of rotation, the acquisition signals associated with mechanisms behave as angular cyclostationary rather than temporal cyclostationary [START_REF] Antoni | Cyclostationary modelling of rotating machine vibration signals[END_REF].
Therefore, sampling the signals concerning an angular variable rather than a time variable allows better preservation of the cyclostationary property.
From a macro point of view, the cutting speed has been set at a fixed value, but as a matter of fact, due to cutting force/torque change and servo control, the milling speed changes constantly and slightly. This leads to differences in the number of sampling points within the same time length, making it difficult to later divide the signal into segments corresponding to each tooth.
In this work, the dynamometer under the workpiece records the force change over time, while the rotary encoder gives the information of the corresponding angle domain. Resampling is adopted as the means to achieve converting the time signal to the angular domain.
Compared with the time sampling in Figure 4.9(a), the resampling in the angular domain can stabilize the number of samples per revolution, regardless of the rotation speed (Figure 4.9(b)). That is to say, no matter how the cutting speed changes (ω 1 < ω 2 < ω 3 ), a constant number of sampling points will be recorded for each revolution.
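A possible implementation of this time-to-angle conversion with `interp1` is sketched below; the force `F` and the unwrapped spindle angle `thetaU` are synthetic placeholders standing for the dynamometer and encoder channels, and the number of samples per revolution is arbitrary.

```matlab
% Sketch: resample the force signal on a uniform angular grid.
fs = 10e3; t = (0:1/fs:2)';
thetaU = 2*pi*25*t + 0.2*sin(2*pi*2*t);      % placeholder unwrapped spindle angle (rad), strictly increasing
F      = max(0, sin(2*pi*100*t)) * 200;      % placeholder intermittent cutting force pattern (N)
ptsPerRev = 1000;                            % samples per revolution after resampling (placeholder)
nRev      = floor((thetaU(end) - thetaU(1)) / (2*pi));
thetaGrid = thetaU(1) + (0:nRev*ptsPerRev-1)' * (2*pi/ptsPerRev);
Fang      = interp1(thetaU, F, thetaGrid, 'linear');   % force as a function of angle
```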
The average value of a segment, d̄_{k,i}, as defined in Section 3.4.2, can be used to roughly represent the general state of that segment. By connecting the values d̄_{k,i} with the same segment index i in series, the results provide a rough trend for briefly checking whether the subsequent optimization process is effective, before the correlation analysis is introduced.
As shown in Figure 4.10, the data with the same subscript i are plotted as one line with the rotation cycles as the abscissa. The results before and after resampling are strongly affected by whether the signal is considered in the time or in the angular domain.
Milling direction correction
This work ensures that the force signal is correctly synchronized with the tooth signatures. When cutting in a straight line, only angular interpolation is needed to achieve this goal. However, changing the cutting path will naturally introduce a phase shift, which affects the distribution of the signal to the cutting edge. After the correction, the signal is shifted to the position corresponding to the correct segment. The cutting force is stable at a high value during the straight cutting section and drops to a low point when entering the arc. The cycle repeats four times along the trajectory (Figure 4.11). In traditional monitoring, similar parameter changes are likely to cause false alarms, whereas, in the proposed method, the impact resulting from trajectory changes can be minimized after correlation analysis because the segments have undergone identical milling changes almost simultaneously. Due to the limitations of experimental circumstances, the tools used are not brand new.
As mentioned previously, the second tooth is about 0.06 mm longer than the other teeth at the beginning of the experiment. Therefore, every time it cuts into the material, the force it receives is about 50 N stronger than for the other teeth. This difference is within the allowable range of machining accuracy.
Truncation & Segmentation & Zero-centering
The idling component of the signal (before the tool enters the material and after the tool exits the material) is of no value for identifying the signal signature corresponding to each segment. Hence, this part is truncated by manual selection.
This process also serves to round up the acquired data by revolutions. When one of the teeth cuts into the material, the resistance suddenly increases (matching the rising edge) and the angular velocity decreases. During the cutting process of one insert, the resistance gradually drops off with the formation of the chips and finally reduces to approximately zero when this insert leaves the material. The process of the previous tooth entering and then quitting the material until the next tooth enters again is called a single tooth-cut. Repeating this process n z times is a complete rotatory cycle of the tool. Figure 4.12 illustrates the tooth-cut with dotted lines, using the cutting force signal and corresponding cutting speed signal (instantaneous angular speed (IAS) and average angular speed (AAS)).
At the same time, by truncating the incomplete head and tail, the rest of the data is rounded well according to the rotational period, as shown in Figure 4.12 below.
The remaining signal is segmented and reshaped as described in Section 3.4.1.
The original one-column data sequence is folded into k object matrices for further analysis, each of which has n_z columns containing m points. Since the fluctuation between segments is very small compared to the original value, the data need to be centralized before the correlation analysis. Zero-centering refers to the matrix (D_k − D̄_k J). In this way, the performance of the relevance will be emphasized, and this method can be applied to map the different risk factors to each insert in order to realize higher-precision monitoring. The experimental results show that zero-centering can also remove the random effects in the shutdown process.
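Taken together, the truncation, segmentation and zero-centering steps can be sketched as follows; the angular-domain force `Fang`, the number of inserts and the segment length are placeholders, and the per-segment mean subtraction is one plausible reading of the zero-centering (D_k − D̄_k J) used here.

```matlab
% Sketch: fold the one-column sequence into per-revolution matrices and zero-center them.
ptsPerTooth = 250; nz = 4;                               % placeholders
Fang = max(0, sin(2*pi*(1:ptsPerTooth*nz*30)'/ptsPerTooth)) * 200;  % placeholder angular-domain force
m    = ptsPerTooth;                                      % samples per segment
nRev = floor(numel(Fang) / (m*nz));
Fang = Fang(1 : nRev*m*nz);                              % truncate the incomplete tail
for k = 1:nRev
    Dk  = reshape(Fang((k-1)*m*nz+1 : k*m*nz), m, nz);   % segments d_{k,1..n_z} of revolution k
    Dk0 = Dk - mean(Dk, 1);                              % zero-centering: subtract each segment's mean
    % ... Dk0 is the input of the inter-insert correlation (Chapter 5) ...
end
```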
Conclusion
Chapter 4 mainly covers the acquisition and pre-processing of experimental data.
A force measurement experiment was set up with the Kistler 9257A 3-axis dynamometer as the centerpiece. In conjunction with the rotary encoder integrated into the spindle and the displacement sensors on the bench, a total of six channels of signals were gathered and saved in the OROS analyzer.
After determining the cutting condition parameters for the milling operation, four different milling trajectories were designed. The square contour contains four straight line cuts and is used to establish the basic stationary working condition.
The rhombus-shaped path is aimed at verifying the compatibility of the angular coordinates used in the proposed method and the Cartesian coordinates from the CNC working system. The rounded square trajectory contains curved paths, i.e. for the testing of non-stationary regimes. The design curve intends to observe how the inter-insert correlation is affected by the wrapping of the material around the tool when the cutter passes through the inner and outer contours respectively. On this basis, adding holes to the path is used to test how the tool behaves when it encounters such a sudden change.
Pre-processing prior to inter-insert correlation includes signal resampling in angular domain, milling direction correction, truncation, segmentation, zero-centering, etc. The results obtained from the pre-processing of the experimental data demonstrate that the steps of angular domain resampling and direction correction effectively clarify the signal components corresponding to each tooth, which enhances the operability of the segmentation and provides a promising basis for the inter-insert correlation.
Chapter 5
Characterization of tool state by SVD Summary
Introduction
After the pre-processing steps described in Chapter 4, the signals obtained under different cutting conditions are ready for inter-insert correlation analysis. After the literature review (Section 2.5.3), the singular value decomposition is considered a suitable and effective means to implement inter-insert correlation.
In Section 5.2, the simulated signal of the IAS constructed in Section 3.4.3 is used as the data source for the preliminary feasibility analysis. The physical significance of the SVD decomposed components is interpreted in both straight and curved milling cases. After the ideal numerical simulations yielded optimistic results, the same algorithm was applied to the collected experimental data. Section 5.3 shows the processing results for different experimental groups and validates the previous conjectures. Following a sensitivity analysis, Section 5.4 describes in detail the two metrics based on the separability index and the window shift. After analyzing the tools in good condition, the signals of worn tools are presented to assess the feasibility of the proposed indicators. In the end, a brief evaluation is made using ROC curves.
The limitations faced by the method are commented in Section 5.5.
SVD components
The signal that has been reshaped by pre-processing in Chapter 4 is presented as a matrix and ready for correlation analysis. In theory, SVD can extract the main components of the data with an underlying correlated analysis algorithm (Section 2.5.3) and determine the status of the tool. In the following section, the decomposition components of SVD will be introduced first to facilitate the presentation of the results with experimental data.
In order to understand the nature of SVD, the proposed method is first pushed to the extreme to see how the ideal situation looks like, which can help to enhance the grasp of the problem. After that, the real conditions can be added step by step.
Therefore, the IAS simulation model mentioned earlier in Section 3.4.3 is used here to serve for the demonstration. To make the results more concise, this part of the study was performed based on the strategy of correlation analysis in the unit of all segments (case (A) in Section 3.5) and only one SVD was performed.
In the simulation, the signal contains a total of 120 segments, i.e. 30 revolutions, and the amount of data for each segment is designed to be 250 points. The amplitudes and the other specific simulation parameters corresponding to Equation (3.7), used for the MATLAB code, are listed in Table 5.1. The complete simulation signal is shown in Figure 5.1. Applying the SVD to the reshaped signal matrix yields the three components U, Σ, and V, which are plotted separately in Figure 5.2. The blue line represents the 1st-order component, which is the most dominant one, while the orange lines depict the components from the second to the last order. The aim is to connect these three components with their physical meaning in reality.
Evidently, as can be observed from Figure 5.2(a), the plotting shares the curvilinear features of trigonometric functions for both first-order and higher-order data.
In comparison with Figure 5.1(c), it can be seen that the left singular matrix U extracts the normalized shared waveform from the segments of the original simulated signal. As mentioned earlier, the order represents the importance of each component and its approximation to the original signal. Therefore, Σ can be considered as a scaling factor matrix to determine the contribution of the decomposition sequences to the original signal.
As for V, each decomposed order contains information on the amplitude proportions. Since the previous paragraph has already concluded that the first-order component is dominant, it is reasonable to focus on the line traced by the first-order vector v_1 (the blue line in the corresponding plot). Although the amplitudes of v_1 do not exactly match the value of the given amplitude K, the result shows that the ratio between the amplitudes in v_1 is consistent with the original signal. At the position of the tenth revolution, the sudden change shows up as the preset control. That is to say, the matrix V consists in providing a series of pulses with the correct amplitude proportions, while the matrix Σ operates with a distribution of importance, scaling up or down the pulses of the corresponding order. As a result, when plotting ΣV, it can be observed that the previously cluttered, overlapping curves from the second to the last order (the orange lines in Figure 5.2(c)) are multiplied by a very small factor and approach 0, while the first-order curve is enlarged. For the sake of brevity, Σ and V will be analyzed as a whole, ΣV, if there are no special circumstances in the following text.
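For reference, the decomposition itself only takes a few lines of Matlab; the sketch below applies it to a placeholder segment matrix built in the spirit of strategy (A) and exposes the three components discussed above (U, the singular values, and ΣV).

```matlab
% Sketch: SVD of the full segment matrix (strategy (A)) and its three components.
Dseg = cos((1:250)' * (2*pi/250)) * (0.5 + 0.01*randn(1,120)) + 100;  % placeholder segment matrix
[U, S, V] = svd(Dseg, 'econ');          % D_seg = U*S*V'
sigma = diag(S);                         % singular values in decreasing order
rank1 = sigma(1) * U(:,1) * V(:,1)';     % 1st-order approximation of the signal
resid = Dseg - rank1;                    % contribution of the 2nd to last orders
SV    = S * V';                          % scaled amplitude proportions ("Sigma*V" above)
```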
It is worth noting that this is a very idealized result, because the signal processed here is a simple straight-line simulation signal. When this method is applied to the analysis of real signals, it is often not enough to rely only on the first order, and the second, third, and even higher orders also have a certain reference significance. This process is like drawing: when the described object is more complex, more details need to be gradually added to the basic content.
Straight-line simulation signal
After the physical meaning of the SVD decomposition components is clear, the simulated signal of the straight cutting is subjected to SVD once again but with the object matrix in the unit of revolution (case (C) in Section 3.5). Since there are multiple extractions and calculations, the results are represented with 3D images, as demonstrated in Figure 5.4.
The abscissa in Figure 5.4(a) represents the sample size of the segments being analyzed; the ordinate corresponds to the times of extractions, i.e., the number of rotations in this case; and the applicate represents the value of the extracted waveform.
In Figure 5.4(b), the horizontal coordinate indicates the number of extractions, the longitudinal coordinate stands for the respective number of teeth, and the vertical axis presents the factor for scaling the waveform.
Looking purely at the images, the extracted waveforms and scaling factors remain essentially unchanged from one revolution to the next, which is consistent with the steady straight-line cutting.
Curve-line simulation signal
The ultimate goal of this research is to be applied to practical machining, which necessarily involves cyclo-non-stationary working environments. Therefore, the simulated curve-line signal described in Section 3.4.3 is also tested to verify the above-mentioned method (case (C) in Section 3.5). The corresponding simulation parameters for Equation (3.10) are listed in the dedicated parameter table. The signal is set to 30 revolutions of straight-line milling, followed by 30 revolutions of cutting along a curved path. Together with the number of teeth n_z = 4, there are 240 segments. Conducting the SVD analysis revolution by revolution on the simulated signal, the results are shown in Figure 5.5.
As can be seen in the 1 st -order subplot of Figure 5.5(a), the curve shows a consistent sine waveform from revolution counts k = 1 to k = 30, and gradually indicates a phase shift from k = 31 until the end. This is exactly the kind of performance of the signal when the tool goes through the curve trajectory. At the same time, from the 2 nd -order subplot, it is obvious that k = 30 is a conspicuous demarcation point. Before this, the waveform shows a mixed noise shape, and after this point, there is a clear and smooth waveform as a supplement to compensate for the insufficiency of information in the 1 st -order. It can be concluded that the phase of the signal does shift according to the turning angle of the trajectory Θ as the tool passes through the curve. In this case, the SVD can still produce a valid decomposition for n z consecutive segments of a given revolution. The extracted waveform contains the overall-considered phase offset of multiple segments in the object matrix.
Similarly, in Figure 5.5(b), the value of ΣV fluctuates after the critical point of k = 30. This illustrates that each tooth experiences a slightly different cutting process when the tool is under cyclo-non-stationary operation. Therefore, if conditions permit, it is advisable not to omit the prior correction of the milling direction. The corresponding separability indexes (Figure 5.6) are examined before the trajectory change (up to k = 30) and after it (from k = 31). For both the pre- and post-change periods, the 1st-order separability indexes are dominant, constant, and always remain higher than 99%.
The above results reflect two fundamental viewpoints:
-The method is sensitive to changes within the signal and has the capacity to conclude intuitive feedback.
-The fluctuation in the separability index is very small. Due to the fact that the feed rate of the tool through the curve is much lower than the tool rotation speed, the external influences for n z segments of the same revolution can be considered to be very similar in general. Theoretically, only the state of the tool affects the separability index significantly, so false alarms triggered by external factors can be minimized.
These two foundations make the method possible to create a related indicator for tool state monitoring.
It is worth noting that although the second order complements the first order (Figure 5.5), the separability indexes of the first and second orders (Figure 5.6) differ by about nine orders of magnitude (10^-9). This is due to the fact that the zero-centering step is skipped for the simulation signals, so that the value of the original parameter is much larger than its fluctuating component.
Analysis of results
Because the tool condition is almost the same before and after the experiment (no wear was observed) and the results of multiple experiments for the same cutting path case (described in Section 4.3.1) are similar, in this section one set of data corresponding to each milling trajectory is taken as a typical object for analysis. At the same time, in order to verify the performance of the proposed methodology in the worn-tool situation, a milling data set is borrowed from F. Girardin et al. [GRR10] for experimental validation of failure detection.
The relevant information of all the experimental datasets involved is summarized in the corresponding table.
Validity verification of SVD analysis with experimental data
Test #1 10 is taken as an example to be the main object of analysis. In the case of the rounded square contour, the results can prove that the correlation analysis based on the SVD is more than sufficient to provide valid information. The tool is under good working condition and the SVD is performed on each revolution of the data, as shown in Figure 5.7.
The SVD decomposes each zero-centered input matrix (D_k − D̄_k J) as
$$D_k - \bar{D}_k J = \sigma_{k1}\, u_{k1} v_{k1}^{\top} + \sigma_{k2}\, u_{k2} v_{k2}^{\top} + \ldots + \sigma_{k n_z}\, u_{k n_z} v_{k n_z}^{\top}. \qquad (5.1)$$
The left singular vector u_k extracts the common part, contributed mainly by Ψ_{F_k} (defined in Equation (3.6)), for each input matrix (D_k − D̄_k J). It can be seen that there is a small falling gap during the shutdown process. This is a reasonable adjustment caused by the withdrawal of the tool. When the workpiece completely leaves the material, the subsequent idling part has no cutting resistance, so the separability index α_{k1} is closer to 1 than during the milling operation. Note that in the input matrix the subscript i stands for the column number, whereas after SVD the subscript i represents the order of the hierarchical decomposition.
Fig. 5.9: Hierarchical presentation of the signal by order: (a) the 1st-order component makes the dominant contribution at the beginning of tool wear; (b) when a tooth is severely worn, the 1st order is no longer enough to express the signal signature, so the proportion of the 2nd order increases.
SVD provides hierarchical results from the 1 st -order to the n z th -order (Equation (5.1)), which reserves more information for detailed analysis. Using the data set from Girardin et al. as an example, the 1 st -order shown in Figure 5.9 is the closest in the mean-square sense to the signals, and the 2 nd -order or higher order can supplement the remaining details on this basis. Figure 5.9(a) presents the tool state of the 10 th revolution where tool wear begins, whereas Figure 5.9(b) indicates the behavior in the 99 th revolution where the first tooth is completely worn.
Fundamental verification
5.3.2.1 Compatibility of the angular domain-based approach with the Cartesian coordinate system
Here in Figure 5.10, the results of Test #1 4 and Test #1 18 are firstly listed to verify whether the above theory is stable and valid for the analysis of experimental data.
Comparing Figure 5.10(a) and 5.10(b), it can be seen that the starting positions of the waveforms are different, which is due to the different choice of truncation during pre-processing. This equals a certain number of sample points displaced in all segments and therefore does not affect the subsequent correlation analysis. The fact of similarity between the square contour milling and the rhombus contour milling confirms that the proposed SVD method can be adapted to the case where the tool acts in both x and y directions, and the algorithm compiled provides a good integration of the angular domain variation based on the polar coordinate system and the working standard of the CNC machining center based on Cartesian coordinates.
Effectiveness of zero-centering
Secondly, using the data from Test #1 10 as a basis, the results of the first-order separability index with and without the zero-centering step are compared, as shown in Figure 5.11. The significant differences visible in the figures confirm that the zero-centering treatment is of great significance and therefore cannot be omitted.
Variation in radial depth of cut
Thirdly, Test #1 10 and Test #1 12 are compared in Figure 5.12 to verify whether a variation in the radial depth of cut influences the proposed method. Cases where the width of cut is so large that more than one tooth cuts the material at the same time are covered by Test #2 6. Although the designed width of cut for Test #2 6 is only 11 mm, the material wraps around the tool beyond the normal setting when milling through the inner contour (second corner). At this point, the separability index behaves logically and remains within a reasonable range of values, as shown in Figure 5.12(c).
Basically, it can be assumed that changing the radial engagement within a certain range will not affect the stability of the proposed method, as long as there are no drastic abrupt changes affecting the balanced cutting contribution of the segments within a certain revolution.
These above facts provide a solid foundation to proceed to the next level of analysis.
Sensitivity analysis
After confirming the compatibility of the algorithm for both Cartesian and angular coordinates and the effectiveness of the proposed method independent of the external factors, sensitivity analysis was further introduced for a systematic review of the model. It is dedicated to investigating the impact of the inputs on the output of the model.
The choice of sensitivity analysis method is guided by the constraints of the problem. In the present case, two constraints are analyzed and developed in the following paragraphs:
• The effect of the input order of the teeth and the change of the cutter teeth state -performed using bootstrap.
• The effect of the starting point of signal segmentation, as it can be fortuity retained -performed using window shift.
Bootstrap
Bootstrapping uses a large number of random replacement observation data sets to construct resampling from the approximate distribution for establishing hypothesis testing. When parameter inference cannot be performed or complex formulas are required to calculate errors, it is usually used as an alternative to statistical inference based on parametric model assumptions.
A great advantage of bootstrap is its simplicity. It is a straightforward way to derive estimates of standard errors and confidence intervals for complex estimators of the distribution, such as percentile points, proportions, odds ratio, and correlation coefficients. Bootstrap is also an appropriate way to control and check the stability of the results. Although for most problems it is impossible to know the true confidence interval, bootstrap is asymptotically more accurate than the standard intervals obtained using sample variance and assumptions of normality [START_REF] Diciccio | Bootstrap confidence intervals[END_REF]. Bootstrapping is also a convenient method that avoids the cost of repeating the experiment to get other groups of sample data.
In this section, the bootstrap method will be applied to the input matrix of the SVD algorithm to estimate the confidence interval. That is, taking each column of the input matrix as a unit, randomly select 4 columns (with or without repeat) among them and arrange those 4 columns as a new input matrix for SVD calculation.
From the mathematical definition, the random sampling methods can be clearly divided into the following four types:
• C (Combination): When only considering the combination without repeated data, there is one and only one input matrix obtained, which is the original data itself.
• P (Permutation): When it comes to the pure permutation without repeated data, the total data composition of the original matrix remains unchanged, whereas the focus is on the arrangement order between columns. For a case of n_z = 4, there are 24 different arrangement orders.
• CR (Combination with repetition): columns may be drawn repeatedly, without regard to their order.
• PR (Permutation with repetition): columns may be drawn repeatedly and their order is also considered.
In this way, since the pure permutation case P already covers the effect of data sorting, it can be foreseen that there is no need to calculate the bootstrapping type PR. Therefore, only type CR is the focus at the moment.
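A minimal sketch of this type-CR bootstrap is given below; it uses random draws with replacement rather than the exhaustive enumeration of combinations used later for the interval, and the input matrix and number of resamples are placeholders.

```matlab
% Sketch: bootstrap (type CR) over the columns of a zero-centered revolution matrix.
Dk0 = randn(250, 4)*0.1 + sin((1:250)'*(2*pi/250))*ones(1,4);  % placeholder input matrix
nz    = size(Dk0, 2);
nBoot = 200;                                    % number of random resamples (placeholder)
alphaBoot = zeros(nBoot, 1);
for b = 1:nBoot
    cols = randi(nz, 1, nz);                    % draw n_z columns with replacement (type CR)
    sig  = svd(Dk0(:, cols));
    alphaBoot(b) = sig(1)^2 / sum(sig.^2);      % 1st-order index of the resampled matrix
end
lowLim = min(alphaBoot);                        % lower limit of the bootstrap interval
upLim  = max(alphaBoot(alphaBoot < 1));         % upper limit, excluding the degenerate value 1
```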
From Figure 5.16, the waveform changes gradually and slightly with the modification of the data content. Comparing Figure 5.16 with Figure 5.13(a), it is easy to see that the first waveform corresponds to the data of the first tooth. As a result, the separability index keeps changing under the different combinations, forming an interval as in Figure 5.17. From the beginning to the 120th revolution, the cutter cuts in a straight line, so the operation of the four teeth is basically the same. After the 120th revolution, the tool enters an arc-shaped trajectory, and subtle differences appear in the cutting conditions of the four teeth. Therefore, the proportion of the separability index for the first order decreases, but the change is very slight and does not trigger the alarm.
For simplicity of drawing, only the maximum and minimum values are taken. It is worth noting that when the data columns are exactly the same in the bootstrap method, 1 is obtained because the degree of identicalness reaches the ideal state. At this time, this value is of no help for actual production, so the next largest value is taken. Therefore, the interval calculated by bootstrap CR can be defined as all numbers in between the minimum and maximum values (other than 1) of the separability index. These upper and lower limits are obtained from testing each combination of n z column vectors (including repeats) within the input matrix at the current moment.
In the case of changing only the ordering of the column vector in the input matrix (P ), the separability index remained stable at all times. This is consistent with the expectation of multivariate correlation analysis. With the addition of repeated combinations in random sampling (CR), the separability index had a fluctuation interval. When there are multiple vectors of the same data in the input matrix, the separability index increases in response to the rising correlation. So far, no unintended input-output relationships have been observed by bootstrap, proving the basic stability of the proposed method.
However, from Figure 5.18, the interval calculated by bootstrap CR will be influenced by the pre-processing of milling direction correction. The bootstrap method
will enumerate many combinations that are close to the ideal situation, and thus the upper limit will always approach 1. To a certain extent, the upper limit calculated by bootstrap is not able to reflect the state of the tool. At the same time, the lower limit has more distinguishing features than the upper limit. However, after processing multiple sets of experimental data, the variation of the lower limit does not bring any additional useful information. It only records the worst-case scenario for the current combination of segments, and the pattern of its variation depends heavily on the external conditions encountered by the tool.
As such, the sensitivity analysis was rethought, and this time it targeted window shift.
Window shift
In the above-mentioned signal processing, the preceding idling part is truncated to accurately select the starting point of the segmentation (Section 4.4.4). This step ensures that each tooth signature corresponds to the segment, rather than one continuous signature being divided into two segments. However, it is a relatively ideal situation achieved by manual choice. The data length of the idling part is variable when the proposed method is implemented in real-time monitoring with different milling items. Therefore, it is not realistic to expect an accurate and automatic division of the signal at the starting position of the cutting force every time. The concept of window shift sensitivity range ∆ W S is introduced to proceed with sensitivity analysis and include this uncertainty in the input.
As named, it is assumed to be a shifted window of size m sliding backward from the beginning of the original one-column data sequence to capture the sample set corresponding to the first segment d 1,1 , which can be expressed as
$$\mathbf{d}_{1,1} = \begin{bmatrix} d_{1+(m-l)(L-1)} \\ d_{2+(m-l)(L-1)} \\ \vdots \\ d_{m+(m-l)(L-1)} \end{bmatrix}, \qquad (5.2)$$
where
$$L = \frac{m}{m-l}. \qquad (5.3)$$
Once the first segment is chosen, the following data points are folded into input matrices and the correlation analysis proceeds as in the normal steps for each shift. The process of element shifting for a general segment is described as
$$\mathbf{d}_{k,i} = \begin{bmatrix} d_{m(ki-1)+1+(m-l)(L-1)} \\ d_{m(ki-1)+2+(m-l)(L-1)} \\ \vdots \\ d_{mki+(m-l)(L-1)} \end{bmatrix}. \qquad (5.4)$$
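One possible sketch of the resulting window-shift scan is given below; it simply recomputes the 1st-order separability index for a set of candidate start offsets and records their spread as ∆_WS, which is an assumed reading of the sensitivity range (signal, shift step and sizes are placeholders).

```matlab
% Sketch: window-shift sensitivity scan of the segmentation start point.
ptsPerTooth = 250; nz = 4; m = ptsPerTooth;         % placeholders
Fang = max(0, sin(2*pi*(1:m*nz*40)'/m))*200 + randn(m*nz*40,1);  % placeholder angular-domain force
shifts = 0:10:m-1;                                  % candidate start offsets (placeholder step)
alphaShift = zeros(size(shifts));
for s = 1:numel(shifts)
    x   = Fang(1+shifts(s) : end);                  % shifted one-column sequence
    Dk  = reshape(x(1:m*nz), m, nz);                % first revolution under this shift
    Dk0 = Dk - mean(Dk, 1);                         % zero-centering
    sig = svd(Dk0);
    alphaShift(s) = sig(1)^2 / sum(sig.^2);         % 1st-order index for this start point
end
deltaWS = max(alphaShift) - min(alphaShift);        % assumed sensitivity range Delta_WS
```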
Fault detection
Based on the above results, two methods are derived for assessing the status of the tool:
TCM based on separability index α k,1
The judgment made directly from the behavior of the separability index is clear and intuitive. Under normal operating conditions, the 1st-order main component can already explain more than 90% of the information. In the event of abnormal wear, the signal deviates from its typical waveform, the 1st order is no longer sufficient to describe the signal, and the 2nd- or 3rd-order contribution of the SVD increases rapidly.
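The exact definition of the separability index is given in the literature review and is not reproduced here; assuming it is the share of signal energy captured by each SVD order, a per-revolution computation could be sketched as follows (the input matrix is a placeholder):

```matlab
% Sketch: order separability indexes of one revolution (assumed energy-share definition).
Dk0 = randn(250, 4)*0.1 + sin((1:250)'*(2*pi/250))*ones(1,4);  % placeholder zero-centered matrix
sigmaK  = svd(Dk0);                       % singular values of the zero-centered D_k
alphaK  = sigmaK.^2 / sum(sigmaK.^2);     % assumed definition: energy share per order
alphaK1 = alphaK(1);                      % 1st-order separability index alpha_{k,1}
% alphaK1 close to 1  -> segments highly correlated (tool in good condition);
% alphaK1 dropping    -> higher orders needed, possible abnormal wear.
```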
If the results obtained are consistent with the known tool state after processing this set of data according to the steps mentioned in the above sections, the effectiveness of the proposed method can be verified. For tools in good condition, the separability index is always above 0.9 before the stoppage, whereas the separability index has an obvious decline near the 100 th revolution on the curve for the worn tool (Figure 5.21 (b)). Periodic wear gradually occurs at stages 1, 2, and 3. The longer tooth cuts more material and is more stressed, which causes and aggravates different levels of wear between teeth. From the perspective of the separability index, the correlation continues to decrease and becomes more and more severe. At the start of stage 4, the indicator drops dramatically. It corresponds to the situation where the wear increases exponentially until the tool is completely damaged, which might damage the spindle. After a certain reaction time, the tool was stopped manually (S5), completely withdrew from the material, and reached the idling state (S6).
As mentioned in Section 5.3.3, the separability index α k1 and sensitivity range ∆ W S will fluctuate slightly with different machining conditions. Using Figure 5.21 (b) as an example, even if the tool continues to wear in the S1, S2, and S3 stages, the values corresponding to the separability index are all above 0.95 due to its simple rectilinear cutting condition. Therefore, it is difficult to use this value to clearly define the wear status of tools with high precision.
The empirical rule is chosen to define the threshold and solve this problem. The three-sigma interval encompasses 99.7% of the values under the assumption of a Gaussian distribution [START_REF] Upton | A dictionary of statistics 3e[END_REF]. Supposing the operation runs to the q-th revolution, the previous separability indexes from the (q − p)-th to the q-th revolution are considered as a cluster representing the current tool state. Their average value µ_qp and standard deviation σ_qp are calculated as
$$\mu_{qp} = \frac{1}{p+1} \sum_{k=q-p}^{q} \alpha_{k1}, \qquad (5.5)$$
and
$$\sigma_{qp} = \sqrt{\frac{1}{p} \sum_{k=q-p}^{q} \left|\alpha_{k1} - \mu_{qp}\right|^{2}}. \qquad (5.6)$$
The value of p can be adjusted to control the response speed of the TCM system.
In the studied case, p = 9, and the thresholds are set at µ qp ± 3σ qp .
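A direct sketch of this rolling three-sigma thresholding, with p = 9 as above, could look as follows; the separability-index history `alpha1` and the alarm diameter are placeholders.

```matlab
% Sketch: rolling empirical-rule thresholds on the separability index alpha_{k,1}.
alpha1 = [0.99 + 0.003*randn(1,90), 0.95 + 0.01*randn(1,30)];  % placeholder index history
p = 9;                                     % number of past revolutions kept in the cluster
nRev = numel(alpha1);
up = nan(nRev,1); low = nan(nRev,1); bubble = nan(nRev,1);
for q = p+1:nRev
    cluster   = alpha1(q-p:q);             % revolutions (q-p) ... q
    mu_qp     = mean(cluster);             % Eq. (5.5)
    sig_qp    = std(cluster);              % Eq. (5.6): 1/p normalization over p+1 values
    up(q)     = mu_qp + 3*sig_qp;
    low(q)    = mu_qp - 3*sig_qp;
    bubble(q) = up(q) - low(q);             % "bubble" diameter
end
alarm = bubble > 0.02;                      % alarm threshold on the bubble size (placeholder)
```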
The interval formed by µ_qp ± 3σ_qp for the tool in good condition is very close to the separability index itself (Figure 5.21(a)). The standard deviation σ_qp emphasizes the variation of the separability index for the worn tool, so regardless of its absolute value, when α_{k1} shows a downward trend, µ_qp ± 3σ_qp forms a raised area in both the upper and lower limits, named here the bubble effect.
The vertical diameter of the first bubble in Figure 5.21 (b) is 0.01196, and then the diameter gradually increases. In the second and third stages, the values are 0.01898 and 0.03785, and the fourth stage shows an order of magnitude change at 0.2011. According to the different accuracy requirements, an alarm threshold can be set corresponding to a suitable bubble diameter to monitor the tool state. For the current case, the milling process ideally should be stopped at S2 or S3 to avoid subsequent catastrophic failure. At this time, the tool is no longer in a steady-state, which means that at least one of the teeth is entering the rapid wear phase. The result of the verification is in full agreement with the acknowledged tool status.
TCM based on sensitivity range ∆ W S
The assessment based on the sensitivity range has the advantage of skipping the step of milling direction correction (Section 4.4.3). During regular milling conditions, ∆ W S remains stable with a standard deviation of approximately 0.005. Once tool wear is detected, the sensitivity range varies greatly due to the decline of the correlation between segments. The bubble appears in the same position as with the previous method.
Summary and evaluation
The above two judgments are actually two expressions of the same phenomenon.
The first one simply relies on the separability index. Only current revolution data need to be included when calculating real-time monitoring, which is fast and efficient. However, the trajectory angle and the spindle angle must be synchronized to achieve higher accuracy. The second method uses the traversal mode, which has the advantage of not requiring a trajectory correction but needs a longer running time to process a larger amount of calculations for the window shift. Depending on the available equipment, it provides different monitoring strategies to suit variable machining conditions. The general workflow for tool state diagnosis based on the inter-insert correlation method by SVD is illustrated in Figure 5.23. No single method can guarantee that the right decision will be made all the time. The notion of false alarm is considered in a statistical sense for tool condition monitoring. The threshold establishes a balance between the probability of good detection and the probability of false alarm. As described in Section 2.4.3, the ROC curve is introduced in Figure 5.24 to determine the optimal threshold setting and the corresponding accuracy.
According to the information given by Girardin et al., the 94th revolution is chosen as the turning point for the validation signal: before it the data are labeled as good condition and after it the data are labeled as worn condition. Comparing the given true labels with the diagnosed states, the ROC curve shows the relationship between the true positive (TP) rate and the false positive (FP) rate of the model. When the threshold is adjusted, the resulting classifications also change according to the value of this threshold. The curve shown in Figure 5.24 is obtained when all thresholds have been tried. The area under the curve (AUC) provides an aggregate measure of the classification performance; here it reaches 0.87 and 0.96 for the two proposed indicators, respectively. Note that the bubble size could cross the threshold during the stoppage, due to a very fast drop in the width of cut. There is still room for optimization at this stage.
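As an aside, the ROC evaluation itself can be reproduced along the following lines; the index history is a placeholder, the 94th-revolution labelling follows the description above, and `perfcurve` belongs to the Statistics and Machine Learning Toolbox.

```matlab
% Sketch: ROC curve and AUC for the good/worn classification per revolution.
alpha1 = [0.99 + 0.003*randn(1,93), 0.90 + 0.02*randn(1,30)];  % placeholder index history
labels = double((1:numel(alpha1)) >= 94);   % 1 = worn from the 94th revolution onward
scores = 1 - alpha1;                        % lower separability index -> higher wear score
[fpr, tpr, thr, auc] = perfcurve(labels, scores, 1);
plot(fpr, tpr); xlabel('False positive rate'); ylabel('True positive rate');
title(sprintf('AUC = %.2f', auc));
```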
The current strategy adopted is to cooperate with the milling program to indicate the finish point of product processing. After the product processing is completed, the monitoring is stopped while the tool is withdrawn (no chips are generated).
In summary, this method provides effective and stable feedback on the condition of the tool, which is very promising for TCM system construction.
In the above results, the proposed indicators provide effective feedback on the state of the tool. However, some of the limitations of the method are shown in the findings from the comparison between Test #2 8 and Test #3 19.
Both tests follow the same base path, namely the designed curve of case (D). It is clear from the amplitude extraction given by the first-order ΣV (Figure 5.26(a), (b)) that there are three value drops before the stoppage. The first drop occurs in S2, where the radius of the outer profile milling is too small compared to the tool radius, thus resulting in a different cut distribution for each tooth. The second and third drops happen in S4 and S5, respectively. Obviously, when the tool passes over the holes, the material to be cut suddenly decreases, so the cutting force is significantly reduced at that instant. These two sharp valleys raise challenges for the TCM system. Both the separability index α_{k1}-based and the sensitivity range ∆_WS-based diagnostic approaches exhibit fluctuations when the tool passes over the holes.
The right-column images in Figure 5.26 actually show a similar trend. The cause of the fluctuations is logical, but the fact is that the separability index (the blue line in Figure 5.26(d)) falls below 90% in S2, S4, and S5, especially when the tool passes over the hole, where the minimum value can be less than 80%. For S2, the bubble based on the separability index is about 0.06, and the bubble size based on the sensitivity range is close to 0.025. This is still in a tolerable range if the same accuracy requirement as in Section 5.4 is used as a reference. But for the two passes over the holes, the bubble based on the separability index is quite large, up to 0.37, while the bubble based on the sensitivity range is around 0.15. This reaches the threshold for triggering an alarm.
Based on the above analysis, the proposed method has difficulty maintaining a steady state when encountering situations that create an imbalance in the cutting amount between the inserts, such as passing over a hole or milling an outer contour with a tiny radius. Consequently, for projects with these special scenarios and high accuracy requirements, it is necessary to pre-mark such sudden changes in advance with the help of 3D machining design software, in order to reduce unnecessary false alarms. A more appropriate and targeted application of this method is intelligent planning machining, which has recently emerged. For example, Esprit has proposed ProfitMilling, which automatically plans and optimizes the milling paths mainly using basic geometries such as linear, spiral, and trochoidal motions [Mita].
Conclusion
Chapter 5 details the processing results of the SVD algorithm applied to the revolution-based target matrices, which is the core content of the inter-insert correlation for TCM.
The simple and idealized IAS simulation signal was first taken as the data source for the SVD, and the physical significance of the corresponding components was explored in both straight and curved scenarios. The left singular matrix U extracts the normalized shared waveform from the segments with preservation of the phase shift due to the curved trajectory; the singular value matrix Σ can be considered as the scaling factor determining the contribution of the decomposition sequences to the original signal; the right singular matrix V contains the proportional evolution in segments.
Afterward, the SVD method is applied to the experimental data to verify that the results are consistent with the simulated signal. The outcomes show that the interinsert correlation successfully copes with the more complex real data. Meanwhile, the comparison between the different experimental groups reveals that the method has good compatibility with angular and Cartesian coordinates and is independent of the radial depth of cut. A sensitivity analysis was then carried out, which derives two indicators based on the separability index α k,1 and the sensitivity range ∆ W S , respectively. These two indicators are in fact two expressions of the same principle.
The indicator α_{k,1} has the advantage of a fast response and ease of calculation, but the path direction correction is necessary to achieve greater accuracy. The indicator ∆_WS, on the other hand, does not require prior correction of the trajectory but carries a greater computational burden. Depending on equipment availability, a suitable strategy can be selected flexibly. The signal from the worn tool was employed to test these two indicators. Finally, the ROC curves give AUC values of 0.87 and 0.96, respectively.
The inter-insert correlation for TCM can effectively reduce the influence of external factors on the analysis process and is therefore independent of changes in width of cut and trajectory changes. This reference-free method partially fills the gap for tool monitoring in small-batch customized production. Amidst the promising prospects, some limitations are also identified. The proposed method has difficulty in maintaining a steady state when encountering certain situations that may cause an imbalance in the cutting amount between inserts, such as passing over a hole, milling a tiny radius outer contour, etc. This poses a new challenge for future developments.
Conclusion
The condition monitoring system is an important factor in ensuring part quality and production safety in the emerging flexible manufacturing of Industry 4.0. The tool, as the terminal executive part of the machining process, is in direct contact with the workpiece, so TCM is extremely important. At the same time, however, it is a challenging task due to the complex working environment. Therefore, the work carried out in this thesis aims to propose a solution that allows real-time condition monitoring of end mills in both stationary and non-stationary regimes, especially to meet the monitoring needs of customized small-batch production.
In the literature review (Chapter 2), we first investigated the characteristics of milling operations, mainly concerning the operating parameters, tool wear types, tool service life, the chip formation process under normal and worn tool conditions, and milling force analysis. On this basis, the three elements for building the TCM system, namely sensorial perception, feature extraction, and decision making, were explored. The obtained results partially fill the gap in tool monitoring for flexible manufacturing and customized small-batch production. The method is promising for further extension to the monitoring and maintenance of other rotating machinery.
Certainly, some setbacks were also noted during the research. The proposed method has difficulty maintaining a steady state when encountering situations that cause an imbalance in the cutting amount between inserts, such as passing over a hole or milling an outer contour with a very small radius. This more specific challenge is a direction for future improvement.
Principal contributions
The main contributions of this research can be summarized as follows:
• The behavior of the rotational signals is further explored in the angular domain, which creates a steady base for the segmentation of the signals corresponding to each insert of the tool.
• The concept of exploiting the inter-insert correlation to monitor the tool state is proposed. Both theoretical derivation and experimental analysis are preliminarily justified with very promising results.
• Among the many methods of correlation, a specific treatment employing SVD is identified. Each of its decomposition components has a corresponding physical meaning, meanwhile, it also possesses computational efficiency. On this basis, the order separability index for assessing the current operating state of the tool is introduced and two derived strategies for fault detection are proposed.
• Compared to the Teach-in & Comparison decision making used in most monitoring systems, the method proposed in this work has the advantages of convenience, intuitiveness, and the flexibility to adapt to different milling trajectories (cyclostationary and cyclo-non-stationary conditions). Since it does not require any trial runs or big-data training to obtain a standard threshold reference for each milling task, it is well suited to on-demand customized production as well as to small-batch production.
Further works
This thesis proposes a method based on correlation analysis for the monitoring of tool conditions during milling processes. In view of the fact that there is only a small number of publications in the same direction, there are still many techniques and algorithms that can be further discussed in addition to the implementation presented in this work. In order to advance future related research, some of the outlooks on the topic are described below.
• The purposefulness of both roughing and finishing operations should be further considered: roughing places more emphasis on the issue of allowing full productivity, whereas finishing is more about guaranteeing the quality of the workpieces. This requires a more precise correspondence between the accuracy requirements and the suggested indicator to determine in different scenarios whether a tool with minor damage is still suitable for further machining. It will require a series of experiments to determine the specific criteria to be applied, but it only concerns the characteristics of the tool (material, helix angle, etc.), independent of the cutting conditions.
• As described in Section 5.5, when encountering certain situations such as passing over a hole, etc., the proposed indicator will fluctuate due to the imbalanced cutting amount between inserts. Consideration could be given to optimizing this instability digitally by combining it with a pre-implanted milling program, or by taking multiple sensor signal signatures together, to enhance the monitoring stability.
• In this study, the cutting force was taken as the object, but dynamometers are mostly used in the laboratory, and it would be appropriate to follow up with a signal source from real production to validate the indicator. The addition of signal sources could also be accompanied by a diversification of the parameters.
• As its working principle is based on the correlation analysis of repeated events on a revolution basis, the method can be seen as part of a broader framework for the monitoring and maintenance of rotating machinery. Hence, another direction that can be expected is to explore its potential for the diagnosis of other periodic structures, such as gears and bearings. As opposed to sampling with a constant time step, this system measures the elapsed time based on the occurrence of an event, that is, in this case, on the rotational position of the spindle.
As shown in the result curves above, this is not pointless: the curves obtained in the case where the number of teeth n_z is even are a good encouragement to use the inter-insert correlation to diagnose the tool state. What should be done is to stick to the principle but find a more appropriate way to handle the data.
Correlation between each pair of the inserts
The second attempt returns to the most original conception of the idea of inter-insert correlation, as described in Section 2.5.1.
The n_z column vectors in D_k are paired by permutation to carry out the correlation analysis. All pair-based correlation coefficients, calculated according to Equation (2.33), are arranged in the following matrix:
$$
\mathbf{R}_k =
\begin{pmatrix}
R_{1,1} & R_{1,2} & \cdots & R_{1,n_z} \\
R_{2,1} & R_{2,2} & \cdots & R_{2,n_z} \\
\vdots & \vdots & \ddots & \vdots \\
R_{n_z,1} & R_{n_z,2} & \cdots & R_{n_z,n_z}
\end{pmatrix}
\tag{B.3}
$$
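As an illustrative sketch (not the original processing code), the pair-based matrix can be obtained directly with NumPy by treating each segment column of D_k as one variable; the column contents below are invented toy data.

```python
import numpy as np

def pairwise_correlation_matrix(D_k: np.ndarray) -> np.ndarray:
    """Pair-based correlation matrix R_k of the n_z segment columns of D_k.

    Each column of D_k (m samples x n_z teeth) is one variable; entry R[i, j]
    is the Pearson correlation coefficient between the segments of tooth i and
    tooth j, in the spirit of Equation (2.33).
    """
    # np.corrcoef expects variables in rows, hence the transpose
    return np.corrcoef(D_k.T)

# Toy check: three identical segments and one attenuated, phase-shifted segment
w = np.sin(np.linspace(0, np.pi, 250))
D_k = np.column_stack([w, w, w, 0.5 * np.roll(w, 60)])
R_k = pairwise_correlation_matrix(D_k)
print(np.round(R_k, 3))   # symmetric matrix, diagonal equal to 1
```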
… discussed. Three methods, namely global extraction, extraction per tooth, and extraction per revolution, were analyzed. Considering the efficiency of the update rate for on-line monitoring and the volume of data to be computed in real time, it was decided that the target matrix would be generated by the revolution-based method. Chapter 4 - Experimental design and data pre-processing: Having determined the specific algorithm, the target matrix, and the simulation signal, this chapter describes the acquisition and pre-processing of the experimental data. The DECKEL MAHO DMC 635 V machine tool was used for the experimental tests. A Kistler 9257A 3-axis dynamometer was screwed onto its working table, delivering cutting forces measured in three orthogonal directions (x, y, z). In combination with the x, y displacement information provided by the table and the angular information θ from the rotary encoder integrated in the spindle, a total of six signals are collected simultaneously and transmitted to the Oros R35 dynamic signal analyzer.
The simple and idealized IAS simulation signal was first taken as the source for the SVD, and the physical meaning of the corresponding components was explored in both straight-line and curved scenarios. The left singular matrix U extracts the normalized shared waveform of the segments while preserving the phase shift due to the curved trajectory; the singular value matrix Σ can be considered as the scaling factor determining the contribution of the decomposition sequences to the original signal; and the right singular matrix V contains the proportional evolution within the segments. The separability index α_k,1 expresses the ability of the first-order singular value to approximate the original data. When the segments of the target matrix show an extremely high similarity, α_k,1 remains in a dominant state close to 1, and in case of insert wear, α_k,1 decreases noticeably. The SVD method is then applied to the experimental data to verify that the results are consistent with the simulation. The results show that the inter-insert correlation successfully handles complex real data. Meanwhile, the comparison between the different experimental groups reveals that the method is compatible with both angular and Cartesian coordinates and that it is independent of the radial depth of cut. A sensitivity analysis was then carried out, and the sensitivity range Δ_WS was obtained. On this basis, two indicators are derived from the separability index α_k,1 and from the sensitivity range Δ_WS, respectively. These two indicators are in fact two expressions of the same principle. The indicator α_k,1 has the advantage of a fast response and ease of calculation, but the correction of the path direction is necessary to achieve greater accuracy. The indicator Δ_WS, on the other hand, does not require prior correction of the trajectory but involves a greater computational burden. The workflow is summarized in the following figure.
Fig. 1: General flowchart of TCM system based on the inter-insert correlation using SVD
Fig. 2.11: Kistler 3-component force dynamometer [Kisb]
Fig. 2.12: Sensor application versus level of precision
Fig. 2.15: Schematic diagram of the PCA principle
Fig. 2.16: Visualization of the SVD [Str93]
Fig. C.3: Results of designed curve Test #2 8 (a_e = 3.2 mm) -(a) illustration of the cut; (b) 1st-order of U; (c) 1st-order of ΣV; (d) fault detection based on separability index α_k1; (e) fault detection based on sensitivity range Δ_WS.
Fig. 1.1: Maintenance strategies: (a) derived subordination; (b) corrective maintenance; (c) pre-determined maintenance; (d) predictive maintenance.
… cover the signal in the angular domain. The research in this work is developed based on the previous work of Girardin et al. [GRR10] on the instantaneous angular speed; (ii) the multiple tool inserts on the same tool are considered as interacting individuals, and the real-time continuous signal is divided into corresponding segments for self-comparison;
After a literature review, it appears that current research mostly emphasizes the correlation between a proposed indicator and the mechanical state [Sen+12; Dut+13b; NL19], or compares the real-time signal with a known template [RSJ06; SJ07; Niu+21].
M.
Weck et al. [WEC+80] published the article Concept of integrated data processing in computer controlled manufacturing systems in 1980. It concerns a flexible and automated manufacturing processes method, which has been successfully demonstrated in an aerospace machining application at the University of Aachen. The application of sensors in CNC machine tools steadily expanded during the late 20 th century, replacing manual monitoring of production processes and eliminating personnel costs while improving product quality and minimizing downtime. As the general aspects of maintenance strategy and technology have improved, the diagnostic challenges of tools have come to the fore and been noticed. In January 1993, CIRP established a framework within the context of the Working Group on Tool Condition Monitoring (TCM WG), to investigate the international state of the art, technological challenges, and growing trends in TCM both from a research perspective and from within the manufacturing industry [Byr+95]. It focused on catastrophic tool failures, collisions, progressive tool wear, and tool chipping/fracture during machining processes (including milling, turning, drilling, etc.). A further issue of the TCM WG was A Review of Tool Condition Monitoring Literature Data Base published by Teti et al. in 1995, which is continuously updated until 2006 with 500 new publications comprising more than 1000 classified references [RST08]. In 2010, the same author presented a keynote paper in CIRP Annuals [Tet+10], which summarized the developments in sensor systems, advanced signal processing, mon-itoring scopes, etc. A large number of citations over several decades demonstrates the continuing interest and significance of the subject for the manufacturing sector. With the rapid advancement of the Internet, diagnostic and maintenance technology is witnessing and participating in the revolution of Industry 4.0 (IR4). A wide range of predictive maintenance systems consists of sensors, cyber-physical systems, the Internet of Things (IoT), big data analysis, cloud computing, web and artificial intelligence, mobile networks, etc. [P ŽB19; CST18], and the TCM is a crucial element for the operational practice. In 2009, a research program called Factories of the Future (FoF) was launched by the European Factories of the Future Research Association (EFFRA). It was a recovery plan after the economic crisis in 2008, for rebooting the industry competitiveness with newly developed high-value-adding manufacturing.In the roadmap proposed by EFFRA, it highlighted the necessity of monitoring the actual state of the machine/tool in a continuous mode[START_REF]Factories of the future : multi-annual roadmap for the contractual PPP under Horizon 2020[END_REF].From January 2014 to December 2015, the EU Commission carried out a project to develop an advanced monitoring technology to track tool wear with a budget over 1,245,000 euros[START_REF]Advanced monitoring technology to track tool wear[END_REF]. In 2020, European Horizon 2020, the biggest EU Research and Innovation program ever, reiterated the call for Factories of the Future to emphasize the condition monitoring as a key development theme of the day[START_REF]Call for Factories of the Future[END_REF].
Fig. 2.1: Tool geometry elements of a four-tooth milling cutter
Fig. 2.2: Cutting condition in milling
Fig. 2.3: Tool deterioration phenomena [Ano89]: (a) flank wear; (b) crater wear; (c) catastrophic failure.
Fig. 2.4: Tool life (T) curves: variation of flank wear (Vb) land with operation time.
Figure 2.5 uses an end mill (down milling) as an example, illustrating the tooth path in the material removal process. Since it does not involve oblique cutting, its three-dimensional motion is simplified to a two-dimensional orthogonal model. It appropriately omits certain unnecessary geometric complexities while retaining an efficient description of the machining mechanism.
Fig. 2.5: Chip formation in general case
Fig. 2.7: Chip formation with flank wear
Fig. 2.8: Forces during milling process: (a) general preview; (b) partial zoom [Alt12].
Fig. 2.9: Force behavior in different tool conditions: (a) relatively healthy state at the 1st revolution; (b) severely worn state at the 98th revolution.
Fig. 2.10: Basic process flow of TCM in milling processes
[START_REF] Shankar | Prediction of cutting tool wear during milling process using artificial intelligence techniques[END_REF]. Liang et al.[START_REF] Steven Y Liang | Machining process monitoring and control: the state-of-the-art[END_REF] pointed out that when a substantial breakage of the insert occurs, there is a dynamic process in which the cutting force increases instantaneously and then falls back. The magnitude of this drop depends on the reduction of the cutting chip volume due to partial breakage. Kim et al.[START_REF] Kim | Development of a combined-type tool dynamometer with a piezo-film accelerometer for an ultra-precision lathe[END_REF] developed a combined-type tool dynamometer that can measure the static cutting force and the dynamic cutting force together by use of strain gauges and a piezofilm accelerometer. Given the growing need for measuring dynamic response and the increasing stiffness of processing devices, piezoelectric sensors have been preferred over strain gauges in recent years. As shown in Figure2.11, Kistler's three-component plate dynamometer has several 3-axis force load piezoelectric cells evenly distributed in the plate. These piezoelectric sensing elements produce a charge proportional to the load in the direction of measurement[Kisb]. Huang et al.[START_REF] Potsang | A PNN selflearning tool breakage detection system in end milling operations[END_REF] detected the tool breakage in end milling operation based on the cutting forces obtained by the piezoelectric dynamometer.
Many studies have demonstrated the feasibility of using vibration signals for TCM in the milling process. Wang et al. [Wan+14] conducted milling experiments on Ti6Al4V alloy. Vibration signals corresponding to four tool wear states are collected. Their features are extracted in the time and frequency domains for TCM analysis. Hsieh et al. [HLC12] demonstrated that when coupled with appropriate feature extraction and classifiers, spindle vibration signals can distinguish the different tool conditions in the micro-milling process. Bisu et al. [Bis+12] decomposed the vibration sources and proposed the vibration envelope method to perform a complete dynamic analysis of various components characterizing the machine tool. However, at the same time, they emphasized that vibration signals are the result of mixing different sources, making it difficult to identify the damage state of a specific component.
of the feature extraction. The signals need to be converted into adequately characterized descriptors, for two reasons. One is to closely represent the tool condition, and the other is to significantly reduce the dimensions of the original information. Many research works have been done to study various feature extraction approaches in the time, frequency, and angular domains. The descriptor output from the feature extraction module will become the input of the monitoring model module and thus construct the TCM system.
[ CBA10 ;
CBA10 Bin+09]. Vibration and sound signals are usually the object for feature extraction in frequency domain analysis. Lu and Kannatey-Asibu [LK02] use audible sound generated from the cutting process as analyzing source for monitoring tool wear during turning. Through the observation of the microphone and accelerometer power spectrum, the energy distribution for sharp and worn tools is easily discernible from the sound and vibration signals. However, when confronted with nonstationary signals whose frequency varies with time, frequency-domain methods focus only on the feature information of the spectrum perspective and can lose sight of the temporal information of the data. To overcome the shortcomings mentioned above, many researchers turned to the time-frequency domain analysis based on wavelet transform (WT) for feature extraction in TCM for milling processes [ZYN15]. Wavelet transform enables the decomposition of signals into various components in various time windows and frequency bands using scale and mother wavelets, which can achieve adaptive time-frequency analysis. Since Tancel et al. [Tan+93] first applied WT for tool wear monitoring, the algorithm has also derived a series of optimizations. Continuous WT (CWT) can extract the feature pattern from both stationary and non-stationary signals, but it is computationally intensive, which led to the development of discrete WT (DWT) [Kwa06]. However, Hong et al. [Hon+16] indicated that DWT has information loss for high frequency signals, and they proposed wavelet packet transform (WPT) to enhance the resolution for high-frequency band signals. More details about WT can be found in the literature [ZSH09] for references.
Figure 2.13, it is classified under the category of non-stationary signals [GNP06]. The cyclostationary signals include generally all non-stationary signals generated under a constant regime by periodic mechanisms. At the same time, periodic signals and stationary random signals are considered to be the two important special cases whose cycle equals to zero [AAX16].
Fig. 2.13: Typology of signals
Stander et al.[START_REF] Stander | Transmission path phase compensation for gear monitoring under fluctuating load conditions[END_REF] improved the previously mentioned angle SA by performing phase correction prior to the averaging operation. Abboud et al.[START_REF] Abboud | Deterministic-random separation in nonstationary regime[END_REF] proposed generalized synchronous average (GSA), which redefined the angle averaging to account for the speed dependence and accommodate for the induced changes in the signal. It is also worth noting that without constant rotational conditions, the conversion between the time-CS and angle-CS is ambiguous and incomplete. Literature[START_REF] Antoni | Cyclostationary modelling of rotating machine vibration signals[END_REF] discussed the conditions for the equivalence of their conversion. On the other hand, there are also attempts [Abb+15; AAB14] to address the problems from a joint angle-time vision.Overall, performing TCM from the angular domain is a newly developed technique in about the last 10 years. Many of the methods have not yet reached a very mature stage, especially for the CNS signal processing whose definition in our case
… Figure 2.14(b) (0.5 < AUC < 1). Sometimes the curve may present jaggedness, which commonly comes with gaps in the test data. If the curve is a straight line from the bottom left to the top right, as shown in Figure 2.14(c) (AUC = 0.5), the performance is no better than a random guess; it is then better to reconsider the model or to engineer better features.
Fig. 2.14: ROC curves: (a) AUC=1; (b) 0.5 < AUC < 1; (c) AUC=0.5 [MAT22].
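For illustration only, the following Python sketch evaluates a ROC curve and its AUC for a generic anomaly score by a threshold sweep; the score distributions are invented and this is not the evaluation script used later in the thesis.

```python
import numpy as np

def roc_curve(scores: np.ndarray, labels: np.ndarray):
    """ROC points (FPR, TPR) for a fault score, computed by threshold sweep.

    labels are 1 for a faulty sample and 0 for a healthy one; a higher score is
    assumed to indicate a fault.
    """
    order = np.argsort(-scores)
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr: np.ndarray, tpr: np.ndarray) -> float:
    return float(np.trapz(tpr, fpr))   # area under the ROC curve

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 200)          # scores of healthy revolutions
worn = rng.normal(2.0, 1.0, 50)              # scores of worn revolutions
scores = np.concatenate([healthy, worn])
labels = np.concatenate([np.zeros(200), np.ones(50)])
fpr, tpr = roc_curve(scores, labels)
print(round(auc(fpr, tpr), 3))               # close to 1 means good separation
```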
•
The signals generated by the rotating tool are CS signals during the stationary operations and CNS signals during the non-stationary regime (speed or trajectory changes).Based on the above facts along with the investigation of the background and development trend of machining in Chapter 1, a concept of tool condition assessment using inter-insert correlation is tentatively proposed. While treating the tool as a whole, the concept also strives to consider multiple inserts as interacting individuals.The correlation analysis among them can eliminate as much as possible the external non-stationary factors and gather the focus on the condition of the tool. When the cutter undergoes variation in speed, all the inserts can be seen as being similarly affected. At the same time, the operating mode of high rotational speed with low feed rate makes the trajectory changes experienced by individual cutting edges tend to be similar. Especially the high-speed machining can better approximate the external working environment of each tooth as quasi-equivalent, thus contributing to more accurate correlation results.After an extensive literature search, some relevant correlation-based studies were found. Most of the methodologies aim to establish the correlation between flank wear and some specific parameters (vibrations[START_REF] Kundu | A correlation coefficient based vibration indicator for detecting natural pitting progression in spur gears[END_REF], cutting forces[START_REF] Zhong | Correlation analysis of cutting force and acoustic emission signals for tool condition monitoring[END_REF], temperature[START_REF] Pc Wanigarathne | Progressive tool-wear in machining with coated grooved tools and its correlation with cutting temperature[END_REF], surface texture[START_REF] Dutta | Correlation study of tool flank wear with machined surface texture in end milling[END_REF], etc.) in order to perform condition monitoring.Other studies [HWG17; KS17; Che+15] try to start with the establishment of the working standard in good condition as a reference, then follow Teach-in &
…forming the observed set of n correlated variables (each variable has m observation samples) into a new set of uncorrelated variables, also referred to as principal components (PCs) [BK19]. These new variables are linear combinations of the original variables and are arranged hierarchically in decreasing order of importance. If the original variables are highly correlated, the first few components may already have the potential to represent most of the variations in the original data. With minimal loss of the original information, extracting the first few components as overall indicators representing the original variables lowers the dimension of the problem, thereby saving the amount of subsequent data manipulation and facilitating its storage. On the other hand, if the original variables are barely correlated, then PCA may seem pointless.
Once the data are projected onto a basis (Equation (2.35)), the variance of the projections of the sample points on this basis can be evaluated. For a one-dimensional problem, it is sufficient to find the direction that maximizes the variance. Intuitively, in order for the different PCs to represent as much of the original information as possible, duplicate information should be avoided, i.e. there should be no (linear) correlation between the PCs. Mathematically, the correlation between two variables can be expressed in terms of their covariance. With zero mean, the covariance of two variables is expressed as their inner product divided by the number of elements m. The theoretical covariance can be empirically approximated as
$$\mathrm{Cov}(a,b) \approx \frac{1}{m}\sum_{j=1}^{m} a_j b_j .$$
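A minimal sketch of this principle in Python, using the empirical covariance above and an eigendecomposition (the data set is an invented two-variable example):

```python
import numpy as np

def pca(X: np.ndarray, n_components: int):
    """Plain PCA of an (m samples x n variables) matrix via the covariance matrix.

    Centre the data, estimate Cov = X^T X / m, then keep the eigenvectors
    associated with the largest eigenvalues as the principal directions.
    """
    Xc = X - X.mean(axis=0)                  # zero mean per variable
    cov = Xc.T @ Xc / Xc.shape[0]            # empirical covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)     # ascending eigenvalues
    idx = np.argsort(eigval)[::-1][:n_components]
    return Xc @ eigvec[:, idx], eigval[idx]

# Two strongly correlated variables: one component explains almost all variance
rng = np.random.default_rng(2)
x = rng.normal(size=500)
X = np.column_stack([x, 2 * x + 0.1 * rng.normal(size=500)])
scores, variances = pca(X, 2)
print(np.round(variances / variances.sum(), 3))
```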
Fig. 3.1: End milling preset trajectory model
Fig. 3.2: Segmentation of signal: (a) the signal of the force divided by two red dotted lines corresponds to a segment; (b) illustration of the reshaping process.
(3.4), where d̄_k,i is the average value corresponding to the i-th segment d_k,i in the k-th revolution, and 1_m,1 is an all-ones vector with dimension m × 1. The term Ψ_F k,i corresponds to the representative (theoretical) standardized waveform of the zero-centered cutting force. The term ΔF_k,i represents the cut thickness characteristic due to the individual tooth condition.
When the n_z teeth are ideally in the same state, the sequences possess remarkably close signal signatures and can be seen as identical. Hence, the average values of the different segments d̄_k,i are equal to each other, and the ideal segmentation matrix can be written as in (3.6), where J = 1_m,n. The average value of D_seg equals d̄_k,i only in the ideal situation. The theoretical shape and the remaining individual residual extracted on the basis of D_seg are then also different from Ψ_F and ΔF. Certainly, the outcomes derived from D_seg and d̄_k,i should be very similar (within a certain error) during operation in good condition. When the teeth gradually show wear, the difference between them increases significantly. Considering that the cut thickness characteristic due to the individual tooth condition ΔF is relatively much smaller than the original value of the cutting force,
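The segmentation and the mean/waveform/residual split described above can be sketched as follows; the reshaping convention and the estimation of the shared waveform as the average of the zero-centred segments are assumptions of this illustration, and the input revolution is simulated.

```python
import numpy as np

def decompose_revolution(force: np.ndarray, n_z: int):
    """Split one revolution of force samples into per-tooth segments and
    separate mean value, shared waveform, and individual residuals."""
    m = force.size // n_z
    D_k = force[: m * n_z].reshape(n_z, m).T      # m samples x n_z teeth
    d_bar = D_k.mean(axis=0, keepdims=True)       # average value of each segment
    centred = D_k - d_bar
    psi = centred.mean(axis=1, keepdims=True)     # shared (theoretical) waveform
    delta = centred - psi                         # tooth-specific residual
    return D_k, d_bar, psi, delta

# One simulated revolution of a 4-tooth cutter, identical teeth except a weaker 4th tooth
theta = np.linspace(0, np.pi, 256, endpoint=False)
segment = 300 * np.sin(theta)
force = np.concatenate([segment, segment, segment, 0.8 * segment])
_, d_bar, psi, delta = decompose_revolution(force, n_z=4)
print(np.round(d_bar, 1))        # the lower mean reveals the deviating tooth
```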
Fig. 3.3: Simulation result comparison
Fig. 3.4: Example diagram of curved path milling (top view)
Fig. 3.5: Curve-line simulation signal
(3.14), where W summarizes the basic waveform of the segments, K is the amplitude corresponding to each segment superimposed on W, D_e is the average value, and ε refines the individualized fluctuations of each data point (in response to the standard line shape), as illustrated in Figure 3.6. Such a standardized expression facilitates finding potential internal connections among the segments.
Fig. 3.6: Re-expression of components of D_e
Section 4.2 describes the experimental devices used to perform the machining tests and the acquisition of the signals, including the setup and the measurement output. These setups, combined with different parameter settings, were used to study several cases of the milling tasks in Section 4.3. Section 4.4 explains the preprocessing steps before the inter-insert correlation analysis, including resampling, path correction, segmentation, etc. The purpose of this chapter is to describe the setup and the intrinsic principles involved in the acquisition of experimental data in order to facilitate subsequent validation experiments. At the same time, it extends the range of applications for detecting tool wear or breakage under different cutting conditions.
Fig. 4.1: Experimental setup
Fig. 4.2: Tool initial condition: (a) end mill inspection display; (b) the tool diameter including the 1st, 3rd or 4th tooth is 31.770 mm; (c) the tool diameter including the 2nd tooth is 31.830 mm.
Fig. 4.3: Trajectory display for contour of square
Fig. 4.4: Trajectory display for contour of square
Fig. 4.5: Trajectory display for contour of rounded square
Fig. 4.6: Trajectory display for contour of designed curve
Fig. 4.7: Trajectory display for contour of designed curve with holes
Fig. 4.8: Demonstration of Hilbert transformation: (a) composition of the analytic signal; (b) representation of phase information corresponding to the rotational angle of the spindle.
Fig. 4.9: Schematic diagram for data acquisition under different domains with variable rotational speed (ω 1 < ω 2 < ω 3 ): (a) the number of samples collected per revolution varies with the rotational speed if sampling in time domain; (b) the number of samples collected per revolution is evenly distributed regardless of rotational speed if sampling in angular domain.
Fig. 4.10: Comparison before and after resampling in angular domain of the average value of each segment d k,i : (a) sampling in time domain; (b) resampling in angular domain (θ)
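As a hedged illustration of the idea behind Figures 4.8-4.10, the sketch below derives an unwrapped phase from the analytic signal of an encoder-like channel and interpolates a measurement onto a uniform angle grid; the signal shapes, the use of scipy.signal.hilbert, and the interpolation scheme are assumptions of this example, not the exact processing chain of the thesis.

```python
import numpy as np
from scipy.signal import hilbert

def resample_in_angle(measurement, encoder, samples_per_rev):
    """Resample a time-sampled signal at evenly spaced spindle angles."""
    analytic = hilbert(encoder)                    # complex analytic signal
    phase = np.unwrap(np.angle(analytic))          # rotation angle vs. time
    n_rev = int(np.floor((phase[-1] - phase[0]) / (2 * np.pi)))
    grid = phase[0] + np.linspace(0, 2 * np.pi * n_rev,
                                  n_rev * samples_per_rev, endpoint=False)
    return np.interp(grid, phase, measurement)     # same number of points per revolution

# Example with a slowly accelerating "spindle" and a 4-lobe force-like signal
t = np.linspace(0, 1, 20000)
angle = 2 * np.pi * (4 * t + t**2)                 # non-constant rotational speed
encoder_like = np.sin(angle)
force_like = np.clip(np.sin(4 * angle), 0, None)   # one lobe per tooth passage
resampled = resample_in_angle(force_like, encoder_like, samples_per_rev=512)
print(resampled.shape)
```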
Fig. 4.11: Comparison before and after trajectory correction by the average value of each segment d_k,i: (a) angle of reference (θ) without correction; (b) angle of reference (θ + Θ) with correction.
Fig. 4.12: Truncation
Fig. 5.1: Simulation signal for SVD components demonstration: (a) the simulated amplitudes K = [5, 5, 10, 1] correspond respectively to segments of each normal revolution cycle; (b) the simulated amplitudes K_change = [1, 5, 10, 15] correspond respectively to segments of the tenth revolution with a sudden change caused by tooth wear; (c) a full view of the simulated signal.
Table 5.1: Parameters for straight-time simulation signal (amplitudes set for segments K = [5, 5, 10, 1])
The result of reshaping the above-mentioned simulated signal in the manner described earlier is denoted as D_seg. After SVD processing, D_seg can be decomposed into three matrices U, Σ and V. As mentioned in Section 2.5.3, SVD can grab the main components of the data by taking advantage of the underlying correlation principle. Accordingly, the information in the matrices U, Σ and V is hierarchically arranged, based on the importance of the decomposed components.
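The decomposition can be reproduced with a minimal Python sketch; the simulated structure mirrors Table 5.1 (a shared waveform scaled by the per-segment amplitudes K), and the waveform length and number of revolutions below are illustrative assumptions.

```python
import numpy as np

# Build a rank-one style segment matrix: shared waveform scaled by K = [5, 5, 10, 1]
m, n_rev = 250, 10
waveform = np.sin(np.linspace(0, np.pi, m))
K = np.array([5.0, 5.0, 10.0, 1.0])
columns = [k * waveform for _ in range(n_rev) for k in K]
D_seg = np.column_stack(columns)                 # m samples x (n_rev * n_z) segments

# Hierarchically ordered decomposition: U (waveforms), S (scaling), Vt (evolution)
U, S, Vt = np.linalg.svd(D_seg, full_matrices=False)
print(np.round(S[:4], 2))                 # only the first singular value is significant
print(np.round(S[0] * Vt[0, :4], 2))      # first-order amplitudes recover K (up to sign/scale)
```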
Figure 5.2(b) represents the ordered singular matrix Σ. It is a matrix with real, non-negative entries on the diagonal and zeros off the diagonal. Therefore, only the values on the diagonal are plotted in the diagram. The results shown in the figure indicate that the first-order value is 50, which is much larger than the other orders.
Fig. 5.2: Demonstration of decomposed components by SVD: (a) left singular matrix U; (b) ordered singular matrix Σ; (c) right singular matrix V.
… (Figure 5.2(c)). The amplitudes of most of the segments are repeated in cycles of [0.1, 0.1, 0.2, 0.02]. From the 37th to the 40th segments (corresponding to the tenth revolution), the amplitude value undergoes a change.
Fig. 5.3: The result of the combined decomposed components ΣV represents the amplitude of the signal being processed.
Figure 5.4 appears to be a simple split of Figure 5.2. As a matter of fact, however, Figure 5.2 is the result of a decomposition of the reshaping matrix D_seg, while Figure 5.4 is the result of a decomposition of the revolution-by-revolution matrix D_k. When dealing with more complex realistic signals, the decomposition of D_k gives greater accuracy and more diagnostic advantages.
Fig. 5.4: SVD analysis by revolution for straight-line simulation signal: (a) the extracted waveforms for revolutions (the 1st-order of U); (b) the extracted amplitudes for revolutions (the 1st-order of ΣV).
Fig. 5.5: SVD analysis by revolution for curve-line simulation signal: (a) the extracted waveforms for revolutions (the 1st-order to 3rd-order of U); (b) the extracted amplitudes for revolutions (the 1st-order to 3rd-order of ΣV).
Fig. 5.7: SVD results for healthy tool: (a) left singular vector u k of SVD as a function of revolutions presents a basic waveform of signal; (b) ordered singular value σ k times transposed right singular vector v k presents the corresponding value of force.
It can be seen in Figure 5.7(a) that the first-order result of a single revolution is displayed with 512 sample points that form a basic waveform of the k-th revolution. Throughout the milling process, the waveform remains unchanged, but there is a slight phase shift towards the right when the cutter passes through the corner. This is the phase shift naturally introduced by the cutting path, as mentioned in Section 4.4.3. The milling direction correction in the signal processing has greatly improved the situation of phase shift.
Fig. 5.8: Separability index for each order with respect to samples
Fig. 5.10: Fundamental verification: Contour of square -(a) 1st-order of U_k; (c) 1st-order of ΣV; (e) separability index; Contour of rhombus -(b) 1st-order of U_k; (d) 1st-order of ΣV; (f) separability index.
Figure 5.10(d) shows a little more fluctuation than Figure 5.10(c), but their values and curve shapes behave almost the same. The separability indices in Figures 5.10(e) and 5.10(f) are both close to 1 by a dominant margin and show an identical trend.
Fig. 5.11: The results with and without the zero-centering step: (a) 1st-order of separability index for centered data; (b) 1st-order of separability index for original data.
Figure 5.11(a) clearly presents the corresponding change due to the rounded-square milling trajectory, whereas Figure 5.11(b) shows no obvious trend change from …
Fig. 5.12: The influence of the tool's radial engagement: (a) Test #1 10 with a_e = 3.2 mm; (b) Test #1 12 with a_e = 12 mm; (c) Test #2 6 with a_e = 11 mm.
Figure 5.12(a) presents the separability index of cut width a_e = 3.2 mm and Figure 5.12(b) shows the behavior of one test with a_e = 12 mm. The two lines follow more or less the same trend, except for the milling distance shortened by the change in path length (recall the rounded square trajectory in Figure 4.5).
Fig. 5.14: Decomposition for random selection P
Fig. 5.16: Decomposition for random selection CR
Fig. 5.18: Sensitivity analysis based on bootstrap before and after trajectory correction: (a) angle of reference (θ) without correction; (b) angle of reference (θ + Θ) with correction.
Here the scalar d represents a sample point, and the subscript shows the row number of the sample in the original one-column data set; l (l ∈ ℕ) is the overlap number in the shifting process, and L is the number of executions of the shift, defined by the ceiling of m/(m − l).
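The window-shift principle can be sketched as follows; the exact window-count convention and the definition of the returned spread (taken here as the range between the highest and lowest indicator values over the shifted windows) are assumptions of this example, and the indicator used is a toy function.

```python
import numpy as np
from math import ceil

def sensitivity_range(column: np.ndarray, segment_length: int, overlap: int,
                      index_fn) -> float:
    """Spread of an indicator over shifted windows of one data column.

    Windows of m = segment_length points are shifted by (m - l) samples with
    overlap l; the indicator is evaluated on every full window, and the range
    between its upper and lower values is returned.
    """
    m, l = segment_length, overlap
    step = m - l
    values = []
    for w in range(ceil(len(column) / step)):
        window = column[w * step: w * step + m]
        if len(window) < m:
            break
        values.append(index_fn(window))
    return max(values) - min(values)

rng = np.random.default_rng(3)
data = np.abs(np.sin(np.linspace(0, 40 * np.pi, 8000))) + 0.05 * rng.standard_normal(8000)
print(round(sensitivity_range(data, segment_length=250, overlap=200,
                              index_fn=lambda w: float(np.mean(w))), 4))
```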
Fig. 5.19: Sensitivity analysis Δ_WS of separability index α_k1 before and after trajectory correction are almost the same: (a) angle of reference (θ) without correction; (b) angle of reference (θ + Θ) with correction.
Fig. 5.20: Status information of the data used for validation: (a) surface condition of machined workpiece; (b) final condition of inserts.
Figure 5.20(a) shows two defects in the machined workpiece, one in the middle and the other at the end, with lengths of about 2 mm and 16 mm, respectively. Girardin et al. [GRR10] therefore infer that at around the 94th revolution one of the teeth partially broke, and that at about the 193rd revolution all the teeth may have broken. The final state of the inserts after the stoppage is shown in Figure 5.20(b).
Fig. 5.21: Fault detection based on separability index α_k1 (blue line) with upper (purple line) and lower (yellow line) limits of the 3-sigma interval: (a) tool in good condition; (b) tool in worn condition.
Fig. 5.22: Fault detection based on sensitivity range Δ_WS (blue line) with upper (purple line) and lower (yellow line) limits of the 3-sigma interval: (a) tool in good condition; (b) tool in worn condition.
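A minimal sketch of a 3-sigma alarm such as the one shown in Figures 5.21-5.22 is given below; estimating the limits on a fixed healthy reference portion of the indicator, and the chosen length of that portion, are assumptions of this example rather than the exact procedure of the thesis.

```python
import numpy as np

def three_sigma_limits(indicator: np.ndarray, n_reference: int):
    """Upper/lower alarm limits from a healthy reference portion of an indicator."""
    ref = indicator[:n_reference]
    mu, sigma = ref.mean(), ref.std(ddof=1)
    upper, lower = mu + 3 * sigma, mu - 3 * sigma
    alarms = np.where((indicator > upper) | (indicator < lower))[0]
    return upper, lower, alarms

# Separability-index-like series: close to 1 while healthy, dropping after wear appears
rng = np.random.default_rng(4)
alpha = np.concatenate([rng.normal(0.98, 0.004, 90), rng.normal(0.90, 0.02, 30)])
upper, lower, alarms = three_sigma_limits(alpha, n_reference=80)
print(int(alarms[0]))   # index of the first revolution triggering the alarm
```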
Fig. 5.23: General flowchart of TCM system based on the inter-insert correlation using SVD
Fig. 5.24: Receiver operating characteristic curves of the two proposed TCM methods
The milling trajectory is demonstrated in Figure 5.25, consisting of the following stages:
-S1: straight cut along the positive direction of the y-axis;
-S2: external contour milling with a 90° corner;
-S3: straight cut along the negative direction of the x-axis;
-S4: inside contour milling with a 90° corner;
-S5: straight cut along the positive direction of the y-axis again;
-S6: stoppage.
Figure 5.26(a), (c), and (e) show the results corresponding to the basic designed curve, including respectively the decomposition components ΣV and the bubble effects around α_k1 and Δ_WS. Correspondingly, subplots (b), (d), and (f) of Figure 5.26 demonstrate the counterparts of these results with the addition of two extra holes in S4 and S5 accordingly.
Fig. 5.25: Demonstration of the milling trajectory and corresponding stages of tool states
The drops in Figure 5.26(d) and (f) begin at the 211th revolution. This is because the drop in values in Figure 5.26(b) is due to the reduction of cutting forces, as mentioned before, while the decreases in Figure 5.26(d) and (f) are the consequence of the inter-tooth correlation analysis. Therefore, the valleys in Figure 5.26(d) and (f) correspond to the time when the tool has just touched the hole area. At that time, the teeth of the tool experience very different states within the same revolution, and the value of SVT in Figure 5.26(b) has not yet reached the bottom. The fluctuation of values in S2 and S5 can be explained by the same principle.
Fig. 5.26: Limitation: Contour of designed curve -(a) 1st-order of ΣV; (c) fault detection based on separability index α_k1; (e) fault detection based on sensitivity range Δ_WS; Contour of designed curve with holes -(b) 1st-order of ΣV; (d) fault detection based on separability index α_k1; (f) fault detection based on sensitivity range Δ_WS.
in depth. Combined with the sensor performance, the indirect signals that can reasonably describe the machining process are identified. Feature extraction is further discussed in the time and frequency domains as well as in the emerging angular domain.After extensive literature reading, we found that most extant methods perform detection by TCM teach-in method with a pre-established reference. Such methods either require trial cuts or big data learning, which makes it difficult to adjust quickly to changes in trajectory and velocity. This prompted us to propose a new concept: TCM using correlation analysis between multiple teeth of the tool, i.e. inter-insert correlation. While treating the tool as a whole, the concept also strives to consider multiple inserts as interacting individuals. The correlation analysis among them can eliminate as much as possible the external non-stationary factors and gather the focus on the condition of the tool. When the cutter undergoes variation in speed, all the inserts can be seen as being similarly affected. At the same time, the operating mode of high rotational speed with low feed rate makes the trajectory changes experienced by individual cutting edges tend to be similar. Especially the high-speed machining can better approximate the external working environment of each tooth as quasi-6.1 Conclusionequivalent, thus contributing to more accurate correlation results.For this concept, correlation and multivariate analysis were investigated. Eventually, the study settled on the SVD algorithm, which decomposes the signal with underlying correlation-based analysis and gives a comprehensive indicator for assessing the tool condition.It is worth emphasizing that this work is seen as part of a broader framework for monitoring and maintaining rotating machinery. Although the application in this thesis is restricted to signals generated by end mills (mainly milling forces and spindle speed), the applicability of the developed method is intended to cover a wide range of mechanical signals. Theoretically, it could also potentially be applied to condition monitoring of other mechanical components with a rotating nature, such as gears, bearings, etc. Therefore, in Chapter 3, we establish the criteria for applicable signals and a general model of rotating machinery behavior. Under this model, the IAS signals are simulated for preliminary validation of the proposed approach and the specific strategies for the correlation analysis are discussed.After the numerical simulations yielded promising results, representative cutting paths were designed and experiments were conducted. The experimental setup and data pre-processing are shown in Chapter 4.In Chapter 5, the obtained experimental data are used to verify the effectiveness of inter-insert correlation for TCM. The corresponding decomposed components of SVD are interpreted in terms of their physical significance. Based on sensitivity analysis, the separability index α k,1 and the difference between the upper and the lower limits obtained by window shift ∆ W S were derived as the two indicators of detection. These are actually two expressions of the same phenomenon. The first one simply relies on the separability index. Only current revolution data need to be included when calculating real-time monitoring, which is fast and efficient. However, the trajectory angle and the spindle angle must be synchronized to achieve higher accuracy. 
The second method uses the traversal mode, which has the advantage of not requiring a trajectory correction but needs a longer running time to process a larger amount of calculations for the window shift. Depending on the available equipment, it provides different monitoring strategies to suit variable machining conditions. A simple evaluation of these two indicators based on the ROC curve is given, as well as the workflow of the method.The above work answers the objectives of the study proposed in Chapter 1.
As shown in Figure A.2, there are eleven M8 × 1.25 mm threaded holes in the top plate for workpiece mounting. Meanwhile, the top plate is covered with a special insulation layer, which protects against penetration by coolant fluid (IP 67) and against temperature fluctuations in the working environment [Kisa]. The measuring range and the sensitivity of the dynamometer are detailed in Table A.2. In conjunction with the Kistler 5015B charge amplifiers [Kisc], it enables accurate measurements of small dynamic changes in the context of large cutting forces.
Table A.2: The characteristics of the Kistler 9257A dynamometer
(C) Encoder
The sinusoidal signal from the sensor is in phase with the light passing through the slit. The hardware parameters of the encoder indicate that for each revolution of the spindle, it emits 1 rotational reference signal named R and two quadrature signals with 256 electrical periods, noted as sine A and cosine B (Figure A.3(a)). The three analog signals are converted to digital signals through a TTL converter. In parallel, a high-frequency clock provides counting pulses at f_c = 80 MHz. Each rising edge of the encoder digital signal triggers one recording. The number of pulses between two rising edges, combined with the counting frequency f_c, determines the time between two consecutive events [Li+05] (Figure A.3(b)).
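The instantaneous angular speed then follows directly from the counter values. The sketch below assumes one recording per electrical period (256 per revolution) and uses an invented, constant pulse count corresponding roughly to the nominal spindle frequency of the experiments.

```python
import numpy as np

def ias_from_counter(pulse_counts: np.ndarray, periods_per_rev: int = 256,
                     f_c: float = 80e6) -> np.ndarray:
    """Instantaneous angular speed from high-frequency counter values.

    Between two consecutive rising edges the spindle turns by a fixed angle of
    2*pi/periods_per_rev, while the elapsed time equals the number of clock
    pulses divided by the counting frequency f_c.
    """
    delta_theta = 2 * np.pi / periods_per_rev       # rad between rising edges
    delta_t = pulse_counts / f_c                    # s between rising edges
    return delta_theta / delta_t                    # rad/s

# About 1392.6 rpm (~145.8 rad/s) corresponds to roughly 13 465 clock pulses
# between two of the 256 rising edges per revolution.
counts = np.full(256, 13465)
print(np.round(ias_from_counter(counts).mean(), 1))
```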
Fig. A.4: The characteristics of the APX3000R325M16A end mill
Fig. C.2: Results of rhombus contour Test #1 18 (a_e = 8 mm) -(a) illustration of the cut; (b) 1st-order of U; (c) 1st-order of ΣV; (d) fault detection based on separability index α_k1; (e) fault detection based on sensitivity range Δ_WS.
K tc , K te , K r constants of the force model related to tool-workpiece pair
F s shear force (N)
τ s shear stress (N/m)
φ c shear angle (rad)
β a friction angle (rad)
α r angle between the rake face and the normal to the work plane (rad)
F t tangential force (N)
F r radial force (N)
F basic standard milling force (N)
F average force (N)
Ψ F , Ψ F shape of the force and corresponding matrix (N)
∆F, ∆F part of the cutting force increased or decreased due to wear and
corresponding matrix (N)
Notations relating to SVD:
N rotation frequency (rpm)
d sample point
ω instantaneous angular speed (rad/s)
Ω passing speed during the curve (rad/s)
n_z number of teeth of the tool
f_z tooth feed (mm/tooth/rev)
V_f feed rate (mm/min)
MRR material removal rate (mm³/min)
a_e radial depth of cut (mm)
d_k,i segment of tooth i in the k-th revolution
d̄_k,i average value of segment d_k,i
D_seg segmented data presented as a matrix
D̄_seg average value of matrix D_seg
D_e extraction matrix
D_k matrix extracted for the k-th revolution
a p axial depth of cut (mm)
R radius of the tool (mm)
R c curvature radius of the cutting trajectory (mm)
i subscript indicating the tooth counts (i = 1, 2, . . . , n z )
θ i angular position of the teeth (rad)
Θ trajectory angle (rad)
ϑ unified angle (rad)
Notations relating to cutting force:
VB width of the wear land on the flank (mm)
V b reduced length of the tool tip due to flank wear (mm)
h c instantaneous nominal cut thickness in general case (mm)
h c * instantaneous nominal cut thickness in worn case (mm)
F resulting force (N)
Table 2.1: Monitoring and maintenance history overview
| Type of maintenance | Corrective maintenance | Planned maintenance | Productive maintenance | Predictive maintenance |
| Timeline | 1760s-1830s | 1870s-1910s | 1940s-1980s | 1980s-Present |
| Technology development characteristics | Mechanization, steam power, weaving loom | Mass production, assembly lines, electrical energy | Computers, electronics, automation | Cyber systems, Internet of Things, sharing cloud |
| Monitoring means | Visual inspection | Instrumental inspection | Sensor monitoring | Predictive analysis |
| Staff requirements | Trained craftsmen | Inspectors | Engineers | Data scientists |
… of the "peakedness" of the probability distribution of the spectra.
| Peak amplitude | s_pk = max(s_j) | Peak of the power spectrum in a specific frequency band, expressed by the energy level (W/Hz). |
| Peak frequency | f_pk | Relative frequency that corresponds to the highest amplitude. |
| Spectral crest factor | s_CF = s_pk / s̄ | |
Table 2.4: Frequency domain features and descriptions
2.4.2.3 Angular domain analysis
Although most of the TCM studies have been conducted in the time domain,
frequency domain, or time-frequency domain, research based on the angular domain
has gradually gained attention in recent years.
Table 2.5: List of correlation coefficients [Zha+13]
Table 4.2: General parameters of milling experiments
| Notation | Parameter | Value |
| n_z | Number of inserts | 4 |
| ω (rad/s) | Instantaneous angular speed | 145.83 |
| N (rpm) | Spindle frequency | 1392.61 |
| V_c (m/min) | Cutting speed | 140 |
| f_z (mm/tooth) | Feed per tooth | 0.1 |
| a_e (mm) | Radial depth of cut | Determined by the different experimental cases |
| a_p (mm) | Axial depth of cut | 3 |
| R (mm) | Radius of the tool | 16 |
Table 5.2: Parameters for curve-time simulation signal
| Parameter | Value |
| Number of inserts n_z | 4 |
| Rotational frequency ω (rad/s) | 46π |
| Samples for one segment m | 250 |
| Total segments n | 240 (half for straight-line; half for curve-line) |
| Amplitudes set for segments K | 1 |
| Turning speed through the curve Ω (rad/s) | 5π/12 |
| IAS setting value D_0 (rad/s) | 146 |
Table 5.3: Experimental parameters of the data sets involved in the following analysis
| Data number | Milling trajectory | Sample points per segment m | Radial depth of cut a_e (mm) | Total revolutions n |
| Test #1 4 | Contour of square | 512 | 8 | 1070 |
| Test #1 18 | Contour of rhombus | 512 | 8 | 845 |
| Test #1 10 | Contour of rounded square | 512 | 3.2 | 1200 |
| Test #1 12 | Contour of rounded square | 512 | 12 | 910 |
| Test #2 6 | Contour of designed curve | 512 | 11 | 450 |
| Test #2 8 | Contour of designed curve | 512 | 3.2 | 440 |
| Test #3 19 | Contour of designed curve with holes | 512 | 3.2 | 450 |
| FG #120915 | Straight line | 320 | 10 | 234 |
Obviously, R_k is a symmetric matrix and the elements of its diagonal are all equal to 1. Therefore, for the case of FG #120915, the 6 valuable curves are drawn in Figure B.2(a).
Acknowledgement
This work was funded by French government fellowships and was carried out within the framework of the LABEX CeLyA (ANR-10-LABX-0060) of Université de Lyon and the program Investissements d'Avenir (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). I am grateful for the opportunity to work in this interesting area of tool condition monitoring.
First of all, I would like to express my appreciation to François GIRARDIN for his valuable experience and advice and for his regular encouragement and supervision.
• P R (Permutation with Repeat): The last category considers all possibilities, not only focusing on the data content, but also on the data sorting. At this time, the number of situations that need to be considered surges to 125.
Here the data of Test #1 10 has been selected as an illustration.
Taking the data of the 250th revolution as an example, it can be seen in Figure 5.14 that, when the data arrangement is changed arbitrarily, the decomposed
Expansion of Pearson correlation coefficient
The PCC for the correlation analysis of two continuous variables, as discussed in Section 2.5.1, is the first candidate. The intention is to extend the defining equation of the PCC to accommodate the inter-insert correlation of n_z teeth.
Equation (2.33) can be alternatively expressed as below:
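In its usual mean-deviation form (the exact expression used as Equation (2.33) may differ slightly), the PCC between two variables A and B reads:

$$
r_{A,B} = \frac{\sum_{j}(A_j-\mu_A)(B_j-\mu_B)}{\sqrt{\sum_{j}(A_j-\mu_A)^2}\,\sqrt{\sum_{j}(B_j-\mu_B)^2}}
$$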
where µ_A and µ_B are the means of A and B.
The object matrix D k is obtained by the revolution-based extraction (3.5), and k successive analyses will be done in real-time. Each column vector in Equation 3.13 is seen as one variable. By the analogy of the PCC formula, the comprehensive equation of the inter-insert correlation is deduced as
where k is the subscript indicating the revolution counts, i is the subscript indicating the tooth counts, and j is the subscript indicating the sample counts.
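The comprehensive equation itself is not reproduced here; as a rough illustration of the underlying construction, the sketch below (Python/NumPy, with assumed variable names) forms the matrix of pairwise correlations between the n_z per-tooth columns of D_k for one revolution. A scalar summary such as R_tot,k can then be derived from this matrix.

```python
import numpy as np

def inter_insert_correlation(d_k):
    """Pairwise correlation matrix R_k between the n_z teeth of one revolution.

    `d_k` is an (m, n_z) array: column i holds the m samples of tooth i
    extracted by the revolution-based segmentation.
    """
    # np.corrcoef works on rows, hence the transpose; the result is
    # symmetric with a unit diagonal, as noted for R_k in the text.
    return np.corrcoef(d_k.T)

# Example: 4 inserts, 250 samples per segment (values from the simulation table).
rng = np.random.default_rng(0)
d_k = rng.normal(size=(250, 4))
r_k = inter_insert_correlation(d_k)
```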
For the ideal case, where each tooth has the same behavior, the result of R_tot,k is equal to 1. The test is carried out using data from Test #1 10; from the resulting diagram alone, R_tot,k looks very promising. However, during the verification phase, its vulnerabilities were quickly discovered when the number of teeth n_z of the tool is odd, even if the behaviors of all the inserts are consistent.

The representation in terms of eigenvalues is obviously much more concise. From lambda1 it can be seen that the curve has a stair-step descent and that the drop points all coincide with the known locations of the tool wear stages. But these eigenvalues offer no better interpretability in terms of physical meaning. In addition, the value of the eigenvalue varies with the content of R_k; it is not fixed in a definite interval. Hence, the threshold for diagnostic alarms will also be difficult to determine.
Both of these attempts above are failed cases, but they are very inspiring for the following studies. The inter-insert correlation analysis with SVD for the diagnosis of the tool status in Chapter 5 is built on their basis.
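To make the link with Chapter 5 concrete, the sketch below (Python/NumPy, illustrative names) extracts singular values from the per-revolution correlation matrix R_k; an ordered separability index of the kind described later can then be derived from how the spectrum of R_k spreads out as the teeth stop behaving alike. This is an illustrative reading, not the thesis's exact formulation.

```python
import numpy as np

def singular_values_per_revolution(segments):
    """Singular values of the inter-insert correlation matrix, one set per revolution.

    `segments` is an (n_revolutions, m, n_z) array of revolution-based segments.
    For identical tooth behaviour R_k is close to the all-ones matrix and its
    spectrum collapses onto a single dominant singular value; wear on one insert
    spreads the spectrum out.
    """
    spectra = []
    for d_k in segments:
        r_k = np.corrcoef(d_k.T)                 # symmetric, unit diagonal
        spectra.append(np.linalg.svd(r_k, compute_uv=False))
    return np.array(spectra)                     # shape (n_revolutions, n_z)
```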
Appendix C
Relevant results
Due to the high degree of similarity, out of multiple sets of similar experimental data, we processed only a few of them. The following figures show the results of several relevant tests that are not presented in the main text.

In this context, a methodology for monitoring the wear of end mills in real-time production based on the inter-insert correlation is presented. The approach takes advantage of the angular domain characteristics to segment the signal into periodic cycles of the same angular duration, which are then amenable to correlation analysis. Under high rotational speeds, the external working environment experienced by the individual teeth can be considered quasi-equivalent. Through the correlation analysis, the impact of the non-stationary operation on the monitored signal is effectively reduced. From a wide range of correlation algorithms, singular value decomposition (SVD) is selected to proceed with the analysis, and an ordered separability index with latent correlation characteristics is extracted to assess the current condition of the tool. The feasibility of the proposed indicator was validated and evaluated via the simulated signal and a series of experimental data collected with designed milling patterns.
The results demonstrate the promising development of this method in forming an efficient TCM system. The proposed approach is more independent of the cutting conditions (changes in speed or direction) than the traditional teach-in method and does not require a trial run. It partially fills the gap of tool monitoring demand in flexible manufacturing for customized small batch production. At the same time, inter-insert correlation is also seen as part of a broader framework for monitoring and maintaining rotating machinery. It has great potential to be applied to the analysis of different signals generated by other mechanical components with a rotating nature.
00410644 | en | ["info.info-hc", "info.info-ds"] | 2024/03/04 16:41:22 | 2009 | https://hal-lirmm.ccsd.cnrs.fr/lirmm-00410644/file/index.pdf | Guillaume Artignan
email: [email protected]
Mountaz Hascoët
email: [email protected]
Mathieu Lafourcade
email: [email protected]
Multiscale Visual Analysis of Lexical Networks
Keywords: Multiscale, Hierarchical Graph, Visualization, Hierarchical Clustering
A lexical network is a very useful resource for natural language processing systems. However, building high quality lexical networks is a complex task. "Jeux de mots" is a web game which aims at building a lexical network for the French language. At the time of this paper's writing, "jeux de mots" contains 164 480 lexical terms and 397 362 associations. Both lexical terms and associations are weighted with a metric that determines the importance of a given term or association. Associations between lexical terms are typed. The network grows as new games are played. The analysis of such a lexical network is challenging.
The aim of our work is to propose a multi-scale interactive visualization of the network to facilitate its analysis. Our work builds on previous work in multi-scale visualization of graphs. Our main contribution in this domain includes (1) the automatic computation of compound graphs, (2) the proximity measure used to compute compound nodes, and (3) the computation of the containment relation used to exhibit the dense relation between one important node and a set of related nodes.
Introduction
Many natural language processing tasks like information retrieval or anaphora resolution require lexical information usually found in resources such as thesauri, ontologies, or lexical networks. Creating such resources can be done either manually in the case of Wordnet [START_REF] Fellbaum | Wordnet an electronic lexical database[END_REF] for example or automatically from text corpora as in [START_REF] Jones | Synonymy and Semantic Classification[END_REF]. In both cases, the generation of accurate and comprehensive data over time is a complex task.
"Jeux de Mots" [START_REF] Mathieu Lafourcade | Jeuxdemots website[END_REF][START_REF] Lafourcade | Making people play for lexical acquisition[END_REF] is a game where players contribute to the creation of a complex lexical network by playing. The game is a two player blind game based on agreement: at the beginning of a game session player A is given an instruction related to a target term. For example: give any term that is related to "cat". User A has a limited amount of time to propose as many terms as possible. At the end of the session, player A's proposed terms are compared to those of a previous player say player B. Points are earned on the basis of agreement, e.g. the intersection of the two sets of terms proposed by A and B. The lexical network of "Jeux de Mots" is built by adding the terms in the agreement. A relation to the target term is also added. The relation between the target term and the terms agreed depends on the initial instruction. In the previous example the relation is a relation of type association. There are 35 other types of relations in "Jeux de mots" including synonymy, antonymy, hyperonymy, etc. Weights are further computed for terms and for relations between terms in order to reflect their importance in the network [START_REF] Lafourcade | Conceptual vectors, lexical networks, morphosyntactic trees and ants : a bestiary for semantic analysis[END_REF]. At the time of this paper's writing, "jeux de mots" contains 164 480 lexical terms and 397 362 associations. Therefore, the visualization of the network is challenging. JeuxDeMots lexical network can be considered as a large graph with terms as nodes and semantic relations between terms as edges.
Multiscale interactive visualization of graphs is an interesting solution to the visual analysis of large graphs. Hierarchical graphs, introduced in [START_REF] Eades | Multilevel visualization of clustered graphs[END_REF] for the first time, have largely influenced the literature in this domain. Approaches vary at different levels. Our approach is based on compound graph construction and full zoom exploration. The construction of the compound graph is further based on a proximity measure used to compute compound nodes and the computation of the containment relation used to exhibit dense relation between one important node and a set of related nodes.
This paper is organized as follows: we start by a review of related work, we further present the data and a careful analysis of some properties that matter for visualization. We further present our main contributions e.g. compound graph construction (section 4) and full-zoom exploration of JeuxDeMots lexical network (section 5).
Our approach to the visual analysis of JeuxDeMots lexical network is based on previous work and mainly related to multilevel graph exploration.
Multilevel graphs are largely used in graph visualization. Indeed, multilevel graph drawing methods have been studied in order to accelerate run time and also to improve the visual quality of graph drawing algorithms. In [START_REF] Walshaw | A multilevel algorithm for force-directed graph drawing[END_REF], Chris Walshaw presents a multilevel optimization of Fruchterman and Reingold's spring embedder algorithm. The GRIP algorithm [START_REF] Gajer | Grip: Graph drawing with intelligent placement[END_REF] coarsens a graph by applying a filtration to the nodes. This filtration is based on shortest path distance. The Fast Multipole Multilevel Method (FM³) [START_REF] Hachul | Drawing large graphs with a potential-field-based multilevel algorithm[END_REF] is also a force-directed layout algorithm. FM³ is proved subquadratic (more precisely in O(N log N + E)) in time, contrary to previous algorithms. Work in [START_REF] Archambault | Topolayout: Multilevel graph layout by topological features[END_REF] is based on the detection of topological structures in graphs. This algorithm encodes each topological structure by a metanode to construct a hierarchical graph.
Graph hierarchies are also used in Focus-based multilevel clustering. In [START_REF] Boutin | Focus dependent multi-level graph clustering[END_REF][START_REF] Boutin | Multi-level exploration of citation graphs[END_REF][START_REF] Boutin | Multilevel compound treeconstruction visualization and interaction[END_REF] several hierarchical clustering techniques are proposed, to visualize large graphs. These contributions are mainly concerned with accounting for a user focus in the construction of a multi-level structure. Sometimes this results in new multi-level structures such as for example MuSi-Tree (Multilevel Silhouette Tree) in [START_REF] Boutin | Multilevel compound treeconstruction visualization and interaction[END_REF]. Other approaches are based on zooming strategies that include level-of-details dependant of one or more foci [START_REF] Emden | Topological fisheye views for visualizing large graphs[END_REF].
Multilevel graph exploration is challenging. Multilevel graph exploration systems can be divided into two categories : systems needing precomputation to create a hierarchical structure and systems which create hierarchy during the exploration. Our approach fall into the first category.
Approaches that fall in the first category take more time during the construction step but they facilitate multi-level exploration. In [START_REF] Eades | Multilevel visualization of clustered graphs[END_REF] the authors propose an algorithm for creating a graph hierarchy in three dimensions, each level is drawn on a plane. In [START_REF] Schaffer | Navigating hierarchically clustered networks through fisheye and full-zoom methods[END_REF] authors propose a comparison between two methods of multi-level exploration:"Fisheyes View" and "Full-zoom" methods. Work in [START_REF] Emden | Topological fisheye views for visualizing large graphs[END_REF] is based on a zooming technique associated with a precomputed hierarchical graph. The level of detail is computed on-the-fly and depends on the distance to one or more foci. Abello et al. [START_REF] Abello | Visualizing large graphs with compound-fisheye views and treemaps[END_REF] define a compound fisheye view based on a hierarchy graph. In addition the authors link a treemap with a graph hierarchy. In [START_REF] Van Ham | Interactive visualization of small world graphs[END_REF], the authors create a force directed layout, and use it on graphs in order to highlight clusters. This technique is similar to the approaches that merge clusters in small world vizualization. In [START_REF] Balzer | Level-of-detail visualization of clustered graph layouts[END_REF] the contribution is to propose the visualisation of complex software in 3D or in 2D. Edge bundles are created in order to simplify edges. This method uses visual simplification of graphs using a level-of-detail approach.
Approaches that fall into the second category compute a hierarchical graph during the exploration step. The layout of the graph is computed on the fly. The authors of [START_REF] Van Ham | Ask-graphview: A large scale graph visualization system[END_REF] present a tool, ASK-GraphView, based on clustering and interactive navigation. Hierarchical clustering is obtained by detecting biconnex components, and by a recursive call to a clustering algorithm on biconnex components. In [START_REF] Archambault | Grouse: Feature-based, steerable graph hierarchy exploration[END_REF] the authors propose Grouse. Grouse is based on previous work [START_REF] Archambault | Topolayout: Multilevel graph layout by topological features[END_REF] and decomposes the graph based on topological features. Grouse further uses adapted layout algorithms to layout subgraphs.
Data
In the rest of this paper we focus on a subgraph of JeuxDeMots. The subgraph is obtained by studying only the edges that correspond to the relation of type "Associated Idea". Furthermore, we filtered nodes that were not connected to the biggest connected component. The resulting subgraph is composed of 20 238 words and 64 564 edges.
Data analysis
The aim of this section is to better characterise the type of graph we are working on. A study of the degree distributions (ingoing edges in Fig. 1, outgoing edges in Fig. 2) shows that they have power-law tails, with indegree exponent γ_in = -1.85 and outdegree exponent γ_out = -2.27, both fits having high coefficients of determination [START_REF] Albert | Statistical mechanics of complex networks[END_REF].

Figure 2: Degree repartition, for outgoing degree.

The graph studied is clearly a scale-free graph [START_REF] Albert | Statistical mechanics of complex networks[END_REF]. A second study computes the clustering coefficient [START_REF] Albert | Statistical mechanics of complex networks[END_REF]. The average clustering coefficient of our graph is C = 0.2617, and the average degree is D = 6.3805. Moreover, the clustering coefficient of a random graph of the same size and average degree is C_rand = 0.00032. Our graph has an average clustering coefficient an order of magnitude higher than that of a random graph with the same size and the same average degree. Furthermore, the diameter of our graph is 12. For all these reasons, our graph can be considered a small-world network.
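These statistics can be reproduced with standard graph tooling; the sketch below (Python/NetworkX, illustrative only and not the authors' code) computes the same quantities for an arbitrary directed graph.

```python
import networkx as nx

def small_world_stats(g: nx.DiGraph):
    """Average clustering, average degree and the random-graph baseline."""
    gu = g.to_undirected()
    n = gu.number_of_nodes()
    c = nx.average_clustering(gu)
    avg_degree = 2 * gu.number_of_edges() / n
    c_random = avg_degree / n     # expected clustering of an Erdos-Renyi graph G(n, p)
    return c, avg_degree, c_random
```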
Compound graph construction
In order to provide full-zoom exploration of the lexical network it is necessary to automatically compute a hierarchical graph that is coherent for an end-user of lexical networks like, for example, a searcher in natural language engineering or a lexicographer.
The originality of our approach is (1) that it is based on metrics derived from natural language engineering metrics that compute at low cost, and (2) we create a compound graph instead of a clustered graph. It is important to stress that the clustered graph approach is the most frequent one found in the litterature and that it constitutes a serious drawback when it comes to lexical network exploration as will be discussed in the section 4.2.
In order to create a compound graph, we first adapted one proximity measure used in information retrieval and natural language processing tools and we further extend it to provide a multilevel proximity measure used in the construction of the compound graph.
Proximity Measure
The "Direct Proximity Measure" is computed for an edge in a graph. This measure is useful in computing another measure the "Hierarchical Proximity Measure" which will be described in the next section. The hierarchical measure applies to two nodes n and m of a hierarchy, and accounts for the direct proximity measure of the edges linking n to m.
Direct Proximity Measure
The proximity measure is adapted from the measure of tfidf (term frequency -inverse document frequency) [START_REF] Salton | Introduction to modern information retrieval[END_REF]. It is computed on each edge and accounts for the weight of the edge. The weight of each edge represents a degree of confidence. This measure is defined as follows:
Let G = (V, E) be a graph; we take an edge e ∈ E and a node n ∈ V and we define:
• source(e) the node source of edge e, and target(e) the node target of edge e.
• ω(e) the weight of edge e.
• δ + (n) is the weighted outgoing degree of n, and δ -(n) is the weighted ingoing degree of n.
The proximity value [START_REF] Salton | Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer[END_REF] is computed using the following formula :
$$\mathrm{prox}(e) = \frac{\omega(e)}{\delta^-(\mathrm{target}(e))} \times \frac{\omega(e)}{\delta^+(\mathrm{source}(e))}$$
The first (resp. second) factor of our formula corresponds to the proportion of the weight of e among the ingoing (resp. outgoing) edges of its extremities. The proximity measure can be computed for any weighted graph, oriented or not. It reflects the importance of e for its extremities. In the example of Fig. 3 the importance of e is weak in comparison to the total weight of all incident edges. Consequently, the two nodes have a weak proximity, as shown here: prox(e) = (10/460) × (10/285) = 0.00072
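A direct transcription of this measure (illustrative Python, with a minimal edge representation chosen here for the example) could look as follows:

```python
from collections import defaultdict

# Each edge is (source, target, weight); the graph is just a list of such edges.
def direct_proximity(edges, e):
    """prox(e): share of e's weight among the in-edges of its target times
    its share among the out-edges of its source."""
    src, tgt, w = e
    out_weight = defaultdict(float)   # weighted out-degree per node
    in_weight = defaultdict(float)    # weighted in-degree per node
    for s, t, weight in edges:
        out_weight[s] += weight
        in_weight[t] += weight
    return (w / in_weight[tgt]) * (w / out_weight[src])

edges = [("a", "b", 10.0), ("c", "b", 450.0), ("a", "d", 275.0)]
print(direct_proximity(edges, ("a", "b", 10.0)))  # weak proximity, as in Fig. 3
```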
Hierarchical Proximity Measure
A hierarchical proximity measure is computed between two nodes x and y on a hierarchy of l levels with l ≥ 0 (see figure 5). This measure accounts for the edges e_i between x and the nodes that are on the shortest path between x and y. The measure prox_ρ(x, y) is computed with the following formula:
$$\mathrm{prox}_\rho(x, y) = \sum_{i=1}^{l} \mathrm{prox}(e_i) \cdot (l + 1 - i)$$
When an edge doesn't exist we will consider its weight to be equal to zero. The hierarchical measure takes parents in the multilevel graph into account and favours close parents over more distant parents.
Fig. 5 gives an example of the computation of the hierarchical proximity measure. In this example, the computation is the following: prox_ρ(x, y) = 4·prox(e_1) + 3·prox(e_2) + 2·prox(e_3) + prox(e_4).
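Continuing the previous sketch, the hierarchical measure can be written as a weighted sum over the edges linking x to the nodes on the path towards y (again an illustrative transcription, not the authors' code):

```python
def hierarchical_proximity(edges, path_edges):
    """prox_rho(x, y) for the edges e_1..e_l linking x to the nodes on the
    path between x and y; closer levels receive larger weights.
    Edges that do not exist contribute zero and can simply be omitted."""
    l = len(path_edges)
    total = 0.0
    for i, e in enumerate(path_edges, start=1):
        total += direct_proximity(edges, e) * (l + 1 - i)
    return total
```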
Compound graph versus clustered graph
As mentionned previously, we chose to construct a compound graph instead of a clustered graph. The main difference between compound graphs and clustered graphs is that in the latter case meta-nodes that represent clusters are created [START_REF] Junger | Graph Drawing Software[END_REF]. A difficulty is then to find labels to attach to the meta-nodes created. By building a compound graph we avoid this problem since no new node has to be created. The final structure contains only the nodes of the original graph. Important nodes are used as clusters or compound nodes and the containment hierarchy can be used to express important relations between compound nodes and related nodes. This strategy offers several advantages. Firstly it underlines important nodes. Secondly, it encodes edges with high proximity measure by the containment relation of our compound graph. This graphical coding is not only stronger than simple links, it also simplifies the graphical representation by eliminating a lot of links. Thirdly, as mentionned above, there is no additionnal work to find representatives for meta-nodes, since meta-nodes are nodes, their name is directly found and meaningful.
Algorithm
In this section we present and explain our algorithm. Our algorithm Alg. 1 can be decomposed into three parts :
(1) The initialisation, from line 1 to line 3, (2) the grouping of neighbours, from line 5 to line 10, and (3) the reassignement of neighbours, from line 11 to line 20.
Algorithm: Graph2GraphHierarchy(Graph G; X, Y, Z integers)
1: max ← getMaxDegreeNode(G, X)

The initialisation consists in choosing the X vertices with maximum weighted degree (line 1). These vertices constitute the seeds of our sub-hierarchies at level 1. Fig. 5 (A) describes this initialisation; here we take the nodes {a, b, c}.
The parts 2 and 3 are enclosed in a Breadth-First search algorithm. The nodes are colored in black in order to know which nodes have already been processed.
The second part of our algorithm consists in taking the neighbourhood of our seeds (line 5). Each seed becomes the compound node of all nodes in its neighbourhood. Each seed has a different neighbourhood; nevertheless a node can appear in several neighbourhoods: see (B) in Fig. 5, where node d can be assigned to two different groups. The third part of our algorithm is the reassignment. Each seed now has a list of child nodes noted children. We select the Y nodes in children which have the highest weighted degree, and among these nodes we select the Z nodes which maximize the proximity with their parents. We obtain a list of selected nodes considered as important (line 13). We must then reassign each previously added node to a node in the selected set if that node maximizes the proximity value. For example, in Fig. 5, row (C), nodes i, g, k are reassigned to new parent nodes in selected.
We iterate with nodes contained in the next level of our hierarchy see Fig. 5 areas (D) and (E). The algorithm stops when all nodes are colored black.
The algorithm is particulary adaptable to scale-free graphs. In particular, it is possible to adapt parameter Y (number of nodes of maximal degree) and Z (number of nodes of maximal proximity) in order to favour either closer or higher degree nodes in the selected set of nodes. If the value of parameter Y is chosen in order to favour the nodes with higher degree it helps to reduce the number of links displayed (replaced by the containment relation) which in turn makes the diagram clearer.
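As a rough illustration of the overall procedure (not a faithful reimplementation, since several details of Algorithm 1, notably the Y/Z filtering, are only sketched here), the following Python fragment builds seeds from the X highest-degree nodes, groups their neighbours in breadth-first order, and reassigns each child to the candidate compound node that maximizes the proximity measure:

```python
import heapq
from collections import deque

def graph_to_hierarchy(adj, weighted_degree, prox, num_seeds):
    """Sketch of a compound-graph construction in the spirit of Algorithm 1.

    adj:             dict node -> iterable of neighbours
    weighted_degree: dict node -> weighted degree
    prox:            function (node, candidate_parent) -> proximity score
    Returns a dict mapping each node to the compound node that contains it
    (seeds map to None).
    """
    seeds = heapq.nlargest(num_seeds, adj, key=weighted_degree.get)
    parent = {s: None for s in seeds}
    queue = deque(seeds)
    while queue:                                   # breadth-first traversal
        p = queue.popleft()
        for n in adj[p]:
            if n not in parent:
                parent[n] = p                      # provisional grouping
                queue.append(n)
            elif parent[n] is not None and prox(n, p) > prox(n, parent[n]):
                parent[n] = p                      # reassign to the closer compound node
    return parent
```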
Full-zoom exploration
Zoom is used to support multi-level exploration of the lexical network. It is based on the compound graph generated according to the procedure described in the previous section.
Figure 6: Full-zoom exploration
In the graphical representation, (see figure 5) nodes are represented by circles. The compound graph structure imposes that a node can contain a graph which can be empty or contains other nodes.
At each zoom level, the surface of a node is defined by:

$$\mathrm{surface}(x) = \begin{cases} 4\pi^2 & \text{if } \mathrm{child}(x) = \emptyset \\ \varphi \times \displaystyle\sum_{c \in \mathrm{child}(x)} \mathrm{surface}(c) & \text{otherwise} \end{cases}$$
where φ is a percentage of freedom. For instance if φ = 120%, 20% of the total surface of children is left empty for the legibility of the diagram.
Furthermore, a node is expanded if its surface is higher than a percentage ζ. For instance, if ζ = 25% the node will be expanded when its surface takes more than 25% of the screen surface. This choice allows us to adapt to various screen resolutions.
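A compact transcription of these two rules (illustrative Python; the constants φ and ζ are the parameters named above) is:

```python
import math

PHI = 1.2    # freedom ratio: 20% of the children's total surface is left empty
ZETA = 0.25  # a node is expanded once it covers 25% of the screen surface

def surface(node, children):
    """Recursive node surface: a base disc area for leaves, phi times the
    summed surface of the children otherwise."""
    kids = children.get(node, [])
    if not kids:
        return 4 * math.pi ** 2
    return PHI * sum(surface(c, children) for c in kids)

def should_expand(node, children, screen_surface):
    return surface(node, children) > ZETA * screen_surface
```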
Conclusion and perspective
In this paper we have proposed an original method for the multi-level exploration of a lexical network. The graph underlying the lexical network has scale free and smallworld properties. Even though we applied our approach to a given lexical network, we believe that our approach is general enough to apply to other networks with similar scale-free and small-world properties. For example, an interesting application would be the multiscale visualization of tags in social bookmarking systems.
In future work, we plan to extend our multi-level exploration tool with editing capacities so that it is possible for a user to modify the generated compound graph when necessary. We also want to integrate a search tool so that it is possible to automatically animate the graph toward a specific term. Finally, we plan to conduct controlled experiments to validate the approach on various datasets.
Figure 1: Degree repartition, for ingoing degree.
Figure 3: A sample for the proximity metric.
Figure 4: Hierarchical proximity measure.
Algorithm 1 (Graph2GraphHierarchy), lines 9–21:
 9: child(p) ← child(p) ∪ node;
10: color node in BLACK;
11: for each leaf ∈ leaves do
12:     children ← Child(leaf);
14:     for each n ∈ children \ selected do
15:         near ← neighbours of n in selected;
16:         node ← a node n' in near maximizing prox_ρ(n, root(n'));
17:         if node ≠ parent(n) then
18:             remove n from child(parent(n));
19:             child(node) ← child(node) ∪ n;
20:     leave3 ← leave3 ∪ child(leaf);
21:     leave ← leave3;
Figure 5: Execution of Graph2GraphHierarchy Algorithm.
04106551 | en | ["shs"] | 2024/03/04 16:41:22 | 2019 | https://hal.science/hal-04106551/file/Culture%20num%C3%A9rique.pdf | Résumé
Le dernier ouvrage de Dominique Cardon présente la version rédigée du cours en Culture numérique que le sociologue assure depuis plusieurs années auprès du collège universitaire de SciencesPo. Dans ces pages, on peut reconnaître la démarche théorique du chercheur aussi bien que l'effort pédagogique de l'enseignant qui assume une responsabilité assez délicate : nous offrir une synthèse critique, accessible, exhaustive et efficace, de la culture numérique à l'état actuel des choses. Au carrefour de deux exigences spéculaires, celle de coder et celle de décoder, le sociologue organise ici une totalité de savoirs et de connaissances interdisciplinaires, naviguant entre informatique et philosophie, sociologie et théorie économique, sciences politiques et analyses de marché. Au centre de son analyse, les nouvelles configurations de l'espace public, les formes de la démocratie internet, l'influence des algorithmes et des moteurs de recherche, le digital labor, le problème de la surveillance. Notre compte rendu, sans aucune prétention d'exhaustivité, revient sur les principales étapes de cette véritable exploration de la culture numérique, qui s'adresse avant tout aux novices tout en constituant un instrument utile de synthèse et de suggestions bibliographiques pour les chercheurs.
Sommario
L'ultima opera di Dominique Cardon è la versione scritta del corso di Cultura digitale che il sociologo tiene da parecchi anni presso il collegio universitario di SciencesPo. In queste pagine è possibile riconoscere l'approccio teorico del ricercatore e lo sforzo pedagogico dell'insegnante che assume una responsabilità assai delicata: offrirci un bilancio critico, abbordabile, esauriente ed efficace della cultura digitale di oggi. In mezzo a due esigenze speculari, quella di codificare e quella di decodificare, il sociologo organizza un insieme di saperi e conoscenze interdisciplinari, muovendosi tra informatica e filosofia, sociologia e teorie dell'economia, politologia e analisi di mercato. Al centro della sua analisi, lo spazio pubblico, la democrazia internet, l'influenza degli algoritmi e dei motori di ricerca, il digital labor, il problema della sorveglianza. La nostra recensione, senza pretese di esaustività, segnala le tappe principali di una vera esplorazione della cultura digitale, che è un'opera di divulgazione per i novizi e un utile strumento di sintesi e di indicazioni bibliografiche per i ricercatori.
Mots-clés : Culture numérique, web, Cardon, espace public, algorithmes, digital labor
Parole chiave: cultura digitale, web, Cardon, spazio pubblico, algoritmi, digital labor
La culture numérique selon Dominique Cardon
Peppe Cavallari
Ces dernières années, l'expression « culture numérique » s'est progressivement inscrite dans la pensée pédagogique et sociologique, mais aussi, plus généralement, en sciences humaines, obtenant une reconnaissance institutionnelle qui se manifeste par la création de cours ou de programmes dédiés dans les universités et dans les écoles de technologie et SIC. Une compréhension strictement technique des innovations apportées par la diffusion de l'informatique dans les usages communicationnels et informationnels les plus communs se révèle en effet insuffisante, car les mutations engagées par le numérique ont acquis une dimension que l'on définit, justement, de culturelle. En 2008, dans Digital Cultures, Milad Doueihi décrivait le numérique comme étant un processus civilisateur faisant émerger de nouvelles normes sociales marquées d'un côté par la contradiction entre un accès fluide à l'information et à la liberté d'expression, et de l'autre par des effets de surveillance et de contrôle. L'aspect fondamental du processus civilisateur à l'oeuvre est ainsi, d'après Milad Doueihi, l'universalisme. Cet universalisme, « qui conduit à valoriser l'uniformité, à établir un code où la similitude devient la règle », est comparable à celui de la religion, car, tout comme elle, il suggère et outille une vision du monde et une façon de l'habiter valide pour des communautés par ailleurs très lointaines et différentes.
La dimension religieuse de la culture numérique a pour effet de niveler les différences et de réduire les facteurs locaux à des simples variations superficielles d'une culture technologique universelle et homogène et de son environnement numérique. […] La culture numérique et son environnement toujours changeant sont donc à examiner comme un ensemble de pratiques discursives, qui ont leur propres normes et conventions, qui tendent à fragiliser, à perturber des catégories établies1 .
Quelques années plus tard, Doueihi élaborait la notion d'humanisme numérique basée sur la dimension culturelle de la technologie dominante : L'humanisme numérique est l'affirmation que la technique actuelle, dans sa dimension globale, est une culture, dans le sens où elle met en place un nouveau contexte, à l'échelle mondiale, et parce que le numérique, malgré une forte composante technique qu'il faut toujours interroger et sans cesse surveiller (car elle est l'agent d'une volonté économique), est devenu une civilisation qui se distingue pour la manière dont elle modifie nos regards sur les objets, les relations et les valeurs, et qui se caractérise par les nouvelles perspectives qu'elle introduit dans le champ de l'activité humaine. (Doueihi 2011b, 9-10) L'humanisme numérique apparaissait en somme comme la « convergence entre notre héritage culturel complexe et une technique devenue un lieu de sociabilité ».
Le terme « convergence » avait déjà été mobilisé en 2006 par Henry Jenkins, qui qualifiait de « culture de la convergence » (convergence culture) la culture à l'ère du numérique, car axée sur les dynamiques de la culture participative et sur les formes de l'intelligence collective [START_REF] Lévy | L'Intelligence collective : pour une anthropologie du cyberspace[END_REF]. D'autres auteurs, malgré leurs perspectives théoriques très différentes, s'accordent à reconnaître la dimension culturelle du numérique : un théoricien de la performance et du direct comme Philip Auslander l'identifie ainsi comme culture de la médiation (mediatized culture (1999)), tandis que le philosophe Maurizio Ferraris le considère comme culture de l'enregistrement et d'une nouvelle forme de documentalité 2 . Les acceptions d'un terme aussi polysémique que « culture » sont multiples et versatiles, leurs significations changent selon le cadre conceptuel de référence. Selon plusieurs acceptions, notamment celle qui se réfère à l'agriculture et celle qui évoque le culte, le concept de culture sous-tend l'idée d'un savoir qui empiète sur un savoir faire. La culture ne repose pas seulement sur des notions théoriques car, dans le travail des terres comme dans la liturgie religieuse, elle est faite de gestes, de pratiques, d'actions qui se doivent d'être efficaces et traditionnelles. En effet, d'après les auteurs que nous venons de mentionner, la « culture » est un ensemble varié et cohérent, on pourrait dire un système, de pratiques et de visions du monde, de valeurs et de savoirs, de méthodes et de comportements, de façons de penser et de communiquer, de protocoles et de normes, d'instruments de la connaissance et les connaissances mêmes qui en découlent. Si ce système, dont la dimension constitutive est sociale, est articulé, agencé et régi par une technologie aussi diffusée et omniprésente que la technologie numérique, alors la culture numérique peut être conçue comme la culture à l'époque du numérique. C'est-à-dire la culture propre à notre société contemporaine, car -renversant le concept avec une apparente lapalissade -à l'époque du numérique, la culture ne peut pas ne pas être numérique, parce que la sociabilité qui l'anime et la constitue en tant que produit collectif, elle, est créée et façonnée par « le » numérique. « Culture de la visibilité et de la recommandation », « culture expressive et relationnelle », « culture de l'information en temps réel », « culture participative et citoyenne », sont les caractéristiques de la culture numérique selon Rémy Rieffel qui constatait, en 2014, l'ampleur de la « révolution numérique », sa capacité à questionner le rapport entre l'homme et la technique et, par conséquent, ses contributions à la production d'un nouveau contexte technologique et, par-là, social et culturel. Convoquons enfin un autre auteur qui a réfléchit aux innovations majeures apportées par la démocratisation de la technologie numérique à la production et à la circulation du savoir : Daniel Caron. Dans l'ouvrage L'homme imbibé, Caron nous donne une définition du terme « culture » que nous validons car elle nous semble opérationnelle pour avancer dans notre réflexion : la culture, écrit-il, est « la manière dont nous envisageons le monde autour de nous et la manière dont nous sommes perçus ». 
Elle constitue « l'ensemble de ce qui affecte le développement de notre esprit et finalement de nos collectivités dans nos manières de nous comporter, de comprendre notre environnement, de l'influencer, de dire et de définir le vrai et le faux, le bon et le mauvais ou l'acceptable et l'inacceptable » 3 . Nos perceptions, nos pratiques, nos convictions, nos valeurs, ces quatre domaines fondamentaux de toute culture sont aujourd'hui infiltrés par les usages propres à la technologie numérique. Il en découle que nous ne pouvons que parler de culture numérique si nous voulons saisir et décrire ce qui caractérise la culture à l'époque et dans le milieu socio-technologique dans laquelle nous vivons.
Le dernier ouvrage de Dominique Cardon assume l'évidence de ce constat en l'érigeant comme titre : ici, culture numérique est « la somme des conséquences qu'exerce sur nos sociétés la généralisation des techniques de l'informatique », car derrière « le numérique » il y a l'informatique (Cardon 2019, 18), mais surtout la culture qu'il faut se forger et dont il faut s'équiper pour pouvoir habiter le monde à l'époque du numérique. La culture numérique, chez le sociologue, identifie et circonscrit un certain savoir, voire l'assortiment des notions et des compétences pratiques que nous nous devons de maitriser pour être à la hauteur du défi implicite posé par ce tournant historique que nous sommes en train de vivre. des livres, des articles, des entretiens, des conférences, des documentaires : en somme tout ce qui est « à lire, à voir, à écouter ». Les suggestions de Cardon ouvrent et structurent un contexte théorique assez hétéroclite : informatique et social, cognitif et politique, commercial et artistique, économique et philosophique.
Dans cet ouvrage, qui est la version rédigée du cours que le sociologue assure depuis plusieurs années, Cardon -se dissociant en cela de la vision pour laquelle la révolution numérique s'inscrit dans le sillage des révolutions industrielles4 -défend que l'impact de la technologie numérique doit être comparé à celui de l'invention de l'écriture puis de celle de l'imprimerie, car en créant un nouveau milieu informationnel et communicationnel, elle engage l'émergence d'un système cognitif et symbolique dans lequel les autorités culturelles et institutionnelles traditionnelles sont mises profondément en question, voire bouleversées. Ce point de vue est à rapprocher, sous un angle épistémologique, des travaux pionniers de Pierre Lévy qui, dans les années 1990, considérait l'informatique comme une technologie de l'intelligence5 ; mais aussi, sous un angle socio-sémiotique cette fois, des supports l'analyse des interfaces de Lév [START_REF] Manovich | The language of new media[END_REF] ; et enfin, dans une perspective davantage historique (du moins, de l'histoire de l'écriture et de la lecture), des propos de Christian Vandendorpe, pour lequel les innovations numériques caractéristiques de notre époque proviennent fondamentalement des changements du texte et de la textualité (1999).
Dans le présent article, nous nous livrons ainsi à une synthèse du travail de Cardon, dans laquelle, sans aucune prétention d'exhaustivité, nous chercherons à marquer les passages qui nous semblent être les plus cruciaux dans un ouvrage qui, tout au long de ses 421 pages, touche à un grand nombre des questions que la technologie numérique et les pratiques qui la constituent ne cessent de nous poser.
Internet et Web : la généalogie d'un nouvel espace public (Chapitres 1-3)
La démarche de Cardon est historique. L'auteur commence ainsi par revenir sur les origines de l'Internet et du Web. Cette histoire implique une leçon sociologique que l'auteur anticipe dans sa déclaration programmatique : L'histoire des sciences et des techniques enseigne qu'une invention ne s'explique pas uniquement par la technique. Elle contient aussi la société, la culture et la politique de son époque. Le choix des technologies, les alliances entre acteurs, la manière de définir les usages sont étroitement liés au contexte social, culturel et politique. Dans le cas d'internet, cet éclairage sociologique est particulièrement important. On ne peut vraiment comprendre la grande transition numérique, cette dynamique qui semble toujours orientée vers le futur, qu'en interrogeant ses origines. (p. 17) Les deux premiers chapitres organisent de façon claire et linéaire plusieurs notions que l'on peut d'habitude trouver éparpillées dans de nombreux ouvrages et pages Web. Ces données retracent l'histoire de la naissance du réseau Internet et de l'invention de l'hypertexte et forment aussi l'imaginaire très hybride, parce qu'informatique, militaire, industriel, universitaire et hippy, que nous associons à l'époque des pionniers du numérique 6 . Cardon propose une synthèse historique toujours très bien référencée, fluide, et raconte les stratégies et les finalités politiques à la base des recherches qui ont conduit à la création d'Internet, parmi lesquelles le concept de réseau distribué de Paul Baran dont s'inspire la transmission de paquets sur Internet (p. 31-35) et la notion de « communauté virtuelle », forgée par Howard Rheingold en 1987. Ce concept marque les premiers échanges à distance possibles grâce à The Well, forum et communauté virtuelle qui introduit la distinction entre 6. À lire, à voir, à écouter :pour l'histoire de l'ordinateur personnel, « History of personal Computers » (77'), http ://www.Youtube.com/watch ?v=AlBr-kPgYuU ; pour l'histoire d'Internet, J. Abbate, Inventing the Internet, Cambridge, The MIT Press, 1999 ; pour une aperçue de l'immense infrastructure physique qui se cache derrrière les écrans, « Andrew Blum : What is the Internet, Really ? » (11'59), TED Talking, september 2012 etA. Blum, Tubes : A journey to the Center of the Internet, New York, HarperCollins, 2013.
monde réel et virtuel 7 . L'utopie de cette époque est l'indifférenciation égalitaire entre les individus : « s'il faut séparer le réel et le virtuel, soutiennent les pionniers des mondes numériques, c'est justement pour abolir les différences entre les individus…ils rêvent d'une communauté atopique, déterritorialisée et ouverte ». Ce n'était que de la « cécité idéologique », remarque Cardon, celle qui caractérisait la toute jeune société de l'information et de la communication en temps réel : « en réalité, il apparaitra très vite que la frontière entre monde réel et virtuel n'est pas si étanche et que les inégalités de ressources sociales et culturelles entre internautes s'exercent aussi dans les espaces en ligne » (p.62). Si l'opposition entre réel et virtuel, qui a marqué cette première période, apparait finalement dépassée grâce à des éclairages philosophiques 8 et à la banalisation des pratiques dites numériques, Cardon conserve cependant ce mot, « internaute », qui à notre avis demanderait d'être utilement mise en question. Ouvrons une parenthèse : « internaute » semble impliquer une conception extra-ordinaire de l'expérience numérique, la même conception qui faisait parler aussi de cyberespace. À l'époque de la connexion permanente, qui ne serait pas internaute ? Nous croyons que ce mot pourrait être heuristiquement propice seulement dans une acception anthropologique et culturelle, pour laquelle, à l'époque du numérique, les êtres humains entreraient dans une nouvelle phase de civilisation dans laquelle ils devraient être qualifiés d'internautes. Mais ce n'est pas en ce sens que Cardon utilise ce terme : le sociologue l'emploie comme on fait usage des mots « spectateurs », « voyageurs » ou « clients », c'est-à-dire des mots faisant référence à une expérience (la vision d'un film, d'un déplacement en train, d'un achat) limitée dans le temps et dans l'espace, tandis que les moments d'entrée et de sortie de la connexion au réseau sont de plus en plus souples, fusionnels et immersifs avec tout ce qui se passe en dehors du réseau, en admettant qu'on puisse encore identifier des choses complètement étrangères aux effets et aux 7. « Qu'entend-on alors pour communauté virtuelle ? D'abord l'idée d'une séparation, d'une coupure entre le réel et le virtuel. Les participant de The Well se félicitent du ton de liberté, de la vivacité, de l'humour et de la curiosité qui règnent en ligne et qui n'ont pu s'épanouir dans les communautés hippies réelles. […] Les premières communautés d'internet qui sont à l'origine de cette idée de séparation entre ''en ligne" et ''hors ligne" considèrent le monde virtuel plus riche, plus authentique et plus vrai que la vie réelle, et non pas futile, trompeur et dangereux comme le voient les critiques aujourd'hui. Le virtuel, c'est un espace pour réinventer, en mieux, les relations sociales » (p. 60).
8. À voir, à lire, à écouter : pour naviguer dans les acceptions philosophiques du terme virtuel, Cardon suggère la lecture de l'ouvrage de Marcello Vitali-Rosati, S'orienter dans le virtuel (2012).
ressources propres à la connexion. Dans l'ouvrage cité plus tôt, Milad Doueihi cite un billet de blog de Tim Bray, lequel affirme que nous ne sommes pas des usagers mais tout juste des humains. Nous ne sommes pas des individus qui seraient internautes de façon transitoire, car les êtres humains d'aujourd'hui sont constitués par la connexion. Cependant, on constate que le Baromètre du numérique 2018 (que Cardon insère dans sa bibliographie) continue d'employer ce mot pour identifier les Français connectés au réseau et révéler ainsi que le pourcentage (89%) s'approche d'une couverture totale de la population, en confirmant la dimension banale, et par là culturelle, de la connexion. Quoi qu'il en soit, en tant qu'héritage des premiers discours sociologiques sur les communautés en ligne, le mot « internaute » fonctionne aujourd'hui encore, à notre avis, comme un révélateur terminologique résiduel toujours intéressant.
Ce qui émerge clairement dans la toute première partie de Culture numérique, c'est la nouvelle fonction sociale de la technologie :
La technologie est investie du pouvoir thaumaturgique de révolutionner la société. L'innovation numérique doit permettre de faire tomber les hiérarchies, de court-circuiter les institutions et de bousculer les ordres sociaux traditionnels. La technologie est véritablement pensée comme un instrument d'action politique. (p.68) Cardon note la date du 30 avril 1993, le jour où le CERN a renoncé à ses droits d'auteur sur le logiciel World Wide Web, offert la technologie de l'HTML au monde et publié le code source : ainsi, « les liens hypertexte appartiennent désormais à tous, ils constituent un bien de l'humanité 9 » (p. 88).
Et puis il en souligne une autre, le 30 avril 1995 : à cette date, les connexions au réseau, qui jusque-là étaient gérées par le gouvernement USA, sur la base d'une décision prise par la National Science Foundation, passent dans la disponibilité des opérateurs privés, ceux qu'on appellera par la suite les fournisseurs d'accès. C'est le début de la nouvelle économie, qui se révèlera vite une bulle : « Internet est né comme une utopie politique, écrit Cardon, le Web naît comme la promesse marchande de révolutionner la vieille économie et 11. « La plupart des grandes innovations du web sont parties de rien, ou, pour être plus juste de pas grand-chose. Elles ne sont pas issues de la recherche universitaire ni des laboratoires des grandes entreprises. Elles ne sont pas le fruit d'une vaste réflexion stratégique ni d'une rigoureuse analyse marketing des attentes des consommateurs. La plupart du temps, elles ont surgi sans que personne n'ait vu ni prévu quoi que ce soit. […] L'innovation arrive…de la périphérie ».
12. Cardon dédie un paragraphe entier à Wikipédia, « l'entreprise collective la plus audacieuse jamais réalisée à l'échelle du web (p. 123-131). lités, alors même que ces communautés fonctionnent sans commandement centrale. C'est un travail dans lequel les personnes s'auto-motivent, avec pour conséquence que personne ne donne explicitement d'ordre à d'autres. La valeur des individus est établie selon des critères essentiellement méritocratiques : c'est l'apport de chacun à la fabrication d'un bien collectif qui fait l'objet d'une reconnaissance par les pairs et qui confère un statut au sein de la communauté. Cette reconnaissance a beau être symbolique, lorsqu'un codeur réputé au sein d'une communauté du logiciel libre se fait embaucher par une société informatique, il obtient un meilleur salaire que la moyenne (p. 116).
À partir du troisième chapitre, l'auteur aborde la question de l'espace public, question à laquelle il avait consacré, il y a 8 ans, La Démocratie Internet, dont les pages dédiées à « l'élargissement de l'espace publique » sont ici reprises et retravaillées [START_REF] Cardon | La démocratie Internet : promesses et limites[END_REF]. L'espace public, qui est espace de visibilité et d'accessibilité à l'instar de la sociologie urbaine, et espace d'initiatives et de propositions d'intérêt public selon la théorie politique, est profondément réarrangé par les réseaux sociaux, surtout à travers une reconfiguration et une redistribution de l'acte que Cardon considère comme fondateur de cet espace, c'est-à-dire la prise de parole et, par-là, la production d'une identité. Prise de parole et identité numérique sont les deux concepts sur lesquels Cardon axe sa typologie des réseaux sociaux. La plus importante nouveauté sociologique dans ce domaine est celle pour laquelle des amateurs parlent à d'autres amateurs, rendant publics les propos de n'importe qui. Ce à quoi on assiste et à quoi on participe est un processus d'« individualisation de la vie privée », affirme Cardon (p. 185), qui consiste dans la production d'une identité numérique13 . Lorsqu' il parle de « vraie vie » pour la comparer à la « vie dite virtuelle, ou en ligne » et quand il mobilise la notion de « projection de soi » pour définir l'identité numérique, l'auteur a recours de façon quelque peu ambigüe à des définitions qui pourraient faire croire à une vision ontologiquement et psychologiquement dichotomique de deux plans d'action, de deux vécus, alors même que, pour lui, les pratiques sur les réseaux sociaux ne créent pas de vie parallèle et illusoire mais bien une amplification de la vie.
Espace public, démocratie et médias (Chapitre 4)
Notre « vie réelle » acquiert d'autres dimensions, par le biais d'une exposition qui est avant toute chose exposition à soi-même et, par là, réflexion. Une analyse contre-intuitive de la « gymnastique du bras tendu » propre au geste du selfie, amène Cardon à souligner que « le montreur est son premier regardeur ». La visibilité numérique est donc réflexive : la longueur du bras incarne la « distance à soi » qui préside tout comportement d'exhibition (p. 183). Les pratiques de mise en présence, d'exhibition et de relation contribuent à structurer un espace de parole ouvert à la conversation publique. L'essor des réseaux sociaux et leurs fonctions de producteurs et de diffuseurs d'informations, d'initiatives et de consensus, ne risque pas de menacer l'existence des institutions et des principes de la démocratie représentative. Le Web ne remplace pas l'environnement politico-médiatique traditionnel car au contraire, « les Etats, les médias et les partis politiques restent au coeur de son fonctionnement. Ils ont, eux aussi, progressivement trouvé leur place dans les mondes numériques dont ils sont devenus les acteurs majeurs » (p. 217).
Au couple « démocratie représentative »/« démocratie participative », Cardon ajoute la démocratie internet, caractérisée par des logiques de décentralisation, d'horizontalité et d'auto-organisation de plus en plus hybridées par des dynamiques de centralisation, de contrôle, de surveillance 14 . Au milieu de cette contradiction, l'espace public refaçonné par les réseaux sociaux engendre une nouvelle culture politique, « marquée par une sorte d'affinité de structure, d'une part, les formes collectives qui s'organisent (les communautés comme le logiciel libre ou Wikipedia, les structures de gouvernance comme l'IETF, les activistes du Web comme les Anonymus ou le Parti Pirate) et, d'autre part, les nouvelles formes de mouvement social qui apparaissent au début des années 2000 » (p. 228). Les mobilisations sur les réseau sociaux délégitiment les phases susceptibles de créer la figure du leader : pas de leader, pas de porte-parole ; la singularité de leurs membres est une valeur cruciale, personne ne participe au nom d'une autre association ou groupe organisé. Chacun ne représente que soi-même. L'attachement aux procédures démo- cratiques internes discipline l'absence d'un projet ou d'un programme qui ne sont jamais établis avant que ne se manifeste l'engagement des participants.
C'est pourquoi, comme sur Wikipedia ou dans le monde du logiciel libre, ces collectifs prennent leurs décisions par consensus plutôt que par un vote majoritaire. Consensus ne veut pas dire unanimité, mais une longue série de discussions pour trouver des compromis satisfaisants. Les technologies numériques deviennent alors l'objet dans lequel les militants de ces mouvements investissent une énergie considérable pour débattre des formes et des procédures de prise de décision au sein de leur collectif. Les groupes d'Occupy et de Nuit debout en France ont ainsi créé des wikies réunissant le travail d'une centaine de commissions, elles-mêmes divisées en sous-commissions. S'y manifeste de façon exacerbée un phénomène, souvent observé dans les mondes numériques, consistant à faire du débat sur les procédures la forme même de l'expérience démocratique des participants 15 . (p. 232) Regardant de plus près les revirements de l'actualité politique en Italie, en Estonie, aux États-Unis, en Turquie, en Espagne et même en Allemagne, Cardon rappelle que les médias les plus importants pour une campagne électorale sont encore la télévision et la presse. Le moment où le Web joue un rôle déterminant est lors de l'entrée en politique d'un acteur capable de former et de mobiliser rapidement des communautés en ligne. Dans les régimes autoritaires, c'est aussi en ligne qu'on peut contourner les stratégies des canaux d'information institutionnels. Le numérique accentue l'écart entre un journalisme de qualité, qui s'engage dans des enquêtes, dans la création de nouveaux formats et dans la production de contenus originaux, et un journalisme qui ne serait qu'un prétexte pour faire du marketing et faire cliquer le lecteur sur des informations de tous genres, y compris des fake news. Cardon réduit l'impact réel des fake news à l'échelle de visibilité des différents acteurs 15. À lire, à voir, à écouter : Pour une analyse des mobilisations politiques numériques et le rapport entre numérique et politique, l'ouvrage de Zeynep Tufekci, Twitter and Tears Gas. The Power and Fragility of Netwoorked Protest (2017). Pour une critique de l'utopie démocratique numérique, E. Morozov, « from Slaktivism to Activism », Foreign Policy, 5 september 2009 (disponible en ligne) (Morozov, s. d.). Pour démystifier le récit de la panique suite à l'annonce d'une attaque martienne, annonce donnée par Orson Welles, qui se serait emparé de la population des Etats-Unis, l'ouvrage de A. Brad Schwartz, Broadcast Hysteria. Orson Welles's War of the Worlds and yhe art of Fake News (2015).
numériques pour conclure que « lorsque les acteurs du haut de l'échelle de visibilité d'internet ne se préoccupent pas des informations qui viennent du bas, ou veillent à ne pas les relayer, les fake news ont une circulation limitée, et leur audience reste faible » (p. 272) 16 .
Économie du partage, publicité, digital labor (Chapitre 5)
En 2006, parmi les dix premières capitalisations boursières, une seule entreprise informatique : Microsoft. En 2016, les trois premières entreprises du top 10 sont reliées à l'univers numérique (Apple, Alphabet, Microsoft) tandis qu'Amazon et Facebook accèdent à la sixième et la septième position. C'est à partir de ce constat que Cardon se livre à une analyse du contexte économique contemporain, qui apparait comme étant dominé tout à la fois par les GAFA et par une économie du partage 17 . Ainsi, la possession de données sur les utilisateurs joue un rôle déterminant dans la compétition économique. Cardon adopte des notions de théorie économique telles que la loi des rendements croissants et la loi des effets de réseau, lois auxquelles le numérique ajoute la réduction des coûts de transaction. Dans leur ensemble, ces trois facteurs comportent une tendance monopolististique, propre à tout domaine de l'économie numérique. Dans les pages dédiées aux différents modèles économiques de l'économie numérique, Cardon mentionne Uber et Airbnb, financés par les commissions sur les échanges entre vendeurs et offreurs, Netflix, qui fonctionne par abonnement, Apple et Microsoft qui vendent leur produits, Google et Facebook dont les revenus sont, respectivement au 90% et au 97%, dues à la publicité. Google capture 50% du marché de la publicité en ligne, Facebook les trois quarts de celle affichée sur les réseaux sociaux. Le marketing digital se caractérise par une extraordinaire capacité de profilage des clients, stratégie rendue possible grâce aux cookies, de petits fichiers informatiques inventés en 1994 par Lou Montulli, ingénieur chez Netscape, qui permettent au site de reconnaitre l'identité de celui qui le visite. Son identité n'est que l'ensemble des informations récoltées pendant ses navigations passées. Sur la base de ces données, les espaces d'affichage sont mis aux enchères lors d'opérations qui durent moins de 100 millisecondes.
The advertising data market is an opaque world that deliberately maintains discretion so as not to attract opprobrium. The prosperous companies that dominate this universe, unknown to the general public, are called Axciom, Bluekai, Rapleaf or Weborama. These data brokers have set up electronic marketplaces to exchange users' data among themselves and compile information at the fringes of legality, taking advantage of the leniency of regulators. It is one of the areas that the European regulator watches very closely, despite the resistance of the advertising lobbies, which consider these data exchanges indispensable to the personalization of advertising. (p. 316) After a detailed description of how the Google Ads service works, which is undoubtedly the most effective advertising model on the Web (p. 316-318), Cardon turns to the rating model which, starting from eBay's idea of asking buyers to rate sellers, has established itself as the source and the regime of reputation in every commercial context and every transactional space: Tripadvisor, La Fourchette, Allociné, Amazon, etc. The risk of seeing fake reviews emerge certainly discredits the system, but this does not mean that customers cannot recognize authentic reviews, Cardon argues on the basis of various studies 18. The fifth chapter ends with a well-documented reflection on digital labor, a subject to which Cardon has already devoted his discussion with Antonio Casilli 19. Our most everyday actions carried out on platforms and in digital applications produce value and make the platform more efficient and more relevant in its services, whatever they may be. In the light of this observation, Cardon cites the two theories that correspond to two ways of considering the relationship between individuals and the digital environment. The operaismo of Toni Negri, for whom the labor of the "multitude" escapes the grip of the market, thereby generating elements of exteriority that are not translated into market value. The new theorists of digital labor, Cardon continues, defend the thesis that every digital activity amounts to a phase of a process of generalized putting-to-work. "According to this thesis," the author summarizes, "to be on the internet is to take part in this mechanism, and therefore to undergo a form of exploitation" (p. 341). Two other forms of exploitation are that of the fragmented micro-tasks which, while waiting for the robots (as announced by the title of Antonio Casilli's latest book [START_REF] Casilli | En attendant les robots : enquête sur le travail du clic[END_REF], to which Cardon refers), are performed by often de-socialized workers working from home in exchange for a tiny remuneration, and that of employees disguised as self-employed entrepreneurs in the service-intermediation sector (drivers, couriers, etc.). Cardon cites the example of Deliveroo, which claims a neutrality, that of the mediation between restaurant owners and private customers. For Cardon this neutrality cannot be recognized and, moreover, the Labour Inspectorate qualifies it as concealed employment, because the workers are in an evident economic and technical dependence on the platform. Cardon's opinion on this subject leaves no room for ambiguity:
Platforms like to present themselves as a sort of neutral and agnostic intermediary between the different sides of the market they help to coordinate. This claim to neutrality appears more and more questionable. Platforms govern the markets they create, not only to ensure competition between sellers, but also to delimit the uses and the forms of interaction between segments of users. (p. 347-348) 19. D. Cardon, A. Casilli, Digital labor, Paris, INA, 2015. Moreover, platforms monitor the quality of the products sold and gather information on the reliability of sellers and users, even arbitrating between sellers and buyers in situations of conflict. "It therefore seems difficult," Cardon concludes, "to maintain that platforms are neutral markets [...] It is more than time to invent a regulation protecting the rights of those who make a living from the activities that platforms do not simply make possible, but that they frame and command" (p. 349).
Algorithms, Big data and surveillance (Chapter 6)
In the last chapter, Cardon addresses the question of algorithms. The author of "Dans l'esprit du PageRank. Une enquête sur l'algorithme de Google" and of À quoi rêvent les algorithmes ? had already warned us against the supposed neutrality of these computing infrastructures, as early as 2012, in La démocratie Internet 20. Here, the sociologist can only restate his main theoretical conclusions, which rest on two considerations that have become common heritage precisely through his work.
The first:
Algorithms are not neutral. They embody a vision of society given to them by those who program them in the big digital companies. Technical artefacts contain the principles, interests and values of their designers: the operational implementation of these values goes through technical choices, statistical variables, thresholds that are fixed and methods of calculation.
The second:
It would be unreasonable not to take an interest in them on the pretext that they are complex technical objects that only computer scientists can understand. Without going into the sophisti- 20. The algorithms that rank information embody principles of classification and visions of the world. They deeply structure the way internet users see information and picture the digital world in which they move, without always suspecting the underground work that algorithms perform on their itinerary (Cardon 2010, 95).
cated details of the calculation, we must pay attention to the way we build these calculators because, in return, they build us. (p. 357) Cardon returns to his topographical systematization of four families of algorithms: beside the Web, the algorithm measuring the popularity of sites; above the Web, the algorithms that rank the authority of information; within the Web, the algorithms that measure reputation; below it, the predictive algorithms of digital marketing. The criteria and parameters of these four types of algorithms derive from a certain vision of man and of the social, while bringing another vision of the individual, of his habitus, of his way of living the social. After the description of these four families, taken up from À quoi rêvent les algorithmes ? 21, the one who may be called the sociologist of algorithms asks several crucial questions, through which his reflection progresses: "How and by whom do we want to be calculated? What degree of mastery and control do we want to have over the decisions that algorithms make? What do we not want to let be calculated?". The author is well aware of the philosophical, ethical, legal and political scope of this radical questioning, which alone can demystify the carefree naivety of the greatest number. Today, the GDPR (the European regulation on personal data) imposes the transparency of algorithmic decisions: we have the right to check whether surreptitious interests influence the implementation of algorithms. One must remember that algorithms are "idiots", so as not to risk attributing to them intentions that are not their own. Algorithms, Cardon notes, can produce unpleasant and misleading effects that their designers did not anticipate and that are not always manifest (for example, discriminations in criminal predictions, in Google's autocomplete and in the offers of sellers on Airbnb), but there is no reason to demonize them, because "since algorithms build their models from the data supplied by our societies, their predictions tend automatically to reproduce the distributions, inequalities and discriminations of the social world" (p. 407) 22.
The last development of the book concerns surveillance, which seems to have turned into self-control, Cardon writes, aptly citing Gilles Deleuze and the example of the Panopticon. Surveillance by the market, surveillance by other individuals (neighbours, ex-partners, future employers, colleagues, parents) and by the State, the latter being particularly sensitive in a political context marked worldwide by terrorist strategies. Surveillance, Cardon observes, was put in place with the support and the tolerance of our societies. Now, faced with the need to negotiate our individual rights, which must be able to set limits to these systems, Cardon calls for collective claims, because privacy must not be considered a personal question but a collective right. The right to a society in which one can "keep secret gardens", even if this means giving up part of the efficiency of the services that platforms and algorithms provide us. "More than ever," Cardon writes in his conclusions, "it is up to researchers, communities, public authorities and above all internet users to preserve the reflexive, polyphonic and hardly controllable dynamic initiated by the pioneers of the Web" (p. 421).
9. Cardon invites us not to underestimate the significance of this decision, given that, when it generalized the like in 2010, Facebook created a new link which, by contrast, would never be public: the like remains a proprietary link (p. 89). ...the traditional markets, "but nothing worked as planned" (p. 91) 10. After recounting the vicissitudes of the first innovations of the Web, the "bottom-up" innovations 11, including the hashtag on Twitter, invented in 2006 by an ordinary member of the network, Chris Messina, and the "sociological miracle no one believed in" of Wikipedia 12, founded in 2000 by Jimmy Wales, a small entrepreneur in the e-commerce sector, Cardon introduces the question of the commons (second chapter), which is then continued by a reflection on the public space. "The commons is neither a private good, that is, the exclusive property of one or several owners," Cardon specifies, "nor a public good governed by public authority in the name of the general interest. The Web has many characteristics of a public good, such as being accessible to and shared by all, but its specificity is that the community that watches over it has, not rights, but a particular authority of governance" (p. 111). To describe the historical dialectic between proprietary software and free software, Cardon refers to two main figures: Bill Gates, who in 1976, in a letter sent to the members of the Homebrew Computer Club of Menlo Park, declared that he would not allow the free use of the software designed for the Altair 8800, and Richard Stallman, developer and founder, in 1985, of the Free Software Foundation. It was Stallman who, in 1983, launched GNU, the free software that became Linux in 1991 thanks to Linus Torvalds. These pages are dense with references that the author knows how to detail without weighing down his synthesis, before reflecting on the open-source culture that marks digital practices: [In free-software communities] work is governed by a fine division of tasks and a distribution of responsibi- 10. To read, to watch, to listen: for the history of the privatization of the network and the emergence of a new economy, Cardon refers to Shane Greenstein's book How the Internet Became Commercial. Innovation, Privatization and the Birth of a New Network (2015).
14. To read, to watch, to listen: to approach the political science of the digital with an optimistic vision, Cardon points to the talk by Pia Mancini (co-creator of the free software DemocracyOS), "How to Upgrade Democracy for the Internet Era", TED Talk, 2014, and that of Clay Shirky, "How the Internet Will (One Day) Transform Government", TED Talks, 2012.
16. To read, to watch, to listen: for a challenge to the idea that Donald Trump won thanks to fake news, D. J. Watts and D. M. Rothschild, "Don't Blame the Election on Fake News, Blame It on the Media", Columbia Journalism Review, 5 December 2017 (Watts and Rothschild 2017), and H. Allcott and M. Gentzkow, "Social Media and Fake News in the 2016 Election", Journal of Economic Perspectives, 31 (2), 2017 (Allcott and Gentzkow 2017). 17. To read, to watch, to listen: on the economics of platforms, D. Evans and R. Schmalensee, De précieux intermédiaires. Comment Blablacar, Facebook, Paypal ou Uber créent de la valeur (Evans, Schmalensee, and Tirole 2017a), and C. Benavent, Plateformes. Sites collaboratifs, marketplaces, réseaux sociaux… Comment ils influencent nos choix (Benavent 2016).
18. To read, to watch, to listen: among the sociological studies on rating systems, D. Pasquier, Valérie Beaudouin, T. Legon, « Moi je lui donne 5/5 ». Paradoxes de la critique amateure en ligne, Paris, Presses des Mines, 2014, and S. David and T. Pinch, "Six Degrees of Reputation: The Use and Abuse of Online Review and Recommendation Systems", in T. Pinch and R. Swedberg (eds), Living in a Material World. Economic Sociology Meets Science and Technology Studies, Cambridge (Mass.), The MIT Press, 2008.
M. Doueihi, Digital Cultures (2011a). In French translation, La grande conversion numérique (2008, 25, 26).
M. Ferraris, Anima e iPad (2011). French translation Âme et iPad (2014); Mobilitazione totale (2015), French translation Mobilisation totale (2016).
D.-J. Caron, L'homme imbibé. De l'oral au numérique : un enjeu pour l'avenir des cultures ? (2014); for our review of this book, see here.
We are entering a new world that the digital enriches, transforms and surveys, he writes in his Introduction. It is important to have varied and interdisciplinary knowledge in order to live in it with agility and prudence, for if we make the digital, the digital also makes us. (Cardon 2019, 9) This circularity linking human beings to artefacts, as well as the need to develop and institutionalize pedagogical strategies capable of teaching us, at least in broad outline, how the machine works, had already been theorized in À quoi rêvent les algorithmes ?, where Cardon claimed the right to a basic education including the logics and parameters of the algorithms that orient our gestures, our movements and our decisions [START_REF] Cardon | Qu'est-ce que le digital labor ?[END_REF]. Four years after that book, which has already become a classic of French-language sociology of the digital, Cardon seems willing to take on this increasingly urgent pedagogical responsibility of organizing, stabilizing and disseminating a body of knowledge indispensable to the present time. The sociologist gives us a dense and coherent conceptual framework and an apparatus of knowledge that also provides a synthetic and exhaustive narrative of the genealogy and socialization of the tools and practices linked to writing and communication, to socialization and scientific research, to the sharing of knowledge and to the market. The generous English- and French-language bibliography that accompanies each chapter signals to us
This is the position, among others, of Jeremy Rifkin. Cf. J. Rifkin, La troisième révolution industrielle. Comment le pouvoir latéral va transformer l'énergie, l'économie et le monde, Paris, Éditions LLL, 2012.
5. See P. Lévy, Les technologies de l'intelligence (1990) and Qu'est-ce que le virtuel ? (2001).
To read, to watch, to listen: for a general theory of privacy, H. Nissenbaum, Privacy in Context. Technology, Policy and the Integrity of Social Life (2010).
For this reason we refer the reader to our review of that book, available here.
To read, to watch, to listen: on algorithmic governmentality, the presentation by Antoinette Rouvroy, "Rencontre avec Antoinette Rouvroy : gouvernementalité algorithmique et idéologie des big data", YouTube, 6 March 2018; on discrimination effects, |
00403952 | en | ["math.math-pr"] | 2024/03/04 16:41:22 | 2009 | https://hal.science/hal-00403952v2/file/normal-reserve.pdf | Ciprian A. Tudor, SAMOS/MATISSE
On the structure of Gaussian random variables
2000 AMS Classification Numbers: 60G15, 60H05, 60H07. Keywords: Gaussian random variable, representation of martingales, multiple stochastic integrals, Malliavin calculus.
We study when a given Gaussian random variable on a given probability space $(\Omega, \mathcal{F}, P)$ is equal almost surely to $\beta_1$, where $\beta$ is a Brownian motion defined on the same (or a possibly extended) probability space. As a consequence of this result, we prove that the distribution of a random variable lying in a finite sum of Wiener chaoses (and satisfying in addition a certain property) cannot be normal. This result also allows a better understanding of a characterization of Gaussian random variables obtained via Malliavin calculus.
Introduction
We study when a Gaussian random variable defined on some probability space can be expressed almost surely as a Wiener integral with respect to a Brownian motion defined on the same space. The starting point of this work is a series of recent results related to the distance between the law of an arbitrary random variable $X$ and the Gaussian law. This distance can be defined in various ways (the Kolmogorov distance, the total variation distance or others) and it can be expressed in terms of the Malliavin derivative $DX$ of the random variable $X$ when this derivative exists. These results lead to a characterization of Gaussian random variables through Malliavin calculus. Let us briefly recall the context. Suppose that $(\Omega, \mathcal{F}, P)$ is a probability space and let $(W_t)_{t\in[0,1]}$ be an $\mathcal{F}_t$ Brownian motion on this space, where $\mathcal{F}_t$ is its natural filtration. Equivalent conditions for the standard normality of a centered random variable $X$ with variance 1 are the following: $E\bigl(1 - \langle DX, D(-L)^{-1}X\rangle \mid X\bigr) = 0$, or $E\bigl(f'_z(X)(1 - \langle DX, D(-L)^{-1}X\rangle)\bigr) = 0$ for every $z$, where $D$ denotes the Malliavin derivative, $L$ is the Ornstein-Uhlenbeck operator, $\langle\cdot,\cdot\rangle$ denotes the scalar product in $L^2([0,1])$ and the deterministic function $f_z$ is the solution of the Stein equation (see e.g. [START_REF] Nourdin | Stein's method on Wiener chaos[END_REF]). This characterization is of course interesting and it can be useful in some cases. It is also easy to understand it for random variables that are Wiener integrals with respect to $W$. Indeed, assume that $X = W(h)$ where $h$ is a deterministic function in $L^2([0,1])$ with $\|h\|_{L^2([0,1])} = 1$. In this case $DX = h = D(-L)^{-1}X$, so $\langle DX, D(-L)^{-1}X\rangle = 1$ and the above equivalent conditions for the normality of $X$ can be easily verified. In some other cases it is difficult, even impossible, to compute the quantity $E\bigl(\langle DX, D(-L)^{-1}X\rangle \mid X\bigr)$ or $E\bigl(f'_z(X)(1 - \langle DX, D(-L)^{-1}X\rangle)\bigr)$. Let us consider for example the case of the random variable $Y = \int_0^1 \mathrm{sign}(W_s)\, dW_s$. This is not a Wiener integral with respect to $W$. But it is well known that it is standard Gaussian, because the process $\beta_t = \int_0^t \mathrm{sign}(W_s)\, dW_s$ is a Brownian motion, as follows from Lévy's characterization theorem. The chaos expansion of this random variable is known and it is recalled in Section 2. In fact $Y$ can be expressed as an infinite sum of multiple Wiener-Itô stochastic integrals and it is impossible to check whether the equivalent conditions for its normality are satisfied (it is not even differentiable in the Malliavin calculus sense). The phenomenon that happens here is that $Y$ can be expressed as the value at time 1 of the Brownian motion $\beta$, which is actually the Dambis-Dubins-Schwarz (DDS in short) Brownian motion associated to the martingale
$M^Y = (M^Y_t)_{t\in[0,1]}$, $M^Y_t = E(Y \mid \mathcal{F}_t)$ (recall that $\mathcal{F}_t$ is the natural filtration of $W$); here $\beta$ is defined on the same space $\Omega$ (or possibly on an extension of $\Omega$) and is a $\mathcal{G}_s$-Brownian motion with respect to the filtration $\mathcal{G}_s = \mathcal{F}_{T(s)}$, where $T(s) = \inf\bigl(t\in[0,1];\ \langle M^Y\rangle_t \ge s\bigr)$
. This leads to the following question: is any standard normal random variable $X$ representable as the value at time 1 of the Brownian motion associated, via the Dambis-Dubins-Schwarz theorem, to the martingale $M^X$, where $M^X_t = E(X\mid\mathcal{F}_t)$ for every $t$?
By combining the techniques of Malliavin calculus and classical tools of probability theory, we found the following answer: if the bracket of the $\mathcal{F}_t$ martingale $M^X$ is bounded a.s. by 1, then this property is true, that is, $X$ can be represented as its DDS Brownian motion at time 1. The property also holds when the bracket $\langle M^X\rangle_1$ is bounded by an arbitrary constant and $\langle M^X\rangle_1$ and $\beta_{\langle M^X\rangle_1}$ are independent. If the bracket of $M^X$ is not bounded by 1, then this property is no longer true in general. An example where it fails is obtained by considering the standard normal random variable $W(h_1)\,\mathrm{sign}(W(h_2))$, where $h_1, h_2$ are two orthonormal elements of $L^2([0,1])$. Nevertheless, we will prove that we can construct a bigger probability space $\Omega_0$ that includes $\Omega$ and a Brownian motion on $\Omega_0$ such that $X$ is equal almost surely to this Brownian motion at time 1. The construction is done by means of the Karhunen-Loève theorem. Some consequences of this result are discussed here; we believe that these consequences could be various. We prove that a standard normal random variable such that the bracket of its associated DDS martingale is bounded by 1 cannot live in a finite sum of Wiener chaoses: it is either in the first chaos or in an infinite sum of chaoses. We also make a connection with some results obtained recently via Stein's method and Malliavin calculus.
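To make the motivating example above concrete, the following small simulation (our own illustrative sketch, not part of the original argument; the Euler discretization step, the sample size and the use of NumPy are arbitrary choices) checks numerically that $Y = \int_0^1 \mathrm{sign}(W_s)\, dW_s$ behaves like a standard normal variable, as Lévy's characterization theorem predicts.

```python
# Monte Carlo sanity check that int_0^1 sign(W_s) dW_s is (approximately) N(0,1).
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 1_000
dt = 1.0 / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
# predictable (left-point) evaluation of sign(W_s); sign(0)=0 at the first step
# is a negligible discretization artifact
W_prev = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
Y = np.sum(np.sign(W_prev) * dW, axis=1)

print("mean   (expect ~0):", Y.mean())
print("var    (expect ~1):", Y.var())
print("4th mom (expect ~3):", np.mean(Y**4))
```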
We structured our paper as follows. Section 2 starts with a short description of the elements of Malliavin calculus and it also contains our main result on the structure of Gaussian random variables. In Section 3 we discuss some consequences of our characterization. In particular we prove that random variables whose associated DDS martingale has bracket bounded by 1 cannot belong to a finite sum of Wiener chaoses, and we relate our work to recent results on standard normal random variables obtained via Malliavin calculus.
On the structure of Gaussian random variables
Let us consider a probability space (Ω, F, P ) and assume that (W t ) t∈[0,1] is a Brownian motion on this space with respect to its natural filtration (F t ) t∈[0,1] . Let I n denote the multiple Wiener-Itô integral of order n with respect to W . The elements of the stochastic calculus for multiple integrals and of Malliavin calculus can be found in [START_REF] Malliavin | Stochastic Analysis[END_REF] or [START_REF] Nualart | Malliavin Calculus and Related Topics[END_REF]. We will just introduce very briefly some notation. We recall that any square integrable random variable which is measurable with respect to the σ-algebra generated by W can be expanded into an orthogonal sum of multiple stochastic integrals
\[ F = \sum_{n\ge 0} I_n(f_n) \tag{2} \]
where $f_n \in L^2([0,1]^n)$ are (uniquely determined) symmetric functions and $I_0(f_0) = E[F]$.
The isometry of multiple integrals can be written as: for $m, n$ positive integers, $f \in L^2([0,1]^n)$ and $g \in L^2([0,1]^m)$,
\[ E\bigl(I_n(f)\, I_m(g)\bigr) = n!\,\langle f, g\rangle_{L^2([0,1])^{\otimes n}} \ \text{ if } m = n, \qquad E\bigl(I_n(f)\, I_m(g)\bigr) = 0 \ \text{ if } m \neq n. \tag{3} \]
It also holds that $I_n(f) = I_n(\tilde f)$, where $\tilde f$ denotes the symmetrization of $f$ defined by $\tilde f(x_1,\dots,x_n) = \frac{1}{n!}\sum_{\sigma\in S_n} f(x_{\sigma(1)},\dots,x_{\sigma(n)})$. We will need the general formula for calculating products of Wiener chaos integrals of any orders $m, n$ for any symmetric integrands
$f \in L^2([0,1]^{\otimes m})$ and $g \in L^2([0,1]^{\otimes n})$; it is
\[ I_m(f)\, I_n(g) = \sum_{l=0}^{m\wedge n} l!\, C_m^l\, C_n^l\, I_{m+n-2l}\bigl(f \otimes_l g\bigr) \tag{4} \]
where the contraction $f\otimes_l g$ ($0\le l\le m\wedge n$) is defined by
\[ (f\otimes_\ell g)(s_1,\dots,s_{n-\ell},t_1,\dots,t_{m-\ell}) = \int_{[0,1]^{\ell}} f(s_1,\dots,s_{n-\ell},u_1,\dots,u_\ell)\, g(t_1,\dots,t_{m-\ell},u_1,\dots,u_\ell)\, du_1\dots du_\ell. \tag{5} \]
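As an elementary illustration of the product formula (a worked special case added here for the reader; it is not stated explicitly in the text but follows directly from (4)), take $m = n = 1$ and $f, g \in L^2([0,1])$: only the terms $l = 0$ and $l = 1$ appear, so
\[ I_1(f)\, I_1(g) = I_2(f\otimes g) + \langle f, g\rangle_{L^2([0,1])}, \]
and in particular, for $\|h\|_{L^2([0,1])} = 1$,
\[ W(h)^2 = I_1(h)^2 = I_2\bigl(h^{\otimes 2}\bigr) + 1, \]
which is exactly the identity used repeatedly below for $W(h_1)^2$.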
Note that the contraction $(f\otimes_\ell g)$ is an element of $L^2([0,1]^{m+n-2\ell})$ but it is not necessarily symmetric. We will denote by $(f\,\tilde\otimes_\ell\, g)$ its symmetrization. We denote by $\mathbb{D}^{1,2}$ the domain of the Malliavin derivative with respect to $W$, which takes values in $L^2([0,1]\times\Omega)$. We just recall that $D$ acts on functionals of the form $f(X)$, with $X\in\mathbb{D}^{1,2}$ and $f$ differentiable, in the following way: $D_\alpha f(X) = f'(X)\, D_\alpha X$ for every $\alpha\in(0,1]$, and on multiple integrals $I_n(f)$ with $f\in L^2([0,1]^n)$ as $D_\alpha I_n(f) = n\, I_{n-1}\bigl(f(\cdot,\alpha)\bigr)$.
The Malliavin derivative $D$ admits a dual operator, the divergence integral $\delta$, with $\delta(u)\in L^2(\Omega)$ if $u\in\mathrm{Dom}(\delta)$, and we have the duality relationship
\[ E\bigl(F\delta(u)\bigr) = E\langle DF, u\rangle, \qquad F\in\mathbb{D}^{1,2},\ u\in\mathrm{Dom}(\delta). \tag{6} \]
For adapted integrands, the divergence integral coincides with the classical Itô integral.
Let us fix the probability space (Ω, F, P ) and let us assume that the Wiener process (W t ) t∈[0,1] lives on this space. Let X be a centered square integrable random variable on Ω. Assume that X is measurable with respect to the sigma-algebra F 1 . After Proposition 1 the random variable X will be assumed to have standard normal law.
The following result is an immediate consequence of the Dambis-Dubins-Schwarz theorem (DDS theorem for short, see [START_REF] Karatzas | Brownian motion and stochastic calculus[END_REF], Section 3.4, or [START_REF] Revuz | Continuous martingales and Brownian motion[END_REF], Chapter V).
Proposition 1 Let $X$ be a random variable in $L^1(\Omega)$. Then there exists a Brownian motion $(\beta_s)_{s\ge 0}$ (possibly defined on an extension of the probability space) with respect to a filtration $(\mathcal{G}_s)_{s\ge 0}$ such that $X = \beta_{\langle M^X\rangle_1}$, where $M^X = (M^X_t)_{t\in[0,1]}$ is the martingale given by $M^X_t = E(X\mid\mathcal{F}_t)$, $t\in[0,1]$. Moreover the random time $T = \langle M^X\rangle_1$ is a stopping time for the filtration $(\mathcal{G}_s)$ and it satisfies $T>0$ a.s. and $ET = EX^2$.
Proof: Let $T(s) = \inf\{t\ge 0;\ \langle M^X\rangle_t \ge s\}$. By applying the Dambis-Dubins-Schwarz theorem, $\beta_s := M^X_{T(s)}$ is a standard Brownian motion with respect to the filtration $\mathcal{G}_s := \mathcal{F}_{T(s)}$ and for every $t\in[0,1]$ we have $M^X_t = \beta_{\langle M^X\rangle_t}$ a.s. $P$. Taking $t = 1$ we get $X = \beta_{\langle M^X\rangle_1}$ a.s. The fact that $T$ is a $(\mathcal{G}_s)_{s\ge 0}$ stopping time is well known. It is true because $\bigl(\langle M^X\rangle_1 \le s\bigr) = \bigl(T(s)\ge 1\bigr) \in \mathcal{F}_{T(s)} = \mathcal{G}_s$. Also clearly $T>0$ a.s. and $ET = EX^2$.
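The time change in this proof can be visualized numerically. The rough sketch below is our own illustration (the Euler grid and the choice of the example martingale $M_t = \int_0^t \mathrm{sign}(W_s)\, dW_s$ are arbitrary): it computes the bracket $\langle M\rangle$, the inverse time change $T(s)$ and the DDS Brownian motion $\beta_s = M_{T(s)}$ on a discretized path.

```python
# Numerical sketch of the Dambis-Dubins-Schwarz time change on one simulated path.
import numpy as np

rng = np.random.default_rng(1)
n_steps = 100_000
dt = 1.0 / n_steps

dW = rng.normal(0.0, np.sqrt(dt), n_steps)
W = np.concatenate([[0.0], np.cumsum(dW)])
integrand = np.sign(W[:-1])                       # predictable version of sign(W_s)
M = np.concatenate([[0.0], np.cumsum(integrand * dW)])
bracket = np.concatenate([[0.0], np.cumsum(integrand**2 * dt)])   # <M>_t

# T(s) = inf{t : <M>_t >= s};  beta_s = M_{T(s)} is the DDS Brownian motion
s_grid = np.linspace(0.0, bracket[-1], 501)
T_of_s = np.minimum(np.searchsorted(bracket, s_grid), n_steps)
beta = M[T_of_s]

print("<M>_1        ≈", bracket[-1])              # ≈ 1 since sign(W_s)^2 = 1
print("beta_{<M>_1} ≈ M_1:", beta[-1], M[-1])
```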
In the sequel we will call the Brownian motion $\beta$ obtained via the DDS theorem the DDS Brownian motion associated to $X$.
Recall the Ocone-Clark formula: if $X$ is a random variable in $\mathbb{D}^{1,2}$ then
\[ X = EX + \int_0^1 E\bigl(D_\alpha X \mid \mathcal{F}_\alpha\bigr)\, dW_\alpha. \tag{7} \]
Remark 1 If the random variable $X$ has zero mean and belongs to the space $\mathbb{D}^{1,2}$, then by the Ocone-Clark formula (7) we have $M^X_t = \int_0^t E\bigl(D_\alpha X\mid\mathcal{F}_\alpha\bigr)\, dW_\alpha$ and consequently
\[ X = \beta_{\int_0^1 \bigl(E(D_\alpha X\mid\mathcal{F}_\alpha)\bigr)^2\, d\alpha} \]
where $\beta$ is the DDS Brownian motion associated to $X$.
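A simple sanity check of Remark 1 (our own elementary example, added for readability): take $X = W(h)$ with $\|h\|_{L^2([0,1])} = 1$. Then $D_\alpha X = h(\alpha)$ is deterministic, so
\[ M^X_t = \int_0^t E\bigl(D_\alpha X\mid\mathcal{F}_\alpha\bigr)\, dW_\alpha = \int_0^t h(\alpha)\, dW_\alpha, \qquad \langle M^X\rangle_1 = \int_0^1 h(\alpha)^2\, d\alpha = 1, \]
so the random time is identically 1 and $X = \beta_1$ for its DDS Brownian motion.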
Assume from now on that $X\sim N(0,1)$. As we have seen, $X$ can be written as the value at a random time of a Brownian motion $\beta$ (which is in fact the Dambis-Dubins-Schwarz Brownian motion associated to the martingale $M^X$). Note that $\beta$ has the time interval $\mathbb{R}_+$ even if $W$ is indexed over $[0,1]$. So, if we know that $\beta_T$ has a standard normal law, what can we say about the random time $T$? Is it equal to 1 almost surely? This is for example the case of the variable $X = \int_0^1 \mathrm{sign}(W_s)\, dW_s$, because here, for every $t\in[0,1]$, $M^X_t = \int_0^t \mathrm{sign}(W_s)\, dW_s$ and $\langle M^X\rangle_t = \int_0^t (\mathrm{sign}(W_s))^2\, ds = t$. Another situation where this is true is related to Bessel processes. Let $(B^{(1)},\dots,B^{(d)})$ be a $d$-dimensional Brownian motion and consider the random variable
\[ X = \int_0^1 \frac{B^{(1)}_s}{\sqrt{(B^{(1)}_s)^2 + \dots + (B^{(d)}_s)^2}}\, dB^{(1)}_s + \dots + \int_0^1 \frac{B^{(d)}_s}{\sqrt{(B^{(1)}_s)^2 + \dots + (B^{(d)}_s)^2}}\, dB^{(d)}_s. \tag{8} \]
It also satisfies $\langle M^X\rangle_t = t$ for every $t\in[0,1]$ and in particular $T := \langle M^X\rangle_1 = 1$ a.s. We will see below that the fact that any $N(0,1)$ random variable is equal a.s. to $\beta_1$ (its associated DDS Brownian motion evaluated at time 1) is true only for random variables for which the bracket of their associated DDS martingale is almost surely bounded and $T$ and $\beta_T$ are independent, or if $T$ is bounded almost surely by 1.
We will assume the following condition on the stopping time T .
There exists a constant $M>0$ such that $T\le M$ a.s. (9)
The problem we address in this section is then the following: let $(\beta_t)_{t\ge 0}$ be a $\mathcal{G}_t$-Brownian motion and let $T$ be an almost surely positive stopping time for its filtration such that $E(T) = 1$ and $T$ satisfies (9). We will determine when $T = 1$ a.s.
Let us start with the following result.
Theorem 1 Assume (9) and assume that $T$ is independent of $\beta_T$. Then it holds that $ET^2 = 1$.
Proof: Let us apply Itô's formula to the G t martingale β T ∧t . Letting t → ∞ (recall that T is a.s. bounded) we get
\[ E\beta_T^4 = 6\, E\int_0^T \beta_s^2\, ds. \]
Since $\beta_T$ has the $N(0,1)$ law, we have $E\beta_T^4 = 3$. Consequently
\[ E\int_0^T \beta_s^2\, ds = \tfrac{1}{2}. \]
Now, by the independence of $T$ and $\beta_T$, we get $E(T\beta_T^2) = ET\, E\beta_T^2 = 1$. Applying again Itô's formula to $\beta_{T\wedge t}$ with $f(t,x) = tx^2$ we get
\[ E\bigl(T\beta_T^2\bigr) = E\int_0^T \beta_s^2\, ds + E\int_0^T s\, ds. \]
Therefore $E\int_0^T s\, ds = \tfrac{1}{2}$ and then $ET^2 = 1$.
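For the reader's convenience, the two applications of Itô's formula used in this proof can be spelled out as follows (a routine verification added here; the stochastic integrals below are true martingales because $T$ is bounded). For $f(x) = x^4$,
\[ \beta_{T\wedge t}^4 = 4\int_0^{T\wedge t}\beta_s^3\, d\beta_s + 6\int_0^{T\wedge t}\beta_s^2\, ds, \qquad\text{so that}\qquad E\beta_T^4 = 6\, E\int_0^T \beta_s^2\, ds, \]
and for $f(t,x) = t x^2$,
\[ T\,\beta_T^2 = \int_0^T \beta_s^2\, ds + 2\int_0^T s\,\beta_s\, d\beta_s + \int_0^T s\, ds, \qquad\text{so that}\qquad E\bigl(T\beta_T^2\bigr) = E\int_0^T \beta_s^2\, ds + E\int_0^T s\, ds. \]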
Theorem 2 Let $(\beta_t)_{t\ge 0}$ be a $\mathcal{G}_t$ Wiener process and let $T$ be a bounded $\mathcal{G}_t$ stopping time with $ET = 1$. Assume that $T$ and $\beta_T$ are independent. Suppose $\beta_T$ has the $N(0,1)$ law. Then $T = 1$ a.s.
Proof: It is a consequence of the above theorem, since $E(T-1)^2 = ET^2 - 2E(T) + 1 = 0$.
Proposition 2 Assume that (9) is satisfied with M ≤ 1. Then T = 1 almost surely.
Proof: By Itô's formula,
\[ E\beta_T^4 = 6\, E\int_0^T \beta_s^2\, ds = 6\, E\int_0^1 \beta_s^2\, ds - 6\, E\int_{\mathbb{R}_+} \beta_s^2\, 1_{[T,1]}(s)\, ds. \]
Since $6\, E\int_0^1 \beta_s^2\, ds = 6\int_0^1 s\, ds = 3$ and $E\beta_T^4 = 3$, it follows that $E\int_{\mathbb{R}_+}\beta_s^2\, 1_{[T,1]}(s)\, ds = 0$ and this implies that $\beta_s^2(\omega)\, 1_{[T(\omega),1]}(s) = 0$ for almost all $s$ and $\omega$. Clearly $T = 1$ almost surely.
Next, we will try to understand whether this property remains true without the assumption that the bracket of the martingale $M^X$ is bounded almost surely. To this end, we will consider the following example. Let $(W_t)_{t\in[0,1]}$ be a standard Wiener process with respect to its natural filtration
$\mathcal{F}_t$. Consider $h_1, h_2$ two functions in $L^2([0,1])$ such that $\langle h_1, h_2\rangle_{L^2([0,1])} = 0$ and $\|h_1\|_{L^2([0,1])} = \|h_2\|_{L^2([0,1])} = 1$. For example we can choose $h_1(x) = \sqrt{2}\, 1_{[0,\frac12]}(x)$ and $h_2(x) = \sqrt{2}\, 1_{[\frac12,1]}(x)$ (so, in addition, $h_1$ and $h_2$ have disjoint support). Define the random variable
\[ X = W(h_1)\,\mathrm{sign}\bigl(W(h_2)\bigr). \tag{10} \]
It is well known that $X$ is standard normal. Note in particular that $X^2 = W(h_1)^2$. We will see that it cannot be written as the value at time 1 of its associated DDS Brownian motion. To this end we will use the chaos expansion of $X$ into multiple Wiener-Itô integrals.
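The normality of $X$ can also be checked by simulation. The following Monte Carlo sketch (ours; the sample size and the random seed are arbitrary) uses the explicit choice $h_1 = \sqrt{2}\,1_{[0,1/2]}$, $h_2 = \sqrt{2}\,1_{[1/2,1]}$, for which $W(h_1)$ and $W(h_2)$ are independent standard normal variables.

```python
# Monte Carlo check that X = W(h1) * sign(W(h2)) has the first four moments of N(0,1).
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
W_h1 = rng.normal(size=n)        # W(h1) ~ N(0,1)
W_h2 = rng.normal(size=n)        # W(h2) ~ N(0,1), independent of W(h1)
X = W_h1 * np.sign(W_h2)

# expected values: 0, 1, 0, 3 (X is in fact exactly N(0,1) by symmetry)
print("moments 1-4:", X.mean(), X.var(), np.mean(X**3), np.mean(X**4))
```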
Recall that if $h\in L^2([0,1])$ with $\|h\|_{L^2([0,1])} = 1$ then (see e.g. [START_REF] Hu | Some processes associated with fractional Bessel processes[END_REF])
\[ \mathrm{sign}\bigl(W(h)\bigr) = \sum_{k\ge 0} b_{2k+1}\, I_{2k+1}\bigl(h^{\otimes(2k+1)}\bigr) \quad\text{with}\quad b_{2k+1} = \frac{2(-1)^k}{\sqrt{2\pi}\,(2k+1)\,k!\,2^k}, \quad k\ge 0. \]
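The normalization of these coefficients can be verified numerically: by the isometry (3), $E\bigl[\mathrm{sign}(W(h))^2\bigr] = \sum_{k\ge 0}(2k+1)!\, b_{2k+1}^2 = 1$. The short script below is our own check (the truncation level is an arbitrary choice and the series converges slowly, so the partial sum over the first 200,000 terms is only about 0.999).

```python
# Check sum_k (2k+1)! * b_{2k+1}^2 = 1, using the closed form
# (2k+1)! * b_{2k+1}^2 = (2/pi) * C(2k,k) / (4**k * (2k+1))
# and the stable recursion r_{k+1} = r_k * (2k+1)/(2k+2) for r_k = C(2k,k)/4**k.
import math

total, r = 0.0, 1.0
for k in range(200_000):
    total += (2.0 / math.pi) * r / (2 * k + 1)
    r *= (2 * k + 1) / (2 * k + 2)
print(total)   # ≈ 0.999, increasing to 1 as the truncation level grows
```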
We have the following.
Proposition 3 The standard normal random variable $X$ given by (10) is not equal a.s. to $\beta_1$, where $\beta$ is its associated DDS Brownian motion.
Proof: By the product formula (4) we can express $X$ as (note that $h_1$ and $h_2$ are orthogonal, so no contractions of order $l\ge 1$ appear)
\[ X = \sum_{k\ge 0} b_{2k+1}\, I_{2k+2}\bigl(h_1\,\tilde\otimes\, h_2^{\otimes 2k+1}\bigr) \quad\text{and}\quad E(X\mid\mathcal{F}_t) = \sum_{k\ge 0} b_{2k+1}\, I_{2k+2}\bigl((h_1\,\tilde\otimes\, h_2^{\otimes 2k+1})\, 1_{[0,t]}^{\otimes 2k+2}(\cdot)\bigr) \quad\text{for every } t\in[0,1]. \]
We have
\[ (h_1\,\tilde\otimes\, h_2^{\otimes 2k+1})(t_1,\dots,t_{2k+2}) = \frac{1}{2k+2}\sum_{i=1}^{2k+2} h_1(t_i)\, h_2^{\otimes 2k+1}(t_1,\dots,\hat t_i,\dots,t_{2k+2}) \tag{11} \]
where $\hat t_i$ means that the variable $t_i$ is missing. Now, $M^X_t = E(X\mid\mathcal{F}_t) = \int_0^t u_s\, dW_s$ where, by (11),
\[ u_s = \sum_{k\ge 0} b_{2k+1}(2k+2)\, I_{2k+1}\bigl((h_1\,\tilde\otimes\, h_2^{\otimes 2k+1})(\cdot,s)\, 1_{[0,s]}^{\otimes 2k+1}(\cdot)\bigr) = \sum_{k\ge 0} b_{2k+1}\Bigl[ h_1(s)\, I_{2k+1}\bigl(h_2^{\otimes 2k+1} 1_{[0,s]}^{\otimes 2k+1}(\cdot)\bigr) + (2k+1)\, h_2(s)\, I_1\bigl(h_1 1_{[0,s]}(\cdot)\bigr)\, I_{2k}\bigl(h_2^{\otimes 2k} 1_{[0,s]}^{\otimes 2k}(\cdot)\bigr)\Bigr] \]
for every $s\in[0,1]$. Note first that, due to the choice of the functions $h_1$ and $h_2$,
\[ h_1(s)\, h_2(u)\, 1_{[0,s]}(u) = 0 \quad\text{for every } s, u \in [0,1]. \]
Thus the first summand of $u_s$ vanishes and
\[ u_s = \sum_{k\ge 0} b_{2k+1}(2k+1)\, h_2(s)\, I_1\bigl(h_1 1_{[0,s]}(\cdot)\bigr)\, I_{2k}\bigl(h_2^{\otimes 2k} 1_{[0,s]}^{\otimes 2k}(\cdot)\bigr). \]
Note also that $h_1(x)\, 1_{[0,s]}(x) = h_1(x)$ for every $s$ in the interval $[\tfrac12,1]$. Consequently, for every $s\in[0,1]$,
\[ u_s = W(h_1) \sum_{k\ge 0} b_{2k+1}(2k+1)\, h_2(s)\, I_{2k}\bigl(h_2^{\otimes 2k} 1_{[0,s]}^{\otimes 2k}(\cdot)\bigr). \]
Let us compute the chaos decomposition of the random variable $\int_0^1 u_s^2\, ds$. Taking into account the fact that $h_1$ and $h_2$ have disjoint support we can write
\[ \int_0^1 u_s^2\, ds = \sum_{k,l\ge 0} b_{2k+1} b_{2l+1}(2k+1)(2l+1)\, W(h_1)^2 \int_0^1 ds\, h_2(s)^2\, I_{2k}\bigl(h_2^{\otimes 2k} 1_{[0,s]}^{\otimes 2k}(\cdot)\bigr)\, I_{2l}\bigl(h_2^{\otimes 2l} 1_{[0,s]}^{\otimes 2l}(\cdot)\bigr). \]
Since
\[ W(h_1)^2 = I_2\bigl(h_1^{\otimes 2}\bigr) + \int_0^1 h_1(u)^2\, du = I_2\bigl(h_1^{\otimes 2}\bigr) + 1 \]
and
\[ E\bigl(\mathrm{sign}(W(h_2))\bigr)^2 = \int_0^1 ds\, h_2^2(s)\, E\Bigl(\sum_{k\ge 0} b_{2k+1}(2k+1)\, I_{2k}\bigl(h_2^{\otimes 2k} 1_{[0,s]}^{\otimes 2k}(\cdot)\bigr)\Bigr)^2 = 1, \]
we get
\[ \int_0^1 u_s^2\, ds = \bigl(1 + I_2(h_1^{\otimes 2})\bigr)\times\Bigl(1 + \sum_{k,l\ge 0} b_{2k+1} b_{2l+1}(2k+1)(2l+1)\int_0^1 ds\, h_2(s)^2\Bigl[I_{2k}\bigl(h_2^{\otimes 2k}1_{[0,s]}^{\otimes 2k}(\cdot)\bigr)\, I_{2l}\bigl(h_2^{\otimes 2l}1_{[0,s]}^{\otimes 2l}(\cdot)\bigr) - E\, I_{2k}\bigl(h_2^{\otimes 2k}1_{[0,s]}^{\otimes 2k}(\cdot)\bigr)\, I_{2l}\bigl(h_2^{\otimes 2l}1_{[0,s]}^{\otimes 2l}(\cdot)\bigr)\Bigr]\Bigr) =: \bigl(1 + I_2(h_1^{\otimes 2})\bigr)(1 + A). \]
Therefore we obtain that $\int_0^1 u_s^2\, ds = 1$ almost surely if and only if $\bigl(1 + I_2(h_1^{\otimes 2})\bigr)(1 + A) = 1$ almost surely, which implies that $I_2(h_1^{\otimes 2})(1 + A) + A = 0$ a.s., and this is impossible because $I_2(h_1^{\otimes 2})$ and $A$ are independent.
We obtain an interesting consequence of the above result.
Corollary 1 Let $X$ be given by (10). Then the bracket of the martingale $M^X$, where $M^X_t = E(X\mid\mathcal{F}_t)$, is not bounded by 1.
Proof: It is a consequence of Proposition 3 and of Theorem 2.
Remark 2 Proposition 3 provides an interesting example of a Brownian motion β and of a stopping time T for its filtration such that β T is standard normal and T is not almost surely equal to 1.
Let us make a short summary of the results in the first part of our paper: if X is a standard normal random variable and the bracket of M X is bounded a.s. by 1 then X can be expressed almost surely as a Wiener integral with respect to a Brownian motion on the same (or possibly extended) probability space. The Brownian is obtained via DDS theorem. The property is still true when the bracket is bounded and T and β T are independent random variables. If the bracket of M X is not bounded, then X is not necessarily equal with β 1 , β being its associated DDS Brownian motion. This is the case of the variable (10).
Nevertheless, we will see that after a suitable extension of the probability space, any standard normal random variable can be written as the value at time 1 of a Brownian motion constructed on this extended probability space.
Proposition 4 Let $X_1$ be a standard normal random variable on $(\Omega_1,\mathcal{F}_1,P_1)$ and for every $i\ge 2$ let $(\Omega_i,\mathcal{F}_i,P_i,X_i)$ be independent copies of $(\Omega_1,\mathcal{F}_1,P_1,X_1)$. Let $(\Omega_0,\mathcal{F}_0,P_0)$ be the product probability space. On $\Omega_0$ define, for every $t\in[0,1]$,
\[ W^0_t = \sum_{k\ge 1} f_k(t)\, X_k \]
where $(f_k)_{k\ge 1}$ are orthonormal elements of $L^2([0,1])$. Then $W^0$ is a Brownian motion on $\Omega_0$ and
\[ X_1 = \int_0^1 \Bigl(\int_u^1 f_1(s)\, ds\Bigr)\, dW^0_u \quad\text{a.s.} \]
Proof:
The fact that $W^0$ is a Brownian motion is a consequence of the Karhunen-Loève theorem. Also, note that
\[ X_1 = \langle W^0, f_1\rangle = \int_0^1 W^0_s\, f_1(s)\, ds \]
and the conclusion is obtained by interchanging the order of integration.
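A numerical sketch of this construction is given below. It is our own illustration, and it uses one concrete normalization: the classical Karhunen-Loève eigenfunctions $\varphi_k(t) = \sqrt{2}\sin((k-\tfrac12)\pi t)$ weighted by $\sqrt{\lambda_k}$, $\lambda_k = ((k-\tfrac12)\pi)^{-2}$, so that the truncated series approximates a Brownian motion and $X_1$ is recovered from $W^0$ by projection on $\varphi_1$; the truncation level and the quadrature rule are arbitrary choices.

```python
# Karhunen-Loeve construction of a Brownian motion from i.i.d. N(0,1) variables,
# and recovery of X_1 from the constructed path by projection on the first mode.
import numpy as np

rng = np.random.default_rng(3)
K, n_t = 2_000, 1_000
t = np.linspace(0.0, 1.0, n_t + 1)
dt = t[1] - t[0]

X = rng.normal(size=K)                                      # independent N(0,1) copies
k = np.arange(1, K + 1)
lam = 1.0 / ((k - 0.5) * np.pi) ** 2                        # KL eigenvalues
phi = np.sqrt(2.0) * np.sin(np.outer(t, (k - 0.5) * np.pi))  # orthonormal eigenfunctions

W0 = phi @ (np.sqrt(lam) * X)                               # truncated KL expansion of a BM

# project W0 on phi_1 (simple Riemann sum) and rescale to recover X_1
X1_rec = np.sum(W0 * phi[:, 0]) * dt / np.sqrt(lam[0])
print("X_1 =", X[0], "  recovered from W0 ≈", X1_rec)
```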
Remark 3 Let us denote by $\mathcal{F}^0_t$ the natural filtration of $W^0$. It also holds that
\[ E\bigl(X_1\mid\mathcal{F}^0_t\bigr) = \int_0^t g_u\, dW^0_u \quad\text{where}\quad g_u = \int_u^1 f_1(s)\, ds. \]
It is obvious that the martingale $E\bigl(X_1\mid\mathcal{F}^0_t\bigr)$ is a Brownian motion via the DDS theorem and $X_1$ can be expressed as a Brownian motion at time 1.
Consequences
We think that the consequences of this result are multiple. We will prove here first that a random variable X which lives in a finite sum of Wiener chaoses cannot be Gaussian if the bracket of M X is bounded by 1. Again we fix a Wiener process (W t ) t∈[0,1] on Ω.
Let us start with the following lemma.
Lemma 1 Fix $N\ge 1$. Let $g\in L^2([0,1]^{N+1})$ be symmetric in its first $N$ variables and such that $\int_0^1 ds\, g(\cdot,s)\,\tilde\otimes\, g(\cdot,s) = 0$ almost everywhere on $[0,1]^{2N}$. Then for every $k = 1,\dots,N-1$ it holds that $\int_0^1 ds\, g(\cdot,s)\otimes_k g(\cdot,s) = 0$ a.e. on $[0,1]^{2N-2k}$.
Proof: Without loss of generality we can assume that $g$ vanishes on the diagonals $(t_i = t_j)$ of $[0,1]^{N+1}$. This is possible from the construction of multiple stochastic integrals. From the hypothesis, the function
\[ (t_1,\dots,t_{2N}) \mapsto \frac{1}{(2N)!}\sum_{\sigma\in S_{2N}} \int_0^1 ds\, g\bigl(t_{\sigma(1)},\dots,t_{\sigma(N)},s\bigr)\, g\bigl(t_{\sigma(N+1)},\dots,t_{\sigma(2N)},s\bigr) \]
vanishes almost everywhere on $[0,1]^{2N}$. Put $t_{2N-1} = t_{2N} = x\in[0,1]$. Then for every $x$, the function
\[ (t_1,\dots,t_{2N-2}) \mapsto \sum_{\sigma\in S_{2N-2}} \int_0^1 ds\, g\bigl(t_{\sigma(1)},\dots,t_{\sigma(N-1)},x,s\bigr)\, g\bigl(t_{\sigma(N)},\dots,t_{\sigma(2N-2)},x,s\bigr) \]
is zero a.e. on $[0,1]^{2N-2}$ and, integrating with respect to $x$, we obtain that $\int_0^1 ds\, g(\cdot,s)\otimes_1 g(\cdot,s) = 0$ a.e. on $[0,1]^{2N-2}$. By repeating the procedure we obtain the conclusion.
Let us also recall the following result from [START_REF] Nualart | Central limit theorem for multiple stochastic integrals and Malliavin calculus[END_REF].
Proposition 5 Suppose that $F = I_N(f_N)$ with $f_N\in L^2([0,1]^N)$ symmetric and $N\ge 2$ fixed.
Then the distribution of F cannot be normal.
We are going to prove the same property for variables that can be expanded into a finite sum of multiple integrals.
Theorem 3 Fix $N\ge 1$ and let $X$ be a centered random variable such that $X = \sum_{n=1}^{N+1} I_n(f_n)$, where the $f_n\in L^2([0,1]^n)$ are symmetric functions. Suppose that the bracket of the martingale $M^X$ defined by (1) is bounded almost surely by 1. Then the law of $X$ cannot be normal.
Proof: We will assume that $EX^2 = 1$. Suppose that $X$ is standard normal. We can write $X$ as $X = \int_0^1 u_s\, dW_s$ where $u_s = \sum_{n=1}^N I_n\bigl(g_n(\cdot,s)\bigr)$. As a consequence of Proposition 2,
\[ \int_0^1 u_s^2\, ds = 1 \quad\text{a.s.} \]
But from the product formula (4),
\[ \int_0^1 u_s^2\, ds = \int_0^1 ds\, \Bigl(\sum_{n=1}^N I_n\bigl(g_n(\cdot,s)\bigr)\Bigr)^2 = \int_0^1 ds \sum_{m,n=1}^N \sum_{k=0}^{m\wedge n} k!\, C_n^k C_m^k\, I_{m+n-2k}\bigl(g_n(\cdot,s)\otimes_k g_m(\cdot,s)\bigr). \]
The idea is to benefit from the fact that the highest order chaos, which appears only once in the above expression, vanishes. Let us look at the chaos of order $2N$ in the above decomposition. As we said, it appears only when we multiply $I_N$ by $I_N$ and consists in the random variable $I_{2N}\bigl(\int_0^1 g_N(\cdot,s)\otimes g_N(\cdot,s)\, ds\bigr)$. The isometry of multiple integrals (3) implies that $\int_0^1 g_N(\cdot,s)\,\tilde\otimes\, g_N(\cdot,s)\, ds = 0$ a.e. on $[0,1]^{2N}$ and by Lemma 1, for every $k = 1,\dots,N-1$,
\[ \int_0^1 g_N(\cdot,s)\otimes_k g_N(\cdot,s)\, ds = 0 \quad\text{a.e. on } [0,1]^{2N-2k}. \tag{12} \]
Consider now the random variable $Y := I_{N+1}(f_{N+1})$. It can be written as $Y = \int_0^1 I_N\bigl(g_N(\cdot,s)\bigr)\, dW_s$ and by the DDS theorem, $Y = \beta^Y_{\int_0^1 ds\,(I_N(g_N(\cdot,s)))^2}$. The multiplication formula together with (12) shows that $\int_0^1 ds\,\bigl(I_N(g_N(\cdot,s))\bigr)^2$ is deterministic and as a consequence $Y$ is Gaussian. This is in contradiction with Proposition 5.
The conclusion of the above theorem still holds if $M^X$ satisfies (9) and $\langle M^X\rangle_1$ is independent of $\beta_{\langle M^X\rangle_1}$.
Finally let us make a connection with several recent results obtained via Stein's method and Malliavin calculus. Recall that the Ornstein-Uhlenbeck operator is defined as $LF = -\sum_{n\ge 0} n\, I_n(f_n)$ if $F$ is given by (2) (see [START_REF] Karatzas | Brownian motion and stochastic calculus[END_REF]). There exists a connection between $\delta$, $D$ and $L$ in the sense that a random variable $F$ belongs to the domain of $L$ if and only if $F\in\mathbb{D}^{1,2}$ and $DF\in\mathrm{Dom}(\delta)$, and then $\delta DF = -LF$. Let us denote by $D$ the Malliavin derivative with respect to $W$ and let, for any $X\in\mathbb{D}^{1,2}$,
\[ G_X = \bigl\langle DX,\ D(-L)^{-1}X \bigr\rangle. \]
The following theorem is a collection of results in several recent papers.
Theorem 4 Let $X$ be a random variable in the space $\mathbb{D}^{1,2}$. Then the following assertions are equivalent.
1. $X$ is a standard normal random variable.
2. For every $t\in\mathbb{R}$, one has $E\bigl(e^{itX}(1 - G_X)\bigr) = 0$.
3. $E\bigl((1 - G_X)\mid X\bigr) = 0$.
4. For every $z\in\mathbb{R}$, $E\bigl(f'_z(X)(1 - G_X)\bigr) = 0$, where $f_z$ is the solution of the Stein equation (see [START_REF] Nourdin | Stein's method on Wiener chaos[END_REF]).
Proof: We will show that 1. ⇒ 2. ⇒ 3. ⇒ 4. ⇒ 1. First suppose that X ∼ N (0, 1). Then
\[ E\bigl(e^{itX}(1 - G_X)\bigr) = E(e^{itX}) - \frac{1}{it}\, E\bigl\langle De^{itX},\, D(-L)^{-1}X\bigr\rangle = E(e^{itX}) - \frac{1}{it}\, E\bigl(X e^{itX}\bigr) = \varphi_X(t) + \frac{1}{t}\,\varphi'_X(t) = 0. \]
Let us prove now the implication 2. ⇒ 3. It has also been proven in [START_REF] Nourdin | Density formula and concentration inequalities with Malliavin calculus[END_REF], Corollary 3.4. Set $F = 1 - G_X$. The random variable $E(F\mid X)$ is the Radon-Nikodym derivative with respect to $P$ of the measure $Q(A) = E(F 1_A)$, $A\in\sigma(X)$. Relation 2. means that $E\bigl(e^{itX} E(F\mid X)\bigr) = E_Q(e^{itX}) = 0$ and consequently $Q(A) = E(F 1_A) = 0$ for any $A\in\sigma(X)$. In other words, $E(F\mid X) = 0$. The implication 3. ⇒ 4. is trivial and the implication 4. ⇒ 1. is a consequence of a result in [START_REF] Nourdin | Stein's method on Wiener chaos[END_REF].
As we said, this property can be easily understood and checked if X is in the first Wiener chaos with respect to W . Indeed, if X = W (f ) with f L 2 ([0,1]) = 1 then DX = D(-L) -1 X = f and clearly G X = 1. There is no need to compute the conditional expectation given X, which is in practice very difficult to be computed. Let us consider now the case of the random variable Y = 1 0 sign(W s )dW s . The chaos expansion of this variable is known. But Y is not even differentiable in the Malliavin sense so it is not possible to check the conditions from Theorem 4. Another example is related to the Bessel process (see the random variable 8). Here again the chaos expansion of X can be obtained (see e.g. [START_REF] Hu | Some processes associated with fractional Bessel processes[END_REF]) but is it impossible to compute the conditional expectation given X.
But on the other hand, for both variables treated above there is another explanation of their normality, which comes from Lévy's characterization theorem. Another explanation can be obtained from the results in Section 2. Note that these two examples are random variables such that the bracket of $M^X$ is bounded a.s.
Corollary 2 Let $X$ be an integrable random variable on $(\Omega,\mathcal{F},P)$. Then $X$ is a standard normal random variable if and only if there exists a Brownian motion $(\beta_t)_{t\ge 0}$ on an extension of $\Omega$ such that
\[ \bigl\langle D^\beta X,\ D^\beta(-L^\beta)^{-1}X \bigr\rangle = 1. \tag{13} \]
Proof: Assume that $X\sim N(0,1)$. Then by Proposition 4, $X = \beta_1$ where $\beta$ is a Brownian motion on an extended probability space. Clearly (13) holds. Suppose now that there exists a Brownian motion $\beta$ on $(\Omega,\mathcal{F},P)$ such that (13) holds. Then for any continuous and piecewise differentiable function $f$ with $E f'(Z) < \infty$ we have
\[ E\bigl(f'(X) - f(X)X\bigr) = E\Bigl(f'(X) - f'(X)\bigl\langle D^\beta X, D^\beta(-L^\beta)^{-1}X\bigr\rangle\Bigr) = E\Bigl(f'(X)\bigl(1 - \bigl\langle D^\beta X, D^\beta(-L^\beta)^{-1}X\bigr\rangle\bigr)\Bigr) = 0 \]
and this implies that $X\sim N(0,1)$ (see [START_REF] Nourdin | Stein's method on Wiener chaos[END_REF], Lemma 1.2).
Acknowledgement: Proposition 4 was introduced in the paper after a discussion with Professor Giovanni Peccati. We would like to thank him for this. We are also grateful to Professor P. J. Fitzsimmons for detecting a mistake in the first version of this work. |
00289669 | en | ["math.math-ds", "math.math-gt"] | 2024/03/04 16:41:22 | 2009 | https://hal.science/hal-00289669v3/file/FixPFreeFinal-versionHal.pdf | Jérôme Los
Infinite sequence of fixed point free pseudo-Anosov homeomorphisms
2000 Mathematics Subject Classification. Primary: 37E30. Secondary: 32G15, 57R30, 37B10. Keywords: pseudo-Anosov homeomorphisms, Rauzy-Veech induction, fixed points, train-track maps.
We construct an infinite sequence of pseudo-Anosov homeomorphisms without fixed points and leaving invariant a sequence of orientable measured foliations on the same topological surface and the same stratum of the space of Abelian differentials. The existence of such a sequence shows that not all pseudo-Anosov homeomorphisms fixing orientable measured foliations can be obtained by the Rauzy-Veech induction strategy.
Introduction
This work started after a discussion between A.Avila and P.Hubert. The question was : "Is there a pseudo-Anosov homeomorphism fixing an orientable measured foliation and without fixed points of negative index?"
It turns out that one such example already existed in the literature, due to P. Arnoux and J. C. Yoccoz, [A.Y], [A]. It was constructed for a very different purpose and makes it possible to build some special examples on other surfaces, but no general technique was available to build families of examples, and in particular families in the same stratum of the space of measured foliations.
Existence or non existence of fixed points is an interesting question in it's own right but why this particular setting? 1 One motivation comes from interval exchange transformations (IET) that naturally define topological surfaces S together with orientable measured foliations. It also defines a natural transformation within the set of IET, the so-called Rauzy-Veech induction that has been widely studied over the years, in particular because it gives a natural relationship between the combinatorics of IET and some orbits of the Teichmüller flow (see for instance [Ra], [Ve]). Some of the IET are called self-similar and correspond to periodic loops of the Rauzy-Veech induction or to periodic orbits of the Teichmüller flow in moduli space. They define elements in the mapping class group M(S) of the surface that are very often pseudo-Anosov, according to Thurston's classification theorem (see [Th], [FLP] for definitions). All these pseudo-Anosov homeomorphisms obtained from the Rauzy-Veech induction strategy share one property, in addition to fixing orientable measured foliations: they all have a fixed point and a fixed separatrix starting at this fixed point. In other words they all have a fixed point with negative index. The above question is thus rephrased as: Is there some pseudo-Anosov homeomorphisms with orientable invariant measured foliations that do not arise from the Rauzy-Veech induction strategy? Our first result gives a positive answer to that question. This was not a surprise because of Arnoux-Yoccoz's example.
The interest is more on the general construction, based on a tool called the train-track automata that has not been defined in full generality in the literature even though it exists in several forms and since a long time, see for instance, [START_REF] Mosher | The classification of pseudo-Anosov's , Low-dimensional topology and Kleinian groups[END_REF], [PaPe] and more recently in [KLS]. These automata allow to construct, in principle, all pseudo-Anosov homeomorphisms fixing a measured foliation on a given stratum and are closely related to the train-track complex that U.Hamenstädt [Ha] has recently studied. These train-track automata will not be needed here, we will instead use a more elementary notion of splitting sequence morphisms that are obtained from the R, L-splitting sequences described by Papadopoulos and Penner in [PaPe] by an algebraic adaptation. In [PaPe] the aim was to describe measured foliations combinatorially. Here it is a tool for explicitly constructing elements in the mapping class group by composition of elementary train-track morphisms. Observe that the Rauzy-Veech induction can be interpreted as a special class of splitting sequences on a special class of traintracks. Constructing just one example of pseudo-Anosov homeomorphisms without fixed point, using splitting sequences, was a surprise and it became evident that such examples should be very rare on a given stratum. The natural conjecture was that at most finitely many such examples could exist on a given stratum and yet we obtain : Theorem 1.1 There exists infinite sequences of pseudo-Anosov homeomorphisms with orientable invariant measured foliations on the same stratum of the space of measured foliations and without fixed points of negative index.
The existence of an infinite sequence of pseudo-Anosov homeomorphisms without fixed points of negative index is counter-intuitive at several levels. The general belief, prior to these examples, was that on a given stratum at most finitely many such pseudo-Anosov homeomorphisms could exist. This would have been a justification for using the Rauzy-Veech induction strategy to count periodic orbits of the Teichmüller flow in moduli space. This counting problem is very delicate and several asymptotic results have been obtained recently, in particular by Eskin-Mirzakhani [EM] and Bufetov [Bu]. The examples obtained in this paper indicate that another strategy should be developed for a more precise counting. The non-finiteness is also counter-intuitive at the level of the growth rate. Indeed, for a sequence of pseudo-Anosov homeomorphisms whose growth rate goes to infinity it was expected that the number of fixed points should grow; these examples show this is not the case.
The paper is organised as follows. First the basic properties of train-tracks and train-track maps are reviewed in section 2, in particular with respect to the properties that we want to control: the fixed points and the orientability. Then the notion of splitting sequence morphisms will be discussed in section 3. Finally the particular sequence leading to the examples will be described and analysed.
It is a great pleasure to thank Pascal Hubert and Erwan Lanneau for their support and questions during this work. I would also like to thank Lee Mosher, who made a constructive comment on a previous version of this paper.
2 Train track maps and dynamics.
In this section we review some basic properties of train-track maps that represent pseudo-Anosov homeomorphisms, in particular the relationship between the fixed points of the pseudo-Anosov homeomorphism on the surface and those of the train-track maps. For more details about train-tracks, laminations and foliations the reader is referred to [PH] and [FLP].
Train-track maps.
A train-track on a surface S is a pair (τ, h), where:
• τ is a graph, given by its collection of edges E(τ) and vertices V(τ), together with a smooth structure, that is a partition, at each vertex v ∈ V(τ), of the set of incident edges St(v) = In(v) ∪ Out(v), each subset, called a side, being non empty (see Figure 1).
• h : τ → S is an embedding that preserves the smooth structure and such that S -h(τ ) is a finite union of discs with more than 3 cusps on its boundary and annuli (if S has boundaries and/or punctures) with more than one cusp on one boundary, the other boundary (or puncture) being a boundary component (or a puncture) of the surface S. A cusp here is given by a pair of adjacent edges, with respect to the embedding h, in the same side In(v) or Out(v) at a vertex.
• A regular neighborhood N(h(τ )) on S is a subsurface (with boundaries) and h defines an embedding ĥ : τ → N(h(τ )). The neighborhood N(h(τ )) admits a retraction ρ : N(h(τ )) → τ that is a homotopy inverse of ĥ and {ρ -1 (t); t ∈ τ } is a foliation of N(h(τ )) called the tie foliation. A leaf of the tie foliation that maps under ρ to a vertex is called a singular leaf. • A map ϕ : τ → τ ′ between two train-tracks (τ, h) and (τ ′ , h ′ ) on S that satisfies : (i) ϕ is cellular, i.e. maps vertices to vertices and edges to edge paths, (ii) ϕ preserves the smooth structure, is called a train-track morphism. It is a train-track map if τ and τ ′ are isomorphic (as train-tracks). The last property (ii) means the following: -If ϕ(v) = w, (v, w) ∈ V (τ ) then the edges issued from one side (In(v) or Out(v)) are mapped to edge paths starting at w by edges on one side.
-If an edge path ϕ(e) crosses a vertex v then it enters the neighborhood of v by one side and exits by the other side.
• If f : S → S is a homeomorphism, then a train-track map ϕ : τ → τ is a representative of [f ] ∈ M(S) if the following diagram commutes, up to isotopy: τ h -→ S ϕ ↓ ↓ f τ h -→ S
In addition, the image graph f ∘ h(τ) is supposed to be embedded into N(h(τ)) and transverse to the tie foliation. Such a triple (τ, h, ϕ) is called a train-track representative of the mapping class element [f] ∈ M(S). For train-track maps there is a natural incidence matrix M(τ, ϕ) whose entries are labelled by the edge set E(τ), and M(τ, ϕ)_(e,e′) is the number of occurrences of e′^{±1} in the edge path ϕ(e). This definition requires the edges to be oriented but the orientation is arbitrary. This matrix does not depend upon the embedding h. The following proposition is due to Thurston and the proof to Papadopoulos and Penner [PaPe].
Proposition 2.1 Any pseudo-Anosov homeomorphism admits a train-track representative whose incidence matrix is irreducible and non periodic.
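In concrete examples the two conditions of Proposition 2.1 are checked directly from the incidence matrix. The sketch below is our own illustration (the three-edge map is a toy example, not one of the examples of Section 4, and it is not claimed to satisfy the smooth-structure conditions of a genuine train-track map): it builds M(τ, ϕ) from the edge-path description, tests primitivity (irreducible and non periodic), computes the Perron-Frobenius eigenvalue, which for a pseudo-Anosov representative is the dilatation, and reads off the diagonal entries, which by Lemma 2.4 below detect the edges carrying fixed points of ϕ.

```python
# Incidence matrix, primitivity test and Perron-Frobenius eigenvalue
# for a (toy) train-track map given by its edge paths.
import numpy as np

edges = ["a", "b", "c"]
# lower case = edge, upper case = reversed edge; a hypothetical map, for illustration only
phi = {"a": "bc", "b": "aCb", "c": "ab"}

n = len(edges)
index = {e: i for i, e in enumerate(edges)}
M = np.zeros((n, n), dtype=int)
for e, word in phi.items():
    for letter in word:
        M[index[e], index[letter.lower()]] += 1   # count occurrences of e'^{+-1} in phi(e)

# a nonnegative matrix is primitive iff a sufficiently high power is positive
primitive = bool(np.all(np.linalg.matrix_power(M, n * n) > 0))
dilatation = max(abs(np.linalg.eigvals(M)))       # Perron-Frobenius eigenvalue
fixed_edges = [e for e in edges if M[index[e], index[e]] > 0]   # cf. Lemma 2.4

print("incidence matrix:\n", M)
print("primitive:", primitive, "  dilatation ≈", dilatation)
print("edges carrying a fixed point of phi:", fixed_edges)
```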
There is a converse direction result that requires an additional property to be checked, called the "singularity type condition". The first formulation of this condition was given in the non published preprint version of the book [CB] and several equivalent conditions are described in the non published monograph by L.Mosher [START_REF] Mosher | Train-track expansions of measured foliations[END_REF]. The Casson-Bleiler version has been used in several published papers, for instance in [Ba]. The condition can be described as follows: The singularity type of a measured foliation (F , µ) with m singularities {s 1 , ..., s m } is a m-tuple of integers [k 1 , ..., k m ] F , where k i is the number of separatrices at s i . A train-track (τ, h) on S has singularity type [k ′ 1 , ..., k ′ r ] τ if the components of S -h(τ ) are r discs {∆ 1 , ...∆ r } and ∆ j has k ′ j cusps on its boundary. This is the definition for closed surfaces, the adaptation for punctured surfaces is obvious. If a measured foliation (F , µ) is carried by a train-track (τ, h) (see [PH] or [START_REF] Mosher | Train-track expansions of measured foliations[END_REF]) then we say that (F , µ) and (τ, h)
have identical singularity type if [k 1 , ..., k m ] F = [k ′ 1 , ..., k ′ r ] τ as non ordered (m = r)-tuples. A reformulation of the Casson-Bleiler result is the following: Proposition 2.2 If a train-track representative (τ, h, ϕ) of [f ] ∈ M(S)
has an irreducible and non periodic incidence matrix M(τ, ϕ) and if the measured foliation
(F , µ) (τ,h,ϕ) that is canonically obtained from (τ, h, ϕ) have identical singularity type, then [f ] is pseudo- Anosov.
This result will be used in the last section to check our explicit examples. For completeness let us describe briefly the construction of the measured foliation (F , µ) (τ,h,ϕ) and one method that enables to check the singularity type condition. A train-track representative (τ, h, ϕ) with an irreducible incidence matrix defines a canonical rectangle partition and a measured foliation of the subsurface N(h(τ )) via the classical "highway construction" of Thurston (see [PH] for details). The set of transverse measures (widths) on the rectangles is given by the positive eigenvector of the largest eigenvalue of the transpose matrix t M(τ, ϕ). For simplicity we assume that the ambient surface S is closed. The above construction defines a measured foliation of the subsurface N(h(τ )). In order to extend this measured foliation to S we need to apply a "zipping" construction that is essentially the "Veech zippered rectangles" operation where the zipping parameters (lengths) are computed directly from the map ϕ. In this closed surface case all the components of S -N(h(τ )) are discs ∆ i bounded by curves ∂ i with k ′ i cusps. These boundary curves are represented by cyclic edge paths in τ . Since (τ, h, ϕ) is a train-track representative of a surface homeomorphism then ϕ |∂ i is a cyclic edge path that is homotopic to some boundary curve ∂ j . The boundary components ∂ i are thus homotopicaly permuted under ϕ and the map ϕ |∂ i has some periodic points along some sides of the cusped closed curve ∂ i , called boundary periodic points. We will see in our explicit examples of section 4 that these periodic points are easy to find. A simple test for the singularity type condition is given by : Proposition 2.3 If the map ϕ of the train-track representative (τ, h, ϕ) has exactly one boundary periodic point on each side of the boundary curves ∂ i then the singularity type condition is satisfied by (τ, h, ϕ).
Proof. This single boundary periodic point condition implies that on each ∂ i with k ′ i cusps there are exactly k ′ i boundary periodic points. The unique path that connects two consecutive boundary periodic points along ∂ i has to cross a single cusp. We observe that it is an irreducible periodic Nielsen path (see [J] and [BH]). Each boundary curve ∂ i is thus a concatenation of k ′ i periodic irreducible Nielsen paths. The zipping operation mentioned above is now easy to describe, this is just a (sequence of) folding operations (see [BH]), from the cusp to the boundary periodic points along a periodic Nielsen path. In this case with only one boundary periodic point on each side of ∂ i the zipping operation defines a measured foliation (F , µ) (τ,h,ϕ) that is canonically defined from (τ, h, ϕ). For this measured foliation the singularity type is naturally identified with the one of (τ, h) and the singularity type condition is satisfied. In particular the beginnings of the separatrices at the singularities are in bijection with the irreducible Nielsen paths. In addition the permutation of these separatrices under the pseudo-Anosov homeomorphism is given by the computable permutation of the Nielsen paths under ϕ .
Dynamics of train-track maps, orientability.
The relationship between the dynamics of a train-track map representing a pseudo-Anosov mapping class and the dynamics of the pseudo-Anosov homeomorphism is rather well understood (see for instance [Bo] or [Lo]). In this paragraph we only review the fixed point question that is much easier.
Lemma 2.4 A train-track map with irreducible incidence matrix has a fixed point on an edge e ∈ E(τ ) if and only if the edge path ϕ(e) contains e ±1 (i.e. M(τ, ϕ) (e,e) ≠ 0).
Proof. The first observation is that condition (i) of a train-track map implies the existence of a Markov partition for ϕ. The classical combinatorial dynamics of maps with a Markov partition (see [ALM] for example) immediately implies the result. The next step is to relate the fixed points of the train-track map representing a pseudo-Anosov mapping class with the fixed points of the pseudo-Anosov homeomorphism.
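To make Lemma 2.4 concrete, here is a minimal Python sketch, added for illustration only: a train-track map is stored as a map from edge labels to edge paths (a capital letter standing for the reversed edge), the incidence matrix is obtained by counting crossings, and an edge carries a fixed point exactly when the corresponding diagonal entry is non-zero. The edge map phi below is hypothetical.

edges = ['a', 'b', 'c']
# Hypothetical train-track map: each edge is sent to an edge path;
# 'B' stands for the edge b crossed with the reversed orientation.
phi = {'a': ['b', 'c'], 'b': ['a', 'B'], 'c': ['c', 'a']}

# Incidence matrix: M[e][f] counts occurrences of f^{+1} or f^{-1} in phi(e).
M = {e: {f: sum(1 for x in phi[e] if x.lower() == f) for f in edges} for e in edges}

# Lemma 2.4: phi has a fixed point on e iff the diagonal entry M[e][e] is non-zero.
for e in edges:
    print(e, "diagonal entry:", M[e][e], "-> fixed point:", M[e][e] != 0)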
Lemma 2.5 If f : S → S is a pseudo-Anosov homeomorphism that is represented by a train-track map (τ, h, ϕ) with irreducible incidence matrix then for each fixed point x of f that is not a singular point of the invariant foliation there is a unique fixed point y of ϕ so that ρ(x) = y. The possible missing fixed points of f with respect to ϕ are in one to one correspondence with the boundary components of N(h(τ )) bounding discs on S, that are invariant up to homotopy under ϕ. In addition, each such boundary component is a union of periodic Nielsen paths between boundary periodic points and the dynamics of these Nielsen paths (under ϕ) reflects the dynamics of f restricted to the separatrices at the fixed singular points.
Proof. A sketch of proof for the first statement goes as follows. The construction, described above, of the measured foliation (F , µ) (τ,h,ϕ) from (τ, h, ϕ) gives in fact a Markov partition for the pseudo-Anosov homeomorphism f . If a fixed point of f is not a singularity then it belongs to the interior of one rectangle of the constructed partition. The retraction ρ then maps the stable segment that contains the fixed point, within the corresponding rectangle, to a point on an edge of τ . This point is fixed for ϕ by the commutative diagram of the definition. The proof of the last statement relating boundary periodic points and periodic Nielsen paths for ϕ with singular fixed points for f is in fact contained in the proof of Proposition 2.3; more details can be found for instance in [FL].
The notion of an orientable foliation or measured foliation is classical. We want to see this property at the train track level. The notion of being carried by a train track for a measured foliation (or a measured lamination) is also classical (see [PH]). If a measured foliation (F , µ) is carried by τ then each leaf Λ of F projects, under the retraction ρ, to an edge path that is compatible with the smooth structure in the sense of property (ii) of a train-track map. If (F , µ) is orientable then each leaf has a natural orientation and defines an orientation for the edge path ρ(Λ). The oriented edge paths ρ(Λ) are compatible with each other and induce an orientation for the edges of τ . These orientations are compatible with the smooth structure, i.e. at each vertex the edges are oriented from the In side to the Out side (or the opposite). If a train-track admits a global orientation that satisfies this local compatibility property at each vertex it is called orientable and we have the obvious:
Lemma 2.6 If a train-track (τ, h) on S is orientable then every measured foliation carried by (τ, h) is orientable.
3 Splitting morphism sequences.
In this section we introduce a general construction that produces train-track maps and the corresponding mapping class elements fixing measured foliations in a given stratum of the space of measured foliations. The splitting operation is as old as the measured train-tracks, going back to the original paper by Williams [Wi]. The first result that relates pseudo-Anosov homeomorphisms or more precisely measured foliations invariant under a pseudo-Anosov homeomorphism to combinatorial splitting sequences is probably due to Papadopoulos and Penner [PaPe]. Splitting sequences have been used recently by Hamenstädt [Ha] as a main tool to study for instance the curve complex. In this section we introduce the notion of splitting morphisms.
Let ϕ : τ → τ be a train-track map representing f : S → S; then h • ϕ : τ → N(h(τ )) is an embedding that is isotopic to f • h, and we assume that f • h(τ ) is embedded in N(h(τ )) transversely to the tie foliation. Let us consider a small regular neighborhood N ′ (f • h(τ )), small enough to be embedded in N(h(τ )). This neighborhood is homeomorphic to N(h(τ )) by definition and thus N(h(τ )) -N ′ (f • h(τ )) is a union of annuli with the same number of cusps on each boundary component. For each cusp
C ′ i of N ′ (f • h(τ )) there exists a cusp C i of N(h(τ )) and a path γ i in N(h(τ )) -N ′ (f • h(τ )) that connects C ′ i to C i
and is transverse to the tie foliation. Such a path is called a splitting path. The following lemma first appeared in [PaPe].
Lemma 3.1 Every train-track map representing an element [f ] in the mapping class group is obtained by a finite sequence of "elementary" splitting operations followed by a relabeling homeomorphism.
Proof. If we cut N(h(τ )) along the finite collection of finite splitting paths γ i we obtain a new surface N 1 (τ, h, ϕ) that is obviously homeomorphic with N(h(τ )). This observation implies the result. Our goal is now to make the notion of elementary splitting operation explicit, as well as the relabeling homeomorphism. Then we shall use these splitting operations for the reverse purpose, namely for constructing train-track maps and the corresponding mapping class elements.
In their paper Papadopoulos and Penner [PaPe] were interested in splitting sequences at "generic" train-tracks, i.e. with only valency 3 vertices. For these sequences a very simple coding with 2 symbols is available, the so-called R,L sequences. More recently Hamenstädt added the constraint that (τ, h) represents foliations in the principal stratum, i.e. with only triangles for the complementary components. We will not use these restrictions here. In particular for orientable foliations we need to allow foliations to belong to more degenerate strata. We will also encourage the train-tracks to have higher valency vertices, in order to simplify the definition of the splitting operations. We want the elementary operations to be train-track morphisms, given by cellular maps between two different train-tracks, that we call splitting morphisms; to this end we need the train-tracks to be labelled.
Observe that splitting operations already appeared for interval exchange transformations: the Rauzy-Veech induction strategy is nothing but a very special splitting sequence on a very special class of train-tracks.
A labeling is a map ǫ : E(τ ) → A, where A is a finite alphabet. The map ǫ just names each edge. A splitting operation is defined for a given N(h(τ )) at a specific cusp C i . We want to fix the alphabet once and for all and, in particular, we want the number of edges to remain constant. We also want to avoid changing from one stratum to another. To this end we impose the following constraints:
(a) τ has at least one vertex of valency larger than 3.
(b) A splitting operation is not applied at a vertex of valency 3.
(c) At a vertex with more than 2 edges on one of its sides, only the splittings at the extreme cusps and through the extreme edges are allowed.
(d) A splitting path does not connect a cusp of N(h(τ )) to another cusp of N(h(τ )).
The notion of an extreme cusp or edge is clear from the context. A splitting operation is a cutting along a path γ that connects a cusp C i of N(h(τ )) to a point in the interior of a singular leaf of the tie foliation (condition (d)). A splitting is elementary if the path γ does not intersect singular leaves except at its extreme points. The splitting operation then defines a new neighborhood N(h ′ (τ ′ )) that is embedded into N(h(τ )) and thus, via the retraction ρ, a map S C i : (τ, h) ← (τ ′ , h ′ ) (it will be written τ ← τ ′ if no confusion is possible). Under the above conditions the following properties are obvious:
(i) Every train-track satisfying (a) admits some splitting operations.
(ii) The structure of the complementary components is preserved by (d).
(iii) The number of vertices and edges is preserved by (b), (c) and (d).
(iv) The map S C i is injective on vertices by (b) and (c).
(v) After a splitting operation the new train-track τ ′ satisfies (a).
Since the number of edges is fixed under a splitting operation we fix the alphabet A once and for all.
Let us consider a labelled train-track (τ, h, ǫ). We want to define explicitly the splitting morphism S C i , as a cellular map using the labels. To this end we give an arbitrary orientation to the edges in order to describe the images S C i (e), for every edge e ∈ E(τ ), as an edge path in τ . A particular splitting morphism at a cusp C i is a local operation that changes the train-track in a neighborhood of the vertex v where C i is based. If C i is on one side of v there are, at most, two possible splitting morphisms at C i , by condition (c). Let us consider the particular case where v has valency 4 with two edges on each side (see Figure 5); the general case is an obvious adaptation. Figure 5 shows a particular labeling and how the labeling is transformed under an elementary splitting. As observed above the transformation of the graph takes place in a neighborhood of v.
( * ) The natural convention for the labeling of τ ′ is to keep the same names as in τ for all the edges that still exist after the operation. Thus only one edge could be renamed after the operation but the alphabet is fixed and then only one name is available. This convention explains the unambiguous labeling of Figure 5. Observe that this convention also induces an orientation for the edges of the new train-track if the initial edges were oriented. In particular if τ is orientable then the new train-track τ ′ is also orientable and each edge has a well defined orientation. This convention is valid and unambiguous for any splitting morphism obtained via the conditions (a)-(d).
In the particular cases of Figure 5, i.e. with two edges (labelled c or d) on the opposite side of C i , there are two possible splittings (case I and II in Figure 5) and the two morphisms are given by the following maps :
S C i : τ ′ → τ , a → a.c and x → x for all x ≠ a (case I),
S C i : τ ′ → τ , b → b.d and x → x for all x ≠ b (case II).
Every train-track map is thus obtained as a composition of splitting morphisms {S C i 1 , • • • , S C in }, followed by a train-track automorphism α :
τ ← τ 1 ← τ 2 • • • ← τ n ≃ τ ← τ ,
where the k-th arrow is the splitting morphism S C i k and the last one is the relabeling α.
Along the sequence of splitting morphisms, each τ j has a well defined labeling, obtained inductively from the one of τ by the convention ( * ). In particular τ n is homeomorphic with τ but the labeling is a priori different. The train-track automorphism α induces, in particular, a relabeling. A given train-track map admits usually many splitting morphisms decompositions and this is a serious practical problem.
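Because a splitting morphism only rewrites edge labels, a splitting sequence can be composed mechanically as a sequence of substitutions. The Python sketch below is an added illustration on made-up data: it composes the two elementary morphisms of Figure 5, case I (a -> a.c) and case II (b -> b.d); it does not reproduce any specific decomposition used later in the paper.

# A splitting morphism is a substitution on oriented edge labels:
# lower case = positive orientation, upper case = reversed edge.
def apply(subst, word):
    out = []
    for x in word:
        if x.islower():
            out.extend(subst.get(x, [x]))
        else:                      # reversed edge: reverse the image path
            img = subst.get(x.lower(), [x.lower()])
            out.extend(y.swapcase() for y in reversed(img))
    return out

def compose(outer, inner, alphabet):
    """Return the substitution sending each letter x to outer(inner(x))."""
    return {x: apply(outer, inner.get(x, [x])) for x in alphabet}

alphabet = ['a', 'b', 'c', 'd']
case_I  = {'a': ['a', 'c']}        # elementary splitting, Figure 5 case I
case_II = {'b': ['b', 'd']}        # elementary splitting, Figure 5 case II

composition = compose(case_II, case_I, alphabet)
print(composition)   # {'a': ['a', 'c'], 'b': ['b', 'd'], 'c': ['c'], 'd': ['d']}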
Let us now make an obvious but crucial observation for the fixed point problem. The first observation is that the train-track automorphism might not be unique; this is the case if τ admits a symmetry, i.e. a non trivial automorphism group. The splitting morphisms that are given above, together with Lemma 2.4, show that if τ ′ is homeomorphic with τ and if the relabeling is the trivial one then many fixed points exist. Therefore, to avoid fixed points for the train-track map, it is highly desirable for the train-track τ to admit a non trivial automorphism group.
Construction of examples.
4.1 A preliminary example.
From the previous sections we are looking for a train-track map ϕ : τ → τ satisfying the following properties : (i) τ is orientable, (ii) τ has a non trivial automorphism group, (iii) ϕ has no fixed points.
Of course the first two conditions are quite easy to satisfy and the real difficulty is to find a map ϕ, as a sequence of splitting morphisms, satisfying (iii). The first example is obtained by an experiment on the labelled train-track of Figure 6. On this train-track the orientability is obvious to check: at each vertex there is an orientation of the incident edges that is compatible with the In/Out partition of the smooth structure. The symmetry is of order two, which means that the graph has two parts that are exchanged by the symmetry (an involution). With the labeling of Figure 6 this involution is given by the map : {a, b, c, d, k, l} ↔ {g, h, i, j, e, f }; this notation means that the edge labelled {a} is mapped to the edge labelled {g} (and {g} to {a}) and so on. Any splitting sequence produces a sequence of maps that increases the length of the image of some edge. Our main strategy is to avoid any image to be "too long" on its symmetric part.
The difficulty for constructing examples is easily expressed by observing that the train-track τ of Figure 6 has 12 cusps and the maximal number of possible splittings at any train-track along the sequence is 24. Thus the number of possibilities at each step is of order 24 and the growth is exponential with respect to the length of the sequence.
For describing the sequence we introduce a simple notation. First notice that starting from a labelled and oriented train-track as in Figure 6 implies that after each splitting step, the new train-track has a canonical orientation and labeling by (*). A given edge x ∈ E(τ ), being oriented, has an initial vertex i(x) and a terminal vertex t(x). The splitting morphisms of Figure 5 can be described as "sliding" the terminal part of "a" over the initial part of "c" in case I or the terminal part of "b" over the initial part of "d" in case II. For this reason the two possible morphisms are denoted t(a) i(c) or t(b) i(d) ; this notation is unambiguous for a labelled and oriented train track and gives at the same time the cusp and the morphism.
The very first sequence we want to describe on the labelled train-track of Figure 6, with this notation, is the following one:
S 1 = i(b) t(l) ; t(b) i(d) ; t(k) i(f ) ; i(k) t(j) ; t(e) i(l) ; t(c) i(a) ; i(c) t(a) ; i(e) t(d) ; i(h) t(f ) ; t(h) i(j) ; t(f ) i(g) ; i(j) t(g) .
The surprise is that such a short sequence leads back to the same topological train-track.
The composition of all these elementary splittings gives the image train-track as shown by Figure 7. The figure should be understood as embedded in the regular neighborhood of the topological (i.e. without the labels) train-track τ of Figure 6.
We observe that this image train-track is indeed the same topological train-track but with a different labeling, given by Figure 8, and of course a different embedding. In addition, since the automorphism group of τ has order two, there are two ways to relabel the train-track by a train track automorphism; this is shown by the two "identifications" in Figure 8.
Figure 7: The image train track, after the sequence S 1 of splittings, in the regular neighborhood of τ .
Let us now analyse this example. First a simple Euler characteristic computation shows that the train track τ is embedded in a surface of genus 3 with 2 boundary components, that are the boundary components of the regular neighborhood of τ . With the labeling of Figure 8 these two boundary components are written as cyclic edge paths on τ given by : ∂ 1 = ijhdeaclbfgk and ∂ 2 = ikgjbdcaelhf . The notation x̄ means that the oriented edge x is crossed by the path with the opposite orientation. In order to make the map ϕ explicit we just have to choose one of the two possible labelings of Figure 8 and read the image of each edge as an edge path on τ . Using the identification II of Figure 8 for τ and reading the paths of Figure 7 on this labelled train-track gives the following cellular map ϕ 1 :
• a → k, • b → f ij, • c → kgk, • d → j, • e → jbf, • f → la, • g → a, • h → lcd, • i → e, • j → ad, • k → dhl, • l → f .
The first very good news is that no letter appears in its own image. This is enough to prove that ϕ 1 has no fixed points by Lemma 2.4. This implies that the homeomorphism that represents ϕ 1 on the genus 3 surface with 2 boundary components has no fixed points in its interior. We need now to check what happens for the boundaries. A simple computation shows that ϕ 1 (∂ i ), i = 1, 2, is a word that cyclically reduces to a cyclic permutation of ∂ i , i = 1, 2. Thus each boundary component is invariant under ϕ 1 , i.e. the image is isotopic to itself, as a closed curve, but with a non trivial rotation because of the cyclic permutation. This proves first that ϕ 1 is induced by a homeomorphism f 1 that has no fixed points on the genus 3 surface with 2 boundary components.
We observe that each boundary curve ∂ i , i = 1, 2, as an edge path in τ , has six cusps and six sides. Checking the singularity type property using the Nielsen paths method of Proposition 2.3 is unfortunately not necessary; indeed the bad news is that the incidence matrix M(τ, ϕ 1 ) fails to be irreducible. Indeed, we check that the proper subgraph τ 0 of τ that consists of the edges {a, c, d, f, g, h, j, k, l} is invariant under ϕ 1 and therefore the matrix M(τ, ϕ 1 ) has a block structure and Proposition 2.2 does not apply. The existence of this invariant subgraph proves that the homeomorphism f 1 is in fact reducible and an invariant curve is easy to exhibit.
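The two combinatorial facts used above, namely that no letter of ϕ 1 occurs in its own image and that the subgraph spanned by {a, c, d, f, g, h, j, k, l} is invariant, can be re-checked mechanically. The Python sketch below is an added illustration; it stores the images of ϕ 1 as unsigned letter strings, which is sufficient here because neither test depends on the orientations of the crossed edges.

phi1 = {'a': "k", 'b': "fij", 'c': "kgk", 'd': "j", 'e': "jbf", 'f': "la",
        'g': "a", 'h': "lcd", 'i': "e", 'j': "ad", 'k': "dhl", 'l': "f"}

# No edge label occurs in its own image, so phi_1 has no fixed point (Lemma 2.4).
print("letters occurring in their own image:", [e for e, w in phi1.items() if e in w])

# The subgraph tau_0 on {a,c,d,f,g,h,j,k,l} is invariant: every image of an edge
# of tau_0 only crosses edges of tau_0, so M(tau, phi_1) has a block structure.
tau0 = set("acdfghjkl")
print("tau_0 invariant under phi_1:", all(set(phi1[e]) <= tau0 for e in tau0))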
Infinite sequence of pseudo-Anosov homeomorphisms.
The idea to create a sequence of pseudo-Anosov homeomorphisms from the map ϕ 1 already obtained is to compose it with an infinite sequence of Dehn twists along a curve C that belongs to only one part of τ , with respect to the symmetry, and does not change the map on the symmetric part of τ . In other words this particular curve C should have the property that its image under the homeomorphism f 1 is disjoint from itself. An arbitrary train-track map will certainly not admit such a curve. Our previous example, in the labelled train-track of Figure 8, admits such a curve given by the edge path C = i.g; its image ϕ 1 (C) = C ′ = e.a is disjoint from C. It turns out that the Dehn twist along C can be described as a splitting sequence on τ . With the labeling of Figure 8 and our previous notations, such a splitting sequence is given by :
t(f ) i(i) ; i(j) t(i) ; i(k) t(g) ; t(k) i(g) .
The new train-track τ ′ is homeomorphic to τ , with a relabeling map α : τ → τ ′ given by:
• i → g, • g → i, • x → x for all x ∉ {i, g}.
The image of τ ′ is shown by Figure 9 and the twist map, along the closed curve C = i.g, called T (i, g) : τ ′ → τ is given by:
• f → f.i, • j → i.j, • k → g.k.g, and • x → x for all x ∉ {f, j, k}. The composition of all these maps :
ϕ 2 = α • ϕ 1 • T (i, g) : τ ′ → τ ′ (apply first T (i, g), then ϕ 1 , then the relabeling α) gives the following : • a → k, • b → fgj, • c → kik, • d → j, • e → jbf, • f → lae, • g → a, • h → lcd, • i → e, • j → ead, • k → adhla, • l → f .
We check that ϕ 2 has no fixed points by Lemma 2.4 and that the incidence matrix M(τ ′ , ϕ 2 ) is irreducible. We now have to compute the boundary periodic points and check, via the Nielsen path method of Proposition 2.3, if the singularity type condition is satisfied in order to apply Proposition 2.2. For the labelled train-track τ ′ (see figure 9) the two boundary components are given by:
∂ 1 = cdbjikgfhlea and ∂ 2 = lcaedhjgkifb.
Each of these components has six cusps and six sides. For concreteness let us focus our computation on one of these sides, for instance the one denoted [c.d] along ∂ 1 . We consider the successive images of this side under ϕ 2 in order to find the boundary periodic points and their location along ∂ 1 .
We first check that each of these sides has period 3 :
[č.d] → k.[i.ǩ].j → ....e.a.d.[ȟ.l]a.... → ....l.[č.d].f...
where the bracket in the previous writing represents the position of one side along the boundary curve ∂ 1 and the symbol .. shows which edge on the side contains the periodic point (here of period 3). It is then an obvious checking that the side [c.d] has only one periodic boundary point. By a similar computation we show that this single boundary periodic point property is also satisfied for the other orbit along the sides of ∂ 1 (for instance the side [b.j]), as well as for the orbits of the sides along the component ∂ 2 .
Therefore the measured foliation that is constructed from ϕ 2 satisfies the singularity type condition by Proposition 2.3 and Proposition 2.2 implies that ϕ 2 represents a pseudo-Anosov homeomorphism either on a surface of genus 3 with 2 punctures, without fixed points, or on a closed surface of genus 3 with two fixed 6-prong singularities of positive index (the six separatrices are permuted in two orbits of period 3 from our previous computation). In both cases the pseudo-Anosov homeomorphisms have orientable invariant foliations by Lemma 2.6. We thus obtain our first example.
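The assertions that ϕ 2 has no fixed points and that M(τ ′ , ϕ 2 ) is irreducible can also be re-checked by a short computation. The Python sketch below (with numpy) is an added illustration; it uses the images of ϕ 2 listed above, again without orientation signs, which affect neither test, and it tests irreducibility as strong connectivity of the transition graph.

import numpy as np

edges = list("abcdefghijkl")
phi2 = {'a': "k", 'b': "fgj", 'c': "kik", 'd': "j", 'e': "jbf",
        'f': "lae", 'g': "a", 'h': "lcd", 'i': "e", 'j': "ead",
        'k': "adhla", 'l': "f"}

# Lemma 2.4 test: a fixed point on an edge would force the edge's own label
# to occur in its image.
print("fixed letters:", [e for e in edges if e in phi2[e]])   # expect []

# Incidence matrix and irreducibility, tested via positivity of (I + M)^{11}.
M = np.zeros((12, 12))
for i, e in enumerate(edges):
    for x in phi2[e]:
        M[i, edges.index(x)] += 1
P = np.linalg.matrix_power(np.eye(12) + M, 11)
print("M(tau', phi_2) irreducible:", bool((P > 0).all()))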
The construction of an infinite sequence is now clear from this example: we just have to iterate the twist map. In fact, since the two train-tracks τ and τ ′ differ just by an order two relabeling (denoted by α above), there is another sequence of splitting morphisms for the next twist, which we denote T (g, i) : τ → τ ′ and which is given by the following splitting sequence:
t(f ) i(g) ; i(j) t(g) ; i(k) t(i) ; t(k) i(i) .
The next map in the sequence is obtained by the composition:
ϕ 3 = ϕ 1 • T (i, g) • T (g, i) : τ → τ . Observe that the relabeling α is not needed here and this map ϕ 3 is written combinatorially on the labelled train-track of Figure 8 with identification II as :
• a → k, • b → f ij, • c → kgk, • d → j, • e → jbf, • f → laea, • g → a, • h → lcd, • i → e, • j → aead, • k → eadhlae, • l → f .
This new map ϕ 3 satisfies exactly the same properties as ϕ 2 .
The simplest infinite sequence can then be written as:
ϕ 2n+1 = ϕ 1 • { T (i, g) • T (g, i) } n : τ → τ ,
where the notation {...} n is the natural n times composition of splitting morphisms. The combinatorial map is given on the train-track τ of Figure 8, with identification II, as:
• a → k, • b → fij, • c → kgk, • d → j, • e → jbf, • f → la(ea) n , • g → a, • h → lcd, • i → e, • j → (ae) n ad, • k → (ea) n dhl(ae) n , • l → f .
Proposition 4.1 The sequence {ϕ 2n+1 : τ → τ ; n ∈ N * } is a sequence of train-track representatives of a sequence of homeomorphisms f 2n+1 : S → S such that :
- each f i is pseudo-Anosov on the surface S of genus 3,
- each f i has an orientable invariant measured foliation with 2 fixed 6-prong singularities of positive index and no other fixed points.
Proof. The fixed point property is directly checked by Lemma 2.4. The pseudo-Anosov property is checked inductively with Propositions 2.2 and 2.3. The irreducibility property of the incidence matrix M(ϕ 3 , τ ) is checked as for M(ϕ 2 , τ ). Then each entry of M(ϕ 2n+1 , τ ) is increasing with respect to n and thus if M(ϕ 3 , τ ) is irreducible then all M(ϕ 2n+1 , τ ) are irreducible.
The single boundary periodic point property of Proposition 2.3 for ϕ 2n+1 is checked directly as for ϕ 3 . This implies first that the f 2n+1 are all pseudo-Anosov. The invariant foliations of these pseudo-Anosov homeomorphisms are all orientable by Lemma 2.6 and are all in the same stratum because of the same 6-cusps structure of the boundary curves ∂ 1 , ∂ 2 and the single boundary periodic point property of Proposition 2.3. Observe that the sequence of growth rates of these pseudo-Anosov homeomorphisms goes to infinity (this is obvious), but this sequence is computable since the sequence of incidence matrices is explicit.
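Since the incidence matrices of the maps ϕ 2n+1 are explicit, the growth rates can indeed be computed. The Python sketch below is an added illustration: it builds the incidence matrix from the edge images given above (without orientation signs, which do not change the letter counts) and prints the spectral radius for the first few values of n.

import numpy as np

edges = list("abcdefghijkl")

def phi_2n_plus_1(n):
    # Edge images of phi_{2n+1} on the labelled train-track of Figure 8
    # (identification II), written as unsigned letter strings.
    return {
        'a': "k", 'b': "fij", 'c': "kgk", 'd': "j", 'e': "jbf",
        'f': "la" + "ea" * n, 'g': "a", 'h': "lcd", 'i': "e",
        'j': "ae" * n + "ad", 'k': "ea" * n + "dhl" + "ae" * n, 'l': "f",
    }

def incidence_matrix(phi):
    M = np.zeros((len(edges), len(edges)))
    for i, e in enumerate(edges):
        for x in phi[e]:
            M[i, edges.index(x)] += 1
    return M

for n in range(1, 6):
    lam = np.abs(np.linalg.eigvals(incidence_matrix(phi_2n_plus_1(n)))).max()
    print(f"n = {n}: spectral radius = {lam:.6f}, log = {np.log(lam):.6f}")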
For completeness, the map ϕ 2n+1 is written as a splitting sequence S 2n+1 , with the above notations, as:
{ i(b) t(l) ; t(b) i(d) ; t(k) i(f ) ; i(k) t(j) ; t(e) i(l) ; t(c) i(a) ; i(c) t(a) ; i(e) t(d) ; i(h) t(f ) ; t(h) i(j) ; t(f ) i(g) ; i(j) t(g) }{ t(f ) i(i) ; i(j) t(i) ; i(k) t(g) ; t(k) i(g) . t(f ) i(g) ; i(j) t(g) ; i(k) t(i) ; t(k) i(i) } n .
This completes the proof of the main theorem.
Remark. With the same basic ingredients there is another infinite sequence that is given by the composition:
ψ n = { T (i, g) • T (g, i) } n • ϕ 1 : τ → τ . The corresponding combinatorial maps on the labelled train track of Figure 8 are given by:
• a → (ig) n k(gi) n , • b → f (ig) n i(gi) n j, • c → (ig) n k(gi) n g(ig) n k(gi) n , • d → (gi) n j, • e → (gi) n jbf (ig) n , • f → la, • g → a, • h → lcd, • i → e, • j → ad, • k → dhl, • l → f (ig) n .
This sequence is different from the previous one and all these maps satisfy again the same properties with respect to the fixed points question.
These sequences are very special, in particular because the Dehn twists are directly realized as splitting sequences on the same train-track. Similar examples on other surfaces and other strata should be obtained the same way but each surface and each stratum requires a specific study. For genus two surfaces there is a work in progress in this direction. A systematic study of all the pseudo-Anosov homeomorphisms on a given stratum would require developing the specific train-track automata theory. For practical purposes such automata are huge and an algorithmic description is probably necessary, in particular for counting problems.
Figure 1: The smooth structure at a vertex.
Figure 2: A regular neighborhood and the tie foliation.
Figure 3: Orientable and non orientable train tracks.
Figure 4: Splitting paths.
Figure 5: The splitting morphisms.
Figure 6: The initial labelled train-track.
Figure 8: The two possible labelings.
Figure 9: The Dehn twist T (i, g).
00410665 | ["math.math-ds", "math.math-gr"] | 2024/03/04 16:41:22 | 2009 | https://hal.science/hal-00410665/file/BowSer08-09VersionHal.pdf | Jérôme Los
Volume entropy for surface groups via Bowen-Series like maps
2000 Mathematics Subject Classification. Primary: 57M07, 57M05. Secondary: 37E10, 37B40, 37B10. Keywords: Surface groups, Bowen-Series Markov maps, topological entropy, volume entropy.
We define a Bowen-Series like map for every geometric presentation of a cocompact surface group and we prove that the volume entropy of the presentation is the topological entropy of this particular (circle) map. Finally we find the minimal volume entropy among geometric presentations.
Introduction
One proof of the Mostow rigidity theorem for hyperbolic manifolds [Mo] is based on the following variational principle: the manifold admits a unique metric minimising the volume entropy and the optimum is realized by the hyperbolic metric. This approach is due to Besson-Courtois-Gallot (see [BCG]) and gives some hope for other related areas. One such hope would be to obtain an "optimal" metric or an "optimal" presentation in geometric group theory via a similar variational principle. The question of comparing volume entropy in group theory makes sense for Gromov hyperbolic groups (see [Gro1] or [Sho] for definitions), since it is well defined and depends on the presentation. The question would be to find a group presentation minimising the volume entropy. An answer to that question is only known for free groups where it is essentially trivial (see for instance [dlH]).
Recall that a finitely generated group Γ with a finite generating set X, or a finite presentation P =< X; R >, defines a metric space (Γ, d X ) with the word metric and |B(n)| X denotes the cardinality of the ball of radius n centered at the identity in (Γ, d X ).
The growth properties of the function n → |B(n)| X have been of crucial importance in geometric group theory over the last decades. For instance it is fair to say that a new period started after the fundamental work of M.Gromov classifying groups with polynomial growth functions [Gro2]. Another important step was the discovery by R.Grigorchuck [Gri] of a whole class of groups with growth function between polynomial and exponential.
The nature of the growth function (i.e. being polynomial, exponential or intermediate) is a group or geometric property, meaning that it does not depend upon the particular presentation (as a quasi-isometric invariant), but the numerical function depends in a highly non trivial way on the group presentation. For instance, among the exponentially growing groups the following numerical function :
h vol (Γ; P ) := lim n→∞ (1/n) . ln |B(n)| X ,
is called the volume entropy of the presentation P and the way this number varies with P is absolutely not understood.
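For a free group the count alluded to above can be done by brute force. The Python sketch below is an added illustration: it counts the balls of the free group F 2 = < a, b | > and shows (1/n) log |B(n)| approaching log 3, the volume entropy of that presentation; for a surface group the same count must take the relations into account, which is precisely what the Markov map constructed in this paper encodes.

from math import log

gens = ['a', 'A', 'b', 'B']          # A = a^{-1}, B = b^{-1}
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

sphere = ['']                        # sphere of radius 0: the identity
ball_size = 1
for n in range(1, 11):
    new_sphere = []
    for w in sphere:
        for g in gens:
            if not w or inverse[w[-1]] != g:   # keep only reduced words
                new_sphere.append(w + g)
    sphere = new_sphere
    ball_size += len(sphere)
    print(f"n = {n:2d}  |B(n)| = {ball_size:7d}  (1/n) log|B(n)| = {log(ball_size)/n:.4f}")

print("exact volume entropy of this presentation: log 3 =", log(3))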
For hyperbolic groups, the question of finding and characterising minimum volume entropy makes sense, but no general method is available to compute or evaluate the volume entropy from the presentation, except for the obvious free group case.
In this paper we develop a method for computing explicitly the volume entropy among the simplest presentations of the simplest (non free) hyperbolic groups, namely among the geometric presentations of co-compact surface groups.
The restriction to geometric presentations is built into our approach. For these special presentations, for which the surface structure is obvious, we re-open a tool box that was created about 30 years ago by R.Bowen and C.Series [BS]. Their idea was to associate a dynamical system, i.e. an N-action on S 1 = ∂Γ, with all the nicest possible dynamical properties, to a very special group presentation and then to extract some information about the group from the dynamics.
Bowen and Series defined a Markov map on the circle for one specific presentation of Fuchsian groups, using a special geometric condition on the fundamental domain for the group action in H 2 . The first new contribution of this paper is to suppress all these geometric conditions while keeping the restriction to geometric presentations. We call a presentation of a surface group geometric if the two dimensional Cayley complex is planar. Generalisation of our construction to arbitrary presentations is conceivable but in a highly non trivial way. The first result, combining several parts of the paper, can be stated as:
Theorem 1.1 Let Γ be a co-compact hyperbolic surface group with a geometric presentation P . Then there exists a Markov map Φ P : ∂Γ = S 1 → ∂Γ that is orbit equivalent to the group action. In addition this particular map satisfies :
Volume Entropy (Γ, P ) = Topological Entropy Φ P .
The definition of the map follows the general idea of Bowen and Series but the construction is quite different, combinatorial here rather than geometric. The resulting combinatorial dynamical properties of these maps make it possible to compare the symbolic description of the orbits of the maps with the symbolic descriptions of the geodesics for the given presentation. This comparison is the key step in proving the second part of the Theorem. The Markov map being defined for any geometric presentation, it becomes possible to compare the entropy between different geometric presentations. Furthermore the map is explicit, which implies that the computation of the entropy is possible; an example is presented in section 5 for the genus two surface.
Theorem 1.2 The minimal volume entropy, among all geometric presentations of a cocompact surface group, is realised by the geometric presentations with the minimum number of generators.
This result confirms the intuition that presentations with the minimum number of generators are natural candidates to be the absolute minimum for surface groups. The conjecture is still out of reach with the tools developed in this paper. Surprisingly, Bowen-Series like maps have been defined in very few cases: Marc Bourdon in his thesis [Bou] and Andre Rocha, also in his thesis [Ro], constructed such maps for some Kleinian groups, using a condition for the action of the group on H 3 that is the exact analogue of the condition used by Bowen and Series for Fuchsian group actions on H 2 .
It is a great pleasure to thank Marc Bourdon and Peter Haissinsky for discussions and comments on this work.
2 Some properties of geometric presentations.
In this section we gather some geometric and combinatorial properties of geometric presentations of surface groups that will be used throughout the paper. Recall that a group presentation : P = x 1 , ..., x n |R 1 , ..., R k is given by a set of generators and relations. The relations are words R i in the alphabet X = {x ±1 1 , ..., x ±1 n } that are cyclically reduced and are defined modulo cyclic permutations and possibly inversions. The Cayley 2-complex Cay 2 (P ) is the 2-complex whose 1-skeleton is the Cayley graph Cay 1 (P ) and whose 2-cells are glued to each closed path in the Cayley graph representing a relation. A presentation of a surface group is called geometric if Cay 2 (P ) is planar. Equivalent definitions that are valid in higher dimension are easy to state (see for instance [FP]). In order to simplify the formulation we assume that the group Γ is not a triangular group and has no elements of order two. In this section most of the statements are easy and are given for completeness.
Lemma 2.1 Let Γ be a hyperbolic co-compact surface group and P = x 1 , ..., x n |R 1 , ..., R k a geometric presentation of Γ. Then :
1. The set of generators {x ±1 1 , ..., x ±1 n } admits a cyclic ordering that is compatible with the group action.
2. There exists a planar fundamental domain △ P , where each side S i of △ P is dual to a generator x ±1 i .
3. Each generator x i appears exactly twice (with + or -exponent) on the set of relations {R 1 , ..., R k }.
4. Each pair of adjacent generators, according to the cyclic ordering (1.), belongs to exactly one relation and defines one relation.
These results are classical and can be found, for instance in [FP].
Let us focus on the boundary of the group. It is classical that a co-compact surface group, for surfaces of genus at least 2, is Gromov hyperbolic (see [Gro1]) and the boundary ∂Γ is homeomorphic to the circle S 1 . With a presentation P , the points ξ ∈ ∂Γ are described as infinite geodesic rays starting at the identity, modulo the equivalence relation, among rays, of being at uniformly bounded distance from each other. These rays are expressed as infinite word representatives, in the alphabet X = {x ±1 1 , ..., x ±1 n }, considered as infinite paths in the Cayley graph Cay 1 (P ). We denote {ξ} an infinite word representative of a geodesic ray converging to ξ ∈ ∂Γ. These descriptions are non unique and a point ξ ∈ ∂Γ generally has more than one geodesic writing. Symbolic description of geodesic rays for surface groups goes back to at least Hedlund in the thirties [He34].
We discuss some properties that are particular to geometric presentations of surface groups. The non uniqueness of the geodesic writing is reflected by the possible existence of bigons, i.e. a pair of distinct geodesics {γ 1 , γ 2 } in Cay 1 (P ) with the same initial point and the same terminal point. We will often use some classical abuse of notations in identifying the vertices of the complexes Cay 1 (P ) and Cay 2 (P ) with the group elements and with some particular writing as geodesic segments ending at those vertices.
Figure 1: A bigon in B(x i , x j ) .
Using the group action on the Cayley graph we consider bigons starting at the identity. We denote B(x i , x j ) the set of bigons that start at the identity by the generators x i and x j , for instance γ 1 = x i .w 1 and γ 2 = x j .w 2 , with x i ≠ x j (see figure 1).
A bigon might be infinite, if the two geodesics {γ 1 , γ 2 } are geodesic rays; otherwise the length of a bigon is the common length of the two geodesics {γ 1 , γ 2 }. We denote β(x i , x j ) a bigon in B(x i , x j ) of minimal length. If necessary we will denote B g (x i , x j ) and β g (x i , x j ) the bigons based at the vertex g ∈ Cay 1 (P ). Observe that the length of a non trivial bigon in a presentation is at least half the minimal length of a relation.
Lemma 2.2 Let P be a geometric presentation of a co-compact surface group Γ, then :
1. B(x i , x j ) ≠ ∅ only if x i and x j are two adjacent generators, with respect to the cyclic ordering of Lemma 2.1.
2. For each pair of adjacent generators (x i , x j ) there exists a unique finite length minimal bigon β(x i , x j ).
Proof. 1. The proof of the first statement is by contradiction. Assume that B(x i , x j ) ≠ ∅ and (x i , x j ) are not adjacent. If there is a bigon of finite length in B(x i , x j ), then we consider a minimal bigon β(x i , x j ). The planarity and the minimality assumption imply that β(x i , x j ) is realised by two geodesics whose union is a closed embedded curve in the one skeleton of Cay (2) (P ) and therefore bounds a compact topological disc D in the plane. Since x i and x j are not adjacent, according to the planar cyclic ordering of Lemma 2.1, there is at least another generator, say x ′ between x i and x j . In the Cayley graph, there is a copy of all the generators starting at the vertex denoted x ′ by an abuse of notation. This vertex and the edges starting at x ′ are contained in D. In particular there is another pair of generators x i and x j starting at x ′ and the geodesics that start by these two edges have to meet, either in the interior of D, or along the boundary i.e. along the paths defining β(x i , x j ). This intersection defines a bigon in B x ′ (x i , x j ) that is shorter than β(x i , x j ), a contradiction. For infinite bigons the argument is similar. The two geodesics {γ 1 , γ 2 } defining the bigon are infinite rays converging towards the same point ξ ∈ ∂Γ. These two rays are disjointly embedded in the plane and bound a disc D. They are at distance bounded by some δ from each other since the two rays converge to the same point on ∂Γ. The disc D we consider is such that the intersection of the sphere of radius N with D is a set of diameter bounded by δ. We assumed that (x i , x j ) are not adjacent, hence there is a copy of the pair (x i , x j ) at each vertex, at distance one from the identity, between x i and x j and thus a copy of the disc D within D. One contradiction comes from the fact that the previous argument implies inductively that the number of vertices on the sphere of radius N within D grows at least as 3 N , a contradiction with the uniform distance between {γ 1 } and {γ 2 }.
2. For the second statement, a pair of adjacent generators (x i , x j ) defines a unique relation R by Lemma 2.1. This means there is one relation, defined as a cyclic word in the alphabet X, that contains the subword x -1 j .x i or x -1 i .x j .
If the length of the relation R is even :
Then it can be written, up to a cyclic permutation and inversion, as w 1 .x -1 j .x i .w 2 = id, where the lengths of w 1 and w 2 are the same. The two paths written γ 1 = x j .w -1 1 and γ 2 = x i .w 2 connect the identity to the same vertex z in the Cayley graph, where z is the element written as x j .w -1 1 or x i .w 2 .
Claim. With the above notations, the two paths γ 1 and γ 2 are geodesic segments for the geometric presentation P .
This claim is proved by a contradiction similar to the proof of the first statement. If γ 1 and γ 2 are not geodesics then there is a shorter path γ connecting the identity to z. The path γ has to start with a generator that is different from x i and x j . The planarity assumption implies that γ 1 lies between γ 2 and γ or γ 2 lies between γ 1 and γ. In both cases we obtain a contradiction by producing a shorter relation defined by x i and x j , by the argument of part 1., a contradiction with Lemma 2.1. The pair of geodesics γ 1 and γ 2 defines a bigon in B(x i , x j ) and this bigon is minimal since otherwise there would be another (shorter) relation, defined by the pair (x i , x j ), a contradiction with Lemma 2.1.
If the length of R is odd:
The relation can be written as y.w ′ 1 .x -1 j .x i .w ′ 2 = id, where the lengths of w ′ 1 and w ′ 2 are the same. The two paths γ ′ 1 = x j .w ′ 1 -1 and γ ′ 2 = x i .w ′ 2 start at the identity and end at two different points g 1 and g 2 that differ by the generator y.
γ ′′ 1 = x j .w ′ 1 -1 .w ′′ 1 -1
and γ ′′ 2 = x i .w ′ 2 .w" 2 are two geodesics from the identity to the same point in Cay 1 (P ) and define a bigon in B(x i , x j ) that is of minimal length by the above arguments.
If the length of R (2) is even, then it can be written, modulo cyclic permutation and inversion, as w ′′′ 1 .y -1 .w ′′′ 2 .y 1 = id, where w ′′′ 1 and w ′′′ 2 have the same length. The argument we use for the generator y above is duplicated here for y 1 . We start an induction on the number of even relations that appear in the following sequence, which is uniquely defined from the adjacent pair (x i , x j ):
R (1) → opposite : {y} → R (2) (even) → opposite : {y 1 } → R (3) (even) → • • • , where each arrow reads "defines".
If an odd relation appears in the sequence R (n) then the induction stops because the previous argument defines a unique bigon in B(x i , x j ) that is minimal for the same reasons.
If no odd relation appears in the sequence then, in particular, R (1) does not appear again. This implies, in particular, that the generator y does not appear again in the sequence {y n }. By induction y 1 , ..., y k never appear again. This is impossible since the number of generators is finite. Uniqueness of the minimal bigon is part of the proof.
Figure 2: A sequence of relations defining a bigon.
A consequence of Lemma 2.2 is :
Corollary 2.3 The boundary ∂Γ = S 1 is covered by the cylinders (of length one) C x i , x i ∈ X, where :
C x i = { ξ ∈ ∂Γ | ∃ {w} a geodesic ray representing ξ and starting with x i , i.e. {w} = {x i .w ′ } }.
In addition C x i C x j = ∅ if and only if x i and x j are adjacent generators according to the cyclic ordering of Lemma 2.1.
Proof. The cylinders cover the boundary since the x i 's generate the group. A point in ∂Γ belongs to at most two cylinders by Lemma 2.2 (item 1.) and in this case the two cylinders C x i and C x j are defined by two adjacent generators. Conversely the cylinders of two adjacent generators do intersect because of the existence of finite bigons.
Another consequence of the planarity assumption is:
Lemma 2.4 (connectedness) For a geometric presentation P of a co-compact surface group Γ, if ξ ∈ ∂Γ and η ∈ ∂Γ are two points in the cylinder C x i then one of the two intervals ]ξ, η[⊂ ∂Γ bounded by ξ and η is contained in
C x i .
Proof. Since ξ and η belong to C x i there exist geodesic rays {ξ} and {η} starting with x i . The two rays {ξ} and {η} have a common beginning and, since ξ and η are different, there is a maximal vertex v in Cay 1 (P ) such that the two infinite paths {ξ} v and {η} v starting at v are disjoint. The union γ = {ξ} v ∪ {η} v is a bi-infinite embedded path in D 2 converging towards ξ and η. The path γ bounds two discs in D 2 ; one of them contains all the vertices at distance one from the origin, except possibly the vertex corresponding to x i . The other disc, denoted D({ξ}, {η}), contains one of the two intervals bounded by ξ and η on the boundary ∂Γ. We denote ]ξ, η[ this interval. Let ρ ∈]ξ, η[; any geodesic ray {ρ} representing ρ is contained in D({ξ}, {η}) at large distance from the origin. If {ρ} has a common initial path with {ξ} or {η} then ρ ∈ C x i . Otherwise, by planarity, {ρ} has to intersect either {ξ} or {η} at a vertex w and therefore defines a bigon in some B(x i , x j ). In this case x j is adjacent to x i by Lemma 2.2 and ρ has another geodesic ray representative {ρ} ′ starting with x i .
3 Special rays and a partition of the boundary.
In this section we define some special rays in the planar 2-complex Cay (2) (P ) giving rise to a finite collection of points on the boundary ∂Γ that are uniquely defined from the presentation P . From the previous section, each intersection of two cylinders C x i ∩ C x j is non empty only if the two generators (x i , x j ) are adjacent in X. Let us call such a pair of adjacent generators a corner of the presentation P . The number of corners is even and the cyclic ordering of the generators induces a cyclic ordering of the corners. For notational convenience, the cyclic ordering of the generators is given by the labelling of the generators; in other words x i+1 is the generator next to x i for the cyclic ordering, say on the right. By convention it is understood that the notation (x i , x i+1 ) means that in the 2-complex, the edges denoted x i and x i+1 are adjacent and oriented from the vertex. The parity of the number of corners implies that at each vertex, the corner (x i , x i+1 ) defines a unique opposite corner denoted :
(*) (x i , x i+1 ) opp := (x i+n mod[2n] , x i+n+1 mod[2n] ),
where n is the number of generators (see figure 3).
We construct a unique infinite sequence of corners, bigons and vertices from any given corner (x i , x i+1 ) by the following process:
(i) Each corner, say at the identity, defines a unique minimal bigon β(x i , x i+1 ) based at id, by Lemma 2.2, for which (x i , x i+1 ) is an extreme corner called the bottom corner.
(ii) The bigon β(x i , x i+1 ) has another extreme corner, called a top corner defined by the end of the two geodesics γ 1 and γ 2 of the definition of a bigon. This extreme corner is denoted:
(x β(i) , x β(i+1)
) and is based at the vertex g 1 (x i , x i+1 ). This top corner is uniquely defined by (x i , x i+1 ).
(iii) The new corner defines an opposite corner (x
β(i) , x β(i+1) ) opp at g 1 (x i , x i+1 ).
(iv) We consider next the unique minimal bigon:
β (1) (x i , x i+1 ) := β g 1 [(x β(i) , x β(i+1)
) opp ], that gives a new bottom corner at g 1 (x i , x i+1 ) and a new top corner at the extreme vertex g 2 (x i , x i+1 ). This construction defines, by induction, a unique infinite sequence of corners, bigons and vertices (see figure 3):
β(x i , x i+1 ) → β (1) (x i , x i+1 ) := β g 1 [(x β(i) , x β(i+1) ) opp ] → β (2) (x i , x i+1 ) → • • •
(x i , x i+1 ) → (x β(i) , x β(i+1) ) → (x β(i) , x β(i+1) ) opp = (x i , x i+1 ) (1) → • • •
The bigons that are defined by the previous infinite sequence : β (0) = β, β (1) , ..., β (i) , ..., are given by two geodesics {γ (i) 1 , γ (i) 2 }, i = 0, 1, 2, ....
A finite concatenation of bigons : β (0) .β (1) .......β (k) is a finite length bigon defined by any finite concatenation of the paths :
γ (0) ǫ(0) .γ (1) ǫ(1) ......γ (k) ǫ(k) , for ǫ(k) = 1 or 2.
Lemma 3.1 (bigon rays) Any of the paths :
γ (0) ǫ(0) .γ (1) ǫ(1) ......γ (k) ǫ(k) , for ǫ(k) = 1 or 2 is a geodesic segment in the Cayley graph.
In addition, any two such geodesic segments stay at a uniform distance from each other when k → ∞. The infinite concatenation β (0) .β (1) ......β (i)
.... := β ∞ (x i , x i+1 ) is called a bigon ray and defines a unique point (x i , x i+1 ) ∞ in ∂Γ.
The proof of the first statement is a direct consequence of :
Lemma 3.2 Let P be a geometric presentation of a hyperbolic co-compact surface group. If γ is a geodesic segment, starting at the identity and ending with a generator x = x j ∈ X, then any continuation of γ as γ.x i is a geodesic segment except when : x i = x -1 j and possibly when x i = x j±1 or x i = x j±2 .
Proof : The case x i = x -1 j is always impossible for a geodesic continuation of the segment γ. The case x i = x j±1 might not be a geodesic continuation, in particular at the end vertex of a bigon. The case x i = x j±2 might not be a geodesic continuation in the case when the set R of relations contains a relation of length 3. The proof that γ.x i is geodesic in all other cases is obtained by contradiction as in the previous section.
The first statement of Lemma 3.1 is obtained inductively using 3.2. Indeed each segment in the sequence is a concatenation of geodesic segments at vertices where the condition of Lemma 3.2 is satisfied by definition (*) of the opposite corner. Indeed a hyperbolic cocompact surface group has more than 4 generators and, at each vertex of the Cayley graph, more than 8 edges start so the difference of the index by ±2 between the last generator of a segment γ
A bigon ray, defined as the infinite concatenation : lim k→∞ β (1) .β (2) ...β (k) is uniquely defined by the corner (x i , x i+1 ) as well as the limit point on ∂Γ. This limit point is a particular point in C x i ∩ C x i+1 . Each generator x i belongs to a corner on its left (x i-1 , x i ) and a corner on its right (x i , x i+1 ) and thus each generator x i defines two particular points (x i-1 , x i ) (∞) and
(x i , x i+1 ) (∞) in C x i ⊂ ∂Γ. Lemma 2.4 implies :
Lemma 3.3 The interval I x i := [(x i-1 , x i ) (∞) , (x i , x i+1 ) (∞) [ ⊂ ∂Γ is contained in the cylinder C x i for each x i ∈ X.
Definition 3.4 For a given geometric presentation P of a surface group Γ, the boundary ∂Γ = S 1 admits a canonical partition by the intervals I x i ; x i ∈ X. We define the map :
Φ P : ∂Γ -→ ∂Γ by Φ P (ξ) = x -1 i (ξ) when ξ ∈ I x i ,
where the action x -1 i (....) is the group action by homeomorphisms on ∂Γ induced by the element x -1 i ∈ Γ.
4 Subdivision rules and the Markov property.
The goal of this section is to refine the partition of S 1 by the intervals I x i , x i ∈ X, in order to prove the first part of Theorem 1.1: the Markov property of the map Φ P . Recall that a map F : S 1 → S 1 satisfies the Markov property if there is a partition (finite here) so that the map is a homeomorphism on each interval and maps extreme points to extreme points. This definition is special to one dimensional spaces (for a more general definition see [Bo] for instance). From a dynamical system point of view this is just showing that the extreme points of the partition have finite orbits. From a geometric group point of view it is interesting to understand and describe the geometry of the particular geodesic rays that are used to define the partition.
The intervals I x i are defined through the properties of minimal bigons. The simplest situation is when the presentation has only relations of even length. The next simplest situation is when all the relations are of odd length. In these simple cases the subdivision process is a little bit easier to describe.
The interval I x i is given by the two corners : (x i-1 , x i ) and (x i , x i+1 ). We focus on the left side of the interval I x i , i.e. on the corner (x i-1 , x i ); the analysis for the other (right) side is exactly the same. The corner defines a unique relation R L , a unique minimal bigon β(x i-1 , x i ), a unique bigon ray β (∞) (x i-1 , x i ) and a unique limit point (x i-1 , x i ) ∞ .
Simple cases subdivisions.
We start by assuming that all relations in P have the same parity. The even cases are the simplest situations since all bigons are defined with only one relation (by the proof of Lemma 2.2). In the odd cases all bigons are defined using two relations. All the ideas of the subdivision construction can be seen for these simple cases.
(A) The length of the relation R L is even:
In what follows we use a writing that combines some initial path followed by an infinite sequence of bigons :
α = w.β ∞ g (a, b)
, where w is a geodesic path starting at the identity and ending at a vertex g and β ∞ g (a, b) is an infinite sequence of bigons, defined exactly like a bigon ray but starting at g as a geodesic continuation of w and defined by the corner (a, b) at g. This writing describes an infinite collection of geodesic rays. By Lemma 3.1 all the rays in that collection converge to the same point on the boundary.
In particular, among the infinite possible writing, as geodesic rays, of the bigon ray β (∞) (x i-1 , x i ), the following sub-class makes the belonging of the limit point (x i-1 , x i ) (∞) to the cylinder C x i obvious (see figure 5) :
β (∞) (x i-1 , x i ) ⊃ {x i .w L .β (∞) [(x β(i-1) , x β(i) ) (opp)
]}, where :
• The corner (x β(i-1) , x β(i) ) is the top corner of the bigon β(x i-1 , x i ).
• The path written x i .w L = x i .y 1 i .y 2 i .....y k i is the x i -side of the two paths that define the minimal bigon β(x i-1 , x i ).
The path x i .w L crosses the following corners : {( xi , y 1 i ), (ȳ 1 i , y 2 i ), ..., (ȳ k-1 i , y k i )}, where z̄ is the standard notation for the inverse orientation of an edge z. Each of those corners (a, b) defines a unique opposite corner (a, b) opp and thus a unique bigon ray : β (∞) g [(a, b) opp ] based at the corresponding vertex g (see figure 5). Starting from the identity we define the following collection of rays, where the base vertex for the bigon rays β (∞) has been removed to simplify the notations :
[Rays-Even] R L = (x i = y 0 i ).y 1 i .y 2 i .....y j i .β (∞) [(ȳ j i , y j+1 i ) opp ]; j = 0, ..., k -1 .
Lemma 4.1 The collection R L defined above is a collection of geodesic rays called left subdivision rays (with respect to the interval I x i ). Each such ray converges toward a point in the interior of
I x i .
Proof. The first part is proved the same way than Lemma 3.1.
Figure 5: Subdivision rays, the even case.
The collection of rays defining the bigons β (∞) (x i-1 , x i ) and β (∞) (x i , x i+1 ) contains two extreme rays that are disjoint in D 2 . The union of these extreme rays is a bi-infinite embedded geodesic, passing through the identity, and bounding a maximal domain D x i in D 2 with D x i ∩ ∂Γ = I x i . Any ray that stays in the interior of D x i converges to a point in the interior of I x i .
By construction the rays in R L stay in the interior of D x i and thus converge to points in the interior of the interval I x i , that we call (left) subdivision points L x i = {L (1) x i , ..., L (k) x i } ⊂ I x i .
(B) The length of R L is odd:
In this case the definition of the subdivision rays is a little bit more difficult but the idea is just the same. Recall that in this paragraph all relations are odd. The proof of Lemma 2.2 shows that the corner (x i-1 , x i ) in the relation R L defines an edge opposite to the corner (denoted y in Figure 2 and y 3 i in Figure 6) on which another relation is based to define the bigon β(x i-1 , x i ).
This relation, and the corresponding 2-cell in the Cayley complex, is called the bigon completion of the corner or of the corresponding edge. In the more general case where even and odd relations exist, the single relation is replaced by a unique sequence of relations defining the bigon; the bigon completion in this general case is this particular sequence.
The bigon completion is well defined from the corner or from the opposite edge. In the simple case of this paragraph, the bigon completion consists of a single relation.
The bigon ray is described, from the x i side exactly like above (see Figure 6), as :
β (∞) (x i-1 , x i ) ⊃ {x i .w L .β (∞) [(x β(i-1) , x β(i) ) (opp)
]}, where :
• The corner (x β(i-1) , x β(i) ) is the top corner of the bigon β(x i-1 , x i ). The top corner belongs to the bigon completion.
• The x i -side of the bigon β(x i-1 , x i ) is written as x i .w L . This path is expressed as :
x i .w L = x i .y 1 i .y 2 i .....y m i .z m+1 i ....z m+r i
, where the first part : x i .y 1 i .y 2 i .....y m i is the x i side of β(x i-1 , x i ) along the relation R L , and the second part : z m+1 i ....z m+r i , is the part of the x i side of the bigon β(x i-1 , x i ) that belongs to the bigon completion (see Figure 6).
The path x i .w L crosses the following corners along the relation R L :
{( xi , y 1 i ), (ȳ 1 i , y 2 i ), ..., (ȳ m i , y m+1 i )}. This last corner (ȳ m i , y m+1 i
) is the one that corresponds to the last edge of the path x i .w L along the relation R L ( i.e. y m i ) and the next one along R L (i.e. y m+1 i ) is the edge opposite to the corner (x i-1 , x i ) for the relation R L (see Figure 6). Each of those corners (a, b) defines a unique opposite corner (a, b) opp and thus a unique bigon ray : β (∞) [(a, b) opp ] based at the corresponding vertex (see Figure 6). The path x i .w L = x i .y 1 i .y 2 i .....y m i .z m+1 i ....z m+r i also crosses the edges {y 1 i , y 2 i , ..., y m i } of the relation R L and are thus the opposite edge of some corner along R L . Based on each of these edges y j i there is a bigon completion of the corresponding opposite corner. We denote β C (y j i ) the bigon completion based at the edge y j i and w R [β C (y j i )] the subpath on the right of the bigon completion β C (y j i ), up to the top corner of the bigon completion. We denote this corner < β C (y j i ) > (see Figure 6), it defines a unique bigon ray β (∞) (< β C (y j i ) > opp ) based at the corresponding vertex.
Starting from the identity we define the following collection of rays:
[Rays-Odd]
\[ R_L = \Big\{ (x_i = y^0_i).y^1_i.y^2_i \cdots y^j_i.\,\beta^{(\infty)}[(\bar{y}^j_i, y^{j+1}_i)^{opp}],\; j = 0, \dots, m;\;\; (x_i = y^0_i).y^1_i.y^2_i \cdots y^j_i.\,w_R[\beta_C(y^{j+1}_i)].\,\beta^{(\infty)}(\langle \beta_C(y^{j+1}_i) \rangle^{opp}),\; j = 0, \dots, m-1 \Big\}. \]
This notation is not easy to manipulate; we verify, for instance, that the bigon completion $\beta_C(y^{m+1}_i)$ of the edge $y^{m+1}_i$ is the bigon completion of the original corner $(x_{i-1}, x_i)$. Then the writing
\[ x_i.y^1_i.y^2_i \cdots y^m_i.\,w_R[\beta_C(y^{m+1}_i)].\,\beta^{(\infty)}(\langle \beta_C(y^{m+1}_i) \rangle^{opp}) \]
is simply the bigon ray $\beta^{(\infty)}(x_{i-1}, x_i)$. In particular the path $w_R[\beta_C(y^{m+1}_i)]$ was written above as $z^{m+1}_i \cdots z^{m+r}_i$.
Just like Lemma 4.1, the collection R L is a collection of rays called ( left) subdivision rays (with respect to the interval I x i ). These subdivision rays stay in the domain D x i defined above. Therefore all the left subdivision rays converge towards points in the interior of the interval I x i and are called (left) subdivision points
\[ L_{x_i} = \{L^{(1)}_{x_i}, \dots, L^{(2m+1)}_{x_i}\} \subset I_{x_i}. \]
Observe that in both cases of even and odd relations the set of subdivision points depends only on the combinatorial properties of the relation $R_L$. The description for the right side of $I_{x_i}$ is the same as for the left side. The only change is to replace the subpath $w_R[\dots]$ on the right by $w_L[\dots]$ on the left. We denote $R_{x_i} = \{R^{(1)}_{x_i}, \dots, R^{(k)}_{x_i}\}$ the set of right subdivision points of $I_{x_i}$.
Lemma 4.2 The set of subdivision points R x i and L x i of I x i are well defined and belong to the interior of the interval I x i . They depend only on the pairs of right and left relations that are uniquely defined by the two corners (x i-1 , x i ) and (x i , x i+1 ).
Proof. The proof is exactly the same than for Lemma 4.1.
Let us now consider the set of all subdivision points: $S = \bigcup_{x_i \in X} (R_{x_i} \cup L_{x_i} \cup \partial I_{x_i})$. In the simple case of this paragraph, the main technical result is the following:
Proposition 4.3 If the geometric presentation P of the group Γ is such that all the relations have the same parity, then the partition points S are uniquely defined by the presentation P and are invariant under the map $\Phi_P$.
Proof. We start the proof of 4.3 in the simplest case where all the relations are even. Let $\zeta \in S$ be a subdivision point. It is defined by a relation (say on the left) $R_L$. Such a point is defined by a ray in some $R_L$ or is a boundary point in $\partial I_{x_i}$; in both cases it is written as
\[ \{\zeta\} = x_i.y^1_i.y^2_i \cdots y^j_i.\,\beta^{\infty}[(\bar{y}^j_i, y^{j+1}_i)^{opp}] \]
for some $j \in \{0, \dots, k\}$. Since $\zeta \in I_{x_i}$, the image under $\Phi_P$ is obtained by the action of $x_i^{-1}$ and is represented as the limit point in $\partial\Gamma$ of the rays whose writings are:
\[ \{\Phi_P(\zeta)\} = y^1_i.y^2_i \cdots y^j_i.\,\beta^{\infty}[(\bar{y}^j_i, y^{j+1}_i)^{opp}] \quad \text{if } j \neq 0, \quad \text{and} \]
\[ \{\Phi_P(\zeta)\} = \beta^{\infty}[((\bar{y}^0_i = x_i^{-1}), y^1_i)^{opp}] \quad \text{if } j = 0. \]
The interpretation of this writing is simple. The action of the map, for this particular writing, is a shift map. Recall that the path x i .y 1 i .y 2 i .....y j i ....y k i is the x i -side of the bigon β(x i-1 , x i ) defined by the corner (x i-1 , x i ) in the relation R L .
If $j \neq 0$ then the path $\{\Phi_P(\zeta)\}$ starts by $y^1_i.y^2_i \cdots y^j_i$, which begins with $y^1_i$ at the corner $(x_i^{-1}, y^1_i)$. The fundamental observation is that the corner $(x_i^{-1}, y^1_i)$ and the path $y^1_i.y^2_i \cdots y^j_i$ belong to the same relation $R_L$ (see Figure 7). In other words the rays in $y^1_i.y^2_i \cdots y^j_i.\,\beta^{\infty}[(\bar{y}^j_i, y^{j+1}_i)^{opp}]$ define one of the subdivision rays for the interval $I_{y^1_i}$.
If $j = 0$ then $\{\Phi_P(\zeta)\}$ is represented by $\beta^{\infty}[(x_i^{-1}, y^1_i)^{opp}]$,
this is one of the bigon rays and thus the point $\Phi_P(\zeta)$ is a partition point. This completes the proof in the cases of even relations. In the case where all the relations are odd the above arguments are valid among the collection [Rays-Odd], with one exception for the special ray written above as
\[ \{\zeta\} = x_i.\,w_R[\beta_C(y^1_i)].\,\beta^{\infty}[\langle \beta_C(y^1_i) \rangle^{opp}] \quad \text{(see Figure 6).} \]
The image of that ray, under $\Phi_P$, starts with the first letter $w^1_R$ of the word $w_R[\beta_C(y^1_i)]$, that is the right side of the bigon completion of the edge $y^1_i$. This bigon completion and the path $w_R[\beta_C(y^1_i)]$ belong to another relation $R$ given by the corner adjacent to $(x_i^{-1}, y^1_i)$. The relation $R$ has odd length and the ray
\[ w_R[\beta_C(y^1_i)].\,\beta^{\infty}(\langle \beta_C(y^1_i) \rangle^{opp}) = \{\Phi_P(\zeta)\} \]
is a subdivision ray defining a subdivision point in the interval $I_{w^1_R}$. This completes the proof in the odd case.
Subdivisions for general presentation.
In order to suppress the parity assumption of the previous paragraph, we need more subdivisions. The reason for this is observed in the last argument of the previous paragraph.
For a presentation with relations of mixed parity, the additional difficulty comes from the minimal bigons with relations of mixed parity. In this case the bigon completion contains some relations of even length and the last argument above produces a ray of the form :
\[ w_R[\beta_C(y^1_i)].\,\beta^{\infty}(\langle \beta_C(y^1_i) \rangle^{opp}), \]
where $w_R[\beta_C(y^1_i)]$ is a path on one side (right here) of the bigon completion. The path $w_R[\beta_C(y^1_i)]$ starts along an even relation and is followed by a path along some other relations, by the proof of Lemma 2.2. This ray does not belong to the collection of subdivision rays defined so far.
We need to add more subdivision rays that contain these additional rays. We consider all the minimal bigons β(x i-1 , x i ) of the presentation, this is a well defined finite collection. The subdivisions defined above at the corners of the odd relations are still given by the collection [Rays-Odd]. We subdivide the same way the intervals with an even relations on one side when it is necessary, i.e. at each edge on which a bigon completion is based. We define the new subdivision rays exactly as in [Rays-Odd] in these cases. We denote again S the set of subdivision points of S 1 = ∂Γ and the same arguments proves the following: Proposition 4.4 For any geometric presentation P of the group Γ, the partition points S are uniquely defined by the presentation P and are invariant under the map Φ P .
Corollary 4.5 The map Φ P satisfies the Markov property and the collection of subdivision points S defines a Markov partition for Φ P .
The Markov property is clear; on each interval of S 1 -S the map is the restriction of the action of a group element and thus is a homeomorphism. Proposition 4.4 implies that any boundary point of the partition is mapped to a boundary point of the partition.
The other assumption $\zeta \in I_{x_{i-1}}$ implies that
\[ \{\Phi_P(\zeta)\} = w'_2.w = x_{j_1} \cdots x_{j_k}.w. \]
We claim that $\eta$ belongs to $I_{x_{i_1}}$ and $\Phi_P(\zeta)$ belongs to $I_{x_{j_1}}$. Indeed the beginning $x_{i_1} \cdots x_{i_k}$ of $\{\eta\}$ is a geodesic path starting with $x_{i_1}$ and is strictly contained in one side of a minimal bigon, whereas $\{\Phi_P(\zeta)\}$ starts with $x_{j_1}$ by a geodesic path $x_{j_1} \cdots x_{j_k}$ that is strictly contained in one side of another bigon. This implies that the two geodesic rays $\{\eta\}$ and $\{\Phi_P(\zeta)\}$ belong respectively to the domains $D_{x_{i_1}}$ and $D_{x_{j_1}}$, as defined in the proof of Lemma 4.1.
Therefore the $\Phi_P$-images of these two points are $\{\Phi_P(\eta)\} = x_{i_2} \cdots x_{i_k}.w$ and $\{\Phi^2_P(\zeta)\} = x_{j_2} \cdots x_{j_k}.w$. By the same argument, $\Phi_P(\eta) \in I_{x_{i_2}}$ and $\Phi^2_P(\zeta) \in I_{x_{j_2}}$. After $k$ iterations we obtain:
\[ \{\Phi^k_P(\eta)\} = w = \{\Phi^{k+1}_P(\zeta)\}. \]
5 What Φ P is good for?
5.1 Some elementary properties.
We prove first some simple properties satisfied by Φ P with non trivial consequences for the group presentation. Recall that the presentation P defines uniquely the partition x i ∈X I x i and the map Φ P is the piecewise homeomorphism :
\[ \Phi_P(\zeta) = x_i^{-1}(\zeta), \quad \forall \zeta \in I_{x_i}. \]
Lemma 5.1 If $P$ is a geometric presentation of a co-compact surface group $\Gamma$ with all relations of length greater than 3, let $I_{x_i}$ be any interval of the partition $S^1 = \bigcup_{x_i \in X} I_{x_i}$ and $I_{x_i^{-1}}$ be the corresponding interval for the inverse generator. Let $I_{(x_i^{-1})-1}$ and $I_{(x_i^{-1})+1}$ be the two adjacent intervals to $I_{x_i^{-1}}$, with $I_{(x_i^{-1})-1}$ on the left (say) and $I_{(x_i^{-1})+1}$ on the right. Then there exists a subdivision point $L^j_{(x_i^{-1})+1} \in S$ on the left side of $I_{(x_i^{-1})+1}$ and another subdivision point $R^k_{(x_i^{-1})-1} \in S$ on the right side of $I_{(x_i^{-1})-1}$ such that
\[ \Phi_P(I_{x_i}) = S^1 - \left[\, R^k_{(x_i^{-1})-1},\; L^j_{(x_i^{-1})+1} \,\right[. \]
Proof. Observe that $\Phi_P$ is a homeomorphism on $I_{x_i}$ and so its image is an interval. The Lemma is proved by checking the $\Phi_P$-image of the two extreme points of the interval. The computation is the same as for the proof of the Markov property (see Figure 7). In particular, in the simple case where all the relations are even, the subdivision points $L^j_{(x_i^{-1})+1}$ and $R^k_{(x_i^{-1})-1}$ of the Lemma are the last point before $I_{x_i^{-1}}$ and the first point after $I_{x_i^{-1}}$, according to the orientation of the circle. The Lemma is illustrated by Figure 8. Let us make the explicit computation in this simple case and for one of the two points. The general case is proved exactly the same way. The left boundary point $(x_{i-1}, x_i)^{\infty}$ of $I_{x_i}$ is the limit point of the bigon ray $\beta^{\infty}(x_{i-1}, x_i)$. In the case where the relation defined by the corner $(x_{i-1}, x_i)$ is even, this point is written, as in [Rays-Even]:
\[ (x_{i-1}, x_i)^{\infty} = x_i.y^1_i.y^2_i \cdots y^k_i.\,\beta^{(\infty)}[(x_{\beta(i-1)}, x_{\beta(i)})^{opp}], \]
where the path x i .y 1 i .y 2 i .....y k i is the right side of the bigon β(x i-1 , x i ). The definition of Φ P on I x i implies that the image Φ P ((x i-1 , x i ) ∞ ) is written as the limit point of the ray :
\[ \{\Phi_P[(x_{i-1}, x_i)^{\infty}]\} = y^1_i.y^2_i \cdots y^k_i.\,\beta^{\infty}[(x_{\beta(i-1)}, x_{\beta(i)})^{opp}]. \]
This point is, by definition [Rays-Even], the left most subdivision point of I y 1 i (see Figure 7) and this partition interval is adjacent, on the right, to the partition interval I x -1 i . With the notations of Lemma 5.1 we have just checked that: Φ P [(x i-1 , x i ) ∞ ] is the last subdivision point, in S, on the left of the interval I x -1 i +1 . The proof for the boundary point on the right is the same and we obtain that Φ P [(x i , x i+1 ) ∞ ] is the last subdivision point on the right of the interval I x -1 i -1 . The proof for a general presentations is the same. For presentations with some relations of length 3 the conclusion is a little bit different: Lemma 5.2 If P is a geometric presentation of Γ with some relations of length 3, for instance xyz = Id, where the 3 generators {x, y, z} are different then, for all generators x i that do not belong to a relation of length 3, the conclusion of Lemma 5.1 holds. For the other generators, for instance x above, then :
\[ \Phi_P(I_x) = S^1 - \left[\, L^k_a,\; R^j_b \,\right[, \]
where $L^k_a$ is a left subdivision point of the interval $I_a$ that is adjacent on the right to $I_{x^{-1}}$ and $R^j_b$ is a right subdivision point of the interval $I_b$ that is adjacent to $I_y$ on the left.
In other words the images of the intervals $I_{x^{\pm 1}}, I_{y^{\pm 1}}, I_{z^{\pm 1}}$ miss two adjacent intervals ($I_{x^{-1}}$ and $I_y$ in the above case) plus some partition sub-intervals before and after. The proof is the same as for Lemma 5.1 (this particular case is given by Figure 9). There is a missing case, when the presentation has a relation of length 3 with 2 identical generators. These cases are particular and will be treated in the Appendix. It is interesting to notice that this particular type of presentation exists as a geometric presentation of a surface group, but only for non-orientable surfaces.
The two previous Lemmas have the following consequence : Corollary 5.3 If P is a geometric presentation of Γ with n generators and no relations of the form xxy = Id then the map Φ P is strictly expanding and, for all x ∈ S 1 , the number of pre-images under Φ P satisfies :
\[ 2n - 3 \;\leq\; |\Phi_P^{-1}(x)| \;\leq\; 2n - 1. \]
Proof. The expansivity property is proved directly, each interval of the Markov partition is mapped either on a single interval that is different from itself or is mapped to a union of intervals. In any case the second iterate is mapped to a union of more than two intervals. This argument also proves the transitivity of the map Φ P .
In the case where P has no relations of length 3 then each point in S 1 belongs to one of the intervals I x i and each such interval has a right and a left subinterval given by Lemma 5.1. The points in either the right or the left subintervals have exactly 2n -2 pre-images and the points in the "central" part have exactly 2n -1 pre-images. In this case only the image of the interval corresponding to the inverse generator is not a pre-image. If P has some relations of length 3 with 3 different generators then Lemma 5.2 implies that some points might have 2n -3 preimages.
For the two codings I and X, any initial word of length k is called an I-prefix (resp. X-prefix) of length k.
Comparison of entropies.
The last statement of the first main result, Theorem.1.1, is a comparison between the asymptotic geometry of the presentation P and the asymptotic dynamics of the map Φ P .
Theorem 5.8 The volume entropy h vol (P ) of the presentation P is equal to the topological entropy h top (Φ P ) of the Markov map Φ P .
The volume entropy is defined in the introduction and is now a classical invariant in geometric group theory (see for instance [DlH]). The topological entropy of a map is an even more classical topological invariant that was first defined in [AKM] and made precise later by Bowen (see for instance [Bo]). An important feature of Markov maps is the well known fact that their topological entropy is computable from the largest eigenvalue of an integer matrix (see below).
One way to relate the geometry of P with the dynamics of Φ P is to introduce a decomposition of the complexes Cay (j) (P ), for j = 0, 1, 2 " suited " with the intervals I x i on the boundary. This decomposition is a weak combinatorial version of the " half spaces" in hyperbolic geometry.
Recall that the partition interval
\[ I_{x_i} = \left[\, L_i = (x_{i-1}, x_i)^{\infty},\; R_i = (x_i, x_{i+1})^{\infty} \,\right[ \;\subset\; \partial\Gamma \]
is defined by two points that are limit of infinite geodesic rays, as in section 3. The bigon ray β ∞ (x i-1 , x i ) whose limit point is L i is the concatenation: β 1 i,L .β 2 i,L .....β k i,L ..., where the bigon β k i,L is defined by two geodesic paths : {(γ k i,L ) L , (γ k i,L ) R }, where the indices L and R stands for left and right. A similar writing defines the point R i . We define now a particular representative of the geodesic rays converging to L i and R i as :
\[ \{L_i\}_R := (\gamma^1_{i,L})_R.(\gamma^2_{i,L})_R \cdots (\gamma^k_{i,L})_R \cdots \qquad \{R_i\}_L := (\gamma^1_{i,R})_L.(\gamma^2_{i,R})_L \cdots (\gamma^k_{i,R})_L \cdots \]
These two particular rays start at the identity by the same letter x i and are otherwise disjoint. The compactification Cay 2 (P ) of the 2-complex Cay 2 (P ) is homeomorphic with the disc D 2 and the union of the two rays {L i } R {R i } L is a bi-infinite geodesic connecting L i and R i , after removing the initial common segment given by the letter x i . This bi-infinite geodesic is an embedded Jordan curve in D 2 and therefore it bounds a domain : K 2 i (P ) in Cay 2 (P ) so that K 2 i (P ) ∂Γ = I x i . This domain is contained in the domain D x i of section 4.1.
If we consider the " adjacent" domains K 2 i-1 (P ) and K 2 i+1 (P ) for the adjacent generators x i-1 and x i+1 we obtain the following property : Lemma 5.9 The domains K 2 i (P ) defined above satisfy :
\[ K^2_i(P) \cap \partial\Gamma = I_{x_i} \quad \text{and} \quad K^2_i(P) \cap K^2_{i-1}(P) = L_i \cup \{g^1_i; g^2_i; \dots\}, \]
where the $g^k_i$ are the special vertices in $Cay^2(P)$ where two minimal bigon paths meet along the bigon ray $\beta^{\infty}(x_{i-1}, x_i)$. The domain $K^2_i(P)$ is said to be suited with $I_{x_i}$.
Proof. The bigon ray β ∞ (x i-1 , x i ) is the union of the two special rays {L i } R and {R i-1 } L , where {L i } R is the boundary (left) of K 2 i (P ) and {R i-1 } L is the (right) boundary of K 2 i-1 (P ). These two rays meet at the extreme vertices of the bigons β k i,L , which are precisely the set {g 1 i ; g 2 i ; ....}.
The set of domains K 2 i (P ) for all x i ∈ X is not a partition of Cay 2 (P ) because infinitely many 2-cells are missing, as well as possibly infinitely many 1-cells. For 0-cells, identified with the group elements, the situation is simpler, as given by : Lemma 5.10 For every m ∈ N * , each group element g ∈ Γ -Id of length m, with respect to the presentation P , belongs to exactly one domain K 2 i (P ), except at most 2n elements, where n is the number of generators of P .
Proof. By planarity, each g ∈ Γ belongs to at most two K 2 i (P ) that are adjacent. Lemma 5.9 implies that if a vertex belongs to more than one domain it has to be on the boundary of the domain and there are at most two such points on the intersection of the sphere of radius m with a domain.
From the definition of the domains, each z ∈ K 2 i (P ) admits a geodesic writing as z = x i .w, where the path written w is contained in K 2 i (P ) and is possibly infinite. If z = g is a group element, i.e. of finite length, there is an open connected set Ω g ⊂ I x i such that any ζ ∈ Ω g belongs to the "shadow" of g in I x i , i.e. ζ has a geodesic ray writing as {ζ} = g.ρ and the geodesic ray is contained in K 2 i (P ). This property comes from the definition of the domains.
For every point ζ ∈ Ω g ⊂ I x i the map Φ P is well defined and Φ P (ζ) is given by a geodesic ray as {Φ P (ζ)} = w.ρ ⊂ K 2 i 1 (P ). We iterate the argument and there exists a geodesic ray: {Φ P (ζ)} = x i 1 .w ′ .ρ, where x i 1 .w ′ ∈ K 2 i 1 (P ) and g admits a geodesic writing {g} = x i .x i 1 .w ′ . After a finite iteration of Φ P on ζ we obtain a geodesic writing of g as g = x i .x i 1 ....x im and the word : x i .x i 1 ....x im is a X-prefix of length m + 1. This proves the following :
Lemma 5.11 Every g ∈ Γ admits a X-prefix as a geodesic writing in the presentation P .
The coding by X-prefix is not bijective for the group elements by Lemma 5.9, but Lemma 5.10 gives a uniform bound on the number of elements with more than one X-prefix.
We make now a counting argument, let us denote :
• σ m the number of elements of Γ of length m with respect to P .
• X m the number of X-prefix of length m.
• I m the number of I-prefix of length m.
Lemma 5.12 For m ∈ N large enough, one has σ m ≈ X m .
Proof. If the coding by X-prefix were bijective we would have $\sigma_m = X_m$ for all $m$. This is not the case, but Lemma 5.10 implies that these two numbers are equivalent for large $m$. The last counting argument relates the X-coding with the I-coding:
Lemma 5.13 There is a constant $K$ so that for all $m \in \mathbb{N}$: $X_m \leq I_m \leq X_{m+K}$.
Proof. Every I-coding defines an X-coding by the projection I-coding $\longrightarrow$ X-coding, given by $\{I_{(i_1,j_1)}, \dots, I_{(i_m,j_m)}, \dots\} \longrightarrow \{x_{i_1}, \dots, x_{i_m}, \dots\}$. This map is surjective on prefixes by Lemma 5.11, which proves the first inequality. Let $K$ be the maximal number of subintervals of the Markov partition among the intervals $I_{x_i}$, $x_i \in X$. If two points $\zeta$ and $\rho$ have the same I-prefix of length $m$ then $\Phi^k_P(\zeta)$ and $\Phi^k_P(\rho)$ belong to the same $I_{(i,j)}$ for each $k \in \{0, 1, \dots, m\}$ and in particular they belong to the same $I_{x_i}$. The worst case is when $\Phi^m_P(\zeta)$ and $\Phi^m_P(\rho)$ belong to the same $I_{x_i}$ but different $I_{(i,j)}$. By expansivity of $\Phi_P$, after at most $K$ iterations $\Phi^{m+K}_P(\zeta)$ and $\Phi^{m+K}_P(\rho)$ belong to different $I_{x_i}$. So different I-prefixes of length $m$ imply different X-prefixes of length at most $m + K$. This implies the second inequality.
Proof of Theorem 5.8. By definition of the volume entropy of $P$: $h_{vol}(P) = \lim_{m\to\infty} \frac{1}{m}\log(\sigma_m)$, and Lemma 5.12 implies $h_{vol}(P) = \lim_{m\to\infty} \frac{1}{m}\log(X_m)$. Finally Lemma 5.13 implies $h_{vol}(P) = \lim_{m\to\infty} \frac{1}{m}\log(I_m)$.
This last limit is classical (see [Shu] for instance) to compute via the "Markov transition matrix" $M(\Phi_P)$, defined as the integer matrix whose entries are $M_{a,b} = 1$ if the interval $I_b$ of the Markov partition (denoted $I_{(i,j)}$ above) is contained in the image $\Phi_P(I_a)$, and $M_{a,b} = 0$ otherwise. It is also classical that the norm $\|M^m\| = \sum_{a,b} |M^m_{a,b}|$ is exactly the number $I_m$ and
\[ \lim_{m\to\infty} \frac{1}{m}\log(I_m) = \lim_{m\to\infty} \frac{1}{m}\log(\|M^m\|) = \log(\lambda(M)), \]
where $\lambda(M)$ is the largest eigenvalue of the matrix $M$. A last classical result (see [Shu] for instance) is that $\log(\lambda(M)) = h_{top}(\Phi_P)$.
The usefulness of the map Φ P is now clear since, thanks to the Markov property, the topological entropy is a computable invariant, given by the logarithm of the spectral radius of the Markov transition matrix.
In practice the size of the Markov matrix is big but for simple examples the computation is possible.
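For illustration only, here is a minimal sketch (not part of the original text) of how such a computation can be carried out numerically: the entropy of a Markov map is the logarithm of the spectral radius of its 0/1 transition matrix. The small matrix below is a made-up toy example, not the actual matrix of any presentation discussed here.

```python
import numpy as np

def markov_entropy(transition_matrix):
    """Topological entropy of a Markov map: log of the spectral radius
    (largest eigenvalue modulus) of its 0/1 transition matrix."""
    eigenvalues = np.linalg.eigvals(np.asarray(transition_matrix, dtype=float))
    return float(np.log(np.max(np.abs(eigenvalues))))

# Toy 0/1 matrix on 3 Markov intervals, made up purely for illustration.
M_toy = [[1, 1, 0],
         [0, 1, 1],
         [1, 1, 1]]
print(markov_entropy(M_toy))
```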
One advantage of our construction is the fact that the Markov map is well defined for any geometric presentation so it makes the comparison of the volume entropy for different presentations possible.
It turns out that Corollary 5.3 has an immediate consequence:
Lemma 5.14 If $P$ is a presentation with $n$ generators (i.e. $|X| = 2n$) and no relations of the form $xxy = Id$, then the following inequalities are satisfied:
\[ \log(2n-3) \;\leq\; h_{top}(\Phi_P) = h_{vol}(P) \;\leq\; \log(2n-1). \]
Proof. The second inequality is well known, it is just the obvious comparison between the volume entropy of the group presentation with the one for the free group of the same rank. The first inequality is the new result, it is a direct consequence of Corollary 5.3.
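As a concrete numerical illustration (our own worked instance, assuming $n = 4$ as for the minimal geometric presentations of the genus two surface group considered below), the bounds read:

```latex
\[
\log(2\cdot 4-3)=\log 5 \approx 1.609
\;\leq\; h_{top}(\Phi_P)=h_{vol}(P) \;\leq\;
\log(2\cdot 4-1)=\log 7 \approx 1.946 .
\]
```

The exact minimal value computed in Corollary 5.18 below, approximately 1.943, indeed falls just under this upper bound.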
The next result is about the special presentations with relations of the form xxy = Id; we postpone its proof to the Appendix.
Lemma 5.15 If $P$ is a geometric presentation of $\Gamma$ with $n$ generators and some relations of the form $(*)\; xxy = Id$, then there exists a presentation $P'$ with $n-1$ generators and one relation of the form $(*)$ less than in $P$ so that $h_{vol}(P') \leq h_{vol}(P)$.
Finally we obtain : Theorem 5.16 The minimal volume entropy of a co-compact hyperbolic surface group is realized, among the geometric presentations, by the presentations with the minimal number of generators.
Lemma 5.14 and 5.15 imply that all the geometric presentations with the minimal number of generators have volume entropy less than the other geometric presentations of the same group. This minimum is realized since the number of such presentations, called minimal, is finite. It remains to prove the following result: Lemma 5.17 All the minimal geometric presentations have the same volume entropy.
Proof. Observe that the minimal geometric presentations of co-compact surface groups are very classical, for orientable surfaces of genus g for instance, they have 2g generators and one relation of length 4g. There are still several possibilities, for instance in genus 2 here are two distinct presentations : <a, b, c, d/aba
-1 b -1 cdc -1 d -1 = id> and <a, b, c, d/aba -1 cdc -1 b -1 d -1 = id>.
From the Markov map point of view these two presentations give two different maps but the difference is simply the ordering of the intervals I x i along the circle S 1 = ∂Γ. All these different maps are constructed with one relation with the same even length and thus all these maps are combinatorially conjugated, in particular the Markov matrices are the same, up to permutation of the indices, and therefore the topological entropies are the same. At the Cayley graph level, the proof is also immediate.
An example.
For an example, we compute the map Φ P for the classical presentation of an orientable surface of genus 2. We give some (partial) explicit computations for the geometric presentation :
P =< a, b, c, d/a.b.a -1 .b -1 .c.d.c -1 .d -1 >.
A small part of the Cayley 2-complex is shown in Figure 10, as well as the subdivision of the interval I a of the partition. We show bellow the computation for this particular interval and this is enough by the symmetry of the presentation.
Corollary 5.18 The minimal volume entropy among geometric presentations of genus two surfaces is
\[ \log\left( \frac{3 + \sqrt{17} + \sqrt{22 + 6\sqrt{17}}}{2} \right). \]
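A direct numerical evaluation of this constant (our own sanity check, not part of the original text) and a comparison with the bounds of Lemma 5.14 for n = 4 can be done in a couple of lines:

```python
import math

# Numerical value of the constant in Corollary 5.18.
growth = (3 + math.sqrt(17) + math.sqrt(22 + 6 * math.sqrt(17))) / 2
print(growth, math.log(growth))   # ~6.97984 and entropy ~1.94299
print(math.log(5), math.log(7))   # Lemma 5.14 bounds for n = 4: ~1.60944, ~1.94591
```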
The circle is oriented clockwise and the interval I a is the concatenation of the 7 intervals I a,1 ......I a,7 . The subdivision points are denoted : {L a , L a,1 , L a,2 , R a,3 , L a,3 , R a,2 , R a,1 , R a }. With the notations of section 4 these points are written as the limit of the following rays:
\[ \begin{array}{ll}
\{L_a\} = a.b.a^{-1}.b^{-1}.\beta^{\infty}[(b, c)^{opp}], & \{R_a\} = a.b^{-1}.a^{-1}.d.\beta^{\infty}[(c, d^{-1})^{opp}], \\
\{L_{a,1}\} = a.b.a^{-1}.\beta^{\infty}[(a, b^{-1})^{opp}], & \{R_{a,1}\} = a.b^{-1}.a^{-1}.\beta^{\infty}[(d, a)^{opp}], \\
\{L_{a,2}\} = a.b.\beta^{\infty}[(b^{-1}, a^{-1})^{opp}], & \{R_{a,2}\} = a.b^{-1}.\beta^{\infty}[(a^{-1}, b)^{opp}], \\
\{L_{a,3}\} = a.\beta^{\infty}[(a^{-1}, b)^{opp}], & \{R_{a,3}\} = a.\beta^{\infty}[(b^{-1}, a^{-1})^{opp}].
\end{array} \]
The computation of section 4 gives the following images of these points :
\[ \begin{array}{ll}
(1)\ \Phi_P(L_a) = L_{b,1}, & (5)\ \Phi_P(R_a) = R_{b^{-1},1}, \\
(2)\ \Phi_P(L_{a,1}) = L_{b,2}, & (6)\ \Phi_P(R_{a,1}) = R_{b^{-1},2}, \\
(3)\ \Phi_P(L_{a,2}) = L_{b,3}, & (7)\ \Phi_P(R_{a,2}) = R_{b^{-1},3}, \\
(4)\ \Phi_P(L_{a,3}) = L_d, & (8)\ \Phi_P(R_{a,3}) = R_{d^{-1}}.
\end{array} \]
The image of the extreme points of an interval gives the image of the interval, so the images of the partition intervals follow from (1)-(8). The computation of the Markov matrix is now tedious and long: it gives a 56 by 56 integer matrix with many repeating blocks. The computation of the largest eigenvalue has no interest in itself and can be done with any computer. It is interesting to notice that for this classical presentation, the map defined by Bowen and Series in [BS] is a little bit different and less symmetric than the one presented here, but the number of subdivision intervals is the same (56 in this case) and the computation of the entropy gives the same value.
The characterisation of the minimal volume entropy of surface groups needs some more steps and some new ideas. It is natural to conjecture that the geometric minimum obtained in this paper is a candidate to be the absolute minimum. Another class of questions would be to understand more properties of the special circle maps that are defined here. Of course the real challenge would be to define similar maps for arbitrary presentation of an arbitrary hyperbolic group and in particular when the boundary is a higher dimensional sphere.
6 Appendix: Presentations with a relation xxy = Id.
This Appendix is a proof of Lemma 5.15. It is treated separately since a new strategy is necessary. The case where the presentation $P$ has a relation of the form $xxy = Id$ is not covered by Lemmas 5.1 and 5.2, and the conclusions of Corollary 5.3 and Lemma 5.14 are wrong in this case. We assume that $P$ has a relation of the form $(*)\; xxy = Id$. The construction of the map $\Phi_P$ is the same as in section 4 and we consider the local structure of the 2-complex $Cay^2(\Gamma, P)$ in this particular case (see Figure 11). We consider the partition intervals defining the map $\Phi_P$ with special attention to the 4 intervals $I_{x^{\pm 1}}, I_{y^{\pm 1}}$. For the other intervals the conclusions of the Lemmas 5.1 and 5.2 are valid. We observe in particular that $\Phi_P(I_x)$ misses 3 intervals, $I_x, I_{x^{-1}}, I_y$, as well as a sub-interval before and after (the same is true for $\Phi_P(I_{x^{-1}})$). The conclusion of Lemma 5.2 does not apply for these two intervals but it does for $I_y$ and $I_{y^{-1}}$. The new step is to transform the presentation $P$ by a specific Dehn twist that preserves the geometric structure, i.e. by a surface Dehn twist (as opposed to a free group Dehn twist). Algebraically this twist is given by $\tau : P \longrightarrow P'$ such that $\tau(y) = yx := z$ and $\tau(x_i) = x_i$, $\forall x_i \in X - \{y\}$. We check that $P'$ is geometric since the Dehn twist $\tau$ is realized on the surface, and we compute the map $\Phi_{P'}$ (see Figure 12). Lemma 6.1 With the above notations the topological entropy satisfies $h_{top}(\Phi_P) \geq h_{top}(\Phi_{P'})$.
Proof. The number of generators is the same for P and P ′ and the automorphism τ gives an identification between the generators that induces an identification of the intervals I x i .
The partitions for the two Markov maps are different, in particular because the relations have different lengths, but each interval $I_{x_i}$ has a rough partition obtained from the proof of Lemma 5.1 (and 5.2). Indeed, each interval $I_{x_i}$ has a left part $I^L_{x_i}$, a central part $I^C_{x_i}$ and a right part $I^R_{x_i}$. These particular partition points are the images under $\Phi_P$ of the extreme points $\Phi_P(\partial I_{x_i})$, $x_i \in X$. The proof of Corollary 5.3 is based on the fact that, in the cases covered by Lemma 5.1, each point in these sub-intervals has $2n-1$ pre-images for $I^C_{x_i}$ and $2n-2$ pre-images for $I^L_{x_i}$ and $I^R_{x_i}$. In these cases we say that the corresponding interval $I_{x_i}$ is of type (2n-2, 2n-1, 2n-2).
For the cases covered by Lemma 5.2 all the intervals I x i such that x i does not belong to a relation of length 3 are also of type (2n-2, 2n-1, 2n-2) and the generators x i that belong to a relation of length 3 define intervals of type (2n-2, 2n-2, 2n-3) (or (2n-3, 2n-2, 2n-2)).
For the map Φ P , where P has a relation of the form (*) xxy = Id, we observe that the intervals I x i , x i / ∈ {x ±1 } are of type (2n -2, 2n -1, 2n -2) or (2n -2, 2n -2, 2n -3) and the special intervals I x ±1 are of type (2n -3, 2n -3, 2n -4). These values are obtained by direct checking (see Figure 11). For the map Φ P ′ the situation is quite different. We observe, with the above notations (see Figure 12), that the image Φ P ′ (I x ±1 ) covers n -1 intervals I x i minus one sub-interval (I L or I R ) on one side and Φ P ′ (I z ±1 ) also covers n -1 intervals I x i minus one sub-interval (I L or I R ) on one side. Another particular property of this map is that Φ P ′ (R x ) = Φ P ′ (L z -1 ) and Φ P ′ (L z ) = Φ P ′ (R x -1 ). These observations imply that each interval I x i , x i / ∈ {x ±1 , z ±1 } that was of type (a, b, c) for Φ P is now of type (a -2, b -2, c -2) for Φ P ′ . The last 4 intervals I x ±1 and I z ±1 are of respective types (2n -4, 2n -3, 2n -3) and (2n -3, 2n -3, 2n -4). This imply that the number of pre-images grow much slower, under iteration by Φ P ′ than for Φ P . This completes the proof of Lemma 6.1.
This example shows that an elementary transformations of the presentation, like a Dehn twist here, may have a very strong impact on the entropy even when the number of generators is fixed.
The next step is to remove the unnecessary generator (x or z). Proof. The bigon β(x, z -1 ) given by the relation of length 2 defines a partition point (x, z -1 ) ∞ that has two sides, one is called R x and the other L z -1 for the two adjacent intervals I x and I z -1 . The main observation is now that : Φ P ′ (R x ) = Φ P ′ (L z -1 ) (Figure 12 shows an example of this fact). The map Φ P ′ is thus continuous at (x, z -1 ) ∞ and Φ P ′ is a homeomorphism on the larger interval I x I z -1 . Therefore we can remove the partition point (x, z -1 ) ∞ without changing the dynamics. Removing the partition point corresponds, for the presentation, to replace the 2-cell corresponding to the relation (**) by a single edge, for instance x. Since the two dynamics before and after removing the partition point are the same the entropy are the same. This completes the proof of Lemma 5.15.
Figure 3: Opposite corner and bigon rays.
... $\gamma^{(k)}_{\epsilon(k)}$ and the first generator of the next segment $\gamma^{(k+1)}_{\epsilon(k+1)}$ is always satisfied. For the second statement of Lemma 3.1, each minimal bigon in the sequence $\{\beta^{(n)}\}_n$ has finite length by Lemma 2.2 and the number of different such bigons is finite; this completes the proof of Lemma 3.1.
Figure 4: Partition of the circle, definition of $\Phi_P$.
Figure 6: Subdivision bigon rays, odd case.
Figure 7: The action of the map $\Phi^{\Gamma}_P$.
Figure 8: The image of a partition interval under $\Phi_P$.
Figure 9: Image of a partition interval under $\Phi_P$ with a relation of length 3.
Figure 10: The genus 2 case.
Figure 11: The particular case $xxy = Id$.
For the new presentation, only two relations are transformed (according to Figures 11 and 12): $\{xxy,\ a^{-1}.y.b\dots,\ *,\ *,\ *\dots\} \longrightarrow \{xz,\ a^{-1}.z.x^{-1}.b\dots,\ *,\ *,\ *\dots\}$.
Lemma 6.2 If $P'$ is a presentation with $n$ generators and a relation of length 2, $(**)\; xz = Id$, then the new presentation $P''$ obtained from $P'$ by removing the generator $z$ (for instance) and the relation $(**)$ satisfies $h_{top}(\Phi_{P'}) = h_{top}(\Phi_{P''})$.
Figure 12: New presentation with a length 2 relation.
Orbit equivalence.
The orbit equivalence is a relation between the group action on the boundary and the dynamical properties of the map $\Phi_P$. More precisely: two points $\zeta$ and $\eta$ in $\partial\Gamma$ are in the same $\Gamma$-orbit if there exists $\gamma \in \Gamma$ such that $\zeta = \gamma(\eta)$. The two points are in the same $\Phi_P$-orbit if there exist $n, m \in \mathbb{N}$ so that $\Phi^n_P(\zeta) = \Phi^m_P(\eta)$. The two actions $\Gamma$ and $\Phi_P$ on $\partial\Gamma$ are orbit equivalent if any two points in the same $\Gamma$-orbit are in the same $\Phi_P$-orbit and conversely.
Theorem 4.6 The two actions Γ and Φ P on ∂Γ are orbit equivalent.
One direction of this equivalence is obvious: if $\Phi^n_P(\zeta) = \Phi^m_P(\eta)$ for some $n, m \in \mathbb{N}$, then the definition of $\Phi_P$ by group elements implies the existence of $\gamma \in \Gamma$ so that $\zeta = \gamma(\eta)$.
The other direction requires some work. Assume that ζ = γ(η), since X = {x ±1 1 , ..., x ±1 n } is a generating set we restrict to the case γ = s ∈ X. Let ζ, η ∈ ∂Γ be written as limit points of geodesic rays starting at the identity. Since ζ = s(η) then either ζ admits a geodesic writing as : {ζ} = {s.x ′ i 2 .x ′ i 3 ...} or η admits a geodesic writing {η} = {s -1 .x ′ j 2 .x ′ j 3 ...}. In other words either ζ belongs to the cylinder C s or η belongs to the cylinder C s -1 . Let us assume for instance that ζ ∈ C s .
From the definition of Φ P then either ζ ∈ I s or ζ ∈ C s -I s . In the first case Φ P (ζ) = s -1 (ζ) = η and the result is proved. In the last case, two situations are possible:
(1) ζ ∈ I x i-1 , where x i-1 ∈ X is the generator adjacent to s = x i on the left, (2) ζ ∈ I x i+1 , where x i+1 ∈ X is the generator adjacent to s = x i on the right. We consider only the first case since the two situations are symmetric. By assumption ζ ∈ C s=x i C x i-1 and thus ζ admits two geodesic writings : ( * ) {ζ} = x i .w 1 = x i-1 .w 2 . In addition, ζ ∈ I x i-1 C x i and the definition of the subdivision, in section 4.1-2, implies that ζ belongs to the right most interval of the partition of I x i-1 . This implies that the geodesic writing ( * ) is given more precisely as :
\[ \{\zeta\} = x_i.w'_1.w = x_{i-1}.w'_2.w, \quad \text{where:} \]
$x_i.w'_1$ and $x_{i-1}.w'_2$ are the two geodesic paths defining the minimal bigon $\beta(x_{i-1}, x_i)$ and $w$ is a geodesic continuation of $\{x_i.w'_1, x_{i-1}.w'_2\}$ converging to $\zeta$. This property is a consequence of Lemma 2.2. Indeed, condition $(*)$ implies the existence of a finite length bigon in $B(x_{i-1}, x_i)$ that can be chosen to be the minimal bigon $\beta(x_{i-1}, x_i)$.
By assumption $\zeta = s(\eta) = x_i(\eta)$ and thus a geodesic writing of $\eta$ is
\[ \{\eta\} = w'_1.w = x_{i_1} \cdots x_{i_k}.w. \]
Observe that the cases of presentations with relations of length 2 has not been considered. In this case there is a redundant generator that can be removed. This fact is obvious from a combinatorial group theory point of view, it will be justified from a dynamical system point of view in the appendix (Lemma 6.2).
Symbolic coding from Markov partition.
The map $\Phi_P$ is Markov and strictly expanding, so the standard methods in symbolic dynamics apply (see for instance [Shu], [Bo]) and define a symbolic coding of the orbits:
As usual in this context we don't consider the finite collection of orbits of the boundary of the partition S.
Definition 5.4 Each point $\zeta \in S^1 - S$ admits a symbolic coding on the alphabet $I = \{I_{(i,j)}\}$, where each $I_{(i,j)}$ is an interval of the Markov partition of $\Phi_P$, with $\bigcup_{j=1,\dots,K_i} I_{(i,j)} = I_{x_i}$, $x_i \in X$. This coding $C : S^1 - S \to I^*$, where $I^*$ is the set of infinite words in the alphabet $I$, is defined by:
\[ C(\zeta) = \{I_{(i_1,j_1)}, I_{(i_2,j_2)}, \dots, I_{(i_k,j_k)}, \dots\}, \quad \text{where } \Phi^k_P(\zeta) \in I_{(i_k,j_k)}, \; \forall k \in \mathbb{N}. \]
Lemma 5.5 The coding C is called an I-coding, it is injective and defines a sub-shift of finite type.
The sub-shift property is classical for expanding Markov maps (see for instance [Bo]), as well as the injectivity. It is a consequence of the fact that $\mathrm{length}\big[\bigcap_{k=1}^{\infty} \Phi^{-k}_P(I_{(i,j)})\big] = 0$, for all $I_{(i,j)}$.
What is less standard is the fact that the map Φ P is defined via a group action. In addition, the partition and the action, reflect the action of the generators of the group presentation. The I-coding induces an X-coding by forgetting the second index of each letter I(i, j) in the alphabet I, more precisely:
Definition 5.6 The X-coding $\chi$ of any $\zeta \in S^1 - \bigcup_{x_i \in X} \partial I_{x_i}$ is defined by $\chi(\zeta) = \{x_{i_1}, x_{i_2}, \dots, x_{i_k}, \dots\}$, where $\Phi^k_P(\zeta) \in I_{x_{i_k}}$, $\forall k \in \mathbb{N}$.
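As a purely illustrative aside (not part of the original argument), the X-coding is simply the itinerary of a point through the labeled partition intervals under iteration of the map; a generic sketch, assuming one is given a map `phi` on the circle and a function `interval_label` returning the generator label of the partition interval containing a point, could read as follows.

```python
def x_coding_prefix(zeta, phi, interval_label, length):
    """Finite X-prefix of a boundary point: the labels of the partition
    intervals visited by its orbit under the Markov map (generic sketch)."""
    labels = []
    point = zeta
    for _ in range(length):
        labels.append(interval_label(point))  # the generator x_i with point in I_{x_i}
        point = phi(point)                    # one application of the Markov map
    return labels
```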
Lemma 5.7 The X-coding is injective and defines a sub-shift of finite type.
The X-coding is injective, again by expansivity of the map, and the fact that $\mathrm{length}\big[\bigcap_{k=1}^{\infty} \Phi^{-k}_P(I_{x_i})\big] = 0$, for all $I_{x_i}$.
04106703 | en | ["info.info-ai", "info.info-ts"] | 2024/03/04 16:41:22 | 2023 | https://pastel.hal.science/tel-04106703/file/2023UPSLM005_archivage.pdf
Martin Bauw, Santiago Velasco-Forero, Jesus Angulo, Claude Adnet
The first chapter introduces the radar discrimination problem addressed by the thesis and the machine learning tools used in the solution put forward. The specificity of the discrimination developed in terms of available supervision and targets representation resolution is described. The data format processed, inherited from the radar systems and consisting in complex-valued signals registered over brief time segments with a sampling covering few time steps, is illustrated and enriched with a neighborhood information.
The link between the radar constraints from which the originality of this work stems and air surveillance radars with rotating antennas is established. A breakdown of the radar targets discrimination task into a two steps filter is proposed, where a first step encodes representations in a shared representation space, and a second step implements a low-supervision discrimination. The latter step is based on a one-class classification which can be understood as an anomaly detection method.
Chapter contribution:
• Enriched hit format definition to take into account a fixed-size neighborhood information centered on the range cell responsible for the hit detection.
• Two-steps processing proposition, encoding and one-class classification-based discrimination, adapted to the targets representations diversity and resolution, but also to the weak supervision of the separation addressed.
Chapter 2: Encoding IQ signals
The second chapter proposes different options for the implementation of the first encoding step of the filter proposed in the first chapter. Encoding approaches, including neural network architectures adapted to representation learning, at the scale of one or a neighborhood of range cells are put forward. The combination of representations over a neighborhood of range cells through a graph is suggested. The encoding methods mentioned harness either an autoregressive model, or a neural network that may be trained with a generative or non-generative objective. The generative nature of a neural network is associated with the training potential of the encoding in a weak supervision context, the reconstruction error making it possible to define a training loss without labels. The preferred option in this work being a neural network and the data being complex-valued IQ signals, notions specific to complex-valued neural networks and to the optimization of the latter are introduced. The explainability of the neural networks defined is discussed thanks to the existing equivalence between finite impulse response filters and the one-dimensional convolution, the latter constituting the first layer of the non-recurrent neural networks studied. Preliminary results are presented and suggest the relevance of an encoding architecture which trains a complex-valued neural network to encode the content of a single range cell described heterogeneously in a shared representation space.
Chapter contribution:
• Proposal of a diversity of approaches for the enriched hit format encoding covering a neighborhood of range cells using neural networks and autoregressive models.
• Integration of a combination of existing tools stemming from the machine learning community for the encoding of IQ signals with a varying sampling: complex-valued neural networks, graph neural networks, recurrent models, loss functions for different levels of training supervision.
• Preliminary results presentation for the encoding of single range cells with simulated pulse Doppler radar data.
Chapter 3: One-class classification for radar targets discrimination
This chapter reviews different one-class classification approaches from the literature and suitable for the implementation of the second step of the radar targets filter proposed in the first chapter. Achieving the radar targets separation with a one-class classification appears relevant due to the low supervision context of the separation conducted: all the targets classes are not necessarily known, and the identified classes are not necessarily described by representative data sets. One-class classification enables the definition of a reference class based on a limited number of labeled data points in order to produce a score translating into the degree of membership to that reference class. For the radar operator's mission, such a score can thus help decide whether a target is close enough to an arbitrary set of classes possibly gathering diverse targets. The definition of an arbitrarily complex reference class leads to the consideration of "near" and "far" outof-distribution detection concepts which emphasize the difficulty of separating targets that may appear semantically close. A one-class classification method from the literature based on a neural network gathering output representations from the reference class next to a latent centroid is modified. The latent centroid here defines a location estimator of the reference class. Whereas the method from the literature minimizes the Euclidean distance between the output representations of the training data and the latent centroid, the proposed modified version minimizes an outlyingness measure computed with a set of random projections. A set of normalized distances, each distance being defined through a predetermined random projection, allows for the definition of a robust outlyingness measure for multidimensional representations. This measure is directly inspired by a stochastic approximation of a statistical depth, and makes it possible to work with ellipsoids without computing covariance matrices and their inverse to yield a distance.
The intuition behind the use of random projections is illustrated and used, as is the Euclidean distance in the literature, to define unsupervised and semi-supervised one-class classifications.
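For readers unfamiliar with this kind of measure, a minimal sketch of the general idea of random-projection outlyingness is given below; the function name, the use of median/MAD normalization and the maximum over directions are illustrative assumptions about the generic technique, not necessarily the exact variant used in the thesis.

```python
import numpy as np

def projection_outlyingness(x, reference, n_projections=1000, seed=0):
    """Outlyingness of x w.r.t. a reference sample: maximum over random unit
    directions of the median-/MAD-normalized projected deviation (sketch)."""
    rng = np.random.default_rng(seed)
    directions = rng.normal(size=(n_projections, reference.shape[1]))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    projected_ref = reference @ directions.T          # (n_samples, n_projections)
    med = np.median(projected_ref, axis=0)
    mad = np.median(np.abs(projected_ref - med), axis=0) + 1e-12
    return float(np.max(np.abs(x @ directions.T - med) / mad))
```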
Summary of the chapters (Résumé des chapitres)
Chapter 1: Pulse Doppler radar, the targeted discrimination, and an overview of the proposed solution
The first chapter introduces the radar discrimination problem addressed by the thesis and the machine learning tools used in the proposed solution. The specificity of the targeted discrimination in terms of available supervision and of targets representation resolution is made explicit. The data format to be processed, inherited from radar systems and consisting of complex-valued signals represented over brief time segments with a varying sampling covering few time steps, is illustrated and enriched with neighborhood information. The link between the radar constraints that make this work original and air surveillance radars with rotating antennas is established. A breakdown of the radar targets discrimination task into a two-step filter is proposed, with a first step encoding the representations into a shared representation space, and a second, low-supervision discrimination step. The encoding step is defined by a neural network, as the discrimination step can be. The latter is based on a one-class classification that can be viewed as an anomaly detection method.
Chapter contribution:
• Definition of an enriched hit format that includes fixed-size neighborhood information centered on the range cell that triggered the detection of a hit.
• Proposal of the two-step processing, encoding followed by discrimination through one-class classification, adapted to the diversity and resolution of the targets representations, but also to the weak supervision of the targeted separation.
Chapter 2: Encoding IQ signals
The second chapter proposes different options for the implementation of the first, encoding step of the filter proposed in the first chapter. Encodings, notably neural network architectures adapted to representation learning, at the scale of a single range cell or of a neighborhood of several range cells are put forward. The combination of the representations of a neighborhood of range cells through a graph is suggested. The encoding methods mentioned exploit either an autoregressive model or a neural network, the latter coming in a generative and a non-generative version. The generative nature of a neural network is associated with the training potential of the encoding in a weak supervision context, the reconstruction error being able to define a loss function without any label. The solution favored by this doctoral work being a neural network and the data being complex-valued IQ signals, notions specific to neural networks with complex-valued parameters and to their optimization are introduced. The explainability of the neural networks handled is discussed thanks to the equivalence between a finite impulse response filter and a one-dimensional convolution, the latter defining the first layer of the non-recurrent networks exploited. Preliminary results are presented and suggest the relevance of an encoding architecture that trains a complex-valued neural network to encode the content of a single range cell, described heterogeneously, into a shared representation space.
Chapter contribution:
• Proposal of a diversity of approaches for the encoding of the enriched hit format covering a neighborhood of range cells with neural networks and autoregressive models.
• Integration of a combination of existing tools from the machine learning community for the encoding of IQ signals with varying sampling: complex-valued neural networks, graph neural networks, recurrent models, loss functions for different levels of supervision during training.
• Presentation of preliminary results for the encoding of individual range cells with simulated pulse Doppler radar data.
Chapter 3: One-class classification for the discrimination of radar targets
This chapter reviews different one-class classification methods from the literature that are suitable for the implementation of the second step of the filtering proposed in the first chapter. Addressing the separation of radar targets with a one-class classification appears relevant because of the weak supervision context of the separation performed: the set of target classes is not necessarily known, and the identified classes are not necessarily described by representative sample sets. One-class classification allows the definition of a reference class from a limited quantity of labeled data points, so as to produce a score translating the membership, or not, to this reference class. For the mission of a radar operator, this score can thus provide decision support by raising or not an alert for targets too close to or too far from an arbitrary reference class, the latter ideally being able to gather different types of targets. The definition of an arbitrarily complex reference class leads to a reflection on the notions of "near" and "far" out-of-distribution detection, which translate the separation difficulty related to the semantic proximity of the processed targets. A one-class classification method from the literature, based on a neural network that concentrates the output representations belonging to the reference class around a latent centroid of that same class, is adapted. Where the method from the literature minimizes a Euclidean distance between the output representations of the training data and the latent centroid, the variant proposed by this thesis minimizes an outlyingness measure derived from a set of random projections. A set of normalized distances, each distance being defined by a fixed random projection, makes it possible to define a robust outlyingness measure for multidimensional representations. The measure in question is directly inspired by a stochastic approximation of the notion of statistical depth, and makes it possible to work with ellipsoidal distributions while avoiding the computation of a covariance matrix and then of its inverse to obtain a distance. The intuition behind the use of random projections is illustrated and used, as is the Euclidean distance in the literature, to define unsupervised and semi-supervised one-class classifications.
Chapter 1
Pulse Doppler radars, discrimination task and solution overview
This thesis aims at proposing novel machine learning-powered processing to discriminate more efficiently between radar targets using Doppler features, with a focus on air surveillance radars as sensors and small and slow objects as targets. The solution put forward will emphasize the choice of one-class classification for low-supervision discrimination and will take into account the resolution constraint associated with pulse Doppler air surveillance radars. This chapter describes the motivation of the thesis from a radar perspective and presents an overview of the answer proposed to tackle the problem addressed. Since this manuscript is written both for radar and machine learning engineers and researchers, footnotes and reminders are present to make notions specific to one field understandable to people from the other field (see for example the brief reminder dedicated to deep learning in 1.3.4, and the one dedicated to signal processing in 1.3.5).
The radar motivation
Intuitively, radar amounts to sending energy in a certain direction and detecting objects with the energy reflected back. Whereas sonar uses sound, i.e. a mechanical wave, radar harnesses electromagnetic waves. Radar mostly relies on two attributes to discriminate between targets: the target apparent size and its velocity with respect to the sensor, i.e. its radial velocity. When the radar itself sends energy that will reflect on targets, one considers active radars. On the other hand, when the radar only harnesses reflections of signals it has no control over, radars are said to be passive. The same distinction exists between active and passive sonar. One of the key advantages of passive sensors is their electromagnetic stealth, the reliance on external transmitters remaining a challenge. This work concerns itself with active radars, whose transmitted signal is thus deliberately designed to characterize potential targets. The fundamental equation behind the electromagnetic remote sensor intuition is the radar equation, which defines the power received by the antenna $P_R$ as a function of the transmitted power $P_T$ and wavelength $\lambda$, the antenna gain $G$, the distance to the target $R$, and the effective target area $\sigma$ called radar cross-section (RCS) [Levanon, Radar principles]:
\[ P_R = \frac{P_T G^2 \lambda^2 \sigma}{(4\pi)^3 R^4} \qquad (1.1) \]
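As a quick illustration (a minimal sketch with made-up parameter values, not taken from the thesis), the received power predicted by equation (1.1) can be evaluated directly:

```python
import math

def received_power(p_t, gain, wavelength, rcs, r):
    """Radar equation (1.1): power received back at the antenna."""
    return p_t * gain**2 * wavelength**2 * rcs / ((4 * math.pi)**3 * r**4)

# Made-up example values: 1 kW transmitted, gain 1000, 3 cm wavelength,
# 0.01 m^2 RCS (small drone-like target), 10 km range.
print(received_power(p_t=1e3, gain=1e3, wavelength=0.03, rcs=0.01, r=10e3))
```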
Radars come in all shapes and sizes. Antenna design, transmit power, waveform, and other sensor parameters widely vary depending on the radar mission. This thesis concentrates on air surveillance radars with frequent updates and long range constraints. Historically, civilian and military radars dedicated to air surveillance could count on the fact that the relevant targets they needed to detect were either relatively big, or relatively fast, or both at the same time. This allowed for the elimination of numerous clutter-related false alarms, where clutter consists of undesirable reflected signals. Indeed, such targets allow the sensor to immediately discard small and slow detections, which populate the target domain containing most clutter. However, with the advent of commercially available drones accessible to the general public, small and slow detections cannot be as easily removed anymore. In addition to a possibly very discreet radar cross-section, a drone can have a null bulk speed while airborne and active. Air surveillance radars should now ideally be able to cherry-pick relevant small and slow targets, such as drones, before discarding the latter category altogether.
Figure 1.1: Radar processing pipeline. The processing proposed uses the output of the detection step, the hits, as input. The processing steps can be described in simple terms as follows: before detection, there is beamforming to locate the target direction, pulse compression to increase range resolution and Doppler filtering to generate velocity descriptions of targets. The background block consists in a filter to discard potential detections that are too consistent with the weather. The detection step uses a threshold to define one hit each time a backscatter deserves discriminative processing, the extractor step refines the hits by agglomerating them when relevant, and the tracking step creates tracks outlined by successive plots. The steps located between the beamforming and detection blocks, the two latter being included, constitute the actual signal processing pipeline within the radar receiver chain.
In order to ensure the detection of relevant small and slow targets, the radars for which this work is intended use decreased detection thresholds at the Detection stage of the pipeline depicted on Fig. 1.1, letting through potentially relevant targets as well as clutter. The fact that supplementary clutter will be part of the detections let through by the lowered thresholds is unavoidable, the small and slow targets domain usually containing a lot of unwanted backscatter. Lowering the detection thresholds can thus lead to a dramatic increase in the number of detections, each of which requires processing in the radar processing pipeline. This increase in detections can in turn lead to an excessive amount of false alarms (false positives), imposing an additional false alarm filter. This false alarm filter should be implemented as early as possible once the additional detections are admitted into the processing pipeline, this to avoid spending scarce computing resources and the operator's attention on pointless detections.
The proposed targets discrimination aims at alleviating the false alarms excess and will therefore take the detections called hits as input, and filter the latter before passing on the filtered detections to the Extraction step, as suggested on Fig. 1.1. This positioning choice in the radar processing pipeline makes this work complementary to existing approaches working on subsequent detection representations such as aggregated hits, i.e. plots, and tracks [START_REF] Yao | Trajectory clustering via deep representation learning[END_REF]. Hybrid approaches also combine several of the aforementioned representations in setups falling within the domain of sensor fusion [START_REF] Moruzzis | Automatic target classification for naval radars[END_REF]. Whereas processing plots could seem similar since features are similar to the ones attached to hits, discriminating between tracks moves the targets separation task away from radio signal processing and closer to common time-series or spatial trajectories processing.
It is important to realize that processing the excess of false alarms stemming from lowered thresholds only at these later stages is ill-advised, since it would likely imply wasting computing resources. Now, what does processing hits actually mean? As a hit is a detection defined by a variety of features, one needs to choose which features to process to discriminate targets. The proposed filter will be based on a single feature of the hit object within the radar processing pipeline. This feature is defined in 1.2.
Pulse Doppler radars
1.2.1 Pulse Doppler radar targets backscatter
This work focuses on a specific kind of radar called pulse Doppler radar (PDR). Pulse Doppler radars send bursts of pulses of modulated electromagnetic waves toward targets to detect and characterize them. After one pulse is sent in the scanned direction, the received signals are sampled to retrieve the pulse reflections. The sampling frequency of the radar receiver defines a space discretization in the scanned direction, i.e. leads to the description of range bin contents with the pulse reflections. Each range bin is described by one complex coefficient per pulse, the coefficient being complex because of the I/Q reception of the radar [Alabaster, Pulse Doppler Radar: Principles, Technology, Applications; Levanon, Radar principles].
(Figure caption) The proposed detections filter relies on I/Q samples stemming from an I/Q detector [Levanon, Radar signals]. This I/Q detector takes the received (RX) signal as input, either at its carrier frequency or at a lower intermediate frequency. The received signal is passed through a bandpass filter beforehand to limit the noise [Levanon, Radar principles]. The I/Q signal processed is complex-valued, of the form I + jQ. This corresponds to a usual quadrature sampling setup, which can also be called complex sampling.
This vector of complex coefficients defines the hit object feature used by the proposed filter to discriminate targets, and is called I/Q sweep response. In this thesis, it will also be called I/Q signal since no ambiguity is possible. In order to provide as much information as possible to the targets discrimination, the existing feature is enriched to take into account a neighborhood of range cells centered around the cell of the detection instead of only the latter. It is important to understand that this is a neighborhood in ranges and not in azimuths. This transforms the 1D complex-valued vector input into a 2D complex-valued matrix input, as illustrated on Fig. 1.3. Using the notation of the latter where H is the size of the neighborhood of range cells and N the number of pulses in the burst, the input feature can be described by:
$$ Z_{I/Q} = \begin{pmatrix} z_{11} & \cdots & z_{1H} \\ \vdots & \ddots & \vdots \\ z_{N1} & \cdots & z_{NH} \end{pmatrix} \qquad (1.2) $$
In the previous equation, the number of rows in the matrix is the number of pulses in the burst creating the hit, while the number of columns is the number of neighboring range cells in the spatial context centered around the hit range cell.

Figure 1.3: The proposed detections filter modifies the existing I/Q attribute of hits to integrate a neighborhood of range cells over the whole burst of pulses to provide the discrimination with additional information. In this figure the vertical axis describes the slow time of the sampling happening at the PRF, i.e. between pulses, while the horizontal axis describes the fast time of the sampling happening at a much higher frequency and harbouring the range cells. The PRF is the sampling frequency that varies between hits, introducing a challenging physical diversity between input data points, the overall diversity also stemming from the varying number of pulses. The number of pulses here corresponds to N, the number of rows. Respecting the low-resolution constraint, we will consider N ∈ ⟦8, 32⟧, as these are typical values for Pulse-Burst radars (PBR) [167, p.188][133, p.470], which will be considered as equivalent to air surveillance PDRs in this thesis. The fast-time sampling frequency, much higher, is considered to be constant across all hits and systems considered. An illustration of this diversity is proposed on Fig. 1.4. Top: original hit object I/Q feature (1D complex-valued vector). Bottom: new hit object I/Q feature (2D complex-valued matrix).

The discriminatory power of the feature chosen to filter detections lies in the Doppler information it contains. Any object reflecting the radar pulses with a non-zero radial velocity can induce a Doppler effect frequency modulation in the backscattered pulses. Let us identify this effect in the naive case of a sinusoidal pulse. At time t, the signal backscattered and received by the sensor receiver (RX) has a phase delay τ indicating the target range x(t) and radial velocity ẋ(t) (see Fig. 1.5):

$$ s_{rx}(t) = \sin\left(2\pi f_{tx}\,(t - \tau)\right) \qquad (1.4) $$
Figure 1.4: Illustration of the diversity of the input representation with two example inputs to our processing pipeline, e.g. hit 1 (PRF₁, 8 pulses, waveform 1, slow-time step ∆t₁ = 1/PRF₁, range step ∆r₁ = c/(2B₁)) and hit 2 (PRF₂, 32 pulses, waveform 2, ∆t₂ = 1/PRF₂, ∆r₂ = c/(2B₂)): the PRF and the number of pulses may vary between hits, forbidding direct comparison. This suggests a common encoding space is necessary. On this figure, we set the size of the range cells neighborhood, denoted as H on Fig. 1.3, to 5. The number of pulses per burst considered in this work, under the resolution constraint associated with air surveillance PDRs, belongs to the integer interval ⟦8, 32⟧, in accordance with the typical values put forward in [167, p.188][133, p.470]. This will be the example case considered all along this document. The neighborhood includes the central range cell carrying the actual hit detection, i.e. the passing of a detection threshold. Each range cell is depicted with its own color to translate the possibility of each range cell carrying a specific target or Doppler information. In the case of air surveillance PDR the range cells spread widely in range because of the bandwidth, as explained in 1.2.2, so they may very well contain unrelated information. Neighborhood correlation could also point to a weather phenomenon, the effective detection of the latter being of paramount importance to extract small and slow targets from the clutter. The bandwidth, and thus the pulse signal duration, is considered constant, meaning that on the illustration B₁ = B₂, thus ∆r₁ = ∆r₂. In other words, the spatial sampling does not change between hits. This illustration emphasizes how each input I/Q sweep response can be viewed as a multidimensional complex-valued time-series of varying length and sampling frequency. In a realistic setup, the waveform also differs between pulses.

Figure 1.5: The Doppler effect provides an effective way to discriminate between radar targets through the modulation of backscattered pulses due to the movement of targets with respect to the sensor. The phase of the backscattered signal received at time t carries the Doppler information.
where f_tx is the frequency of the transmitted pulse, c the speed of light, and τ = 2x(t)/c the round-trip propagation delay. If one considers a target with constant speed, i.e. x(t) = x_0 + ẋ·t, the received signal becomes:
$$ s_{rx}(t) = \sin\left(2\pi f_{tx}\left(t - \frac{2(x_0 + \dot{x}t)}{c}\right)\right) \qquad (1.5) $$

$$ s_{rx}(t) = \sin\left(2\pi f_{tx} t - 2\pi \frac{2\dot{x}}{c} f_{tx} t - 4\pi \frac{x_0}{c} f_{tx}\right) \qquad (1.6) $$
Eq. (1.6) brings forth the constant phase term ϕ_0 = 4π (x_0/c) f_tx revealing the initial range of the target, and the time-dependent phase term ϕ_d = 2π (2ẋ/c) f_tx t which carries the Doppler information. The Doppler frequency of the moving target is therefore:
$$ f_d = \frac{2\dot{x}}{c} f_{tx} = \frac{2\dot{x}}{\lambda_{tx}} \qquad (1.7) $$
indicating the impact of the emitted wavelength over the Doppler shift. The Doppler shift is greater for smaller wavelengths, i.e. the radial velocity of a target is more noticeable with radars transmitting high frequencies.
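As a purely illustrative order of magnitude (the values below are assumptions chosen for the example, not specifications of the radars considered here), Eq. (1.7) applied to an S-band carrier with λ_tx = 0.10 m and a slow target with radial velocity ẋ = 5 m/s gives

$$ f_d = \frac{2 \times 5}{0.10} = 100\ \text{Hz}, $$

while the same target observed with λ_tx = 0.03 m would produce f_d ≈ 333 Hz, making the Doppler shift easier to resolve.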
The Doppler information can be extracted from the I/Q samples using either a discrete Fourier transform (DFT) or a filter bank. The filter bank is useful to extract Doppler content in frequency bins while adapting the sensitivity of the filters to sidelobes, the latter being particularly problematic in the presence of clutter. The ability to characterize targets and distinguish between similar Doppler signatures is related to radar parameters such as the pulse repetition frequency (PRF) and the number of pulses available. For instance, if one retrieves the Doppler information of the I/Q coefficients of one range cell for a burst of N pulses with a DFT, one gets a spectrum of N − 1 Doppler (i.e. frequency) bins describing [−PRF/2; PRF/2] as a target signature. This emphasizes how critical the PRF and the number of pulses available are when it comes to targets discrimination. An illustration of the impact of the number of pulses over the Doppler spectrum is proposed on Fig. 1.6. Being the sampling frequency of the I/Q samples for each range cell, the PRF effectively defines the frequencies limiting the unambiguous measure of a Doppler frequency shift, a high PRF allowing for greater unambiguous Doppler frequencies. This is due to the Shannon-Nyquist sampling theorem [START_REF] Eldar | Sampling theory: Beyond bandlimited systems[END_REF]. Like many systems, a pulse Doppler radar is a sum of trade-offs, and adopting a high PRF lowers the unambiguous range of the sensor. For a target range to be unambiguously defined, a pulse reflection needs to be received before the next pulse is transmitted. The maximum unambiguous range r_max of a PDR is thus defined by the pulse repetition interval (PRI), where PRI = 1/PRF, and is determined by the following equation [START_REF] Alabaster | Pulse Doppler Radar: Principles, Technology, Applications[END_REF][START_REF] Levanon | Radar principles[END_REF]:

$$ r_{max} = PRI \times \frac{c}{2} \qquad (1.8) $$

This additional middle ground between range and velocity ambiguity adds itself to the compromise on the transmitted wavelength. The trade-off on wavelength can be understood with Eq. (1.1) and Eq. (1.7): the received backscattered power of Eq. (1.1) favors large wavelengths, whereas the Doppler frequency shift analysis of Eq. (1.7) calls for relatively shorter ones. The relevance of shorter wavelengths, i.e. higher frequencies, to make the Doppler features more visible is clearly identified: it is not only explicitly mentioned in [START_REF] Rahman | Radar micro-doppler signatures of drones and birds at k-band and w-band[END_REF], but also apparent in the fact that the micro-Doppler experiments put forward in the literature often rely on higher frequency bands than the L and S bands usually selected for medium and long range air surveillance [START_REF] Rahman | Radar micro-doppler signatures of drones and birds at k-band and w-band[END_REF][START_REF] Molchanov | Classification of small uavs and birds by micro-doppler signatures[END_REF].
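To make the trade-off between Eq. (1.7) and Eq. (1.8) concrete, the following sketch computes the maximum unambiguous range and the maximum unambiguous radial velocity for a few PRFs. The PRF and wavelength values are illustrative assumptions, not Ground Master parameters.

```python
C = 3e8  # speed of light (m/s)

def ambiguity_tradeoff(prf_hz: float, wavelength_m: float):
    """Return (maximum unambiguous range in m, maximum unambiguous |radial velocity| in m/s)."""
    r_max = C / (2.0 * prf_hz)            # Eq. (1.8) with PRI = 1/PRF
    v_max = wavelength_m * prf_hz / 4.0   # from f_d in [-PRF/2, PRF/2] and Eq. (1.7)
    return r_max, v_max

# Illustrative S-band wavelength of 10 cm
for prf in (1e3, 5e3, 20e3):
    r_max, v_max = ambiguity_tradeoff(prf, wavelength_m=0.10)
    print(f"PRF = {prf:6.0f} Hz -> r_max = {r_max / 1e3:6.1f} km, |v|_max = {v_max:5.1f} m/s")
```

Raising the PRF from 1 kHz to 20 kHz shrinks the unambiguous range from 150 km to 7.5 km while extending the unambiguous velocity from 25 m/s to 500 m/s, which is the compromise PRF agility tries to circumvent.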
To overcome the range and Doppler ambiguities without completely sacrificing either the maximum unambiguous range or the maximum unambiguous Doppler frequency, one can regularly change the PRF of the operating radar. A varying number of pulses per burst can also be introduced to provide a diversity of Doppler resolutions for a given PRF, i.e. for a given range of unambiguous Doppler frequency shifts. This agility in PRF and number of pulses implies producing I/Q sweep responses of varying sizes and physical attributes. Handling such a variety of inputs is a key specificity of the proposed approach, this specificity being imposed by the real radar data to be processed. This variation of the input data points concretely translates into a changing number of rows in the 2D enriched input feature of Fig. 1.3, and into a possibly inconsistent sampling period separating rows from one sample to another.
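A minimal data structure makes this input diversity explicit. The container and field names below are illustrative assumptions and do not reflect the actual hit object of the radar processing pipeline.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EnrichedHit:
    """One enriched hit: an (N, H) complex matrix of slow-time samples plus its burst PRF."""
    iq: np.ndarray   # complex-valued, shape (N pulses, H range cells)
    prf_hz: float    # slow-time sampling frequency of the burst

rng = np.random.default_rng(0)
make_iq = lambda n, h: rng.standard_normal((n, h)) + 1j * rng.standard_normal((n, h))

# Two hits echoing Fig. 1.4: different burst lengths and PRFs, same range cell neighborhood (H = 5)
hit_1 = EnrichedHit(iq=make_iq(8, 5), prf_hz=1_000.0)
hit_2 = EnrichedHit(iq=make_iq(32, 5), prf_hz=4_000.0)

# Direct comparison of hit_1.iq and hit_2.iq is meaningless: shapes and slow-time sampling rates
# differ, which is what motivates the shared encoding space introduced in the next section.
```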
The radars associated with the proposed targets discrimination are the Thales Ground Master (GM) radars, a line of air surveillance radars depicted on Fig. 1.7. In terms of resolution constraints, this line of radars has much lower Doppler resolutions than radars specialized in countering UAVs such as the Aveillant Gamekeeper [START_REF]Gamekeeper 16u counter-uas radar[END_REF]. The GM radars all have rotating antennas and operate on the S band between 2 and 4 GHz, i.e. with wavelengths between 7.5 and 15 cm [START_REF]IEEE standard letter designations for[END_REF]. The GM radars' range goes from 80 to 515 km, with update periods of 1.5 to 6 seconds [START_REF]Thales Ground Master 60[END_REF]7,8]. In comparison with these air surveillance radar parameters, the Aveillant Gamekeeper radar is in the L band between 1 and 2 GHz, has less than 10 km of range and is equipped with a stationary antenna that allows for large numbers of pulses in bursts. High resolution range profiles (HRRP) generated by a different radar, a Thales Coast Watcher 100 (see Fig. 1.8), will also be used to conduct some of our one-class classification (OCC) experiments. This other radar, however, is not of interest regarding our hits encoding and was just taken advantage of in order to diversify the data employed to evaluate discrimination methods.
1.2.2 Doppler characterization of small and slow targets
Improving the detection and discrimination of small and slow targets in air surveillance radars is the goal of this work. Small and slow targets constitute a challenge in radar signal processing since they belong to the same target domain as most of the clutter, and have already motivated other dedicated works [START_REF] Hadded | Localisation à haute résolution de cibles lentes et de petite taille à l'aide de radars de sol hautement ambigus[END_REF]. Targets having low RCS and low velocities can thus easily be confused with natural phenomena. Nonetheless, relevant small and slow targets could be distinguished from clutter using high-resolution Doppler features, or high-resolution range profiles, though obtaining HRRPs discriminative enough for small targets could be prohibitive since this would require a very large bandwidth. The relationship between the range resolution ∆r and the reception bandwidth B is given by:

$$ \Delta r = \frac{c}{2B} \qquad (1.9) $$

where c is the speed of light. The bandwidth is directly determined using the pulse width in time τ through the expression B = 1/τ, and is constrained by the antenna and the microwave hardware [START_REF] Alabaster | Pulse Doppler Radar: Principles, Technology, Applications[END_REF]. Intuitively, stretching out over too many frequencies can complicate processing since so many critical parameters are functions of λ. One can also mention that increasing the bandwidth also increases the noise, which negatively impacts the signal-to-noise ratio (SNR) in the receiver, and thus the radar performances. The bandwidth accepted by the receiver may be chosen to be wider than the transmitted bandwidth in order to accommodate the bandwidth distortion due to the Doppler effect [START_REF]Receivers bandwidth[END_REF]. Many drones of interest for radar detection, such as the consumer market drones equipped with several rotors, would require centimeter-scale spatial resolution or even better to capture discriminative enough geometric features of the target [START_REF] Molchanov | Classification of small uavs and birds by micro-doppler signatures[END_REF]. As mentioned in the previous section, we do manipulate HRRPs for some of our OCC experiments, but the HRRPs in question depict large targets such as boats, thus not contradicting the incompatibility of small targets discrimination and HRRP characterization, at least for the GM radars which have an inadequate bandwidth anyway, contrary to the Coast Watcher 100. This work rather aims at using Doppler features, i.e. targets Doppler signatures, to discriminate between small and slow targets. Doppler features remain relevant even for slow targets since any vibration or movement on the target can induce a Doppler modulation [START_REF] Victor C Chen | Advances in applications of radar micro-doppler signatures[END_REF]. Hence, target Doppler characterization is not limited to the bulk speed of a target, which can be flanked by Doppler spectrum modulations on both sides as it appears on Fig. 1.6. When such modulations are generated by moving parts on the target, one can talk about micro-Doppler effect [START_REF] Victor C Chen | The micro-Doppler effect in radar[END_REF].
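As an illustrative application of Eq. (1.9), with generic values chosen for the example rather than taken from the radars discussed here:

$$ B = 1\ \text{MHz} \;\Rightarrow\; \Delta r = \frac{3\times 10^{8}}{2\times 10^{6}} = 150\ \text{m}, \qquad \Delta r = 0.15\ \text{m} \;\Rightarrow\; B = 1\ \text{GHz}, $$

which gives a sense of why centimeter-scale HRRP characterization of small targets is out of reach for bandwidths typical of long range air surveillance.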
Micro-Doppler modulations can reveal a bird's wing movements, or the rotating blades of a drone, making them a suitable basis for air surveillance radar targets discrimination [START_REF] Rahman | Radar micro-doppler signatures of drones and birds at k-band and w-band[END_REF][START_REF] Brooks | Temporal deep learning for drone micro-doppler classification[END_REF][START_REF] Molchanov | Classification of small uavs and birds by micro-doppler signatures[END_REF]. This radar spectral feature is impressively versatile since it can be used in many other discrimination tasks, such as human activity identification [START_REF] Kim | Human activity classification based on microdoppler signatures using a support vector machine[END_REF], wind turbine detection [START_REF] Oleg | Radar micro-doppler of wind turbines: simulation and analysis using rotating linear wire structures[END_REF], or ground vehicles classification [START_REF] Smith | Template based microdoppler signature classification[END_REF]. The fact that it is not limited to the radar fingerprinting of aerial targets is an argument by itself emphasizing its descriptive power, and how relevant the choice of such features is for our small and slow targets filter. The micro-Doppler modulation patterns are such that they are adapted to image processing classification approaches. As such, state-of-the-art (SOTA) image processing methods such as Convolutional Neural Networks (CNN) are naturally applied to micro-Doppler data [START_REF] Gérard | Micro-doppler signal representation for drone classification by deep learning[END_REF][START_REF] Rahman | Classification of drones and birds using convolutional neural networks applied to radar micro-doppler spectrogram images[END_REF][START_REF] Kwan | Drone classification using convolutional neural networks with merged doppler images[END_REF]. Other machine learning-based discrimination approaches were commonly used on micro-Doppler information before the advent of modern deep learning, such as Support Vector Machines (SVM) [START_REF] Björklund | Target detection and classification of small drones by boosting on radar micro-doppler[END_REF][START_REF] Molchanov | Classification of small uavs and birds by micro-doppler signatures[END_REF], k-nearest neighbors (k-NN) [START_REF] Smith | Template based microdoppler signature classification[END_REF], naive Bayes classifiers (NBC) and regression trees [START_REF] Björklund | Target detection and classification of small drones by boosting on radar micro-doppler[END_REF]. The direct replacement of human classification on pre-existing target classification tasks was permitted by the application of machine learning to micro-Doppler data: [START_REF] Smith | Template based microdoppler signature classification[END_REF] describes the case of micro-Doppler modulations brought down to baseband so that ground targets could be classified by soldiers listening to the signal.

Figure 1.9: Radar targets domain conceptual split. The small and slow targets domain is especially hard to handle since it contains most of the clutter, i.e. the detections irrelevant to the operator's mission. This is the targets domain motivating this work.
Still, as previously suggested, not all radars are equal when it comes to Doppler spectrum resolution and range. As Eq. (1.7) pointed out, the carrier frequency of the pulses should be high enough not to crush the micro-Doppler modulation against the bulk speed Doppler frequency. This can be said to be a Doppler effect intensity limitation. Additionally, in a pulse Doppler radar, one needs numerous pulses to get numerous frequency bins to successfully isolate Doppler spectrum components with a sufficient velocity resolution. This can be said to be a Doppler cell resolution limitation. Since we are working with air surveillance radars equipped with rotating antennas, these two means of obtaining Doppler modulations noticeable enough to directly harness micro-Doppler signature components are severely challenged. On the one hand the carrier frequency can not be too high on air surveillance radars in order to avoid excessive absorption by the atmosphere, where oxygen and water vapor resonance can even forbid certain frequencies [START_REF] Van Vleck | The absorption of microwaves by oxygen[END_REF][START_REF] Van Vleck | The absorption of microwaves by uncondensed water vapor[END_REF]. Atmospheric absorption is even more of a problem for ground-based air surveillance radars, like the GM radars with which we work, since absorption is stronger at lower altitudes because of greater molecular density [START_REF] Skolnik | Radar handbook[END_REF]. On the other hand, the sensor can not afford to pause its rotation to send a large number of pulses in every direction, and then wait after each pulse to acquire the backscattered energy, as this would imply insufficiently frequent situation updates.
1.3 Overview of the proposed solution

1.3.1 Two-steps processing: encoding and discrimination
The previous sections of the chapter described where in the radar processing chain our targets filter should be implemented, what kind of targets the filter should focus on and under what resolution constraints the proposed filter is to work. We can now provide an overview of the filter put forward in this thesis with Fig. 1.10: the filter can be divided into two steps, encoding the enriched hit feature of varying size using deep learning and then filtering encoded hits with a one-class classification approach. The notation used on Fig. 1.10 and Fig. 1.11 corresponds to the one already used on Fig. 1.3, except for the introduction of an embedding size Q. Encoding data points before classification or other forms of discrimination is a common task in the literature. In a way, it amounts to a more or less thorough preprocessing of the data before classifying it.

Figure 1.10 (detection in C^{N×H} → "hit2vec" → embedding in R^Q → one-class classification → score in R): High-level description of the proposed targets discrimination. Hits are processed in two steps: they will first be encoded and projected to a shared representation space before discrimination using one-class classification. Note that the multidimensional complex-valued input of the "hit2vec" encoder has a varying size (due to a varying number of pulses per burst) and is defined by a variable sampling frequency (PRF). Here the output scalar score can either be a continuous real-valued abnormality score or a binary one-class classification score indicating whether the hit processed is in-distribution or not.
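The decoupling of the two stages can be summed up by the following interface sketch. It is a toy illustration under simple assumptions: the encoder argument stands in for the trained "hit2vec" network of chapter 2, and the center-distance score is only one of the one-class classification options compared in chapter 3.

```python
import numpy as np

def occ_score(embedding: np.ndarray, center: np.ndarray) -> float:
    """Toy abnormality score: squared distance to a center fitted on in-distribution embeddings."""
    return float(np.sum((embedding - center) ** 2))

def filter_hit(hit_iq: np.ndarray, prf_hz: float, encoder, center: np.ndarray, threshold: float) -> bool:
    """Two-step filter of Fig. 1.10: encode the hit, then score it. Returns True if the hit is kept."""
    embedding = encoder(hit_iq, prf_hz)              # C^{N x H} -> R^Q, whatever N and the PRF
    return occ_score(embedding, center) <= threshold

# Placeholder encoder used only to make the sketch runnable (not the proposed neural network)
dummy_encoder = lambda iq, prf: np.abs(np.fft.fft(iq, n=16, axis=0)).mean(axis=1)
```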
Neural networks owing their success to their ability to discover and extract patterns [START_REF] Goodfellow | Deep learning[END_REF][START_REF] Lecun | Deep learning[END_REF], the literature is unsurprisingly rich with examples of deep learning being used to extract intermediate representations capturing the main components of diversity within a dataset. For instance, an autoencoder (AE) trained to reconstruct input samples and remove noise at the same time can provide intermediate representations as encoding after an unsupervised training of the generative neural network [START_REF] Vincent | Extracting and composing robust features with denoising autoencoders[END_REF]. Producing compact representations for a downstream discrimination with AEs does not necessarily require another task than the reconstruction itself [START_REF] Sarafijanovic | Fast distance-based anomaly detection in images using an inception-like autoencoder[END_REF], although when the sole reconstruction error is minimized during training a lower dimensionality bottleneck can help prevent an AE from learning the identity function [START_REF] Vincent | Extracting and composing robust features with denoising autoencoders[END_REF]. One should note that a lower dimensionality bottleneck, implying an undercomplete AE is used, does not suffice to guarantee the identity function is not learned. An AE with a one-dimensional latent code could theoretically learn to map each training sample to its own unshared scalar representation before mapping it back to the input representation, assuming the encoder is powerful enough [77, p.494]. This reminds the machine learning practitioner of the need to remain suspicious of what was actually learned by the neural network. One of the key ideas put forward by this thesis is a neural network-based encoding of radar hits I/Q features. The final processing step is one-class classification due to the lack of labeled data points, a choice thus stemming from a low supervision constraint. This final step could however more generally be described as an encoding discrimination process, if one ignores the lack of supervision, as suggested on Fig. 1.11. Decomposing the filter into these two stages decouples the hits representation problem from the hits separation problem. It enables the independent development of each of the two stages, and makes each stage useful even if taken alone. For instance, whatever the level of hits labels availability, i.e. whether one is going for out-of-distribution detection (OODD) or fully supervised multi-class classification, an effective hit2vec encoding neural network architecture would be equally relevant (the name hit2vec designates the encoder turning a hit into a vector representation in machine learning pipelines; this is analogous to word2vec which represents words with vectors [START_REF] Mikolov | Efficient estimation of word representations in vector space[END_REF], or wav2vec which encodes tens of milliseconds of audio in a single vector [START_REF] Schneider | wav2vec: Unsupervised pre-training for speech recognition[END_REF]; the architecture behind wav2vec will be discussed in chapter 2). This also emphasizes how this work is relevant beyond the very specific scope of air surveillance radars and small and slow targets. If a successful model is found to embed enriched hits I/Q sweep responses into an ordinary vector representation, such a model could be used for the targets discrimination task in any pulse Doppler radar, for any level of training supervision.

Figure 1.11 (detection in C^{N×H} → "hit2vec" → embedding in R^Q → encodings discrimination → score in R): High-level description of the proposed targets discrimination without our problem-specific supervision constraint: once they have been projected to a shared representation space, encoded hits can be separated with any discrimination adapted to the availability of labels. One can think of clustering, one-class classification, or in the ideal case supervised classification. This makes the proposed pipeline relevant to most targets separation tasks with PDR, whatever the supervision level. In the case of clustering, the output scalar is a cluster label. For one-class classification, the output scalar can either be a continuous real-valued abnormality score or a binary one-class classification score indicating whether the hit processed is in-distribution or not. If enough labels are available and supervised classification is used to discriminate between encoded hits (also called hits embeddings), the output scalar would be a class label, which may include a rejection class to filter out out-of-distribution embeddings.
1.3.2 One-class classification for radar targets discrimination
The low supervision constraint is tackled using a data points separation approach adapted to the low availability of labels, i.e. one-class classification. This OCC can be understood as an anomaly detection (AD) problem, which relates to the evaluation of the distance between a test point and a given data distribution. In this manuscript we allow ourselves to use OCC, AD and OODD interchangeably, although the literature sometimes introduces distinctions between these three tasks [START_REF] Salehi | A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges[END_REF]. In consequence, we will also use the terms in-distribution samples and one-class samples to refer to the same samples. Anomaly detection can as well be called outlier detection, and be associated with novelty detection. As [START_REF] Chandola | Anomaly detection: A survey[END_REF] pointed out, various kinds of anomalies can be distinguished: point anomalies, contextual anomalies, and collective anomalies. Whereas point anomalies can be detected thanks to their features alone, contextual anomalies admit normal feature values but are unusual given their data context. Collective anomalies are multiple related data points deemed abnormal because of their collective realization. One can note that given the previous definitions, one can integrate additional contextual information to switch from a point or collective anomaly detection problem to a contextual anomaly detection. All of these types of anomalies can be found in radar targets processing, where one test data point amounts to one target. For instance, a drone-type target having a very high speed, or an odd Doppler signature or an odd HRRP, could constitute a point anomaly. On the other hand, a normal-looking drone with a trajectory that is unique when compared with the other drones of the targets dataset would define a contextual anomaly. A collective anomaly could be created by a set of drones that alone have normal individual features and trajectories, but whose trajectories combined are unusual. This not only shows how relevant anomaly detection is to radar targets processing, but also emphasizes that deep and non-deep AD can contribute to different stages of the radar processing pipeline as it was depicted on Fig. 1.1. For example, in addition to the implementation of our hits filter located before the Extraction step, patterns of suspicious trajectories over a short period of time could be analyzed after the Tracking step. The literature offers numerous examples of AD being used to separate radar targets. For instance, a Gaussian Mixture Model (GMM) is used as a cluster model in [START_REF] Laxhammar | Anomaly detection for sea surveillance[END_REF] to achieve AD on velocities and positions routinely generated by a surface radar watching over ships. In [START_REF] Wagner | Target detection using autoencoders in a radar surveillance system[END_REF], an AE intermediate representation is used as a code to enable the computation of an averaged background code of radar range profiles. A substantial difference with this reference code can then be used to reveal an out-of-distribution test sample.
In the specific case of our hits filter, we consider one enriched hit, i.e. a neighborhood of range cells described by their respective I/Q sweep, as an input data point to be evaluated. This implies we are working on a point anomaly detection task. Our filter could be seen as a contextual anomaly detection task if the data point processed was a single I/Q sweep feature describing one single range cell, since we are computing an AD score based on the I/Q sweep of the center range cell responsible for the hit detection, and its relationship with the neighboring I/Q sweeps. In the end the unit of data that needs to be described as in or out-of-distribution (OOD) with respect to reference targets is the target behind the central range cell in the enriched I/Q sweep representation, that is we are considering a single target with a local I/Q context, and not a neighborhood of hits or targets as it would be the case for a collective anomaly detection task. Considering a neighborhood of hits would not fit in with the existing processing chain of the GM radars. Indeed, the latter currently processes hits individually, and creating new data flows combining hits on a local basis to process them would be computationally too demanding. This will have an impact on the proposed hits processing, and will be discussed more deeply in chapter 2.
The one-class classification approaches compared allow for both unsupervised and semi-supervised anomaly detection (SAD), based on the definitions provided by [START_REF] Ruff | Deep semi-supervised anomaly detection[END_REF][START_REF] Ruff | Deep one-class classification[END_REF]. We will identify the experiments where the training data is considered to be "normal", i.e. the data points belong to the "one-class", as unsupervised AD experiments. On the other hand, experiments where, in addition to a majority of normal samples, a non-representative minority of "anomalies", i.e. out-of-distribution samples, is available during training to refine the one-class classification boundary will be identified as SAD. As reminded in [START_REF] Ruff | Deep semi-supervised anomaly detection[END_REF], this is not necessarily aligned with the literature where, rightly so, what we call unsupervised AD is already considered semi-supervised learning, since the possibility to count training samples as normal implies some level of supervision. Such a definition is put forward in [START_REF] Chandola | Anomaly detection: A survey[END_REF], where unsupervised AD refers to AD without training data and under the assumption that the test data is made of a large majority of in-distribution samples. This can be seen as the common outlier detection conducted as preprocessing when one works with a dataset outside of a machine learning context. The data types used for the OCC experiments of this thesis are diverse: encoded hits, standard image datasets from the deep learning literature, 2D spectrums, covariance matrices but also 1D range profiles. Self-supervised learning (SSL) will also contribute to some of our experiments through transformations converting in-distribution data into negative samples to enable SAD based on artificial anomalies. The Toeplitz and covariance matrices representations are put forward as relevant representations for signals discrimination, as recently reminded in [START_REF] Cabanes | Multidimensional complex stationary centered Gaussian autoregressive time series machine learning in Poincaré and Siegel disks: application for audio and radar clutter classification[END_REF][START_REF] Brooks | Deep Learning and Information Geometry for Time-Series Classification[END_REF][START_REF] Mian | Robust low-rank change detection for multivariate sar image time series[END_REF].
1.3.3 The choice of deep learning for I/Q encoding
As indicated in 1.3.1, deep learning was selected to build the encoding step of our hits discrimination. We mentioned in 1.3.1 a basic use case stemming from the literature where AEs are used to generate compact representations of data points that will be fed to a discrimination function. We can first add that such an embeddings generating mechanism through a generative architecture has been reproduced several times for various kinds of inputs, for instance in [START_REF] Hou | Lstm-based auto-encoder model for ecg arrhythmias classification[END_REF] with the evolved version of recurrent neural networks (RNN) called long short-term memory (LSTM) architectures [START_REF] Hochreiter | Long short-term memory[END_REF], further supporting the encoding power of deep learning. One can also notice that handcrafted features, such as the renowned SIFT [START_REF] David G Lowe | Distinctive image features from scale-invariant keypoints[END_REF] in image processing, have disappeared in favor of both feature extraction and decision taking being handled by backpropagation-guided neural networks, the switch being motivated by much improved performances. The key remaining choice for the deep architecture designer is perhaps the set of information flow constraints imposed by the architecture itself. Works actually indicate that a very substantial part of a deep architecture's relevance can stem from the architecture alone, which can already lead to pleasing performances even when left untrained [START_REF] Frankle | Training batchnorm and only batchnorm: On the expressive power of random features in cnns[END_REF][START_REF] Giryes | Deep neural networks with random gaussian weights: A universal classification strategy[END_REF][START_REF] Andrew M Saxe | On random weights and unsupervised feature learning[END_REF]. This point could nevertheless be balanced by the fact that neural networks are usually sensitive to initialization, this sensitivity bringing about the need to systematically evaluate neural networks over a series of distinct random initializations.
The choice to favor deep learning is not only due to the pattern identification power of deep neural networks, but also to their ability to integrate a diversity of input information through the filters and operations already available in the deep architectures of the literature. Deep learning is not limited to the now almost aged dense, simple recurrent and convolutional architectures; it is also the continuously renewed tale of deep networks adapted to non-Euclidean data lying on graphs and Riemannian manifolds [START_REF] Michael M Bronstein | Geometric deep learning: going beyond euclidean data[END_REF], or innovative sequence-to-sequence architectures such as Transformers [START_REF] Vaswani | Attention is all you need[END_REF]. As we will see in chapter 2 and chapter 3, this architectural wealth of neural networks will be leveraged in our work, where geometric deep learning (GDL) and sequence-to-sequence (seq2seq) models will be employed. In the case of geometric deep learning, graph neural networks (GNN) will be put forward in chapter 2 to handle the neighborhood of range cells describing a hit and integrate the varying radar parameters from one hit to another in the graph edges. Symmetric positive definite (SPD) manifold neural networks are proposed in chapter 3 to conduct OCC on SPD representations, several of which will be mentioned in this work in the context of radar targets discrimination.
Since we just mentioned how quickly deep learning evolves and redefines its own SOTA, one could wonder what the point is of exploring the aged models of tomorrow for our specific application. Deep learning keeps relying on backpropagation and gradient descent mechanisms to learn to extract features and produce classes or regression results; we therefore argue that understanding how such a learning mechanism can complement the usual radar signal processing will be useful even if the architecture experimented on does not belong to SOTA. Furthermore, the need to physically understand and interpret the operations, recalling the rising demand for eXplainable artificial intelligence (XAI), constrains the relatively arbitrary complexity of neural architectures anyway. This need for interpretable sets of learned transformations is rooted, as in other critical applications of artificial intelligence, in the required guarantees widespread in the military sensors industry. In the end, with or without a deep learning filter, the targets discrimination sold is required to stay within the limits of the advertised requirements existing for any radar system.
On the contrary, deep learning was never meant to be a definite choice regarding the second step of our filter, i.e. the target embeddings discrimination. This is why chapter 3 considers a variety of non-deep machine learning OCC among the conducted experiments. Our pipeline remains completely reliant on machine learning, which, simply put, aims at letting a neural network extract what is believed to be suitable Doppler information and use it to produce a useful score for the radar mission. The search for automatic or semi-automatic Doppler, and more generally, spectral information extraction and encoding dates back to decades ago, when expert systems were already considered to help improve spectral information analysis [START_REF] Adnet | Unification des méthodes d'analyse spectrale (Fourier et haute résolution) en vue de la réalisation d'un système expert d'aide à l'analyse[END_REF].
1.3.4 A brief reminder on deep learning
To make this document friendlier to readers with only a radar background, a short nonexhaustive reminder on deep learning, the machine learning method put forward in this work, is now proposed. If confusion remains on deep learning notions readers may refer to [START_REF] Goodfellow | Deep learning[END_REF] for supplementary explanations, this reference being the inspiration of most of the elements presented here. Deep learning gathers a wide and diverse ensemble of machine learning methods based on neural networks architectures. Neural networks can be described as a succession of parameterized linear and non-linear transformations applied to numerical input data whose parameters are trainable. These trainable parameters are organized in successive layers, each layer producing intermediate representations supposedly yielding higher level features than the previous ones. The non-linear transformations are defined using non-linear functions called activation functions. Specific architectures include parallel and independent successions of layers, residual connections bypassing layers to provide the training phase with the opportunity to recombine information from distinct depths, convolutions to extract multi-scale spatial or temporal patterns and recurrent computing mechanisms for time-series processing. To determine the values of the parameters a computationally expensive, data-dependent, training phase is executed during which a scalar objective quantity often called a cost or a loss function is minimized thanks to the optimization of the already mentioned parameters. Assuming a joint distribution D of the samples x and of their associated targets y, the training phase yields the following minimization problem:
$$ \min_{W}\ \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\mathcal{L}\left(\Phi_W(x),\, y\right)\right] \qquad (1.10) $$
where W are the optimized trainable parameters organized within the neural network defined by the function Φ, and L is the loss function defining the learning task. The minimization is said to be empirical since the training is actually based on a finite number of input and output pairs drawn from D, i.e. only a sampled version of D is available to optimize W. Various loss functions will be put forward in this work. The latter will be regression losses, which can be opposed to the equally widespread classification losses. An intuitive way to distinguish these two types of losses is to see regression losses as the prediction of a continuous output representation, for instance based on the minimization of a metric, while the classification losses aim at predicting a discrete class label. One can note that the prediction of a discrete label can still translate into the prediction of a continuous quantity such as a float in [0; 1] interpreted as a class probability. A typical approach to classification involves the minimization of a cross-entropy. This optimization is usually implemented using a variation of gradient descent called stochastic gradient descent (SGD). SGD is deemed stochastic because it estimates the gradient over a large dataset using a smaller set of randomly sampled data points called a minibatch. For instance, if one considers a minibatch of n vectorized samples x_i ∈ R^d and their associated targets y_i picked in the training set for a given task, the update to the parameters W can happen as follows:
$$ W \leftarrow W - \epsilon\, g \qquad (1.11) $$
where ϵ is the learning rate and g is the estimate, over the set of n samples, of the gradient of the loss with respect to the parameters W:
$$ g = \frac{1}{n}\, \nabla_W \sum_{i=1}^{n} \mathcal{L}\left(\Phi_W(x_i),\, y_i\right) \qquad (1.12) $$
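A minimal sketch of one such update is given below for a toy least-squares regression; the model and loss are assumptions chosen for illustration, not the networks trained later in this thesis.

```python
import numpy as np

def sgd_step(W, minibatch, loss_grad, lr=1e-2):
    """One SGD update following Eq. (1.11)-(1.12): average per-sample gradients, then step."""
    g = np.mean([loss_grad(W, x, y) for x, y in minibatch], axis=0)
    return W - lr * g

def squared_loss_grad(W, x, y):
    """Gradient of L = 0.5 * (W.x - y)^2 with respect to W, i.e. (W.x - y) * x."""
    return (np.dot(W, x) - y) * x

rng = np.random.default_rng(0)
W = rng.standard_normal(3)
minibatch = [(rng.standard_normal(3), 1.0) for _ in range(8)]
W = sgd_step(W, minibatch, squared_loss_grad)  # parameters move against the averaged gradient
```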
The obtained sequence of transformations outputs a representation that translates into a discriminative or generative result and can handle unsupervised and supervised tasks, the supervision level being the result of the amount and relevance of the labels available in the training data. Based on the scalar objective computed after each complete forward pass going through all layers in a unique order, one computes the scalar objective gradient with respect to the neural network parameters and uses this gradient to update the parameters so that the previous objective is minimized. Since the trainable parameters are distributed in several transformation layers, the chain rule of calculus is used to backpropagate the learning signal, in the form of derivatives stemming from the scalar objective gradient, back through the layers. For instance, for a scalar x and two functions f and g, if one defines z = f(g(x)) = f(y), the chain rule corresponds to:
$$ \frac{dz}{dx} = \frac{dz}{dy}\,\frac{dy}{dx} = \frac{df(y)}{dy}\,\frac{dg(x)}{dx} \qquad (1.13) $$
The previous equation clearly illustrates the intuition behind the chain rule: it allows for the appearance of the intermediate function g and representation y in the derivative of the functions composition with respect to the initial input x. So-called intermediate derivatives can thus be combined to compute the gradient with respect to the final loss at any stage within the neural network. This rule is used because layers are compositions of the ones preceding them and is easily transposed from scalars to vectors and tensors. In multidimensional cases the Jacobian matrix of first-order partial derivatives thus naturally appears in the learning phase. This process constitutes one of the pillars of deep learning and is called backpropagation. This simple yet intuitive description of backpropagation already suffices to understand that one of the difficult points in deep neural network training is the ability to maintain a usable training signal until the first layers when backpropagating the training loss gradient. This difficulty motivates the choice of operations within neural networks, and notably led to the exploration of a diversity of non-linear activation functions.
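The automatic application of Eq. (1.13) is precisely what automatic differentiation frameworks provide. The short check below is a textbook example written with PyTorch and verifies that the backpropagated derivative matches the manual chain rule result.

```python
import torch

# z = f(g(x)) with g(x) = x**2 and f(y) = sin(y); Eq. (1.13) gives dz/dx = cos(x**2) * 2x
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2
z = torch.sin(y)
z.backward()  # backpropagation: the framework chains the intermediate derivatives

manual = torch.cos(torch.tensor(4.0)) * 4.0  # cos(x**2) * 2x evaluated at x = 2
print(x.grad, manual)  # both print the same value (about -2.615)
```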
Assuming relevant data quantity and quality, each deep learning application necessarily leads to a set of choices which constitutes the difficulty of creating a successful deep learning setup. Along with the neural network architecture itself, finding the right combination of hyperparameters, i.e. all the parameters of the architecture outside of the parameters optimized during the training phase, can be particularly challenging. The recent developments in every dimension underlying the task of defining a deep learning architecture, whether it is optimization, constraints on weights, or data augmentation methods, contribute to the difficulty of making the right design choice. Constraints on the weights and the output representations can relate to a major hypothesis of deep learning called the manifold hypothesis. This hypothesis implies that relevant intermediate and output representations only span a limited part of the representation space. Assuming this hypothesis is valid suggests that finding a way to learn parameters and representations limited to the relevant subset of points is crucial to effective neural network training, in addition to easing the understanding of the trained parameters and representations. The black-box nature of complex architectures complicates the acceptance of discrimination and representations generated by neural networks, especially when it comes to critical systems such as radars. This legitimate distrust calls for a careful evaluation of the trained neural networks' performances using thoughtfully selected metrics, and for an attentive analysis of the learned parameters after training. Several methods such as Grad-CAM [START_REF] Ramprasaath R Selvaraju | Grad-cam: Visual explanations from deep networks via gradient-based localization[END_REF] relying on gradient and activation values are already popular, and sometimes learned parameters can be compared to finite impulse response (FIR) filters or Fourier atoms [START_REF] Daniel | Exploring complex time-series representations for riemannian machine learning of radar data[END_REF][START_REF] Daniel | Complex-valued neural networks for fully-temporal microdoppler classification[END_REF]. Directly visualizing the learned filters can be an option, typically in image processing [START_REF] Vincent | Extracting and composing robust features with denoising autoencoders[END_REF].
The necessity of thoroughly explaining and evaluating neural networks was already emphasized in the literature, the latter putting forward the necessity of constraining weights in a regularization effort and of tackling the sensitivity of neural networks to slightly noisy inputs that can be called adversarial examples, or their sensitivity to parameters initialization. The fact that a lucky random initialization of weights changes performances, or that random weights can already yield interesting high level features, highlights how important it is to understand what was gained during the training phase. Knowing the right amount of parameters optimization that should be extracted from a given training set is another important part of successful neural network training. Indeed, one can train a neural network too much with a given training set, leading to a situation called overfitting, where the neural network becomes so specialized with respect to the training data that it loses generalization capabilities on unseen data for the task at hand. Neural networks and backpropagation, the foundations of deep learning, have existed for decades but became widely popular only a little more than a decade ago thanks to the combination of the availability of large datasets for unsupervised and supervised training and of new specialized computing hardware and software [START_REF] Lecun | Deep learning[END_REF]. Nowadays large, suitable datasets are even more widespread and distributed computing capabilities are harnessed to train enormous neural networks made of hundreds of millions or even billions of parameters [START_REF] Chowdhery | Palm: Scaling language modeling with pathways[END_REF][START_REF] Brown | Language models are few-shot learners[END_REF][START_REF] Devlin | Bert: Pretraining of deep bidirectional transformers for language understanding[END_REF]. This thesis distances itself from such very large models since it develops small neural networks with few parameters and little data that are meant to be easily investigated, explained and implemented on computationally constrained platforms.
1.3.5 A brief reminder on signal processing
As for deep learning above, a few fundamental notions of signal processing are reminded to provide the reader with some perspective. This thesis implements complex-valued signals processing pipelines in the context of radar systems. The proposed solution combines machine learning and traditional signal processing concepts to improve existing pipelines which are void of machine learning. The complex-valued signals are sampled signals which are assumed to respect the condition brought by the Shannon-Nyquist sampling theorem. This implies the frequency content of the signals processed does not contain frequencies above half of their sampling frequency, the latter being here defined by the PRF. This condition relates to the concept of aliasing, which translates into the impossibility of reconstructing a signal based on the sampled points due to the violation of the Shannon-Nyquist sampling theorem. In other words, we need enough samples per time span to accurately and uniquely capture the frequency content of a sampled signal. Here, this requirement is critical not to reconstruct the signal but to ensure the discriminative information our proposed processing needs is not limited by the sampling. This criterion is assumed to be systematically met in our work, and basically means the targets Doppler spectrums do not go beyond the Doppler ambiguity boundaries from a radar engineer's point of view. This is made easy by our focus on slow targets. More generally, an anti-aliasing filter can be used to avoid aliasing. This anti-aliasing can translate into a low-pass filter forbidding any information to lie outside of the bandwidth covered by the sampling frequency. Such a low-pass filter would be equivalent in our PDR case to the rejection of targets with velocities beyond the ambiguous velocity allowed by the sampling PRF. The usual bandlimited signal hypothesis applied to PDR results in the unambiguous velocities range in which we will place ourselves.
Since we face a discrimination between signals with varying sampling frequencies, usual signal resampling approaches should be kept in mind. A common way of resampling a signal is to combine upsampling and downsampling by integer factors, resulting in a final rate conversion corresponding to a rational factor. More precisely, the upsampling can be associated with an integer interpolation factor I and the downsampling can be associated with an integer decimation factor D, yielding the rational factor I D . Upsampling by an integer factor can be achieved by adding zeros between existing samples and then applying a low-pass filter, which acts as interpolation. Downsampling on the other hand first applies a low-pass filter and then keeps a fraction of the existing samples. This rate conversion is not necessarily relevant in our case because of the possibly close and too diverse PRFs at play in a set of input signals to be compared. Both the proximity and the diversity of the sampling frequencies in our radar pulses bursts make it likely to require conversion rates where either the factor I or the factor D, or both, are large. This, in addition to the very few signal samples available, would likely lead to inefficient conversions. For instance, the larger the upsampling factor I, the larger the computational and memory loads the processing needs to face due to the additional samples generated. The specificity of our PDR application case encourages us to evaluate the possibility of bringing the diversity of input signals to a shared representation space with less common and more complex approaches and motivates our deep learning-based experiments. One should note that more subtle signal interpolation could allow for the conversion rates necessary in our particular PDR case study, but this is outside of the scope of our work. Details regarding the resampling notions presented in this paragraph as well as complementary information can be found in [START_REF] Eldar | Sampling theory: Beyond bandlimited systems[END_REF].
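For reference, the following sketch shows what such a rational-rate conversion could look like with an off-the-shelf polyphase resampler; the helper name and the PRF values are illustrative assumptions. It also makes the drawback raised above visible: two close PRFs can already demand a large interpolation and decimation pair.

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

def resample_to_common_prf(iq_sweep: np.ndarray, prf_hz: int, target_prf_hz: int) -> np.ndarray:
    """Rational-rate conversion of one range cell's slow-time I/Q samples (illustration only).
    The polyphase filter is real-valued, so real and imaginary parts are converted independently."""
    g = gcd(int(target_prf_hz), int(prf_hz))
    up, down = int(target_prf_hz) // g, int(prf_hz) // g   # conversion factor I/D = up/down
    return resample_poly(iq_sweep.real, up, down) + 1j * resample_poly(iq_sweep.imag, up, down)

# Close but distinct PRFs already require large factors: 997 Hz -> 1000 Hz means I = 1000, D = 997,
# an unfavorable trade given the 8 to 32 slow-time samples available per hit.
rng = np.random.default_rng(0)
sweep = rng.standard_normal(8) + 1j * rng.standard_normal(8)
resampled = resample_to_common_prf(sweep, prf_hz=997, target_prf_hz=1000)
```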
Assuming an effective rate conversion with a proper rational factor I/D could be found to bring back all bursts of pulses to a common PRF and a common number of I/Q samples, one could imagine a resampling-based "hit2vec" encoder. Such an encoder could output the DFT-generated features of each resampled single range cell I/Q signal as its representation in a shared representation space. The combination of the DFT outputs of the range cells in a given neighborhood of H range cells (see Fig. 1.3), for instance through the computation of a mean, would then constitute a fixed-size representation of the enriched input format matrix. This possible machine learning-free encoder baseline is depicted on Fig. 1.12. The latter respects the hypothesis according to which the discriminative information separating the small and slow targets we focus on from the confusing clutter lies in the Doppler, i.e. frequency, domain, since it relies on the DFT to extract the representations fed to the ensuing separation step.
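A compact sketch of this machine-learning-free baseline is given below, under the assumption that the slow-time signals have already been brought to a common PRF (for instance with the resampling helper sketched above); it is our reading of the Fig. 1.12 baseline, not the thesis implementation.

```python
import numpy as np

def dft_baseline_encoder(iq_matrix: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Encode an (N, H) complex hit matrix, assumed resampled to a common PRF, into R^{n_bins}:
    Doppler magnitude spectrum per range cell, then average over the H neighboring cells."""
    spectra = np.abs(np.fft.fftshift(np.fft.fft(iq_matrix, n=n_bins, axis=0), axes=0))
    return spectra.mean(axis=1)

rng = np.random.default_rng(0)
hit = rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5))
embedding = dft_baseline_encoder(hit)   # fixed-size vector, whatever the original number of pulses
```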
A notion central to both general signal processing and radar signal processing, the signal-to-noise ratio, will not be discussed in this work, the latter focusing on the proposal of a coherent machine-learning based processing adapted to the discrimination task and the specific input matrix. A favorable SNR nonetheless remains a sine qua non condition for the success of the proposed pipeline, despite the lowering of detection thresholds. The requirement of acceptable SNR is actually closely tied to the few radar notions highlighted by our processing. For instance, hits stem from a detection step (see Fig. 1.1) that may rely on a coherent or non-coherent pulses integration to accumulate the energy of a train of pulses in order to decide whether to detect a target or not. Here, the pulses integration aims at improving the SNR to ease the decision to detect or not. This detection translates into the passing of a threshold, while the SNR increase can be seen as making the signal energy spike more visible with respect to the ambient clutter. The detection threshold is in turn decided by a constant-false-alarm-rate (CFAR) directly tied to the false alarm filtering this thesis puts forward. In the PDR processing pipeline, the described pulses integration should not be confused with pulse compression, the latter aiming at improving the range resolution [START_REF] Alabaster | Pulse Doppler Radar: Principles, Technology, Applications[END_REF].
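Since the detection threshold mentioned above is set by a CFAR, a textbook cell-averaging CFAR (CA-CFAR) is sketched below for intuition. It is a generic, simplified illustration under the usual square-law detector assumptions and is not the detector actually implemented in the radars considered here.

```python
import numpy as np

def ca_cfar_threshold(power: np.ndarray, idx: int, n_train: int = 16, n_guard: int = 2,
                      pfa: float = 1e-4) -> float:
    """Cell-averaging CFAR threshold for the cell at index idx of a range profile of powers."""
    left = power[max(0, idx - n_guard - n_train): max(0, idx - n_guard)]
    right = power[idx + n_guard + 1: idx + n_guard + 1 + n_train]
    train = np.concatenate([left, right])                     # training cells, guard cells excluded
    alpha = train.size * (pfa ** (-1.0 / train.size) - 1.0)   # scaling for the requested Pfa
    return alpha * train.mean()

rng = np.random.default_rng(0)
noise_power = np.abs(rng.standard_normal(200) + 1j * rng.standard_normal(200)) ** 2
detected = noise_power[100] > ca_cfar_threshold(noise_power, 100)  # lowering pfa raises the threshold
```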
The history of signal processing suggests the combination of signal processing and machine learning is a natural development of the field. Signal processing was originally very close to the field of physics and less so to the field of mathematics due to its analog nature. With the advent of microprocessors and modern computing chips, signal processing took the digital turn, digital signal processing became widespread and moved the field towards data processing and applied mathematics. This shift can be illustrated by the impact of the Fast Fourier Transform (FFT) algorithm [START_REF] Flandrin | Temps-Fréquence[END_REF]. The development of machine and deep learning to process all kinds of data was thus set to equally impact signal processing. Key deep learning tools can be defined with equivalent concepts stemming from signal processing. As was already mentioned in 1.3.4, a convolutional layer is a FIR filter, and the DFT can be learned by the coefficients within a neural network [START_REF] Daniel | Exploring complex time-series representations for riemannian machine learning of radar data[END_REF][START_REF] Daniel | Complex-valued neural networks for fully-temporal microdoppler classification[END_REF][START_REF] Velik | Discrete fourier transform computation using neural networks[END_REF]. On a side note, one could add that the FFT has also been used to make the computation of convolutions more efficient thanks to the equivalence between convolutions in the spatial or temporal domain and the Hadamard pointwise product in the Fourier domain [START_REF] Pratt | Fcnn: Fourier convolutional neural networks[END_REF].
Deep neural networks have been adapted to process the representations usually handled by signal processing such as complex-valued vectors stemming from a DFT or I/Q signal. This adaptation did not wait for the recent explosion of deep learning research and was engaged decades ago [START_REF] Hirose | Complex-valued neural networks: theories and applications[END_REF][START_REF] Kim | Fully complex multi-layer perceptron network for nonlinear signal processing[END_REF][START_REF] Hui | Mri reconstruction from truncated data using a complex domain backpropagation neural network[END_REF][START_REF] George | Complex domain backpropagation[END_REF][START_REF] Leung | The complex backpropagation algorithm[END_REF]. Audio signal [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF][START_REF] Schneider | wav2vec: Unsupervised pre-training for speech recognition[END_REF][START_REF] Chung | Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder[END_REF], EEG [START_REF] Tibor Schirrmeister | Deep learning with convolutional neural networks for eeg decoding and visualization[END_REF] and ECG [START_REF] Hou | Lstm-based auto-encoder model for ecg arrhythmias classification[END_REF] processing are among the most common and successful applications of deep learning to real- and complex-valued signals. Denoising, one of the most fundamental tasks of image and signal processing, has been a successful application of deep learning [START_REF] Dalsasso | Sar2sar: A semi-supervised despeckling algorithm for sar images[END_REF][START_REF] Lehtinen | Noise2noise: Learning image restoration without clean data[END_REF][START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF]. Closer to our radar application, beamforming [START_REF] Bayu | Beamforming of ultra-wideband pulses by a complex-valued spatio-temporal multilayer neural network[END_REF] and radio modulation discrimination [START_REF] Liu | Modulation recognition with graph convolutional network[END_REF][START_REF] Krzyston | High-capacity complex convolutional neural networks for i/q modulation classification[END_REF][START_REF] Timothy | Convolutional radio modulation recognition networks[END_REF] define other applications of deep learning to signal processing. Among the applications of deep learning to signals discrimination, one can notably distinguish the processing of signal based on 2D spectrums, such as spectrograms, from the processing based on raw temporal or spatial signal. This separation is relevant since methods used on 2D spectrums can be seen as closer to image processing than to signal processing, even though such a separation is debatable due to the proximity between signal and image processing operators.
Organization of the chapters and contributions
This chapter has now introduced each element of the thesis title: OCC and low supervision refer to the need to tackle a lack of labels, the sensors generating the data are PDRs with a low Doppler resolution constraint, and the task at hand is targets discrimination using I/Q samples carrying supposedly discriminative Doppler information.
The assumption regarding the relevance of the Doppler information for our discrimination task is supported by the considerable literature using exclusively micro-Doppler information to detect and classify small and slow targets such as drones and birds. The second chapter of this thesis will describe the encoding stage of the proposed hits processing pipeline, while the third chapter will outline the final OCC discrimination stage. The encoding stage will put forward the use of RNNs, CNNs, complex-valued neural networks (CVNN) and also of GNNs to produce a relevant real-valued vector representation of the hit feature described by Eq. (1.2). The acronym RVNN for real-valued neural network will be used in opposition to CVNN. The third chapter will on the other hand present OCC baselines considered in our comparisons, but also original work and OCC methods developments. A fourth chapter will then analyze experiments conducted with the complete two-step pipeline, and is followed by a concluding chapter that ends with perspectives. Deep learning definitions regarding specific deep architectures put forward in our experiments and complementary to the reminder proposed in 1.3.4 will be proposed in the second chapter.
The contributions of this thesis are disseminated in the chapters 2, 3 and 4. As previously mentioned, an original contribution to OCC is put forward in chapter 3 in section 3.1.3. This contribution consists in the modification of an existing Deep OCC method along with the proposal of a latent space regularization scheme in section 3.1.5. Another modification of the same existing Deep OCC is proposed in section 3.2.3 without being supported by experimental results. Although the building blocks of the encoding architectures presented in chapter 2 stem from the literature, their use, combination and adaptation to the specific nature of PDR data and to the problem of small and slow targets discrimination constitute another contribution to the literature. In addition to the previous contributions, we can also note that the choice of the enriched input features format with subsequent representation learning and machine learning-based discrimination constitute a novelty in the field of radar signal and data processing. At the time of writing, the research papers associated with this thesis, either submitted or published, all stem from the content of chapter 3. The chapters 2 and 4 will remain mostly exploratory with the presentation and comparison of encoding ideas supported by a few preliminary results.
Chapter 2
Encoding IQ signals
[Pipeline overview: a detection in C^{N×H} is turned by the "hit2vec" encoding stage into an embedding in R^Q, which the one-class classification stage maps to a score in R; this chapter covers the encoding stage.]
Let us now consider the first stage of our hits filter, i.e. the hits encoding stage. Learning robust latent embeddings which concentrate latent representations of similar targets next to each other is the purpose of this stage. As explained in chapter 1 and depicted on Fig. 1.3 and Fig. 1.4, targets are represented by a neighborhood of range cells centered on the range cell actually carrying the potential target that has triggered the detection of a hit due to the passing of a threshold. This neighborhood spans over range cells but not over the neighboring azimuths (see Fig. 2.1). Ideally, distinct classes of targets will be projected in distinct latent clusters by the encoding neural network. The output representation should be real-valued vectors of fixed size in order to enable the downstream discrimination to be achieved by common machine learning approaches, without further adaptation or the need for specific interpretation of the latent components. The adjective robust here emphasizes the necessity for comparable radar targets to be projected similarly, even with small perturbations in the backscattered radar signal. As we will see, this encoding neural network will associate different types of deep learning architectures to handle the particularity of the input data.
The common representation space necessity
As we have seen in Chapter 1, the input I/Q signals we wish to transform are of varying size and varying physical nature, since the number of pulses and the PRF change during the radar operation (see Fig. 1.4). This input heterogeneity renders it irrelevant to directly compare the coefficients of several hits. In order for common data points classification and discrimination methods to be applicable, the coefficients constituting each hit representation should have the same mathematical nature and belong to a representation space of unique dimensionality. Obtaining such a shared representation is the aim of the first stage of the filter. In a way, we are here looking for representations of targets insensible to so-called signal sampling perturbations: one target should have the same output representation for different numbers of pulses, i.e. different numbers of samples, and different PRFs. We can talk of target-aspect invariance in the output space of the encoding architecture, in reference to the usual target-aspect sensitivity of radar targets [START_REF] Liao | On the aspect sensitivity of high resolution range profiles and its reduction methods[END_REF]. Another target-aspect invariance ideally expected from the proposed encoding that we will not further mention is the invariance to waveform changes, since in addition to varying PRF and number of pulses, hits of the GM air surveillance radars are defined by flexible modulation frequencies and other waveform parameters.

[Table 2.1 caption] The ideal PDR targets I/Q backscatter embeddings invariances. All of the embeddings invariances need to be learned, implying the availability of dedicated data samples to enable the learning of the invariances during training. There is no way to integrate operations within the encoding hit2vec neural network to make it intrinsically invariant to the causes of input representation variations. In such a case, learning invariances is similar to the use of geometric transformations for data augmentation to make the result of classification neural networks invariant to the same geometric transformations. In our experiments only the target type and the number of pulses per burst change. Such a setup already appeared as challenging.
The number of pulses, the PRF, and the waveform are all parameters of the sensor generating the target raw representation. Additional invariance to external factors should ideally be ensured in order to produce a discrimination relevant to a radar operator. Such external factors are for example the weather or the target range. For instance, two drones sharing a common geometrical and propellers configuration should be encoded similarly, independently from the range at which they have been detected. Interesting encoding invariances are summarized in table 2.1.
One can note that we chose to process the backscatter describing a neighborhood of range cells centered around a single hit, instead of processing a spatio-temporal neighborhood of hits, i.e. several hits at once. This is imposed by the radar processing pipeline in which our filter should be integrated, where hits are processed alone before the extraction stage. Adding a search mechanism among local combinations of hits would be computationally prohibitive for the embedded hardware of the radar system. This computational barrier is even truer in the context of the lowered detection thresholds motivating this thesis, since lower thresholds increase the number of hits requiring real-time processing as explained in section 1.1. The increase in the number of hits would in turn increase the number of possible hits combinations to be considered if the hits filter had been based on spatio-temporal neighborhoods of hits instead of fixed-size range cells neighborhoods. Opting for spatio-temporal neighborhoods also implies answering challenging questions: how far can a hit be in time and space in order to be accepted within a local processing? How many hits would the local processing require, or accept at most? To be fair, the increase of hits due to lower thresholds also impacts the number of fixed-size neighborhoods of range cells to process, but this computational load is linear since no combinations are considered. This lighter impact on hardware is much more embedded systems-friendly, while the fixed size of the range spanned by the input complex-valued matrices makes the computational load more predictable. In the spirit of fair comparison, one can furthermore notice that the fixed size of the neighborhood of range cells is a choice similar to the spatio-temporal delimitation of hits combinations.
Another way to understand the input we have is to see the enriched I/Q feature as a contextualized I/Q feature, where the central I/Q sweep response of the range cell generating the hit detection is associated with the neighboring I/Q sweeps on the discretized radial axis. Neighboring range cells could be carrying other hits themselves, integrating the initially processed hits into their own enriched I/Q format. Detecting a correlation in the Doppler content among neighboring range cells is actually critical for the effective discrimination of small and slow targets. Since weather phenomena and irrelevant objects also belong to the small and slow targets domain, a correlation between range cells close to each other may help discern weather phenomena and other large-scale motion. This is in particular due to the fact that interesting small and slow targets are too small to spread their Doppler signature across several range cells. The necessity of taking into account the relation between range cells within the enriched input representation depicted on Fig. 1.3 will have an impact on how we extract an encoding from this representation as explained in 2.4.2. An illustration of the intuition behind the motivation to identify any local correlation, and of the previously mentioned alternative defined by a spatio-temporal neighborhood of hits, is proposed in Fig. 2.1.
The choice to encode a neighborhood of I/Q features makes our processing analogous to the processing of data generated by a sensors network. From this perspective, our neighborhood of range cells described by their respective I/Q signals could be a regularly-placed sensors network, the regular nature of the network stemming from the consistent range sampling. We find it particularly relevant to mention this sensors network interpretation of our problem since sensors network data processing seems of interest for a diversity of radar targets processing approaches. Indeed, in addition to our neighboring range cells I/Q sweeps, one could choose to process spatially or temporally neighboring hits, which in turn could be seen as processing data generated by an irregularly-placed sensors network. In that case, the irregularity comes from the fact that targets and clutter, and thus hits, are not necessarily regularly distributed in time and space around the radar. The latter approach, however, goes against the limitations of the radars we are working on as already mentioned, since it would imply processing hits jointly and not individually. The apparent relevance of the sensors network interpretation of our input data naturally leads to considering graph neural networks (GNN) as they are a usual tool to process sensors network data [START_REF] Ortega | Graph signal processing: Overview, challenges, and applications[END_REF]. The use of GNNs will be detailed in 2.4.2, but first we will go through encoding methods for a single range cell I/Q sweep in 2.3, and propose a simple baseline for the encoding of a neighborhood of range cells in 2.4.1. Since the neural networks models defined are built with complex-valued parameters, a short introduction to the implications of complex-valued neural networks is made beforehand in 2.2.
Processing complex-valued data with deep learning
Complex-valued neural networks (CVNN) were chosen to process the complex-valued I/Q signal input matrices of our discrimination task. As reminded in 1.3.5, such networks have existed for decades and were already used to process signals in their beginnings [START_REF] Hirose | Complex-valued neural networks: theories and applications[END_REF][START_REF] Kim | Fully complex multi-layer perceptron network for nonlinear signal processing[END_REF][START_REF] Hui | Mri reconstruction from truncated data using a complex domain backpropagation neural network[END_REF][START_REF] George | Complex domain backpropagation[END_REF][START_REF] Leung | The complex backpropagation algorithm[END_REF]. The recent popularity boom of deep learning was accompanied by a visible continued interest in CVNNs [START_REF] Jose | Complex-valued vs. real-valued neural networks for classification perspectives: An example on non-circular data[END_REF][START_REF] Trabelsi | Deep complex networks[END_REF][START_REF] Tygert | A mathematical motivation for complex-valued convolutional networks[END_REF][START_REF] Guberman | On complex valued convolutional neural networks[END_REF]. This notably translated into the adaptation of the most common deep learning frameworks to CVNNs, for instance with PyTorch [START_REF]Pytorch: Autograd for complex numbers[END_REF]. The need for CVNNs was actually strong enough among the application domains that smaller scale initiatives appeared to develop specialized programming libraries dedicated to CVNNs [START_REF] Agustin | Negu93/cvnn: Complex-valued neural networks[END_REF][START_REF] Matthès | Learning and avoiding disorder in multimode fibers[END_REF] 1 , confirming the relevance of these neural network architectures. A distinct kind of radar, synthetic aperture radar (SAR), is among the common applications benefiting from CVNNs [START_REF] Ja Barrachina | Real-and complex-valued neural networks for sar image segmentation through different polarimetric representations[END_REF][START_REF] Hänsch | Complex-valued multi-layer perceptrons-an application to polarimetric sar data[END_REF]. Closer to our own application, CVNNs have been applied to complex-valued micro-Doppler spectral features in [START_REF] Daniel | Complex-valued neural networks for fully-temporal microdoppler classification[END_REF]. The use of trainable complex parameters is key to take advantage of the modulus and phase information contained within complex-valued data. To avoid the resort to proper complex analysis, the deep learning literature sometimes opted to take into account complex-valued data by doubling the number of real-valued channels of a neural network, treating the real and imaginary parts of a complex representation as two distinct features. Doing so doubles the number of inputs. This was also done with one channel for the modulus and a second one for the phase. However, such a neural network construction discards the simultaneous dependency of a complex representation on both values. The potential superiority of complex neural networks over real-valued equivalents in specific setups was confirmed in diverse experiments by the literature [START_REF] Ja Barrachina | Real-and complex-valued neural networks for sar image segmentation through different polarimetric representations[END_REF][START_REF] Daniel | Complex-valued neural networks for fully-temporal microdoppler classification[END_REF].

[Figure 2.1 caption] The dot colors describe the nature of the Doppler content of the range cells, i.e. range cells filled with the same color contain similar Doppler signatures. Here, the neighborhood size is H = 5, as is the case in our experiments. Right: Generic enriched input format of our processing as described in chapter 1 (see Fig. 1.4). Each dot has a different color since by default the range cells Doppler content can be uncorrelated. We process a spatial neighborhood spanning over a radial axis and centered over the range cell that detected a potential target, i.e. whose content passed a threshold. The neighboring cells in azimuth are ignored, i.e. the cells marked with diamonds do not contribute any information to the neighborhood representation. Left: Neighborhood of range cells with highly correlated Doppler signatures. This means the neighboring cells and the central cell carrying the detection triggering the definition of a hit may contain the same kind of potential target. Top: Forbidden alternative to our enriched input representation, defined by a flexible neighborhood of hits. In such a neighborhood, each cell taken into account carries an actual hit, which is not necessarily the case in the fixed-size neighborhoods centered around each individual hit we chose as input. Bottom: Intermediate neighborhood correlation case. The neighborhood embedding produced by the first hit2vec stage of our filter (see Fig. 1.10) should lead to the effective separation of this neighborhood and the two other valid ones represented here.
Complex neural networks operations
The parameters of the linear operations building the neural networks of the encoding stage will all be complex-valued. The conversion of common deep learning operators to complex parameters is made quite simple by the modern deep learning libraries. In the case of PyTorch for example, it only requires a datatype conversion at the neural network initialization [START_REF]Pytorch: complex numbers[END_REF]. In 1.3.5, we mentioned the close relation between signal processing and deep learning operators with the example of the convolutions defining FIR filters. For instance, for a real-valued FIR filter with K coefficients applied to a real-valued input signal x, we compute the filtered output y:
y(n) = \sum_{k=0}^{K-1} h(k)\, x(n-k)    (2.1)
This can be rewritten using the convolution operator *:
y(n) = (h * x)(n) (2.2)
This is also true in the context of complex-valued neural networks and complex-valued signals z = z_r + j z_i, where the subscript r indicates the real part of a complex representation, while the subscript i indicates the imaginary part. A complex-valued FIR filter h = h_r + j h_i applied to the complex signal equally translates into the computation of a convolution, which can be developed as follows:
y(t) = (z * h)(t)    (2.3)
     = ((z_r + j z_i) * (h_r + j h_i))(t)    (2.4)
     = (z_r * h_r - z_i * h_i)(t) + j (z_r * h_i + z_i * h_r)(t)    (2.5)
One can note that the use of a complex filter allows for a non-symmetrical frequency response [START_REF]Fir filter design for complex signal[END_REF]. This contrasts with the application of a unique real-valued FIR filter to both the real and imaginary parts of an input complex-valued signal, and has a direct impact on the interpretation of the trained parameters of complex-valued convolutions. Such an interpretation is one way of explaining the relevance of the neural networks trained when they involve convolutions.
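As a quick numerical illustration of Eq. (2.5), the following sketch (with random data and an arbitrary filter length, not tied to any radar parameters) checks that a complex convolution can be assembled from four real-valued convolutions:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(128) + 1j * rng.standard_normal(128)  # complex input signal
h = rng.standard_normal(8) + 1j * rng.standard_normal(8)      # complex FIR filter

# Direct complex convolution.
y_direct = np.convolve(z, h)

# Same output assembled from four real-valued convolutions, as in Eq. (2.5).
zr, zi, hr, hi = z.real, z.imag, h.real, h.imag
y_split = (np.convolve(zr, hr) - np.convolve(zi, hi)) \
          + 1j * (np.convolve(zr, hi) + np.convolve(zi, hr))

assert np.allclose(y_direct, y_split)
```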
In the experiments conducted, we considered two non-linear activation functions applicable to a complex input z = z_r + j z_i. The first one is a complex-valued adaptation of the usual rectified linear unit (ReLU):
CReLU(z) = ReLU(z_r) + j ReLU(z_i)    (2.6)
where the usual ReLU is defined as follows:
ReLU(x) = max(0, x)    (2.7)
The second activation function we considered for our CVNN architectures was the same adaptation where instead of the ReLU we used the Gaussian error linear unit (GELU) [START_REF] Hendrycks | Gaussian error linear units (gelus)[END_REF]:
CGELU(z) = GELU(z_r) + j GELU(z_i)    (2.8)
where the usual GELU is defined as follows:
GELU(x) = x Φ(x)    (2.9)
where Φ(x) is the standard normal distribution cumulative distribution function. This activation function is actually approximated by the following expression:
GELU(x) ≈ 0.5 x \left(1 + \tanh\left(\sqrt{2/\pi}\, (x + 0.044715 x^3)\right)\right)    (2.10)
The consideration of GELU was motivated by its use in the state-of-the-art wav2vec 2.0 [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF] framework which also encodes raw signal, and by the willingness to explore a more complex activation scheme which proved effective in other deep learning applications. Instead of simply gating inputs according to their sign like ReLU, GELU outputs an activation based on the value of the input. It also benefits from a probabilistic interpretation, and again in opposition to ReLU, is non-convex, non-monotonic and not linear in the positive domain. The element-wise application of a common RVNN activation function to the real and imaginary parts of a complex-valued input, as done in Eq. (2.6) and Eq. (2.8), corresponds to one of the two usual adaptations of activation functions to CVNNs. These two types are respectively called the split-complex and the joint-complex activation adaptations, and can be generically defined as follows [START_REF] Jose | Complex-valued vs. real-valued neural networks for classification perspectives: An example on non-circular data[END_REF][START_REF] Hänsch | Complex-valued multi-layer perceptrons-an application to polarimetric sar data[END_REF]:
act_{SPLIT}(z) = act(z_r) + j\, act(z_i)    (2.11)
act_{JOINT}(z) = act(|z|) \exp(j \arg(z))    (2.12)
Following this naming convention, we opted for split-complex activation functions in Eq. (2.6) and Eq. (2.8). The joint-complex type processes the real and imaginary parts of the input jointly instead of separating them in the application of a non-linearity. We ended up privileging the CReLU activation since it was a common choice in the CVNN literature [START_REF] Jose | Complex-valued vs. real-valued neural networks for classification perspectives: An example on non-circular data[END_REF][START_REF] Guberman | On complex valued convolutional neural networks[END_REF] and it has been shown to perform better than alternative ReLU adaptations for CVNNs [START_REF] Trabelsi | Deep complex networks[END_REF]. The definition of CReLU enables the function to choose to preserve the phase and the magnitude, or to project the phase to zero or π/2 to cancel the imaginary or the real part respectively, or alternatively to cancel both parts [START_REF] Trabelsi | Deep complex networks[END_REF]. This follows the intuition of the real-valued ReLU which has a componentwise nullification power.
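For illustration, the CReLU of Eq. (2.6) can be written as a small PyTorch module; this is only a sketch and the module name is ours, not an existing library component:

```python
import torch
import torch.nn as nn


class CReLU(nn.Module):
    """Split-complex ReLU of Eq. (2.6): the real-valued ReLU is applied
    independently to the real and imaginary parts of the input."""
    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.complex(torch.relu(z.real), torch.relu(z.imag))


z = torch.randn(4, 8, dtype=torch.cfloat)
print(CReLU()(z).dtype)  # torch.complex64
```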
A similarly naive adaptation of the batch normalization (BN) [START_REF] Ioffe | Batch normalization: Accelerating deep network training by reducing internal covariate shift[END_REF] was also harnessed in the CVNN architectures put forward. Batch normalization was initially introduced to reduce the internal covariate shift, i.e. to control the input distribution fed to successive layers in neural networks. Batch normalization was meant to enable more efficient learning and reduce the criticality of the learned parameters initialization [START_REF] Trabelsi | Deep complex networks[END_REF][START_REF] Ioffe | Batch normalization: Accelerating deep network training by reducing internal covariate shift[END_REF]. It was later shown that, although successful in improving the training process stability and performance, the BN success would actually be due to a smoothing effect on the optimization landscape [START_REF] Santurkar | How does batch normalization help optimization? Advances in neural information processing systems[END_REF]. The BN adaptation we used is the independent application of one real-valued BN to the real and imaginary parts of the inputs:
CBN(z) = BN_r(z_r) + j BN_i(z_i)    (2.13)
Here, as opposed to the activation functions expressions, the input z is a batched input.
A less naive complex-valued BN was proposed in [START_REF] Trabelsi | Deep complex networks[END_REF] but was described as a limited source of performance improvement in the CVNN-focused programming library associated with [START_REF] Matthès | Learning and avoiding disorder in multimode fibers[END_REF] 2 . It is important to keep in mind that this normalization operation, in addition to having a different behavior in the training and testing phases, also maintains two trainable parameters γ and β:
BN(x) = \gamma \, \frac{x - E[x]}{\sqrt{Var[x] + \epsilon}} + \beta    (2.14)
where x is a batched input features tensor. In the case of the naive adaptation to a CVNN application described by Eq. (2.13), this implies that the CBN has four trainable parameters (γ_r, γ_i, β_r, β_i) instead of two.
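The naive CBN of Eq. (2.13) can be sketched in the same spirit, assuming batched inputs of shape (batch, features); again the class name is illustrative:

```python
import torch
import torch.nn as nn


class NaiveComplexBatchNorm1d(nn.Module):
    """Naive CBN of Eq. (2.13): one real-valued batch normalization per part,
    hence four trainable parameters (gamma_r, gamma_i, beta_r, beta_i) per feature."""
    def __init__(self, num_features: int):
        super().__init__()
        self.bn_r = nn.BatchNorm1d(num_features)
        self.bn_i = nn.BatchNorm1d(num_features)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.complex(self.bn_r(z.real), self.bn_i(z.imag))


z = torch.randn(16, 32, dtype=torch.cfloat)      # (batch, features)
print(NaiveComplexBatchNorm1d(32)(z).shape)      # torch.Size([16, 32])
```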
Complex neural networks backpropagation
The adaptation of deep learning to complex-valued parameters involves redefining the usual linear and non-linear operations of neural networks as we have seen in 2.2.1. Once parameters are complex and the transformations of the forward pass are adapted to the complex format, the gradient also requires some adaptations to ensure a relevant backpropagation iteratively modifies the trainable parameters. In order to compute the necessary complex derivatives for training, CVNNs can harness the two Wirtinger derivatives [START_REF] Wirtinger | Zur formalen theorie der funktionen von mehr komplexen veränderlichen[END_REF], which for a complex-valued function θ : C → C are:
\frac{\partial \theta}{\partial z} = \frac{1}{2}\left(\frac{\partial \theta}{\partial x} - j \frac{\partial \theta}{\partial y}\right)    (2.15)
\frac{\partial \theta}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial \theta}{\partial x} + j \frac{\partial \theta}{\partial y}\right)    (2.16)
where z = x + jy ∈ C. Thus, Eq. (2.15) and Eq. (2.16) define complex differentiation as a transposition of the real domain differentiation. These Wirtinger derivatives unlock the notion of complex derivative by making it applicable to non-holomorphic functions. Holomorphic functions are the functions usually associated with the definition of a complex differentiation, but are too restrictive to enable the transposition of RVNNs concepts and tools to the complex domain [START_REF] Jose | Complex-valued vs. real-valued neural networks for classification perspectives: An example on non-circular data[END_REF][START_REF] Hänsch | Complex-valued multi-layer perceptrons-an application to polarimetric sar data[END_REF][START_REF]Pytorch: Autograd for complex numbers[END_REF]. PyTorch [START_REF] Pytorch | [END_REF] 3 , the library we mostly used, computes the gradient with respect to the conjugate, \partial / \partial \bar{z}, when backpropagating a real-valued loss function to train a CVNN. Using a real-valued loss function is intuitive since there is no natural order among complex numbers, and we mostly want to minimize metrics during training.
In the complex domain and for a complex-valued input z, the chain rule of Eq. (1.13), critical to the backpropagation introduced in 1.3.4, becomes [START_REF] Hänsch | Complex-valued multi-layer perceptrons-an application to polarimetric sar data[END_REF]:
\frac{\partial u(v(z))}{\partial z} = \frac{\partial u(v(z))}{\partial v(z)} \frac{\partial v(z)}{\partial z} + \frac{\partial u(v(z))}{\partial \overline{v(z)}} \frac{\partial \overline{v(z)}}{\partial z}    (2.17)
where u : C → C and v : C → C are arbitrary complex functions. This chain rule can be rewritten to make a real-valued loss function L : C → R appear, along with the gradient with respect to the conjugate computed by some of the common deep learning libraries as mentioned earlier [START_REF] Jose | Complex-valued vs. real-valued neural networks for classification perspectives: An example on non-circular data[END_REF]:
\frac{\partial L(v(z))}{\partial \bar{z}} = \frac{\partial L(v(z))}{\partial v_r(z)} \frac{\partial v_r(z)}{\partial \bar{z}} + \frac{\partial L(v(z))}{\partial v_i(z)} \frac{\partial v_i(z)}{\partial \bar{z}}    (2.18)
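To make the conjugate convention concrete, here is a minimal PyTorch sketch with a single complex parameter and a real-valued loss; the numbers are arbitrary and the gradient-descent step is written by hand only for illustration:

```python
import torch

# A single complex trainable parameter and a fixed complex input.
w = torch.tensor(0.5 + 0.3j, requires_grad=True)
z = torch.tensor(1.0 - 2.0j)

# Real-valued loss built from a complex-valued computation.
loss = (w * z).abs() ** 2
loss.backward()

# As discussed above, the gradient stored for complex leaves is taken with
# respect to the conjugate, so a plain descent step decreases the real loss.
print(w.grad)
with torch.no_grad():
    w -= 0.01 * w.grad
```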
Encoding a single hit range cell
Let us now discuss the encoding of a single I/Q signal, i.e. the generation of an embedding for a single range cell described by a single burst of pulses. This task amounts to encoding a column (z_{11} ... z_{N1})^T ∈ C^{N×1} of the complex-valued matrix Z_{I/Q} defined in Eq. (1.2). As indicated by Fig. 1.11, we aim at producing a single embedding in R^Q for the enriched input defined by the complete matrix. A single I/Q signal will first be mapped to a real-valued vector in R^Q, combinations of such vectors to form a neighborhood embedding being proposed in section 2.4. As put forward in section 1.3.5, a naive way to encode the diversely sampled input signals into comparable representations would be to resample them and use their subsequent and equally sized representation in the Fourier domain to compare them. This approach comes with the limitations posed by the resampling method. The reader may refer to section 1.3.5 for details in that regard.
Here, we will first review an approach recently proposed by [START_REF] Cabanes | Multidimensional complex stationary centered Gaussian autoregressive time series machine learning in Poincaré and Siegel disks: application for audio and radar clutter classification[END_REF], precisely developed for radar backscatter processing, which can be interestingly coupled with an upcoming manifold-aware OCC presented in chapter 3. We will then go on with recurrent and fully convolutional neural networks (FCNN), with which we conducted preliminary experiments. Both the RNN and the FCNN will be used to define unsupervised range cell encodings thanks to generative architectures in section 2.3.2, and will then be extended to take advantage of labels and supervision during training in section 2.3.3. Each of these methods is adapted to the processing of input signals with different numbers of samples. In opposition to the resampling approach, these methods do not explicitly take into account the varying sampling frequency of the input signals, i.e. the PRF. This can be rectified later for our neighborhood of range cells as we will see in section 2.4.
A straightforward approach with AR models
One way of representing the complex-valued I/Q radar signals is to use their autocorrelation matrix. This was done in [START_REF] Cabanes | Toeplitz hermitian positive definite matrix machine learning based on fisher metric[END_REF] to encode PDR range cells, just as we wish to do. For a 1D I/Q signal z with N sampled time steps (z_1, ..., z_N), in compliance with the notations of Eq. (1.2), the autocorrelation coefficient r_t can be defined as follows for a time lag of t time steps:
r_t = E[z(k+t)\, \overline{z(k)}]    (2.19)
    = E[\overline{z(k)}\, z(k+t)]    (2.20)
    = \overline{E[z(k-t)\, \overline{z(k)}]}    (2.21)
    = \overline{r_{-t}}    (2.22)
The signal is assumed to be stationary, turning the computation of a correlation into a function of the time steps lag. This directly leads to the definition of the Toeplitz autocorrelation matrix for an arbitrary order of time steps lag, as long as there are enough signal samples (N large) to estimate the individual correlations. For an autocorrelation of up to t time steps lag, the autocorrelation matrix is:
R_t = \begin{pmatrix}
r_0 & \overline{r_1} & \overline{r_2} & \dots & \overline{r_{t-1}} \\
r_1 & r_0 & \overline{r_1} & \dots & \overline{r_{t-2}} \\
r_2 & r_1 & r_0 & \dots & \overline{r_{t-3}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_{t-1} & r_{t-2} & r_{t-3} & \dots & r_0
\end{pmatrix}    (2.23)
The autocorrelation matrix is additionally Hermitian positive definite (HPD), which is the complex-valued equivalent of a symmetric positive definite matrix. An SPD matrix would be processed instead of an HPD one had the input signal or time-series been real-valued. The autocorrelation coefficients can be estimated using the empirical mean, for each t:
r_t = \frac{1}{N-t} \sum_{k=0}^{N-1-t} z(k+t)\, \overline{z(k)}    (2.24)
Estimating the autocorrelation matrix through the empirical mean may however produce a matrix R that is not Hermitian positive definite. The relevance of the autocorrelation coefficients estimated with the empirical mean r_t of Eq. (2.24) depends on the length of the input signal, which translates into the statistical significance of the mean estimation.
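As a small sketch (with an arbitrary signal length and autocorrelation order, and using the conjugated lag product usual for complex signals, which is what makes the matrix Hermitian), the empirical coefficients of Eq. (2.24) and the Toeplitz matrix of Eq. (2.23) can be obtained as follows:

```python
import numpy as np
from scipy.linalg import toeplitz


def empirical_autocorrelation(z: np.ndarray, order: int) -> np.ndarray:
    """Empirical autocorrelation coefficients r_0, ..., r_{order-1}, Eq. (2.24)."""
    n = len(z)
    return np.array([np.mean(z[t:] * np.conj(z[:n - t])) for t in range(order)])


rng = np.random.default_rng(2)
z = rng.standard_normal(256) + 1j * rng.standard_normal(256)  # one range cell I/Q signal
r = empirical_autocorrelation(z, order=8)

# Toeplitz autocorrelation matrix of Eq. (2.23); scipy's toeplitz uses the
# conjugate of the first column as first row, yielding a Hermitian matrix.
# As noted above, the empirical estimate is not guaranteed to be positive definite.
R = toeplitz(r)
print(R.shape)  # (8, 8)
```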
To grasp the representative power of an autocorrelation matrix, one can think of white noise which yields zero as autocorrelation for any time steps lag. The autocorrelation matrix is directly related to the autoregressive (AR) coefficients of an equivalent AR model. The AR model is said to be equivalent since knowing the autocorrelation coefficients suffices to determine the AR coefficients. The relationship between both kinds of coefficients is defined by the Yule-Walker equation [START_REF] Cabanes | Multidimensional complex stationary centered Gaussian autoregressive time series machine learning in Poincaré and Siegel disks: application for audio and radar clutter classification[END_REF]. Assuming a signal z with zero mean and an AR model of order P, the AR coefficients (a_0, ..., a_P) are the coefficients minimizing the mean square prediction error E as defined by the following expression:
\sum_{p=0}^{P} a_p z_{N-p} = E_N    (2.25)
The autocorrelation can appear in a development of Eq. (2.25) to produce the following expression linking the AR coefficients to the autocorrelation ones up to an autocorrelation order t, under the hypothesis that the mean square prediction error E is white noise:
\sum_{p=0}^{P} a_p r_{t-p} = 0    (2.26)
With the normalization convention a_0 = 1, we end up with an expression allowing for the computation of any order of autocorrelation based on the autocorrelation coefficients of lower order and on the associated AR model coefficients, which can be called reflection coefficients:
r t = - P p=1 a p r t-p (2.27)
Using Eq. 2.27 to compute each autocorrelation coefficient up to the order P of the AR model, one finally ends up with the Yule-Walker equation which makes the autocorrelation matrix of Eq. (2.23) reappear:
\begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_P \end{pmatrix} = -\begin{pmatrix}
r_0 & \overline{r_1} & \overline{r_2} & \dots & \overline{r_{P-1}} \\
r_1 & r_0 & \overline{r_1} & \dots & \overline{r_{P-2}} \\
r_2 & r_1 & r_0 & \dots & \overline{r_{P-3}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_{P-1} & r_{P-2} & r_{P-3} & \dots & r_0
\end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_P \end{pmatrix} = -R_P A_P    (2.28)
The complete Yule-Walker expression of Eq. (2.28) now makes the bijection between the two families of coefficients clear: if the matrix R_P is invertible the AR model coefficients can be computed from the autocorrelation coefficients, while the reversed dependency is directly ensured by Eq. (2.28). Furthermore, the Levinson algorithm can be used to recursively compute the AR model coefficients, or reflection coefficients, of AR models of successive orders. Autoregressive models of different orders are thus iteratively linked, and the knowledge of the AR coefficients of lower order models enables the computation of the same coefficients for a higher order model. The bijection between the reflection and autocorrelation coefficients, as well as the recursive relationship among reflection coefficients associated with AR models of successive order, enables bypassing the drawbacks of estimating the autocorrelation matrix through the empirical mean (see Eq. (2.24)). Indeed, algorithms such as the Burg algorithm [START_REF] Parker | Maximum entropy spectral analysis[END_REF] can be used to directly estimate reflection coefficients using the minimization of forward and backward prediction errors. Details regarding these developments, and adaptations of the previous expressions to multivariate time-series and signals, can be found in [START_REF] Cabanes | Multidimensional complex stationary centered Gaussian autoregressive time series machine learning in Poincaré and Siegel disks: application for audio and radar clutter classification[END_REF]. One of the challenges of using AR models is to find a relevant AR model order P. The search of P translates into finding up to what order the autocorrelation coefficients have a significant value. The representation of I/Q radar signals with an autocorrelation matrix will not be experimented with since our focus was set on complementary approaches to the works of [START_REF] Cabanes | Multidimensional complex stationary centered Gaussian autoregressive time series machine learning in Poincaré and Siegel disks: application for audio and radar clutter classification[END_REF] and [START_REF] Brooks | Deep Learning and Information Geometry for Time-Series Classification[END_REF]. Harnessing the autocorrelation coefficients as a single range cell representation could have still provided us with a relevant input representation baseline with respect to our raw I/Q signal input representation. The computation of autocorrelation matrices can yield representations constrained to the Riemannian manifold of real-valued symmetric positive definite matrices, or to the Riemannian manifold of Hermitian positive definite matrices. Processing radar data on such manifolds and on other manifolds whose points are built thanks to the autocorrelation and AR model coefficients has been continuously explored in [START_REF] Brooks | Deep Learning and Information Geometry for Time-Series Classification[END_REF][START_REF] Brooks | A hermitian positive definite neural network for microdoppler complex covariance processing[END_REF][START_REF] Le | Probability on the spaces of curves and the associated metric spaces via information geometry[END_REF][START_REF] Yang | Medians of probability measures in Riemannian manifolds and applications to radar target detection[END_REF] during the last decade. To retrieve single range cell embeddings belonging to R^Q that can be combined into a single R^Q embedding as required by the pipeline of Fig.
1.10, one could use manifold-aware neural networks taking the representation belonging to the manifold as input and producing the fixed-size vector as output. Manifold-aware and specifically SPD manifold-aware deep learning are further discussed in chapter 3 in 3.2. We can additionally note that the Burg algorithm mentioned to determine reflection coefficients is based on the minimization of prediction errors, a task now commonly achieved with deep learning approaches.
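Before moving on to neural encoders, one can note that the Yule-Walker system of Eq. (2.28) can be solved directly once the autocorrelation coefficients are estimated. The sketch below uses a generic linear solver for clarity; a Levinson-type routine exploiting the Toeplitz structure (e.g. scipy.linalg.solve_toeplitz) would be the usual, cheaper alternative. Signal length and model order P are arbitrary.

```python
import numpy as np
from scipy.linalg import toeplitz

P = 6
rng = np.random.default_rng(3)
z = rng.standard_normal(512) + 1j * rng.standard_normal(512)
n = len(z)

# Autocorrelation coefficients r_0, ..., r_P (conjugated lag product).
r = np.array([np.mean(z[t:] * np.conj(z[:n - t])) for t in range(P + 1)])

# Yule-Walker system of Eq. (2.28): R_P A_P = -(r_1, ..., r_P)^T.
R_P = toeplitz(r[:P])
a = np.linalg.solve(R_P, -r[1:P + 1])
print(a)  # AR coefficients a_1, ..., a_P, with a_0 = 1 by convention
```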
Sequence-to-sequence models encoding
In the input C^{N×H} matrix gathering H signals of N samples, we can consider each range cell to be a complex-valued time-series. This leads us to consider the recurrent neural networks usually applied to time-series in the deep learning community. Let us recall what such neural networks entail. The simplest recurrent neural network recursively combines an input x and a hidden state h at time step t:
h_t = act(W_x x_t + b_x + W_h h_{t-1} + b_h)    (2.29)
where W_x and W_h are trainable weight matrices, b_x and b_h trainable biases, and act a non-linear activation function. Recurrent neural networks based on Eq. (2.29), which we will call vanilla RNN, are challenging due to the vanishing or exploding gradients appearing in training and the difficulty of learning long-term dependencies [START_REF] Chung | Empirical evaluation of gated recurrent neural networks on sequence modeling[END_REF]. To tackle the challenges posed by the training of vanilla RNNs, a more complex recurrent unit called long short-term memory (LSTM) was proposed in [START_REF] Hochreiter | Long short-term memory[END_REF]. This recurrent unit combines operations acting as information and gradient gates to control the flow of information through successive time steps. As for the vanilla RNN described by Eq. (2.29), a hidden state h_t is updated through successive time steps, along with an additional cell state c_t and other complementary operations. These operations define the following gates: an input gate i, a forget gate f, a cell gate g and an output gate o. The output of each gate is computed according to the following equations 4 :
i_t = σ(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi})    (2.31)
f_t = σ(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf})    (2.32)
g_t = tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg})    (2.33)
o_t = σ(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho})    (2.34)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t    (2.35)
h_t = o_t ⊙ tanh(c_t)    (2.36)
In these equations, the subscript letters indicate whether the weights are applied to the input or to the hidden state, and to which gate they belong. Input and intermediate representations are associated with a time step subscript. The symbol σ stands for the sigmoid function:
σ(x) = \frac{1}{1 + \exp(-x)}    (2.37)
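To make the gating mechanism concrete before discussing it further, here is a single LSTM time step written directly from Eq. (2.31)-(2.36); it is a real-valued sketch with arbitrary dimensions, and a single bias per gate absorbs the two biases of each equation:

```python
import torch


def lstm_step(x_t, h_prev, c_prev, W_i, W_h, b):
    """One LSTM time step following Eq. (2.31)-(2.36).

    W_i: input weights (4*hidden, in_dim), W_h: hidden weights (4*hidden, hidden),
    b: bias (4*hidden,), the four blocks being ordered as (i, f, g, o)."""
    hid = h_prev.shape[-1]
    gates = x_t @ W_i.T + h_prev @ W_h.T + b
    i_t = torch.sigmoid(gates[..., 0 * hid:1 * hid])  # input gate,  Eq. (2.31)
    f_t = torch.sigmoid(gates[..., 1 * hid:2 * hid])  # forget gate, Eq. (2.32)
    g_t = torch.tanh(gates[..., 2 * hid:3 * hid])     # cell gate,   Eq. (2.33)
    o_t = torch.sigmoid(gates[..., 3 * hid:4 * hid])  # output gate, Eq. (2.34)
    c_t = f_t * c_prev + i_t * g_t                    # Eq. (2.35)
    h_t = o_t * torch.tanh(c_t)                       # Eq. (2.36)
    return h_t, c_t


in_dim, hidden = 2, 16                  # e.g. I and Q treated as two real features
x_t = torch.randn(8, in_dim)            # batch of 8 inputs at time step t
h, c = torch.zeros(8, hidden), torch.zeros(8, hidden)
W_i, W_h, b = torch.randn(4 * hidden, in_dim), torch.randn(4 * hidden, hidden), torch.zeros(4 * hidden)
h, c = lstm_step(x_t, h, c, W_i, W_h, b)
print(h.shape, c.shape)                 # torch.Size([8, 16]) torch.Size([8, 16])
```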
The distinction between the four gates put forward can seem confusing at first due to their output being systematically based on the same input representations: the current time step input x_t and the previous time step hidden state h_{t-1}. Each of the gates however has its own role and its own trainable weights, in addition to a distinct non-linear activation for the cell gate g. The LSTM cell passes the two hidden states h_t and c_t to the next time iteration. An illustration of the recurrent nature of the LSTM defined by Eq. (2.31)-(2.36) is proposed on Fig. 2.2. A prominent gated recurrent variant is the gated recurrent unit (GRU) [START_REF] Cho | On the properties of neural machine translation: Encoder-decoder approaches[END_REF]. The GRU combines three gates instead of four like in the LSTM: a reset gate r, an update gate z and a new gate n. Unlike the LSTM, a single hidden state h_t is passed to the next iteration of the gates. The gates outputs are determined as follows 5 :
r_t = σ(W_{ir} x_t + b_{ir} + W_{hr} h_{t-1} + b_{hr})    (2.38)
z_t = σ(W_{iz} x_t + b_{iz} + W_{hz} h_{t-1} + b_{hz})    (2.39)
n_t = tanh(W_{in} x_t + b_{in} + r_t ⊙ (W_{hn} h_{t-1} + b_{hn}))    (2.40)
h_t = (1 - z_t) ⊙ n_t + z_t ⊙ h_{t-1}    (2.41)
Achieving unsupervised representation learning with the recurrent cells described is enabled by the definition of sequence-to-sequence (seq2seq) generative architectures where one RNN encodes the input signal and another RNN decodes the last hidden states of the encoder to reproduce the input. This is the sequence equivalent of an autoencoder discussed in chapter 1. The encoder and decoder RNN are separate architectures with their own trainable weights. Several recurrent cells can be stacked in layers, and the size of the hidden states provides us with a way to control their expressive power and learning potential. To generate a real-valued fixed-size representation in R^Q for the input signal (z_1 ... z_N)^T ∈ C^{N×1} describing a single column of the complex-valued matrix Z_{I/Q} defined in Eq. (1.2), the last hidden state of a seq2seq encoder is reduced to its real part. The imaginary part is discarded and only real-valued coefficients are passed on to the decoder. This seq2seq architecture can be implemented with any of the diverse recurrent neural network cells available in the literature, and in particular with the three recurrent setups described. The generative seq2seq approach with RNNs and its application to range cell signal encoding is illustrated on Fig. 2.3.

A different kind of generative sequence-to-sequence neural network can be adapted to our input data. One of the difficulties of our inputs is their variable size. To tackle it, one can harness a global pooling [START_REF] Lin | Network in network[END_REF] in a generative fully convolutional neural network [START_REF] Ronneberger | U-net: Convolutional networks for biomedical image segmentation[END_REF][START_REF] Long | Fully convolutional networks for semantic segmentation[END_REF]. The generative FCNN uses convolutions, normalization, activation and pooling layers without the intervention of dense or fully-connected layers to accept inputs of varying size and produce a fixed-size representation in R^Q in its bottleneck. One of the key advantages of applying convolutions to signal inputs is the interpretability potential of the learned weights. If the convolution kernel is large enough, the parallel with FIR filters becomes practical. In such a case, one should also keep in mind that having numerous independent convolutional kernels, i.e. numerous channels for the convolutional layer, can be relevant to tackle the difficulty of handling input signals with varying sampling frequency. Indeed, a given FIR filter will define a certain frequency response depending on its weights and the sampling frequency of the signal to which the weights are applied. Thus, to ensure similar frequency responses can be extracted from filters applied to input signals with different sampling frequencies, one should stay generous with the number of channels available in convolutional layers. The generative FCNN can be thought of as closer to the autoencoder architecture than to the recurrent seq2seq one, and will thus be called fully convolutional autoencoder (FCAE). The bottleneck is the intermediate representation of lower dimensionality, assuming one defines an undercomplete [77, p.494] generative architecture, which separates the encoder from the decoder.
Global max or average pooling allows an FCAE to reconstruct outputs of different sizes while producing a bottleneck representation of fixed size. In order to do so, the global pooling operation is applied to the bottleneck features dimension, and is only applied when retrieving an R^Q embedding. In a forward pass on the other hand, the features dimension, whose size varies along with the input size, is left unreduced. This implies the dimension Q of the latent representation defining the range cell embedding equals the number of convolutional channels in the bottleneck representation. The FCAE proposed here combines an encoder and a decoder with complex-valued parameters. However, as for the previously detailed RNN-based seq2seq architecture, the bottleneck representation will be constrained to real values in order to learn real-valued latent representations. One of the advantages of the FCAE with respect to the previously introduced RNNs is that the processing of all time steps is done in parallel instead of sequentially. As for RNN-based seq2seq architectures, convolution-based seq2seq architectures define a popular option to encode signals, for example to process ECG data [START_REF] Yildirim | An efficient compression of ecg signals using deep convolutional autoencoders[END_REF]. In our case as well as in the case of ECG encoding, one could arguably talk about a signal2signal architecture. An FCAE architecture adapted to our
I/Q signals is proposed in table C.2.
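As a simplified stand-in for such a fully convolutional encoder (real-valued only to keep the sketch short, with I and Q stacked as two channels instead of complex-valued convolutions; layer sizes and the class name are arbitrary), the following PyTorch module shows how global average pooling yields a fixed-size embedding whose dimension equals the number of bottleneck channels, whatever the number of pulses:

```python
import torch
import torch.nn as nn


class FCEncoder(nn.Module):
    """Fully convolutional encoder: variable-length signal in, fixed-size vector out."""
    def __init__(self, embedding_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=10, padding=5), nn.ReLU(),
            nn.Conv1d(16, embedding_dim, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)  # global average pooling over time

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = torch.stack([z.real, z.imag], dim=1)         # (batch, 2, N)
        return self.pool(self.features(x)).squeeze(-1)   # (batch, embedding_dim)


enc = FCEncoder()
for n_pulses in (24, 37, 64):                            # varying burst lengths
    z = torch.randn(4, n_pulses, dtype=torch.cfloat)
    print(enc(z).shape)                                  # always torch.Size([4, 32])
```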
Both seq2seq approaches, either recurrent or fully convolutional, train their neural network parameters by minimizing the reconstruction error computed over the signal samples (z_1 ... z_N)^T ∈ C^{N×1} describing a single column of the complex-valued matrix Z_{I/Q} defined in Eq. (1.2):

\min_W \frac{1}{N} \sum_{i=1}^{N} \left| \Phi(z_i; W) - z_i \right|^2    (2.42)
where Φ is the generative neural network of trainable weights W producing the complex-valued reconstruction of the input signal samples. In the computation of the previous error, the real and imaginary parts of the complex representation are taken into account like two components of a real-valued representation. This leads to a real-valued loss despite the complex-valued nature of the generative neural network, as announced in 2.2. The loss here is defined at the scale of a single range cell I/Q signal, sampled over N pulses. This loss is computed over batches of multiple such signals during a neural network training. One can note that the reconstruction task can be coupled with a denoising or a missing input interpolation task. The latter is made particularly easy by the availability of dropout [START_REF] Srivastava | Dropout: a simple way to prevent neural networks from overfitting[END_REF] 6 in deep learning libraries, although such dropout should not apply any scaling as the common dropout layer does.

Now that we have discussed the possibility to encode single range cell I/Q signals with AR models, RNN and FCNN-based generative neural networks, what is there to say to differentiate them? A simple way to put it could be to say that whereas the AR model reduces the representation of the signal to an autocorrelation Toeplitz matrix of a carefully chosen order, the RNN-based seq2seq approach is more about choosing the right recurrent neural network complexity. Similarly, the FCAE can also be summarized as a neural network architecture choice; however, here the model offers the interpretability potential of the convolution weights as FIR filters. All three approaches benefit from the application cases and successes offered by the literature. As reminded in 2.3.1 and in the current section, processing radar signal with AR models has been put forward in [START_REF] Cabanes | Multidimensional complex stationary centered Gaussian autoregressive time series machine learning in Poincaré and Siegel disks: application for audio and radar clutter classification[END_REF] and ECG signal processing can be done with recurrent neural networks [START_REF] Hou | Lstm-based auto-encoder model for ecg arrhythmias classification[END_REF]. The application of successive layers of convolutions to raw signal has also had its successes on audio [START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF][START_REF] Schneider | wav2vec: Unsupervised pre-training for speech recognition[END_REF], ECG [START_REF] Yildirim | An efficient compression of ecg signals using deep convolutional autoencoders[END_REF] and radio signal [START_REF] Liu | Modulation recognition with graph convolutional network[END_REF][START_REF] Timothy | Convolutional radio modulation recognition networks[END_REF], making the case for the FCAE method.
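Returning to the reconstruction objective of Eq. (2.42), its real-valued nature translates directly into code; the sketch below assumes hypothetical complex tensors recon and target standing for the output of a generative network and its input batch:

```python
import torch


def complex_reconstruction_loss(recon: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean squared reconstruction error of Eq. (2.42), the real and imaginary
    parts being treated as two components of a real-valued representation."""
    diff = recon - target
    return torch.mean(diff.real ** 2 + diff.imag ** 2)   # real-valued scalar loss


recon = torch.randn(8, 37, dtype=torch.cfloat, requires_grad=True)
target = torch.randn(8, 37, dtype=torch.cfloat)
loss = complex_reconstruction_loss(recon, target)
loss.backward()   # usable for training despite the complex-valued tensors
print(loss.item())
```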
Taking advantage of limited supervision to avoid generative encoding
A prominent advantage of encoding a range cell signal using the Toeplitz autocorrelation matrix of an AR process as described in 2.3.1 is that the range cell encoded representation is produced without supervision. This advantage also applies to the sequence-to-sequence recurrent or fully convolutional architectures described in 2.3.2: since the training task is reconstruction, possibly coupled with a denoising or missing input completion task, the model training does not require class labels. Since the encoding task is difficult, it appears relevant to propose neural network architectures and training setups able to take advantage of labels when the latter are available. We will therefore discuss two setups adapted to two distinct levels of supervision. The first one will remain based on a generative architecture but will take advantage of a limited amount of labels to penalize distances relative to class centroids within its latent space. The second one will assume sufficient supervision is available to eliminate the reconstruction training task altogether, in order to only harness the encoder part of the generative networks discussed in 2.3.2.
The first setup corresponds to the following loss:
\min_W \frac{1}{n} \sum_{i=1}^{n} \left\| \Phi(z_i; W) - z_i \right\|^2 + \frac{1}{m} \sum_{j=1}^{m} \left\| \Phi_E(z_j; W_E) - c_j \right\|^2    (2.43)
Here, unlike in Eq. (2.42), z_i is the i-th input range cell signal described by a vector, and not a single sample of one input signal, and Φ(z_i; W) is a vector containing its reconstruction by the generative neural network Φ of weights W. In the second term, Φ_E(z_j; W_E) is a vector containing the embedding of the labeled sample z_j, and c_j is the reference point or centroid associated with its label in the embedding or encoding space.
Since the encoding network is part of the generative architecture, the trainable weights W_E are part of W, hence the sole presence of W under the minimization operator. As the embeddings of our framework live in R^Q (see Fig. 2.4), both Φ_E(z_j; W_E) and c_j necessarily belong to R^Q as well. The sums over n and m correspond to a loss computed for a set of n unlabeled and m labeled samples. This loss thus indicates a semi-supervised learning setup, in accordance with the semi-supervised learning appreciation of [START_REF] Ruff | Deep semi-supervised anomaly detection[END_REF].
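A minimal sketch of this semi-supervised objective is given below (hypothetical tensor shapes and centroid choices, the centroid term only being computed when labeled embeddings are provided); the discussion of the two terms continues right after.

```python
import torch


def semi_supervised_loss(recon, x, lab_embeddings=None, centroids=None):
    """Sketch of Eq. (2.43): reconstruction error on all samples plus a latent
    distance to class centroids for the labeled subset, when available."""
    diff = recon - x
    loss = torch.mean(diff.real ** 2 + diff.imag ** 2)
    if lab_embeddings is not None:
        # lab_embeddings: (m, Q) real-valued embeddings of the labeled samples,
        # centroids:      (m, Q) centroid assigned to each sample's label.
        loss = loss + torch.mean(torch.sum((lab_embeddings - centroids) ** 2, dim=1))
    return loss


x = torch.randn(16, 37, dtype=torch.cfloat)             # batch of range cell signals
recon = x + 0.1 * torch.randn_like(x)                   # stand-in decoder output
lab_embeddings = torch.randn(4, 32)                     # stand-in encoder embeddings
centroids = torch.zeros(4, 32)                          # stand-in class centroids
print(semi_supervised_loss(recon, x, lab_embeddings, centroids).item())
```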
The two terms defined thus respectively relate to a reconstruction error and a latent clustering constraint. The association of a reconstruction term and a latent space clustering term is similar to what the literature of simultaneous clustering and representation learning already proposed. In [START_REF] Song | Autoencoder based data clustering[END_REF] for instance, an autoencoder is trained with a loss where one term minimizes the reconstruction error and another term minimizes the distance to assigned cluster centers. A key difference with respect to our proposal however remains: [START_REF] Song | Autoencoder based data clustering[END_REF] additionally suggests an alternate optimization of the mapping neural network and of the cluster centers. Here, we simply propose c_j as target classes mean representations or arbitrary coordinates towards which embeddings should be mapped. On a side note, in deep learning a regularization term is usually added to the training loss functions. For instance, the L2 norm of the neural network weights is added to the loss with a dedicated fixed weight [START_REF] Goodfellow | Deep learning[END_REF]. Such a term will not appear in the loss functions discussed here. Now that the first setup has been presented, let us turn to the second one. The concentration of latent representations of specific labels around centroids proposed as a
Layers in forward order
Layer parameters C-conv 1D kernel size 10, in channels Using a convolution with a large kernel size on the input signal is particularly interesting since it makes the potential interpretation of the learned weights as FIR filter coefficients more expressive. The mean pooling layer is applied over the features dimension, so that the remaining representation is a vector whose size equals the number of channels of the final convolutional layer. The final layer discards the imaginary part to produce the real-valued R Q vector representation of the range cell. Since only the real part is taken into account in the training loss, the network learns to concentrate the information in the real part only.
complementary loss term in Eq. (2.43) could actually suffice to train an encoder neural network alone directly, assuming enough labeled data is available. This would lead to a setup close to supervised classification, although it would not necessary be the case for our encoding task. For instance, there could be known classes samples with yet unseen signal sampling parameters in the test set, resulting in a not completely supervised machine learning setup despite the existence of the right centroid for the test samples label. In this higher supervision setup, one can harness the following loss:
min W 1 m m j=1 ||Φ E (z j ; W E ) -c j || 2 (2.44)
where the loss terms and indices correspond to the definitions provided for Eq. (2.43). The possibility to use such a loss to directly train the encoder would allow to discard the decoder in the architectures presented in section 2.3.2. This could simplify the trained architecture but it does not make the right centroid selection heuristic obvious.
In the case of the recurrent seq2seq generative architecture, one ends up with a recurrent encoder whose last hidden state is reduced to its real part, the latter now defining the final representation in a forward pass of the trained neural network. In the case of the FCAE autoencoder, one ends up with a fully convolutional network (FCN) that retains the potentially interpretable convolutional layers. An example architecture for such an FCN trained with the loss provided by Eq. (2.44) is proposed in table 2.2.
Encoding a neighborhood of range cells
Now that we have investigated the encoding of single range cells, each described by an I/Q signal, let us go back to the more elaborate problem of encoding our actual input, i.e. the neighborhood of range cells (see Fig. 1.4 and Fig. 2.1). The task at hand consists in encoding the input matrix defined by Eq. (1.2) belonging to C N ×H , where N is variable among the input data points due to the varying number of pulses in the transmitted bursts. In the latter, each column describes a range cell. Since all single range cells are first encoded into a fixed-size real-valued vector in R Q , the remaining transformation to produce a single real-valued vector in R Q for the entire neighborhood should define a function Φ : R Q×H → R Q . The definition of such a function Φ is the matter of upcoming sections and is illustrated on Fig. 2.4. We will first mention naive approaches to combine the single range cells respective embeddings and discuss the closely related subject of combining independent range cells to produce an artificial dataset respecting our input format in 2.4.1. We will then present the favored approach based on a graph neural network in 2.4.2. One can note that the inquired function Φ corresponds to the mean of the Fourier features after resampling in the baseline free of machine learning proposed for perspective in Fig. 1.12. Assuming a resampling scheme could successively provide fixed-size R Q single range cell embeddings, any embeddings combination approach presented here could also be integrated to this resampling baseline for the neighborhood embeddings integration subtask. The methods presented here could also extend to the combination of embeddings based on an individual range cell information different from the one carried by the I/Q signal, as suggested by the well-identified sensor networks data processing application of graph neural networks [START_REF] Ortega | Graph signal processing: Overview, challenges, and applications[END_REF].
Figure 2.3: Unsupervised seq2seq learning with recurrent neural networks to retrieve the fixed-size vector of R Q encoding each range cell signal. The upper part describes the encoder, the lower part describes the decoder. The blue and red arrows illustrate the recursive calls of the recurrent architectures to their previous hidden states in the encoder and the decoder respectively. The last hidden state of the encoder h R + h I is passed on to the decoder with its imaginary part zeroed out, thus creating the R Q single cell embedding we are looking for. Indeed, since the only way to provide information to the decoder is to put it in the real part of the hidden state, the seq2seq trainable weights are forced to learn the complex-to-real conversion. This does not forbid the decoder to adopt complex-valued weights in order to output the reconstructed input signal ẑ. The question mark among the inputs of the decoder ?(z N , ẑN ) denotes teacher forcing [77, p.372], the first input of the decoder being the ground truth and no end-of-sequence (EOS) token being used. We allowed ourselves to work without start-of-sequence (SOS) and EOS tokens [START_REF] Sutskever | Sequence to sequence learning with neural networks[END_REF] since, unlike in a sentence, the positioning towards the beginning or the end of a signal does not translate into semantic information. This is true because we are interested in the discriminative frequency content spanning over the whole sequence.
Figure 2.4: Proposed hit2vec step combining a cell2vec step and a graph2vec step. The cell2vec step, discussed in 2.3, corresponds to encoding a single range cell and transiting from a variable-size complex-valued representation to a fixed-size real-valued one. The graph2vec step, discussed in 2.4.2, corresponds to encoding the graph of the range cells neighborhood, where each range cell defines a node whose features are the R Q embedding provided by cell2vec. The graph2vec step relies on an adjacency matrix defining the range cells neighborhood connectivity pattern and allowing for the definition of a graph convolution through Eq. (2.48). Example neighborhood graphs are proposed on Fig. 2.6, Fig. 2.8 and Fig. 2.5. The input matrix is the enriched hit input format defined by Eq. (1.2), and the output graph embedding is passed on to a one-class classification method for low supervision discrimination.
Naive approaches and the possibility of data augmentation
A naive way of combining the encoded representations of each range cell to implement Φ is to use a simple mean computed over each one of the Q components provided by the single cells:
$\Phi(R_{single}) = \frac{1}{H}\, R_{single}\, I_Q$   (2.45)
where R single is the R Q×H matrix containing the single cells embeddings of the neighborhood of H range cells, and I Q is a R H×1 vector of ones. This mean can be adjusted with fixed or trainable weights over the H cells, which leads to the following alternative for Φ:
$\Phi(R_{single}) = \frac{1}{H}\, R_{single}\, A_Q$   (2.46)
where A Q is a R H×1 vector:
$A_Q = \begin{pmatrix} a_1 & a_2 & \dots & a_Q \end{pmatrix}^T$   (2.47)
These weights can be interpreted as a naive form of attention spanning over the neighborhood of range cells, in reference to the deep learning concept of attention [START_REF] Vaswani | Attention is all you need[END_REF] already used in generative [START_REF] Gong | Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection[END_REF] and graph neural network architectures [START_REF] Veličković | Graph attention networks[END_REF]. An important difference with deep learning attention remains, since the latter produces a weight based both on a vector representation key and a vector representation value. Assuming a large neighborhood is handled, a sparsity constraint could also be enforced to combine single cell embeddings, as was done on encoded representations in the context of a generative task in [START_REF] Gong | Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection[END_REF]. Such a sparsity constraint over a neighborhood of range cells could be inspired by, or interpreted as an extension of, the guard cells radar concept already regulating the contribution of neighboring cells for radar constant-false-alarm rate (CFAR) detection [START_REF] Arnaudon | Riemannian medians and means with applications to radar signal processing[END_REF][START_REF] Alabaster | Pulse Doppler Radar: Principles, Technology, Applications[END_REF]. The matrices R and A here have no relation with the autocorrelation matrix R and the autoregressive coefficients of 2.3.1.
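A minimal NumPy sketch of the naive combinations of Eqs. (2.45) and (2.46) follows; the array names and the example weights are illustrative assumptions.

```python
import numpy as np

def mean_combination(r_single):
    """Eq. (2.45): average the H single-cell embeddings (columns of the Q x H matrix)."""
    q, h = r_single.shape
    return r_single @ np.ones((h, 1)) / h            # (Q, 1) neighborhood embedding

def weighted_combination(r_single, weights):
    """Eq. (2.46): weighted average, 'weights' playing the role of A_Q (one weight per cell)."""
    q, h = r_single.shape
    return r_single @ weights.reshape(h, 1) / h      # (Q, 1) neighborhood embedding

# Toy usage: H = 5 range cells, Q = 8 embedding dimensions, central cell weighted more.
r_single = np.random.randn(8, 5)
w = np.array([0.5, 1.0, 2.0, 1.0, 0.5])             # illustrative fixed weights
print(mean_combination(r_single).shape, weighted_combination(r_single, w).shape)
```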
In terms of deep learning experimentation, one can note that producing classes of neighborhoods of range cells with diverse levels of local relative correlations can be done through the combination of single range cells, even when they are not identified as neighbors. This can substantially ease the generation of a sizeable dataset to train and evaluate the proposed approaches with the enriched input format on already existing and not modifiable platforms. Indeed, such a dataset generation possibility makes it possible to avoid a very inconvenient modification of the hits format produced by the signal processing part of the radar processing pipeline (see Fig. 1.1) by unlocking the artificial construction of neighborhoods of range cells of arbitrary size H from the already existing single range cell format. This recombination of range cells is equally applicable when the hit format deployed on a system already produces a neighborhood of range cells, but this neighborhood is not of the desired size H. Here are a few examples of possible range cells combinations to produce different local correlation configurations (a construction sketch is given after the list below):
• Replicate a single range cell to create a perfectly correlated neighborhood. This translates into the pattern A . . . A, where one letter is one class or type of pattern in terms of Doppler content. An example of this pattern for a neighborhood of size H = 5 is illustrated on Fig. 2.1, on the left part.
• Replicate one range cell in the central part of the neighborhood, then another one on the border cells. This creates a symmetric neighborhood with two group of cells perfectly correlated, and translates into the pattern B . . . BA . . . AB . . . B.
An example of this pattern for a neighborhood of size H = 5 is illustrated on Fig. 2.1, on the bottom part.
• If the number of types of range cells available exceeds the number of range cells in the neighborhoods, one can define an uncorrelated, asymmetric neighborhood. This translates into the pattern ABC . . .. An example of this pattern for a neighborhood of size H = 5 is illustrated on Fig. 2.1, on the right part.
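The following sketch illustrates how neighborhoods following the patterns listed above could be assembled from already available single range cells; the helper name and the toy cell pool are assumptions.

```python
import numpy as np

def make_neighborhood(cells, pattern):
    """Stack single range cell I/Q vectors column-wise to form an artificial C^{N x H} neighborhood.

    cells:   dict mapping a class letter ('A', 'B', ...) to a single-cell I/Q vector of length N
    pattern: string such as 'AAAAA', 'BBABB' or 'ABCDE' describing the H cells
    """
    return np.stack([cells[letter] for letter in pattern], axis=1)   # shape (N, H)

# Toy usage with N = 64 pulses and H = 5 cells (complex white noise stands in for real cells).
rng = np.random.default_rng(0)
cells = {c: rng.normal(size=64) + 1j * rng.normal(size=64) for c in "ABCDE"}
correlated   = make_neighborhood(cells, "AAAAA")   # perfectly correlated neighborhood
symmetric    = make_neighborhood(cells, "BBABB")   # two perfectly correlated groups of cells
uncorrelated = make_neighborhood(cells, "ABCDE")   # asymmetric, uncorrelated neighborhood
```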
With these example classes, one can ensure the classes of neighborhoods separated already represent some level of diversity of targets in the hit central range cell responsible for the detection. Evidently, the choice of the types of neighborhoods artificially generated, when they are needed, should depend on the downstream discrimination task addressed. Furthermore, when building neighborhoods as well as when developing neighborhood encoding methods, one should keep in mind that the position of a cell relative to the central range cell matters, whereas the absolute position of the range cells along the ranges axis should not. While creating a dataset of labeled neighborhoods of range cells, one should also not lose track of the actual objective, which is the discrimination of the target described by the central range cell of the neighborhood. This should translate into a systematically greater relative contribution of the central range cell to the final neighborhood representation.
A deep learning encoding with graph neural networks
The solution proposed by this thesis to encode a neighborhood of encoded range cells I/Q sweeps relies on a neighborhood graph which will be processed by a graph neural network [START_REF] Scarselli | The graph neural network model[END_REF][START_REF] Gori | A new model for learning in graph domains[END_REF]. As for the FCNN and the seq2seq architectures, let us first recall how a graph neural network works. A GNN takes a set of nodes N and a set of edges E as input and transforms these elements, i.e. their features, with both linear and non-linear operations constrained by the connectivity pattern of an adjacency matrix. Classification and regression tasks can thus be conducted on graphs at node scale as well as on a whole graph. This notably translates into the classification of individual nodes whose features have been processed by graph neural networks, and into the classification of graphs whose representation stems from an aggregation of processed nodes features. Generative architectures adapted to data distributed on graphs have recently been proposed, mirroring the Euclidean deep learning community findings with graph autoencoders [START_REF] Sabbaqi | Graphtime convolutional autoencoders[END_REF] and graph UNets [START_REF] Gao | Graph u-nets[END_REF]. Here, GNNs are considered for training with a form of supervision, but the existence of generative GNNs suggests that unsupervised encoding of neighborhoods of range cells is already accessible. Key deep learning operations, such as convolutions and attention, have been adapted to graphs [START_REF] Veličković | Graph attention networks[END_REF][START_REF] Kipf | Semi-supervised classification with graph convolutional networks[END_REF][START_REF] Henaff | Deep convolutional networks on graph-structured data[END_REF][START_REF] David K Duvenaud | Convolutional networks on graphs for learning molecular fingerprints[END_REF][START_REF] Bruna | Spectral networks and locally connected networks on graphs[END_REF]. Among these adaptations, some directly relate to the graph Laplacian eigendecomposition and are not spatially localized over the graph. Less common edge-focused processing is also explored by the recent literature [START_REF] Bielak | Pytorch-geometric edge -a library for learning representations of graph edges[END_REF][START_REF] Gong | Exploiting edge features for graph neural networks[END_REF]. The association of a convolutional neural network to produce signals embeddings with a GNN over which embeddings are placed was explored for modulation recognition in [START_REF] Liu | Modulation recognition with graph convolutional network[END_REF]. On another note, the combination of a spatial graph with a temporal convolutional processing of signals was applied to electroencephalogram (EEG) signals in [START_REF] Li | Classify eeg and reveal latent graph structure with spatio-temporal graph convolutional neural network[END_REF].
The ability to process information organized according to a graph is critical to enable deep learning to tackle problems based on non-Euclidean data. Graph neural networks are part of the modern deep learning tools gathered under the name of geometric deep learning [START_REF] Michael M Bronstein | Geometric deep learning: going beyond euclidean data[END_REF]. Geometric deep learning (GDL) achieves deep learning using representations and parameters constrained to graphs and differentiable manifolds. Here only the graphs aspect of GDL will be discussed, but a different application belonging to GDL involving the SPD matrices manifold is discussed in 3.2. Geometric deep learning follows the refinement trend of deep learning approaches to adapt the vanilla framework brought by backpropagation, activation functions, dense and convolutional layers to data different from standard images and features vectors.
The point of using a graph in our application case is that it makes it possible to enforce a constraint over how the information distributed over the neighborhood of range cells is taken into account in a final encoding. The connectivity pattern enshrined in the adjacency matrix rules the relative flow of already encoded range cell information, while the importance weights among these information flows are determined in training. The graph convolutional layer (GCL) used in our experiments is the one proposed by [START_REF] Kipf | Semi-supervised classification with graph convolutional networks[END_REF], which transforms H input nodes defined by features vectors in R M and organized in a graph as follows:
$H = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X W$   (2.48)
where:
• X ∈ R H×M is the nodes features matrix in which each of the H rows is described by a vector of M real-valued coefficients;
• Ã ∈ R H×H is the graph adjacency matrix with inserted self-loops (see Figs. 2.7, 2.9 and 2.5);
• D̃ ∈ R H×H is the diagonal degree matrix of the graph with inserted self-loops (see Figs. 2.7, 2.9 and 2.5);
• W ∈ R M ×M ′ is the trainable weights matrix enforcing the output nodes features dimensions M ′ ;
• H ∈ R H×M ′ is the output nodes features matrix, and the input of the following graph convolutional layer if several layers are stacked to form the GNN.
This GCL is spatially localized in opposition to, for instance, the convolution of [START_REF] Bruna | Spectral networks and locally connected networks on graphs[END_REF]. It is followed by the application of an activation function like the ReLU defined in Eq. (2.7).
To completely define a neighborhood graph one needs both the nodes features matrix X and the adjacency matrix A. The degree matrices D and D̃ are inferred from the adjacency matrices A and Ã = A + I H respectively, where I H is the H × H identity matrix. The self-loops added to the graph thanks to the addition of I H to A allow the convolution to take into account a node itself when computing its next representation according to its one-step neighbors. The two occurrences of D̃^(-1/2) act as a normalization critical for gradient stability, and are part of the renormalization trick proposed by [START_REF] Kipf | Semi-supervised classification with graph convolutional networks[END_REF]. If several layers are chained one after the other, the input nodes features matrix X is replaced by the output nodes features representation H of the previous layers. Here, real-valued GNNs are considered since the transition from complex-valued representations to real-valued representations is already achieved during the single range cells encoding described in 2.3 (see Fig. 2.4). To produce the fixed-size real-valued vector in R Q with graph convolutions, one can either process the neighborhood according to a node2vec or a graph2vec framework.
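A minimal NumPy sketch of the GCL of Eq. (2.48) with the renormalization trick is given below; the star adjacency matrix corresponds to the neighborhood of Fig. 2.5, while the weight matrix and the feature sizes are placeholders.

```python
import numpy as np

def gcl(x, a, w):
    """One graph convolutional layer (Eq. 2.48) followed by a ReLU activation.

    x: (H, M) node features, a: (H, H) adjacency without self-loops, w: (M, M') trainable weights
    """
    h = a.shape[0]
    a_tilde = a + np.eye(h)                            # insert self-loops
    d_tilde = np.diag(a_tilde.sum(axis=1) ** -0.5)     # D̃^{-1/2}
    out = d_tilde @ a_tilde @ d_tilde @ x @ w          # normalized one-step propagation
    return np.maximum(out, 0.0)                        # ReLU

# Star-shaped neighborhood of Fig. 2.5: the central (test) cell is connected to the four others.
a_star = np.zeros((5, 5)); a_star[0, 1:] = 1; a_star[1:, 0] = 1
x = np.random.randn(5, 8)                              # H = 5 nodes with Q = 8 input features
w = np.random.randn(8, 16)                             # placeholder weights, M' = 16
print(gcl(x, a_star, w).shape)                         # (5, 16)
```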
Opting for a node2vec approach amounts to taking the central range cell vector representation, which belongs to R Q , as the output neighborhood representation. This node features vector takes into account local information thanks to the graph convolutional layer. However, since the GCL of Eq. (2.48) operates over a one-step neighborhood, its receptive field depends on the number of GCLs stacked before it. For instance, the neighborhood graph (a) of Fig. 2.6 requires two GCLs stacked one after the other in order for the whole neighborhood information to be taken into account in the C test range cell features vector. On the contrary, the neighborhood graph proposed on Fig. 2.5 would only require one GCL to achieve the same goal. Now that the stacking of GCLs is mentioned, it is necessary to mention the over-smoothing problem of GNNs. Stacking too many layers in GNNs may lead to nodes becoming hardly separable, eventually rendering a GNN architecture void of discriminatory power [START_REF] Cai | A note on over-smoothing for graph neural networks[END_REF][START_REF] Oono | Graph neural networks exponentially lose expressive power for node classification[END_REF]. Here, node2vec amounts to a so-called cell2vec since one node defines one radar range cell.

Figure 2.5: Left: Example neighborhood graphs without weights on edges. If all weights are equal to one as is the case here, the relative ranges of the neighborhood are lost, i.e. information is lost using this graph. Since every cell is a single step away from the one carrying the actual detection, one convolutional layer as defined by Eq. (2.48) is enough for the whole neighborhood to impact the output representation of the central node. Center: Adjacency matrix A of the neighborhood graphs, without the inserted self-loops necessary to compute the graph convolutional layer proposed in [START_REF] Kipf | Semi-supervised classification with graph convolutional networks[END_REF] and defined in Eq. (2.48). Right: Degree matrix D̃ of the neighborhood graph with the inserted self-loops necessary to compute the graph convolutional layer.
Opting for a graph2vec approach can similarly be achieved through graph convolutions, although this time stacking a minimal number of GCLs is not essential for the final representation vector to capture data stemming from every node. A GNN can indeed be made into a graph2vec architecture by means of a final global pooling over the graph nodes, for instance through the computation of a mean. Such a global pooling discards the semantic information associated with the graph structure, although the latter could have been extracted by the preceding layers [START_REF] Chen | Optimal transport graph neural networks[END_REF]. The intervention of the graph2vec encoding mechanism is illustrated on Fig. 2.4, where it is interchangeable with the previously detailed node2vec technique. The problem of range cells neighborhood graph encoding can seem very similar to the task of encoding molecules, which also happen to define small graphs where nodes are connected through bonds of varying nature. The molecules representation learning successes [START_REF] Chen | Optimal transport graph neural networks[END_REF][START_REF] Yang | Analyzing learned molecular representations for property prediction[END_REF][START_REF] David K Duvenaud | Convolutional networks on graphs for learning molecular fingerprints[END_REF] in the literature remarkably emphasize how small graphs of varying size can be effectively transformed into fixed-size vectors, suggesting a similar relevance for radar range cells neighborhoods. The varying nature of the molecular bonds can provide an interesting perspective with respect to the sampling diversity separating the graphs of our own application case. In [START_REF] Chen | Optimal transport graph neural networks[END_REF], the embedding space is populated with graph prototypes, which is reminiscent of the dictionary learning literature and of the latent memory developed for an AE in [START_REF] Gong | Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection[END_REF]. This could inspire future orientations of the proposed hit2vec processing.
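The two readouts can be summarized by the following sketch, which assumes the node features have already been processed by stacked GCLs; the names and the central cell index are illustrative.

```python
import numpy as np

def readout(node_features, mode="graph2vec", center_index=2):
    """Turn GCL-processed node features (H, Q) into a single R^Q neighborhood embedding.

    mode="node2vec":  keep the central (detection) range cell features, local context being
                      already mixed in by the preceding graph convolutional layers.
    mode="graph2vec": global mean pooling over all nodes, which discards the graph structure.
    """
    if mode == "node2vec":
        return node_features[center_index]
    return node_features.mean(axis=0)

# Toy usage on a neighborhood of H = 5 range cells embedded in Q = 8 dimensions.
h_out = np.random.randn(5, 8)                      # stands in for the output of stacked GCLs
cell2vec_style = readout(h_out, mode="node2vec", center_index=2)
graph2vec_style = readout(h_out, mode="graph2vec")
print(cell2vec_style.shape, graph2vec_style.shape)  # (8,) (8,)
```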
The neighborhood of range cells graphs presented on Fig. 2.6 are limited by the fact that all edges share the same weight e. This can seem surprising in the context of radar range cells since the latter can already span over large areas individually, as a direct consequence of the constrained bandwidth, as implied by Eq. (1.9). This suggests that it should be possible to attach relative importance to range cells within a neighborhood.

Figure 2.9: Alternative fully-connected neighborhood of range cells: proposed adjacency and degree matrices. These are the adjacency and degree matrices associated with the alternative neighborhood graphs proposed on Fig. 2.8. As for Fig. 2.5, the degree matrix takes into account the inserted self-loops necessary to compute the graph convolutional layer. For (a) we end up with an adjacency matrix that associates each link between two range cells with a weight representing the distance in amount of range cells, while for (b) we get a unique edge weight for each edge in the graph, each weight appearing twice in the adjacency matrix since the graph is undirected. One can notice that both undirected graphs translate into symmetric adjacency matrices.
Figure 2.10: Illustration of the close relationship between processing data with a graph neural network and a recurrent neural network. A signal z sampled over N points can be seen as a directed graph with N nodes and N - 1 edges.
The GNN neighborhood encoding is interestingly versatile since it could be combined with SPD manifold-aware processing to take advantage of the AR representation of the input signals put forward in section 2.3.1. For instance, each autocorrelation matrix computed over one range cell signal could be fed to a manifold-aware neural network to produce a fixed-size real-valued vector. These vectors could then be similarly distributed over the neighborhood graphs proposed here. Graph neural networks are closely related to the recurrent neural networks put forward to encode a single range cell signal, since a recurrent neural network simply follows a directed graph translating the chronological order of signal samples. For a signal z sampled over N points, the directed graph is illustrated on Fig. 2.10. The corresponding adjacency matrix is:
$A_{RNN} = \begin{pmatrix} 0 & 1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & 0 \\ 0 & 0 & 0 & \dots & 0 & 1 \\ 0 & 0 & 0 & \dots & 0 & 0 \end{pmatrix}$   (2.49)
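For illustration, a minimal construction of this chain adjacency matrix could read as follows (the helper name is an assumption):

```python
import numpy as np

def rnn_adjacency(n):
    """Directed chain adjacency of Eq. (2.49): sample i points to sample i + 1, last row is zero."""
    a = np.zeros((n, n))
    a[np.arange(n - 1), np.arange(1, n)] = 1.0
    return a

print(rnn_adjacency(5))
```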
However, GNNs are much more generic and versatile than RNNs since they make it possible to enforce arbitrary symmetry and weighting schemes over the range cells defining the neighborhood graph. Such weights can be applied to nodes but also to edge features. While the upstream encoding methods proposed in 2.3 do not explicitly take into account the varying input signals sampling frequency (the PRF), this information could be added to the edge features to intervene in the features flow of the graph during training. Taking the PRF explicitly into account seems relevant since this information is leveraged in the resampling baseline previously mentioned. Taking into account so-called features of features, or input data hyperparameters, within the architecture was already done in the literature. For instance, [START_REF] Gao | Graph u-nets[END_REF] recorded the nodes locations in a graph pooling layer to subsequently place nodes back to their position in the input graph while unpooling. This was done in order to define a generative graph UNet architecture. That being said, the varying input signal length is already available in our input signals as a data dimension, but not necessarily explicitly valued by the proposed range cell encoding of 2.3. As is, the example GCL defined by Eq. (2.48) can only take into account scalar edge weights integrated within the adjacency matrix, as the examples of Fig. 2.5, Fig. 2.7 and Fig. 2.9 show. Other GNN architectures however address this limitation [START_REF] Gong | Exploiting edge features for graph neural networks[END_REF]. Graph neural networks can be said to be a generalization of RNNs able to deal with cyclic, directed and undirected graphs [START_REF] Veličković | Graph attention networks[END_REF][START_REF] Scarselli | The graph neural network model[END_REF]. Constraints over a range cells neighborhood graph could for instance contribute to define and enforce a concept similar to the one of guard cells used for radar CFAR detection [START_REF] Arnaudon | Riemannian medians and means with applications to radar signal processing[END_REF][START_REF] Alabaster | Pulse Doppler Radar: Principles, Technology, Applications[END_REF]. In such a case, a graph could maintain constrained edge weights in order to take into account guard, test and reference cells in a framework smoother than simply choosing whether or not to take into account a cell's features to determine a local clutter or targets map.
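A hedged sketch of a scalar edge weighting, in the spirit of the fully-connected graph (a) of Fig. 2.9 where each edge carries the separation in number of range cells, is shown below; the exact weighting used in the figure is an assumption here.

```python
import numpy as np

def distance_weighted_adjacency(h):
    """Fully-connected neighborhood adjacency whose edge (i, j) weight is the separation |i - j|
    in number of range cells (zero diagonal; self-loops are added later by Eq. (2.48))."""
    idx = np.arange(h)
    return np.abs(idx[:, None] - idx[None, :]).astype(float)

print(distance_weighted_adjacency(5))
```

Whether a larger separation should increase or decrease an edge weight is a design choice: the inverse of this distance is a natural alternative if distant cells are meant to contribute less through the GCL of Eq. (2.48).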
Single range cell encoding experiments
Since the experiments evaluate the separability of targets and number of pulses classes using one-class classification approaches and metrics, the experiments encoding neighborhoods of range cells are presented in chapter 4. Here, we will only report preliminary results regarding the encoding of single range cells. The experimental results presented here stem from only one of the single range cell encoding approaches detailed in this chapter, as it is the only approach that showed encouraging results.
Experiments protocol and data
The dataset used to evaluate the representation learning over range cell I/Q signals is a slightly modified version of the publicly available simulated PDR dataset used in [START_REF] Bauw | Near out-of-distribution detection for low-resolution radar micro-doppler signatures[END_REF]. Whereas in [START_REF] Bauw | Near out-of-distribution detection for low-resolution radar micro-doppler signatures[END_REF] the discrimination task was conducted at the scale of several bursts of pulses to define a Doppler signature containing the evolution of a spectrum over time, here we work at the scale of a radar hit, i.e. at the scale of a single burst. This is equivalent to a single row of the Doppler signatures depicted on Fig. 1.6 and Fig. 3.8. Such a row is defined by the DFT computed over the burst pulses backscatter. Looking at the previous figures, one can realize that the reduced information available in a single burst can translate into an unlucky configuration where the burst describes an uncharacteristic node in the modulation pattern of a target, assuming the target creates one such pattern in the first place. This should be kept in mind as it could explain a small amount of encoding or discrimination failures.
The dataset defines four classes of helicopter-like targets, each of the latter defining a Doppler signature modulation of specific complexity as illustrated on Fig. 3.8. This varying modulation pattern is due to the different numbers of blades characterizing each class. The main difference between the targets representation here and the one used in [START_REF] Bauw | Near out-of-distribution detection for low-resolution radar micro-doppler signatures[END_REF], other than the single burst scale of the data points, is the varying number of pulses available in the radar bursts. The effect of such a variation on the resolution of the targets I/Q response has already been illustrated on Fig. 1.6. The robustness of our single range cell encoding with respect to such an input resolution variation is one of the pursued invariances as summarized in table 2.1. The four kinds of modulation patterns define the so-called targets classes in our experiments, in opposition to the so-called pulses classes whose labels refer to the number of pulses describing the target in the input representation. Again, this number of pulses translates into the size of the complex-valued input matrix Z I/Q of Eq. (1.2) and the resolution at which the target can be described.
The model used to encode the data is the FCN whose architecture is described in table 2.2. The aim of the learning phase is to separate the targets classes and to verify how strongly the neural network output representations are distributed according to the number of pulses defining the input representations. In order to create the dataset, the MATLAB [125] simulation creates numerous series of bursts of pulses to describe targets individually. Several targets are generated to populate each class. The diversity among targets is generated through the variation of the rotor rotation speed, the initial position, the blades length, and the target velocity. The bursts themselves are then individually distributed in the training, validation and test sets for our experiments. The creation of bursts in series to describe a moving target helps avoid the unlikely generation of bursts backscattering only or mostly modulation patterns at their modulation node, which would hide the discriminative modulation pattern we are relying on.
This approach aims at exploring the ability of the considered architectures to learn the sought-after invariances, and is flawed in the sense that it suffers from a data leakage potential. Data leakage here amounts to the presence of bursts backscatter describing the same target in more than one of the three divisions (training, validation, testing) of the complete dataset. This deficiency may allow the encoding neural network to learn based on targets features irrelevant to the actual objective and create misleading performances. A typical example case of the danger of data leakage was illustrated by the successive versions of an X-rays discrimination study [START_REF] Rajpurkar | Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning[END_REF] where each patient produced several input representations, and patient overlap corrupted the (training, validation, testing) data division. This led to deceptive performances where the neural network could cheat with the features identifying patients instead of their pathology. The risk of data leakage is set aside to enable a more accessible preliminary study of the proposed encoding scheme, and should be reduced by the lack of features identifying individual targets across bursts anyway. Indeed, there is nothing else than the modulation pattern identifying the class in the inputs. In a more realistic setup the danger of data leakage remains, since individual targets could be identified across the three data divisions by a situational clutter spilling over the single burst signatures.
Preliminary results with supervised representation learning
In our preliminary experiments, the only promising performances were stemming from the FCN architecture for which supervision was available, i.e. where the training setup described by Eq. (2.44) was used. The unsuccessful RNN, seq2seq and FCAE architectures are still briefly described in Appendix C. To evaluate the evolution of the separability of targets and pulses classes in the single range cell embedding space, we record the AUCs of the one-class classification (OCC) of each of these classes during training, the minority monitored class defining the positive class for the AUC computation. The AUCs are computed over the test data made of separate bursts. An example run yields the metrics indicated on Fig. 2.11, where the loss per batch during training complements the AUC evolutions. On these metrics, one can observe the encouraging rise of the targets classes AUCs while the pulses classes AUCs remain around the randomness performance of 0.5. The OCC methods selected to produce the successive AUC scores are the widespread, shallow learning isolation forest (IF) [START_REF] Fei | Isolation forest[END_REF] and one-class support vector machine (OC-SVM) [START_REF] Schölkopf | Estimating the support of a high-dimensional distribution[END_REF], both being defined in chapter 3. All targets classes seem to benefit similarly from the encoding network training, and the performances of both OCC methods remain close during training. On the contrary and as hoped, the pulses classes separability is not favored since the associated AUCs remain around 0.5, which is the random discrimination performance. The improving separability of targets classes appears when using specialized machine learning methods like IF and OC-SVM, and is not visible when replacing the latter with a simple Euclidean distance to a class reference point, or with a Silhouette clustering score [START_REF] Peter | Silhouettes: a graphical aid to the interpretation and validation of cluster analysis[END_REF] computed with a Euclidean metric.
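As an illustration, AUCs of this kind can be produced from embeddings with scikit-learn as sketched below; the variable names, model hyperparameters and the toy data are assumptions and do not reproduce the exact experimental protocol.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_auc_score

def occ_aucs(train_emb, test_emb, test_is_positive):
    """Fit IF and OC-SVM on one-class training embeddings and compute test AUCs.

    test_is_positive: boolean array, True for the monitored (positive) class samples.
    """
    aucs = {}
    for name, model in (("IF", IsolationForest(random_state=0)),
                        ("OC-SVM", OneClassSVM(nu=0.1))):
        model.fit(train_emb)
        # decision_function is larger for inliers, hence the sign flip to get an outlyingness.
        outlyingness = -model.decision_function(test_emb)
        aucs[name] = roc_auc_score(test_is_positive, outlyingness)
    return aucs

# Toy usage with random 8-dimensional embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 8))
test = np.vstack([rng.normal(size=(80, 8)), rng.normal(loc=3.0, size=(20, 8))])
labels = np.array([False] * 80 + [True] * 20)
print(occ_aucs(train, test, labels))
```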
To complement the evolution of the losses and test dataset AUCs during training, one can compare the latent distribution of the classes before and after training. An example of one such comparison is proposed with Fig. 2.12 and Fig. 2.13, where one can observe how the targets classes appear partially disentangled after training on the TSNE visualization (see top left image on both figures). The 2D visualization of such distributions is made possible by the dimensionality reduction methods TSNE [START_REF] Van Der Maaten | Visualizing data using t-sne[END_REF] and PCA. The OCC methods IF and OC-SVM, as well as the dimensionality reduction and the AUC scores computation, are all implemented using Scikit-learn [START_REF] Pedregosa | Scikit-learn: Machine learning in python[END_REF].
Necessary follow-up experiments
The previous experimental results can only be taken as a proof-of-concept that aims at demonstrating the feasibility of encoding a semantic diversity of single range cells in order to separate them based on their individual Doppler content. In terms of deep learning experiments, the results presented here lack statistical significance due to the fact that a single experimental run, i.e. a single random seed, is put forward. Ideally, the following improvements should be made to ensure a fair and relevant evaluation of the proposed encoding methods:

• Evaluate the mean and variance of the AUC metric over a dozen random seeds to take into account the benefits and drawbacks of random initialization.
• Compare the proposed approaches with a simple baseline. Here the latter should be fabricated since the use case is very specific. In a very permissive way, one could argue that the unsuccessful approaches we proposed but which did not yield encouraging performances and thus do not appear in the results here constitute a form of baseline.
• Remove the targets overlap among the three (training, validation, testing) dataset divisions to suppress the data leakage risk.
• Enrich the targets dataset with single burst targets backscatter of more variable SNR and including progressively confusing clutter.
• Add targets from an unknown target class in the testing set to see if the encoding framework remains somewhat discriminative for this unseen class, since this is the semi-supervised encoding we are actually seeking.
• Add targets of known and unknown targets classes (cf. previous point) represented by input signals defined by a number of pulses different from the ones seen in training to see if the encoding remains somewhat discriminative for this unseen input signal format, since this is the semi-supervised encoding we are actually seeking.
• Add targets from known and unknown classes (cf. previous point) represented by input signals with a sampling PRF unseen in the training set to see if the encoding remains somewhat discriminative for this unseen input signal format, since this is the semi-supervised encoding we are actually seeking.
As a reference, rigorously conducted comparisons of machine learning methods can be seen in chapter 3. The last two improvement points proposed here are motivated by the ideal invariances the encoding neural network should manifest. These invariances are summarized in table 2.1.
Chapter 3
One-class classification for radar targets discrimination
[Chapter opening diagram: detection in C N ×H → "hit2vec" → R Q embedding → one-class classification → score in R]
This chapter contains contributions of this thesis in sections 3.1.3, 3.1.5 and 3.2.3. These contributions are associated with our three publications [START_REF] Bauw | Near out-of-distribution detection for low-resolution radar micro-doppler signatures[END_REF][START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF][START_REF] Bauw | From unsupervised to semi-supervised anomaly detection methods for hrrp targets[END_REF]. The previous chapter introduced approaches to encode the enriched I/Q sweep feature of radar hits, i.e. to go from C N ×H to R Q . Once detections are encoded, one can implement a discrimination method to help the radar operator isolate relevant targets from clutter and low-priority objects. Since encoding leads to the projection of hits into the shared representation space R Q , any discrimination method applicable to vector representations could be used. This opens the door to radar hits discrimination with any of the numerous deep and non-deep, unsupervised, semi-supervised and supervised classification approaches available in the literature. Ideally, such discrimination would be handled by a supervised classification or an open-set recognition (OSR) pipeline with specific enough targets classes with respect to military and civilian activities.
Such a supervised classification approach would require excessive supervision, i.e. it would call for large, diverse and completely labeled datasets to be available for training. Such a dataset is unthinkable in military radar applications, since not many labeled samples are accessible for friendly targets, while unfriendly targets can remain completely unknown to the sensor. An OSR pipeline supposedly identifies a closed set of classes seen during training just like supervised classification, but also determines whether a test sample belongs to one of the known classes at all. This means OSR demands as much supervision as supervised classification regarding the closed set of classes encountered during training, making it equally unsuitable for the radar targets discrimination pursued. This emphasizes the similarity between OSR and classification with rejection [START_REF] Mahdavi | A survey on open set recognition[END_REF][START_REF] Peter | Classification with a reject option using a hinge loss[END_REF][START_REF] David | Growing a multi-class classifier with a reject option[END_REF].
One can still note that a high-performing open-set recognition, identifying well-referenced classes seen during training, coupled with a clustering of samples belonging to potentially unidentified but consistent data modes, could perfectly answer the challenges of a radar deployment. Indeed, targets perception varies a lot due to radar target-aspect sensitivity, and both the geography surrounding the sensor and the weather influence the values being processed. Thus, setting up a radar amounts to updating the clutter and relevant targets appearances and how to separate the latter, making the adaptive nature of open-set recognition approaches and clustering notably adapted when they are combined.
To tackle the challenge of low supervision, we chose to discriminate between encoded radar hits using OCC. The intuition of the favored OCC method is to use the few labeled data points available during training to gather the output representations of a neural network around one or several latent reference points. The distance to the reference points in the output space can then be used to generate an outlyingness score O(x) : R d → R for a test sample x ∈ R d . If we then consider the latent reference points as capturing the distribution of a set of radar targets classes, one can conclude that this score directly translates into a means to discriminate between targets belonging to the latter set of targets classes and targets that do not. Details regarding this favored OCC technique and other OCC methods will be provided in the upcoming sections.
Assuming a performing enough OCC, one could separate a set of radar targets described by a limited quantity of labeled samples from other detected objects and phenomena. The advantage of this approach is that it does not require labeled samples for all the classes being separated, which gives an answer to the lack of supervision in the task at hand. The concentration of latent outputs belonging to the set of targets classes offers an intuitive way of including labeled samples outside of the one-class during training by repelling their output representation from the reference points. In the proposed one-class classification setup, it is worth noting that the one-class could potentially contain a certain diversity of targets, i.e. not be limited to a single kind of radar targets. This last point is critical with respect to the targets discrimination task considered, since an operator would likely want to set up an alarm for arbitrary sets of targets, however diverse.
This arbitrary one-class diversity is yet another key challenge of the ideal radar targets discrimination. Semantically close data modes, i.e. samples so close that their separation can be difficult, can be found both inside and outside of the one-class. This question of semantic proximity leads to the definition of near and far OODD. Near out-of-distribution detection (OODD) aims at distinguishing one or several data classes from semantically similar data points. For instance, identifying samples from one class of CIFAR10 among samples of the other classes of the same dataset solves a near OODD task. On the other hand, separating CIFAR10 samples from MNIST samples is a far OODD task: there is no strong semantic proximity between the data points being separated [START_REF] Ren | A simple fix to mahalanobis distance for improving near-ood detection[END_REF]. The concept of OCC for radar targets discrimination is illustrated on Fig. 3.1, which depicts how diverse targets can be gathered within a one-class boundary, and how labeled out-of-distribution samples can contribute to the boundary. This additional supervision translates into semi-supervised AD. In some cases and with expert knowledge involved, it is possible to generate artificial labeled anomalies to contribute to the training phase, defining a form of self-supervised learning. This idea could also be interpreted as data augmentation (for instance, one can add noise to existing data points to make the model training more robust), although it may involve creating a label initially absent from the learning setup. Our application of OCC to radar targets discrimination can be considered as a near OODD task, since the OCC is meant to separate valid radar targets stemming from a unique sensor.

Figure 3.1: One-class classification intuition diagram, which can also be understood as anomaly detection: the "normal" one-class distribution has to be captured in order for the detection of out-of-distribution samples to be possible. The one-class can ideally be composed of several classes, and the anomalies or out-of-distribution samples are of infinite diversity. The latter can be other data classes, and noisy samples. In the case of radar targets discrimination, the one-class may for instance gather a diversity of small and slow targets for which labeled reference samples are available for training, and for which an alarm would be raised to warn the radar operator. In such a context, negative labeled samples available during training to refine the one-class boundaries could stem from weather phenomena which are known to appear close to relevant small and slow targets. The use of a minority of unrepresentative labeled anomalies during training for additional supervision will be addressed by some of the OCC methods presented in this chapter.
In addition to the presentation of OCC methods, this chapter will present OCC experiments conducted independently from any hit encoding in section 3.3.
One-class classification methods considered
This section will put forward several anomaly detection methods we have considered for encoded hits discrimination in this work. One of these methods, deep random projection outlyingness (RPO), is one of our original contributions in this thesis. This contribution is detailed in 3.1.3 and is an evolution of the non-deep RPO presented in 3.1.1 beforehand. Experiments were first conducted on generic datasets, i.e. MNIST, Fashion-MNIST and CIFAR10, and on simulated and real radar data, all different from encoded hits. These experiments on data types of a different nature than the hits we are considering remain relevant for our AD experiments, since encoded hits are vectors of features, and given a comparable level of supervision, discriminating encoded hits should not be different from discriminating the images or 1D range profiles we will consider. In
all cases however, the difficulty of the task relates to the relative proximity of the data modes separated. This emphasizes the relevance of working on a filter with two independent steps: one can improve and plug various discrimination approaches on the output of the radar targets encoding, and vice versa. An important feature of the methods presented is that they all produce a continuous scalar decision score that would allow the radar operator to have a refined appreciation of the anomalous nature of a target, in contrast with a less expressive binary result.
Shallow and deep one-class classification baselines in the literature
Since the widespread use of deep learning for data discrimination, including OCC, is recent and comes with specific data and computational requirements, it seemed necessary to include a diversity of non-deep learning discrimination methods in our study, in order to end up with a useful perspective on which discrimination method to prefer for sorting out encoded hits. We begin by considering a classic outlier detection measure, the Mahalanobis distance (MD) [START_REF] Mahalanobis | On the generalized distance in statistics[END_REF], which computes a distance between a multivariate distribution sampled over n samples in d dimensions, X ∈ R d×n , and a data point x in the same representation space. This distance is defined by the following expression:
$O_{MD}(x; X) = (x - \mu_X)^T \Sigma_X^{-1} (x - \mu_X)$   (3.1)
where Σ X denotes the sample covariance matrix $\Sigma_X = \frac{1}{n} X_c X_c^T$, with X c the centered data matrix, i.e. the sample mean $\mu_X = \frac{1}{n}\sum_{i=1}^{n} x_i$ was subtracted from every data point in X to compute X c . We can make two remarks regarding the MD: it is found in the exponential of the probability density function of a multivariate Gaussian distribution, and it uses dedicated spread and location estimators for every one of the d dimensions. One of the downsides of the MD is the computation of a covariance matrix and its inverse. The estimation of a covariance matrix is problematic because, for a sampled version to be relevant and well conditioned, it is necessary to estimate it with numerous samples while respecting a relatively small d/n ratio. In other words, the number of samples must be much larger than the number of dimensions in the representation space where the covariance matrix is estimated. Not respecting this and computing a Mahalanobis distance based on the ill-conditioned covariance matrix is ill-advised, since the bad conditioning prevents the accurate computation of the covariance inverse required by Eq. (3.1) [START_REF] Velasco-Forero | Comparative analysis of covariance matrix estimation for anomaly detection in hyperspectral images[END_REF]. This leads us to another old non-deep outlyingness measure, called random projection outlyingness (RPO), that does not require a covariance matrix estimate and which we will be using in our experiments. RPO combines numerous normalized outlyingness measures over 1D projections with a max estimator in order to produce a unique and robust multivariate outlyingness measure, which translates into Eq. (3.2):
$O_{RPO}(x; p, X) = \max_{u \in U} \frac{\lvert u^T x - MED(u^T X) \rvert}{MAD(u^T X)}$   (3.2)
where x is again the data point we want to compute the outlyingness for, p the number of random projections (RP) u of unit norm gathered in the set U, and X the training data matrix. MED stands for median, a location estimator, and MAD for median absolute deviation, a spread estimator. The max implies retaining only the worst outlyingness measure available among all the 1D projections, i.e. the worst normalized deviation from the projected median. In [START_REF] Velasco | Robust rx anomaly detector without covariance matrix estimation[END_REF], the asymptotic equivalence between Eq. (3.2) for a large number of RPs and the MD of Eq. (3.1) (up to a constant factor) is established, with the motivation of obtaining an equivalent of the latter without computing a covariance matrix. This equivalence with the Mahalanobis distance indicates that RPO with enough RPs, after the max integration over RPs, describes a normality ellipsoid in the input space, i.e. the representation space of x. The RPO outlyingness actually leads to the definition of a statistical depth approximation [START_REF] David L Donoho | Breakdown properties of location estimates based on halfspace depth and projected outlyingness[END_REF][START_REF] Peter | Projection pursuit[END_REF], another quantity that orders data points from a given set from most to least normal. This stochastic approximation of a statistical depth is called random projection depth (RPD) and is defined by:
$RPD(x; p, X) = \frac{1}{1 + O_{RPO}(x; p, X)}$   (3.3)
The exact statistical depth is computed with a sup replacing the max in Eq. 3.2, the computation of the sup implying the consideration of all possible random projections, i.e. an infinity of random projections. Both Eq. (3.2) and Eq. (3.3) can be said to combine 1D views of a multivariate distribution to produce a multivariate outlyingness measure based on normalized univariate distances. This intuition behind the use of 1D RPs to generate a multivariate outlyingness measure is illustrated on Fig. 3.2. A deep adaptation of RPO is proposed in section 3.1.3, and RPs with multiple output dimensions were also considered in our experiments. Harnessing RPs with multiple output dimensions however reintroduces a covariance matrix as a spread estimator in order to produce normalized distances. One can notice that the use of RPs to project data points is related to the generation of intermediate representations with neural networks layers whose weights are frozen right after their random initialization, something quite recently mentioned in the literature [START_REF] Frankle | Training batchnorm and only batchnorm: On the expressive power of random features in cnns[END_REF][START_REF] Iwo | Training neural networks on highdimensional data using random projection[END_REF][START_REF] Giryes | Deep neural networks with random gaussian weights: A universal classification strategy[END_REF][START_REF] Andrew M Saxe | On random weights and unsupervised feature learning[END_REF].
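A minimal NumPy sketch of the MD of Eq. (3.1) and of the RPO and RPD of Eqs. (3.2) and (3.3) is given below; the function names, the number of projections and the toy data are illustrative assumptions, and the snippet can be used to observe numerically how both measures rank an outlying point higher.

```python
import numpy as np

def mahalanobis_outlyingness(x, data):
    """Squared Mahalanobis distance of Eq. (3.1); data is (d, n) with n >> d, x is (d,)."""
    mu = data.mean(axis=1)
    centered = data - mu[:, None]                      # centered data matrix X_c
    cov = centered @ centered.T / data.shape[1]        # sample covariance Sigma_X
    diff = x - mu
    return float(diff @ np.linalg.solve(cov, diff))    # avoids forming the explicit inverse

def rpo(x, data, p=1000, rng=None):
    """Random projection outlyingness of Eq. (3.2) with p unit-norm 1D random projections."""
    rng = np.random.default_rng(rng)
    u = rng.normal(size=(p, data.shape[0]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)      # unit-norm random projections
    proj = u @ data                                    # (p, n) projected training samples
    med = np.median(proj, axis=1)
    mad = np.median(np.abs(proj - med[:, None]), axis=1)
    return float(np.max(np.abs(u @ x - med) / mad))

def rpd(x, data, p=1000, rng=None):
    """Random projection depth of Eq. (3.3): the most normal points get the largest depth."""
    return 1.0 / (1.0 + rpo(x, data, p, rng))

# Toy usage: the outlying point gets a larger MD and RPO, hence a smaller depth.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2000))
for point in (np.zeros(5), np.full(5, 6.0)):
    print(mahalanobis_outlyingness(point, X), rpo(point, X, rng=1), rpd(point, X, rng=1))
```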
The other common shallow AD methods chosen are one-class Support Vector Machine (OC-SVM) [START_REF] Schölkopf | Estimating the support of a high-dimensional distribution[END_REF], Isolation Forest (IF) [START_REF] Fei | Isolation forest[END_REF] and Local Outlier Factor (LOF) [START_REF] Breunig | Lof: identifying density-based local outliers[END_REF]. The first method is an extension to one-class classification of the now classic SVM classifiers [START_REF] Boser | A training algorithm for optimal margin classifiers[END_REF]. OC-SVM projects data in a feature space where it will try to find a maximum margin hyperplane to separate data points from the feature space origin. This is achieved thanks to the following objective function:
$\min_{w, \rho, \xi}\ \frac{1}{2}\lVert w \rVert_F^2 - \rho + \frac{1}{\nu n}\sum_{i=1}^{n} \xi_i$   (3.4)
ρ is the distance separating the origin from the hyperplane w, ξ i are slack variables allowing boundary violation with penalization. ||w|| F regularizes the definition of the hyperplane w using the norm of the feature space F in which data points are projected by a kernel. The integer n is the number of data samples available for training and ν ∈ (0, 1] is an upper bound on the fraction of outliers during training and a lower bound on the fraction of support vectors for the hyperplane boundary. IF [START_REF] Fei | Isolation forest[END_REF] uses recursive partitioning on subsets of data points in the feature space, and produces an anomaly score based on the ease with which each point is isolated from the rest in each subset. It works based on the assumption that anomalous samples are more susceptible to isolation in the feature space. One recursive partitioning isolation binary tree is built for each subset of data points, making IF an ensemble method. In the end, IF computes an anomaly score O IF for each instance x whose expression is:
$O_{IF}(x; n, X) = 2^{-\frac{E(h(x))}{c(n)}}$   (3.5)
n is, as for OC-SVM, the number of data points available for training and c(n) the average path length of an isolation tree. The average path length intuitively translates into the average number of recursive splits needed to isolate a data point in the feature space. The vector x is the sample whose anomaly score we want to obtain, and h(x) the associated path length. The previous equation uses E(h(x)), the average path length across the forest of isolation trees, normalized by c(n), to obtain O IF (x, n). IF is a particularly interesting shallow AD method since it is advertised as being able to provide good performances with a small subsample of data and few isolation trees.

Figure 3.2: Illustration of the intuition behind the use of 1D random projections to compute a multivariate outlyingness measure. Once a set of 2D samples is projected, evaluating the normalized distance to the location estimator of each projection makes it easy to detect the obvious outlier, the latter being positioned at greater distance from the location estimator on at least one random projection. One random projection is enough to raise the maximum seen in Eq. (3.2). This depicts that multivariate outlyingness can translate into multiple univariate outlyingnesses.
The last common shallow AD method considered is LOF [START_REF] Breunig | Lof: identifying density-based local outliers[END_REF]. To compute LOF, we choose a certain number k of nearest neighbors to be considered for each data point. The local density attached to a point will be determined by how close its k nearest neighbors are. A point having a higher local density than its neighbors will be more likely to be an inlier, since this translates into belonging to a higher density part of the feature space. Thus, LOF assigns to each data point an outlier score based on the ratios of its own local density and the local densities of its k nearest neighbors.

Now that we have presented the shallow baselines, let us describe our deep OCC baseline: the autoencoder (AE). The autoencoder is a generative neural network architecture which is commonly trained to reproduce its input and, thanks to constrained intermediate representations, provide lower dimensionality encodings. In order to do so, one typically defines an undercomplete autoencoder [77, p.494], where the reduced representation capacity of the neural network compels it to select the most important information to encode the input before recreating it through some form of upsampling [START_REF] Goodfellow | Deep learning[END_REF]. Assuming an autoencoder neural network Φ, the training loss is the reconstruction error ∥Φ(x) - x∥. A simple way to achieve AD with an AE is to use the reconstruction error of test samples as the AD score [START_REF] Xia | Learning discriminative reconstructions for unsupervised outlier removal[END_REF][START_REF] Sakurada | Anomaly detection using autoencoders with nonlinear dimensionality reduction[END_REF]. Once the AE is trained to recreate samples belonging to the one-class of OCC, the reconstruction error of OOD samples can be expected to be higher. The outlyingness computed thanks to an AE is thus:
O_{AE}(x; \Phi, X) = \lVert \Phi_X(x) - x \rVert \quad (3.6)
where Φ_X(x) is the reconstructed input, i.e. the output of the autoencoder trained on X. This AE anomaly detection intuition has already been applied to radar data in [START_REF] Brüggenwirth | Cognitive radar for classification[END_REF][START_REF] Wagner | Target detection using autoencoders in a radar surveillance system[END_REF]. The reconstruction score intuition also makes it easy to grasp the challenge of separating semantically similar inputs belonging to different OCC classes in the case of near OODD, since it is likely that an AE able to reconstruct one sample will also achieve a fair reconstruction on semantically similar ones.
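As an illustration of Eq. (3.6), the following minimal PyTorch sketch trains an undercomplete AE on in-distribution samples and uses the reconstruction error as the anomaly score; the architecture and dimensions are arbitrary assumptions, not the ones used in the experiments reported later.

# Minimal sketch, assuming PyTorch and arbitrary layer sizes: reconstruction-error AD
# with an undercomplete autoencoder, following Eq. (3.6).
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, in_dim=200, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x = torch.randn(32, 200)                       # one minibatch of in-distribution samples
loss = ((ae(x) - x) ** 2).sum(dim=1).mean()    # reconstruction error as training loss
opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    x_test = torch.randn(5, 200)
    o_ae = torch.norm(ae(x_test) - x_test, dim=1)   # anomaly score of Eq. (3.6)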
Not only is the AE an interesting choice because it is an OCC method based on a pretext reconstruction task unrelated to AD, it also led to diverse variants designed for AD. A memory-augmented deep AE, called MemAE, was for instance proposed in [START_REF] Gong | Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection[END_REF], where the latent space is discretized through the definition of reference elements based on the training data. These latent prototypical representations are then combined with weights determined by an attention mechanism with a sparsity constraint to produce the latent representation provided to the decoder. This approach aims at limiting the generalization capacity of the AE, the latter potentially being able to reproduce some anomalies just as well as some normal samples after training. The sparsity constraint limiting the complexity of the latent dictionary entries combination ensures the reconstructed input cannot escape the main patterns observed in the training data. Another extension of the initial intuition of using an AE for AD worth mentioning is ConAD, which stands for consistency-based anomaly detection and was put forward in [START_REF] Tam Nguyen | Anomaly detection with multiple-hypotheses predictions[END_REF]. This method utilizes a multi-headed decoder network to handle multi-modality in the input and latent spaces, and particularly to avoid the acceptance of reconstructions based on an artificial global mean mode in the latent space as belonging to the "normal" one-class. The shallow methods previously mentioned can also be associated with an AE to produce a hybrid one-class classification method where the AE embeds the inputs in the latent representation space of the generative architecture, i.e. the neural network bottleneck, the embeddings then being used for AD through a shallow AD method [START_REF] Sarafijanovic | Fast distance-based anomaly detection in images using an inception-like autoencoder[END_REF]. In such a case, the reconstruction error does not contribute to the outlyingness measure, and is only harnessed to train the generative neural network whose encoding part produces embeddings. The reconstruction-oriented parameters optimization amounts to a pretext task requiring no supervision since the input serves as output, and is thus close to the fundamental idea of self-supervision setups. This closeness to self-supervision also suggests transformations could be used to enrich the training set and reinforce the generality and relevance of the bottleneck representation in the AE.
Deep one-class classification with latent hyperspheres
Most of the OCC experiments and developments proposed in this thesis build on [START_REF] Ruff | Deep one-class classification[END_REF], which proposed a deep AD method called Deep Support Vector Data Description, itself inspired by the now decades-old non-deep Support Vector Data Description (SVDD) method [START_REF] David | Support vector data description[END_REF]. The SVDD approach estimates a hypersphere boundary around a training set made of samples x_i to allow for one-class classification based on whether the tested point lies within or outside of the learned boundary. To find this boundary the hypersphere volume is minimized while keeping as many training points within the volume as possible. This can be understood as the minimization of R², where R is the radius of the one-class hypersphere, under the constraint:
\lVert x_i - c \rVert^2 \leq R^2, \quad \forall i \quad (3.7)
During this boundary optimization the training points are all considered to belong to the so-called one-class. Slack variables ξ_i ≥ 0 are introduced to make the boundary soft and lead to the minimization of the following quantity:

\min_{R, c, \xi} \; R^2 + C \sum_i \xi_i \quad (3.8)
while respecting the soft boundary constraint \lVert x_i - c \rVert^2 \leq R^2 + \xi_i, \; \xi_i \geq 0, which retains the training data points within the one-class boundary, where c is the hypersphere centroid and C is a weight balancing the volume minimization with the boundary trespassing [START_REF] David | Support vector data description[END_REF]. The use of kernels, as for Support Vector Machines, makes the boundary more adaptive through the usual implicit mapping trick. Since SVDD defines a hypersphere, the ideal mapping would enclose the data points within a spherical volume. Using SVDD can be equivalent to using the OC-SVM method described in 3.1.1 if one uses a Gaussian kernel [START_REF] Ruff | Deep one-class classification[END_REF][START_REF] David | Support vector data description[END_REF]. With a very close intuition, Deep SVDD [START_REF] Ruff | Deep one-class classification[END_REF] uses a neural network to learn representations of training samples in an output space where their Euclidean distance to a one-class reference point or centroid c defines the loss to be minimized during training. The training loss of Deep SVDD for a sample of size n (i.e. n data points) with a neural network Φ with weights W distributed over L layers is as follows:
\min_W \; \frac{1}{n} \sum_{i=1}^{n} \lVert \Phi(x_i; W) - c \rVert^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \lVert W^l \rVert^2 \quad (3.9)
The second term is a weights regularization controlled by λ, and will also appear in Eq. (3.10), (3.13), (3.14) and (3.17). This l2-norm parameter regularization is called weight decay, ridge regression or Tikhonov regularization in the scientific literature [START_REF] Goodfellow | Deep learning[END_REF].
In our experiments, the heuristic of defining c as the mean output representation of the training samples x_i, computed before training any parameter of the encoding neural network, sometimes appeared only as relevant as taking the output coordinates of one of the training samples, also computed before training. It also seemed that a reasonable yet arbitrary choice of output coordinates could replace the heuristic, what counts as reasonable depending on the transformations implemented by the neural network in use.
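The following minimal PyTorch sketch illustrates the Deep SVDD objective of Eq. (3.9) together with the centroid heuristic discussed above; the encoder, dimensions and optimizer settings are assumptions made for the example, with weight decay delegated to the optimizer and bias terms removed as a simple precaution against hypersphere collapse.

# Minimal sketch, assuming PyTorch and an arbitrary bias-free encoder: Deep SVDD
# training loop following Eq. (3.9), with weight decay handled by the optimizer.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(200, 64, bias=False), nn.ReLU(),
                        nn.Linear(64, 32, bias=False))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3, weight_decay=1e-6)

X_train = torch.randn(1000, 200)               # hypothetical one-class training set
with torch.no_grad():
    c = encoder(X_train).mean(dim=0)           # centroid heuristic, kept fixed afterwards

for x in torch.split(X_train, 128):            # one epoch over minibatches
    loss = ((encoder(x) - c) ** 2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    scores = ((encoder(X_train[:5]) - c) ** 2).sum(dim=1)   # distance to c as AD score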
A natural extension of Deep SVDD is the replacement of the single hypersphere boundary in the output representation space with a multitude of hyperspheres, which could be understood as one-class atoms. This approach was proposed in [START_REF] Ghafoori | Deep multi-sphere support vector data description[END_REF]. Such a composite boundary could be better tailored to the output representations distribution. For this multisphere alternative the loss, regularization put aside, combines the sum of the radii r_k of the K still active hyperspheres with a penalty term greater than zero as soon as training samples, each sample being assumed to be part of the one-class, are further away from the nearest centroid than the radius of the latter:
\min_{W, r_1 \ldots r_K} \; \frac{1}{K} \sum_{k=1}^{K} r_k^2 + \frac{1}{\nu n} \sum_{i=1}^{n} \max\left(0, \lVert \Phi(x_i; W) - c_j \rVert^2 - r_j^2\right) + \frac{\lambda}{2} \sum_{l=1}^{L} \lVert W^l \rVert^2 \quad (3.10)
The second, penalty term is controlled by ν ∈ [0, 1], and training samples are assigned to the nearest hypersphere of center c_j. Numerous sphere centers are initialized using the k-means clustering algorithm and progressively merged during training, while each radius r_k is updated as a preset quantile of the distances separating its centroid from its assigned data points. This quantile is a hyperparameter of the method. The relevance of latent hyperspheres is determined thanks to the cardinality of the latent cluster they encompass. This seemed particularly relevant to handle a one-class actually containing a variety of data modes, since it could potentially capture disjoint clusters in the representation space without engulfing artificial in-between data modes within the one-class boundaries. This work presents anomaly detection or one-class classification methods as a means to discriminate between a diversity of points belonging to different classes using machine learning with limited supervision. While Deep MSVDD and Eq. (3.10) provided a natural extension to the intuition of SVDD and Deep SVDD by proposing the use of several hypervolumes instead of one to make the one-class boundaries more flexible and better tailored, one can wonder how to integrate additional supervision coming from labeled negative examples \tilde{x}_j when the latter are available for training. This is handled by SVDD thanks to additional slack variables ξ_j:
\lVert \tilde{x}_j - c \rVert^2 \geq R^2 - \xi_j, \quad \xi_j \geq 0, \quad \forall j \quad (3.11)
in the minimization of Eq. (3.8), which thus becomes:

\min_{R, c, \xi} \; R^2 + C_1 \sum_i \xi_i + C_2 \sum_j \xi_j \quad (3.12)
The new constraint described in Eq. (3.11) has the opposite effect of the constraint of Eq. (3.7) on the negative j-indexed data points: it repels them out of the minimized hypervolume. Here again the optimization minimizes the hypervolume that contains as many one-class data points as possible while simultaneously pushing the negative samples out. Like C in Eq. (3.8), C_1 and C_2 balance the optimization objectives with the hypervolume minimization. The same repulsion intuition was proposed for Deep SVDD [START_REF] Ruff | Deep one-class classification[END_REF] under the name of Deep Semi-supervised Anomaly Detection (Deep SAD) [START_REF] Ruff | Deep semi-supervised anomaly detection[END_REF], where in addition to the minimization of the distance to the reference centroid, the inverse of the distance to the reference centroid is added with a weight η to the training loss for the negative training samples \tilde{x}_j. This translates into a similar mathematical trick where an inverted term is added to the minimized quantity in order for the negative samples to be pushed away from the reference centroid, transforming Eq. (3.9) into:
\min_W \; \frac{1}{n+m} \sum_{i=1}^{n} \lVert \Phi(x_i; W) - c \rVert^2 + \frac{\eta}{n+m} \sum_{j=1}^{m} \left( \lVert \Phi(\tilde{x}_j; W) - c \rVert^2 \right)^{-1} + \frac{\lambda}{2} \sum_{l=1}^{L} \lVert W^l \rVert^2 \quad (3.13)
In Eq. (3.13), in addition to the n in-distribution data points x_i, the loss harnesses m labeled anomalies \tilde{x}_j during training. Here, SAD can be said to be achieved through a form of outlier exposure, although the latter does not necessarily rely on a semantically different auxiliary dataset [START_REF] Hendrycks | Deep anomaly detection with outlier exposure[END_REF]. The contribution of labeled anomalies to a refined one-class classification intuitively depends on the actual proximity of the labeled anomalies with the one-class boundary. Thus, in addition to the specification of a near OODD task at the beginning of this chapter, near out-of-distribution samples can here be seen as key to the success of semi-supervised AD. Labeled anomalies in the training set need to be distinguished from potential unlabeled anomalies that are considered to be normal samples, which confuse the AD by contaminating the training set instead of providing supervision. Robustness to such contamination is critical since experts providing the labels with which neural networks are trained are not exempt from mistakes.
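A minimal sketch of the Deep SAD loss of Eq. (3.13) is given below, assuming the same kind of PyTorch encoder and precomputed centroid as in the previous sketch; the η weight, the numerical stabilization constant and the minibatch construction are illustrative assumptions.

# Minimal sketch, assuming a PyTorch encoder and a precomputed centroid c: Deep SAD
# minibatch loss following Eq. (3.13), mixing in-distribution samples x and labeled
# anomalies x_neg.
import torch

def deep_sad_loss(encoder, x, x_neg, c, eta=1.0, eps=1e-6):
    d_in = ((encoder(x) - c) ** 2).sum(dim=1)        # pulled towards the centroid
    d_out = ((encoder(x_neg) - c) ** 2).sum(dim=1)   # repelled via the inverse distance
    n, m = x.shape[0], x_neg.shape[0]
    return (d_in.sum() + eta * (1.0 / (d_out + eps)).sum()) / (n + m)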
Experiments evaluating such robustness are proposed in sections 3.3.1 and 3.3.3. This semi-supervision adaptation can be repeated for Deep MSVDD, although in that case the multiplicity of normality centers calls for an additional consideration on how to choose from which centroid the labeled anomalies should be repelled as long as several centroids are kept active. One could also think of weighted inverse distances to the active centroids, the weights possibly implementing latent data modes attention. The experiments implementing Deep MSVDD adapted to SAD with an additional loss term for labeled anomalies were inconclusive; such an adaptation will therefore not be further discussed. The additional loss term evaluated to train Deep MSVDD in a SAD context either minimized the latent distance between anomalies and dedicated centroids, or maximized the latent distance between anomalies and the normality (one-class) centroids. The labeled negative samples bringing additional supervision during training for SAD can be created through the transformation of in-distribution samples. Thus, a form of self-supervision can be associated with SAD. This possibility is explored in 3.3.3. Arbitrary sets of outliers could not be completely gathered around a reference point since they do not necessarily belong to a common mode [START_REF] Ruff | Deep semi-supervised anomaly detection[END_REF][START_REF] Steinwart | A classification framework for anomaly detection[END_REF]. However, this does not forbid the concentration of identified modes among labeled anomalies close to dedicated centroids to provide additional supervision during training, a case which is part of the experiments presented in section 3.3.3. The possibly arbitrary distribution of normal and anomalous centroids and the relative distance between the centroids add a way to use prior information regarding the proximity between the training samples. Such a setup can seem close to classification with rejection [START_REF] Hendrycks | Deep anomaly detection with outlier exposure[END_REF][START_REF] Peter | Classification with a reject option using a hinge loss[END_REF], since the concentration of data points around dedicated normal and anomalous centroids can be interpreted as classification, while the data points attached to no centroid and thus supposedly repelled from all centroids by the trained network constitute a rejection. This parallel with classification with rejection is not necessarily relevant since the availability of labeled anomalies to train machine learning-based AD methods is usually very limited if not nonexistent. In contrast, supervised classification of identified data modes would imply rich, representative and relatively balanced datasets for each latent mode. The limited availability of labeled anomalies applies to actual anomalies and not to artificial anomalies provided by the transformation of existing training samples, i.e. through self-supervision. With proper transformations, self-supervision can produce as many labeled anomalies for training as there are normal samples, or even more if each normal sample is transformed multiple times. However this does not overcome the lack of representativeness of labeled anomalies. The choice of transformations is also made difficult by the expert knowledge required to control the semantic implications of the transformations.
Such transformations should also be selected according to the properties of the neural network when doing deep AD, for instance to take into account the invariances implemented within the architecture.
Deep random projection outlyingness
In addition to Deep MSVDD, a second Deep SVDD variant considered here is Deep RPO [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF], which replaces the latent Euclidean distance to the normality centroid with an RPs-based outlyingness measure adapted from Eq. (3.2) in the latent space. This modification leads to the following minimization problem:
\min_W \; \frac{1}{n} \sum_{i=1}^{n} \operatorname{mean}_{u \in U} \frac{\lvert u^T \Phi(x_i; W) - MED(u^T \Phi(X; W)) \rvert}{MAD(u^T \Phi(X; W))} + \frac{\lambda}{2} \sum_{l=1}^{L} \lVert W^l \rVert^2 \quad (3.14)
This training loss uses the outlyingness defined in Eq. (3.2) after the neural network encoding, with a max estimator transformed into a mean as suggested in [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF] for better integration with the deep learning setup. The mean estimator computes a mean over the set of RPs available to compute the latent outlyingness, while the \frac{1}{n} factor computes a mean over the sample of size n. The use of a mean instead of a max removes the convergence to the Mahalanobis distance-inferred ellipsoid (up to a constant factor) already mentioned in the RPO definition (see 3.1.1) for a large set of RPs. This RPO variant however remains affine invariant (see Appendix B). The loss nonetheless still combines 1D outlyingness measures individually centered by their median and normalized by their median absolute deviation, but with no ellipsoid-like score distribution guarantee in the input space of the RPO once integrated (see Appendix A). Note that the input space mentioned here is the output space of the encoding neural network in the case of Deep RPO. No square was applied to the first loss term, in accordance with the quantity put forward in [START_REF] David L Donoho | Breakdown properties of location estimates based on halfspace depth and projected outlyingness[END_REF]. An SVDD adaptation where the latent distances are computed using a Mahalanobis distance has been proposed in [START_REF] Kim Phuc Tran | Anomaly detection in wireless sensor networks via support vector data description with mahalanobis kernels and discriminative adjustment[END_REF], but the latter does not encode data with a neural network. Combining the deep version of SVDD with a Mahalanobis score would be another way to achieve a trainable latent normality representation based on an ellipsoid of minimal volume [START_REF] Van | Minimum volume ellipsoid[END_REF]. Since Eq. (3.14) relies on an encoding deep neural network, one way to see the difference between Deep RPO with mean and Deep RPO with max is to consider that while with max the gradient will be based on a single projection at a time for each sample, the mean systematically produces a gradient stemming from all projections simultaneously.
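The loss term of Eq. (3.14) can be sketched as follows in PyTorch; as an assumption of this sketch, the median and MAD are computed on the current minibatch rather than on the full training set Φ(X; W), and the random projections are a fixed set of unit-norm directions.

# Minimal sketch, assuming PyTorch: RPs-based loss term of Eq. (3.14), with the median
# and MAD estimated on the minibatch as a simplification.
import torch

def rpo_loss(z, U, eps=1e-6):
    # z: (n, d) latent minibatch, U: (p, d) fixed unit-norm random projections
    proj = z @ U.t()                                      # (n, p) projected coordinates
    med = proj.median(dim=0, keepdim=True).values         # per-projection median
    mad = (proj - med).abs().median(dim=0, keepdim=True).values
    outlyingness = (proj - med).abs() / (mad + eps)       # normalized 1D outlyingness
    return outlyingness.mean(dim=1).mean()                # mean over RPs, then over samples

U = torch.randn(1000, 32)
U = U / U.norm(dim=1, keepdim=True)                       # 1000 random 1D projections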
We can also define an evolution of (3.2) and (3.14) with multidimensional random projections, as we proposed in [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF]. In the case of random projections leading to a single output dimension, we have the following setting: if d is the data samples dimensionality, and m the random projection output dimensionality, a random projection u with m = 1 will lead to a single projected coordinate u^T x for any individual sample x with d dimensions. This projected coordinate can then be compared to a location estimator computed with the application of u on all the available samples, for instance u^T x - MED(u^T X) where the location estimator chosen is the median. On the other hand, for multidimensional random projections, i.e. m > 1, a covariance matrix Σ_{m×m} can be harnessed in the reduced space to obtain a more subtle normalized distance to location estimators. Each sample x can then be associated with a robust Mahalanobis distance, transforming Eq. (3.2) into:
O_{RPO_{multidim}}(x; p, X) = \max_{u \in U} \; (u^T x - MED)^T \, \Sigma_{m \times m}^{-1} \, (u^T x - MED) \quad (3.15)
where MED stands for the median projected vector MED(u^T X). In this configuration, each sample has an outlyingness based on m projected coordinates per RP. Each of the projected coordinates is compared to a location estimator determined on each random projection dimension. One projected location estimator is thus computed over all data samples, for each output dimension of the random projections in use. This transforms the training objective described by Eq. (3.14), the normalized distance to the median in Eq. (3.2) before the integration by a max operator becoming:
(u^T \Phi(x) - MED)^T \, \Sigma_{m \times m}^{-1} \, (u^T \Phi(x) - MED) \quad (3.16)
where MED stands for the median latent projected vector MED(u^T \Phi(X; W)). Thanks to Eq. (3.16), the Deep RPO training loss now has the possibility to incorporate multidimensional projected representations for data samples, enabling additional latent representation flexibility. As for Deep SVDD, this new RP-based setup can incorporate labeled anomalies \tilde{x}_j during training, using an inverse distance to repel negative samples from the in-distribution data centroids:
\min_W \; \frac{1}{n+m} \sum_{i=1}^{n} \operatorname{mean}_{u \in U} \frac{\lvert u^T \Phi(x_i; W) - MED(u^T \Phi(X; W)) \rvert}{MAD(u^T \Phi(X; W))} + \frac{\eta}{n+m} \sum_{j=1}^{m} \left( \operatorname{mean}_{u \in U} \frac{\lvert u^T \Phi(\tilde{x}_j; W) - MED(u^T \Phi(X; W)) \rvert}{MAD(u^T \Phi(X; W))} \right)^{-1} + \frac{\lambda}{2} \sum_{l=1}^{L} \lVert W^l \rVert^2 \quad (3.17)
Experiments evaluating the performances associated with Eq. (3.14) are proposed in 3.3.2 and in 3.3.3. Experiments implementing Eq. (3.15) are presented in 3.3.2. The latter include the proposal of a dropout [START_REF] Srivastava | Dropout: a simple way to prevent neural networks from overfitting[END_REF] over entire random projections, and over random projection components.
Density and boundary one-class characterization
We have discussed OCC driven by SVDD and its deep learning variants Deep SVDD, Deep SAD and Deep MSVDD. One of the prominent features of all the aforementioned methods is that they achieve OCC through the explicit or implicit definition of a one-class boundary. More importantly, this opposes these approaches to density estimation-based outlier detection, where an actual density distribution is estimated. This key difference between boundary estimation and density estimation is illustrated on Fig. 3.3. The point of opting for boundary one-class characterization is that it can require less data than a density estimation approach, the latter necessitating a training set that faithfully reflects the underlying distribution. This requirement is even more critical in high dimensional problems [START_REF] David | Support vector data description[END_REF]. In Deep SVDD, Deep SAD and Deep MSVDD the boundary is made of one or several hyperspheres in the output space of the encoding neural network. One of the losses put forward by the original Deep SVDD paper [START_REF] Ruff | Deep one-class classification[END_REF] actually has the hypersphere radius as a term in the sum to be minimized, as in the minimization of Eq. (3.8) defining the original non-deep SVDD. In Eq. (3.10), the sum of radii similarly defines a term in the training loss. These radii, in addition to the hypersphere centroids, completely define a one-class boundary without density description. In Eq. (3.9), no radius is taken into account in the training loss, which only contains the centroid as the in-distribution samples location estimate, making the boundary only implicit this time. One could call Deep SVDD, associated with this objective function, a distance-based [START_REF] Fei | Isolation forest[END_REF] OCC since the distance to the centroid defines both the minimized loss and the AD score at test time, making this score independent of an actual boundary. Among the most common density estimation approaches are the normal density estimation and the Gaussian Mixture Model mentioned in section 1.3.2. The normal density estimate relies on the computation of a covariance matrix and mean estimate, leading back to the Mahalanobis distance (cf. section 3.1.1). The challenge of high dimensional data for density estimation is revealed in such a case since it can make the covariance matrix estimate singular and may therefore dictate proper regularization. Such normal density estimation brings Deep RPO nearer to density-based OCC since RPO and Deep RPO also comparably rely on estimates of the training data location and spread, respectively provided by the median and the median absolute deviation. In terms of performances, boundary characterization should be used for OCC when the data availability is too weak for a density-based approach, and is especially relevant when labeled outliers are available to refine the one-class boundary. On the other hand, a density estimation approach should be used when the training data availability allows it, the latter likely leading to better performances [START_REF] David | Support vector data description[END_REF]. One should keep in mind that other kinds of OCC exist besides density, distance and boundary-based OCC, for instance the isolation method that is IF [START_REF] Fei | Isolation forest[END_REF].

Figure 3.3: Here, the one-class boundary discriminating between in and out-of-distribution samples is defined by the domain [x_min, x_max]. This boundary can evidently benefit from the near OOD samples to refine its range, and holds no information describing the density of the one-class within the boundary. Characterizing this density intuitively requires more data, the density estimation being accurate only if the data distribution leading to the estimate is correctly distributed. Supposing a successful density estimation, the AD score proposed by a boundary will necessarily be less relevant than the one provided by the density estimate. Harnessing an SVDD-based OCC here could translate into the use of the distance between a test sample and the peak of the density estimate. This further depicts how SVDD-inspired approaches rely on a simple distance and not on a density information.
Latent space regularization and specialization
It is important to notice that when talking about AD using the representation from an AE's bottleneck or the output of Deep SVDD's [START_REF] Ruff | Deep one-class classification[END_REF] neural network, one considers nothing more than using an encoded version of the input data. The relevance of the information saved in the data representation depends primarily on the task used to train the encoding neural network, which in the case of the AE is usually reconstruction, perhaps including denoising [START_REF] Vincent | Extracting and composing robust features with denoising autoencoders[END_REF], and in the case of Deep SVDD is the latent isotropic concentration of the normal training data. One of the key challenges in the design of such an encoding neural architecture is not only to choose wisely the transformations contained in the successive layers, responsible for information retrieval, but also to select an appropriate output representation to encode the data. As Deep SVDD's latent normality hypersphere collapse risk showed [START_REF] Chong | Simple and effective prevention of mode collapse in deep one-class classification[END_REF][START_REF] Ruff | Deep one-class classification[END_REF], neural network design choices or the association of limited training supervision and a specific loss can favor the collapse of the output representations distribution, i.e. a naive and useless constant mapping. This raises the question of the evaluation of the quality of the distribution in the output encoding space, which the literature could call an embedding space [START_REF] Sanakoyeu | Divide and conquer the embedding space for metric learning[END_REF], encoded data points actually defining embeddings. This evaluation should typically be sensitive to the partial or complete collapse of the embeddings distribution. This matter was examined in [START_REF] Jing | Understanding dimensional collapse in contrastive self-supervised learning[END_REF], where the authors observe the complete or partial collapse in the embedding space using the curve described by the sorted singular values of a covariance matrix computed over a set of embeddings. On such a curve, a sharp collapse of these singular values on a logarithmic scale can highlight a partial collapse, or as they call it, a dimensional collapse. This led us to unsuccessfully experiment with regularization terms computed using the singular values of the minibatch output representations covariance matrix for Deep SVDD. We thus defined a singular values uniformity regularization term R_uniformity:
R_{uniformity} = \log \frac{\Lambda_{max} + \epsilon}{\Lambda_{min} + \epsilon} \quad (3.18)
and a singular values spread regularization term R_spread:
R_{spread} = \log \frac{1}{\Lambda_{min} + \epsilon} \quad (3.19)
In both equations Λ_min and Λ_max represent respectively the smallest and the largest eigenvalue of the minibatch output representations covariance matrix, and ϵ is a small constant parameter added to the fractions to ensure numerical stability.
The R_uniformity regularization term encourages the eigenvalues of the covariance matrix to remain close to each other, penalizing any partial collapse in the eigenvalues distribution. The R_spread term punishes low eigenvalues, to help discard uniformly distributed but very small eigenvalues. This term is called spread since discarding small eigenvalues for the covariance matrix of the embeddings guarantees some variance among the embeddings. A very small variance along one or more embedding dimensions would indeed translate into at least one very small covariance matrix eigenvalue. Note that whereas [START_REF] Jing | Understanding dimensional collapse in contrastive self-supervised learning[END_REF] used the term singular values, we speak of eigenvalues because these quantities are equal in the case of real symmetric positive definite matrices, covariance matrices belonging to this manifold of matrices. We also experimented with another R_spread term which has the advantage of systematically taking into account all eigenvalues, and not just the one or two highest or lowest ones:
R_{spread} = \log \frac{1}{\sum_{e=1}^{d} \Lambda_e + \epsilon} \quad (3.20)
The regularization provided by Eq. (3.20) thus ensures the gradient of the loss encompasses all dimensions of the output representations at each optimization step. Two intuitive comments can be made in such a case: the regularization impact is smoothed over all dimensions, and one optimization step cannot freely focus on one output dimension at the expense of the distribution of representations along other dimensions. Such a regularization setup is related to what [START_REF] Chong | Simple and effective prevention of mode collapse in deep one-class classification[END_REF] proposed, where they encourage the mean minibatch variance over all dimensions to remain above a threshold thanks to a dedicated regularization term which is nonzero only below the threshold. This was implemented along with an adaptive regularization weight defined with a momentum term and the ratio of the actual Deep SVDD loss with the regularization term. The proximity of this regularization mechanism with ours relies on the fact that both relate to the covariance matrix. The mean minibatch variance across dimensions considered in the regularization proposed by [START_REF] Chong | Simple and effective prevention of mode collapse in deep one-class classification[END_REF] even corresponds to the mean of the diagonal values of the covariance matrix from which we compute the eigenvalues used in Eq. (3.18) and Eq. (3.19). Since the trace of the covariance matrix equals the sum of its eigenvalues, this mean minibatch variance also corresponds to the eigenvalues mean. Regularizing the embeddings covariance matrix eigenvalues in such a way further specializes the output representation space. The final training loss, when including both of our supplementary regularization terms, would for instance be defined by the following equation in the case of the vanilla Deep SVDD setup, that is without taking into account labeled out-of-distribution samples during training:
\min_W \; \frac{1}{n} \sum_{i=1}^{n} \lVert \Phi(x_i; W) - c \rVert^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \lVert W^l \rVert^2 + \lambda_u R_{uniformity} + \lambda_s R_{spread} \quad (3.21)
The two eigenvalues-based regularization terms are balanced through the fixed weights λ_u and λ_s. The ratio Λ_max / Λ_min is the condition number of the covariance matrix and describes how well the inverse of the covariance matrix can be computed by classical numerical methods. It is related to the ratio between the number of dimensions in the output representation space d and the number of samples n used to evaluate the covariance matrix, i.e. d/n.
Here n amounts to the batch size. If this ratio is relatively large, i.e. we estimate the covariance matrix with relatively few samples compared with the number of dimensions of the output space, the condition number tends to be large, indicating an ill-conditioned estimation [START_REF] Velasco-Forero | Comparative analysis of covariance matrix estimation for anomaly detection in hyperspectral images[END_REF]. This is important in our regularization proposal since it makes it irrelevant for all neural network architectures and training batch sizes where d/n ends up being quite large. One of the advantages of the minibatch variance regularization proposed in [START_REF] Chong | Simple and effective prevention of mode collapse in deep one-class classification[END_REF] is that it does not rely on a covariance matrix estimation and instead directly estimates the diagonal terms of the latter, the variance along each output dimension, and computes their mean value to define a penalty term. This recalls one of the motivations for using the random projection outlyingness (see Eq. (3.2)) instead of the Mahalanobis distance (see Eq. (3.1)), which was precisely to avoid relying on a covariance matrix estimation to produce a robust outlyingness measure. Using the regularization terms defined by Eq. (3.18), Eq. (3.19) and Eq. (3.20) in semi-supervised OCC setups notably implies choosing whether to include negative labeled samples of the training set in the embeddings covariance matrix estimation preceding the eigenvalues decomposition.
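A minimal sketch of the regularization terms of Eq. (3.18) and Eq. (3.20) computed on a minibatch of output representations is given below; it assumes PyTorch and a batch size large enough with respect to the output dimensionality for the covariance estimate to be usable, in line with the conditioning caveat above.

# Minimal sketch, assuming PyTorch: eigenvalue-based regularizers of Eq. (3.18) and
# Eq. (3.20), computed from the covariance matrix of a minibatch of embeddings z (n, d).
import torch

def eigen_regularizers(z, eps=1e-6):
    zc = z - z.mean(dim=0, keepdim=True)
    cov = zc.t() @ zc / (z.shape[0] - 1)            # (d, d) embeddings covariance matrix
    evals = torch.linalg.eigvalsh(cov)              # real eigenvalues in ascending order
    r_uniformity = torch.log((evals[-1] + eps) / (evals[0] + eps))   # Eq. (3.18)
    r_spread = torch.log(1.0 / (evals.sum() + eps))                  # Eq. (3.20)
    return r_uniformity, r_spread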
Constraining a representation space to avoid dimensional collapse can recall the discretization of the AE bottleneck of [START_REF] Gong | Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection[END_REF] mentioned in section 3.1.1. The definition of latent prototypical representations could also be constrained so that a certain metric of dimensional collapse remains below a predefined threshold. This would involve adapting the training loss of the proposed memory-augmented autoencoder since in the published setup the prototypical representations are trained with a backpropagation discarding the prototypical representations with zero as attention weight. The gathering of normal latent representations achieved through the Deep SVDD-based losses put forward here is analogous to the alignment principle of [START_REF] Wang | Understanding contrastive representation learning through alignment and uniformity on the hypersphere[END_REF], which also argued for a latent uniformity. Whereas the alignment principle compels similar samples to be assigned similar representations, the uniformity principle demands the preservation of maximal information. One way to achieve that according to [START_REF] Wang | Understanding contrastive representation learning through alignment and uniformity on the hypersphere[END_REF] is to push all features away from each other on the unit hypersphere to intuitively facilitate a uniform distribution. Here the unit hypersphere is the manifold on which representations lie, and not a minimal volume holding in-distribution samples. This uniformity principle is rarely mentioned in the literature even when the building of an embedding space is discussed.
The extension of the Deep SVDD loss to encourage a form of latent uniformity using the pairwise distance between normal samples during training was investigated without ever improving the baselines. Such latent uniformity can be interpreted as a latent space regularization as well. The experiments conducted to evaluate the contribution of a loss term based on the pairwise distances between the latent representations of normal samples revolved around the following training loss format, where the term tasked with enforcing latent uniformity is weighted using λ_uniformity and was expected to be judiciously balanced with the overall latent concentration:
\min_W \; \frac{1}{n} \sum_{i=1}^{n} \lVert \Phi(x_i; W) - c \rVert^2 + \frac{\lambda_{uniformity}}{n} \sum_{i \neq j} \left( \lVert x_i - x_j \rVert^2 \right)^{-1} + \frac{\lambda}{2} \sum_{l=1}^{L} \lVert W^l \rVert^2 \quad (3.22)
The failure to make a loss term enforce a form of beneficial latent uniformity could signal the necessity of associating such a constraint with latent representations confined to a relevant manifold. In our experiments, the representations were free to lie anywhere in the Euclidean output representation space, without any constraint. Enforcing the uniformity principle is intuitively appealing for Deep SVDD-like training objectives because it would simultaneously discourage latent hypersphere collapse, i.e. the learning of a fixed mapping to a constant output representation.
SPD-manifold specific processing
The processing of spectrums and time-series led us to consider methods stemming from the active research field devoted to geometric deep learning [START_REF] Michael M Bronstein | Geometric deep learning: going beyond euclidean data[END_REF], this time not for the deep learning architectures adapted to data on graphs (see 2.4.2) but for learning parameters and representations constrained to Riemannian manifolds, with a special interest for the SPD matrices manifold. Said in simple words and without being formally exhaustive, a manifold is a smooth space where each point admits a Euclidean tangent space. As an intuitive example, one can think of a smooth surface embedded in a Euclidean space: each point admits a tangent space from which points can be mapped back to the surface. Riemannian manifolds are manifolds with a Riemannian metric which enables the computation of distances and angles. Paradoxically, the availability of a Euclidean tangent space at each point on the manifold is actually key to the computation of manifold-aware metrics and gradients. Indeed, a Riemannian metric amounts to an inner product on the tangent space of a given point lying on the manifold, and the optimization of parameters can be defined through the repeated projection of a gradient descent conducted in the tangent space back to the manifold. Regarding the Riemannian gradient, see the discussion in the section on SPD neural network gradients and backpropagation below.
Recent developments offer approaches specialized in processing data points represented by symmetric positive definite (SPD) matrices. These approaches typically constrain representation learning in the intermediate layers of a neural network to the SPD matrices Riemannian manifold, which defines a convex half-cone in the vector space of matrices [START_REF] Pennec | A riemannian framework for tensor computing[END_REF]. Doing so implies redefining linear and non-linear operations commonly used in deep learning, and replacing the usual backpropagation with a Riemannian one. The use of manifold-aware operations and statistics for radar targets detection and discrimination was previously put forward in [START_REF] Brooks | Riemannian batch normalization for spd neural networks[END_REF][START_REF] Daniel | Exploring complex time-series representations for riemannian machine learning of radar data[END_REF][START_REF] Arnaudon | Riemannian medians and means with applications to radar signal processing[END_REF][START_REF] Yang | Riemannian median, geometry of covariance matrices and radar target detection[END_REF]. A real-valued matrix M ∈ R^{d×d} is symmetric positive definite if and only if:
M = M^T \quad \text{and} \quad x^T M x > 0 \quad \forall x \in \mathbb{R}^d \setminus \{0\} \quad (3.23)
Processing spectrums and time-series naturally leads to working with SPD representations. In the case of spectrums for example, a series of spectrums, each associated with a pulse Doppler radar burst, allows for the computation of a covariance matrix of frequency bins over time. Time-series of one or a handful of dimensions can be characterized using autoregressive models, which in turn are associated with Toeplitz matrices of autocorrelation coefficients, the latter being a specific case of SPD matrix. The generation of such Toeplitz matrices is discussed in 2.3.1. In the case of the series of radar spectrums however, we move away from the radar hits encoding problem we are interested in, since one hit is defined over a single burst, i.e. one hit generates a single spectrum. This would not prevent this spectrum from being processed with CNNs, or the complex-valued samples in time from being processed as a time-series. A covariance matrix could be built over the output channels of a convolutional layer, which could be understood as a second-order pooling [START_REF] Acharya | Covariance pooling for facial expression recognition[END_REF][START_REF] Ionescu | Matrix backpropagation for deep networks with structured layers[END_REF]. Such covariance computation layers can notably be adapted to include first order information in the generated intermediate representation while maintaining the SPD nature of the output, thanks to the two following formulations available in the literature [START_REF] Yu | Second-order convolutional neural networks[END_REF][START_REF] Calvo | A distance between multivariate normal distributions based in an embedding into the siegel group[END_REF]:
\begin{pmatrix} \Sigma + \beta^2 \mu \mu^T & \beta \mu \\ \beta \mu^T & 1 \end{pmatrix} \quad (3.24) \qquad\qquad \begin{pmatrix} \Sigma + \beta \mu \mu^T & \beta \mu \\ \beta \mu^T & \beta \end{pmatrix} \quad (3.25)
In Eq. (3.24) and Eq. (3.25), Σ is a covariance matrix, µ a mean vector, and β a constant parameter. These SPD representations combining first and second order statistics were implemented for the ICLR 2021 Computational Geometry & Topology Challenge [START_REF] Miolane | Iclr 2021 challenge for computational geometry & topology: Design and results[END_REF]. They define the following mapping, where S^+_* is a manifold of SPD matrices of adequate dimensions:
(\mu, \Sigma_{d \times d}) \rightarrow \Sigma_{(d+1) \times (d+1)} \quad (3.26)
(\mathbb{R}^d, S^+_*) \rightarrow S^+_* \quad (3.27)
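For illustration, the block matrix of Eq. (3.24) can be assembled as in the following NumPy sketch; the dimensions and the β value are arbitrary assumptions.

# Minimal sketch, assuming NumPy: augmented SPD representation of Eq. (3.24), mapping a
# mean vector and a covariance matrix to a single (d+1) x (d+1) SPD matrix.
import numpy as np

def first_and_second_order_spd(mu, sigma, beta=1.0):
    top = np.hstack([sigma + beta ** 2 * np.outer(mu, mu), beta * mu[:, None]])
    bottom = np.hstack([beta * mu[None, :], np.ones((1, 1))])
    return np.vstack([top, bottom])                 # SPD as long as sigma is SPD

S = first_and_second_order_spd(np.zeros(4), np.eye(4))   # (5, 5) SPD representation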
As suggested by [START_REF] Brooks | A hermitian positive definite neural network for microdoppler complex covariance processing[END_REF], the complex-valued nature of raw IQ data can benefit from the definition of a similar processing framework for Hermitian Positive Definite (HPD) matrices, which are the complex-valued equivalent of symmetric positive definite real-valued matrices. Furthermore, whereas in the earlier mentioned Mahalanobis distance (see Eq. (3.1)) one used the covariance matrix to compute the distance between a test data point and a reference distribution, here the covariance matrix represents a test data point by capturing the distribution of its features generated by convolutions. Transforming SPD matrices with specialized neural networks opens the way to manifold-constrained learning for one-class classification. As we will see in 3.2.3, SPD neural networks are not the only way of conducting one-class classification specific to SPD representations. In the remainder of this section, building blocks of SPD neural networks and SPD-specific one-class classification will be presented.
SPD neural network operations
Let us first define a linear operation for SPD neural networks. This linear operation is a bilinear mapping called BiMap [START_REF] Huang | A riemannian network for spd matrix learning[END_REF] and relies on a trainable full-rank parameters matrix W_k \in \mathbb{R}_*^{d_k \times d_{k-1}} to transform the input SPD matrix X_{k-1} into the output SPD matrix X_k:

X_k = W_k X_{k-1} W_k^T \quad (3.28)
Here, d_k is the output square matrix dimension, while d_{k-1} is the input square matrix dimension, which indicates the BiMap layer also allows reducing the size of the intermediate SPD representations. This reduction in the SPD representation size in turn implies that representations go from one SPD matrices manifold to another of lower dimensionality. The trainable W_k matrix is kept orthogonal, and thus ends up belonging to a compact Stiefel manifold, in order for the representations produced by Eq. (3.28) to remain SPD matrices and to enable the optimization of the parameters [START_REF] Huang | A riemannian network for spd matrix learning[END_REF]. Now that SPD neural networks are equipped with a linear transformation, one needs to define a nonlinear one. In a similar fashion to the rectified linear unit (ReLU), [START_REF] Huang | A riemannian network for spd matrix learning[END_REF] proposed the ReEig layer that rectifies the small eigenvalues of an SPD matrix, raising the ones below an arbitrary threshold to the threshold value:
X_k = U_{k-1} \max(\epsilon I, \Lambda_{k-1}) \, U_{k-1}^T \quad (3.29)
In both Eq. (3.29) and Eq. (3.30), the matrices U_{k-1} and Λ_{k-1} are respectively the square and diagonal matrices produced by the eigenvalue decomposition X_{k-1} = U_{k-1} \Lambda_{k-1} U_{k-1}^T.
The matrix I is the identity matrix. After a linear transformation that can reduce the representation size and a nonlinear activation, [START_REF] Huang | A riemannian network for spd matrix learning[END_REF] proposes another nonlinear layer called LogEig:
X_k = U_{k-1} \log(\Lambda_{k-1}) \, U_{k-1}^T \quad (3.30)
The LogEig layer serves as a mapping from the manifold to a Euclidean representation. This mapping stems from the logarithmic and exponential mappings that allow going respectively from a Riemannian manifold to the Euclidean tangent space and vice versa. The LogEig layer is actually a specific case of the logarithmic mapping in the SPD matrices space where the reference of the mapping is the identity matrix [32, p.101]. The transposition of the now defined BiMap, ReEig and LogEig layers from the real-valued case of SPD matrices (∈ S^+_*) to the equivalent complex-valued case of HPD matrices (∈ H^+_*) has been detailed in [START_REF] Brooks | Deep Learning and Information Geometry for Time-Series Classification[END_REF], which also puts forward a simple way to bring the HPD case back to a real-valued SPD one through the following complex components mean:
\forall H \in H^+_*, \quad \frac{1}{2}(H_r + H_i) = S \in S^+_* \quad (3.31)
where H_r and H_i are respectively the real and imaginary parts of the input HPD representation. A manifold-aware Riemannian batch normalization [START_REF] Ioffe | Batch normalization: Accelerating deep network training by reducing internal covariate shift[END_REF] adapted to SPD representations was proposed in [START_REF] Brooks | Riemannian batch normalization for spd neural networks[END_REF] and further discussed, along with the proposal of variants, in [START_REF] Reinmar | Spd domain-specific batch normalization to crack interpretable unsupervised domain adaptation in eeg[END_REF][START_REF] Reinmar | Controlling the fréchet variance improves batch normalization on the symmetric positive definite manifold[END_REF][START_REF] Lou | Differentiating through the fréchet mean[END_REF]. The mean used in the batch normalization of [START_REF] Brooks | Riemannian batch normalization for spd neural networks[END_REF] is a Riemannian barycenter corresponding to the Fréchet mean. This geometric mean has no closed form when more than two SPD matrices are involved and is computed using an iterative method called the Karcher flow algorithm [START_REF] Karcher | Riemannian center of mass and mollifier smoothing[END_REF].
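To fix ideas, the forward passes of the BiMap and ReEig operations of Eq. (3.28) and Eq. (3.29) can be sketched as follows in PyTorch; the Stiefel constraint on the parameters and the Riemannian backpropagation needed for actual training (discussed in the next section) are not handled in this illustrative snippet.

# Minimal sketch, assuming PyTorch, forward pass only: BiMap (Eq. (3.28)) and ReEig
# (Eq. (3.29)) applied to an SPD input. Training would additionally require keeping W on
# the Stiefel manifold and using a Riemannian backpropagation.
import torch

def bimap(X, W):
    # X: (d_in, d_in) SPD input, W: (d_out, d_in) matrix with orthonormal rows
    return W @ X @ W.t()

def reeig(X, eps=1e-4):
    evals, evecs = torch.linalg.eigh(X)             # eigenvalue decomposition of the input
    return evecs @ torch.diag(torch.clamp(evals, min=eps)) @ evecs.t()

A = torch.randn(8, 8)
X = A @ A.t() + 1e-3 * torch.eye(8)                 # an SPD example input
W = torch.linalg.qr(torch.randn(8, 4)).Q.t()        # orthonormal rows keep the output SPD
Y = reeig(bimap(X, W))                              # (4, 4) SPD intermediate representation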
SPD neural network gradient and backpropagation
Transforming SPD representations into other SPD representations implies a set of constraints over the operations conducted within the SPD neural network. These constraints can be divided into two categories: the constraints over the parameters used to compute new intermediate representations, and the constraints over the gradient computation with respect to these parameters. The first category dictates that the W_k bilinear mapping matrix is required to be part of the Stiefel manifold, as indicated in 3.2.1. The second category implies the definition of specific manifold-aware gradients for the backpropagation during training. For instance, computing the gradient to update parameters constrained to a Riemannian manifold, such as the ones defining the bilinear transformation of Eq. (3.28), implies three steps: first determine the Euclidean gradient, then project it onto the tangent space of the parameters matrix on the Riemannian manifold to obtain the Riemannian gradient, and finally map the gradient descent-updated parameters from the tangent space back onto the manifold [START_REF] Brooks | Riemannian batch normalization for spd neural networks[END_REF][START_REF] Michael M Bronstein | Geometric deep learning: going beyond euclidean data[END_REF][START_REF] Huang | A riemannian network for spd matrix learning[END_REF][START_REF] Harandi | Generalized backpropagation,etude de cas: Orthogonality[END_REF]. To get back to the Riemannian manifold from the tangent space one can use a Riemannian exponential mapping, or a first-order approximation of the latter called a retraction [START_REF] Nielsen | An elementary introduction to information geometry[END_REF][START_REF] Absil | Projection-like retractions on matrix manifolds[END_REF]. On the other hand, passing the gradient through the SPD neural networks nonlinearities such as Eq. (3.29) and Eq. (3.30) requires other specific operations adapted to the eigenvalue decomposition. In such cases backpropagation relies on the matrix generalization of backpropagation [START_REF] Huang | A riemannian network for spd matrix learning[END_REF][START_REF] Ionescu | Matrix backpropagation for deep networks with structured layers[END_REF], which will not be detailed here since it is quite elaborate and does not contain any original contribution from this thesis. One can note that [START_REF] Brooks | Riemannian batch normalization for spd neural networks[END_REF] required both kinds of gradient adaptation to implement a Riemannian batch normalization for SPD neural networks. All experiments using the SPD neural network operations and backpropagation were conducted using a publicly available third party implementation [10]. One can also note the proximity between the use of a Riemannian gradient in the context of an SPD neural network and the intuition of the natural gradient put forward in [START_REF] Amari | Natural gradient works efficiently in learning[END_REF], which takes into account the possible non-Euclidean structure of a parameter space.
One-class classification specific to SPD matrices
Using the previously defined SPD neural network operations and backpropagation, one can adapt the already presented Deep SVDD and Deep SAD one-class classification to an SPD setup. In such an approach, input SPD representations, e.g. covariance matrices, are transformed into other SPD representations of similar or lower dimensionality, and the Riemannian distance to the Riemannian mean output representation is either minimized or maximized during training. The lower dimensionality is controlled thanks to the BiMap constrained parameters matrix W shown in Eq. (3.28). An example architecture of such an SPD-manifold aware Deep SVDD adaptation is detailed in section C.2. The mean output representation serving as reference point during training is the mean training data output representation computed before training, in accordance with the heuristic put forward in [START_REF] Ruff | Deep one-class classification[END_REF]. Note that this Riemannian mean is a geometric mean of the SPD representations in the output space of the SPD neural network and corresponds to what was done in [START_REF] Brooks | Riemannian batch normalization for spd neural networks[END_REF] to define a Riemannian batch normalization, as was previously explained in 3.2.1.
The distance minimization in the output space, equally constrained to the SPD matrices manifold, implies the use of a Riemannian distance between two arbitrary SPD matrices S_1 and S_2. In our experiments, we used either the Log-Euclidean metric (LEM):
dist(S_1, S_2) = \lVert \log(S_1) - \log(S_2) \rVert_F \quad (3.32)
or the affine-invariant metric (AIM):
dist(S_1, S_2) = \left\lVert \log\left( S_1^{-\frac{1}{2}} \cdot S_2 \cdot S_1^{-\frac{1}{2}} \right) \right\rVert_F \quad (3.33)
The LEM is actually a specific case of the AIM [32, p.101].
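The two metrics can be computed as in the following SciPy sketch, given two SPD matrices; the matrix functions involved are assumed to return real-valued results for SPD inputs.

# Minimal sketch, assuming SciPy: Log-Euclidean (Eq. (3.32)) and affine-invariant
# (Eq. (3.33)) distances between two SPD matrices S1 and S2.
import numpy as np
from scipy.linalg import inv, logm, sqrtm

def lem_distance(S1, S2):
    return np.linalg.norm(logm(S1) - logm(S2), ord="fro")

def aim_distance(S1, S2):
    s1_inv_sqrt = inv(sqrtm(S1))
    return np.linalg.norm(logm(s1_inv_sqrt @ S2 @ s1_inv_sqrt), ord="fro")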
Since we consider input, output and intermediate representations belonging to the SPD matrices manifold, no LogEig layer (see Eq. (3.30)) is necessary to plug representations into Euclidean layers. Such a fully Riemannian approach can be opposed to [START_REF] Daniel | Exploring complex time-series representations for riemannian machine learning of radar data[END_REF][START_REF] Yu | Second-order convolutional neural networks[END_REF], which suggested using SPD-specific operations in portions of an otherwise Euclidean processing pipeline, these setups actually requiring logarithmic mappings to get back to Euclidean representations. Two other non-deep manifold-aware OCC approaches are put forward in this work, both relying on tangent PCA (tPCA). The tPCA projects SPD points on the tangent space of the Fréchet mean, a Riemannian mean which yields an SPD centroid, thus keeping the computed centroid on the Riemannian manifold naturally occupied by the data. A common principal component analysis [START_REF] Goodfellow | Deep learning[END_REF] is then performed in the tangent space at the Fréchet mean. Using tPCA offers the advantage of being sensitive to the manifold on which the input samples lie, but implies that input data is centered around the Riemannian mean and not too scattered. This makes tPCA a questionable choice when the objective is AD with multimodal normality [START_REF] Pennec | Barycentric subspace analysis on manifolds[END_REF]. In other words, although the linear approximation of the SPD inputs around their Riemannian mean provides us with a manifold-aware dimensionality reduction, the distribution of the inputs in the SPD matrices manifold may lead this approximation to be excessively inaccurate, leading to irrelevant reduced representations. This is due to the linear approximation not preserving the Riemannian distances between points [START_REF] Sommer | Manifold valued statistics, exact principal geodesic analysis and the effect of linear approximations[END_REF].
The first non-deep manifold-aware OCC consists in replacing the principal component analysis-based dimensionality reduction, commonly used as preprocessing in machine learning pipelines, with a tPCA dimensionality reduction. To achieve OCC following the dimensionality reduction, one only needs to plug the reduced representations provided by tPCA into an OCC method afterwards. The second manifold-aware OCC approach consists in using the norm of the last components of the tPCA as an AD score, an anomalous sample being considered as out of the one-class for OCC. This would be the Riemannian equivalent of taking the last components of a vanilla PCA as an AD score, a method called negated PCA. This negated PCA is motivated by the possibility that, in one-class classification where model fitting occurs on normal data only, the first principal components responsible for most of the variance in normal data are not the most discriminating ones when it comes to distinguishing normal samples from anomalies [START_REF] Miolane | Iclr 2021 challenge for computational geometry & topology: Design and results[END_REF][START_REF] Rippel | Gaussian anomaly detection by modeling the distribution of normal data in pretrained deep features[END_REF][START_REF] Rippel | Modeling the distribution of normal data in pre-trained deep features for anomaly detection[END_REF]. These two last manifold-aware approaches relying on tPCA are not fully Riemannian either, in contrast to the purely Riemannian Deep SVDD SPD adaptation previously proposed. In the first case the OCC method is not Riemannian in any way, and in the second case the AD score is not manifold-aware, whereas for the Riemannian Deep SVDD SPD adaptation the final score is provided by a Riemannian metric in addition to all transformations being manifold-aware. A non-exhaustive map of the OCC approaches mentioned in this chapter is proposed on Fig. 3.4.
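The negated PCA intuition can be sketched with scikit-learn as below, the tPCA variant replacing the Euclidean PCA projection with a projection in the tangent space at the Fréchet mean; the number of retained components is an arbitrary assumption of the sketch.

# Minimal sketch, assuming scikit-learn: negated PCA anomaly score, i.e. the norm of the
# projections on the last principal components fitted on one-class training data. The tPCA
# equivalent would perform the same operation in the tangent space at the Fréchet mean.
import numpy as np
from sklearn.decomposition import PCA

def negated_pca_scores(X_train, X_test, n_keep=10):
    pca = PCA().fit(X_train)                       # all components, sorted by explained variance
    Z = pca.transform(X_test)
    return np.linalg.norm(Z[:, n_keep:], axis=1)   # norm of the last components as AD score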
One-class classification experiments
As indicated in Chapter 1, we consider unsupervised AD to cover experiments where no labeled anomalies are available during training to refine the implemented discrimination, and SAD to cover experiments where, in addition to the samples belonging to the one-class, a small minority of unrepresentative labeled anomalies is accessible during training. The OCC experiments in this section are presented in their order of publication in conference and workshop papers, i.e. each upcoming section refers to specific datasets while combining experiments with different levels of supervision.
OCC on high-resolution range profiles
The experiments presented in this section stem from our 2020 IEEE Radar Conference paper [START_REF] Bauw | From unsupervised to semi-supervised anomaly detection methods for hrrp targets[END_REF], which applied OCC to HRRPs. The dataset is composed of real radar 1D range profiles (RP) generated by a very high performance radar for coastal surveillance, the Coast Watcher 100 Thales radar (see Fig. 1.8). Each range profile is composed of 200 cells with various intensities. The labelling of the data samples stems from the AIS-announced ship types. This constitutes a first potential difficulty in the creation of the dataset since the labels provided by AIS are functional (i.e. they describe the function of a ship, not its appearance and dimensions). Another possible interpretation of our training set pollution with unlabeled anomalies for unsupervised AD is the mislabeling of AIS data, or the intra-class variance of AIS labels, both interpretations reinforcing the point of experimenting with such pollution. Note that here pollution has the same meaning as contamination in 3.3.3. The dataset is balanced and composed of four classes of range profiles: the latter belong to either cargo ships, fishing ships, passenger ships or tankers. A characteristic example RP of each class can be seen on Fig. 3.5. A total of 12000 HRRPs are available for each class.
A possible operational use of anomaly detection with the previous classes could be the following: considering a busy but regulated waterway in terms of fishing, an operator could be helped by automatic AD alerts detecting fishing ships in the area. This implies normality is defined by the three other classes (cargo ships, passenger ships, tankers), for which many samples should be available. Note that our experiments define normality based on a single class of ships, but the described operational use can still be achieved by combining the results of AD on each normal class. This scenario can be easily extended to a variety of realistic surveillance contexts: detection of specific ships despite the shutdown of AIS, detection of military ships without AIS and IFF.

Figure 3.4: Non-exhaustive map of the OCC approaches mentioned in chapter 3 (*: SPD manifold-aware approach). Each method name is associated with a reference to the equation or paragraph describing it. Not all methods are positioned on the illustration since the aim of the figure is to map either the proposed approaches based on Deep SVDD with respect to the latter, or the manifold-aware methods. Additionally, an isolation-based OCC such as IF (see Eq. (3.5)) has no place in the categorization proposed here. A deep OCC auto-encoder using a reconstruction error as training loss and OOD score is mapped outside of the red circle even though it learns to capture the training data distribution, i.e. the one-class distribution. This choice stems from the AE learning the distribution only implicitly. In such a case, no centroid or one-class reference coordinates are available in any representation space as in-distribution data location estimate, although one could harness a clustering algorithm on the representations generated by the trained AE. The part of the semi-supervision circle without intersection is empty since the circle is drawn to highlight the shallow and deep methods able to take advantage of a minority of labeled anomalies during model fitting. One can note that all the deep OCC approaches represented here should be within the semi-supervision circle had we opted for the alternative AD supervision categorization put forward in [START_REF] Chandola | Anomaly detection: A survey[END_REF] as explained in 1.3.2.
In order to obtain range profiles expressive enough for relevant experiments, the data samples chosen for our experiments were beforehand selected according to their combined range cells energy. This selection aims at avoiding both too small and too high energies as a crude effort to avoid model bias imputable to the outliers of each class. This work deliberately avoids elaborate preprocessing to reveal the potential of AD on raw HRRPs. We will see in our results AD methods applied on data with and without normalization. Outside this sample preselection and non-systematic normalization, no steps are taken in order to counter amplitude-scale, time-shift and target-aspect sensitivities, which seems uncommon when compared with other HRRP processing approaches [START_REF] Feng | Radar hrrp target recognition with deep networks[END_REF][START_REF] Du | Hrrp clutter rejection via one-class classifier with hausdorff distance[END_REF][START_REF] Wan | Convolutional neural networks for radar hrrp target recognition and rejection[END_REF]. Regarding time-shift sensitivity, one should note that most of our samples are nonetheless approximately aligned. Regarding the diversity of targets, no exact count was made of the ships eventually retained by our preselection, but this diversity exceeds the one observed in other studies dedicated to HRRP targets such as [START_REF] Wan | Convolutional neural networks for radar hrrp target recognition and rejection[END_REF].
Our unsupervised AD results aim at comparing shallow and deep AD methods, but also at individually assessing the sensitivity of each unsupervised AD method to training pollution, with fixed hyperparameters throughout the pollution changes. We maintain constant hyperparameters when polluting the training set in order to stay relevant regarding an actual implementation of one of the methods considered with imperfect radar data. The results of Deep SAD are here to demonstrate the ability of labeled anomalies to contribute to Deep AD through recent semi-supervised approaches adaptable to HRRP data. Each unsupervised and semi-supervised experiment includes detecting anomalies of at least two classes completely unseen during training at test stage. As was already mentioned, this is fundamental in order to respect the diversity and unpredictability of anomalies. The metric defining all results is the area under the receiver operating characteristic curve (ROC AUC), as was used in other recent works on AD such as [START_REF] Ruff | Deep one-class classification[END_REF][START_REF] Du | Hrrp clutter rejection via one-class classifier with hausdorff distance[END_REF][START_REF] Wan | Convolutional neural networks for radar hrrp target recognition and rejection[END_REF][START_REF] Wang | Gods: Generalized one-class discriminative subspaces for anomaly detection[END_REF]. For each experiment, a random fraction of 10% of the 48000 HRRPs of our dataset defines the test set.
Experimental setup
As was done in [START_REF] Ruff | Deep one-class classification[END_REF], we use a LeNet-type CNN with leaky ReLU activations for our Deep SVDD and Deep SAD neural network. The CAE architecture will be similar to allow the use of the trained weights of its encoder for the initialization of the Deep SVDD network (such initialization constitutes a pretraining). For these three neural network based AD, a batch size of 128 and a learning rate of 10^-3 were used. The weight decay hyperparameter was set to 10^-6. The CAE used for AD and to provide Deep SVDD with pretrained parameters has been trained during ten epochs. For unsupervised AD, Deep SVDD was trained during 20 epochs whether it was initialized with the CAE encoder parameters or not. For SAD, the Deep SAD network adapted from Deep SVDD with a new objective was trained during 20 epochs, with a semi-supervised objective term balanced by η equal to one, as found in [START_REF] Ruff | Deep semi-supervised anomaly detection[END_REF].
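A simplified sketch of the resulting one-class training loop, with the batch size, learning rate and weight decay listed above, could look as follows; the encoder is a generic placeholder rather than the exact LeNet-type CNN, and the CAE pretraining step is omitted.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Placeholder for the LeNet-type CNN with leaky ReLU activations."""
    def __init__(self, in_dim=200, rep_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.LeakyReLU(),
            nn.Linear(128, rep_dim, bias=False),
        )
    def forward(self, x):
        return self.net(x)

def train_deep_svdd(model, loader, epochs=20, lr=1e-3, weight_decay=1e-6):
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    # center c: mean latent representation over an initial forward pass on normal data
    with torch.no_grad():
        c = torch.cat([model(x) for x, in loader]).mean(dim=0)
    for _ in range(epochs):
        for x, in loader:
            opt.zero_grad()
            loss = ((model(x) - c) ** 2).sum(dim=1).mean()   # one-class objective
            loss.backward()
            opt.step()
    return c

# usage sketch on random stand-in HRRPs (200 range cells per profile)
data = torch.rand(1024, 200)
loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(data),
                                     batch_size=128, shuffle=True)
model = Encoder()
center = train_deep_svdd(model, loader)
with torch.no_grad():
    scores = ((model(data) - center) ** 2).sum(dim=1)        # anomaly scores
```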
Regarding the hyperparameters of our shallow methods: LOF was set with a number of nearest neighbors of 48 to be considered in local densities estimations, and a contamination of 10%. RPD involves 1000 random projections to produce its statistical projection depth approximation, and OC-SVM had its ν set to 10^-1. Our IF was executed with 100 estimators, a contamination of 10% and a maximum number of samples per subsample of 1024, thus respecting the original spirit of IF, designed to work best with a substantially limited subsample [START_REF] Fei | Isolation forest[END_REF]. The methods parameters were directly inspired by the available implementation of [START_REF] Ruff | Deep one-class classification[END_REF].
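For the shallow baselines, a sketch relying on the standard scikit-learn implementations with the hyperparameters listed above could look as follows; the data shapes are illustrative, and RPD is omitted since it has no off-the-shelf scikit-learn implementation.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 50))   # stand-in for (reduced) normal HRRP features
X_test = rng.normal(size=(200, 50))

lof = LocalOutlierFactor(n_neighbors=48, contamination=0.1, novelty=True).fit(X_train)
iforest = IsolationForest(n_estimators=100, contamination=0.1,
                          max_samples=1024, random_state=0).fit(X_train)
ocsvm = OneClassSVM(nu=0.1).fit(X_train)

# scikit-learn decision functions are higher for inliers, so they are negated
# to obtain anomaly scores where higher means more anomalous
scores = {
    "LOF": -lof.decision_function(X_test),
    "IF": -iforest.decision_function(X_test),
    "OC-SVM": -ocsvm.decision_function(X_test),
}
```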
In the case of dimensionality reduction by PCA, the PCA is systematically preceded by min-max normalization and the number of components kept is chosen in order to retain 95% of the variance. This dimensionality reduction setup was inspired by the methodology proposed in [START_REF] Ruff | Deep one-class classification[END_REF]. It is to be noted that the PCA is always fitted using the training samples of the normal class only: for each change of normal class, a different PCA is used. The PCA used for dimensionality reduction of the test samples of a given class will, for instance, not be the same between an experiment where that class is defined as normal, and another experiment for which another class defines normality and is responsible for the AD training. For Deep AD experiments, only a similar min-max normalization is applied to the data.

Figure 3.6: A point associated with the normal cargo class is defined by the mean AUC of the experiments where pollution anomalies stem from the passenger ships, the fishing ships and the tankers respectively. Each single experiment setup is executed on three different seeds, so each point actually depicts a mean of means (over varying pollution sources and seeds respectively). The vertical bar associated with each point represents the mean standard deviation over all experiments and seeds (one standard deviation is computed over each set of seeds, then a mean is determined over all experiments). Each method suffers a loss of AUC for a higher pollution ratio of its training set supposedly pure of anomalous samples.
Unsupervised AD with training set pollution
The results of unsupervised AD with training pollution are illustrated on Fig. 3.6. Results are globally good, with an expected impact of progressive pollution of the supposedly purely normal training set. Weaker detection is achieved when the normal class consists of passenger ships. This class likely makes it harder to guess a normality boundary since it is associated with an important feature variance, and ships belonging to a wider range of lengths. This interpretation is compatible with the smaller drop of AUC for this normal class when pollution is introduced in the training set: the anomaly detection was already a little confused by the variety within the normality before any pollution. This reminds us how essential it is to wisely choose what the one-class can be made of when discriminating using OCC. The most noticeable drops in performance can be seen for IF associated with normalization and PCA in preprocessing. A slight improvement can be observed when Deep SVDD benefits from a pretrained network thanks to an initialization that uses the weights of a CAE encoder trained on similar data. The most harmful pollution to normal classes cargo, passenger and tanker is the one made of fishing ships. Indeed, introducing anomalous fishing ships as pollution makes it harder to detect the most distinguishable class from normality in those cases.
IF and RPD stand out as the most stable AD through pollution (apart from IF with normalization and PCA), whereas the deep unsupervised AD methods and LOF indicate high sensitivity to training set impurity, with an apparent plateau effect for LOF polluted experiments AUCs. LOF nonetheless obtains excellent results when the training set is not polluted. RPD also stands out as a shallow AD for which the normalization and PCA have no substantial impact on the AUC obtained. This stability illustrates the affine invariance properties of RPD. AUC is exceptionally high when the normal class is composed of fishing ships (class one) because of the easy distinction of this class among our three others: one could easily select most fishing ships in our dataset based on the active range-profile length only. The active range profile size is perhaps the most reliable feature of our HRRPs since the latter are subject to high variability within a single class, the dominant scatterers changing between ships with the same function (i.e. the same AIS label), without even mentioning target-aspect sensitivity. Finally, regarding the resources needed to train the AD methods considered, this study revealed that outside the neural networks training that requires the biggest resources, LOF was the slowest, followed by OC-SVM (even though helped by a PCA-reduced dimensionality) and then IF. RPD was the quickest method to train in our study. Note that RPD training amounts to setting the location and spread estimators of RPO using the training data.
Semi-supervised AD
The results of semi-supervised AD with labeled samples during training are illustrated on Figure 3.7. For normal classes cargo and passenger, the introduction of labeled anomalies improves the mean AUC. One can further note that where unsupervised AD encountered difficulties for normal class passenger, SAD seems to tackle the latter. The mean AUC however drops with labeled anomalies for normal class tanker. Upon investigation, it emerged that a single SAD experiment (i.e. a single type of labeled anomaly) is responsible for this drop: once labeled anomalies from the cargo class are added with a non-zero ratio to the training data when the normal class is tankers, the AUC drops. It seems not to stem only from excessive proximity of the two classes in terms of ship size, otherwise the same drop would impact the AUC when the roles are reversed (normal class defined by cargo and contaminated by tanker). The reason behind this asymmetrical decrease could be a combination of the untreated HRRPs sensitivities and the intra-class ships diversity. Going back to the successful SAD experiments, the rise of AUC due to labeled anomalies for normal class cargo and normal class passenger ships appears to reach a plateau: 1% and 5% of labeled anomalies help approximately as much as 10% labeled anomalies. A possible interpretation would be that few samples suffice to clarify the decision boundary of the AD method towards one specific kind of anomaly, with no further improvement in that so-called anomaly direction after a certain point. It is irrelevant to explore higher ratios of labeled anomalies within the training set, since doing so would switch our AD context to a supervised classification one.
Concluding remarks
This study shows that a variety of anomaly detection methods can be effective for unusual HRRP targets detection. Our results on semi-supervised detection demonstrate the possibility to improve such detection using a few labeled anomalies. Besides, since hyperparameters were not extensively fine-tuned, the methods could yield additional improvements. This potential is also increased by the various advantages and drawbacks offered by each method considered.
OCC on images
The experiments presented in this section stem from our 2021 ICML UDL workshop paper [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF] which applied OCC to the now classic image classification datasets of the deep learning community: MNIST [START_REF] Lecun | The mnist database of handwritten digits[END_REF], Fashion-MNIST [START_REF] Xiao | Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms[END_REF], and CIFAR10 [START_REF] Krizhevsky | Learning multiple layers of features from tiny images[END_REF]. Additional experiments were conducted on the Statlog (Landsat Satellite) dataset from the UCI machine learning repository [START_REF] Dua | UCI machine learning repository[END_REF]. The satellite data was considered to diversify the data used in our comparison of OCC methods, thus making the comparison more robust, and to make sure a realistic discrimination task is considered.
All experiments in 3.3.2 were conducted using PyTorch, on either of the following hardware configurations: AMD Ryzen 7 2700X with Nvidia RTX 2080, or Intel Xeon E5-2640 with Nvidia GTX Titan X. Table 3.1 reports the main results of this work. RPO stands for the original RPO, described in Eq. (3.2) with its location estimator and spread measure, respectively the median and MAD, defined on the training dataset completely made of normal samples. This means RPO is adapted to a machine learning data paradigm, whereas the original RPO was meant to directly be applied to a test set in which there would not be a significant proportion of anomalies. The direct application of RPO to our test sets without determining the medians and MADs on the training data leads to performances next to randomness. Such unsupervised and untrained RPOs are therefore not represented in the results tables. This poor performance is due both to the inadequate balance between samples considered as anomalies and the normal ones, and the potentially insufficient number of RPs with respect to the input space. Indeed, the higher the dimensionality of the input space to which RPs are applied, the more RPs are needed to obtain an informative projected estimator [START_REF] Gueguen | Local mutual information for dissimilarity-based image segmentation[END_REF]. Most of the failure can however be attributed to the data balance of the test sets in this case. Apart from the results dedicated to the study of the influence of the number of RPs used in the latent RPO for deep RPO in table 3.3 and table 3.4, RPO is implemented using 1000 RPs.
In table 3.1, RPO-max is the closest AD to the original RPO but as previously stated it is beforehand adapted to take into account training data. RPO-mean is the shallow equivalent of the proposed method, deep RPO-mean, which adds an encoding neural network in front of RPO in the AD process. The same goes for RPO-max and deep RPO-max, which constitutes a more direct descendant of the original RPO. The random projections tensor is initialized by a random realization of a standard normal distribution. Random projections leading to a single projected dimension are normalized, so that they belong to the unit sphere in accordance with (3.2).
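A minimal NumPy sketch of this training-adapted RPO, with per-projection medians and MADs fixed on normal training data and either a max or a mean estimator over projections, could look as follows; the exact constants and conventions of Eq. (3.2) may differ.

```python
import numpy as np

def fit_rpo(train_normal, n_projections=1000, seed=0):
    """Draw unit-norm random projections and fix per-projection median/MAD on normal data."""
    rng = np.random.default_rng(seed)
    d = train_normal.shape[1]
    U = rng.standard_normal((n_projections, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)    # projections on the unit sphere
    proj = train_normal @ U.T                        # (n_samples, n_projections)
    med = np.median(proj, axis=0)
    mad = np.median(np.abs(proj - med), axis=0) + 1e-12
    return U, med, mad

def rpo_score(x, U, med, mad, estimator="max"):
    """Outlyingness of samples x: max (original RPO) or mean over all projections."""
    out = np.abs(x @ U.T - med) / mad
    return out.max(axis=1) if estimator == "max" else out.mean(axis=1)

# usage sketch on flattened stand-in images
rng = np.random.default_rng(1)
normal, test = rng.normal(size=(2000, 784)), rng.normal(size=(100, 784))
U, med, mad = fit_rpo(normal)
scores_max = rpo_score(test, U, med, mad, "max")
scores_mean = rpo_score(test, U, med, mad, "mean")
```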
The input dimensionality for the shallow methods RPO-max and RPO-mean in table 3.1 is the dimensionality of the flattened input images, i.e. 784 for MNIST and Fashion-MNIST, and 3072 for CIFAR10. Deep SVDD and deep RPO encode the input images into latent representations of 32 dimensions for MNIST and Fashion-MNIST, and 128 dimensions for CIFAR10, before projecting using RPs when RPs are used. Hyperparameters were directly inspired by the ones used by deep SVDD authors since their method constitutes the baseline to which the proposed method is compared. In particular, the encoding networks architectures are the ones used for MNIST, Fashion-MNIST and CIFAR10 for the original deep SVDD [START_REF] Ruff | Deep one-class classification[END_REF] and deep SAD [START_REF] Ruff | Deep semi-supervised anomaly detection[END_REF]. The weight decay hyperparameter was kept at 10^-6, even though for deep RPO it did not have a great impact in our experiments when compared with trials where the decay had been removed.
The metric used to evaluate the AD methods is the average AUC over several seeds, associated with a standard deviation, as can be found in the AD literature. One should keep in mind that when the number of classes defining normality increases, the datasets classes balance changes. Before training, a validation set, made of 10% of the original training set, is created using the common scikit-learn split function. For all deep experiments, the retained test AUC is the one associated with the best epoch observed for the validation AUC as was done in [START_REF] Chong | Simple and effective prevention of mode collapse in deep one-class classification[END_REF]. AUCs reach either a convergence plateau or a maximum before dropping within 50 epochs; this number of epochs was thus chosen for all the experiments. This represents a substantial difference with the experimental setup proposed in [START_REF] Ruff | Deep one-class classification[END_REF], where models were trained for many more epochs and benefited from a pre-training accomplished using an auto-encoder, and from a tailored preprocessing. The comparison between deep SVDD and deep RPO remains fair in this work since the network architecture is shared, along with the training hyperparameters. For each experiment, a new seed is set and a random pick of normal classes is performed. This means that, unlike many other papers in the literature, the nature of normality can change every time a new seed is adopted. This additional diversity behind the average AUCs presented explains the high standard deviations observed in the results.
The results of experiments over the three datasets considered, with 30 seeds per experimental setup, are gathered in table 3.1. The latter demonstrates the superiority of the mean over the max as an estimator for RPO when working with the deep RPO setup. The shallow RPO setup, on the other hand, suggests better performances can be obtained using a max. The neural network thus favors a loss balanced over all the single projected outlyingnesses. Moreover, the increasing AUC gap between deep RPO and shallow RPO ADs for MNIST and Fashion-MNIST supports the hypothesis that the encoding neural network allows RPO to face multimodal normality in AD. The growing gap in the last column is not observed for CIFAR10, however this failure is likely to stem from the excessive difficulty of the AD task rather than from an inability of deep RPO. The better performance of deep RPO-mean compared to deep SVDD placed the proposed method at the state-of-the-art level at the time of the workshop paper [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF] publication.
The other experiments conducted mostly rely on two Fashion-MNIST setups, where the normality is defined by either one or three classes, as can be seen in tables 3.2, 3.3, 3.5, 3.6 and 3.7. CIFAR10 was not selected, except to check in table 3.4 whether the poor performances associated with this dataset stem from an inadequate number of RPs, because multimodal AD on it remains an excessively complex task for the methods considered, as table 3.1 points out. MNIST was not retained either since it does not carry multiscale structure, making it a less interesting example [START_REF] Ruff | Rethinking assumptions in deep anomaly detection[END_REF].
Dimensionality and number of random projections
The results of Table 3.2 indicate that no improvement was achieved through the increase of the RP dimensionality, the best score being associated with RPs projecting output representations stemming from the neural network on a single dimension. These scores also support the idea that Deep RPO works better with a mean estimator to integrate single RP outlyingness scores over all RPs.
Results in Table 3.3 indicate an adequate number of RPs was chosen to implement the RP outlyingness for the encoded data. A slight AUC increase has been achieved by decreasing the number of random projections shared by the rest of the experiments, i.e. 1000, possibly indicating the approximate minimum number of RPs necessary to handle the dimensionality of the neural network encoded latent space. One can think that using the minimum number of RPs to handle the RPO score input dimensionality in a deep setup constitutes a sensible strategy since it avoids superfluous parameters without hurting the outlyingness measure.
As announced, Table 3.4 suggests that the poor performances observed on CIFAR10 do not stem from an insufficient number of random projections, even though the greater latent dimensionality used for this dataset encoding could be expected to call for additional model complexity. These results emphasize the difficulty of the learning task considered when it comes to more realistic multimodal data.
Projections and components dropouts
Picking up the notation previously introduced in sections 3.1.1 and 3.1.3 regarding random projections, d is the data samples dimensionality, m the random projections output dimensionality, and p the number of random projections. Two types of dropouts [START_REF] Srivastava | Dropout: a simple way to prevent neural networks from overfitting[END_REF] can be introduced on the random projections leading to the encoding network training loss: a dropout on the projections themselves, and a dropout on the components of the projections. In the first case, the dropout removes entire projections, implying a selection, in accordance with the dropout rate, over the p-dimensional channel of the projecting random tensor. Components dropout implies a selection, with its own dropout rate, along the d-dimensional channel. The indexes selected for this dropout will then cancel the corresponding dimensions in the random projections, thus ignoring as many components among the inputs. The RPs are normalized again after the components dropout, when m = 1, in accordance with Eq. (3.2). To respect the notation introduced, applications on images would flatten the input pixels array into a d-dimensional vector before their projection. For Deep RPO there is no need to flatten matrices as the output of the neural network has a unique dimension. As an intuitive example, in the specific case m = 1 which coincides with the original RPO, the random projections define a matrix d × p: the projections dropout here would remove columns over the second dimension, whereas the components dropout would discard rows over the first dimension. No rescaling is operated under the proposed dropout mechanism, which differs from the original dropout implementation [START_REF] Srivastava | Dropout: a simple way to prevent neural networks from overfitting[END_REF]. Table 3.5 indicates there is no actual AUC increase when harnessing either of the dropouts put forward for the random projections leading to the outlyingness measure. Since no substantial performance improvement was reached using the dropouts individually, their combined effects were not studied.

Table 3.5: Experiments conducted with either one or three modes in the normality on Fashion-MNIST for 10 seeds (truncated mean AUC ± std).
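A sketch of the two dropout mechanisms described above, for the m = 1 case where the projections form a d × p matrix, could look as follows; the re-normalization step and the absence of rescaling follow the description above, while the rest of the implementation details are illustrative assumptions.

```python
import torch

def apply_rp_dropouts(U, proj_drop=0.0, comp_drop=0.0):
    """U: (d, p) random projections matrix for the m = 1 case.

    Projections dropout removes entire columns (whole projections); components
    dropout cancels rows (input components shared by all projections), followed
    by a re-normalization of each remaining projection. No rescaling is applied,
    unlike standard dropout.
    """
    d, p = U.shape
    keep_proj = torch.rand(p) >= proj_drop                    # columns to keep
    U = U[:, keep_proj]
    comp_mask = (torch.rand(d) >= comp_drop).float()
    U = U * comp_mask.unsqueeze(1)                            # cancel dropped input components
    U = U / U.norm(dim=0, keepdim=True).clamp_min(1e-12)      # back to the unit sphere
    return U

# usage sketch: 1000 projections over a 32-dimensional latent space
U = torch.randn(32, 1000)
U = U / U.norm(dim=0, keepdim=True)
U_dropped = apply_rp_dropouts(U, proj_drop=0.2, comp_drop=0.1)
```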
Deep RPO for SAD
As introduced in 3.1.2 and in 3.1.3, both Deep SVDD and Deep RPO can integrate the additional supervision of labeled out-of-distribution samples during training to refine the boundaries of the latent normality. In Table 3.6 Deep SVDD, transformed into Deep SAD, appears to more significantly benefit from the additional information provided by a small minority of labeled anomalies during the training. Nevertheless, Deep RPO also takes advantage of the latter to improve detection performances, confirming the generality of the distance inversion method, which allows a location estimator-based unsupervised AD to achieve SAD.
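A schematic version of the distance inversion at the heart of this SAD extension could look as follows; the squared-distance formulation and the η balancing are simplified with respect to the cited works, and the centroid here is a plain placeholder.

```python
import torch

def sad_loss(latent, labels, center, eta=1.0, eps=1e-6):
    """latent: (B, D) representations; labels: 0 for normal samples, 1 for labeled anomalies.

    Normal samples are pulled towards the one-class center, while labeled
    anomalies are pushed away through an inverted squared distance.
    """
    dist2 = ((latent - center) ** 2).sum(dim=1)
    normal_term = dist2[labels == 0].mean() if (labels == 0).any() else latent.new_zeros(())
    anom_term = (1.0 / (dist2[labels == 1] + eps)).mean() if (labels == 1).any() else latent.new_zeros(())
    return normal_term + eta * anom_term

# usage sketch with a small minority of labeled anomalies in the batch
latent = torch.randn(128, 32, requires_grad=True)
labels = torch.zeros(128, dtype=torch.long)
labels[:4] = 1
center = torch.zeros(32)
loss = sad_loss(latent, labels, center)
loss.backward()
```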
Stability against affine transformation
An affine transformation, defined as a constant multiplication of every component of the input representation of the samples, is applied to challenge the affine stability of the AD methods performances once the training is over. This affine transformation is defined by the constant α shown in the upper part of Table 3.7. In a second set of experiments, instead of multiplying every input component by the same constant, another diagonal matrix is used for which the diagonal coefficients are generated using either a random uniform or a standard gaussian distribution. Again, deep RPO and deep SVDD show comparable stability when confronted with the more distorting affine transformations. Looking at the standard deviations overall, deep RPO seems slightly more stable.

Table 3.7: Deep RPO-mean test AUCs with varying affine transformation coefficient α on Fashion-MNIST for 10 seeds (truncated mean AUC ± std). The AUC gap is the mean AUC error, computed over all seeds, with respect to the AUC obtained when α = 1, i.e. the baseline case. In the first part of the table, α denotes the constant value along the affine transformation diagonal matrix. In the second part, the diagonal elements are randomly generated according to either a uniform or a gaussian standard distribution. No AUC gap is computed since the seed by seed comparison with the baseline AUC would be unfair, a mean test AUC being computed over 20 random picks of the diagonal matrix for each seed.
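A sketch of how such diagonal affine transformations can be applied to flattened test inputs could look as follows; the exact ranges of the random diagonals used in the experiments are not reproduced here.

```python
import torch

def affine_transform(x, alpha=None, mode="constant", seed=0):
    """Apply a diagonal affine transformation to flattened test inputs x of shape (N, d).

    mode="constant": every component is multiplied by alpha.
    mode="uniform" / "gaussian": the diagonal is randomly drawn per experiment.
    """
    n, d = x.shape
    g = torch.Generator().manual_seed(seed)
    if mode == "constant":
        diag = torch.full((d,), float(alpha))
    elif mode == "uniform":
        diag = torch.rand(d, generator=g)
    else:  # "gaussian"
        diag = torch.randn(d, generator=g)
    return x * diag  # equivalent to x @ torch.diag(diag)

# usage sketch: evaluate a trained scorer on distorted copies of the test set
x_test = torch.rand(100, 784)
x_scaled = affine_transform(x_test, alpha=10.0, mode="constant")
x_random = affine_transform(x_test, mode="gaussian", seed=3)
```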
Additional experiments on tabular data
Since AD on MNIST, Fashion-MNIST and CIFAR10 is very common and excellent performances have already been obtained on these datasets using self-supervised learning, we additionally compare the highlighted shallow and deep methods of our main results on the Statlog (Landsat Satellite) tabular data. The results also put forward the contribution of the trainable neural network projecting data samples, deep SVDD performing better than the shallow method RPO-max. Finally, the standard deviation of the performances appears to be higher for deep methods.
Concluding remarks
These experiments emphasize the possibility of adapting simple abnormality measures to complex and realistic anomaly detection tasks in which normality is multimodal.
The experiments conducted on MNIST, Fashion-MNIST, CIFAR10 and satellite data show a slight improvement in performance with respect to Deep SVDD when using Deep RPO and suggest that the task of anomaly detection in a fully unsupervised framework, in the case of multimodal normality, remains a challenge. The relative success of the proposed approach highlights the relevance of random projections and more generally of untrained transformations in neural networks, when they are associated with a well-chosen trainable architecture.
OCC on radar spectrums and covariance matrices
The experiments presented in this section stem from our 2022 ECML PKDD paper [START_REF] Bauw | Near out-of-distribution detection for low-resolution radar micro-doppler signatures[END_REF] which applied OCC to Doppler signatures. The latter details a comparison of deep and non-deep OODD methods on simulated low-resolution pulse radar micro-Doppler signatures, considering both a spectral and a covariance matrix input representation. The covariance representation aims at estimating whether dedicated second-order processing is appropriate to discriminate signatures. Pulse Doppler Radar (see Chapter 1) signatures are generated by a MATLAB [125] simulation. The Doppler signatures are a series of periodograms, i.e. the evolution of spectral density over several bursts, one periodogram being computed per burst. Both periodograms and the series of periodograms can be called spectrum in the remainder of this chapter. The samples on which the discrete Fourier transform is computed are sampled at the PRF frequency, i.e. one sample is available per pulse return for each range bin. This work compares deep and non-deep OODD methods, including second-order methods harnessing the SPD representations provided by the sampled covariance matrix of the signatures. The extension of the deep learning architectures discussed to SAD and self-supervised learning (SSL) is part of the comparison. The use of SSL here consists in the exploitation of a rotated version of every training signature belonging to the normal class in addition to its non-rotated version, whereas SAD amounts to the use of a small minority of actual anomalies taken in one of the other classes of the dataset. In the first case one creates artificial anomalous samples from the already available samples of a single normal class, whereas in the second case labeled anomalies stemming from real target classes are made available. To avoid confusion, one should note that this single normal class can be made of one or several target classes, which end up being considered as a unique normality. No SSL or SAD experiments were conducted on the SPD representations, since the SSL and SAD extensions of the deep methods are achieved through training loss modifications, and the SPD representations were confined to shallow baselines. In the previously described setup, SSL is nothing more than SAD with artificial data points provided by SSL transformations.
Doppler signatures generation
The main parameters of the simulation are close to realistic radar and target characteristics. A carrier frequency of 5 GHz was selected, with a PRF of 50 kHz. An input sample is a Doppler signature extracted from 64 bursts of 64 pulses, i.e. 64 spectrums of 64 points, ensuring the full rank of the covariance matrix computed over non-normalized Doppler, i.e. Fourier, bins. The only simulation parameter changing across the classes of helicopter-like targets is the number of rotating blades: Doppler signatures are associated with either one, two, four or six rotating blades, as can be found on drones and radio-controlled helicopters. The quality of the dataset is visually verified: a non-expert human is easily able to distinguish the four target classes, confirming the discrimination task is feasible. The classes' intrinsic diversity is ensured by receiver noise, blade size and revolutions-per-minute (RPM) respectively uniformly sampled in [4.5, 7] and [450, 650], and a bulk speed uniformly sampled so that the signature central frequency changes while staying approximately centered. The possible bulk speeds and rotor speeds are chosen in order for the main Doppler shift and the associated modulations to remain in the unambiguous speeds covered by the Doppler signatures [START_REF] Levanon | Radar signals[END_REF]. Example signatures and their covariance matrix representations are depicted for each class on Fig. 3.8. For each class, 3000 samples are simulated, thus creating a 12000-sample dataset. While such a dataset is small by deep learning standards, possessing thousands of relevant and labeled real radar detections would not be trivial in the radar industry, making larger simulated datasets less realistic for this use case.
Preprocessing

This work is inspired by [START_REF] Ruff | Deep one-class classification[END_REF], which experimented on MNIST, a dataset in which samples are images of objects without background or irrelevant patterns. In order to guarantee a relevant neural architecture choice, this kind of input format is deliberately reproduced. The idea of creating MNIST-like benchmarks has been of interest in different scientific communities such as biomedical images [START_REF] Yang | Medmnist classification decathlon: A lightweight automl benchmark for medical image analysis[END_REF] and 3D modeling [START_REF] Jimenez Rezende | Unsupervised learning of 3d structure from images[END_REF]. The series of periodograms, i.e. the non-SPD representations, are therefore preprocessed such that only the columns with top 15% values in them are kept, this operation being done after a switch to logarithmic scale. This results in periodograms where only the active Doppler bins, portraying target bulk speed and micro-Doppler modulations, have non-zero value. Only a grayscale region of interest (ROI) remains in the input matrix with various Doppler shifts and modulation widths, examples of which are shown on Fig. 3.9. This preprocessing leads to the "(SP)" input format as indicated in the results tables, and is complementary to the covariance representation. Covariance matrices are computed without such preprocessing, except for the switch to logarithmic scale which precedes the covariance computation. Comparing covariance-based OODD to OODD on spectral representations is fair since both representations stem from the same inputs, the covariance only implying an additional transformation of the input before training the AD. All input data is min-max normalized except for the covariance matrices used by tPCA.
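A sketch of this preprocessing could look as follows; the criterion used to rank the Doppler columns (here their maximum log value) is an assumption, and the published preprocessing may select the retained columns differently.

```python
import numpy as np

def preprocess_signature(signature, keep_ratio=0.15, eps=1e-12):
    """signature: (n_bursts, n_doppler_bins) series of periodograms.

    Switch to logarithmic scale, then keep only the Doppler columns carrying the
    top values of the signature and zero out the rest, leaving a grayscale
    region of interest around the active Doppler bins.
    """
    log_sig = np.log10(signature + eps)
    col_strength = log_sig.max(axis=0)
    threshold = np.quantile(col_strength, 1.0 - keep_ratio)
    keep = col_strength >= threshold
    out = np.zeros_like(log_sig)
    out[:, keep] = log_sig[:, keep]
    return out

# usage sketch on a random stand-in 64 x 64 signature
roi = preprocess_signature(np.random.rand(64, 64) * 1e3 + 1.0)
```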
Riemannian methods for covariance matrices
Two SPD-specific AD approaches were considered. The first approach consists in replacing the principal component analysis dimensionality reduction preceding shallow AD with an SPD manifold-aware tangent PCA (tPCA). As explained in 3.2.3, tPCA is a questionable choice when the objective set is AD with multimodal normality, something that is part of the experiments put forward in this work. Nonetheless, the Euclidean PCA being a common tool in the shallow AD literature, tPCA remains a relevant candidate for this study since it enables us to take a step back with respect to non-deep dimensionality reduction.
The second SPD-specific approach defines a Riemannian equivalent to Deep SVDD already mentioned in 3.2.3: inspired by recent work on SPD neural networks [START_REF] Lou | Differentiating through the fréchet mean[END_REF][START_REF] Brooks | Riemannian batch normalization for spd neural networks[END_REF][START_REF] Huang | A riemannian network for spd matrix learning[END_REF][START_REF] Yu | Second-order convolutional neural networks[END_REF], which learn intermediate representations while keeping them on the SPD matrices manifold, a Deep SVDD SPD would transform input covariance matrices and project the latter into a latent space comprised within the SPD manifold. Taking into account SAD and SSL labeled anomalies during training was expected to be done as for the semi- and self-supervised adaptations of Deep SVDD described earlier, where labeled anomalies are pushed away from the latent normality centroid thanks to an inverse distance term in the loss. For Deep SVDD SPD, the distance would be a geometry-aware distance such as the Log-Euclidean distance. Despite diverse attempts to make such a Deep SVDD SPD model work, with and without geometry-aware non-linearities in the neural network architecture, no effective learning was achieved on our dataset. This second approach will therefore be missing from the reported experimental results. An example architecture of such an SPD-manifold aware Deep SVDD adaptation is still proposed in section C.2. Since this approach defined the ReEig [START_REF] Huang | A riemannian network for spd matrix learning[END_REF] non-linearity rectifying small eigenvalues of SPD representations, the related shallow AD approach using the norm of the last PCA components as an anomaly score was also considered. This approach was applied to both spectral and covariance representations, with the PCA and tPCA last components respectively, but was discarded as well due to poor performances. The latter indicate that anomalous samples are close enough to the normal ones for their information to be carried in similar components, emphasizing the near OODD nature of the discrimination pursued.

Figure 3.9: Random samples of the fourth class after the preprocessing erasing the irrelevant background, which makes the dataset closer to the MNIST data format. One can notice the varying modulation width of the target spectrum and central Doppler shift. The fourth class has the highest number of rotating blades on the helicopter-like target, hence the higher complexity of the pattern. These samples illustrate the input format of the various AD methods compared in this work, and are min-max normalized so that their values belong to [0, 1]. Only the Riemannian setup where shallow AD is used on SPD inputs after tPCA uses a different input format, where the input covariance is not normalized.
Experiments
AD experiments are conducted for two setups: a first setup where normality is made of one target class, and a second setup where normality is made of two target classes. When a bimodal normality is experimented on, the normal classes are balanced. Moreover, the number of normal modes is not given in any way to the AD methods, making the experiments closer to the arbitrary and, to a certain extent, unspecified one-class classification useful to a radar operator. Within the simulated dataset, 90% of the samples are used to create the training set, while the rest is equally divided to create the validation and test sets. All non-deep AD methods are applied after a preliminary dimensionality reduction, which is either PCA or tPCA. The number of RPs used to compute the outlyingness score with RPO and Deep RPO is the same and set to 1000, even though the estimator used differs between the shallow and the deep approach. All deep experiments were run on a single GPU, which was either a NVIDIA Tesla P100 or a NVIDIA RTX 2080. In both cases running one deep AD setup for ten seeds took approximately one hour. Non-deep experiments were CPU-intensive and also required around one hour for ten seeds on a high-end multi-core CPU.
Deep learning experiments
The test AUC score of the best validation epoch in terms of AUC is retained, in line with [START_REF] Chong | Simple and effective prevention of mode collapse in deep one-class classification[END_REF]. All experiments were conducted with large batches of 1000 samples, which stabilizes the evolution of the train, validation and test AUCs during training. The training is conducted during 300 epochs, the last 100 epochs being fine-tuning epochs with a reduced learning rate, a setup close to the one in [START_REF] Ruff | Deep one-class classification[END_REF]. As was suggested in [START_REF] Chong | Simple and effective prevention of mode collapse in deep one-class classification[END_REF], a relatively small learning rate of 10^-4 is chosen to help avoid the latent normality hypersphere collapse, i.e. the convergence to a constant projection point in the latent space in the non-SAD and non-SSL cases, with λ = 10^-6. Such a latent normality collapse is made impossible when SAD or SSL samples are concentrated around dedicated centroids or scattered away from normality centroids, since the network is then trained to disperse representations. Loss terms integrating labeled anomalies for extra training supervision are balanced with η = 1, and for Deep MSVDD ν = 0.1. Hyperparameters are kept constant across all experiments conducted, in order to ensure fair comparisons. In the results tables, the second and third columns indicate whether SAD and SSL samples were used for additional supervision during training, and describe how such samples affected the training loss if present. When the SAD or SSL loss term is defined by a centroid, it means that the distance to the mentioned centroid is minimized during training, whereas "away" implies the projections of the SAD or SSL samples are repelled from the normality centroid thanks to an inverse distance as described previously. For example, the first line of the second part of Table 3.10 describes an experiment where SAD samples are concentrated around the SAD samples latent centroid, and SSL samples concentrated around the SSL samples latent centroid. Centroids are computed, as for the normal training samples, with the averaging of an initial forward pass, therefore yielding the average latent representation.
Non-deep learning experiments
Shallow AD conducted on the covariance representation after a common PCA uses the upper triangular part of the min-max normalized input as a starting point, avoiding redundant values. This contrasts with the Riemannian approach replacing PCA with the tPCA, the latter requiring the raw SPD representation. Furthermore, shallow approaches were also tried on the periodograms individually, where each row of an input signature, i.e. one vector of Doppler bins described for one burst, was given a score, the complete signature being then given the mean score of all its periodograms. This ensemble method did not yield relevant results and is therefore missing from our comparison. Such an approach ignores the order of periodograms in signatures.
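A sketch of the covariance feature extraction used by these shallow baselines could look as follows; the covariance estimator (here computed over Doppler bins after the switch to logarithmic scale) is an assumption consistent with the text, not a guaranteed match of the exact implementation.

```python
import numpy as np

def covariance_features(signature, eps=1e-12):
    """Vectorize the covariance of a (n_bursts, n_doppler_bins) signature.

    The covariance is computed over the log-scale Doppler bins, then the upper
    triangular part (including the diagonal) is flattened to avoid redundant
    values before min-max normalization and PCA.
    """
    log_sig = np.log10(signature + eps)
    cov = np.cov(log_sig, rowvar=False)        # (n_bins, n_bins) SPD matrix
    iu = np.triu_indices(cov.shape[0])
    return cov[iu]

# usage sketch on a random stand-in 64 x 64 signature
features = covariance_features(np.random.rand(64, 64) * 1e3 + 1.0)
```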
Neural network architecture

While the MNIST-like input format is thus replicated, the 2D features remain specific to radar signal processing and may therefore benefit from a different neural network architecture. Several neural network architectures were considered, including architectures beginning with wider square and rectangular convolutions extended along the (vertical) bursts input axis, with none of the investigated architectures scoring systematically higher than the Fashion-MNIST architecture from the original Deep SAD work [START_REF] Ruff | Deep semi-supervised anomaly detection[END_REF], which was only modified in order to handle the larger input size. The latter was consequently selected to produce the presented results. This architecture projects data with two convolutional layers followed by two dense layers, each layer being separated from the next one by a batch normalization and a leaky ReLU activation. The outputs of the two convolutional layers are additionally passed through a 2D max-pooling layer.
Riemannian AD
The tPCA was computed thanks to the dedicated Geomstats [START_REF] Miolane | Geomstats: A python package for riemannian geometry in machine learning[END_REF] function, while experiments implementing a Riemannian equivalent of Deep SVDD were conducted using the SPD neural networks library torchspdnet [START_REF] Brooks | Second-order networks in pytorch[END_REF]. The AD experiments based on an SPD neural network ended up inconclusive and are therefore not part of the results tables. Unsupervised AD results, for which the training is only supervised by normal training samples, are presented in Table 3.9. These results indicate the superiority of deep learning for the OODD task considered, while demonstrating the substantial contribution of geometry-aware dimensionality reduction through the use of tPCA for non-deep AD. RPO is kept in Table 3.9 even though it does not achieve useful discrimination because it is the shallow equivalent of Deep RPO, one of the highlighted deep AD methods, deprived of the neural network encoder and with a max estimator instead of a mean, as was previously justified. Deep MSVDD does not lead to the best performances, and is as effective as Deep SVDD and Deep RPO, which could seem surprising at least when normality is made of two target classes. Indeed, since Deep MSVDD has the possibility to use several disjointed hyperspheres to capture the latent normality distribution, one could expect it to better model more complex, e.g. multimodal, normality.
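A minimal Geomstats-based sketch of this tPCA dimensionality reduction could look as follows; the constructor signature varies across Geomstats versions (recent releases take the space, older ones its metric), and the matrices used here are smaller stand-ins for the 64 × 64 covariances of the experiments.

```python
import numpy as np
from geomstats.geometry.spd_matrices import SPDMatrices
from geomstats.learning.pca import TangentPCA

# stand-in batch of 16 x 16 SPD covariance matrices (identity plus small SPD noise)
rng = np.random.default_rng(0)
a = rng.normal(size=(100, 16, 16)) * 0.1
covs = np.eye(16) + a @ a.transpose(0, 2, 1)

space = SPDMatrices(16)
# depending on the Geomstats version, TangentPCA is built from the space
# (recent releases) or from a metric such as the affine-invariant one (older releases)
tpca = TangentPCA(space, n_components=5)
tpca.fit(covs)                    # tangent space taken at the Fréchet mean of the data
reduced = tpca.transform(covs)    # geometry-aware reduced representations

# the reduced vectors can then be fed to any shallow OCC method, e.g. OC-SVM or IF
```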
Potential contribution of SAD and SSL
The contribution of additional supervision during training through the introduction of SAD samples and SSL samples is examined in Table 3.10. Regarding SAD experiments, labeled anomalies are taken from a single anomalous class for simplicity and because only four classes are being separated; this avoids unrealistic experiments where labeled anomalies from every anomalous class are seen during training. When SAD samples are used during training, labeled anomalies represent one percent of the original training set size. This respects the spirit of SAD, for which labeled anomalies can only be a minority of training samples that is not representative of anomalies. This is especially realistic in the radar processing setup initially described where labeled detections would rarely be available. SSL samples are generated thanks to a rotation of the spectral input format, rendering the latter absurd but encouraging better features extraction since the network is asked to separate similar patterns with different orientations. SSL samples are as numerous as normal training samples, implying they do not define a minority of labeled anomalies for training as SAD samples do, when they are taken into account.
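A sketch of the SSL sample generation could look as follows; the 90-degree rotation is an assumption, the text only specifying that the spectral inputs are rotated.

```python
import torch

def make_ssl_anomalies(batch):
    """Rotate spectral inputs of shape (B, 1, n_bursts, n_bins) by 90 degrees.

    The rotated signatures are physically absurd radar-wise and are used as
    labeled anomalies during training (label 1), in the same quantity as the
    normal samples (label 0).
    """
    rotated = torch.rot90(batch, k=1, dims=(-2, -1))
    labels = torch.ones(batch.shape[0], dtype=torch.long)
    return rotated, labels

# usage sketch on a batch of normal signatures
normal_batch = torch.rand(32, 1, 64, 64)
ssl_batch, ssl_labels = make_ssl_anomalies(normal_batch)
```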
Individually, SAD samples lead to better performances than SSL ones, but the best results are obtained when combining the two sets of samples for maximal training supervision. Deep SVDD appears to be substantially better at taking advantage of the additional supervision provided by SAD and SSL samples. Quite surprisingly for a radar operator, the best test AUC is obtained when SSL samples are concentrated around a specialized centroid while SAD samples are repelled from the normality centroid. Indeed, SSL samples being the only absurd samples considered in our experiments radarwise, it could seem more intuitive to project SAD samples, which remain valid targets, next to a dedicated centroid while repelling SSL samples. Likewise, on an ideal outlyingness scale, SSL samples should be further away from normality than SAD samples. This counter-intuitive performance could stem from the test set, which only evaluates the separation of targets in a near OODD context. No invalid target representations, such as the SSL samples, are present in the test set; only valid representations from the four target classes make it up. This is consistent with the application put forward in this study: use OODD to discriminate between various kinds of radar detections.
Training with a contaminated training set
Unsupervised AD refers to the experiments of Table 3.9 where only training samples assumed to be normal supervise the training of the neural network. Real-life datasets, labeled by algorithms or experts, are unlikely to respect that assumption and will suffer from contamination of normal samples with unlabeled anomalies. The results in Table 3.11 depict how sensitive the deep AD methods previously introduced are to training set contamination. The contamination is carried out using the one percent SAD samples already used for SAD experiments in Table 3.10. While in the SAD experiments SAD samples were repelled from the normality centroid or concentrated next to their dedicated latent reference point, here they are processed as normal samples. SSL samples again appear to better contribute to improving AD when concentrated next to a specialized centroid, while the performance drop due to contamination does not seem to be particularly stronger for any of the approaches considered.
Chapter 4
One-class classification for encoded hits
Chapter overview: a detection in C^{N×H} is encoded by "hit2vec" into an embedding in R^Q, on which one-class classification produces a score in R.
This chapter will present preliminary results regarding the encoding and discrimination of radar hits enriched under the form of neighborhoods of range cells, a format introduced in chapter 1. It thus combines the hit format enrichment proposed in chapter 1 with the single range cell and neighborhood of range cells encoding architectures of chapter 2 and the one-class classification methods presented in chapter 3. The combination of these processing steps creates a filter answering the task motivating the works of this thesis, oriented towards the discrimination of small and slow targets by air surveillance radars. The pipeline presented here is therefore designed to face a varying input representation in terms of signal length and resolution, and a lack of supervision for the final discrimination.
Experiments protocol and data
The dataset used to evaluate the proposed representation learning over range cells neighborhoods is built with the single range cells embeddings provided by the FCN-based cell2vec approach presented in section 2.5. The interaction of the latter approach and the graph2vec method evaluated here is illustrated on Fig. 2.4. The dataset is created with the testing set embeddings of the single range cells encoding experiment, thus completely discarding the initial training set single cell embeddings. The single range cells embeddings of the target classes defined in section 2.5 are combined into four different classes of varying correlations to define a neighborhood of H = 5 range cells. The four neighborhood classes follow the neighborhood correlation patterns XXXXX, YXXXY, YYXZZ and WYXZW. In these patterns, one letter stands for one kind of Doppler signature, i.e. one target class in the single range cells experiments of section 2.5. The different correlation classes will here be named targets classes. The latter thus have no more direct connection to the four modulation patterns classes of the simulation generating the input signals. The ability to discriminate diversely correlated range cells neighborhoods is notably thought of as a way to distinguish clutter spanning several range cells from actual targets surrounded by clutter. The creation of relevant neighborhoods of range cells with a set of individual range cells was discussed in section 2.4.1 and will therefore not be developed here, except for the next remark regarding the XXXXX pattern.
The creation of a neighborhood class based on the replication of a single range cell embedding to create a perfectly correlated local neighborhood has the positive side effect of allowing us to partially keep track of the input signal resolution impact. Indeed, if a neighborhood stems from the replication of a unique range cell embedding, the number of pulses of the original range cell IQ signal is the unique signal resolution associated with the final neighborhood representation. This allows us to verify whether the clusters potentially appearing in the output representation space of the hit2vec encoding are essentially related to the burst resolution, and not to the range cells content. This is much trickier to follow as soon as the neighborhood of range cells is made of different cells, which often implies the neighborhood is derived from a mix of input signals resolutions. A refined range cells combination and labeling mechanism to produce the dataset of range cells neighborhoods could tackle this, but this was not done here in order to avoid the question of how exactly to label neighborhoods of range cells combining input resolutions with the objective of understanding the final influence of the latter. One might add that it is actually relevant and intuitive to build neighborhoods with range cells of variable input signal resolution since the single cell encoding is expected to be invariant with respect to the input sampling diversity. The difficulty of identifying the exact relation between the final embeddings distribution and the input sampling parameters makes it even more important to control the distribution of Doppler classes and burst resolution in evaluation datasets. The reader is reminded that the burst resolution corresponds to the association of the number of pulses and the PRF. Another example of useful constant neighborhood in terms of Doppler content could be defined by a set of range cells filled with the same clutter. Such a neighborhood could help evaluate the false alarm risk.
The more or less correlated neighborhoods of range cells are distributed over a graph defined by two possible adjacency matrices: the matrix (a) and the matrix (f) as shown on Fig. 2.7 and illustrated on Fig. 2.6, where the shared edge weight e is set to one. The graph neural network used to encode the neighborhood of H = 5 range cells consists in the stacking of two graph convolutional layers as defined in section 2.4.2 according to the layer proposed by [START_REF] Kipf | Semi-supervised classification with graph convolutional networks[END_REF]. These graph convolutional layers are defined with a single channel, i.e. they maintain one trainable weights matrix W ∈ R^{Q×Q} each. Since the neighborhood is of size H = 5 and the central range cell is the one whose features we use as output neighborhood representation when no graph-scale pooling is done, two layers are enough to cover the entire neighborhood with the output representation receptive field. The term central has meaning for the non-cyclic graphs such as the graphs (a) and (e) of Fig. 2.6; in other cases it refers to the range cell under test, i.e. the one carrying the actual target detection. The latter remains central in the initial input representation being encoded, since it is the central column of the input matrix Z_{I/Q} defined in Eq. (1.2). One can note that the trainable weights of the GNN are real-valued since the complex-to-real representation transition is done during the single cell encoding (see Fig. 2.4).
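A sketch of such a two-layer graph convolutional encoder over H = 5 range cell embeddings of dimension Q could look as follows; the path-graph adjacency, the ReLU non-linearity and the weight initialization are illustrative assumptions, not the exact configurations (a) and (f) of Fig. 2.6 and Fig. 2.7.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution in the spirit of Kipf & Welling:
    X' = D^{-1/2} (A + I) D^{-1/2} X W, with a single channel (one Q x Q weight matrix)."""
    def __init__(self, q):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(q, q) * 0.1)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.shape[0])                 # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm_adj = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return torch.relu(norm_adj @ x @ self.weight)

class NeighborhoodEncoder(nn.Module):
    """Two stacked graph convolutions over H = 5 range cell embeddings of dimension Q.
    The neighborhood embedding is either the central cell features or a mean pooling."""
    def __init__(self, q=16, pooling="central"):
        super().__init__()
        self.gc1, self.gc2 = GCNLayer(q), GCNLayer(q)
        self.pooling = pooling

    def forward(self, x, adj):                                # x: (H, Q) node features
        h = self.gc2(self.gc1(x, adj), adj)
        return h.mean(dim=0) if self.pooling == "mean" else h[x.shape[0] // 2]

# usage sketch: a simple path-graph adjacency over 5 cells with shared edge weight e = 1
adj = torch.zeros(5, 5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
cells = torch.randn(5, 16)                                    # single range cell embeddings (Q = 16)
embedding = NeighborhoodEncoder()(cells, adj)                 # neighborhood embedding in R^16
```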
Preliminary results with supervised representation learning
The preliminary results presented here evaluate the separability of the fabricated neighborhoods of range cells in the output representation space generated by the two-layer graph convolutional neural network. A single set of metrics is presented on Fig. 4.1, Fig. 4.2 and Fig. 4.3 since none of the few setups tested yielded performances substantially superior to the others. Regarding the generation of the figures and the supervision setup, the readers may refer to the description and the references indicated in section 2.5 as the same tools and training objective were used. Four setups were tested since we experimented with both of the graph adjacency configurations mentioned in section 4.1, and with both a global mean pooling and the node2vec encoding approach applied to the central neighborhood cell to retrieve a neighborhood embedding in R^Q. Here, the weights matrices defining the GNN are both of dimensions 16 × 16, and thus the GNN produces a neighborhood embedding of dimension 16.
The 2D visualizations of the graph embeddings distribution using TSNE and PCA on Fig. 4.2 and Fig. 4.3 show a pulses class 0 which contains the mixed-resolution range cells neighborhoods, while the 56 and 64 pulses classes correspond to perfectly correlated neighborhoods stemming from bursts of 56 and 64 pulses respectively. While the two figures confirm no clear latent clusters exist for specific input signals resolutions before and after training, they also suggest not much of an improvement can be observed in the latent distribution of the four signatures correlation classes. The latter observation is confirmed by the metrics on Fig. 4.1: the AUCs evolutions suggest the training is beneficial for the OCC isolation of two of the targets classes only, the other two being quickly associated with a stable random classification performance. Among the pulses classes, the 0 class gathering the mixed-resolution neighborhoods seems easily identifiable, while the pulses classes associated with single signal resolution neighborhoods also quickly end up associated with random classification performance. This suggests our graph2vec encoding yields a weak discrimination power, and is favoring neither the targets classes separation, nor the pulses classes separation.
Necessary follow-up experiments
As noted in section 2.5.3 regarding the single range cell encoding experiments, the previous experimental results can only be taken as a proof-of-concept that aims at demonstrating the feasibility of encoding neighborhoods of range cells with diverse levels of correlation. Here, the potential contribution of training a GNN to encode neighborhoods of range cells for subsequent discrimination remains unclear and calls for more experiments and comparisons. Besides the discouraging nature of the performances presented in this chapter, the experiments suffer from the same shortcomings as the ones discussed in section 2.5.3. The suggested extensions can be reformulated here in the context of neighborhood encoding. Finally, another observation may be made for this more challenging GNN-based encoding: deep OCC applied to the embedding space at the output of the GNN may offer a greater discriminatory power, and could complement the search for a better encoding architecture. Thus, before completely discarding a neighborhood encoding method, deep OCC discrimination could be harnessed to get a more thorough evaluation of the difficulty of the targets separation.
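The deep OCC referred to above is, throughout this work, of the Deep SVDD family: the encoder output is pulled towards a centroid frozen at the average initial projection of the training data. The following is a minimal sketch of that objective; the encoder, dimensions and training loop are placeholders chosen for illustration and do not reproduce the experimental setup.

import torch

def deep_svdd_loss(embeddings: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    # mean squared Euclidean distance to the normality centroid
    return ((embeddings - c) ** 2).sum(dim=1).mean()

encoder = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 8))
with torch.no_grad():
    c = encoder(torch.randn(256, 16)).mean(dim=0)        # frozen centroid, Deep SVDD style

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(10):                                      # a few toy iterations
    batch = torch.randn(64, 16)                          # stand-in for neighborhood embeddings
    loss = deep_svdd_loss(encoder(batch), c)
    opt.zero_grad()
    loss.backward()
    opt.step()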
Conclusion and perspectives
Concluding remarks
This thesis is the result of the search for improvements in the discrimination of targets by air surveillance radars using techniques from the recent advances in machine learning. It is directly motivated by the constraints on pulse Doppler radar bursts used for air surveillance, i.e. the constantly changing small number of pulses available in each burst and the varying pulse repetition frequency. The thesis proposes an approach divided into two steps, first encoding, then discriminating, to end up with a score useful to the radar operator for targets discrimination. The discrimination proposed relies on the Doppler information contained in the I/Q sweep response and introduces an I/Q sweep feature enrichment to provide the neural network targets filter with a spatial I/Q context to broaden the information available to the filter. The motivation behind this input features enrichment lies in the fact that the filter developed aims, to a certain extent, at reproducing the small and slow targets discrimination usually done with micro-Doppler features but without the required Doppler resolution to reveal such features. The solutions put forward propose to integrate the physical diversity of inputs in sampling frequency and number of pulses in the architecture of the graph neural network encoding the neighborhood of range cells processed. This information integration is inspired by the literature of resampling and array processing, and follows the trend of letting a neural network choose which information to use and how to use it. On a historical note, one could say the proposed Doppler processing is part of a third generation of pulse Doppler radar targets discrimination along with the geometric deep learning approaches [START_REF] Michael M Bronstein | Geometric deep learning: going beyond euclidean data[END_REF] proposed in [START_REF] Cabanes | Multidimensional complex stationary centered Gaussian autoregressive time series machine learning in Poincaré and Siegel disks: application for audio and radar clutter classification[END_REF][START_REF] Brooks | Deep Learning and Information Geometry for Time-Series Classification[END_REF]. Before that, the first generation could be the early low PRF radars with moving target indicator (MTI) that did not determine the Doppler shift but discarded zero and low Doppler shifts [START_REF] Alabaster | Pulse Doppler Radar: Principles, Technology, Applications[END_REF], and the second generation could gather the radars operating at higher PRF with refined slicing of the Doppler velocities to implement automatic targets discrimination with non-geometric deep learning methods.
Among the neural network architectures put forward by this work, a noticeable point is the choice of associating different kinds of architectures at an unusual level. Since the encoding neural network combines either a recurrent or a convolutional architecture with a graph neural network, which then forwards a representation to a more classic discriminative network, our filter could be identified as a so-called hybrid architecture. That being said, since such a hybrid architecture as a whole is not trained at once with a common training objective, judging it as one single architecture may not be relevant. This is emphasized by our presentation of the proposed hits filter as two successive but independent steps in chapter 1, as illustrated on Fig. 1.10. This is further emphasized by the additional separation of the encoding hit2vec step into a cell2vec step and a graph2vec step, as depicted on Fig. 2.4.
Perspectives
While the results of the experiments presented in chapter 3 were quite conclusive, the performances put forward in chapters 2 and 4 were at best exploratory. As such, the experiments defined should be continued with better statistical relevance and extended by following the indications provided in the associated sections. The unsuccessful experiments aiming at developing a latent space eigenvalues-based regularization and an SPD-manifold aware adaptation of deep SVDD remain useful intuitions and may be refined. On another note, several leads for distinct possible approaches to improve the processing of a neighborhood of range cells were mentioned without actual developments:
• the definition of a global one-class classification pipeline applied to our range cells neighborhood discrimination, trainable as a whole with a unique training objective, and possibly using a generative GNN architecture;
• the adaptation of the neighborhood graph processing to handle SPD representations of range cells distributed over graph nodes instead of an R Q embedding;
• the evaluation of the learned coefficients in the complex-valued convolutional layers as FIR filters while considering the varying sampling frequency of the input signals;
• the definition of an application case where the diverse radar I/Q signals are accessible to a resampling approach as presented in chapter 1, in order for a fair comparison with a non-machine learning method to be conducted.
These leads constitute approaches that remain to be explored and can provide perspective with respect to the proposals already developed in this work. Since the processing defined and evaluated by this thesis was shown to be close to ECG or radio signal processing, and also to molecule representation learning, the active research motivated by these applications should be a continuous source of tools to solve the problem of encoding neighborhoods of range cells. More generally, we did not make use of all of the tools and innovative neural network layers available in the literature. The recent developments of the deep learning literature therefore establish a promising avenue for improvements. Finding an effective and explainable way to compare short signals with varying sampling parameters and dispersed over local graphs remains a task relevant to many applications, especially with the advent of the internet of things.
Lastly, one should note that even with a relevant encoding of similar targets and an effective targets separation, additional mission-specific data could be taken into account to detach targets similar physically but with behaviors making them of different levels of interest for the radar operator. This additional processing could be handled by a downstream processing step in the radar pipeline, such as the tracking stage where the targets' trajectories can be revealing information with respect to their respective missions. For instance a small civilian drone is not necessarily an object of interest to a radar operator, but if it keeps flying over a sensitive site it should become one. In such a case, our filter aims at detecting the drone and dissociating it from the surrounding clutter, but it does not address the target behavior analysis task. The previous point emphasizes how much automatic processing could be necessary to actually help radar operators fulfill their missions, and thus how much machine learning could help improve this kind of sensor, assuming enough labeled data is at hand and the possibility to modify different parts of the radar processing pipeline.

This RPO-MEAN is the RPO variant introduced in [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF] and used in the encoding neural network output space for our Deep RPO experiments. Eq. B.1 stems from [START_REF] David L Donoho | Breakdown properties of location estimates based on halfspace depth and projected outlyingness[END_REF][START_REF] Peter | Projection pursuit[END_REF] and defines the outlyingness used to generate the statistical depth [START_REF] Zuo | General notions of statistical depth function[END_REF] called random projection depth (RPD), the latter corresponding to the following expression: RPD(x, X) = 1 / (1 + O_RPO(x, X)). Note that this depth, and the associated outlyingness of Eq. B.1, are defined with an infinite number of random projections u ∈ R d , thus the two quantities (RPO and RPD) can only be implemented with a stochastic approximation, i.e. with a large but finite number of random projections. For instance, in the experiments described in this paper, the sup and mean over all RPs of unit norm are approximated with a max and a mean over 1000 RPs respectively. For other experiments on RPO with diverse quantities of RPs, see [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF].
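The stochastic approximation just described can be written down in a few lines; the sketch below computes RPO (max over random projections), RPO-MEAN (mean over random projections) and the associated depth RPD = 1/(1 + O) with a finite number of unit-norm random projections. Data shapes, the number of projections and the function names are arbitrary choices made for the example.

import numpy as np

def rpo(x, X, n_proj=1000, estimator="max", seed=0):
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_proj, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)     # unit-norm random projections
    proj_X = X @ U.T                                  # (n, n_proj) projected dataset
    proj_x = U @ x                                    # (n_proj,) projected query point
    med = np.median(proj_X, axis=0)
    mad = np.median(np.abs(proj_X - med), axis=0)
    out_1d = np.abs(proj_x - med) / mad               # one 1D outlyingness per projection
    return out_1d.max() if estimator == "max" else out_1d.mean()

X = np.random.default_rng(1).normal(size=(500, 16))   # toy data distribution
x = 3.0 * np.ones(16)                                 # toy query point
o_mean = rpo(x, X, estimator="mean")                  # RPO-MEAN
rpd = 1.0 / (1.0 + o_mean)                            # random projection depth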
Proof: Let us start by noticing that both the upper and lower parts of the RPO ratio are invariant to the bias term b of the affine transformation:

|u^T (Ax + b) - Med(u^T (AX + b))| = |u^T Ax + u^T b - Med(u^T AX) - u^T b| (B.4)
= |u^T Ax - Med(u^T AX)| (B.5)

MAD(u^T (AX + b)) = Med(|u^T (Ax + b) - Med(u^T (AX + b))|) (B.6)
= Med(|u^T Ax + u^T b - Med(u^T AX) - u^T b|) (B.7)
= Med(|u^T Ax - Med(u^T AX)|) (B.8)
= MAD(u^T (AX)) (B.9)

This indicates both RPO and RPO-MEAN are translation invariant, a partial requirement to achieve affine invariance. Let us factor the upper and lower parts of the RPO ratio to make a unit norm vector u^T A / ∥u^T A∥ appear:

|u^T (Ax) - Med(u^T (AX))| = | ∥u^T A∥ (u^T A / ∥u^T A∥) x - Med(∥u^T A∥ (u^T A / ∥u^T A∥) X) | (B.10)
= ∥u^T A∥ | (u^T A / ∥u^T A∥) x - Med((u^T A / ∥u^T A∥) X) | (B.11)

MAD(u^T (AX)) = Med(| ∥u^T A∥ (u^T A / ∥u^T A∥) x - Med(∥u^T A∥ (u^T A / ∥u^T A∥) X) |) (B.12)
= ∥u^T A∥ Med(| (u^T A / ∥u^T A∥) x - Med((u^T A / ∥u^T A∥) X) |) (B.13)
= ∥u^T A∥ MAD((u^T A / ∥u^T A∥) X) (B.14)

This enables us to rewrite the RPO ratio as follows:

|u^T Ax - Med(u^T AX)| / MAD(u^T AX)
= [ ∥u^T A∥ | (u^T A / ∥u^T A∥) x - Med((u^T A / ∥u^T A∥) X) | ] / [ ∥u^T A∥ MAD((u^T A / ∥u^T A∥) X) ] (B.15)
= | (u^T A / ∥u^T A∥) x - Med((u^T A / ∥u^T A∥) X) | / MAD((u^T A / ∥u^T A∥) X) (B.16)

Thus for any u, x and X, let f(u) be:

f(u) := |u^T x - Med(u^T X)| / MAD(u^T X) (B.17)

if one defines ϕ(u) := u^T A / ∥u^T A∥ and ψ(u) := u^T A, we have f ∘ ϕ(u) = f ∘ ψ(u). Moreover, since ϕ is a bijection from S^{d-1} to S^{d-1}, for g the mean or sup operator applied to every existing random projection u:
g_{u ∈ S^{d-1}} [f(u)] = g_{u ∈ S^{d-1}} [f ∘ ϕ(u)] (B.18)
Combining the two last equalities provides us with the invariance to the linear transformation defined by A:
g_{u ∈ S^{d-1}} [f(u)] = g_{u ∈ S^{d-1}} [f ∘ ψ(u)] (B.19)
In other words, since ϕ is a bijection and all existing random projections of unit norm are considered during the integration over S^{d-1}, the operator g is not affected whether the RPs are transformed by ϕ beforehand or not. The intuition behind this invariance is that we are looking over the same infinite set of random projections, the transformation matrix A at most reshuffling the RPs in the infinite set the estimator integrates over. More generally, that is true for any permutation invariant operator g.
The previous translation invariance with respect to b and the linear transformation invariance with respect to A prove the affine invariance of both RPO and RPO-MEAN.

In Table C.2, tconv stands for transposed convolution. Note that in our case, the global pooling over the features dimension is only applied when retrieving the bottleneck embedding, and is not necessary in the forward pass to compute the training loss. Thus, this global pooling does not appear in Table C.2 as a layer. To allow for the generation of a reconstruction of varying size, the bottleneck features dimensionality will vary along with the input size. The fixed-size embeddings can still be produced since the number of channels of the convolutional bottleneck remains constant. As for the LSTM-based seq2seq (see Table C.1), only the real part of the output of the encoder is passed on to the decoder to limit the transit of information to a real-valued representation that will serve as range cell embeddings in R Q . The complex-to-real encoding is thus learned by the generative network, even though the decoder has complex-valued weights in order to output a complex-valued input reconstruction. As for the FCN (see Table 2.2), using a convolution with a large kernel size on the input signal is particularly interesting since it makes the potential interpretation of the learned weights as FIR filter coefficients more expressive.

For the RNN-based encoding architecture of Table C.3, only the last encoder layer hidden state was kept as the embedding of the input range cell I/Q signal. This hidden state was furthermore reduced to its real part in order to produce a real-valued representation that will serve as range cell embedding in R Q . Input dimensionality is equal to one in order to accept the complex-valued I/Q signal.
MOTS CLÉS
Discrimination de cibles radar, encodage de signal, fouillis radar, détection d'anomalie, classification monoclasse, apprentissage semi-supervisé, micro-Doppler, réseau de neurones à valeurs complexes, réseau de neurones pour graphes, apprentissage profond géométrique, apprentissage de représentation, apprentissage profond, radar, traitement du signal.
RÉSUMÉ
Les radars Doppler pulsés (RDP) de surveillance aérienne ont pour mission de discriminer des cibles en se basant sur le signal réfléchi de petites rafales d'impulsions modulées. Les antennes tournantes imposent un faible nombre d'impulsions pour caractériser le contenu de cases distance, celles-ci résultant d'une discrétisation radiale et azimutale. Cette caractérisation est tirée de signaux courts échantillonnés à la période de répétition des impulsions (PRI), un échantillon par impulsion étant disponible dans la rafale transmise. Le nombre d'impulsions et la PRF, qui changent constamment dans un radar en opération, définissent donc la résolution et les fréquences extrêmales du spectre Doppler qui définit les cibles à l'échelle d'une case distance. L'avènement de drones petits et bon marché impose l'amélioration de la détection des cibles petites et lentes qui pouvaient auparavant se retrouver rejetées en tant que fouillis. Cette thèse propose une chaîne de traitement branchée après l'étape de détection d'un RDP pour discriminer les hits afin de permettre l'abaissement des seuils de détection. Cette chaîne de traitement se divise en deux étapes: un encodage hit2vec, et une étape de discrimination. L'encodage traite une matrice à valeurs complexes et de taille variable pour produire un vecteur embedding à valeurs réelles de taille fixe. Cette représentation d'entrée contient un signal I/Q réfléchi par la case distance porteuse d'une détection enrichi par le signal réfléchi par les cases distance voisines. L'étape de discrimination met en oeuvre une classification mono-classe sur les représentations d'entrée encodées dans le but de séparer les cibles dans un contexte de faible supervision. La chaîne de traitement complète s'appuie donc sur l'hypothèse que le voisinage de spectres Doppler, même à faible résolution, contient l'information nécessaire à la discrimination. Autrement dit, la solution mise en avant se base sur l'apprentissage de représentation pour faire face à l'hétérogénéité des signaux I/Q qui doivent être séparés et pour permettre à une discrimination à faible supervision en aval de produire une distance vis-à-vis de cibles de référence utile à l'opérateur radar.
ABSTRACT
Air surveillance pulse Doppler radars (PDR) need to discriminate targets using the backscatter generated by small bursts of modulated pulses. Rotating antennas constrain the number of pulses available to characterize the content of range cells, the latter resulting from azimuthal and radial discretization. This characterization is based on short signals sampled at the pulse repetition frequency (PRF), one sample being available per pulse in the transmitted burst. The number of pulses and the PRF, which constantly change in an operating radar, thus define the resolution and the maximal frequencies of the Doppler spectrum that defines targets within a range cell. With the advent of small and cheap drones, air surveillance radars are required to improve their detection of small and slow targets that previously had chances to end up discarded as clutter. This thesis proposes a processing pipeline plugged after the detection stage of a PDR to discriminate between hits in order to allow for the lowering of detection thresholds. This processing pipeline is made of two steps: an encoding hit2vec step, and a discrimination step. The encoding handles an input complex-valued matrix of varying size and outputs a real-valued vector embedding of fixed size. This input contains the backscattered I/Q signal of the range cell carrying a detection enriched by the backscatter from the neighboring range cells. The discrimination step applies a one-class classification to the encoded inputs to separate targets representations in a low-supervision context. The complete pipeline is therefore based on the assumption that the neighborhood of Doppler spectrums, even at low resolutions, contains the required discriminatory information. Said in other words, the solution put forward uses representation learning to tackle the sampling heterogeneity of the I/Q signals that ought to be separated, and to enable a subsequent low-supervision discrimination to provide a radar operator with a useful distance to reference radar targets.
KEYWORDS
Radar targets discrimination, signal encoding, radar clutter, anomaly detection, out-of-distribution detection, one-class classification, semi-supervised learning, micro-Doppler, complex-valued neural network, graph neural network, geometric deep learning, representation learning, deep learning, radar, signal processing.
212 Figure 1.2:The proposed detections filter relies on I/Q samples stemming from an I/Q detector[START_REF] Levanon | Radar signals[END_REF]. This I/Q detector takes the received (RX) signal as input, either at its carrier frequency or at a lower intermediate frequency. The received signal is passed through a bandpass filter beforehand to limit the noise[START_REF] Levanon | Radar principles[END_REF]. The I/Q signal processed is complex-valued of the form I + jQ. This corresponds to a usual quadrature sampling setup, which can also be called complex sampling.
3 Figure 1.3: The proposed detections filter modifies the existing I/Q attribut of hits to integrate a neighborhood of range cells over the whole burst of pulses to provide the discrimination with additional information. In this figure the vertical axis describes the slow time of the sampling happening at PRF, i.e. between pulses, while the horizontal axis describes the fast time of the sampling happening at a much higher frequency and harbouring the range cells. The PRF is the sampling frequency that varies between hits, introducing a challenging physical diversity between input data points, the overall diversity also stemming from the varying number of pulses. The number of pulses here corresponds to N , the number of rows. Respecting the low-resolution constraint, we will consider N ∈ 8, 32 , as these are typical values for Pulse-Burst radars (PBR)[167, p.188][133, p.470], which will be considered as equivalent to air surveillance PDRs in this thesis. The fast time, much higher, sampling frequency is considered to be constant across all hits and systems considered. An illustration of this diversity is proposed on Fig.1.4. Top: original hit object I/Q feature (1D complex-valued vector) Bottom: new hit object I/Q feature (2D complex-valued matrix).
Figure 1.5:The Doppler effect provides an effective way to discriminate between radar targets through the modulation of backscattered pulses due to the movement of targets with respect to the sensor. The phase of the backscattered signal received at time t carries the Doppler information.
16 Figure 1.6: Illustration of the impact of the number of pulses over the Doppler spectrum of two helicopter-like targets belonging to classes differing only in the number of rotating blades creating the specificity of the modulation pattern, the remaining radar parameters remaining constant. Top row: Doppler signature. Bottom row: Covariance matrix computed over the rows of the Doppler signature. Left: 8 pulses. Center: 16 pulses. Right: 64 pulses. Provided with enough pulses, i.e. enough samples to compute a spectrum, one ends up with modulation patterns adapted to drone discrimination even without strong domain expertise, and easily identified using usual image processing methods and more general machine learning (see 1.2.2).
1 Figure 1.7: The Ground Master air surveillance radars. Notice the rotating antenna, a key factor in the resolution constraints motivating this work. Top left: GM 200, medium-range radar Top right: GM 60, short-range radar Bottom: GM 400, longrange radar. ©Thales
1 Figure 1.8: Thales Coast Watcher 100 radar, which generated the HRRPs used in this work to evaluate the possibility to discriminate radar targets with one-class classification approaches. See chapter 3 for the associated experiments. ©Thales
112 Figure 1.12: Example of "hit2vec" resampling-based baseline for the first of the two steps processing proposed and already depicted on Fig. 1.10 and Fig. 1.11. Assuming the succession of an upsampling by the integer factor I, an intermediate filter and a downsampling by the integer factor D can allow to represent all input signals with a common sampling frequency and an equal number of samples, the neighborhood of H range cells (see Fig. 1.3) could be encoded by the mean of the DFT of each individual range cell signal.
21 Figure2.1: Example neighborhoods of range cells and spatio-temporal neighborhood of hits mentioned as a forbidden alternative to put our processing choice in perspective. The dot colors describe the nature of the Doppler content of the range cells, i.e. range cells filled with the same color contain similar Doppler signatures. Here, the neighborhood size is H = 5, as is the case in our experiments. Right: Generic enriched input format of our processing as described in chapter 1 (see Fig.1.4). Each dot has a different color since by default the range cells Doppler content can be uncorrelated. We process a spatial neighborhood spanning over a radial axis and centered over the range cell that detected a potential target, i.e. whose content passed a threshold. The neighboring cells in azimuths are ignored, i.e. the cells marked with diamonds do not contribute any information to the neighborhood representation. Left: Neighborhood of range cells with highly correlated Doppler signatures. This means the neighboring cells and the central cell carrying the detection triggering the definition of a hit may contain the same kind of potential target. Top: Forbidden alternative to our enriched input representation, defined by a flexible neighborhood of hits. In such a neighborhood, each cell taken into account carries an actual hit, which is not necessarily the case in the fixed-size neighborhoods centered around each individual hit we chose as input Bottom:Intermediate neighborhood correlation case. The neighborhood embedding produced by the first hit2vec stage of our filter (see Fig.1.10) should lead to the effective separation of this neighborhood and the two other valid ones represented here.
26 Figure 2.6: Neighborhood of H = 5 range cells proposed graphs: a diversity of edges configurations are considered to generate a relevant embedding for the range cell under test. Among the configurations illustrated, one can notice (a) that is close to timeseries processing, the directed graphs {(d),(e) } which reveal what cell we are actually interested in learning a representation for, and the complete graph (e) that lets the learning phase chooses how to take into account the neighboring cells. The corresponding adjacency and degree matrices are shown on Fig. 2.7.
27 Figure 2.7: Neighborhood of H = 5 range cells proposed adjacency and degree matrices: these are the adjacency and degree matrices associated with the neighborhood graphs proposed on Fig. 2.6. As for Fig. 2.5, the degree matrix takes into account the inserted self-loops necessary to compute the graph convolutional layer. One can notice that only the undirected graphs translate into symmetric adjacency matrices.
91028 Figure 2.8: Alternative fully-connected neighborhood of H = 5 range cells proposed graphs, this time with different possible edge weights: For (a) we propose to associate each link between two range cells with a weight representing the distance in amount of range cells, while for (b) we propose a unique edge weight for each edge in the graph. The corresponding adjacency and degree matrices are shown on Fig. 2.9. The edge weights could for instance integrate the input signals sampling parameters creating the input diversity and one of the main difficulties of the representation learning task addressed.
142 Figure2.9: Alternative fully-connected neighborhood of range cells proposed adjacency and degree matrices: these are the adjacency and degree matrices associated with the alternative neighborhood graphs proposed on Fig.2.8. As for Fig.2.5, the degree matrix takes into account the inserted self-loops necessary to compute the graph convolutional layer. For (a) we end up with an adjacency matrix that associates each link between two range cells with a weight representing the distance in amount of range cells, while for (b) we get a unique edge weight for each edge in the graph, each weight appearing twice in the adjacency matrix since the graph is undirected. One can notice that both undirected graphs translate into symmetric adjacency matrices.
211 Figure 2.11: Learning metrics of the cell2vec FCN architecture training. The batches loss rapidly decrease and converge, outlining a common loss trajectory in deep learning experiments. The targets classes AUCs rise during training, indicating a continuously improving separability of the targets classes in the embedding space.On the contrary and as hoped, the pulses classes separability is not favored since the associated AUCs remain around 0.5, which is the random discrimination performance. The improving separability of targets classes appears when using specialized machine learning methods like IF and OC-SVM, and is not visible when replacing the latter with a simple Euclidean distance to a class reference point, or with a Silhouette clustering score[START_REF] Peter | Silhouettes: a graphical aid to the interpretation and validation of cluster analysis[END_REF] computed with a Euclidean metric.
212 Figure 2.12: 2D visualization of the individual range cell embeddings distribution produced by an FCN as cell2vec, before training. Top -each color depicts one target class, i.e. one Doppler pattern. Bottom -each color depicts one pulse class, i.e. one Doppler resolution class.
213 Figure 2.13: 2D visualization of the individual range cell embeddings distribution produced by an FCN as cell2vec, after training. Top -each color depicts one target class, i.e. one Doppler pattern. Bottom -each color depicts one pulse class, i.e. one Doppler resolution class.
33 Figure 3.3: Illustration of the difference between boundary and density-based OCC.Here, the one-class boundary discriminating between in and out-of-distribution samples is defined by the domain [x min , x max ]. This boundary can evidently benefit from the near OOD samples to refine its range, and holds no information describing the density of the one-class within the boundary. Characterizing this density intuitively requires more data, the density estimation being accurate only if the data distribution leading to the estimate is correctly distributed. Supposing a successful density estimation, the AD score proposed by a boundary will necessarily be less relevant than the one provided by the density estimate. Harnessing a SVDD-based OCC here could translate into the use of the distance between a test sample and the peak of the density estimate. This further depicts how SVDD-inspired approaches rely on a simple distance and not on a density information.
35 Figure 3.5: Example range profiles (one per class considered). Horizontal axis denotes the range cell index, vertical axis denotes the amplitude.
36 Figure 3.6: Unsupervised AD results. Each color point describes an AUC averaged over the experiments where a single pollution class is considered among the anomalous classes at a time.For example, a point associated with the normal cargos will be defined by the mean AUC of the experiments where pollution anomalies stem from the passenger ships, the fishing ships and the tankers respectively. Also, each single experiment setup is executed on three different seeds. This implies that each point actually depicts a mean of mean (over varying pollution and seeds respectively). The vertical bar associated with each point represents the mean standard deviation over all experiments and seeds (one standard deviation is computed over each set of seeds, then a mean is determined over all experiments). Each method suffers a loss of AUC for a higher pollution ratio of its training set supposedly pure of anomalous samples.
3 Figure 3.7: Semi-supervised AD results using Deep SAD, the semi-supervised adaptation of Deep SVDD. AUC averaged over three experiments, with three seeds per experiment as for unsupervised AD: in each experiment we change the anomaly class from which labeled anomalies originate. Vertical bars represent standard deviations computed over the various experiments and seeds. The colors describe the four different labeled anomalies ratios in the training data considered (blue: 0%, cyan: 1%, green: 5%, red: 10%).
3 Figure 3.8: One sample of each target class: the varying number of rotating blades defines the classes, the modulation pattern being easily singled out. The first line of images shows Doppler signatures, i.e. the time-varying periodogram of targets over 64 bursts of 64 pulses. On those images, each row is the periodogram computed over one burst, and each column a Fourier i.e. a Doppler bin. The second line contains the covariance SPD representation of the first line samples. The width of the Doppler modulations around the bulk speed on the periodograms varies within each class, as well as the bulk speed, the latter being portrayed by the central vertical illumination of the signature.
3 Figure 3.10: Left -Training metrics of a successful run where normal samples are concentrated around their average initial projection, and SAD and SSL samples are pushed away thanks to a loss term using the inverse of the distance with respect to the normality latent centroid. This is one of the most successful setups in Table 3.10, and one of the easiest AD experiments since the two classes defining normality here are class 3 (four blades are responsible for the modulation pattern around the central Doppler shift) and class 4 (six blades are responsible for the modulation pattern around the central Doppler shift), meaning the separation with the other classes deemed anomalous is actually a binary modulation complexity threshold. One of the contributions of the SAD and SSL supervisions can be observed on the evolution of AUCs during training: no AUC collapse can be seen during training, discarding the possibility of a latent distribution collapse during training. Experiments showed that large training batches contributed to stable AUCs growth. Spikes in the training loss match the drops in AUCs. Right -Latent distribution of the training samples visualized in 2D using t-SNE after projection by the untrained (top) and the trained neural network (bottom). One can notice that normal training samples from both normal classes are completely mixed up with the minority of SAD labeled anomalies from class 1 in red (one blade), semantically similar, whereas SSL samples which are rotated normal training samples are already gathered in their own latent subclusters. SAD labeled anomalies end up well separated after training.
Figure 4.1: Learning metrics of the graph2vec architecture training. The GNN training impact on the AUCs describing the separability of the targets and pulses classes is disappointing. Neither the targets classes, nor the pulses classes benefit from an overall improvement of their AUC scores during training. The AUCs are computed for two OCC approaches presented in Chapter 3: IF and OC-SVM. These OCC methods are applied to the final representation produced by graph2vec.
Figure 4.2: 2D visualization of the range cells neighborhoods embeddings distribution produced by the two-layers GNN, before training. One can note that this figure is still influenced by the training of the single range cell encoding neural network, which generated the single range cells representations necessary to create the dataset of range cells neighborhoods. Top -each color depicts one target class, i.e. one Doppler pattern. Bottom -each color depicts one pulse class, i.e. one Doppler resolution class.
Figure 4.3: 2D visualization of the range cells neighborhoods embeddings distribution produced by the two-layers GNN, after training. No improvement can be observed with respect to the targets classes disentanglement we are seeking. Top -each color depicts one target class, i.e. one Doppler pattern. Bottom -each color depicts one pulse class, i.e. one Doppler resolution class.
Table 2.1:

Input variation cause          Ideal embedding variation
PRF                            Invariant
# of pulses in burst           Invariant
Target-aspect sensitivity      Invariant
Transmitted frequency          Invariant
Target type                    Varying
Wx controls what information to extract from the current time step input and Wh controls what to keep from the previous hidden state. In this subsection, notations otherwise usual in the remainder of this manuscript can be set aside due to the specific nature of recurrent architectures. For instance, samples are indexed by a t for time step instead of an index i. The previous hidden state ideally keeps the relevant information from the previous states. This output can then be further transformed by an affine transformation and a non-linear activation function, or simply fed to the next time step to compute h_{t+1} within the same recurrent cell. In Eq. (2.29), typical activation choices are the rectified linear unit (see Eq. (2.7)) or the hyperbolic tangent:

tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)) (2.30)

These activation functions are applied element-wise on vector or tensor representations within neural networks. An illustration of the recurrent nature of the RNN defined by Eq. (2.29) is proposed on Fig. 2.2.

Figure 2.2: RNN principle for a simple RNN cell (left) and for a more complex LSTM cell (right). The RNN cell maintains one hidden state h_t that is passed to the next time step, while the LSTM cell passes two hidden states h_t and c_t.
Table 2.2: Overview of the FCN architecture used in our experiments, which begins by applying a large convolutional kernel to the input signal.
Table 3.1: The integrator estimator choice: mean versus maximum. RPO, deep RPO and deep SVDD test AUCs on MNIST, Fashion-MNIST and CIFAR10 for one to four modes considered as normal, for 30 seeds (truncated mean AUC ± std). The best AUC per number of modes and dataset is indicated in bold.
1) RPO-mean Deep SVDD Deep RPO-max Deep RPO-mean (2) (2)-(1)
MNIST -1 84.64 ±6.73 84.12 ±6.74 88.60 ±4.62 87.96 ±5.31 90.10 ±4.10 5.46
2 75.27 ±8.68 72.83 ±9.42 84.35 ±6.57 83.79 ±6.97 85.36 ±6.48 10.09
3 69.67 ±9.65 66.92 ±10.25 81.23 ±6.76 80.16 ±7.12 81.60 ±7.00 11.93
4 66.54 ±9.20 63.60 ±10.31 78.89 ±6.56 77.35 ±6.92 78.65 ±7.05 12.11
F-MNIST -1 89.19 ±5.81 89.73 ±5.79 90.45 ±5.76 90.17 ±6.09 91.13 ±5.20 1.94
2 78.52 ±8.39 76.47 ±8.38 85.24 ±6.45 84.57 ±7.01 85.81 ±6.36 7.29
3 71.06 ±7.38 69.37 ±7.64 80.30 ±6.99 80.64 ±6.69 81.28 ±6.40 10.22
4 67.58 ±5.89 65.79 ±6.55 77.30 ±4.99 77.53 ±5.07 77.82 ±5.34 10.24
CIFAR10 -1 57.62 ±10.96 58.62 ±9.43 64.15 ±7.38 60.22 ±7.00 63.14 ±7.30 5.52
2 53.85 ±9.49 53.81 ±7.61 56.37 ±9.25 55.66 ±8.54 56.46 ±8.89 2.61
3 52.20 ±6.95 52.53 ±5.08 54.16 ±6.94 53.87 ±6.20 54.30 ±6.80 2.10
4 51.88 ±5.91 52.32 ±4.97 53.64 ±5.97 53.71 ±5.78 53.88 ±5.89 2.00
Table 3.2: Deep RPO test AUCs with varying RP latent dimensionality for the two estimators studied on Fashion-MNIST for 30 seeds (truncated mean AUC ± std). The best AUC per number of modes is indicated in bold.
# modes      1              3
100 RPs      90.25 ±5.18    81.70 ±6.73
500 RPs      90.46 ±5.21    81.96 ±6.67
1000 RPs     90.30 ±5.25    81.67 ±6.87
2000 RPs     90.42 ±5.19    81.83 ±6.83

Table 3.3: Deep RPO-mean test AUCs with varying number of RPs for the latent space RPO on Fashion-MNIST for 20 seeds (truncated mean AUC ± std).
# modes 1000 RPs 3000 RPs
1 63.14 ±7.30 63.18 ±7.49
2 56.46 ±8.89 56.44 ±8.92
3 54.30 ±6.80 54.32 ±6.74
4 53.88 ±5.89 54.00 ±5.98
Table 3.4: Deep RPO-mean on CIFAR10 for 30 seeds, with either 1000 or 3000 RPs for RPO (truncated mean AUC ± std). The data being more complex, more RPs were used to verify whether a simple increase in the number of RPs could lead to better performances, without success.
Table 3.5: Deep RPO-mean test AUCs with and without components and projections dropouts on Fashion-MNIST for 10 seeds (truncated mean AUC ± std). C. is components dropout rate, P. is projections dropout rate. The best AUC per number of modes is indicated in bold.
SAD method SAD ratio 1 3
deep SAD 0.00 87.70 ±5.30 78.30 ±5.02
deep SAD 0.01 88.08 ±5.03 83.49 ±4.71
deep SAD 0.10 90.37 ±4.00 84.54 ±4.87
deep RP-SAD 0.00 89.00 ±3.71 78.71 ±4.80
deep RP-SAD 0.01 89.19 ±3.60 78.76 ±4.90
deep RP-SAD 0.10 89.40 ±3.46 79.93 ±5.30
Table 3.6: Semi-supervised anomaly detection with distance inversion as in deep SAD for deep RPO to take into account rare labeled anomalies during training. The SAD ratio denotes the percentage of the training set composed of labeled anomalies. Two anomalous classes are randomly picked for each seed to provide the labeled anomalies.
Table 3.7, breaks the normalization of the inputs features before their presentation to the neural network first layer. The experiments results suggest that deep RPO and deep SVDD are comparably stable with respect to the input transformation considered, and that such transformation does not trigger a drop in AUC. In addition, one can notice that the average test AUC slightly increases in some cases with the affine data disturbance. The lower part of Table 3.7 reports the results where instead of a constant diagonal matrix applying α to each
AD method α 1 AUC gap 3 AUC gap
deep SVDD 0.80 87.02 ±5.56 -0.70 ±1.05 76.49 ±5.67 -1.81 ±0.86
deep SVDD 0.90 87.77 ±5.24 +0.03 ±0.29 78.02 ±5.15 -0.29 ±0.29
deep SVDD 0.95 87.83 ±5.22 +0.09 ±0.14 78.31 ±5.02 -0.00 ±0.16
deep SVDD 1.00 87.73 ±5.24 ±0.00 ±0.00 78.31 ±5.02 ±0.00 ±0.00
deep SVDD 1.05 87.48 ±5.30 -0.25 ±0.20 78.05 ±5.18 -0.25 ±0.28
deep SVDD 1.10 87.09 ±5.40 -0.63 ±0.50 77.55 ±5.52 -0.76 ±0.75
deep SVDD 1.20 86.01 ±5.71 -1.71 ±1.32 75.98 ±6.65 -2.32 ±2.13
deep RPO 0.80 88.53 ±4.15 -0.72 ±1.48 76.85 ±5.28 -1.77 ±1.22
deep RPO 0.90 89.25 ±3.65 -0.00 ±0.59 78.33 ±4.85 -0.29 ±0.57
deep RPO 0.95 89.33 ±3.56 +0.07 ±0.31 78.61 ±4.79 -0.01 ±0.29
deep RPO 1.00 89.26 ±3.54 ±0.00 ±0.00 78.63 ±4.85 ±0.00 ±0.00
deep RPO 1.05 89.01 ±3.60 -0.24 ±0.38 78.38 ±5.05 -0.24 ±0.34
deep RPO 1.10 88.62 ±3.76 -0.63 ±0.84 77.91 ±5.36 -0.71 ±0.74
deep RPO 1.20 87.48 ±4.39 -1.77 ±1.95 76.49 ±6.25 -2.13 ±1.74
deep SVDD U [0.9;1.1] 87.47 ±5.10 78.15 ±4.76
deep SVDD U [0.8;1.2] 86.70 ±5.52 77.52 ±5.01
deep SVDD U [0.6;1.4] 83.86 ±7.21 75.67 ±5.94
deep SVDD U [0.5;1.5] 81.28 ±8.74 73.64 ±7.10
deep SVDD N (0, 1) 52.20 ±16.01 50.56 ±12.37
deep RPO U [0.9;1.1] 88.76 ±3.61 78.54 ±4.70
deep RPO U [0.8;1.2] 87.99 ±3.99 77.97 ±5.03
deep RPO U [0.6;1.4] 85.12 ±5.73 76.32 ±6.05
deep RPO U [0.5;1.5] 82.49 ±7.35 74.59 ±7.19
deep RPO N (0, 1) 52.02 ±16.16 50.05 ±10.77
Table 3.1 on less common tabular data. As can be seen in Table 3.8, Deep SVDD remains our baseline. A satellite dataset is chosen. The data stems from the original Statlog
Method mean test AUC ± std
Deep SVDD 68.23 ±5.53
RPO-Max 64.89 ±2.67
Deep RPO-Mean 73.01 ±5.93
Table 3.8: Deep RPO-Mean, RPO-Max and the baseline deep SVDD on the satellite dataset for 20 seeds (truncated mean AUC ± std). A more complete max versus mean comparison for both RPO and Deep RPO can be found in Table 3.1.

(Landsat Satellite) dataset from UCI machine learning repository [START_REF] Dua | UCI machine learning repository[END_REF], where the smallest three classes are combined to form the outlier class, while the other classes define the inlier class. As for the previous experiments, deep SVDD and deep RPO-Mean share the same neural network architecture and training hyperparameters, to produce a fair comparison. The improvement provided by Deep RPO-Mean is confirmed. The number of RPs used in the latent RPO was set to 500, since the output dimensionality of 8 of the neural network is significantly lower. This in turn is due to the low input data dimensionality for the neural network, the input samples being 1D vectors defined by 36 values. The neural networks were always trained for 80 epochs, and the test AUC retained as the model performance for each seed is the one associated with the best epoch with respect to the validation set AUC.
Table 3.10: Experiments with additional supervision provided by SAD and/or SSL labeled samples during training (average test AUCs in % ± StdDevs over ten seeds). When available, SAD samples are the equivalent of one percent of the normal training samples in quantity. The first half of the Table reports performances where only one of the two kinds of additional supervision is leveraged, while the second half describes the performances for setups where both SAD and SSL labeled samples contribute to the model training. Each couple of lines compares Deep SVDD and Deep RPO in a shared AD supervision setup, thus allowing a direct comparison. c. stands for centroid.
Table 3.11: Contamination experiments results (average test AUCs in % ± StdDevs over ten seeds): the SAD labeled anomalies are integrated within the training samples and taken into account as normal samples during training, thus no SAD loss term is used for SAD samples. The contamination rate is one percent, i.e. the equivalent of one percent of the normal training samples in labeled anomalies is added to confuse the AD. c. stands for centroid.
method (input format) SAD loss SSL loss Mean test AUC (1 mode) Mean test AUC (2 modes)
Deep SVDD (SP) no SAD no SSL 80.76 ± 7.11 76.02 ± 6.66
Deep MSVDD (SP) no SAD no SSL 78.31 ± 11.18 74.49 ± 9.13
Deep MSVDD "mean best" (SP) no SAD no SSL 79.84 ± 7.82 74.89 ± 7.01
Deep RPO (SP) no SAD no SSL 81.29 ± 5.92 74.82 ± 5.89
Deep SVDD (SP) no SAD SSL c. 85.34 ± 6.85 81.36 ± 7.47
Deep RPO (SP) no SAD SSL c. 86.66 ± 6.41 82.78 ± 8.25
Deep SVDD (SP) no SAD away 79.62 ± 9.02 75.38 ± 8.28
Deep RPO (SP) no SAD away 76.16 ± 9.87 76.56 ± 8.69
Table C.2: FCAE architecture used in our experiments. This defines a convolution-based seq2seq architecture
Layer parameters
Dropout /
C-conv 1D C-BN 1D C-ReLU C-conv 1D C-BN 1D C-ReLU C-conv 1D C-BN 1D C-ReLU R-Bottleneck C-tconv 1D C-BN 1D C-ReLU C-tconv 1D C-BN 1D C-ReLU C-tconv 1D kernel 6, stride 2, in chan 1, out chan 12 chan 12 chan 12 kernel 1, stride 1, in chan 12, out chan 12 chan 12 chan 12 kernel 1, stride 1, in chan 12, out chan 16 chan 16 chan 16 chan 16 (global pool here) kernel 6, stride 2, in chan 16, out chan 12 chan 12 chan 12 kernel 1, stride 1, in chan 12, out chan 12 chan 12 chan 12 kernel 1, stride 1, in chan 12, out chan 1
Layer             parameters
C-RNN (encode)    3 layers, hidden dim 16, input dim 1

Table C.3: RNN-based encoding architecture used in our experiments without success.
Using a terminology similar to the one already found in the deep learning literature, we call hit2vec a neural network architecture that translates radar hits features into a real-valued vector, i.e. a common
Libraries accessible on GitHub: https://github.com/NEGU93/cvnn https://github.com/ wavefrontshaping/complexPyTorch Accessed:
20/11/2022
Library accessible on GitHub https://github.com/wavefrontshaping/complexPyTorch Accessed: 20/11/2022
Library accessible on GitHub: https://github.com/pytorch/pytorch Accessed: 20/11/2022
Here we pick the notation of PyTorch documentation. The literature sometimes combines the double bias terms in the affine inputs in a single bias to produce a shorter and equivalent expression.
Again, we pick the notation of the PyTorch documentation.
Dropout randomly discards elements of representations in a neural network to prevent overfitting and excessive co-adaptation of trained weights. It defines a form of regularization very popular in deep learning architectures.
"2vec" is a popular suffix in the deep learning community to indicate an architecture that outputs a vector representation of a form of data. For instance, among the works we have cite there are two wav2vec[START_REF] Baevski | wav2vec 2.0: A framework for self-supervised learning of speech representations[END_REF][START_REF] Schneider | wav2vec: Unsupervised pre-training for speech recognition[END_REF] architectures encoding raw audio.
https://github.com/Blupblupblup/Doppler-Signatures-Generation Accessed: 28/10/2022
First version with data leakage: https://arxiv.org/abs/1711.05225v1 Third version without data leakage: https://arxiv.org/abs/1711.05225v3 Accessed: 25/11/2022
A threshold applied to the OCC score.
CIFAR10 is a popular baseline dataset of tiny images proposed in[START_REF] Krizhevsky | Learning multiple layers of features from tiny images[END_REF].
MNIST is a popular baseline dataset of handwritten digits proposed in[START_REF] Lecun | The mnist database of handwritten digits[END_REF].
Self-supervised learning (SSL) is any learning enabled by artificially generated supervision. This can typically be achieved by transforming unlabeled data and associating one label to each transformation, thus unlocking supervision.
The convergence is proved when the number p of projections tends to infinity.
A statistical depth provides a center-outward ordering of data points with respect to a dataset.
Each of the atoms here would be defined by a hypersphere partially capturing the one-class distribution.
[START_REF]Thales Ground Master 60[END_REF] Negative samples are out-of-distribution samples, i.e. data points not belonging to the one-class.
Dimensional collapse is the collapse of the variance in some of the representation space dimensions. Intuitively, this amounts to the irrelevance of some of the components in the associated representations.
Code produced during our participation available at https://github.com/Blupblupblup/ challenge-iclr-2021. Accessed: 06-10-2022.
Automatic Identification System
Identification Friend or Foe
https://archive.ics.uci.edu/ml/datasets/Statlog+\%28Landsat+Satellite\%29 Accessed: 01-09-2022
http://odds.cs.stonybrook.edu/satellite-dataset/ Accessed: 28/10/2022
https://archive.ics.uci.edu/ml/datasets/Statlog+\%28Landsat+Satellite\%29 Accessed: 01/04/2021
https://geomstats.github.io/
https://gitlab.lip6.fr/schwander/torchspdnet
An operator is permutation invariant if any permutation of the inputs does not change the output:g(x1, ... , xt) = g(x π(1) , ... , x π(t) )for any permutation π.
Remerciements
Mean vs max in Deep RPO training loss
In order to ensure the relevance of the replacement of the max estimator with a mean in Deep RPO for our application, which as previously explained in 3.
Concluding remarks
The near OODD performances of various deep and non-deep, unsupervised and semisupervised AD methods were compared on a radar Doppler signatures simulated dataset. Deep AD approaches were evaluated in various supervision setups, which revealed the relevance of combining a minority of labeled anomalies with transformed normal training samples to improve semi-supervised near OODD performances, and avoid latent normality distribution collapse. The benefits of deep learning clearly showed, and while not leading to the best overall performances, geometry-aware processing with tangent PCA proved to be the source of a substantial improvement for non-deep AD.
Appendix A
About the relation between RPO, Deep RPO and the Mahalanobis distance
To elaborate on the relation between RPO and the Mahalanobis distance, the latter describing an ellipsoid in the data points representation space, let us remind the results presented in [START_REF] Velasco | Robust rx anomaly detector without covariance matrix estimation[END_REF]. Let us consider a data point x ∈ R d belonging to a data matrix X d×n following an Elliptically Symmetric Distribution (ESD) containing n samples for which we want to compute an outlyingness score O(x). The ESD hypothesis guarantees that the sample covariance matrix Σ is positive definite. Using Σ, one can define an outlyingness score using the Mahalanobis distance:
where µ X is the data points mean, i.e. the mean column of X. According to the extended Cauchy-Schwarz inequality [START_REF] Arnold | Applied multivariate statistical analysis[END_REF], for any nonzero vector u ∈ R d :
where u T Σu ≥ 0 since Σ is positive definite. As suggested in [START_REF] Arnold | Applied multivariate statistical analysis[END_REF] if one takes
the latter leading to the equality case, showing us the upper bound is attainable. Since the upper bound is reachable for u = αΣ -1 (x -µ X ), and that such format allows u to be a RP of unit norm as required by the definition of RPO for Eq. 3.2, one can intuitively conclude that using a maximum estimator over numerous RPs the bound is approached or reached in Eq.A.3 by the left term, i.e. for U a large set of unit norm RPs:
This result is actually well-known and called the Maximization Lemma in [START_REF] Arnold | Applied multivariate statistical analysis[END_REF], where instead of u ∈ U, u is an arbitrary nonzero vector, i.e. max u∈U is replaced by max x̸ =0
. The proof for this lemma is actually the previous example case where u = αΣ -1 (x -µ X ) is shown to make the equality case happen. Note that using the equality case format for u implies knowing the covariance matrix Σ -1 . This emphasizes the relevance of RPO to compute a multivariate outlyingness without requiring a covariance matrix computation. From Eq. A.6, one can deduce:
this expression makes the projected sample u T x and the projected mean u T µ X appear. Note that for the denominator the transformation is as follows:
(A.8) Eq. A.7 now only differs from RPO due to the square and the estimators of first and second order statistics: RPO replaces the mean and the variance of the projected inputs x ∈ X with the robust estimators median and median absolute deviation respectively, which in turn adds a constant factor with respect to the Mahalanobis distance. Regarding the relevance of these robust estimators choice and the additional constant factor see [START_REF] Velasco | Robust rx anomaly detector without covariance matrix estimation[END_REF].
We thus get back to the conclusion of [START_REF] Velasco | Robust rx anomaly detector without covariance matrix estimation[END_REF] indicating the equivalence (up to a constant factor) between the Mahalanobis distance and RPO computed with an infinite number of RPs under the multivariate elliptical distribution hypothesis. Whereas RPO as presented in Table 3.9 used the max estimator to integrate over RPs as defined in [START_REF] Velasco | Robust rx anomaly detector without covariance matrix estimation[END_REF], Deep RPO replaces the max with a mean estimator in Eq. 3.2 to measure outlyingness in the latent representation space provided by a neural network. This replacement is motivated by empirical results and an interpretation provided in [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF]. The drawback of this change is that the Mahalanobis equivalence guarantee is lost, since the Maximization Lemma leading to Eq. A.6 can not be used with a mean. Working with an ellipsoid instead of a latent hypersphere as in [START_REF] Ruff | Deep one-class classification[END_REF] supposedly made the latent normality boundary used by the training objective more flexible and tailored to the data, and was the original motivation of [START_REF] Bauw | Deep random projection outlyingness for unsupervised anomaly detection[END_REF]. Recall that this normality boundary is fitted to training data before training and frozen, the boundary being defined by a single location estimator in the case of Deep SVDD, and by as many location and spread estimators as there are RPs in the case of Deep RPO.
Intuitively, the mean will pull the quantity integrated over the large set of RPs away from the upper bound. Even though the score is based on projected 1D outlyingnesses each normalized by their respective location and spread estimators, there is no assurance that once integrated with a mean into one final outlyingness these quantities generate a normality ellipsoid similar to the Mahalanobis one. Actually, nothing indicates the integrated outlyingness describes any kind of ellipsoid in the input vector representation space. However, since the mean integrates over the dimensions created by the set of RPs, and since along these dimensions each 1D coordinate is centered and normalized using its own location and spread estimators, the mean still relates to an ellipsoid in the high-dimensional representation space generated by the RPs.
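As a purely numerical illustration of the discussion above (a toy check under a Gaussian assumption, not part of the original experiments), the sketch below compares the ranking produced by RPO with a max estimator over many random projections to the Mahalanobis distance ranking; under such elliptical data the two rankings are expected to be very close.

import numpy as np

def spearman(a, b):
    # rank correlation computed with NumPy only
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(0)
d, n = 5, 2000
A = rng.normal(size=(d, d))
X = rng.normal(size=(n, d)) @ A.T                          # correlated Gaussian samples
mu = X.mean(axis=0)
Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
maha = np.einsum("ij,jk,ik->i", X - mu, Sigma_inv, X - mu) ** 0.5

U = rng.normal(size=(5000, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
P = X @ U.T
med = np.median(P, axis=0)
mad = np.median(np.abs(P - med), axis=0)
rpo_max = (np.abs(P - med) / mad).max(axis=1)              # RPO of every sample

print(spearman(maha, rpo_max))                             # close to 1 on this toy data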
Appendix B
Affine invariance of RPO with max and mean estimators
The content of this appendix was originally presented in [START_REF] Bauw | Near out-of-distribution detection for low-resolution radar micro-doppler signatures[END_REF]. We want to prove the affine invariance of the following quantity, called the random projection outlyingness (RPO):

O_RPO(x, X) = sup_{u ∈ S^{d-1}} |u^T x - Med(u^T X)| / MAD(u^T X) (B.1)
that is, we want to prove the following equality:

O_RPO(Ax + b, AX + b) = O_RPO(x, X) (B.2)
where:
• x ∈ R d×1 is the data point for which we want to compute an outlyingness measure;
• X ∈ R d×n is the data matrix containing n features vectors in R d (i.e. the data distribution, including x);
• u ∈ R d×1 is a random projection vector of unit norm, i.e. u ∈ S^{d-1}, where S^{d-1} is the unit hypersphere of R^d;
• A ∈ R d×d is a non-singular matrix (for the affine transformation);
• b ∈ R d×1 is a constant vector (for the affine transformation);
• M ed(u T X) is the median of the scalars generated by the 1D projection of all x in X by u;
• MAD(u^T X) is the median absolute deviation of the same scalars, i.e. MAD(u^T X) = Med(|u^T X - Med(u^T X)|);
AX + b is a permissive notation defining the affine transformation of every column features vector with A and b, i.e. the affine transformation of the whole data distribution on which the location and scatter estimators, respectively the Med and the MAD, are applied. We also want to prove the affine invariance of that same quantity where the sup is replaced by a mean estimator:

O_RPO-MEAN(x, X) = mean_{u ∈ S^{d-1}} |u^T x - Med(u^T X)| / MAD(u^T X) (B.3)
Appendix C
Unsuccessful architectures
This appendix contains a few of the unsuccessful neural network architectures mentioned in the manuscript. It first goes over the unsuccessful single range cell I/Q encoding architectures proposed in 2.3, and then puts forward the attempted SPD-manifold processing adaptation of Deep SVDD suggested in 3.2.3.
C.1 Cell2vec architectures
One can notice that these architectures, in addition to transforming a column of the input matrix Z I/Q (see Eq. (1.2)) of variable size into a fixed-size vector in R Q , convert the complex-valued representation into a real-valued one. These architectures correspond to the cell2vec step in the hit2vec depiction of Fig. 2.4. In the FCAE described in table C.2, as was the case for the FCN put forward in table 2.2, using a convolution with a large kernel size on the input signal is particularly interesting since it makes the potential interpretation of the learned weights as FIR filter coefficients more expressive.
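To illustrate the cell2vec idea (a variable-length complex I/Q column turned into a fixed-size real embedding), here is a simplified PyTorch stand-in for such a fully convolutional autoencoder; the layer sizes, names and hyperparameters are ours and do not reproduce the exact architectures of tables C.1-C.2.

```python
import torch
import torch.nn as nn

class TinyFCAE(nn.Module):
    """Illustrative fully convolutional autoencoder for one range cell.

    The complex I/Q samples are fed as two real channels; the first
    convolution uses a large kernel so its weights can be read as FIR
    filter coefficients. Simplified stand-in, not the thesis architecture.
    """
    def __init__(self, latent_channels=8, kernel_size=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size, stride=4, padding=kernel_size // 2),
            nn.ReLU(),
            nn.Conv1d(16, latent_channels, 9, stride=4, padding=4),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_channels, 16, 9, stride=4, padding=4, output_padding=3),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 2, kernel_size, stride=4, padding=kernel_size // 2, output_padding=3),
        )

    def forward(self, x):                      # x: (batch, 2, n_samples)
        z = self.encoder(x)                    # fixed channel count, variable length
        x_hat = self.decoder(z)
        return x_hat, z.mean(dim=-1)           # global average pooling -> fixed-size embedding

x = torch.randn(4, 2, 1024)                    # 4 range cells of 1024 I/Q samples
x_hat, embedding = TinyFCAE()(x)
loss = nn.functional.mse_loss(x_hat[..., :x.shape[-1]], x)
```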
C.2 Deep Riemannian one-class classification
Table C.4 describes an example architecture we experimented with to define an SPD-manifold aware equivalent to the Deep SVDD [START_REF] Ruff | Deep one-class classification[END_REF] one-class classification method. This architecture was tested on SPD covariance matrices computed either over a fixed set of input image transformations, or over the rows or columns of the input images. The resulting covariance matrix was of size 28 × 28, hence the input dimension of the first BiMap layer in the proposed architecture. The intuition, the loss and the layers associated with this architecture are discussed in section 3.2. The architecture presented here can be applied to the covariance matrices computed over the individual components of Doppler spectrums (see Fig. 3.8) or computed over the output of convolutional layers, or also to the Toeplitz autocorrelation matrices associated with AR models (see 2.3.1). It is also possible to extend this SPD-manifold adaptation of Deep SVDD to the complex-valued equivalent of SPD matrices, i.e. HPD matrices (see 3.2). These architectures were implemented and trained using the torchspdnet [10] library. This versatility of the SPD manifold-aware processing, in addition to its potential for more efficient learning in terms of required iterations and data, motivated our experiments to develop a Deep SVDD SPD equivalent.

Regarding the LSTM-based cell2vec architectures of C.1: only the last encoder layer hidden and cell states (hence the "×2") were kept as the embedding of the input range cell I/Q signal and passed on to the LSTM decoder. These hidden and cell states were furthermore reduced to their real part in order to limit the transit of information to a real-valued representation that serves as range cell embedding in R Q . This complex-to-real, and back-to-complex conversion is illustrated on Fig. 2.3. The input dimensionality of both LSTM networks is equal to one in order to accept the complex-valued I/Q signal. For the decoder, this input can be the ground truth stemming from the teacher forcing. A complementary complex-valued linear layer was added to further process the output of the LSTM decoder before evaluating the reconstruction error. A dropout layer without rescaling was added in some of our experiments to complicate the task of reconstruction (see 2.3.2).
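As a complement, the covariance-based SPD inputs mentioned above (e.g. computed over the rows or columns of a 28 × 28 input) can be formed as in the following NumPy sketch; the code and names are ours, and a small ridge term is added so the matrix is strictly positive definite, as required by SPD-manifold layers.

```python
import numpy as np

def spd_from_rows(image, eps=1e-6):
    """Build an SPD covariance matrix from the rows of a 2D input.

    The rows of a 28x28 input are treated as 28 observations of a
    28-dimensional variable, yielding a 28x28 covariance matrix; the
    eps ridge keeps it strictly positive definite (on the SPD manifold).
    """
    rows = image - image.mean(axis=0, keepdims=True)
    cov = rows.T @ rows / (rows.shape[0] - 1)
    return cov + eps * np.eye(cov.shape[0])

image = np.random.default_rng(0).random((28, 28))
C = spd_from_rows(image)
print(np.linalg.eigvalsh(C).min() > 0, C.shape)   # strictly PD, (28, 28)
```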
Appendix D
Publicly available codes
A few of the codes developed during this thesis have been made publicly available at the time of publication of this document and allow for the reproduction of parts of the experiments that led to scientific publications:
• Deep MSVDD [START_REF] Ghafoori | Deep multi-sphere support vector data description[END_REF] |
04106785 | en | [ "math.math-oc", "info.info-au" ] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04106785/file/main.pdf | Lucas Brivadis
email: [email protected]
Antoine Chaillet
email: [email protected]
Jean Auriol
email: [email protected]
Adaptive observer and control of spatiotemporal delayed neural fields
Keywords: observers, adaptive control, persistence of excitation, neural fields, delayed systems
An adaptive observer is proposed to estimate the synaptic distribution between neurons asymptotically from the measurement of a part of the neuronal activity and a delayed neural field evolution model. The convergence of the observer is proved under a persistency of excitation condition. Then, the observer is used to derive a feedback law ensuring asymptotic stabilization of the neural fields. Finally, the feedback law is modified to ensure simultaneously practical stabilization of the neural fields and asymptotic convergence of the observer under additional restrictions on the system. Numerical simulations confirm the relevance of the approach.
1 Introduction
Neural fields are nonlinear integro-differential equations used to model the activity of neuronal populations [START_REF] Bressloff | Spatiotemporal dynamics of continuum neural fields[END_REF][START_REF] Coombes | Neural Fields: Theory and Applications[END_REF]. They constitute a continuum approximation of brain structures motivated by the high density of neurons and synapses. Their infinite-dimensional nature allows for accounting for the spatial heterogeneity of the neurons' activity and the complex synaptic interconnection between them. Their delayed version also allows to take into account the non-instantaneous communication between neurons. Yet, unlike numerical models of interconnected neurons, in which every single neuron is represented by a set of differential equations, neural fields remain amenable to mathematical analysis. A vast range of mathematical tools are now available to predict and influence their behavior, including existence of stationary patterns [START_REF] Brivadis | Existence of an equilibrium for delayed neural fields under output proportional feedback[END_REF][START_REF] Faugeras | Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks[END_REF] stability analysis [START_REF] Faye | Some theoretical and numerical results for delayed neural field equations[END_REF], bifurcation analysis [START_REF] Atay | Stability and Bifurcations in Neural Fields with Finite Propagation Speed and General Connectivity[END_REF][START_REF] Veltz | Interplay between synaptic delays and propagation delays in neural field equations[END_REF], and feedback stabilization [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF].
This interesting compromise between biological significance and abstraction explains the wide range of neural fields applications, which cover primary visual cortex [START_REF] Bertalmío | Cortical-Inspired Wilson-Cowan-Type Equations for Orientation-Dependent Contrast Perception Modelling[END_REF][START_REF] Pinotsis | Contrast gain control and horizontal interactions in v1: A dcm study[END_REF], auditory system [START_REF] Boscain | A bio-inspired geometric model for sound reconstruction[END_REF], working memory [START_REF] Laing | Multiple bumps in a neuronal model of working memory[END_REF], sensory cortex [START_REF] Detorakis | Structure of receptive fields in a computational model of area 3b of primary sensory cortex[END_REF], and deep brain structures involved in Parkinson's disease [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF].
The refinement of modern technologies (such as multi-electrode arrays or calcium imaging) makes it possible to measure neuronal activity with higher and higher spatial resolution. Using these measurements to estimate the synaptic distribution between neurons would greatly help decipher the internal organization of particular brain structures. Currently, this is mostly addressed by offline algorithms based on kernel reconstruction techniques [START_REF] Alswaihli | Kernel reconstruction for delayed neural field equations[END_REF], although some recent works propose online observers (see [START_REF] Burghi | Online estimation of biophysical neural networks[END_REF] for conductance-based models or [START_REF] Brivadis | Online estimation of hilbert-schmidt operators and application to kernel reconstruction of neural fields[END_REF] for delayed neural fields).
In turn, estimating this synaptic distribution could be of interest to improving feedback control of neuronal populations. A particularly relevant example is that of deep brain stimulation (DBS), which consists in electrically stimulating deep brain structures of the brain involved in neurological disorders such as Parkinson's disease [START_REF] Limousin | Electrical stimulation of the subthalamic nucleus in advanced Parkinson's disease[END_REF]. Several attempts have been made to adapt the delivered stimulation based on real-time recordings of the brain activity [START_REF] Carron | Closing the loop of deep brain stimulation[END_REF]. Among them, it has been shown that a stimulation proportional to the activity of a brain structure called the subthalamic nucleus is enough to disrupt Parkinsonian brain oscillations [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF]. Yet, the value of the proportional gain depends crucially on the synaptic strength between the neurons involved: estimating it would thus allow for more respectful stimulation strategies.
In this paper, we thus develop an online strategy to estimate the synaptic distribution of delayed neural fields. This estimation relies on the assumed knowledge of the activation function of the population, the time constants involved, and the propagation delays between neurons, as well as online measurement of a part of the neuronal activity. It exploits the theory of adaptive observers for nonlinear systems developed in [START_REF] Besançon | Remarks on nonlinear adaptive observer design[END_REF][START_REF] Besançon | On adaptive observers for systems with state and parameter nonlinearities[END_REF][START_REF] Pyrkin | An adaptive observer for uncertain linear time-varying systems with unknown additive perturbations[END_REF] and allows to reconstruct the unmeasured quantities based on real-time measurements. We then exploit this feature to propose a stabilizing feedback strategy that may be of particular interest to disrupt pathological brain oscillations. This control law estimates the synaptic kernel in real-time and adapts the stimulation accordingly, thus resulting in a dynamic output feedback controller.
The delayed neural fields model is presented in Section 2 together with an introduction to the necessary mathematical formalism. The synaptic kernel estimation is presented in Section 3, whereas its use for feedback stabilization is presented in Section 4. Numerical simulations to assess the performance of the proposed estimation and stabilization techniques are presented in Section 5.
2 Problem statement and mathematical preliminaries
2.1 Delayed neural fields
Given a compact set Ω ⊂ R q (where, typically, q ∈ {1, 2, 3}) representing the physical support of a neuronal population, the evolution of the neuronal activity z(t, r) ∈ R n at time t ∈ R + and position r ∈ Ω is modeled as the following delayed neural fields [START_REF] Bressloff | Spatiotemporal dynamics of continuum neural fields[END_REF][START_REF] Coombes | Neural Fields: Theory and Applications[END_REF]:
τ(r) ∂z/∂t(t, r) = -z(t, r) + u(t, r) + ∫_Ω w(r, r′) S(z(t - d(r, r′), r′)) dr′.   (1)
n ∈ N represents the number of considered neuronal population types; for instance, imagery techniques often allow for discrimination between an excitatory and an inhibitory population, in which case n = 2. τ (r) is a positive definite diagonal matrix of size n × n, continuous in r, representing the time decay constant of neuronal activity at position r. S : R n → R n is a nonlinear activation function; it is often taken as a monotone function, possibly bounded (for instance, a sigmoid). w(r, r ) ∈ R n×n defines a kernel describing the synaptic strength between locations r and r ; its sign indicates whether the considered presynaptic neurons are excitatory or inhibitory, whereas its absolute value represents the strength of the synaptic coupling between them. d(r, r ) ∈ [0, d], for some d > 0, represents the synaptic delay between the neurons at positions r and r that typically mainly results from the finite propagation speed along the axons. Finally, u(t, r) ∈ R n is an input that may represent either the influence of non-modeled brain structures or an artificial stimulation signal. We assume that the neuronal population can be decomposed into z(t, r) = (z 1 (t, r), z 2 (t, r)) ∈ R n 1 × R n 2 where z 1 corresponds to the measured part of the state and z 2 to the unmeasured part. In the case where all the state is measured, we simply write z = z 1 and n 2 = 0. Such a decomposition is natural when the two considered populations are physically separated, as it happens in the brain structures involved in Parkinson's disease [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF]. It can also be relevant for imagery techniques that discriminate among neuron types within a given population. Accordingly, we define τ i , S ij w ij and u i of suitable dimensions for each population i, j ∈ {1, 2} so that
τ_i(r) ∂z_i/∂t(t, r) = -z_i(t, r) + u_i(t, r) + Σ_{j=1}^{2} ∫_Ω w_ij(r, r′) S_ij(z_j(t - d_ij(r, r′), r′)) dr′.   (2)
2.2 Problem statement
In the present paper, we are interested in the following control and observation problems:
Problem 2.1 (Estimation).
From the knowledge of S ij , w 2j , τ i and d ij and the online measurement of u i (t) and z 1 (t) for all i, j ∈ {1, 2}, estimate online z 2 (t), w 11 and w 12 .
Problem 2.2 (Stabilization). From the knowledge of S ij , w 2j , τ i and d ij for all i, j ∈ {1, 2} and the online measurement of z 1 (t), find u 1 in the form of a dynamic output feedback law that stabilizes z 1 and z 2 at some reference when u 2 = 0.
As already said, Problem 2.1 is motivated by the advances in imagery and recording technologies and the importance of determining synaptic distribution in the understanding of brain functioning. The assumption that the transmission delays are known is practically meaningful, as these delays are typically proportional to the distance |r -r | between the considered neurons via the axonal transmission speed, which is typically known a priori. Similarly, the time constants τ i (r) are usually directly dependent on the conductance properties of the neurons. The precise knowledge of the activation function S ij is probably more debatable, although recent techniques allow to estimate them based on the underlying neuron type [START_REF] Carlu | A mean-field approach to the dynamics of networks of complex neurons, from nonlinear integrate-and-fire to hodgkin-huxley models[END_REF].
Problem 2.2 is motivated by the development of deep brain stimulation (DBS) technologies that allow electrically stimulating some areas of the brain whose pathological oscillations are correlated to Parkinson's disease symptoms. In our context, the neuronal activity measured and actuated by DBS through u 1 is denoted by z 1 , which corresponds to a deep brain region known as the subthalamic nucleus (STN). We refer to [START_REF] Detorakis | Closed-loop stimulation of a delayed neural fields model of parkinsonian STN-GPe network: a theoretical and computational study[END_REF] for more details on feedback techniques for DBS. One hypothesis, defended by [START_REF] Holgado | Conditions for the generation of beta oscillations in the subthalamic nucleus-globus pallidus network[END_REF], is that these pathological oscillations may result from the interaction between STN and a narrow part of the brain, the external globus pallidus (GPe). The neuronal activity in this area is inaccessible to measurements or stimulation in clinical practice, but it is internally stable and corresponds to z 2 in our model.
A strategy relying on a high-gain approach answered Problem 2.2 in [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF]. The system under consideration was similar, except that the nonlinear activation function was not applied to the delayed neuronal activity but to the resulting synaptic coupling. In [START_REF] Faugeras | Absolute stability and complete synchronization in a class of neural fields models[END_REF], system (1) is referred to as a voltage-based model, while [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF] focused on activity-based models. It is proven in [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF], Proposition 3, that under a strong dissipativity assumption (equivalent to our Assumption 3.1 below), for any positive continuous map γ : Ω → R and any square-integrable reference signal z_ref : Ω → R^{n_1}, there exists a positive constant α* depending on parameters of the system such that, for all α > α*, system (2) coupled with the output feedback law u_1(t, r) = -αγ(r)(z_1(t, r) - z_ref(r)) and u_2(t, r) = 0 is globally asymptotically stable (and even input-to-state stable) at some equilibrium whose existence is proved in [START_REF] Brivadis | Existence of an equilibrium for delayed neural fields under output proportional feedback[END_REF]. However, one of the drawbacks of this result is that α* is proportional to the L²-norm of w_11, which is usually unknown or uncertain. In practice, this implies a high-gain choice in the controller, which may lead to large values of u_1(t, r) that are incompatible with the safety constraints imposed by DBS techniques. On the contrary, our goal in this paper is to propose an adaptive strategy that does not rely on any prior knowledge of the synaptic distributions w_11 and w_12, but at the price of more knowledge on other parameters of the system.
In a preliminary work [START_REF] Brivadis | Online estimation of hilbert-schmidt operators and application to kernel reconstruction of neural fields[END_REF], we have shown that an observer may be designed in the delay-free case to estimate z_2(t), w_11 and w_12, hence to answer Problem 2.1. However, this work was done in a framework that does not encompass time-delay systems, and Problem 2.2 was not addressed at all. With such an observer, a natural dynamic output feedback stabilization strategy would be to choose
u_1(t, r) = -α(z_1(t, r) - z_1,ref(r)) + z_1(t, r) - ∫_{r′∈Ω} ŵ_11(t, r, r′) S_11(z_1(t - d_11(r, r′), r′)) dr′ - ∫_{r′∈Ω} ŵ_12(t, r, r′) S_12(ẑ_2(t - d_12(r, r′), r′)) dr′,
where α > 0 is a tunable controller gain, z_1,ref is a reference signal, ŵ_1j(t) denotes the estimation of w_1j made by the observer at time t and ẑ_2(t) is the estimation of z_2(t). Doing so, if the observer has converged to the state, i.e., ŵ_1j = w_1j and ẑ_2 = z_2, then the remaining dynamics of z_1 would be τ_1(r) ∂z_1/∂t(t, r) = -α(z_1(t, r) - z_1,ref(r)), so that z_1 would tend towards z_1,ref.
2.3 Definitions and notations
Let q be a positive integer, Ω be an open subset of R q and X be a Hilbert space endowed with the norm • X and scalar product •, • X . Denote by
L 2 (Ω, (X , • X )) := {f : Ω → X Lebesgue-measurable | Ω f 2 F < +∞} the Hilbert space of X -valued square integrable functions. Denote by W 1,2 (Ω, (X , • X )) := {f ∈ L 2 (Ω, (X , • X )) | f ∈ L 2 (Ω, (X , • X )} and by W m,2 (Ω, (X , • X )) := {f ∈ W m-1,2 (Ω, (X , • X )) | f ∈ W m-1,2 (Ω, (X , • X )} for m > 1 the usual Sobolev spaces.
If Ω is a compact set, the above definitions hold by replacing Ω by its interior, and we denote by µ(Ω) := Ω dr the Lebesgue measure of Ω. If I is an interval of R, the space of k-times continuously differentiable functions from I to X is denoted by C k (I, X ). We endow C 0 (I, X ) with the norm defined by x C 0 (I,X ) := sup t∈I x(t) X for all x ∈ C 0 (I, X ). If x ∈ X , denote by x * ∈ X its adjoint. If X is a Hilbert space and Y is a Banach (resp. Hilbert) space, then the Banach (resp. Hilbert) space of linear bounded operators from X to Y is denoted by L(X , Y). For any W ∈ L(X , Y), denote by Ker W its kernel and Ran W its range. The map X x → W x Y defines a semi-norm on W , that is said to be induced by W . It is a norm if and only if W is injective. Set L(X ) := L(X , X ). Denote by Id X the identity operator over X .
For any positive integers n and m and any matrix w ∈ R^{n×m}, denote by w^T its transpose, Tr(w) its trace, |w| its norm induced by the Euclidean norm, and ||w||_F = sqrt(Tr(w^T w)) its Frobenius norm. Recall that these norms are equivalent and |w| ≤ ||w||_F. Hence, for any positive integers n and m, L²(Ω, (R^{n×m}, |·|)) and L²(Ω, (R^{n×m}, ||·||_F)) are equivalent Hilbert spaces and ||·||_{L²(Ω,(R^{n×m},|·|))} ≤ ||·||_{L²(Ω,(R^{n×m},||·||_F))}. For all i, j ∈ {1, 2}, set X_{z_i} = L²(Ω, R^{n_i}) and X_{w_ij} = L²(Ω², R^{n_i×n_j}), so that X_{z_i} (resp.
X w ij ) will be used as the state space of z i (resp. w ij ). Set also X z = X z 1 × X z 2 and X w = X w 11 × X w 12 . By abuse of notations, we write
• (Xw ij , • ) := • L 2 (Ω 2 ,(R n i ×n j , • )) and • (Xw ij , • F ) := • L 2 (Ω 2 ,(R n i ×n j , • F ))
. For any positive constant d and any Hilbert space X, if x ∈ C^0([-d, +∞), X), then for all t ≥ 0 we denote by x_t ∈ C^0([-d, 0], X) the history function defined by x_t(s) := x(t + s) for all s ∈ [-d, 0]. For all i, j ∈ {1, 2}, we denote by ℓ_ij the Lipschitz constant of S_ij and set S̄_ij := sup_{R^{n_j}} |S_ij|.
Remark 2.3. To ease the reading, we have chosen to consider that the integral over Ω is a Lebesgue integral, i.e., that Ω is endowed with the Lebesgue measure. However, note that our work remains identical when considering any other measure for which Ω is measurable. In particular, an interesting case is when Ω = ∪ N k=1 {r k } for some finite family (r k ) 1 k N in R q and the measure is the counting measure. In that case, (1) can be rewritten as the usual finite-dimensional Wilson-Cowan equation [START_REF] Wilson | A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue[END_REF]: for all k ∈ {1, . . . , N },
τ(r_k) ∂z/∂t(t, r_k) = -z(t, r_k) + u(t, r_k) + Σ_{ℓ=1}^{N} w(r_k, r_ℓ) S(z(t - d(r_k, r_ℓ), r_ℓ)).   (3)
This case will be further investigated in Section 4.2.
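To make this counting-measure case concrete, a minimal Euler simulation of (3) for a single population (n = 1) on N nodes can be sketched as follows; the code, names and parameter values are ours and purely illustrative.

```python
import numpy as np

# Illustrative Euler simulation of the discretized delayed neural field (3).
rng = np.random.default_rng(0)
N, dt, T = 30, 1e-3, 5.0
tau = 0.01 + 0.02 * rng.random(N)                    # time constants tau(r_k)
w = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)   # synaptic kernel w(r_k, r_l)
delays = rng.integers(1, 40, size=(N, N))            # d(r_k, r_l) in time steps
S = np.tanh                                          # bounded Lipschitz activation

n_steps = int(T / dt)
max_delay = int(delays.max())
hist = np.zeros((max_delay + n_steps + 1, N))        # activity history, zero initial condition
for t in range(max_delay, max_delay + n_steps):
    u = 0.1 * np.sin(2 * np.pi * 5 * (t - max_delay) * dt) * np.ones(N)  # input u(t, r_k)
    delayed = hist[t - delays, np.arange(N)]         # z(t - d(r_k, r_l), r_l), shape (N, N)
    coupling = np.sum(w * S(delayed), axis=1)
    hist[t + 1] = hist[t] + dt / tau * (-hist[t] + u + coupling)

print(hist[-1][:5])                                  # neuronal activity at final time
```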
2.4 Preliminaries on Hilbert-Schmidt operators
Let q, n and m be positive integers and Ω be an open subset of R^q. To any map w ∈ L²(Ω², R^{n×m}), one can associate a Hilbert-Schmidt (HS) integral operator W : L²(Ω, R^m) → L²(Ω, R^n) defined by (W z)(r) = ∫_Ω w(r, r′) z(r′) dr′ for all r ∈ Ω. The map w is said to be the kernel of W. Let us recall some basic notions on such operators (see e.g. [START_REF] Gohberg | Hilbert-Schmidt Operators[END_REF] for more details). The space
L 2 (L 2 (Ω, R m ), L 2 (Ω, R n )) of HS integral operators is a subspace of L(L 2 (Ω, R m ), L 2 (Ω, R n ))
, and is a Hilbert space when endowed with the scalar product defined by
W a , W b L 2 (L 2 (Ω,R m ),L 2 (Ω,R n )) := w a , w b (L 2 (Ω 2 ,R n×m , • F ))
for all W_a and W_b in L_2(L²(Ω, R^m), L²(Ω, R^n)) with kernels w_a and w_b, respectively. For any Hilbert basis (e_k)_{k∈N} of L²(Ω, R^m), we have that ||W||²_{L_2(L²(Ω,R^m),L²(Ω,R^n))} = Σ_{k∈N} ||W e_k||²_{L²(Ω,R^n)} = ||w||²_{L²(Ω²,(R^{n×m},||·||_F))} for any HS operator W with kernel w ∈ L²(Ω², R^{n×m}).
Let p be another positive integer. If W and P are two HS integral operators with kernels w and ρ, in
L 2 (L 2 (Ω, R m ), L 2 (Ω, R n )) and L 2 (L 2 (Ω, R n ), L 2 (Ω, R p )
) respectively, the composition W P is also a HS integral operators, in L 2 (L 2 (Ω, R m ), L 2 (Ω, R p )). Moreover, its kernel is denoted by w • ρ and satisfies
(w • ρ)(r, r ) = Ω w(r, r )ρ(r , r )dr for all r, r ∈ Ω. If W ∈ L 2 (L 2 (Ω, R m ), L 2 (Ω, R n ))
has kernel w, then its adjoint W * is also a HS integral operator and its kernel w * satisfies w * (r, r ) = w(r , r) for all r, r ∈ Ω. In particular, W is self-adjoint if and only if n = m and w(r, r ) = w(r , r) for all r, r ∈ Ω, and w is a positivedefinite kernel if and only if so is W . In that case, w induces a norm on L 2 (Ω, R n ), defined by z → W z L 2 (Ω,R n ) , that is weaker than or equivalent to the usual norm
• L 2 (Ω,R n ) .
To answer Problem 2.1, we propose to estimate w 1j 's in the norm L 2 (Ω, (R n×m , • F )), which, by the previous remarks, is equivalent to estimate their associated HS operators. This operator-based approach has been followed in [START_REF] Brivadis | Online estimation of hilbert-schmidt operators and application to kernel reconstruction of neural fields[END_REF] to answer Problem 2.1 in the delay-free case. In the present paper, we focus on estimating the kernels rather than their associated operators. From a practical viewpoint, the use of the Frobenius norm corresponds to a coefficientwise estimation of the matrices w 1j (r, r ).
2.5 Properties of the system
Let us recall the well-posedness of the system (2) under consideration (that was proved in [24, Theorem 3.2.1]), as well as a bounded-input bounded-state (BIBS) property (that we prove below).
Assumption 2.4. The set Ω ⊂ R q is compact and, for all i, j ∈ {1, 2},
τ i ∈ C 0 (Ω, D n i ++ ), u i ∈ C 0 (R + , X z i ), d ij ∈ C 0 (Ω 2 , [0, d]) for some d > 0, w ij ∈ X w ij , and S ij ∈ C 0 (R n j , R n i ) is bounded and globally Lipschitz.
These assumptions are standard in neural field analysis (see, e.g., [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF]). In particular, the boundedness of S reflects the biological limitations of the maximal activity that the population can reach. Proposition 2.5 (Open-loop well-posedness and BIBS). Suppose that Assumption 2.4 is satisfied. Then, for any initial condition
(z 1,0 , z 2,0 ) ∈ C 0 ([-d, 0], X z 1 ) × C 0 ([-d, 0], X z 2 ), the open-loop system (2) admits a unique corresponding solution (z 1 , z 2 ) ∈ C 1 ([0, +∞), X z 1 × X z 2 )∩C 0 ([-d, +∞), X z 1 ×X z 2 ). Moreover, if u i is bounded for all i ∈ {1, 2}, then all solutions (z 1 , z 2 ) of (2)
are such that z i and dz i dt are also bounded.
Proof. Well-posedness. The only difference with [24, Theorem 3.2.1] is that τ depends on r. However, since τ is assumed to be continuous and positive, the proof remains identical to the one given in [24, Theorem 3.2.1].
BIBS. For all i ∈ {1, 2}, let τ i be the smallest diagonal entry of τ i (r) when r spans Ω (which exists since τ i is continuous and Ω is compact). For all t 0, we have by Young's and Cauchy-Schwartz inequalities that
τ i 2 d dt z i (t) 2 Xz i - Ω |z i (t, r)| 2 dr + Ω z i (t, r) u i (t, r)dr + Ω z i (t, r) 2 j=1 Ω w ij (r, r )S ij (z j (t -d ij (r, r ), r ))dr dr - Ω |z i (t, r)| 2 dr + 1 4 Ω |z i (t, r)| 2 dr + Ω |u i (t, r)| 2 dr + 1 4 Ω |z i (t, r)| 2 dr + 2 j=1 Ω 2 w ij (r, r ) 2 |S ij (z j (t -d ij (r, r ), r ))| 2 dr dr - 1 2 z i (t) 2 Xz i + u i (t) 2 Xu i + 2 j=1 S2 ij w ij 2 (Xw ij , • ) .
Hence, if u i remains bounded, then z i also remains bounded by Grönwall's inequality. Moreover,
τ i ∂z i ∂t (t) Xz i z i (t) Xz i + u i (t) Xz i + Ω 2 j=1 Ω w ij (r, r )S ij (z j (t -d ij (r, r ), r ))dr 2 dr z i (t) Xz i + u i (t) Xz i + 2 j=1 Sij w ij (Xw ij , • ) .
Hence dz i dt is also bounded if u i is bounded.
In the rest of the paper, we always make the Assumption 2.4, so that the well-posedness of the system is always guaranteed.
3 Adaptive observer
3.1 Observer design
In order to design an observer, we first make a dissipativity assumption on the unmeasured part z_2 of the state.
Assumption 3.1 (Strong dissipativity). ℓ²_22 ||w_22||²_{(X_{w_22}, |·|)} < 1.
Under Assumption 3.1, if z^a_2 and z^b_2 denote two solutions of the z_2-subsystem of (2) associated with the same inputs, then ||z^a_2(t) - z^b_2(t)||_{X_{z_2}} is converging towards 0 as t goes to +∞. (This fact will be proved and explained in Remark 3.14.) Therefore, Assumption 3.1 can be interpreted as a detectability hypothesis: the unknown part of the state has contracting dynamics with respect to some norm.
We also stress that Assumption 3.1 is commonly used in the stability analysis of neural fields [START_REF] Faugeras | Absolute stability and complete synchronization in a class of neural fields models[END_REF] and ensures dissipativity even in the presence of axonal propagation delays [START_REF] Detorakis | Incremental stability of spatiotemporal delayed dynamics and application to neural fields[END_REF].
Inspired by the delay-free case investigated in [START_REF] Brivadis | Online estimation of hilbert-schmidt operators and application to kernel reconstruction of neural fields[END_REF], let us consider the following observer:
τ 1 (r) ∂ ẑ1 ∂t (t, r) = -α(ẑ 1 (t, r) -z 1 (t, r)) -z 1 (t, r) + u 1 (t, r) + Ω ŵ11 (t, r, r )S 11 (z 1 (t -d 11 (r, r ), r ))dr + Ω ŵ12 (t, r, r )S 12 (ẑ 2 (t -d 12 (r, r ), r ))dr τ 2 (r) ∂ ẑ2 ∂t (t, r) = -ẑ2 (t, r) + u 2 (t, r) + Ω w 21 (r, r )S 21 (z 1 (t -d 21 (r, r ), r ))dr + Ω w 22 (r, r )S 22 (ẑ 2 (t -d 22 (r, r ), r ))dr τ 1 (r) ∂ ŵ11 ∂t (t, r, r ) = -(ẑ 1 (t, r) -z 1 (t, r))S 11 (z 1 (t -d 11 (r, r ), r )) τ 1 (r) ∂ ŵ12 ∂t (t, r, r ) = -(ẑ 1 (t, r) -z 1 (t, r))S 12 (ẑ 2 (t -d 12 (r, r ), r )) (4)
where α > 0 is a tunable observer gain, to be selected later.
Note that ẑ2 has the same dynamics as z 2 . Hence the dissipativity Assumption 3.1 shall be employed to prove observer convergence. The correction terms are inspired by [START_REF] Besançon | Remarks on nonlinear adaptive observer design[END_REF] that dealt with the finite-dimensional delay-free context.
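To fix ideas, in the fully measured single-population case (n_2 = 0) and on a finite set of nodes as in Remark 2.3, one Euler step of the observer (4) can be sketched as follows; the code is ours and purely illustrative (names, gains and data are placeholders).

```python
import numpy as np

def observer_step(z_hat, w_hat, z_meas_hist, u, t, tau, delays, S, alpha, dt):
    """One Euler step of the adaptive observer (4), fully measured case (n2 = 0).

    z_meas_hist[t] is the measured activity z(t, r_k) on N nodes;
    delays[k, l] is d(r_k, r_l) expressed in time steps.
    """
    N = len(z_hat)
    z = z_meas_hist[t]
    delayed = z_meas_hist[t - delays, np.arange(N)]   # z(t - d_kl, r_l)
    innov = z_hat - z                                 # output injection term
    z_hat_dot = (-alpha * innov - z + u + np.sum(w_hat * S(delayed), axis=1)) / tau
    w_hat_dot = -(innov / tau)[:, None] * S(delayed)  # kernel adaptation law
    return z_hat + dt * z_hat_dot, w_hat + dt * w_hat_dot

# minimal usage with a synthetic measurement record (stand-in data)
rng = np.random.default_rng(1)
N, dt, alpha = 20, 1e-3, 5.0
tau = 0.02 * np.ones(N)
delays = rng.integers(1, 30, size=(N, N))
z_meas_hist = 0.1 * rng.standard_normal((2000, N))
z_hat, w_hat = np.zeros(N), np.zeros((N, N))
for t in range(int(delays.max()), z_meas_hist.shape[0] - 1):
    z_hat, w_hat = observer_step(z_hat, w_hat, z_meas_hist, np.zeros(N),
                                 t, tau, delays, np.tanh, alpha, dt)
```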
The well-posedness of the observer system is a direct adaptation of [24, Theorem 3.2.1]. The main differences are that τ i 's are space-dependent and ŵ1i are solutions of a dynamical system. Proposition 3.2 (Observer well-posedness). Suppose that Assumption 2.4 is satisfied. Then, for any initial condition (z 1,0 , ẑ1,0 , z 2,0 , ẑ2,0 , ŵ11,0 , ŵ12,0 )
∈ C 0 ( [-d, 0], X z 1 ) 2 ×C 0 ([-d, 0], X z 2 ) 2 × X w 11 ×X w 12 , the open-loop system (2)-(4) admits a unique corresponding solution (z 1 , ẑ1 , z 2 , ẑ2 , ŵ11 , ŵ12 ) ∈ C 1 ([0, +∞), X 2 z 1 × X 2 z 2 × X w 11 × X w 12 ) ∩ C 0 ([-d, +∞), X 2 z 1 × X 2 z 2 × X w 11 × X w 12 ). Proof.
The proof is based on [26, Lemma 2.1 and Theorem 2.3], and follows the lines of [START_REF] Faye | Some theoretical and numerical results for delayed neural field equations[END_REF]Lemma 3.1.1]. First, note that (2)-( 4) is a cascade system where the observer (4) is driven by the system's dynamics [START_REF] Atay | Stability and Bifurcations in Neural Fields with Finite Propagation Speed and General Connectivity[END_REF]. The well-posedness of ( 2) is guaranteed by Proposition 2.5. Now, let (z 1 , z 2 ) be a solution of (2) and let us prove the existence and uniqueness of (ẑ 1 , ẑ2 , ŵ11 , ŵ12 ) solution of (4) starting from the given initial condition. Let us consider the map F :
R + × X z 1 × C 0 ([-d, 0], X z 2 ) × X w 11 × X w 12 → X z 1 × X z 2 × X w 11 × X w 12 such that (4) can be rewritten as d dt (ẑ 1 , ẑ2 , ŵ11 , ŵ12 )(t) = F (t, ẑ1 , ẑ2t , ŵ11 , ŵ12
). Since τ i are continuous and positive, S ij are bounded and w ij are square-integrable over Ω 2 , d i are continuous and u i ∈ C 0 (R + , X z i ), the map F is well-defined by the same arguments than [24, Lemma 3.1.1]. Let us show that F is continuous, and globally Lipschitz with respect to (ẑ 1 , ẑ1t , ŵ11 , ŵ12 ), so that we can conclude with [26, Lemma 2.1 and Theorem 2.3]. Define F 1 taking values in X z 1 , F 2 taking values in X z 2 , F 3 taking values in X w 11 and F 4 taking values in X w 12 so that F = (F i ) i∈{1,2,3,4} From the proof of [24, Lemma 3.1.1], F 1 and F 2 are continuous and globally Lipschitz with respect to the last variables. From the boundedness of S, F 3 and F 4 are also continuous and globally Lipschitz with respect to the last variables. This concludes the proof of Proposition 3.2.
Let us define the estimation error (z̃_1, z̃_2, w̃_11, w̃_12) := (ẑ_1 - z_1, ẑ_2 - z_2, ŵ_11 - w_11, ŵ_12 - w_12). It is governed by the following dynamical system:
τ 1 (r) ∂ z1 ∂t (t, r) = -αz 1 (t, r) + Ω w11 (t, r, r )S 11 (z 1 (t -d 11 (r, r ), r ))dr + Ω ŵ12 (t, r, r )S 12 (ẑ 2 (t -d 12 (r, r ), r ))dr - Ω w 12 (r, r )S 12 (z 2 (t -d 12 (r, r ), r ))dr = -αz 1 (t, r) + Ω w11 (t, r, r )S 11 (z 1 (t -d 11 (r, r ), r ))dr + Ω w12 (t, r, r )S 12 (ẑ 2 (t -d 12 (r, r ), r ))dr + Ω w 12 (r, r )(S 12 (ẑ 2 (t -d 12 (r, r ), r )) -S 12 (z 2 (t -d 12 (r, r ), r )))dr τ 2 (r) ∂ z2 ∂t (t, r) = -z 2 (t, r) + Ω w 22 (t, r, r )(S 22 (ẑ 2 (t -d 22 (r, r ), r )) -S 22 (z 2 (t -d 22 (r, r ), r )))dr τ 1 (r) ∂ w11 ∂t (t, r, r ) = -z 1 (t, r)S 11 (z 1 (t -d 11 (r, r ), r )) τ 2 (r) ∂ w12 ∂t (t, r, r ) = -z 1 (t, r)S 12 (ẑ 2 (t -d 12 (r, r ), r )) (5)
3.2 Observer convergence
In what follows, we wish to exhibit sufficient conditions for the convergence of the observer towards the state, meaning the convergence of the estimation error (z 1 , z2 , w11 , w12 ) towards 0. To do so, we introduce a notion of persistence of excitation over infinite-dimensional spaces.
Definition 3.3 (Persistence of excitation). Let X be a Hilbert space and Y be a Banach space. A continuous signal g : R + → X is persistently exciting (PE) with respect to a bounded linear operator P ∈ L(X , Y) if there exist positive constants T and κ such that
∫_t^{t+T} |⟨g(τ), x⟩_X|² dτ ≥ κ ||P x||²_Y,   ∀x ∈ X, ∀t ≥ 0.   (6)
Remark 3.4. If X = Y is finite-dimensional and P is a self-adjoint positive-definite operator, then Definition (3.3) coincides with the usual notion of persistence of excitation since all norms on X are equivalent. However, if X = Y is infinite-dimensional, then there does not exist any PE signal with respect to the identity operator on X . (Actually, it is a characterization of the infinite dimensionality of X ). Indeed, if P = Id X , then (6) at t = 0 together with the spectral theorem for compact operators implies that T 0 g(τ )g(τ ) * dτ is not a compact operator, which is in contradiction with the fact that the sequence of finite range operators N j=0 g( jT N )g( jT N ) * converges to it as N goes to infinity. This is the reason for which we introduce this new PE condition which is feasible even if X is infinite-dimensional. Indeed, P induces a semi-norm on X that is weaker than or equivalent to • X . Remark 3.5. When X is infinite-dimensional, note that there exist signals that are PE with respect to an operator P inducing a norm on X (weaker than • X ), and not only a semi-norm. For example, consider X = Y = l2 (N, R) the Hilbert space of square summable real sequences. The signal g : R + → X defined by g(τ ) = ( sin(kτ ) k 2 ) k∈N is PE with respect to P : X → X defined by
P (x k ) k∈N = x k k 2 k∈N
with constants T = 2π and κ = π since 2π 0 sin 2 (kτ )dτ = π for all k ∈ N. Remark 3.6. If X = L 2 (Ω, R n ) for some positive integer n and if P is a HS integral operator with kernel ρ, then (6) is equivalent to
∫_t^{t+T} |⟨g(τ), x⟩_X|² dτ ≥ κ ∫_Ω |∫_Ω ρ(r, r′) x(r′) dr′|² dr,   ∀x ∈ X, ∀t ≥ 0.   (7)
As explained in Remark 3.4, the role of ρ is to weaken the norm with respect to which g has to be PE. If one changes the Lebesgue measure for the counting measure over a finite set Ω as suggested in Remark 2.3, a possible choice of ρ is the Dirac mass: ρ(r, r ) = 1 if r = r , 0 otherwise. In that case, X is finite-dimensional, and we recover the usual PE notion.
Now, let us state the main theorem of this section that solves Problem 2.1.
Theorem 3.7 (Observer convergence). Suppose that Assumptions 2.4 and 3.1 are satisfied.
Define α* := ℓ²_12 ||w_12||²_{(X_{w_12}, |·|)} / ( 2 (1 - ℓ²_22 ||w_22||²_{(X_{w_22}, |·|)}) ). Then, for all α > α*, for all u_1, u_2 ∈ C^0(R_+ × Ω, R^{n_i}), any solution of (2)-(5) is such that lim_{t→+∞} ||z̃_1(t)||_{X_{z_1}} = lim_{t→+∞} ||z̃_2(t)||_{X_{z_2}} = 0, and ||w̃_11(t)||_{(X_{w_11}, ||·||_F)} and ||w̃_12(t)||_{(X_{w_12}, ||·||_F)} remain bounded for all t ≥ 0. Moreover, for any solution of (2), the corresponding error system (5) is uniformly Lyapunov stable at the origin, that is, for all ε > 0, there exists δ > 0 such that, if
||(z̃_1(t_0), z̃_{2 t_0}, w̃_11(t_0), w̃_12(t_0))||_{X_{z_1} × C^0([-d,0], X_{z_2}) × X_{w_11} × X_{w_12}} ≤ δ
for some t_0 ≥ 0, then ||(z̃_1(t), z̃_2(t), w̃_11(t), w̃_12(t))||_{X_{z_1} × X_{z_2} × X_{w_11} × X_{w_12}} ≤ ε for all t ≥ t_0.
Furthermore, if S 11 and S 12 are differentiable, u 1 and u 2 are bounded 2 and if
ρ_1 ∈ L²(Ω², R^{n_1×n_1}) and ρ_2 ∈ L²(Ω², R^{n_2×n_2}) are self-adjoint positive-definite kernels such that the signal g : t → ((r, r′) → (S_11(z_1(t - d_11(r, r′), r′)), S_12(z_2(t - d_12(r, r′), r′)))) is PE with respect to P ∈ L(L²(Ω², R^{n_1+n_2}), L²(Ω, R^{n_1+n_2})) defined by (P(x_1, x_2))(r) := ( ∫_{Ω²} ρ_1(r, r′) x_1(r″, r′) dr′ dr″ , ∫_{Ω²} ρ_2(r, r′) x_2(r″, r′) dr′ dr″ ) for all (x_1, x_2) ∈ L²(Ω², R^{n_1+n_2}) and all r ∈ Ω, then
lim_{t→+∞} ||w̃_11(t) • ρ_1||_{(X_{w_11}, ||·||_F)} = lim_{t→+∞} ||w̃_12(t) • ρ_2||_{(X_{w_12}, ||·||_F)} = 0.   (8)
The proof of Theorem 3.7 is postponed to Section 3.3.
Remark 3.8. In the case where all the state is measured, i.e., n 2 = 0, note that α * = 0. Hence, under the PE assumption on g, the convergence of ŵ11 • ρ 1 towards w 11 is guaranteed for any positive observer gain α. This means that the observer does not rely on any high-gain approach. This fact will be of importance in Section 4, to show that the controller answering Problem 2.2 is not high-gain when the full state is measured, contrary to the approach developed in [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF].
Remark 3.9. The obtained estimations of the kernels w 11 and w 12 in (X w 11 , • F ) is blurred by the kernels ρ 1 and ρ 2 . The stronger is the semi-norm induced by ρ j (which is a norm if and only if ρ j is positive-definite), the stronger is the PE assumption, and the finer is the estimation of w 1j . In particular, if the counting measure replaces the Lebesgue measure over a finite set Ω and ρ j is a Dirac mass as suggested in Remark 3.6, then w1j • ρ j = w1j , hence the convergence of ŵ1j to w 1j obtained in Theorem 3.7 is in the topology of L 2 (Ω 2 , (R n i ×n j , • F )), i.e., coefficientwise.
Remark 3.10. The main requirement of Theorem 3.7 lies in the persistence of excitation requirement, which is a common hypothesis to ensure convergence of adaptive observers (see, for instance, [START_REF] Besançon | Remarks on nonlinear adaptive observer design[END_REF][START_REF] Farza | Adaptive observers for nonlinearly parameterized class of nonlinear systems[END_REF][START_REF] Sastry | Adaptive control: stability, convergence, and robustness[END_REF] in the finite-dimensional context and [START_REF] Curtain | Adaptive observers for slowly time varying infinite dimensional systems[END_REF][START_REF] Demetriou | Adaptive observers for a class of infinite dimensional systems[END_REF] in the infinite-dimensional case). Roughly speaking, it states that the parameters to be estimated are sufficiently "excited" by the system dynamics. However, this assumption is difficult to check in practice since it depends on the trajectories of the system itself. In Section 5, we choose in numerical simulations a persistently exciting input (u 1 , u 2 ) in order to generate persistence of excitation in the signal (S 11 (z 1 -d 11 ), S 12 (z 2 -d 12 )). This strategy seems to be numerically efficient, but the theoretical analysis of the link between the persistence of excitation of (u 1 , u 2 ) and that of (S 11 (z 1 -d 11 ), S 12 (z 2 -d 12 )
) remains an open question, not only in the present work but also for general classes of adaptive observers. This issue is further investigated in Section 4.2, where we look for a feedback law allowing simultaneous kernel estimation and practical stabilization. Another approach could be to design an observer not relying on PE, inspired by [START_REF] Pyrkin | An adaptive observer for uncertain linear time-varying systems with unknown additive perturbations[END_REF][START_REF] Wang | On robust parameter estimation in finite-time without persistence of excitation[END_REF] for example. These methods, however, do not readily extend to the infinite-dimensional delayed context that is considered in the present paper. They could be investigated in future works.
Remark 3.11. According to Definition 3.3, the PE assumption on g in Theorem 3.7 can be rewritten as follows: there exist positive constants T and κ such that, for all (x_1, x_2) ∈ L²(Ω², R^{n_1+n_2}) and all t ≥ 0,
∫_0^T | Σ_{j=1}^{2} ∫_{Ω²} g_j(t + τ, r, r′)^T x_j(r, r′) dr dr′ |² dτ ≥ κ Σ_{j=1}^{2} ∫_Ω | ∫_{Ω²} ρ_j(r, r′) x_j(r″, r′) dr′ dr″ |² dr.   (9)
This characterization will be used in the proof of Theorem 3.7.
Remark 3.12. The choice of the operator P is a crucial part of Theorem 3.7. First, its null space is given by Ker
P = {(x 1 , x 2 ) ∈ L 2 (Ω 2 , R n 1 +n 2 ) | Ω x j (r, •)dr ∈ Ker P j , ∀j ∈ {1, 2}} = {(x 1 , x 2 ) ∈ L 2 (Ω 2 , R n 1 +n 2 ) | Ω x j (r, •)dr L 2 (Ω,R n j ) = 0, ∀j ∈ {1, 2}} (
where P j denotes the HS integral operator of kernel ρ j ), since ρ 1 and ρ 2 are positive-definite. Secondly, remark that P can be written as a block-diagonal operator, with two blocks in L(L 2 (Ω, R n 1 )) and L(L 2 (Ω 2 , R n 2 )), respectively. Roughly speaking, this means that the PE signal g must excite "separately" on its two components so that we are able to distinguish them and to reconstruct separately w 11 and w 12 . Finally, note that if d 1j does not depend on r (i.e., d 1j (•, r ) is constant for all r ∈ Ω), then neither does g j (we write g j (t, r ) := g j (t, r, r ) by abuse of notations), and the kernel of P simply means that we do not require to excite the system along r. In other words, in that case, (9) can be rewritten as
T 0 2 j=1 Ω g j (t + τ, r ) T X j (r )dr 2 dτ κ 2 j=1 Ω Ω ρ j (r, r )X j (r )dr 2 dr
where X j defined by X j (r ) = Ω x j (r, r )dr spans L 2 (Ω, R n j ) as x j spans L 2 (Ω 2 , R n j ), which means that g is PE with respect to the HS integral operator having kernel diag(ρ 1 , ρ 2 ), which is a self-adjoint positive-definite endomorphism of L 2 (Ω, R n 1 +n 2 ).
Remark 3.13. One of the drawbacks of the observer (4) is that Theorem 3.7 does not guarantee input-to-state stability (ISS; see, e.g., [START_REF] Mironchenko | Input-to-state stability[END_REF][START_REF] Sontag | Input to State Stability: Basic Concepts and Results[END_REF]) of the error system with respect to perturbations of the measured output z 1 or model errors. This is an important issue in the context of neurosciences since model parameters are often uncertain. In particular, the assumption that S ij and d ij are known is based on models [] that may vary with time and with individuals.
From a mathematical point of view, it is due to the fact that the Lyapunov function V (see [START_REF] Brivadis | Existence of an equilibrium for delayed neural fields under output proportional feedback[END_REF]) used to investigate the system's stability cannot easily be shaped into a control Lyapunov function. Numerical experiments are performed in Section 5 to investigate robustness to measurement noise. From a theoretical viewpoint, in order to obtain additional robustness properties, new observers should be investigated in order to obtain global exponential contraction of the error system, for example, inspired by [START_REF] Burghi | Online estimation of biophysical neural networks[END_REF]. In any case, the Lyapunov analysis performed in Section 3.3.1 is still a bottleneck for proving the convergence of observers of this kind.
3.3 Proof of Theorem 3.7 (observer convergence)
3.3.1 Step 1: Proof that lim_{t→+∞} ||z̃_1(t)||_{X_{z_1}} = lim_{t→+∞} ||z̃_2(t)||_{X_{z_2}} = 0
In order to obtain the first part of the result, we seek a Lyapunov functional V for the estimation error dynamics. Inspired by the analysis performed in [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF], let us consider the following candidate Lyapunov function:
V (z 1 , z2t , w11 , w12 ) := V z 1 (z 1 ) + V z 2 (z 2 ) + V w 1 ( w11 ) + V w 2 ( w12 ) + W 1 (z 2t ) + W 2 (z 2t ) (10)
where, for all (z 1 , z2t , w11 , w12 )
∈ X z 1 × C 0 ([-d, 0], X z 2 ) × X w 11 × X w 12 , V z i (z i ) := 1 2 Ω zi (r) τ i (r)z i (r)dr, (11)
V w j ( w1j ) := 1 2 Ω 2 Tr( w1j (r, r ) τ j (r) w1j (r, r ))dr dr,
W i (z 2t ) := Ω 2 γ i (r) 0 -d i2 (r,r ) |z 2t (s, r )| 2 dsdr dr, (12)
and γ i ∈ L 2 (Ω, R), for i ∈ {1, 2}, are to be chosen later.
Computing the time derivative of these functions along solutions of (5), we get:
d dt V z 1 (z 1 (t)) = -α Ω |z 1 (t,
i (z 2t ) = Ω 2 γ i (r)(|z 2 (t, r )| 2 -|z 2 (t -d i2 (r, r ), r )| 2 )dr dr.
Combining the previous computations, we obtain that
d dt V (z 1 (t), z2t , w11 (t), w12 (t)) = -α z1 (t) 2 Xz 1 -1 - 2 i=1 Ω γ i (r)dr z2 (t) 2 Xz 2 + 2 i=1 N i (z i (t), ẑ2t , z 2t ) - 2 i=1 Ω 2 γ i (r)|z 2 (t -d i2 (r, r ), r )| 2 dr dr
where
N i (z i (t), ẑ2t , z 2t ) := Ω 2 zi (t, r) w i2 (r, r )(S i2 (ẑ 2 (t-d i2 (r, r ), r ))-S i2 (z 2 (t-d i2 (r, r ), r ))) dr dr.
Let us provide a bound of N i by applying Cauchy-Schwartz and Young's inequalities.
|N i (z i (t), ẑ2t , z 2t )| = Ω 2 zi (t, r) w i2 (r, r ) S i2 (ẑ 2 (t -d i2 (r, r ), r )) -S i2 (z 2 (t -d i2 (r, r ), r )) dr dr Ω zi (t, r) Ω w i2 (r, r ) S i2 (ẑ 2 (t -d i2 (r, r ), r ) -S i2 (z 2 (t -d i2 (r, r ), r )) dr dr Ω zi (t, r) 2 dr Ω Ω w i2 (r, r ) S i2 (ẑ 2 (t -d i2 (r, r ), r ) -S i2 (z 2 (t -d i2 (r, r ), r )) dr 2 dr 1 2ε i zi (t) 2 Xz i + ε i 2 Ω Ω w i2 (r, r ) S i2 (ẑ 2 (t -d i2 (r, r ), r ) -S i2 (z 2 (t -d i2 (r, r ), r )) dr 2 dr 1 2ε i zi (t) 2 Xz i + ε i 2 Ω Ω w i2 (r, r ) 2 dr Ω S i2 (ẑ 2 (t -d i2 (r, r ), r ) -S i2 (z 2 (t -d i2 (r, r ), r )) 2 dr dr 1 2ε i zi (t) 2 Xz i + ε i 2 i2 2 Ω Ω w i2 (r, r ) 2 dr Ω |z 2 (t -d i2 (r, r ), r )| 2 dr dr for all ε i > 0. Now, set γ i (r) := ε i 2 i2
2 Ω w i2 (r, r ) 2 dr for all i ∈ {1, 2}. We finally get that
d dt V (z 1 (t), z2t , w11 (t), w12 (t)) -α - 1 2ε 1 z1 (t) 2 Xz 1 -1 - 2 i=1 Ω γ i (r)dr - 1 2ε 2 z2 (t) 2 Xz 2 . ( 14
)
In order to make V a Lyapunov function, it remains to choose the constants ε i . By definition of γ i , we have .
2 i=1 Ω γ i (r)dr = 1 2 2 i=1 ε i 2 i2 w i2 2 (Xw i2
Then, there exist two positive constants c 1 and c 2 , given by c 1 = α -α * and c 2 = 1 4 (1 -
2 22 w 22 2 (Xw 22 , • )
), such that for any solution of (5),
d dt V (z 1 (t), z2t , w11 (t), w12 (t)) -c 1 z1 (t) 2 Xz 1 -c 2 z2 (t) 2 Xz 2 . ( 15
)
Since τ i ∈ C 0 (Ω, D n i ++ ), V z i and V w j define norms that are equivalent to the norms of X z i and (X w 1j , • F ), respectively. Hence, the error system is uniformly Lyapunov stable, since V is non-increasing. Moreover, zi , and w1j are bounded for i, j ∈ {1, 2}. Moreover, we have for all .
t 0, d dt V z 1 (z 1 (t)) -α z1 (t) 2 Xz 1 + Ω |z 1 (t,
Hence d dt V z 1 (z 1 ) and d dt V z 2 (z 2
) are bounded since zi , and w1j are bounded for all i, j ∈ {1, 2}. Thus, according to Barbalat's lemma applied to
V z i (z i ), V z i (z i (t)) → 0 as t → +∞, hence zi Xz i → 0. Remark 3.14. At this stage, note that d dt (V z 2 (z 2 ) + W 2 (z 2t )) -c 2 z2 2 Xz 2
whenever γ 2 and ε 2 are chosen as above. Hence, Assumption 3.1 implies that z2 is converging towards 0. This justifies that Assumption 3.1 is indeed a dissipativity assumption of the z 2 sub-system, i.e. a detectability assumption, since ẑ2 has the same dynamics than z 2 .
3.3.2 Step 2: Proof that lim_{t→+∞} ||w̃_11(t) • ρ_1||_{(X_{w_11}, ||·||_F)} = lim_{t→+∞} ||w̃_12(t) • ρ_2||_{(X_{w_12}, ||·||_F)} = 0

Now, assume that t → (S_11(z_1(t - d_11)), S_12(z_2(t - d_12))) is PE with respect to P. The error dynamics (5) can be rewritten as
τ 1 (r) ∂ z1 ∂t (t, r) = f 0 (t, r) + Ω w11 (t, r, r )g 1 (t, r, r )dr + Ω w12 (t, r, r )g 2 (t, r, r )dr τ 1 (r) ∂ w11 ∂t (t, r, r ) = f 1 (t, r, r ) τ 2 (r) ∂ w12 ∂t (t, r, r ) = f 2 (t, r, r )
where g j (t, r, r
) := S 1j (z j (t -d 1j (r, r ), r )), ∀j ∈ {1, 2}, f 0 (t) 2 Xz 1 := Ω -αz 1 (t, r) + Ω ŵ12 (t, r, r )(S 12 (ẑ 2 (t -d 12 (r, r ), r )) -S 12 (z 2 (t -d 12 (r, r ), r )))dr 2 dr 2α 2 z1 (t) 2 Xz 1 + 2 2 12 Ω Ω ŵ12 (t, r, r ) 2 dr Ω z2 (t -d 12 (r, r ), r ) 2 dr dr 2α 2 z1 (t) 2 Xz 1 + 2 2 12 ŵ12 (t) 2 (Xw 12 , • ) sup s∈[-d,0] z2t (s) 2 Xz 2 , f 1 (t) (Xw 11 , • ) := Ω 2 |z 1 (t, r)S 11 (z 1 (t -d 11 (r, r ), r ))| 2 S2 11 µ(Ω) z1 (t) 2 Xz 1 , and
f 2 (t) (Xw 12 , • ) := Ω 2 |z 1 (t, r)S 12 (ẑ 2 (t -d 12 (r, r ), r ))| 2 S2 12 µ(Ω) z1 (t) 2 Xz 1 .
Recall that, according to Step 1, zi (t) Xz i → 0 for all i ∈ {1, 2}, and ŵ12 (t
) (Xw 12 , • ) w12 (t) (Xw 12 , • ) + w 12 (t) (Xw 12 , • ) remains bounded as t → +∞. Hence f 0 (t) (Xz 1 , • ) → 0 and f j (t) (Xw 1j , • ) → 0 as t → +∞ for all j ∈ {1, 2}. Moreover, t → (g 1 (t), g 2 (t)) is PE with respect to ρ by assumption.
Applying twice Duhamel's formula (once on z1 , then once on w1j ) , we get that for all t, τ 0,
τ 1 (r)z 1 (t + τ, r) = τ 1 (r)z 1 (t, r) + τ 0 f 0 (t + s, r)ds + 2 j=1 Ω τ 0 w1j (t + s, r, r )g j (t + s, r, r )dsdr = τ 1 (r)z 1 (t, r) + τ 0 f 0 (t + s, r)ds + 2 j=1 Ω w1j (t, r, r ) τ 0 g j (t + s, r, r )dsdr + 2 j=1 Ω τ 0 s 0 τ j (r) -1 f j (t + σ, r, r )g j (t + s, r, r )dσdsdr . For any t, T 0, define O(t, T ) := T 0 Ω z1 (t+τ, r) τ 1 (r) 2 z1 (t+τ, r)drdτ . Since z1 (t) Xz 1 → 0, O(t, T ) → 0 as t → +∞ for all T 0. Moreover, O(t, T ) = T 0 Ω τ 1 (r)z 1 (t, r) + τ 0 f 0 (t + s, r)ds + 2 j=1 Ω τ 0 s 0 τ j (r) -1 f j (t + σ, r, r )g j (t + s, r, r )dσdsdr 2 drdτ + T 0 Ω 2 j=1 Ω w1j (t, r, r ) τ 0 g j (t + s, r, r )dsdr 2 drdτ + 2 T 0 Ω 2 j=1 Ω w1j (t, r, r ) τ 0 g j (t + s, r, r )dsdr τ 1 (r)z 1 (t, r) + τ 0 f 0 (t + s, r)ds + 2 j=1 Ω τ 0 s 0 τ j (r) -1 f j (t + σ, r, r )g j (t + s, r, r )dσdsdr drdτ Since z1 (t) Xz 1 → 0, f 0 (t) (Xz 1 , • ) → 0 and f j (t) (Xw 1j , • ) → 0 as t → +∞, |g j (t, r, r )|
S1j for all t 0 and all r, r ∈ Ω, and t → w1j (t) Xw 1j is bounded, we get that for any T > 0,
lim t→+∞ T 0 Ω 2 j=1 Ω w1j (t, r, r ) τ 0 g j (t + s, r, r )dsdr 2 drdτ = 0. ( 16
)
For all t, τ 0 and all r ∈ Ω, define h(t, τ, r) := 2 j=1 Ω w1j (t, r, r ) τ 0 g j (t+s, r, r )dsdr . By [START_REF] Curtain | Adaptive observers for slowly time varying infinite dimensional systems[END_REF], h(t, •) L 2 ((0,T ),Xz 1 ) → 0 as t → +∞. Note that ∂h ∂τ (t, τ, r) = 2 j=1 Ω w1j (t, r, r )g j (t + τ, r, r )dr hence h(t, •) ∈ W 1,2 ((0, T ), X z 1 ) and is bounded since g j 's are bounded. Moreover, since u j is supposed to be bounded for all j ∈ {1, 2}, dz i dt is also bounded according to Proposition 2.5. Hence, since S 1j 's are differentiable with bounded derivative, ∂ 2 h ∂τ 2 (t, τ, r) = j=1 Ω w1j (t, r, r ) ∂g j ∂τ (t + τ, r, r )dr is well-defined and bounded. Therefore, for all t 0, h(t, •) ∈ W 2,2 ((0, T ), X z 1 ) and h(t, •) W 2,2 ((0,T ),Xz 1 ) c 3 for some positive constant c 3 independent of t. According to the interpolation inequality (see, e.g., [37, Section II.2.1]),
h(t, •) 2 W 1,2 ((0,T ),Xz 1 ) c 3 h(t, •) L 2 ((0,T ),Xz 1 ) . Thus ∂h ∂τ (t, τ ) L 2 ((0,T ),Xz 1 ) → 0, meaning that lim t→+∞ T 0 Ω 2 j=1 Ω w1j (t, r, r )g j (t + τ, r, r )dr 2 drdτ = 0. ( 17
)
Let (e k ) k∈N be a Hilbert basis of X z 1 . We have, for all t 0, Now, since g is PE with respect to ρ over L 2 (Ω 2 , R n 1 +n 2 ), there exist positive constants T and κ such that (9) holds. for all x j ∈ L 2 (Ω 2 , R n j ), j ∈ {1, 2}, and all t 0. Choosing x j (r, r ) = w1j (t, r, r ) e k (r), we get
T 0 Ω 2 j=1 Ω w1j (t, r, r )g j (t + τ, r, r )dr 2 drdτ k∈N κ 2 j=1 Ω Ω 2
ρ j (r, r ) w1j (t, r , r ) e k (r )dr dr 2 dr.
On the other hand,
2 j=1 w1j (t) • ρ j 2 (Xw 1j , • F ) = 2 j=1 w(t) • ρ j 2 L 2 (Ω 2 ,(R n 1 ×n j , • F )) = 2 j=1 ρ j • w(t) * 2 L 2 (Ω 2 ,(R n j ×n 1 , • F )) = 2 j=1 k∈N Ω Ω (ρ j • w(t) * )(r, r )e k (r )dr 2 dr = 2 j=1 k∈N Ω Ω 2 ρ j (r, r ) w1j (t, r , r ) dr e k (r )dr 2 dr = 2 j=1 k∈N Ω Ω 2 ρ j (r, r ) w1j (t, r , r ) e k (r )dr dr 2 dr 1 κ T 0 Ω 2 j=1 Ω w1j (t, r, r )g j (t + τ, r, r )dr 2 drdτ.
Thus, by [START_REF] Demetriou | Adaptive observers for a class of infinite dimensional systems[END_REF],
lim t→+∞ w11 (t) • ρ 1 (Xw 11 , • F ) = lim t→+∞ w12 (t) • ρ 2 (Xw 12 , • F ) = 0,
which concludes the proof of Theorem 3.7.
4 Adaptive control
In order to tackle Problem 2.2, we now introduce an adaptive controller based on the previous observer design.
4.1 Exact stabilization
Let z_1,ref ∈ X_{z_1} be a constant reference signal at which we aim to stabilize z_1. The z_2 dynamics of (2) can be written as
τ_2(r) ∂z_2/∂t(t, r) = -z_2(t, r) + ∫_Ω w_22(r, r′) S_22(z_2(t - d_22(r, r′), r′)) dr′ + v_2,ref(r) + v_2(t, r).   (18)
When z_1 is constantly equal to z_1,ref and u_2 = 0, we have v_2 = 0. Hence, according to [22, Proposition 3.6], system (18) admits, in that case, a stationary solution, i.e., there exists z_2,ref ∈ X_{z_2} such that z_2,ref(r) = ∫_Ω w_22(r, r′) S_22(z_2,ref(r′)) dr′ + v_2,ref(r), ∀r ∈ Ω.
Moreover, we have the following stability result. Lemma 4.1 ([14, Proposition 1]). Under Assumption 3.1, system (18) is input-to-state stable ( ISS) at z 2,ref with respect to v 2 , that is, there exist functions β of class KL and ν of class K ∞ such that, for all z 2,0 ∈ C 0 ([-d, 0], X z 2 ) and all v 2 ∈ C 0 (R + , X z 2 ), the corresponding solution z 2 of (18) satisfies, for all t 0,
||z_2(t) - z_2,ref||_{X_{z_2}} ≤ β(||z_2(0) - z_2,ref||_{X_{z_2}}, t) + ν( sup_{τ∈[0,t]} ||v_2(τ)||_{X_{z_2}} ).   (19)
Remark 4.2. The result given in [14, Proposition 1] holds for systems whose dynamics is of the form (after a change of variable)
τ 2 (r) ∂z 2 ∂t (t, r) = -z 2 (t, r) + S 22 Ω w 22 (r, r )z 2 (t -d 22 (r, r ), r )dr + v 2,ref (r) + v 2 (t, r),
which is slightly different from [START_REF] Detorakis | Incremental stability of spatiotemporal delayed dynamics and application to neural fields[END_REF]. However, one can easily check that this modification does not impact at all the proof given in [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF]. In particular, the control Lyapunov function given in [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF] remains a Lyapunov function for (18) (where v 2 is the input).
In particular, Assumption 3.1 implies that z 2,ref is unique. In the case where z 1,ref = 0 and S 2j (0) = 0 for all j ∈ {1, 2}, we also have z 2,ref = 0.
We aim to define a dynamic output feedback law that stabilizes (z 1 , z 2 ) at the reference (z 1,ref , z 2,ref ). We propose the following feedback strategy: for all t 0 and all r ∈ Ω, set
u_1(t, r) = -α(z_1(t, r) - z_1,ref(r)) + z_1(t, r) - ∫_{r′∈Ω} ŵ_11(t, r, r′) S_11(z_1(t - d_11(r, r′), r′)) dr′ - ∫_{r′∈Ω} ŵ_12(t, r, r′) S_12(ẑ_2(t - d_12(r, r′), r′)) dr′,    u_2(t, r) = 0,   (20)
where α > 0 is a tunable controller gain. The motivation of this controller is that, when w_1j = ŵ_1j and ẑ_2 = z_2, the resulting dynamics of z_1 is τ_1 ∂z_1/∂t = -α(z_1 - z_1,ref). Hence, since z_2 has contracting dynamics by Assumption 3.1, this would lead (z_1, z_2) towards (z_1,ref, z_2,ref). Note that for the controller (20), the resulting dynamics of ẑ_1 in (4) is τ_1 ∂ẑ_1/∂t = -α(ẑ_1 - z_1,ref). Hence, since the choice of the initial condition of the observer is free, one particular instance of the closed-loop observer is given by ẑ_1(t, r) = z_1,ref(r) for all t ≥ 0 and all r ∈ Ω. For this reason, in the stabilization strategy, we can reduce the dimension of the observer by setting ẑ_1 = z_1,ref, i.e., z̃_1 = z_1,ref - z_1. Finally, the closed-loop system that we investigate can be rewritten as system (2) coupled with the controller (20) and the observer system given by
τ 2 (r) ∂ ẑ2 ∂t (t, r) = -ẑ 2 (t, r) + Ω w 21 (r, r )S 21 (z 1 (t -d 21 (r, r ), r ))dr + Ω w 22 (r, r )S 22 (ẑ 2 (t -d 22 (r, r ), r ))dr τ 1 (r) ∂ ŵ11 ∂t (t, r, r ) = (z 1 (t, r) -z 1,ref (r))S 11 (z 1 (t -d 11 (r, r ), r )) τ 1 (r) ∂ ŵ12 ∂t (t, r, r ) = (z 1 (t, r) -z 1,ref (r))S 12 (ẑ 2 (t -d 12 (r, r ), r )) . (21)
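For intuition, in the same node-based discretization as in Remark 2.3 and in the fully measured case (n_2 = 0, so the ẑ_2 terms drop out), the feedback law (20) can be sketched as follows; the code and names are ours and purely illustrative.

```python
import numpy as np

def adaptive_feedback(z, z_ref, w_hat, delayed_S, alpha):
    """Discretized feedback law (20) on N nodes, fully measured case (n2 = 0).

    delayed_S[k, l] stands for S(z(t - d(r_k, r_l), r_l)) and w_hat is the
    current kernel estimate maintained by the adaptive observer; if w_hat
    equals the true kernel, the closed-loop activity obeys
    tau * dz/dt = -alpha * (z - z_ref).
    """
    coupling = np.sum(w_hat * delayed_S, axis=1)
    return -alpha * (z - z_ref) + z - coupling
```

In a simulation loop, this control would be evaluated at each time step with the kernel estimate and delayed activity produced by an observer step of the kind sketched in Section 3.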
First, let us ensure the well-posedness of the resulting closed-loop system.
Proposition 4.3 (Closed-loop well-posedness). Suppose that Assumption 2.4 is satisfied.
For any initial condition (z 1,0 , z 2,0 , ẑ2,0 , ŵ11,0 , ŵ12,0 )
∈ C 0 ([-d, 0], X z 1 ) × C 0 ([-d, 0], X z 2 ) 2 × X w 11 × X w 12
, the closed-loop system (2)-( 20)-( 21) admits a unique corresponding solution
(z 1 , z 2 , ẑ2 , ŵ11 , ŵ12 ) ∈ C 1 ([0, +∞), X z 1 × X 2 z 2 × X w 11 × X w 12 ) ∩ C 0 ([-d, +∞), X z 1 × X 2 z 2 × X w 11 × X w 12 ).
Proof. We adapt the proof of Proposition 3.2. The main difference is that, since the system is in closed loop, the state variables (z 1 , z 2 ) cannot be taken as external inputs in the observer dynamics. Therefore, we consider the map
F : R + × C 0 ([-d, 0], X z 1 ) × C 0 ([-d, 0], X z 2 ) 2 × X w 11 × X w 12 → X z 1 × X 2 z 2 × X w 11 × X w 12 such that (4) can be rewritten as d dt (z 1 , z 2 , ẑ2 , ŵ11 , ŵ12 )(t) = F (t, z 1t , z 2t , ẑ2t , ŵ11 , ŵ12
). Since τ i are continuous and positive, S ij are bounded and w ij are square-integrable over Ω 2 and d i are continuous, the map F is well-defined by the same arguments than [START_REF] Faye | Some theoretical and numerical results for delayed neural field equations[END_REF]Lemma 3.1.1]. Let us show that F is continuous, and globally Lipschitz with respect to (ẑ 1 , ẑ1t , ŵ11 , ŵ12 ), so that we can conclude with [26, Lemma 2.1 and Theorem 2.3]. Define F 1 taking values in X z 1 , F 2 and F 3 taking values in X z 2 , F 4 taking values in X w 11 and F 5 taking values in X w 12 so that F = (F i ) i∈{1,2,3,4,5} From the proof of [24, Lemma 3.1.1], F 1 , F 2 , and F 3 are continuous and globally Lipschitz with respect to the last variables. From the boundedness and global Lipschitz continuity of S, F 4 , and F 5 are also continuous and globally Lipschitz with respect to the last variables. This concludes the proof of Proposition 4.3. Now, the main theorem of this section can be stated. (Xw 22 , • ) ) . Then, for all α > α * , any solution of (2)-( 20)-( 21) is such that ), that is, for all ε > 0, there exists δ > 0 such that, if
Theorem 4.4 (Exact stabilization). Suppose that Assumptions 2.4 and 3.1 are satisfied. Define α^* := ℓ_12^2 ‖w_12‖^2_{(X_{w_12}, ‖·‖)} / (1 − ℓ_22 ‖w_22‖_{(X_{w_22}, ‖·‖)}). Then, for all α > α^*, any solution of (2)-(20)-(21) is such that
lim_{t→+∞} ‖z_1(t) − z_1,ref‖_{X_{z_1}} = lim_{t→+∞} ‖z_2(t) − z_2,ref‖_{X_{z_2}} = lim_{t→+∞} ‖ẑ_2(t) − z_2(t)‖_{X_{z_2}} = 0
and ‖ŵ_11(t)‖_{(X_{w_11}, ‖·‖_F)} and ‖ŵ_12(t)‖_{(X_{w_12}, ‖·‖_F)} remain bounded for all t ≥ 0. Moreover, the system (2)-(20)-(21) is uniformly Lyapunov stable at (z_1,ref, z_2,ref, z_2,ref, w_11, w_12), that is, for all ε > 0, there exists δ > 0 such that, if
‖(z_1(t_0) − z_1,ref, z_{2t_0} − z_2,ref, ẑ_{2t_0} − z_2,ref, ŵ_11(t_0) − w_11, ŵ_12(t_0) − w_12)‖_{X_{z_1} × C^0([−d,0], X^2_{z_2}) × X_{w_11} × X_{w_12}} ≤ δ
for some t_0 ≥ 0, then
‖(z_1(t) − z_1,ref, z_2(t) − z_2,ref, ẑ_2(t) − z_2,ref, ŵ_11(t) − w_11, ŵ_12(t) − w_12)‖_{X_{z_1} × X^2_{z_2} × X_{w_11} × X_{w_12}} ≤ ε
for all t ≥ t_0.
Proof. As in Section 3, define the observer errors z̃_2 = ẑ_2 − z_2 and w̃_1j = ŵ_1j − w_1j. Define also z̃_1 := z_1,ref − z_1. Then (z̃_1, z̃_2, w̃_11, w̃_12) satisfies the error dynamics studied in Section 3. Thus, Theorem 3.7 can be applied. It implies that (z̃_1, z̃_2) → 0 in X_{z_1} × X_{z_2}, that (w̃_11, w̃_12) remains bounded, and that this autonomous system is uniformly Lyapunov stable at the origin. Moreover, z_2 satisfies the dynamics (18) with v_2(t, r) = ∫_Ω w_21(r, r′)(S_21(z_1,ref(r′) − z̃_1(t, r′)) − S_21(z_1,ref(r′))) dr′. In other words, (z̃_1, z̃_2, w̃_11, w̃_12) coupled with z_2 is a cascade system where z_2 is driven by the other variables. According to Lemma 4.1, (18) is ISS with respect to v_2. Thus, z_2 → z_2,ref in X_{z_2} and is uniformly Lyapunov stable at z_2,ref.
Remark 4.5. Let us consider the case where the full state is measured, i.e., n_2 = 0. Then α^* = 0, which means that the controller does not rely on any high-gain approach (see Remark 3.8). This is the main difference between the present work and the approach developed in [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF], where the gain has to be chosen sufficiently large even in the fully measured context.
Remark 4.6. Convergence of the kernel estimation ŵ_1j towards w_1j is not investigated in Theorem 4.4. Actually, one can apply the last part of Theorem 3.7 to show that, under a PE assumption on the signal g = (S_11(z_1(t − d_11)), S_12(z_2(t − d_12))), the kernel estimation is guaranteed in the sense of (8). However, since the feedback law is made to stabilize the system, z_1 and z_2 converge towards (z_1,ref, z_2,ref). Hence, for g to be PE with respect to P, one must have that, for some positive constants T and κ,
T |(S_11(z_1,ref), S_12(z_2,ref))^* x|^2 ≥ κ ‖P x‖^2_Y,    ∀x ∈ X,
i.e. that (S_11(z_1,ref), S_12(z_2,ref))^* induces a semi-norm on X stronger than the one induced by P. This operator being a linear form, it implies that the kernel of P must contain a hyperplane, which makes the convergence of ŵ_1j towards w_1j too blurred by P to claim that any interesting information on w_1j can be reconstructed by this method, except when dim X = 1.
In particular, if (S 11 (z 1,ref ), S 12 (z 2,ref )) = (0, 0), then P = 0, hence no convergence of ŵ1j towards w 1j is guaranteed. The objective of simultaneously stabilizing the neuronal activity while estimating the kernels is investigated in the next section.
Simultaneous kernel estimation and practical stabilization
The goal of this section is to propose an observer and a controller that allow to simultaneously answer Problems 2.1 and 2.2. In order to do so, we make the following set of restrictions (in this subsection only) for reasons that will be pointed in Remark 4.11:
(i) Exact stabilization is now replaced by practical stabilization, that is, for any arbitrary small neighborhood of the reference, a controller that stabilizes the system within this neighborhood has to be designed.
(ii) All the neuronal activity is measured, i.e., n 2 = 0. Hence, we set z = z 1 , w 11 = w, u = u 1 , S = S 11 , τ = τ 1 and d = d 11 to ease the notations.
(iii) The state space X_z is finite-dimensional. To do so, as suggested in Remark 2.3, we replace the Lebesgue measure with the counting measure and take Ω as a finite collection of N points in R^q, so that X_z ≃ R^{N×n}.
(iv) The delay d is constant. We write d := d(r, r ) for all r, r ∈ Ω by abuse of notations.
(v) The decay rate τ (r) is constant and all its components are equal, that is, τ (r) = τ Id R n for all r ∈ Ω for some positive real constant τ .
(vi) The reference signal z_ref is constant and S(z_ref) = 0.
(vii) S is locally linear near z_ref, that is, there exists ε > 0 such that S(z) = (dS/dz)(z_ref)(z − z_ref) for all z ∈ R^n satisfying |z − z_ref| ≤ ε. Moreover, (dS/dz)(z_ref) is invertible.
Under these restrictions, we now consider the following problem.
Problem 4.7. Consider the system (3). From the knowledge of S, τ and d and the online measurement of u(t) and z(t), for any arbitrarily small neighborhood of 0 ∈ X_z, find u in the form of a dynamic output feedback law that stabilizes z in this neighborhood, and estimate w online.
As explained in Remark 4.6, there is no hope of estimating w when employing the feedback law (20), since stabilizing z at z_ref prevents the persistency condition from being satisfied. For this reason, we suggest adding to the control law a small excitatory signal, whose role is to improve persistency without perturbing the z dynamics too much. Of course, this strategy prevents obtaining exact stabilization, which is why this requirement has been relaxed in (i). More precisely, to answer Problem 4.7, we suggest to consider the feedback law
u(t, r) = v(t, r) − α (z(t, r) − z_ref) + z(t, r) − ∫_Ω ŵ(t, r, r′) S(z(t − d, r′)) dr′,    (22)
coupled with the observer
τ ∂ẑ/∂t (t, r) = −α ẑ(t, r) + v(t, r)
τ ∂ŵ/∂t (t, r, r′) = −(ẑ(t, r) − z(t, r)) S(z(t − d, r′))    (23)
where α > 0 is a tunable controller gain and v is a signal to be chosen both small enough, in order to ensure practical convergence of z towards 0, and persistent, in order to ensure convergence of ŵ towards w. In practice, v can also be seen as an external signal arising from interaction with other neurons whose dynamics are not modeled. Since v is supposed to be continuous, the proof of well-posedness is identical to the proof of Proposition 4.3 and we get the following.
Proposition 4.8 (Well-posedness of (1)-(22)-(23)). Suppose that Assumption 2.4 is satisfied. For any v ∈ C^0(R_+, X_z) and any initial condition (z_0, ẑ_0, ŵ_0) ∈ C^0([−d, 0], X_z)^2 × X_w, the closed-loop system (1)-(22)-(23) admits a unique corresponding solution (z, ẑ, ŵ) ∈ C^1([0, +∞), X^2_z × X_w) ∩ C^0([−d, +∞), X^2_z × X_w).
Theorem 4.9 (Simultaneous estimation and practical stabilization). Suppose that Assumption 2.4 is satisfied. Then, for all α > 0 and all input v ∈ C^0(R_+, X_z), any solution of (2)-(22)-(23) is such that lim sup_{t→+∞} ‖z(t)‖_{X_z} ≤ lim sup_{t→+∞} ‖v(t)‖_{X_z}, lim_{t→+∞} ‖ẑ(t) − z(t)‖_{X_z} = 0, and ŵ(t) remains bounded for all t ≥ 0. Moreover, there exists ε > 0 such that if v ∈ C^1(R_+, X) is bounded by ε, has bounded derivative, and is PE with respect to Id_X, then we also have
lim_{t→+∞} ‖ŵ(t) − w‖_{(X_w, ‖·‖_F)} = 0.
The proof of Theorem 4.9 relies on the following lemma that states standard properties of the PE condition. Lemma 4.10 (Properties of PE). Let X be a Hilbert space and Y be a Banach space. Let g ∈ C 0 (R + , X ) and P ∈ L(X , Y).
(a) If (6) is satisfied only for t ≥ t_0 for some t_0 ≥ 0, then g is PE with respect to P.
(b) If g is PE with respect to P, then t ↦ g(t − d) is also PE with respect to P for any delay d ≥ 0.
(c) If g is PE with respect to P and W ∈ L(X ), then W g is PE with respect to P W * .
Moreover, if X = Y is finite-dimensional and P = Id X , we also have:
(d) If g is PE with respect to Id_X with constants T and κ in (6), and g is bounded by some positive constant M, then M ≥ √(κ/T). Conversely, for all positive constants T, κ and M such that M ≥ √(2κ dim X / T), there exists g ∈ C^1(R_+, X) that is bounded by M and PE with respect to Id_X with constants (T, κ).
(e) If g is bounded and PE with respect to Id X and δ ∈ C 0 (R + , X ) is such that δ(t) → 0 as t → +∞, then g + δ is also PE with respect to Id X .
(f) If µ is a positive constant and g ∈ C^1(R_+, X) is bounded, with bounded derivative, and PE with respect to Id_X, then any solution z ∈ C^1(R_+, X) of dz/dt = −µz(t) + g(t) is also PE with respect to Id_X.
Lemma 4.10 is proved in Appendix A.
In order to test the observer (4), the inputs u_i are chosen as spatiotemporal periodic signals with irrational frequency ratio, i.e., u_i(t, r) = µ sin(λ_i t r) with µ = 10^3 and λ_1/λ_2 irrational. This choice is made to ensure persistency of excitation of the input (u_1, u_2), which in practice seems to be sufficient to induce persistency of excitation of t ↦ (S_11(z_1(t − d)), S_12(z_2(t − d))). Note that for u_1 = u_2 = 0, the persistency of excitation assumption seems not to be guaranteed, and the observer does not converge (the plot is not reported). For testing the controllers, the inputs are respectively chosen as (20) for exact stabilization and as (22) with v(t, r) = µ sin(λ_1 t r), µ ∈ R, for simultaneous practical stabilization and estimation. In the latter case, we must fix n_2 = 0 as imposed by Section 4.2 (ii).
The parameters of the system (2), the observer (4), and the controller (20) are set as in Table 1, so that Assumptions 2.4 and 3.1 are fulfilled. The convergence of the observer error (5) towards zero is verified in Figure 2. In particular, the estimation of w_11 by the observer is shown at several time steps in Figure 1. The convergence of the state towards zero ensured by the controller (20) is shown in Figure 3. As explained in Remark 4.6, no convergence of the kernel estimation can be hoped for in Figure 3, since stabilizing the state prevents PE.
Figure 4 illustrates the compromise made by the feedback law (22) between stabilization and estimation: by choosing v(t, r) = µ sin(λ_1 t r) with µ = 100, the asymptotic regime of the state remains in a neighborhood of zero (practical stabilization), which allows the convergence of the kernel estimation ŵ towards w. When increasing µ, the estimation rate increases (one obtains a plot similar to Figure 2 for µ = 10^3), but the asymptotic regime of the state moves away from zero. On the contrary, when decreasing µ towards zero, one obtains an asymptotic regime of z closer to zero (one obtains a plot similar to Figure 3 for µ = 0.1), but the convergence of ŵ towards w is slower.
Conclusion
In this paper, a new adaptive observer has been proposed to estimate online the synaptic strength between neurons from partial measurement of the neuronal activity. We proved the convergence of the observer under a persistency of excitation condition by designing a Lyapunov functional taking into account the infinite-dimensional nature of the state due to the spatial distribution of the neuronal activity and to the time-delay. We have shown that this observer can be used to design dynamic feedback laws that stabilize the system at a target point, even without persistency of excitation. From the theoretical viewpoint, the main open question remains to extend our result on simultaneous estimation and stabilization. It currently relies on important limitations on the system, which cannot be lifted without a deeper analysis of the PE condition proposed in the paper. In particular, sufficient conditions ensuring that choosing a PE input signal guarantees PE of the state of the neural fields should be sought.
(e) Denote by M a bound of g. Denote by T and κ the PE constants of g with respect to Id_X. Let ε = κ/(4MT). Let t_0 > 0 be such that ‖δ(t)‖_X ≤ ε for all t ≥ t_0. Then, for all t ≥ t_0 and all x ∈ X,
∫_t^{t+T} |⟨g(τ) + δ(τ), x⟩_X|^2 dτ ≥ ∫_t^{t+T} |⟨g(τ), x⟩_X|^2 dτ − 2 ∫_t^{t+T} |⟨g(τ), x⟩_X ⟨δ(τ), x⟩_X| dτ ≥ (κ − 2MTε) ‖x‖^2_X = (κ/2) ‖x‖^2_X.
Hence, g + δ is PE with respect to Id_X by (a).
(f) This proof follows the one given in [START_REF] Loria | A nested matrosov theorem and persistency of excitation for uniform convergence in stable nonautonomous systems[END_REF], Property 4. We give it here for the sake of completeness. Denote by M a bound of g and ġ. Let z be a solution of dz/dt = −µz(t) + g(t). By Duhamel's formula,
‖z(t)‖_X ≤ e^{−µt} ‖z(0)‖_X + ∫_0^t e^{−µ(t−τ)} ‖g(τ)‖_X dτ ≤ e^{−µt} ‖z(0)‖_X + M/µ.
Hence there exists t_0 ≥ 0 such that ‖z(t)‖_X ≤ 2M/µ for all t ≥ t_0. For any x ∈ X, define φ_x : R_+ → R by φ_x(t) = −⟨z(t), x⟩_X ⟨g(t), x⟩_X. Then φ_x is continuously differentiable and φ̇_x = ⟨z, x⟩_X ⟨µg − ġ, x⟩_X − |⟨g, x⟩_X|^2. Hence, for all t ≥ t_0 and all T > 0,
φ_x(t + T) − φ_x(t) = ∫_t^{t+T} ⟨z(τ), x⟩_X ⟨µg(τ) − ġ(τ), x⟩_X dτ − ∫_t^{t+T} |⟨g(τ), x⟩_X|^2 dτ.    (24)
Since g is PE with respect to Id_X, there exist T, κ > 0 such that, for any k ∈ N,
∫_t^{t+kT} |⟨g(τ), x⟩_X|^2 dτ ≥ kκ ‖x‖^2_X,    ∀x ∈ X, ∀t ≥ 0.    (25)
Moreover, by the Cauchy–Schwarz inequality, |φ_x(t)| ≤ (2M^2/µ) ‖x‖^2_X, hence φ_x(t + kT) − φ_x(t) ≥ −(4M^2/µ) ‖x‖^2_X. Choose k large enough that kκ > 4M^2/µ. Combining (24) and (25) yields, for all t ≥ t_0 and all x ∈ X,
∫_t^{t+kT} ⟨z(τ), x⟩_X ⟨µg(τ) − ġ(τ), x⟩_X dτ ≥ (kκ − 4M^2/µ) ‖x‖^2_X.    (26)
Finally, we get by the Cauchy–Schwarz inequality that
∫_t^{t+kT} ⟨z(τ), x⟩_X ⟨µg(τ) − ġ(τ), x⟩_X dτ ≤ (µ + 1) M ‖x‖_X ∫_t^{t+kT} |⟨z(τ), x⟩_X| dτ    (27)
and
∫_t^{t+kT} |⟨z(τ), x⟩_X| dτ ≤ √(kT) (∫_t^{t+kT} |⟨z(τ), x⟩_X|^2 dτ)^{1/2}.    (28)
Combining (26)–(28), we obtain that, for all x ∈ X and all t ≥ t_0,
∫_t^{t+kT} |⟨z(τ), x⟩_X|^2 dτ ≥ (kκ − 4M^2/µ)^2 / (kT (µ + 1)^2 M^2) ‖x‖^2_X,
which implies that z is persistently exciting with respect to Id_X by (a).
Assumption 3.1 (Strong dissipativity). It holds that ℓ_22 ‖w_22‖_{(X_{w_22}, ‖·‖)} < 1. Assumption 3.1 yields that, for any pair (z^a_2, z^b_2) of solutions of (1) (replacing τ, w, S and d by τ_2, w_22, S_22 and d_22), the distance ‖z^a_2 − z^b_2‖_{X_{z_2}} decays, i.e., the z_2 dynamics is contracting.
where v_{2,ref}(r) := ∫_Ω w_21(r, r′) S_21(z_1,ref(r′)) dr′ and v_2(t, r) := u_2(t, r) + ∫_Ω w_21(r, r′)(S_21(z_1(t, r′)) − S_21(z_1,ref(r′))) dr′.
Figure 1: Evolution of the kernel estimation ŵ_11(t, r, r′) when running the observer (4).
Figure 2: Evolution of the estimation errors ‖w̃_1i‖_{X_{w_1i}} and ‖z̃_i‖_{X_{z_i}} for i ∈ {1, 2} of the observer (4).
Figure 3: Evolution of the norm of the state ‖z_i‖_{X_{z_i}} and of the estimation errors ‖w̃_1i‖_{X_{w_1i}} and ‖z̃_i‖_{X_{z_i}} for i ∈ {1, 2} for the control law (20).
Figure 4: Evolution of the norm of the state ‖z‖_{X_z} and of the estimation errors ‖w̃‖_{X_w} and ‖z̃‖_{X_z} for the control law (22) with v(t, r) = 100 sin(λ_1 t r).
Assuming adequate contraction properties of the z_2 dynamics, z_2 would tend towards some reference z_2,ref that depends on z_1,ref. In particular, pathological oscillations would vanish in steady state, without any large-gain assumption on the control policy. This motivates us to investigate Problem 2.1 and use the observer in dynamic output feedback to address Problem 2.2.
We denote by x_t the history of x over the latest time interval of length d, i.e., x_t(s) = x(t + s) for all t ≥ 0 and all s ∈ [−d, 0]. For any n ∈ N, denote by D^n_{++} ⊂ R^{n×n} the set of positive diagonal matrices. For the sake of readability, if d_i2 does not depend on r, i.e. if d_i2(·, r′) is constant for all r′ ∈ Ω, we simply write d_ij(r′) := d_ij(r, r′). For any globally Lipschitz map S_ij : R^{n_j} → R^{n_i}, denote by ℓ_ij its Lipschitz constant and set S̄_ij := sup |S_ij|.
Table 1: System and observer parameters for the numerical simulation of Figures 2–4 (in particular, S_ij = tanh, τ_i = 1, λ_1 = 100).
A Proof of Lemma 4.10
(a) If, for all t ≥ t_0, ∫_t^{t+T} |⟨g(τ), x⟩_X|^2 dτ ≥ κ ‖Px‖^2_Y for all x ∈ X, then for all t ≥ 0, ∫_{t+t_0}^{t+t_0+T} |⟨g(τ), x⟩_X|^2 dτ ≥ κ ‖Px‖^2_Y, ∀x ∈ X, which shows that g is PE with respect to P.
(b) If, for all t ≥ 0, ∫_t^{t+T} |⟨g(τ), x⟩_X|^2 dτ ≥ κ ‖Px‖^2_Y for all x ∈ X, then for all t ≥ d, ∫_t^{t+T} |⟨g(τ − d), x⟩_X|^2 dτ = ∫_{t−d}^{t+T−d} |⟨g(τ), x⟩_X|^2 dτ ≥ κ ‖Px‖^2_Y, ∀x ∈ X, which shows that t ↦ g(t − d) is PE with respect to P by (a).
(c) If, for all t ≥ 0, ∫_t^{t+T} |⟨g(τ), x⟩_X|^2 dτ ≥ κ ‖Px‖^2_Y for all x ∈ X, then for all t ≥ 0, ∫_t^{t+T} |⟨Wg(τ), x⟩_X|^2 dτ = ∫_t^{t+T} |⟨g(τ), W^*x⟩_X|^2 dτ ≥ κ ‖PW^*x‖^2_Y, ∀x ∈ X, which shows that Wg is PE with respect to PW^* by (a).
(d) If g is bounded by M and, for all t ≥ 0, ∫_t^{t+T} |⟨g(τ), x⟩_X|^2 dτ ≥ κ ‖x‖^2_X for all x ∈ X, then the Cauchy–Schwarz inequality yields ∫_t^{t+T} |⟨g(τ), x⟩_X|^2 dτ ≤ T M^2 ‖x‖^2_X, hence M ≥ √(κ/T). Conversely, set g(τ) = √(2κ/T) Σ_{ℓ=1}^{dim X} sin(2πℓτ/T) e_ℓ, where (e_ℓ) is a basis of X. Then g is bounded by √(2κ dim X / T), has a bounded derivative, and, for all t ≥ 0 and all x = Σ_ℓ x_ℓ e_ℓ ∈ X, ∫_t^{t+T} |⟨g(τ), x⟩_X|^2 dτ = κ ‖x‖^2_X. Hence, g is PE with respect to Id_X with constants T and κ.
Actually, there is a typo in the condition stated in [14,Proposition 3]. The mistake is corrected in the proof, making it equivalent to our Assumption 3.1. See also[START_REF] Detorakis | Incremental stability of spatiotemporal delayed dynamics and application to neural fields[END_REF] Theorem 3] for a corrected version of the hypothesis.
This assumption is missing in[START_REF] Brivadis | Online estimation of hilbert-schmidt operators and application to kernel reconstruction of neural fields[END_REF] while it is implicitly used in the proof.
Proof of Theorem 4.9. Clearly, from (23) and Grönwall's inequality, lim sup_{t→+∞} ‖ẑ(t)‖_{X_z} ≤ lim sup_{t→+∞} ‖v(t)‖_{X_z}. Moreover, setting z̃_1 = ẑ − z and w̃_11 = ŵ − w, we see that, due to the choice of u given by (22), (z̃_1, w̃_11) satisfies the error dynamics studied in Section 3. Hence, according to Theorem 3.7, lim sup_{t→+∞} ‖ẑ(t) − z(t)‖_{X_z} = 0 and ŵ(t) remains bounded. This yields the first part of the result. To show the second part of the result, it is sufficient to show that t ↦ (r ↦ S(z(t − d, r))) is PE with respect to Id_{X_z} in order to apply the second part of Theorem 3.7.
To end the proof of Theorem 4.9, we use Lemma 4.10 as follows. Using Assumption (vii), let ε > 0 be such that S(z) = (dS/dz)(z_ref)(z − z_ref) whenever |z − z_ref| ≤ ε. Suppose that v ∈ C^1(R_+, X) is bounded by ε, has bounded derivative, and is PE with respect to Id_X; such a signal v exists by (d). By (c), τ^{−1} v is also PE with respect to Id_X. Since ẑ − z_ref satisfies (23), (f) with µ = τ^{−1} α shows that ẑ − z_ref is also PE with respect to Id_X. Moreover, by Grönwall's inequality, there exists t_1 ≥ 0 such that ẑ(t) remains in the ε-neighborhood of z_ref for all t ≥ t_1, so that, by (vii) and (c), t ↦ S(ẑ(t)) is PE with respect to Id_X. Since ẑ(t) − z(t) → 0 as t → +∞ and S has bounded derivative, S(ẑ(t)) − S(z(t)) → 0. Hence, according to (e), t ↦ S(z(t)) is also PE with respect to Id_X. Hence, by (b), t ↦ S(z(t − d)) is PE with respect to Id_{X_z}, which concludes the proof of Theorem 4.9.
Remark 4.11. All assumptions (i)-(vii) have been used in the second part of the proof of Theorem 4.9 at crucial points where, without them, one cannot conclude. In particular, these assumptions allow to use the properties stated in Lemma 4.10. Without them, stronger versions of these properties would be required to show that S(ẑ(· − d)) is PE with respect to Id_{X_z}. For example, without (iii), the property (e) would be required in an infinite-dimensional context, which is impossible since counter-examples can easily be found. Without (v), (f) would be required for filters of the form dz/dt = −Σz(t) + g(t) where Σ is a positive definite matrix, which is also known to be false (see [START_REF] Narendra | Persistent excitation in adaptive systems[END_REF], Example 7). Similarly, (ii) is required to have that ẑ is a filter of v in the form of (23), which is necessary to use (f). Without (iv), (b) would be required for non-constant delays, which is not possible due to counter-examples such as R_+ ∋ t ↦ (sin(t − d_1), cos(t − d_2)) ∈ R^2 with d_1 = 0 and d_2 = π/2. Properties (vi) and (vii) are used at the end of the proof of Theorem 4.9. Without them, passing from ẑ − z_ref being PE to S(ẑ) being PE remains an open problem.
Numerical simulations
We provide numerical simulations of the observer and controllers proposed in Theorems 3.7, 4.4, and 4.9. The observer simulations are in line with those presented in [START_REF] Brivadis | Online estimation of hilbert-schmidt operators and application to kernel reconstruction of neural fields[END_REF], while the two controllers (exact stabilization and simultaneous practical stabilization and estimation) are new. We consider the case of a two-dimensional neural field (namely, n_1 = n_2 = 1) over the unit circle Ω = S^1 with constant delay d. The kernels are given by Gaussian functions depending on the distance between r and r′, as is frequently assumed in practice (see [START_REF] Chaillet | Robust stabilization of delayed neural fields with partial measurement and actuation[END_REF]): w_ij(r, r′) = ω_ij g(r, r′)/‖g‖_{L^2(Ω^2;R)}, with g(r, r′) = exp(−σ|r − r′|^2) for constant parameters σ and ω_ij given in Table 1. Simulation code can be found in repository [START_REF] Brivadis | KernelEstimation Project[END_REF]. The system is spatially discretized over Ω with a constant space step ∆r = 1/20, and the resulting delay differential equation is solved with an explicit Runge-Kutta (2,3) method. Initial conditions are taken as z_1(0, r) = z_2(0, r) = ẑ_1(0, r) = 1, ẑ_2(0, r) = ŵ_11(0, r, r′) = ŵ_12(0, r, r′) = 0 for all r, r′ ∈ Ω.
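For illustration, a minimal sketch of such a setup could start as below. This is not the code of the cited repository: the kernel amplitudes, the delay value and the simple buffer-based time stepping are our own assumptions, whereas the reported simulations use an explicit Runge–Kutta (2,3) scheme.

```python
import numpy as np

# Spatial grid on Omega = S^1, parametrised by arc length in [0, 1) with step 1/20
N = 20
r = np.linspace(0.0, 1.0, N, endpoint=False)
dr = 1.0 / N

# Gaussian kernels w_ij(r, r') = omega_ij * g(r, r') / ||g||_{L^2(Omega^2)}
sigma = 10.0                                   # illustrative value, not from Table 1
omega = {"11": 1.0, "12": -1.0, "21": 1.0, "22": -0.5}   # illustrative amplitudes
dist = np.abs(r[:, None] - r[None, :])
dist = np.minimum(dist, 1.0 - dist)            # periodic distance on the circle
g = np.exp(-sigma * dist ** 2)
g_norm = np.sqrt(dr * dr * np.sum(g ** 2))     # discrete L^2(Omega^2) norm
w = {key: val * g / g_norm for key, val in omega.items()}

# Delay buffer storing the last d/dt states, used to evaluate z(t - d)
dt, d = 1e-3, 0.1                              # assumed time step and constant delay
n_delay = int(round(d / dt))
z1_buffer = np.ones((n_delay + 1, N))          # z_1(0, r) = 1 as in the text

def delayed(buffer):
    """Oldest entry of the buffer, i.e. the state at time t - d."""
    return buffer[0]

def push(buffer, new_state):
    """Drop the oldest entry and append the current state."""
    return np.vstack([buffer[1:], new_state[None, :]])
```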
04102894 | en | [
"spi.gciv.it",
"info.info-mo"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04102894v2/file/heart2023.pdf | Sebastian Hörl
Jakob Puchinger
Modeling the ecological and economic footprint of last-mile parcel deliveries using open data: A case study for Lyon
Keywords: parcels, urban, last-mile, logistics, optimization, VRP
Modeling the ecological and economic footprint of last-mile parcel deliveries using open data: A case study for Lyon
Sebastian Hörl, Jakob Puchinger
INTRODUCTION
The amount of parcels delivered in the urban space is expected to increase strongly in the coming years. Today, cities already reflect upon strategies to regulate urban logistics, understanding the complex interplay between its economic, ecological and social impacts becomes ever more important. While ideas and research efforts on sustainable urban logistics policies are gaining traction [START_REF] Mucowska | Trends of Environmentally Sustainable Solutions of Urban Last-Mile Deliveries on the E-Commerce Market-A Literature Review[END_REF][START_REF] Neghabadi | Systematic literature review on city logistics: Overview, classification and analysis[END_REF][START_REF] Patella | The Adoption of Green Vehicles in Last Mile Logistics: A Systematic Review[END_REF]. Recent advances in transport simulation aim to model urban logistics on a systemic level [START_REF] De Bok | Application of an empirical multi-agent model for urban goods transport to analyze impacts of zero emission zones in The Netherlands[END_REF][START_REF] Sakai | SimMobility Freight: An agent-based urban freight simulator for evaluating logistics solutions[END_REF][START_REF] Toilier | Freight transport modelling in urban areas: The French case of the FRETURB model[END_REF], but reliable data remains scarce. The present short paper is an attempt to model one specific sector of urban logistics -home parcel deliveries -solely based on open data for a use case of Lyon.
METHODOLOGY
Our approach follows various steps from generating the parcel demand for a territory and defining the supply in terms of operators and distributions center. We then define cost structures to obtain the used vehicles and driven distances to deliver all parcels based on a cost-minimization and vehicle-routing approach. The individual steps are described below.
1
Demand data
Our demand generation process is based on a synthetic population for the Rhône-Alpes region around Lyon. Such a synthetic population, which is a digital representation of households and persons in a region, along with their socio-demographic attributes can be generated based on open data in France. We make use of a replicable data processing pipeline that can be applied anywhere in France [START_REF] Hörl | Synthetic population and travel demand for Paris and Île-de-France based on open and publicly available data[END_REF]. For the present study, we only consider households in our study area, which comprises the city of Lyon, the Grand Lyon metropolitan region and bordering municipalities with relevant logistics infrastructure (Figure 1). For this perimeter, the population synthesis pipeline generates 1.6 million persons in about 795,000 households.
We fuse the synthetic population data with surveys on the purchasing behavior of the local population. Specifically, [START_REF] Gardrat | Méthodologie d'enquête: Le découplage de l'achat et de la récupération des marchandises par les ménages[END_REF] provides statistics on the annual number of orders made per household based on various socio-demographic characteristics. In [START_REF] Hörl | From synthetic population to parcel demand: Modeling pipeline and case study for last-mile deliveries in Lyon[END_REF] we have proposed a method to make use of this information to generate the probable daily parcel demand for the synthetic households using Iterative Proportional Fitting. Applying the model to our study area yields 16,252 geolocated parcels to be delivered during an average day (Figure 1).
Operator model
The goal of our methodology is to let operators minimize their cost by choosing relevant vehicle types for delivering their assigned parcels and optimizing the vehicle routes. Unfortunately, information on the cost structures of parcel operators is scarce. However, we can assume that the main cost components for offering their service are salaries, vehicle maintenance and investment costs, and per-distance costs. A substantial part of our research was to collect information from gray literature on these cost components. While a detailed analysis of our sources and ag-gregation procedures exceed the scope of this paper, they will be detailed in an extended publication.
Figure 2: Monthly rent versus transport volume
For personnel costs, we assume an effective net salary of 1,300 EUR per month per driver, leading to an approximate gross salary of 1,700 EUR, and to monthly costs of about 3,400 EUR per full-time employee and month for the operator. Divided by 25 operating days, we arrive at daily salary costs of 136 EUR.
In terms of per-vehicle costs, we have examined the long-term rental offers of the major French vehicle manufacturers, along with the characteristics of the advertised vehicles. This analysis has yielded distinct vehicle classes (of about 3.3m 3 , 5m 3 , 10m 3 ) for which costs increase linearly with the transport volume. This is true for thermal and electric vehicles while the slope of the latter is higher (Figure 2). We document the daily unit costs per prototypical vehicle that are used in our model in Table 1 ranging from 210 EUR for a small thermal vehicle up to 800 EUR for large electric truck. In the final optimization we divide these cost by 25 active days per month.
The per-distance costs depend strongly on the consumption of the individual vehicle types.
Based on our analysis of manufacturer offers, we have attached representative values for thermal vehicles (in L/100km) and electric vehicles (in Wh/km) to our prototypical vehicle types in Table 1. The per-distance costs are calculated by multiplying the driven distance per vehicle type with the respective consumption factor and the price for fuel (in EUR/L) and electricity (in ct/kWh), respectively.
Additionally, we have noted down representative CO2 equivalent emissions rates (in gCO2eq/km) for each vehicle type. The rates for thermal vehicles are based on our manufacturer analysis, while the rates for electric vehicles are based on the French average of 90gCO2eq/kWh for electricity production1 .
Finally, Table 1 shows the values for a prototypical cargo-bike (Be) based on current rental offers in France and typical consumption rates.
Operator assignment
To link the demand and the operators, we need to assign an operator to each generated parcel in the synthetic population. For that, we perform weighted random draws from the set of operators based on their market shares. Those market shares have been elaborated from gray literature and a dedicated model. These steps cannot be covered in detail but will be explained in an extended publication. Table 2 shows the resulting parcels assigned to each operator.
Distribution centers
To know from where parcels are dispatched, we make use of the SIRENE database2 , which lists all enterprises and their facilities in France, along with their address and the number of employees. From this database we have extracted all facilities belonging to any of the parcel distributors listed in Table 2. The resulting distribution centers are shown in Figure 1 and Table 2 indicates the number of centers per operator.
For each parcel, we select the distribution center of the respective operator that is the closest in terms of road distance. These road distances are calculated using a network extracted from OpenStreetMap data3 and the osmnx library [START_REF] Boeing | OSMnx: New methods for acquiring, constructing, analyzing, and visualizing complex street networks[END_REF]. The process results in nine distribution centers out of 104 with more than 300 assigned parcels (Figure 1).
Heterogeneous Vehicle Routing Problem
Based on the inputs above, we define a Heterogeneous Vehicle Routing Problem (HVRP) per distribution center with the following characteristics:
• The goal is to minimize the overall cost which is the sum of the unit costs and a daily salary per chosen vehicle and the total distance-based cost of the vehicle trajectories. • The operator can vary the number of vehicles of each of the seven types (Table 1) and the individual vehicle routes.
• Vehicles start at the distribution center and must return before the end of the day. Their total active time cannot exceed a daily duration of 10h. It consists of the travel times between parcels and depot; service times of 120s at delivery; and service times of 60s per pick-up. • Vehicles cannot carry more parcels than their capacity allows. We assume 10 parcels per m 3 in Table 1. We allow multiple tours per vehicle during one day.
For each distribution center, we obtain a distance matrix and a travel time matrix from our extracted OpenStreetMap network using the osmnx library [START_REF] Boeing | OSMnx: New methods for acquiring, constructing, analyzing, and visualizing complex street networks[END_REF]. Since osmnx calculates travel times based on the speed limits of the road segments, we further inflate these values using averaged factors from the TomTom Congestion Index [START_REF] Cohn | The TomTom Congestion Index[END_REF] factors of Lyon to arrive at approximately congested travel times.
Finally, we solve the resulting Heterogeneous Vehicle Routing Problems using the open-source VRP solver VROOM4 .
RESULTS AND DISCUSSION
We define three individual scenarios:
• Baseline: The scenario is based on our synthetic population for 2022. The prices are chosen such that they reflect the long-term cost structures of the operators that have given rise to the distribution schemes that we see today.
• Today: The scenario considers recent increases in energy prices beginning of 2023 with fuel prices of about 1.90 EUR/L5 and 14 ct/kWh6 . It hence shows how the distribution system could develop in case prices stay at this level in the long term. • Future: The scenario is a future scenario in which we consider an updated synthetic population that considers population growth7 and a general increase of parcels per capita by a factor of two. We assume that prices have increased by +80% for fuel and +60% for electricity.
The results are shown in Table 3. For the Baseline case, we obtain 139 thermal vehicles being used, but only eight electric vehicles and 24 cargo bikes. This reflects today's reality where electric vehicles do not have a large share in the transport system. In terms of vehicle sizes, large vehicles (133) dominate, followed by smaller ones (24) of which the majority are cargobikes. Medium-sized vehicles are rarely used. The total distance driven for last-mile deliveries is almost 10,000 km per day for thermal vehicles, the distance for electric vehicles and cargobikes is ten times smaller. Only 7% of all parcels are delivered by electric vehicles or cargobikes. In terms of consumption, 780 liters of fuel are needed and 236 kWh of electricity. This consumption translates into about 2100 kg of CO2 equivalents emitted for the last-mile deliveries during one day which makes 131g per parcel. To calculate the total consumed energy we assume a conversion rate of 10 kWh/L and arrive at a total of 8000 kWh per day with 497 Wh per parcel.
For the Today scenario with adjusted prices (increase of 30% for fuel and 55% for electricity), we see a slight shift of electric vehicles from eight to 14. Still, this shift represents a doubling of the driven distance of electric vehicles and a doubling in parcels delivered by electric vehicles while their overall percentage remains low with 7% for electric transporters and 12% for cargobikes. Accordingly, electricity use doubles while fuel consumption drops by 10%. These shifts lead to a reduction in emissions by 8% in total to 120g per parcel. Total energy use is also reduced by 5% while no large shifts in the cost structures can be observed. Despite electricity prices having increased stronger than fuel prices, the observed shifts can be explained by the different ratios of capital expenses versus operational expenses between thermal and electric vehicles. The latter have higher vehicle prices with lower per-distance costs. In the Baseline case, the break-even daily distance at which a single electric vehicle becomes cheaper in total is at about 34 km, while the point shifts to 26 km in the Today scenario (see Figure 3). In the Future scenario, both the numbers of thermal and electric vehicles increase because of the higher demand. However, electric vehicles increase strongly from 8 to 101. In terms of vehicle size especially large vehicles double in count. At the same time the distance for thermal vehicles goes down and the distance for electric vehicles increases tenfold. Interestingly, the total distance is not doubled, but increases by only 50%, which shows that there are scale effects with respect to the transported parcel volumes. While parcels delivered by electric vehicles are rare Today they make up 50% of all flows in the Future scenario. In the latter, fuel consumption goes down by 37% while electricity use increases by a factor of 13 and total used energy by a factor of 10. On the contrary, total emissions decrease, but only by 24% despite a reduction of 63% per parcel. This effect is due to the generally increased demand. Total costs increase by 75% but not equally on all cost components (67% on salaries, 144% on vehicles, 109% on distance), which puts a higher influence on operational costs on the overall costs. Per parcel, there is a margin of 26ct per parcel between the Today and Future scenario.
In all scenarios, we see that cargo-bikes are rarely used because of their limited capacity.
CONCLUSION
In this paper we have documented a model on the economic and ecological characteristics of last-mile parcel deliveries in a city. While the model makes use of a multitude of assumptions, its main value lies in the comparison of scenarios. In the future, more detailed and distinct scenarios should be evaluated. In terms of validation, system-level reference data is not likely to emerge in the near future. Hence, we are engaging actively in discussing our operational assumptions with domain experts and practitioners to compile a comprehensive list of limitations and future improvements, which will be detailed in an extended publication on the model.
Figure 1 :
1 Figure 1: Map of the study area, generated parcels, and distribution centers (Background: OpenStreetMap)
Figure 3 :
3 Figure 3: Break-even points for a small electric vehicle
Table 1 : Operator model
1
Vehicle type St Mt Lt Se Me Le Be
Size S M L S M L S
Propulsion T T T E E E E
Capacity 33 50 100 33 50 100 14
Fuel consumption 5 6 8 - - - -
[L/100km]
Electricity consumption - - - 160 200 300 42
[Wh/km]
Unit cost 210 260 370 260 400 800 160
[EUR/month]
Distance cost* 304.5.0 377.00 522.00 14.00 18.00 27.00 3.80
[EUR/100km] 0
Emissions** 130 160 215 14.4 18 27 3.8
[gCO2eq/km]
Be: Cargo-bike; Size: S -Small, M -Medium, L -Large; Propulsion: T -Thermal, E -Electric; *Indicative distance costs based on 1.45 EUR/km and 9ct/kWh; **Electric vehicle emissions based on 90gCO2eq/kWh
Table 2 : Operator statistics
2
Operator Distribution centers Market share [%] Parcels
La Poste (Colissimo) 72 40.08 6,384
Chronopost 6 14.98 2,430
UPS 2 13.55 2,210
DPD 3 9.94 1,632
DHL 8 8.95 1,477
GLS 2 6.93 1,169
Colis privé 2 5.36 917
Fedex 9 0.21 33
Total 104 100 16,252
Table 3 : Optimization results
3
Baseline Today Future
Scenario
Population 2022 2022 2030
Demand factor 1.0 1.0 2.0
Fuel price [EUR/L] 1.45 1.90 3.40
Electricity price [EUR/kWh] 0.09 0.14 0.23
Vehicles by type
Thermic 139 133 164
Electric 8 14 101
Cargo-bike 24 24 21
Vehicles by size
Small (S) 32 33 32
Medium (M) 6 6 8
Large (L) 133 132 246
Distances [km]
Thermic 9,835 8,916 6,185
Electric 1,194 2,294 11,438
Cargo-bike 750 785 861
Total 11,778 11,995 18,484
Parcels
Thermic 15,001 14,493 21,926
Electric 500 1,009 11,236
Cargo-bike 751 750 761
Total 16,252 16,252 33,923
Consumption
Fuel [L] 783 710 494
Electricity [kWh] 236 546 3,261
Environment
Emissions [kgCO2eq] 2,127 1,956 1,622
Per parcel [gCO2eq] 131 120 48
Energy [kWh] 8,071 7,642 8,205
Per parcel [Wh] 497 470 242
Cost [EUR]
Salaries 23,256 23,256 38,896
Vehicles 2,217 2,302 5,416
Distance 1,159 1,426 2,427
Total 26,632 26,985 46,740
Per Parcel 1.64 1.66 1.38
https://www.rte-france.com/eco2mix
https://www.sirene.fr
https://download.geofabrik.de
https://github.com/VROOM-Project/vroom
Diesel, France, 09/01/2023, https://www.tolls.eu/fuel-prices
EUROSTAT, Non-household, S2 2022, https://ec.europa.eu/eurostat/cache/infographs/energy_prices/enprices.html
Based on INSEE prediction scenarios https://www.insee.fr/fr/information/6536990
ACKNOWLEDGEMENTS
This paper presents work developed in the scope of the project LEAD, which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement no. 861598. The content of this paper does not reflect the official opinion of the European Union. Responsibility for the information and views expressed in this paper lies entirely with the authors. |
04107131 | en | [
"info.info-ni"
] | 2024/03/04 16:41:22 | 2022 | https://theses.hal.science/tel-04107131/file/these.pdf | Renée El
Melhem Bât
Pascal Blaise
M Stéphane
ÉLECTROTECHNIQUE E E A Électronique
M Philippe Delachartre
email: [email protected]
M Philippe
email: [email protected]
Mme Sylvie
M Hamamache Kheddouci
M Stéphane Benayoun
email: [email protected]
Stéphanie Cauvin
M Jocelyn Bonjour
M Christian Montes
Pipeline failures in crude oil transportation occur due to ageing infrastructure, third-party interferences, equipment defects and naturally occurring failures. Consequently, hydrocarbons are released into the environment resulting in environmental pollution, ecological degradation, and unprecedented loss of lives and revenue. Hence, multiple leakage detection and monitoring systems (LDMS) are employed to mitigate such failures. More recently, these LDMS include Wireless Sensor Networks (WSN) and Internet of Things (IoT)-based systems. While they are proven more efficient than other LDMS, many challenges exist in their adoption for pipeline monitoring. These include fault tolerance, energy consumption, accuracy in leakage detection and localisation, and high false alarms, to cite a few. Therefore, our work seeks to address some of these challenges in implementing IoT-based systems for crude oil pipelines in a resilient and end-to-end manner. Specifically, we consider the aspect of accurate leakage detection and localisation by introducing a unique node placement strategy based on fluid propagation for sensitive and multi-sized leakage detection. We also propose a new distributed leakage detection technique (HyDiLLEch) in the WSN layer based on a fusion of existing leakage detection techniques such as the negative pressure wave method, gradient-based method, and pressure point analysis. With HyDiLLEch, we efficiently eliminate single points of failure associated with classical centralised systems. Furthermore, we implement fault-tolerant data and service management in the fog layer utilising the Nigerian National Petroleum Corporation (NNPC) pipeline network as a use case. The problem is modelled as a regionalised data-driven game against nature on the NNPC pipelines. Our proposed regionalised solution (R-MDP) using reinforcement learning optimises accuracy and fault tolerance while minimising energy consumption. Overall, our system guarantees resiliency to failures and efficiency in terms of detection and localisation accuracy and energy consumption. I want to first thank Almighty Allah (SWT) for his mercy and grace upon my life.
Résumé
Les défaillances d'oléoducs dans le transport du pétrole brut se produisent en raison du vieillissement de l'infrastructure, des interférences de tiers, des défauts d'équipement et des défaillances naturelles. Par conséquent, des hydrocarbures sont rejetés dans l'environnement, entraînant une pollution de l'environnement, une dégradation écologique et des pertes de vies et de revenus sans précédent. Par conséquent, plusieurs systèmes de détection et de surveillance des fuites (LDMS) sont utilisés pour atténuer ces défaillances. Plus récemment, ces LDMS incluent les réseaux de capteurs sans fil (WSN) et les systèmes basés sur l'Internet des objets (IoT). Bien qu'ils se soient avérés plus efficaces que d'autres LDMS, de nombreux défis existent dans l'adoption de tels systèmes pour la surveillance des pipelines. Ceux-ci incluent la tolérance aux pannes, la consommation d'énergie, la précision de la détection et de la localisation des fuites et le nombre élevé de fausses alarmes, pour n'en citer que quelques-uns.
Par conséquent, notre travail vise à relever certains défis dans la mise en oeuvre de systèmes basés sur l'IdO pour les oléoducs de pétrole brut de bout en bout de manière résiliente. Plus précisément, nous considérons les aspects de détection et localisation précises des fuites en introduisant une stratégie de placement de noeud unique basée sur la propagation des fluides pour une détection de fuite sensible et multi-tailles. Nous proposons également une nouvelle technique de détection de fuite distribuée (HyDiLLEch) dans la couche WSN basée sur une fusion des techniques de détection de fuites existantes telles que la méthode des ondes de pression négative, la méthode basée sur le gradient et l'analyse des points de pression. Avec HyDILLEch, nous éliminons efficacement les points de défaillance uniques associés aux systèmes centralisés classiques. En outre, nous mettons en oeuvre une gestion des données et des services tolérante aux pannes dans la couche de inrastructure Edge en utilisant le réseau de oléoducs de la Nigerian National Petroleum Corporation (NNPC) comme cas d'utilisation. Le problème est modélisé par la théorie des jeux avec une approche régionalisée du réseaux NNPC contre la nature. Notre proposition de solution régionalisée (R-MDP) utilise l'apprentissage par renforcement et optimise la précision et la tolérance aux pannes tout en minimisant la consommation d'énergie. Dans l'ensemble, notre système garantit la résilience aux pannes et l'efficacité en termes de précision de détection et de localisation et de consommation d'énergie.
Dedication
To my late father, Sheikh Ahmed Rufai AbdulKareem, I bear your name with a deep sense of pride and I hope you are proud of me wherever you are. May Allah SWT grant you Jannatul Firdaus, Papa. To my queen mother of inestimable value, around whom my world revolves, Hajia Fatimah Ahmed Rufai, you're the definition of strength, love, perseverance, and patience. Onyam oziorijo, my role model, I am forever grateful for your constant prayers and support.
To my beloved brother, without whom I will not be where I am today, Engr. Timasaniyu Ahmed Rufai (ohinoyi onoru), I owe this to you. Thank you for being a girl's hero, I am deeply grateful for all you have done for me.
To my sisters: (Hajia) Mabrukat, Rabiyatu Al-Adawiya, Hajara, and Bilqis, my league of powerful and extraordinary women, enyene okumi okumi, I thank you for laying a solid foundation for me. On your shoulders, I rose, stood tall, and reached places that would have been otherwise impossible, avo nini. I immensely appreciate you for holding my hands through the journey of life. As I continue towards the future, I know I can always rely on your unceasing support, love, and prayers. This work is hereby dedicated to you ALL. [START_REF]Overview of the internet of things[END_REF] Declaration I declare that the contents of this dissertation are original except for the references made to other research works. It is written solely from the results of the work I conducted myself and has not been submitted for consideration for any other degree or qualification in this, or any other university. It is written based on the following works either published or accepted for publication:
List of Figures
Introduction
The Oil and Gas Industry (OGI) is a significant part of the global economic framework. It plays a crucial role in the energy market as the primary fuel source, generating annual revenue in trillions of dollars globally. It is also the source of petrochemical feedstock and asphalt and serves as the leading energy supply for pharmaceutical products and solvents. Currently, the total annual global consumption of oil is approximately 30 billions barrel with a forecast of approximately 53% increment in consumption by 2035 [START_REF] Amponsah | Ghana's downstream petroleum sector: An assessment of key supply chain challenges and prospects for growth[END_REF]. Still, numerous challenges exist across the production processes of these products, from multiple system failures, inefficient infrastructural monitoring, and under-utilisation of data. In particular, the transportation of oil and gas products could result in leakages and spills (LAS) due to some of these challenges. Crude oil LAS is the release of liquid hydrocarbons into the environment causing environmental pollution, loss of lives from resulting fire incidents and degradation of biodiversity, among other grave consequences. As such, over the years, companies, operators and regulatory bodies employed a range of measures, services and technological solutions to alleviate the impact of such failures. However, records have shown a continuous increase in LAS's resulting consequences, especially in developing countries like Nigeria. This is due to the complexity of mitigating such problems. Thus, this chapter introduces our work on the design and development of a resilient IoT-based Monitoring System for the Nigerian OGI. In the first section, we present the various sectors of the OGI, from the drilling processes to the end user consumption. We also discuss the challenges generally faced in the Industry. In section 1.2, we narrow our discussion to the focus of the thesis, which is the pipeline transportation of crude oil. We enumerate the current solutions adopted to this sector's failures and limitations. In section 1.3, we briefly introduce our contribution to the work. Finally, we outline the organisation of the thesis in section 1.4.
Context
The OGI has a highly complex and capital-intensive set of procedures, from exploration to marketing, as shown in Fig. 1.1. The set of processes can be broadly classified as upstream, midstream, and downstream sectors.
The upstream sector includes exploration or search of crude oil, natural gas, and Figure 1.1: The three sectors of the Oil and Gas Industry others in fields, underground or below the sea. When potential discoveries are made, the production process begins, which involves the drilling and operating of the discovery site. After production, the refined products are transported to the whole sellers, distributors and others in the supply chain. This part of the process is the midstream sector. Finally, the downstream sector concerns the purification, marketing and distribution of usable products, such as kerosene, gas, and diesel, to mention a few. Each sector has its challenges; for instance, the drilling processes in the upstream sector can be improved and accelerated using artificial intelligence (AI) or machine learning techniques [START_REF] Koroteev | Artificial intelligence in oil and gas upstream: Trends, challenges, and scenarios for the future[END_REF][START_REF] Sircar | Application of machine learning and artificial intelligence in oil and gas industry[END_REF]. On the other hand, the supply chain in the downstream sector can be digitalised to gain valuable insights used in more efficient product distribution. This efficiency can be accomplished by the analysis of the big data resulting from the digitalisation of the sector [START_REF] Gezdur | Digitization in the oil and gas industry: Challenges and opportunities for supply chain partners[END_REF][START_REF] Lima | Downstream oil supply chain management: A critical review and future directions[END_REF]. The midstream sector especially, lags in its digitalisation more than the other sectors, causing devastating effects.
Hence, we focus this research on the midstream sector. We worked on this sector specifically as it relates to the pipeline transportation of crude oil, attributed failures and solutions. The following subsections detail the midstream sector, incidents, and their environmental and socio-economic impacts.
Crude oil transportation
As aforementioned, several processes enable the transformation of oil and gas from raw to finished products. The midstream sector, i.e. transportation sector, is a vital part of the supply chain, facilitating these processes. For crude oil, the distribution is enabled through multiple means, such as pipelines, ships, trucks and rails [START_REF] Lisitsa | Supply-chain management in the oil industry[END_REF][START_REF] Scl | Crude oil transportation -pipes, rail, trucks and ships[END_REF][START_REF] Glcdgl | Crude oil transport: Risks and impacts[END_REF] to the appropriate recipient in the supply chain. Following are brief details of each mode of transportation.
Pipelines
Pipeline transportation consists of a network of pipelines in thousands of kilometres divided into two categories, i.e. natural gas pipelines and liquid pipelines. Crude oil are transported using the liquid pipelines, subdivided into gathering, feeding and transmission lines. The gathering lines are used for short-distance transportation of oil, usually from wells to processing tanks, refineries and others. On the other hand, the feeder lines connect the tanks and or processing facilities to the main lines known as the transmission lines. The transmission lines are used for long-distance transportation of crude at a national or international level. To enable such long-distance transportation, certain pressure must be maintained. This necessitates adding pumping stations spanning the pipeline network to boost the pressure. Generally, pipelines are considered the safest means of crude oil transportation.
Ships
Ships are the second most used mode of crude oil exportation to foreign countries. The type of ships expressly built for transporting oils in the sea is called tankers. Tankers are grouped by their size in terms of their dead-weight tonnage (dwt), which is determined by the canals and traits via which they travel, e.g. the Cape of Good Hope, the Strait of Malaca, the Strait of Dover, the Suez Canal, or the Panama Canal. For example, Ultra Large Crude Carriers (ULCCs) tankers range from 300 000 dwt to 500 000 dwt. Very Large Crude Carriers (VLCCs) tankers have a maximum size of 300 000 dwt, while the Suexmax tankers and the Aframax tankers have maximum sizes of 200 000 dwt and 120 000 dwt, respectively. This method of crude oil transportation is the slowest compared to the other methods.
Trucks
Trucks are considered the least efficient mode of transporting crude oil based on the accrued cost and the carbon footprint [1,[START_REF] Pootakham | A comparison of pipeline versus truck transport of bio-oil[END_REF]. Although the most accessible mode of transporting oil, trucks offer a limited storage capacity for carrying the transported product. Despite this constraint, it is the first choice where the distance between origin and destination is short, typically in the final stage of the midstream sector. However, good roads are critical to allow efficient truck transportation of crude oil. A study of the state of Colorado revealed the ratio of total accidents recorded results from truck transportation of oil and gas products per the population capital is 95% [START_REF] Blair | Truck and multivehicle truck accidents with injuries near colorado oil and gas operations[END_REF]. It also resulted in the most cause of deaths [START_REF] Ambituuni | Risk Assessment Of A Petroleum Product Pipeline In Nigeria: The Realities Of Managing Problems Of Theft/sabotage, ser[END_REF][START_REF] Retzer | Motor vehicle fatalities among oil and gas extraction workers[END_REF][START_REF] Ghaleh | Pattern of safety risk assessment in road fleet transportation of hazardous materials (oil materials)[END_REF]
Rails
Cargo trains are also another form of transporting oil and gas with the aid of specialised tankers cars. The tankers used in this case differ from those used for ship transportation in capacity. Rail transports are possible using rail tracks, usually from oil wells to refineries. For large volumes of oil, multiple railcars are used. This method of transportation is preferable to trucks as accidents only happen when a train derails. It is also considered the alternative to the pipeline mode of transportation when they are at total capacity, as the infrastructure's existence makes it flexible.
Statistical comparison of crude oil transportation methods
Among these modes of crude oil transportation, pipelines are the most widely used as they are considered the safest option and have the lowest carbon footprint. Figure 1.2: Crude oil transportation by mode in the US [1] As seen in Fig. 1.2, pipelines have consistently been the choice for transporting crude oil and have shown an increase in recent years. Yet, a considerable amount of failures are still reported yearly [START_REF] Ambituuni | Analysis of safety and environmental regulations for downstream petroleum industry operations in nigeria: Problems and prospects[END_REF]. Thus, in the following subsection, we discuss some common causes of pipeline failures and their impacts.
Pipeline failures and causes
According to regulatory bodies such as the Department of Petroleum Resources (DPR) of Nigeria and the United States Pipeline and Hazardous Materials Safety Administration (PHMSA) [2], causes of pipeline failures can be categorised as follows.
Corrosion
Corrosion of a pipeline occurs due to a deteriorative natural process called oxidation with the pipeline's surroundings. Corrosion is time-dependent and results in metal loss or the wall thickness of the pipe leading to leakages if left uncontained for an extended period. It is of different types, i.e. internal, external, stress or microbes induced, stay current, selective and seam corrosion. Internal corrosion results from chemical reactions within the pipelines. External corrosion on the hand occurs as a result of environmental conditions on the external surface of the pipeline and the surroundings, such as water, soil and others. This type of failure accounts for about 18% of all causes of failures in liquid pipelines.
Natural Hazards
Natural events such as earthquakes, landslides, and extreme temperatures can subject pipelines to large-scale failures. While mechanisms are put in place to anticipate some of these natural events, their unpredictability still results in pipeline failures. A one-year report covering 2002 to 2003 shows that 9% of incidents resulted from natural hazards. Currently, operators are taking risk prevention steps through the identification, assessment and preparation for geotechnical and meteorological events.
Excavation
Excavation damage primarily concerns buried or underground pipelines. They occur due to various construction activities such as digging and trenching of roads, lands, and others. Consequently, pipelines can sustain dents, scrapes, punctures and others, resulting in future failures or damages causing immediate failures. These failures can be hazardous to the public due to the release of petroleum or gas products into the environment. About 15% of liquid pipeline failure incidents are caused by excavation.
Equipment Failure
While some failures occur directly on the pipeline, critical operational equipment such as pumps, compressors, meters, valves, sensors, and others can sometimes cause failures, inadvertently resulting in the spilling of hazardous fluid into the environment. However, the volume of products released to the environment due to this type of incident is small compared to others. Thus, equipment failure rarely causes significant harm to the public.
Material and Weld Failures
Material failures occur as a result of manufacturing defects, such as the oxidation of impurities resulting in what are known as laminations and inclusions. Blisters and scabs caused by gas expansion in the pipeline material can also lead to material failures. Weld failure, on the other hand, occurs during the making or joining of the pipes, either in the initial construction phase or during maintenance. Weld failures in new pipelines include pinholes, incomplete fusion and porosity, among others. In older pipelines, they include weld metal cracks, hook cracks and cold welds, among others.
Operational Failures
Operational failures pertain to the human-factor aspect of pipeline failure, whether by company staff or contracted personnel. They occur through incorrect routing of the fluid, incorrect filling and draining of vessels or tanks, or errors while carrying out routine maintenance of the pipelines. Although they are considered an indirect cause of failure, they sometimes still result in the release of harmful products into the environment. Operational failures, however, are not as prevalent as the other types of failures.
Third Party Interference
Third-party interference, otherwise known as "other outside force/damage by outside forces", i.e. sabotage, vandalism, or vehicle accidents, is another cause of pipeline failure. The frequency of such interference depends on the location of the pipelines; for example, there are more reported cases of vandalism in developing countries such as Nigeria than in the United States of America. In the data presented in Fig. 1.3, this accounts for 28% of the causes of pipeline failure. Pipelines susceptible to third-party interference include overground pipelines. As we will discuss later in chapter 5, the proximity of pipelines to highways and some communities also increases the rate of third-party interference.
Effects of pipeline failures
Regardless of the cause, pipeline failures have severe negative environmental and financial impacts. The effects of these failures largely depend on the pipeline type and location. For example, failures in overground pipelines can harm the environment through land contamination, while failures in subsea pipelines can harm water bodies and aquatic life. Leakages and spills (LAS) also result in fatalities, from direct or indirect consequences, and damage the livelihood of the communities co-located with the pipelines. Over the past 20 years, the US has recorded more than 5780 pipeline incidents, with approximately 0.045% fatalities and 19% injuries. These incidents resulted in costs and litigation of over 11.17 billion USD [2]. Thus, in this subsection, we discuss the impacts of pipeline failures in two broad categories: land, water and biodiversity; and public health and socio-economic impacts.
Land, Water and Biodiversity
Crude oil leakages and spills (LAS) cause contamination of land, sediments, swampland, seas and oceans, altering the physical and chemical properties of the contaminated areas. Figures 1.4 and 1.5 are examples of the effect of spills on land, swampland and mangrove water bodies. The severity of the contamination depends on several factors. For example, when spillage or leaks occur on land, the composition of the soil affects the severity. In addition, the chemical composition of the spilt hydrocarbon and the spilt volume affect the severity of the impact. Depending on this volume, studies have shown that land contamination goes deeper than the surface, i.e. contamination can reach up to 5 m into the ground. Subsurface contamination results in well water pollution. Furthermore, fire accidents that leave a crust on the land surface, as shown in Fig. 1.4, negatively affect the herbage of the land and can make re-vegetation difficult or impossible in some cases. For example, the United Nations Environment Programme (UNEP) [3] evaluated the LAS impact on a 122 km pipeline right of way in about 200 locations in Ogoniland, Nigeria. Based on the environmental assessment, UNEP concluded that complete land restoration might take up to 30 years due to the extent of the damage caused. Similarly, the life cycle of the vegetation is compromised by the loss of mineral nutrients and the reduced gaseous exchange of the plants. Stress is also induced by the disruption of the ion-excluding capabilities of the plants' roots.
Figure 1.4: Crusted land and swampland pollution [3]
In addition, spillage on water bodies, seas and mangroves occurs directly when a subsea pipeline fails. It may also occur indirectly through surface and subsurface flows, rain, and wind affecting groundwater aquifers. Likewise, the water body's physical and chemical composition is altered depending on the concentration level of the LAS. For example, oxygen transfer in the water column is prevented, resulting in the death of various plants such as pneumatophores, lenticels, trees, and seedlings. Additionally, this oxygen deprivation can partially or completely destroy the mangroves and unsettle the aquatic ecosystem. In the Bonny local government area in the Niger Delta region of Nigeria, over 307,380 square metres of healthy mangrove were lost to artisanal refining and the resulting LAS between 2007 and 2011 [3]. According to the same report, the highest concentration of hydrocarbons found in underground water is approximately one million micrograms per litre, nearly 99.94% more than the standard recommended by the Nigerian government.
Public Health and Socio-economic Impacts
Leakages and spills cause fire accidents and explosions that have devastating effects, such as the deaths of thousands of people, grave bodily injuries, and respiratory problems, to list a few. Non-fire but otherwise life-threatening effects have also been reported by various bodies [2,3,[START_REF] Oriji | Environmental crude oil pollution and its rising levels of socioeconomic/health vulnerabilities among the rural people in rivers state, nigeria[END_REF]. In many cases, there is a notable reduction in the life expectancy of people in the host communities. For instance, this can be a direct consequence of the elevated hydrocarbon concentration in the air and water. Exposure to and consumption of contaminated water also heighten the risk of sickness and possibly death, resulting in problems of the digestive, immune, respiratory and nervous systems, and in rashes through dermal contact. Besides, in the Ogoniland case study, benzene, a known cancer-causing hydrocarbon, was present in the air at levels 900 times higher than the World Health recommendation. Such elevated levels of benzene in the environment pose a great risk to the public and put them at risk of acute myeloid leukaemia, acute and chronic lymphocytic leukaemia, multiple myeloma, and non-Hodgkin lymphoma [START_REF] Habich | Understanding and eliminating atmospheric benzene pollution in pasadena, tx[END_REF][START_REF]Benzene and cancer risk[END_REF].
Figure 1.5: Hydrocarbon pollution of water bodies [3]
Although the effect of LAS can sometimes be direct, in terms of fatalities, injuries or health-related issues, its effects on land and sea also have direct socio-economic impacts at both local and national levels. For example, comparative studies [START_REF] Oriji | Environmental crude oil pollution and its rising levels of socioeconomic/health vulnerabilities among the rural people in rivers state, nigeria[END_REF] show that LAS-affected lands produce lower yields than non-affected lands. In some cases, it renders root crops unusable due to the high concentration of harmful hydrocarbons in the crops. Thus, farmers whose livelihood depends on such lands either lose their income or have to look for additional or alternative sources of income. Additionally, an increase in school dropouts among children was recorded as an indirect consequence of LAS.
So far, we have introduced the processes in OGI, particularly the midstream sector. We also discussed the causes and effects of failures in the midstream sector. In the next section, we define the problem addressed in this thesis as related to the pipeline transportation of crude oil.
Summary
In this subsection, we have summarised the failures caused by the different modes of transportation in the midstream sector of the OGI.
Problem Definition
Pipelines, as introduced earlier, are critical national and international infrastructures used to transport fluids such as crude oil, natural gas, bitumen, or water. In the midstream sector, they are considered the safest and most widely used means of transporting oil and gas products [START_REF] Ambituuni | Risk Assessment Of A Petroleum Product Pipeline In Nigeria: The Realities Of Managing Problems Of Theft/sabotage, ser[END_REF], [START_REF] Rashid | Wireless sensor network for distributed event detection based on machine learning[END_REF]. Despite their high safety rate, pipelines can sometimes fail due to third-party interference, equipment failures, corrosion and the other causes of failure discussed in subsection 1.1.2, with vandalisation or third-party interference being the leading cause of failures in developing countries like Nigeria.
These failures lead to an annual loss of approximately 10 billion USD in the United States [START_REF] Slaughter | Connected barrels: Transforming oil and gas strategies with the Internet of Things[END_REF] and 100 million USD in Nigeria, excluding the payments of litigation, fines, compensation and so on [START_REF] Ambituuni | Risk Assessment Of A Petroleum Product Pipeline In Nigeria: The Realities Of Managing Problems Of Theft/sabotage, ser[END_REF]. Furthermore, anomalous events such as leakages or sabotage have disastrous effects, not just economically but also environmentally [START_REF] Sheltami | Wireless sensor networks for leak detection in pipelines: a survey[END_REF], [START_REF] Rashid | Wireless sensor network for distributed event detection based on machine learning[END_REF]. LAS has resulted in fatal accidents, one of which caused over a thousand deaths in Jesse in 1998 [START_REF] Ambituuni | Risk Assessment Of A Petroleum Product Pipeline In Nigeria: The Realities Of Managing Problems Of Theft/sabotage, ser[END_REF]. Another pipeline incident, in January 2019, was reported to have killed about 135 people in Mexico. Further consequences of LAS were discussed in subsection 1.1.3.
Therefore, it is paramount to have a system in place to monitor the various operations of pipelines in real time and to promptly detect any anomaly that occurs. Consequently, several monitoring systems for oil and gas pipelines have been employed. They include:
1. Wired systems using fibre optic cables or copper
2. Wireless systems
3. Robots
Despite the implementation of some of the techniques listed above for pipeline fault detection and monitoring, some countries like Nigeria have witnessed increasing incidents in their pipeline networks. In 2018, Shell Nigeria recorded a loss of petroleum of up to 11,000 barrels per day in their pipeline network [START_REF] Spdcn | Security, theft, sabotage and spills[END_REF]. This loss was an increase of almost 550% compared to the reported loss in the previous year. In addition, some of the listed monitoring techniques, such as community-based surveillance, wired systems, and others, are either ineffective, inflexible, expensive or impractical [START_REF] Khan | A reliable Internet of Things based architecture for oil and gas industry[END_REF], [START_REF] Shoja | A study of the Internet of Things in the oil and gas industry[END_REF], [START_REF] Aalsalem | Wireless sensor networks in oil and gas industry: Recent advances, taxonomy, requirements, and open challenges[END_REF]. According to the spillage data from the DPR, using a hybrid system such as the supervisory control and data acquisition (SCADA) system, detection can take as long as one month or more. Such long detection times result in greater environmental harm and more financial loss to the operators. Over the years, technology-driven solutions have addressed many difficulties in various industries and even in our daily lives. Thanks to research, many aspects of technology have continued to grow, ranging from communication protocols to the miniaturisation of devices. One significant development in recent years is the advent of the Internet of Things (IoT). IoT is the connection or interconnection of machines, objects or devices, humans etc., to the internet. This enables the essential features of IoT, i.e. the collection and analysis of data. The vast amount of collected data has transformed how we live our lives, the procedures for conducting business and the services provided, and has opened an extensive array of opportunities through efficient analysis [START_REF] Nawaratne | Self-evolving intelligent algorithms for facilitating data interoperability in IoT environments[END_REF]. Consequently, more applications of IoT-based solutions are being developed.
While the advent of IoT has been revolutionary and economically impactful, some factors circumscribe its usage, for example the security and privacy of the generated data. However, the adoption of IoT-based solutions has continued to grow despite the challenges enumerated earlier. Figure 1.7 shows the forecast growth of connected devices by 2034. As shown in Fig. 1.7, this adoption and its projection cut across many industries, such as health, automation, manufacturing and others. Businesses, professionals, and individuals can now make informed decisions using data-driven strategies. Such added value can be seen in several areas, even within a single industry. For example, while Fig. 1.6 shows the current state of digitisation in the midstream sector of the OGI, it also shows the different areas, i.e. gathering and processing, pipeline and storage, where changes could be effected and additional value derived. Thus, a more recent approach to monitoring pipeline infrastructures is based on Wireless Sensor Networks (WSN) and the Internet of Things (IoT). Although these have proven to be more efficient solutions for pipeline monitoring and fault detection, they are also characterised by some limitations, such as energy consumption, reliability, robustness, and scalability, to mention a few. These challenges have limited most oil and gas companies' implementation of IoT-based monitoring solutions.
However, in our work, we consider an IoT-based solution for addressing the challenges of failures in pipeline transportation of crude oil. Given the challenges of adopting IoT-based solutions, we focus mainly on the following points:
1. How to detect and localise leakages in the pipeline with a high level of accuracy.
2. How to ensure fault tolerance and robustness in the leakage detection monitoring system.
3. How to ensure a scalable yet efficient solution in terms of coverage and connectivity.
4. How to minimise the global energy consumption of the system.
Hence, in the next section, we introduce our contribution to the design of an IoT-based monitoring system for crude oil pipelines.
Contributions
The contribution of this work is to provide a fault-tolerant and energy-efficient IoT-based system for monitoring crude oil transportation through pipeline infrastructure. We implement this as an end-to-end solution across the IoT system's different layers (as will be discussed in detail in chapter 3). The first step in our work is to specify the system design and architecture, based on which we carry out our implementation and contributions. As explained in the following subsections, our contribution is divided mainly into two aspects (the WSN layer, and the Data and Service layers).
Wireless Sensor Network Layer
The focus of the work in this layer includes event coverage, removal of the Single Points of Failure (SPOF) associated with centralised systems, reduction of false positives and energy consumption compared to existing systems. Hence, we propose a fluid propagation-based node placement strategy to allow distributed leakage detection and localisation. We also implement a detection and localisation algorithm using a hybrid of existing techniques. To test our algorithm, we use NS3 to simulate crude oil propagations. Performance metrics include detection and localisation accuracy, energy consumption, and the number of nodes detecting and localising leakages.
Data and Service Layers
In the Data and Service layers, we build on the first contribution by extending the detection and localisation of leakages to the fog layer. The main focus of this work is to determine fault-tolerant and energy-efficient data and service management for pipeline transportation of crude oil. To achieve this, we utilise the historical failure incidents of the Nigerian National Petroleum Corporation (NNPC) pipeline network. Our proposed solution is a data-driven Markov Decision Process (MDP)-based model that maintains optimal performance across the NNPC pipeline network with minimised global energy consumption through a regionalised approach. We implement three other solutions for comparative analysis: randomised, globalised and pessimistic. Performance metrics include the reward obtained by each algorithm in terms of fault tolerance, detection accuracy, and energy consumption.
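To give a flavour of how such an MDP-based policy can be computed, the sketch below runs generic value iteration on a toy two-state, two-action problem. The states, actions, transition probabilities and rewards are placeholders chosen for illustration only; they do not correspond to the NNPC-based model developed in chapter 5.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Generic MDP solver: P[a][s, s'] are transition probabilities and
    R[a][s] expected immediate rewards; returns optimal values and policy."""
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy example: 2 states (nominal, degraded) x 2 actions (sleep, monitor)
P = [np.array([[0.9, 0.1], [0.3, 0.7]]),    # action 0: low-power sleep
     np.array([[0.95, 0.05], [0.6, 0.4]])]  # action 1: active monitoring
R = [np.array([1.0, -2.0]),                 # sleep: cheap but risky
     np.array([0.5, 0.5])]                  # monitor: costly but safer
values, policy = value_iteration(P, R)
print(values, policy)
```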
In the following section, we present the organisation of this thesis.
Thesis Organisation
This Manuscript is organised into two parts; the first part consists of the background information, and the second includes the contributions. Each part contains several chapters, as discussed next.
Part One: Background
The first part contains two chapters, i.e. the introductory chapter and the literature review.
The first chapter presents the context of our work. It begins by introducing the OGI as well as the challenges and effects of failure associated with the midstream sector. Then we describe the problem statement of our work related to the pipeline transportation of crude oil. Finally, we briefly discuss our proposed solution to the mentioned problems.
In the second chapter, we discuss various research related to our work. This chapter is also divided into two parts. In the first part, we introduce hydraulics and fluid dynamics in pipeline transportation of crude oil. We then briefly discuss various sensors that are used in monitoring pipelines. Also, we delve into the classical methods of pipeline monitoring. In the second part, we examine IoT and WSN-based monitoring systems. In detail, we review works related to strategic wireless sensor placements, their challenges and other aspects of pipeline monitoring. We also look at works on efficiency and data management through information sharing, storage, processing and analytics.
Part Two: Contributions
The second part consists of all our contributions as described next.
In chapter three, we take up aspects of the designs and specifications of our system. We first introduce generic and application-based architectural design and the communication aspects of IoT-based systems. Then we explain the metrics of evaluation on which the system will be designed. These include robustness, reliability, coverage and data management. Finally, we introduce the system design according to the layers discussed in 1.3 and the evaluation metrics.
Chapter four presents our first main contribution. This aspect of the work relates to the WSN layer and presents our detection and localisation algorithm. This algorithm has two versions depending on the degree of connectivity to the neighbouring nodes. We implement this contribution on a single horizontal long-haul pipeline and analyse both versions using NS3, a network simulator. Finally, we discuss and further examine the simulation results and conduct a comparative analysis with other classical methods.
In chapter five, we present our second contribution which extends the work from the previous chapter from the WSN layer to the fog layer, i.e. from a single pipeline segment to a pipeline network. We further introduce an MDP based on the historical failure events of the NNPC pipeline network for efficient data and service management at the fog layer. Additionally, we discuss the implementation and the solution of the MDP, including its impact on the energy consumption of the system.
Chapter six serves as the conclusion chapter. In this chapter, we summarise our work and contributions to this thesis. We also discuss our significant achievements in the course of this research and the limitations of our contributions. Finally, we explore various areas for possible future work.
Chapter 2
State-of-the-Art
Infrastructural monitoring has evolved over the years, moving along with technological advancement. The advent of IoT has further enabled monitoring, from simple environments such as homes to complex entities like industrial processes. However, the efficiency of a monitoring system relies heavily on a thorough understanding of the fundamental principles of the entity to be monitored. In addition, it is also essential to study the evolution of the other techniques used in monitoring applications such as our use case.
Hence, we present the background on leakage detection and monitoring systems (LDMS) and the current approaches to the detection and localisation (DAL) of leakages in two sections: classical pipeline monitoring, and IoT- and WSN-based pipeline monitoring. In the first section, we focus on the hydraulic background and existing classical work on pipeline monitoring, covering fluid mechanics in pipeline transportation, the different statistical leakage detection methods and an introduction to the types of sensors used in monitoring pipelines. In the second section, we discuss recent pipeline monitoring techniques based on IoT and WSN, which are in tandem with our contribution. IoT has several layers, and each layer has different requirements and challenges; thus, the discussion in this section is organised according to the different layers of IoT, including sensor placement in the WSN layer, and data and service placement and processing.
Classical monitoring of pipeline transportation of crude oil
Effective monitoring of crude oil transportation in a pipeline can reduce and, in some cases, prevent the negative impact of LAS and other failures attributed to pipeline transportation of crude oil in the midstream sector. This section discusses different types of sensors used in LAS detection and localisation (DAL) in subsection 2.1.1. In subsection 2.1.2, we discuss various fluid mechanics and pipeline transportation phenomena. Then in subsection 2.1.3, we briefly mention some of the classical techniques in DAL of failures in pipelines. Finally, in subsection 2.1.4, we summarise failure detections by their merits and demerits.
Sensors for pipeline monitoring
A sensor is a device that takes an input from a physical quantity and returns an output [START_REF] Teja | What is a sensor? different types of sensors and their applications[END_REF]. Different types of sensors exist and can be used for sensing various physical quantities such as wind, humidity, temperature, and others. In pipelines, leakages can be detected based on the internal changes in the fluid's properties and the external changes as a consequence of leakages and the effect on the immediate surroundings. One or several sensors are required to detect these changes in both cases. In the following subsections, we broadly categorise some of the sensors used to detect and localise leakages and other pipeline failures.
Flow meters
Flow meters measure properties through electrical, mechanical and physical means. We focus on the ultrasonic flow meters typically used in monitoring pipelines. The ultrasonic flow meter consists of a pair of sensors and a transducer, both acting as transceivers to measure the volumetric flow of any fluid. Thus, their internal and external parameters are used to detect leakages in pipelines. Externally, the ultrasonic flow meter depends on the flow clamp-on to the pipeline; internally, it depends on the velocity of the fluid by transmitting ultrasonic waves generated and transmitted from the transducer to the meter (receiver) as well as the temperature of the fluid [START_REF] Henrie | External and intermittent leak detection system types[END_REF].
The ultrasonic flow meter is either a Doppler flow meter or a transit-time flow meter, and both can be used for measuring homogeneous fluids. The transit-time flow meter uses the difference in time between sound wave transmission and reception. The Doppler flow meter, on the other hand, uses the Doppler effect, i.e. the reflection of the sound wave from the transmitter to the receiver. In both cases, measurements are made at several locations and sent to a central system for the algorithmic calculation of the mass volume balance.
While they have several advantages, including robustness and low maintenance, they also present challenges: they have difficulty detecting small leaks and are expensive to implement [START_REF] Lourenco | Verification procedures of ultrasonic flow meters for natural gas applications; procedimento de verificacao de medidores ultra-sonicos para gas natural[END_REF].
Wired sensors
Wired sensors such as fibre optic cables, conductive or sensing cables and hydrocarbon tubes are used in monitoring pipelines by observing changes such as resistance and impedance in the physical properties of the cable [START_REF] Henrie | External and intermittent leak detection system types[END_REF]. This is usually achieved by laying them along the length of the pipeline in very close proximity, i.e. in a 10 cm to 15 cm distance.
Fibre optic cables are classified as external pipeline monitoring systems used for distributed chemical, temperature, and acoustic sensing [START_REF] Bai | Leak detection systems[END_REF]. They use light signals to detect leakages and respond to thermal environmental changes or localised remote vibrations. In addition, they are excellent for underwater pipelines due to their resistance to electromagnetic interference and non-conductibility [START_REF] Hafizi | A temperature-compensated fbg pressure sensor for underwater pipeline monitoring[END_REF]. Despite this ability for multi-issues detection and low false alarms, fibre optics are not commonly used due to their inability to adapt to pre-existing infrastructure and their exorbitant prices [START_REF] Tennyson | Fibre optic sensors in civil engineering structures[END_REF]. Additionally, the installation length is limited to a finite distance [START_REF] Henrie | External and intermittent leak detection system types[END_REF].
Conductive cables consist of a pair of insulated conductors that detect the presence of hydrocarbons by monitoring electrical changes in the cable. These changes occur when hydrocarbons come into physical contact with the cable, destroying the insulation between the conductors. This changes the current in the cable and thus provides an accurate leak location using Ohm's law, relating the cable's resistance, voltage and current. They can also react to third-party commodities, i.e. non-hydrocarbon fluids, making them susceptible to high false alarm rates. Although their response time can vary from several minutes to hours, once a cable detects a leakage and generates an alarm, it must be replaced, thereby increasing operational expenses. Just like fibre optic cables, they are limited to a finite distance, usually a maximum of 400 km [START_REF] Henrie | External and intermittent leak detection system types[END_REF].
Hydrocarbon sensing tubes detect leakages by sensing the presence of hydrocarbon vapour that enters the tube when leakages occur. They can be used for single-phase and multi-phase oil or gas pipelines. The hydrocarbon released during a leak is transported from the tube inlet to the outlet and detected. Hydrocarbon sensing tubes can detect small leakages because of their sensitivity to the targeted vapours, and detection time ranges from several minutes or hours to days [START_REF] Henrie | External and intermittent leak detection system types[END_REF][START_REF] Bai | Leak detection systems[END_REF]. However, this sensitivity sometimes leads to false alarms. Just like fibre optic and sensing cables, they are limited to a finite distance, in this case a maximum of 50 km.
Visual/Aerial sensors
Remote sensing is achieved using vehicles, satellites, helicopters, and UAVs to monitor pipelines. They are equipped with specialised sensors like infrared cameras, RGB cameras, and Ground Penetrating Radars (GPR). The visual sensors observe signs of changes or third-party interference in the monitored areas. In cases where leakages are prevalent, community-based and security-based surveillance are employed.
Vehicles are used by security personnel or employed observers, such as those living within the proximity of pipeline locations, to monitor the pipelines. Although observations like community-based surveillance can sometimes be conducted on foot, using a vehicle is usually more practical for long-distance pipelines. The vehicles are sometimes equipped with infrared or RGB cameras for more effective DAL of leakages and spills (LAS).
Satellites are remote sensors equipped with RGB or infrared cameras used to monitor LAS by taking radar images of the monitored area. The images can also be recorded and analysed at high temporal frequency. They are efficient for pipelines because they can be used in hard-to-reach areas such as the sea and for buried pipelines. Additionally, they are not constrained by flight-zone restrictions as helicopters are, making them efficient for pipeline monitoring, especially in terms of coverage area. Other advantages of satellite-based monitoring include the additional monitoring of third-party interference.
Helicopters are used in monitoring LAS in pipelines by flying over the monitored area, usually with an onboard observer(s). LAS is detected or localised with the visual confirmation of the onboard observer. Nowadays, Helicopters are also equipped with various types of cameras, as in satellite-based and vehicle-based monitoring.
UAVs are remotely piloted aircraft integrated with computers and visual sensors. For leakage detection and localisation, they are equipped with forward-looking or high-resolution cameras, synthetic aperture radar, multispectral imaging or shortwave infrared sensors. While this is a cheaper alternative to pipeline monitoring with helicopters, it is currently largely experimental. However, its potential includes the ability to better detect and localise leakages thanks to the low speed and altitude of UAVs. It also removes the human risk and errors associated with using pilots and observers in helicopters.
Pipeline Inspection Gauges
Pipeline Inspection Gauges (PIGs) are used to monitor the structural integrity of pipelines and detect defects such as corrosion, dents, cracks, and pits [START_REF] Bernasconi | Acoustic detection and tracking of a pipeline inspection gauge[END_REF]. The monitoring or maintenance operation is done through magnetic flux, eddy current, and geometric detection as the PIG travels in the pipeline. PIGs collect and store information on the pipeline using battery-powered onboard electronics. More sophisticated PIGs also carry out data analysis and correlation.
PIGs can be categorised as gyroscopic-based or non-gyroscopic-based [START_REF] Guan | A review on smalldiameter pipeline inspection gauge localization techniques: Problems, methods and challenges[END_REF]. In gyroscopic-based PIGs, the orientation and speed of the device can be easily monitored and maintained, which is not the case for the non-gyroscopic version. However, their deployment is labour-intensive, i.e. it requires several positioning systems such as visual sensors and odometers. Other challenges include the presence of infrastructure, single-entrance pipelines, and the difficulty of integrating intelligence into PIGs [START_REF] Guan | A review on smalldiameter pipeline inspection gauge localization techniques: Problems, methods and challenges[END_REF][START_REF] Le | Multi-sensors in-line inspection robot for pipeflaws detection[END_REF].
Pressure sensors
Pressure sensors for crude oil pipeline monitoring are used to measure the flow and speed of the oil in the pipeline. Generally, most sensors use the piezoelectric effect, i.e. the electric charge created by a material in response to stress, to measure the pressure. However, some pressure sensors use resistive, capacitive, MEMS, or optical effects for measurement. Resistive and capacitive pressure sensors measure changes in the electrical resistance and capacitance of the material, respectively. MEMS sensors use either a capacitance change or a piezoelectric effect for measurement. Optical pressure sensors, on the other hand, use interferometry to measure the pressure of a material. These sensing methods directly influence the reliability and accuracy of the sensors [START_REF] Eaton | Micromachined pressure sensors: review and recent developments[END_REF].
Pressure sensors come in three different types, known as absolute, gauge, and differential pressure sensors. Absolute pressure sensors measure pressure against an absolute vacuum of zero PSI. Gauge pressure sensors measure pressure with respect to atmospheric pressure (14.7 PSI); measurements above or below this value are categorised as positive or negative, respectively. Differential pressure sensors use reference thresholds for measurement. These are the type of pressure sensors typically used for fluid flows in pipelines.
Supervisory Control And Data Acquisition (SCADA)
Supervisory Control And Data Acquisition (SCADA) systems are used to monitor pipelines using hybrid methods, such as a combination of PPA and the mass volume balance method. The placement of flow meters at the inlet and outlet of the pipeline allows leak detection using the mass volume balance method. Flow measurements are periodically sent to the centralised SCADA system, and detection is determined using the simple volume balance or the modified volume balance. The simple volume balance is based wholly on the principle of mass conservation, where the inlet mass is expected to equal the outlet mass: if the outlet mass is less than the inlet's, a leak is said to have occurred. The modified method includes other state properties, such as temperature or pressure, to determine the presence of leakage. SCADA systems are susceptible to SPOF due to their centralised nature. In addition, they are inflexible to change, have high response times and are expensive to maintain or scale [START_REF] Khan | A reliable Internet of Things based architecture for oil and gas industry[END_REF].
In the next subsection, we discuss the background of fluid mechanics, some computational fluid dynamics, and LDTs.
Fluid Mechanics
Fluid properties such as pressure, flow, density, and temperature are measurable from one point to another following the equation of state [START_REF] Henrie | Real-time transient model-based leak detection[END_REF], i.e. any data-driven equation that can be used to describe the fluid phase behaviour [START_REF]Equations of state[END_REF]. Pipeline transportation of crude oil follows these principles, i.e. for transmission pipelines in a steady state (presenting no leakage), the pressure decreases with distance due to frictional resistance. This decrease in pressure appears as a slope when represented in the time domain, as shown in Fig. 2.1. Changes in the pressure gradient of crude oil travelling in a pipeline are shown in Fig. 2.1. The inlet and outlet pressures in the absence of leakage are represented by P_0 and P_L, respectively. P_0 decreases with distance along the pipeline, hence maintaining a relatively constant pressure gradient (PG) at every measurement point. However, when leakages occur, a negative pressure wave (NPW), travelling in opposite directions from the point of leak [START_REF] Sowinski | Analysis of the impact of pump system control on pressure gradients during emergency leaks in pipelines[END_REF][START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF], is generated, as shown in Fig. 2.2. This changes the state of the flow from a steady state to a transient state, after which a steady state is achieved again. Computational fluid dynamics (CFD) enables the statistical detection of leakages by advancing the governing principles of fluid flow in either space or time. Nowadays, some of the sensors introduced in 2.1.1 also embed one or more CFD method(s), allowing for autonomous detection and localisation of leakage. CFD-based techniques for crude oil pipelines include the pressure point analysis (PPA) technique, the negative pressure wave method (NPWM), the gradient-based method (GM), the mass volume balance technique, and real-time transient modelling [START_REF] Sheltami | Wireless sensor networks for leak detection in pipelines: a survey[END_REF][START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF][START_REF] Lu | Leakage detection techniques for oil and gas pipelines: State-of-the-art[END_REF]. Each technique has advantages and disadvantages (summarised at the end of this section). In the following subsections, we examine these LDTs, focusing on the PPA, NPWM, and GM techniques based on their ease of implementation, low computational complexity, robustness, reliability and energy consumption. These properties are essential in distributed systems, as we will see in the following chapter.
Pressure Point Analysis (PPA)
The PPA technique is used in determining the dissipation of flow in crude oil pipelines by the periodic measurement of fluid pressure at several points along a pipeline. This is achieved using the well-known Bernoulli equation defined in Eqn. (2.1).
\[
z_a + \frac{P_a}{\rho g} + \frac{V_a^2}{2g} = z_b + \frac{P_b}{\rho g} + \frac{V_b^2}{2g} + E_{ab} \tag{2.1}
\]
where \(E_{ab} = \lambda \frac{V^2}{2gd} L\) represents the energy head loss, \(\lambda\) is the coefficient of friction, \(V\) is the velocity of the fluid, \(d\) is the pipeline's inner diameter, \(L\) is the distance from point a to point b, \(P_a\) is the pressure at point a, \(\rho\) is the fluid mass density, \(g\) is the gravitational acceleration and \(z_a\) is the elevation at point a.
Several sensors are placed along the pipeline, as shown in Fig. 2.3, to make this measurement. The pressure P_i of a sensor i is relative to its location on the pipeline and can be estimated using an inlet pressure such as P_0. Given P_0, any other pressure point can be pre-estimated when parameters such as the energy head loss E, the fluid velocity and the others listed in Eqn. 2.1 are known. Hence, leakage occurrence can be detected by statistically evaluating the measured pressure values against a preset threshold. A resultant mean value below this threshold indicates a leakage. The PPA technique is easy to implement and is not computationally complex. However, it cannot be used to localise leakages. In practice, it is supplemented with another detection technique for localisation [START_REF] Bai | Leak detection systems[END_REF].
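To make the PPA principle concrete, the sketch below pre-estimates the pressure expected at a downstream sensor from Eqn. 2.1 and flags a leak when the mean of recent readings falls below a tolerance band. It is only an illustrative sketch: the fluid parameters, tolerance value and function names are assumptions chosen for this example, not the detection algorithm developed in this thesis.

```python
import statistics

RHO = 870.0   # assumed crude oil density (kg/m^3)
G = 9.81      # gravitational acceleration (m/s^2)

def expected_pressure(p_inlet, velocity, distance, diameter, friction=0.02, dz=0.0):
    """Pressure expected at `distance` metres downstream of the inlet,
    following Eqn. 2.1 with the Darcy friction head loss E_ab."""
    head_loss = friction * velocity ** 2 / (2 * G * diameter) * distance
    return p_inlet - RHO * G * (head_loss + dz)

def ppa_leak_detected(samples, p_expected, tolerance=0.05):
    """Flag a leak when the mean of recent pressure samples falls more than
    `tolerance` (as a fraction) below the pre-estimated pressure."""
    return statistics.mean(samples) < (1.0 - tolerance) * p_expected

# Sensor 10 km downstream of a 6 MPa inlet on a 0.5 m pipe at 1.5 m/s
p_ref = expected_pressure(p_inlet=6.0e6, velocity=1.5, distance=10_000, diameter=0.5)
readings = [0.93 * p_ref, 0.94 * p_ref, 0.92 * p_ref]   # simulated samples
print(ppa_leak_detected(readings, p_ref))                # True -> possible leak
```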
Gradient-based Method (GM)
The GM technique utilises the changes in pressure gradient when leakage occurs for the DAL of leakages. As shown in Fig. 2.1, the pressure gradient (PG) in pipeline fluid transmission without leakages is approximately the same as the fluid travels along the pipeline. However, two steady states with different PGs are achieved when leakages occur. These steady states are before and after the leak locations, as shown in Fig.
\[
Q = \frac{L \cdot dPG^{leak}_{Q\text{-}L} + (dp_0 - dp_L)}{dPG^{leak}_{Q\text{-}L} - dPG^{leak}_{0\text{-}Q}} \tag{2.2}
\]
where \(Q\) is the estimated leak location, \(L\) is the pipeline length, \(dp_0\) is the average increment in the pipeline's initial cross-section, \(dp_L\) is the average increment in the pipeline's final cross-section, \(dPG^{leak}_{0\text{-}Q}\) is the average increment in the pressure gradient before the leak point, and \(dPG^{leak}_{Q\text{-}L}\) is the average increment in the pressure gradient after the leak point.
For example, if we consider a leakage occurring at a point Q in the pipeline, and if \(PG_{a\text{-}b}\) denotes the pressure gradient between two points a and b, then \(PG^{leak}_{0\text{-}Q}\) and \(PG^{leak}_{Q\text{-}L}\) represent the two gradients before and after the leak point, and therefore \(PG^{leak}_{Q\text{-}L} < PG_{0\text{-}L} < PG^{leak}_{0\text{-}Q}\). Given this difference between the two steady states, leakage localisation can be achieved with Eqn. 2.2. The GM technique is an efficient LDT because of its high accuracy, low computational complexity and low energy consumption.
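The following sketch illustrates the intersection-of-gradients idea behind the GM technique: a line is fitted to the pressure readings upstream and downstream of the suspected leak, and the leak is located where the two lines meet. It is a simplified alternative realisation of the same idea rather than a direct implementation of Eqn. 2.2, and the sensor positions, pressures and gradients are synthetic values chosen only for illustration.

```python
import numpy as np

def gm_locate_leak(x_up, p_up, x_down, p_down):
    """Locate a leak as the intersection of the two pressure-gradient lines
    fitted upstream and downstream of the suspected leak point."""
    m1, b1 = np.polyfit(x_up, p_up, 1)      # steeper slope before the leak
    m2, b2 = np.polyfit(x_down, p_down, 1)  # flatter slope after the leak
    return (b2 - b1) / (m1 - m2)            # x where the two lines meet

# Synthetic 20 km pipeline: a leak at 12 km changes the slope of the profile
x_up = np.array([0.0, 3_000.0, 6_000.0, 9_000.0])
p_up = 6.0e6 - 52.0 * x_up                                    # -52 Pa/m before
x_down = np.array([15_000.0, 18_000.0, 20_000.0])
p_down = (6.0e6 - 52.0 * 12_000) - 38.0 * (x_down - 12_000)   # -38 Pa/m after
print(f"estimated leak at {gm_locate_leak(x_up, p_up, x_down, p_down):.0f} m")
```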
Negative Pressure Wave Method (NPWM)
The negative pressure wave method (NPWM) is a commonly used pipeline leakage detection technique [START_REF] Tian | Negative pressure wave based pipeline leak detection: Challenges and algorithms[END_REF]. When leakages occur, the changes in the fluid pressure shown in Fig. 2.2 at the point of leak generate an NPW. The NPWM exploits the NPW that is generated when leakage occurs in fluid pipelines. As shown in Fig. 2.5, the generated wave travels at the speed of sound from the leak point, in opposite directions away from the leakage point. This speed (c) can be calculated with Eqn. 2.3. Leakages are characterised by different sizes, from small to large leaks, based on the volume of liquid lost in a specific period. Hence, depending on the size of the leakage, the amplitude of the NPW attenuates as it travels along the pipeline. Consequently, only sensors within the bounds of the wave can detect the arrival of the NPW front. The corresponding formula characterising the rate of attenuation of the amplitude of the wave signal is given in Eqn. 2.4. This formula allows the determination of the maximum distance at which the wavefront is detectable from the point of leakage.
\[
c = \frac{1}{\sqrt{\rho \left( \frac{1}{K} + \frac{d}{Y \cdot w} \right)}} \tag{2.3}
\]
where \(c\) is the NPW speed, \(\rho\) is the fluid density, \(K\) is the fluid's modulus of elasticity, \(d\) is the pipeline's inside diameter, \(Y\) is the Young modulus, and \(w\) is the pipeline's wall thickness.
\[
A_b = A_a \cdot e^{-\alpha D} \tag{2.4}
\]
where \(A_b\) is the amplitude at sensing point b, \(A_a\) is the amplitude at sensing point a, \(\alpha\) is the attenuation coefficient, \(e \approx 2.71\) is Euler's number, and \(D\) is the distance between the two sensing points.
\[
q = \frac{D - c\,\delta t}{2} \tag{2.5}
\]
where \(q\) is the distance from the point of leakage to the nearest downstream node, \(D\) is the distance between the sensors, \(c\) is the negative pressure wave speed, \(t\) is the communication time, and \(\delta t\) is the difference in the time of arrival of the signal at the upstream and downstream nodes.
The time of arrival of the NPW front can be calculated using Eqn. 2.5 [START_REF] Wan | Hierarchical Leak Detection and Localization Method in Natural Gas Pipeline Monitoring Sensor Networks[END_REF]. Each sensor on either side of the leakage, i.e. upstream and downstream, records a different time of arrival of the wave depending on its distance from the leak point. Determining the arrival time of this wave is an essential factor in localising leakages. In effect, for NPWM, the size of the leakage and the distance between the sensors are two crucial factors to consider for high-accuracy DAL of leakages. Generally, NPWM is a highly accurate LDT but has high energy consumption due to its sampling rate.
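A minimal sketch of the NPWM calculations in Eqns. 2.3 to 2.5 is given below: it computes the wave speed, the attenuated amplitude at a given distance, and the leak offset from the arrival-time difference between two sensors. The pipe and fluid parameters are assumed, illustrative values only.

```python
import math

def npw_speed(rho, K, d, Y, w):
    """Negative pressure wave speed (Eqn. 2.3)."""
    return 1.0 / math.sqrt(rho * (1.0 / K + d / (Y * w)))

def npw_amplitude(a_src, alpha, distance):
    """Wave amplitude after travelling `distance` metres (Eqn. 2.4)."""
    return a_src * math.exp(-alpha * distance)

def npw_leak_offset(D, c, delta_t):
    """Distance from the leak to the nearest downstream node (Eqn. 2.5)."""
    return (D - c * delta_t) / 2.0

# Assumed crude-oil and steel-pipe parameters (illustrative only)
c = npw_speed(rho=870.0, K=1.5e9, d=0.5, Y=2.1e11, w=0.01)
print(f"wave speed ~ {c:.0f} m/s")
# Two sensors 4 km apart; delta_t = 1.2 s difference in arrival times
print(f"leak ~ {npw_leak_offset(D=4_000.0, c=c, delta_t=1.2):.0f} m "
      "from the nearest downstream sensor")
print(f"relative amplitude at 2 km: "
      f"{npw_amplitude(a_src=1.0, alpha=2e-4, distance=2_000):.3f}")
```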
Real-time transient model
RTTM is based on field instrumentation and simulates pipeline monitoring using the hydraulic and thermodynamic properties, measuring the density, flow, pressure, temperature, and other properties of the fluid. As the name indicates, these measurements are made in real time, representing the state information of the pipeline in all conditions. Important considerations include deciding the boundary conditions, i.e. the input data, signal calculation and processing [START_REF] Henrie | Real-time transient model-based leak detection[END_REF].
Mass Volume Balance
Similar to the PPA, the MVB technique detects leakages when the mass balance at the outlet exceeds a certain threshold, as defined by Eqn. 2.6 [START_REF] Stouffs | Pipeline leak detection based on mass balance: Importance of the packing term[END_REF].
\[
\epsilon \leq \rho_{in} - \rho_{out} - \frac{d}{dt} m_p \tag{2.6}
\]
where \(\epsilon\) is the defined threshold, \(\rho_{in}\) is the fluid density at the inlet, \(\rho_{out}\) is the fluid density at the outlet, and \(\frac{d}{dt} m_p\) is the change in pressure and temperature of the pipeline based on the liquid density and the cross-sectional area of the pipe.
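As a small illustration of the mass volume balance test in Eqn. 2.6, the sketch below flags a leak when the inlet/outlet imbalance, corrected for the packing term, reaches the threshold. The numerical values and function names are placeholders for illustration only.

```python
def mvb_leak_detected(rho_in, rho_out, dm_packing_dt, epsilon):
    """Mass volume balance test (Eqn. 2.6): flag a leak when the measured
    imbalance, corrected for line packing, reaches the threshold epsilon."""
    imbalance = rho_in - rho_out - dm_packing_dt
    return imbalance >= epsilon

# Illustrative values: 2% apparent loss with a negligible packing change
print(mvb_leak_detected(rho_in=100.0, rho_out=98.0, dm_packing_dt=0.1,
                        epsilon=1.0))   # True -> possible leak
```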
The Supervisory Control and Data Acquisition (SCADA) system typically uses the MVB in combination with other statistical methods, such as PPA, for leakage detection and localisation in pipelines.
In the following subsections, we discuss some classical techniques of pipeline monitoring.
Pipeline Failure Detection and Localisation Techniques
The problem of leakage detection and localisation in pipeline networks has been widely studied. Over the years, a multitude of approaches has been proposed as a solution to this problem, with different approaches focusing on addressing specific problems. In this subsection, we work through some of the research conducted in this regard.
Research on robot-based pipeline integrity monitoring is increasingly being carried out. You Na et al. [START_REF] Na | Pipelines monitoring system using bio-mimeticrobots[END_REF] proposed using bio-mimetic robots to detect pipeline anomalies. Its advantage is its insect-like crawling ability which enables it to move effectively along complex networks of pipelines. Their work guarantees complete pipeline coverage for pipeline integrity monitoring. Sujatha et al. [START_REF] Sujatha | Robot based smart water pipeline monitoring system[END_REF] also worked on pipeline monitoring using robots. In this work, a prototype robot aimed at providing continuous and real-time pipeline monitoring in an autonomous manner was built and integrated with a mobile application. The test conducted shows its practicability and promises to be robust and customisable. Kim et al. [START_REF] Kim | Spamms: A sensor-based pipeline autonomous monitoring and maintenance system[END_REF] in their work developed a sensor-based network system for monitoring and maintaining pipeline networks. The system was implemented with the combination of Radio Frequency Identification (RFID) sensors-based localisation technique, MICA-based mobile sensors and topology-aware robots with multi-sensor functions and actuators. It is expected to detect and report anomalies in pipelines before the occurrence of failure and provide recovery or repairs with the help of the robotic agents. The authors supported their ideas by conducting several experiments and argued that their approach is more cost-effective and scalable than other monitoring systems. While the use of robots in pipeline monitoring is gaining momentum, it is still at an early stage and is yet to be fully adopted. This could be because of the complexity of their implementations and their performance capability for large-scale or industrial purposes.
Non-intrusive or statistical techniques are also used in pipeline monitoring. Beushausen et al. [START_REF] Beushausen | Transient leak detection in crude oil pipelines[END_REF] worked on the detection of transient leaks in crude oil pipelines using a statistical leak detection method analysing the pressure, a modified volume balance and the flow of the crude oil. While some transient leaks were successfully detected in the pipeline, the localisation error was up to 10 km. Other challenges noted by the authors include discrepancies in the flow meter, bandwidth limitations and the effects of the various types of crude oil transported in the pipeline. Santos et al. [START_REF] Santos | A sensor network for non-intrusive and efficient leak detection in long pipelines[END_REF] proposed a non-intrusive leak detection method for liquid pipelines using both transit-time and Doppler ultrasonic flow meters. Simulations were carried out to estimate the effects of air bubbles on the efficiency of the proposed system in terms of detection accuracy and suitability.
In addition, Ostapkowicz, in his work [START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF], demonstrated two non-intrusive leakage detection techniques based on pressure gradient and negative pressure wave, i.e. GM and NPWM. Experimental results obtained by measuring the pressure and speed of the transmission fluid showed that both methods detect and locate leaks. The GM may be more energy efficient as a result of its low sampling rate. However, lesser accuracy is observed when compared to NPWM.
Karray et al. [58] presented a water pipeline monitoring technique based on a leak detection predictive Kalman Filter, a modified time difference of arrival, and a System-on-Chip wireless sensor node. The algorithm incorporates data filtering, preprocessing, and compression aimed at reducing the energy consumption of the nodes. Mirzaei et al. [START_REF] Mirzaei | Transient response of buried oil pipelines fiber optic leak detector based on the distributed temperature measurement[END_REF] in their work tested the efficiency, precisely the response time, of leak detection using a Raman Optical Time Domain Reflectometer and a Brillouin Optical Time Domain Amplifier. Leak localisation is achieved using the temporal difference of back-scattered laser pulses. Their results showed that the total mechanical response time is several minutes, so they proposed a novel detection scheme based on these results. Some of the statistical methods, such as GM and NPWM, are highly efficient in terms of accuracy in the detection and localisation of LAS. However, they are implemented in centralised systems such as supervisory control and data acquisition, making them susceptible to a Single Point of Failure (SPOF).
Satellites are also used to remotely monitor oil and gas pipelines through image sensing, such as radar or RGB images. Kostianoy et al., in their works [START_REF] Kostianoy | Operational satellite monitoring systems for marine oil and gas industry[END_REF][START_REF] Kostianoy | Satellite monitoring of the nord stream gas pipeline construction in the gulf of finland[END_REF][START_REF] Kostianoy | Satellite monitoring systems for shipping and offshore oil and gas industry in the baltic sea[END_REF], show the efficiency of using satellite monitoring systems for distinguishing anthropogenic and natural effects of pipeline, port and terminal constructions, as well as the ecological impact on the sea within the vicinity of the constructions. Integrated with a sea track web model, the environmental impact of the oil rig was also assessed with an analysis of several impact measurements, including their spatial and temporal characteristics. Other works on the usage of satellites for pipeline monitoring include [START_REF] Smith | Gas pipeline monitoring in europe by satellite sar[END_REF][START_REF] Dedikov | The russian small satellite for hyperspectral monitoring of gas pipelines[END_REF]. In the latter, a hyperspectral satellite with the capacity to search for chemicals, gases and other dynamic processes on land is proposed; it also enables early detection of pipeline degradation. While satellites are especially suitable for difficult terrains like the sea and no-fly zones, they tend to be expensive to implement.
Human-based monitoring systems are also used for pipeline systems. These include using observers in vehicles or helicopters to visually detect LAS in pipelines. Unmanned Aerial Vehicles, such as drones fitted with multispectral sensors, including infrared and RGB cameras that detect changes unnoticed by the human eye, are also used. Community-based surveillance and security personnel are also used to detect leakages, especially those due to third-party interference. Although these are used to a certain degree, studies [START_REF] Khan | A reliable Internet of Things based architecture for oil and gas industry[END_REF], [START_REF] Shoja | A study of the Internet of Things in the oil and gas industry[END_REF], [START_REF] Aalsalem | Wireless sensor networks in oil and gas industry: Recent advances, taxonomy, requirements, and open challenges[END_REF] have shown that such techniques are ineffective, inflexible, expensive or impractical, as in some cases failure detection takes as long as five days to multiple weeks. Such long detection times result in greater loss to the operators and a more severe impact on the environment.
In the following subsection, we summarise various leakage detection systems by their strength and weaknesses.
Summary and Conclusion
Thus far, we have discussed several LDTs, sensors and systems that can be broadly classified into external or internal systems, hardware- or software-based systems, and human- or non-human-based systems. In this subsection, we enumerate the advantages and disadvantages of the LDTs in Table 2.2.
Hardware, wired: detects leakages in minutes depending on the leak size; covers a maximum of 50 km; the tube must be fitted in close proximity to the pipeline; expensive; prone to false alarms.
Table 2.2: Summary of pipeline monitoring sensors and systems
However, the advent of IoT-based solutions using WSNs has shifted the focus of pipeline failure detection and localisation. As a more effective approach, in the next subsection we look at pipeline monitoring using IoT- and WSN-based solutions.
Industrial IoT and its application to pipeline monitoring
IoT has been used to address different challenges in multiple industries, such as health, supply chain, agriculture, aviation and others. This is made possible with the use of several sensors, such as pressure, temperature, humidity, or proximity sensors, in Wireless Sensor Networks (WSN), built on several architectural frameworks. Wireless sensors also require strategic placement to cater for the specific needs of IoT systems, such as energy consumption, scalability, latency, reliability or robustness to failures. Data management, in terms of placement, processing and analytics, is also a critical factor to consider in IoT-based systems due to IoT devices' limited computational and memory capacity.
Thus, in this section, we present some existing works on the architectures of IoT systems in subsection 2.2.1. We also discuss wireless sensor placements on pipelines for failure detection and localisation in subsection 2.2.2. In subsection 2.2.3, we examine existing works on WSN-based pipeline monitoring. Then we go into various works and issues on data management in subsection 2.2.4 and the associated processing in subsection 2.2.5. Finally, we summarise the challenges in IoT-based pipeline monitoring and the applied approaches in subsection 2.2.6.
Architectural Design of IoT Systems
In typical communication systems, the system design spans the architectural design of the network topology and the communication protocols, to mention a few. The specification and design phase of any system is the fundamental framework on which the system is built. It sets the basis for analysis against which we can measure the performance of the system. In IoT-based systems in particular, the performance of the system is also dependent on such architectural design, the communication protocol, and the data management design, among others. However, unlike legacy communication systems, there has not been standardisation of IoT's different elements, for example the number of layers that form an IoT system and what each layer signifies or constitutes. Many works rely on application-specific designs catering to each application's different needs and peculiarities, different from the one shown in Fig. 3.1. El Hakim, in his work [START_REF] Hakim | Internet of things (iot) system architecture and technologies[END_REF], introduced a generic five-layered architecture consisting of sensors or controllers in layer one, gateway devices in layer two, a communication network in layer three, software for data analysis and translation in layer four, and end application services in layer five. Yelmarthi et al. also presented a four-layer architecture consisting of sensors, nodes, a regional hub and cloud servers from layers one to four, respectively, targeting low-power IoT-based systems. Also, Khan et al., in their work [START_REF] Khan | A reliable Internet of Things based architecture for oil and gas industry[END_REF], proposed a three-layered centralised architecture consisting of the server control centre, the gateway and the smart object layer for a reliable system. In another recent work [START_REF] Rahimi | A novel iot architecture based on 5g-iot and next generation technologies[END_REF], Rahimi et al. proposed an eight-layered architecture for 5G and future-generation networks, consisting of the physical device, communications, edge computing, data storage, management service, application, collaboration and services, and security layers. These works have presented frameworks for IoT-based systems ranging from three to eight layers, which depicts the high variance in the factors that affect the choice of system design. Hence, in chapter 3, we take a deeper look at these factors and propose our system design.
In the next subsection, we present various sensor placement strategies in the sensor layer of the IoT architecture.
Wireless Sensor Placement Strategies
In addition to the choice of sensors adapted for infrastructural monitoring, their placement is notably important. The sensor placement strategy plays a crucial role in the monitoring framework's accuracy, energy consumption and sensitivity. Sela et al. [START_REF] Perelman | Sensor placement for fault location identification in water networks: A minimun test approach[END_REF] worked on optimal sensor placement for detecting failures on water pipelines. They used a preliminary method that approximates the solution of the minimum set cover problem based on the Minimum Test Cover (MTC) approach, and proposed a novel augmented greedy MTC-based algorithm. Tests conducted on a water network show that the algorithm is about three to eight times faster than the other approach. In their other work, Sela et al. [START_REF] Sela | Robust sensor placement for pipeline monitoring: Mixed integer and greedy optimization[END_REF] proposed robust sensor placement in a pipeline network using robust greedy approximation (RGA) and robust mixed integer optimisation (RMIO). Both propositions enhance the nominal GA and MIO with robustness and redundancy parameters. In most of their simulations conducted on a water pipeline network, MIO and RMIO outperformed the compared versions, i.e. the robust sub-modular function optimisation (RSFO), MIO and GA. Both works assume that a single sensor is able to detect failures in multiple pipelines, making them susceptible to SPOFs. The work in [START_REF] Berry | Sensor placement in municipal water networks with temporal integer programming models[END_REF] is also based on mixed integer programming (MIP) for sensor placement. Analysis of the results using EPANET, SNL-1 and SNL-2 showed average consensus within the range of 86.5% to 100% with a maximum standard deviation of 1.3%.
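To make the set-cover intuition behind MTC-style placement concrete, the following is a minimal sketch of the standard greedy set-cover heuristic in Python; the candidate locations and the fault scenarios each location can detect are invented for illustration and are not data from the cited studies.
# Greedy set-cover heuristic: pick candidate sensor locations until every
# fault scenario is detectable by at least one chosen sensor.
def greedy_sensor_placement(coverage):
    """coverage: dict mapping candidate location -> set of detectable fault scenarios."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # Choose the candidate that detects the most still-uncovered scenarios.
        best = max(coverage, key=lambda loc: len(coverage[loc] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining scenarios cannot be covered by any candidate
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Toy example: four candidate locations, five hypothetical fault scenarios.
coverage = {
    "L1": {"f1", "f2"},
    "L2": {"f2", "f3", "f4"},
    "L3": {"f4", "f5"},
    "L4": {"f1", "f5"},
}
print(greedy_sensor_placement(coverage))  # ['L2', 'L4'] covers all five scenarios
The greedy choice gives the classical logarithmic approximation guarantee for set cover, which is why it appears, in augmented form, in the MTC-based work cited above.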
On the other hand, Krause et al. [START_REF] Krause | Efficient sensor placement optimization for securing large water distribution networks[END_REF] also worked on robust sensor placement for water networks to avoid intrusions. However, they optimised sensor placement using minimax criteria instead of MIP. Experimental results include the extension to multi-criterion optimisation and efficient placement for large networks, achieving up to 91% of the maximum placement score. Sarrate et al. [START_REF] Sarrate | Sensor placement for fault diagnosis performance maximization in distribution networks[END_REF] studied the impact of sensor accuracy based on infrastructural analysis using the isolability index. Application to leakage detection in water networks shows that fault detection is improved and the complexity of mesh connectivity in an extensive network is removed. Boubrima et al. [START_REF] Boubrima | Optimal deployment of wireless sensor networks for air pollution monitoring[END_REF], in their work on the optimal deployment of sensors, ensured minimal cost for air pollution monitoring. Two approaches based on an integer programming formulation using real air pollution dispersion were proposed, i.e. a basic model and an enhanced model (an extension of the basic model). Both formulations aim at finding minimum-cost deployments by combining network coverage, air pollution dispersion and connectivity constraints in a centralised manner. Experimental results showed a considerable reduction in cost.
Guo et al. [START_REF] Guo | Sensor placement for lifetime maximization in monitoring oil pipelines[END_REF] proposed a sensor placement on an oil pipeline to address the sensors' lifetime. They achieved this by taking into account the maximum transmission range of each sensor node: the sensor spacing is derived from the length of the pipeline and the maximum transmission range, so that the lowest number of sensors is deployed based on their communication range. This approach to sensor placement may significantly amplify the impact of a node failure on event detection, since the failure of any intermediary node puts the neighbouring nodes out of reach. Elnaggar et al. [START_REF] Elnaggar | WSN in monitoring oil pipelines using ACO and GA[END_REF] worked on sensor placement in a WSN for oil pipeline monitoring to reduce the impact of energy consumption using ant colony optimisation and a genetic algorithm. The simulations conducted on a linear pipeline segment indicate that ant colony optimisation outperformed the genetic and greedy algorithms in terms of the communication level. However, both approaches show similar behaviour in terms of WSN lifetime optimisation with constraints. Also based on sensor placement in a linear pipeline is the work of Al Baseer et al. [START_REF] Albaseer | Cluster-based node placement approach for linear pipeline monitoring[END_REF]. They proposed sensor deployment and grouping based on an adaptive clustering algorithm for intermediate data delivery aimed at reducing energy consumption. Evaluation of the simulation results shows a significant energy reduction, between 300% and over 500%, through a load-sharing mechanism among the cluster heads, and up to 62% better than heuristic approaches. Further evaluation using experimental studies shows that their approach conserves up to 50% more energy than the compared scheme. Li et al. [START_REF] Li | Deployment-based lifetime optimization for linear wireless sensor networks considering both retransmission and discrete power control[END_REF] proposed a generic sensor placement for sensor network optimisation utilising retransmission and discrete power control for single and double-tier uniformly and non-uniformly distributed WSNs.
In the next subsection, we discuss existing works on pipeline monitoring using WSN.
WSN-based pipeline monitoring
Recent approaches to pipeline monitoring take advantage of the advent of WSN and IoT-based solutions. The following works present various approaches to their application for pipeline monitoring and the associated challenges. Yelmarthi et al. [START_REF] Yelmarthi | An architectural framework for low-power IoT applications[END_REF] proposed a four-layered architectural framework for low-power IoT applications, spanning the sensor layer to cloud servers. It comprises easily implementable wired and wireless sensor networks with minimal resources for multiple applications. Its applicability in diverse applications and its low power consumption were experimentally demonstrated in damage detection and in the analysis of posture and physical activities. Khan et al. [START_REF] Khan | A reliable Internet of Things based architecture for oil and gas industry[END_REF] also proposed a three-layered IoT architecture for all the sectors of the OGI. In each layer, they considered reliability and robustness through a hierarchical design. In this structure, interconnection and collaboration enable performance enhancement through reliable communication and intelligent decision-making while allowing predictive maintenance.
Sadeghioon et al. [START_REF] Sadeghioon | Water pipeline failure detection using distributed relative pressure and temperature measurements and anomaly detection algorithms[END_REF] proposed a novel algorithm for detecting leakages in underground pipelines through the measurement of relative pressure and temperature obtained from a WSN. In a test conducted, the detection algorithm showed high accuracy in leak detection and sensitivity compared to other threshold-based methods. Also on leak detection is another work of Sadeghioon et al. [START_REF] Sadeghioon | Smartpipes: Smart wireless sensor networks for leak detection in water pipelines[END_REF]. This research presented a comparative pressure method based on force-sensitive resistors for ultra-low-power wireless sensor networks. Experiments to test this technique were conducted in the laboratory and in the field, where leakage was simulated and detected. Saeed et al. [START_REF] Saeed | Reliable monitoring of oil and gas pipelines using wireless sensor network (wsn) -remong[END_REF], on the other hand, worked on a reliable WSN-based system for monitoring oil and gas pipelines that span long distances (REMONG). REMONG specifically focused on how data is sensed and communicated over a sizeable geographical area, with the aim of reducing energy consumption. A preliminary test of the energy consumed in communication showed promising results. Yunana et al. [START_REF] Yunana | An exploratory study of techniques for monitoring oil pipeline vandalism[END_REF] presented a comparative analysis of techniques for monitoring pipeline vandalism. Several monitoring techniques were compared, such as satellite, visual, UAV and WSN; WSN was found more suitable for its low power consumption and cost-effectiveness compared to the other techniques. Azubogu et al. [START_REF] Azubogu | Wireless sensor networks for long distance pipeline monitoring[END_REF], on the other hand, proposed a WSN-based pipeline monitoring technique. In their work, they discussed several existing monitoring techniques and compared them to WSNs in terms of architectural design, energy consumption and maintainability. Henry et al. [START_REF] Henry | Wireless sensor networks based pipeline vandalisation and oil spillage monitoring and detection: Main benefits for nigeria oil and gas sectors[END_REF], in their work, enumerated the advantages of using WSN for monitoring pipelines in Nigeria, especially against vandalisation and oil spillage. They focused on different aspects of WSN, including features, challenges and how other applications have utilised WSN. In the work of Obodoeze et al. [START_REF] Obodoeze | Wireless sensor network in niger delta oil and gas field monitoring: The security challenges and countermeasures[END_REF], they focused on the security challenges related to the use of WSN for pipeline monitoring in the Niger Delta region of Nigeria. Owing to the prevalent vandalisation of pipelines, including monitoring equipment, in that region, they proposed several countermeasures, i.e. integration of WSN with a Wi-Fi network, installation of CCTV, or smart actuators, to circumvent this problem.
Common to all IoT-based applications is also the choice of communication protocol. Several protocols, such as NB-IoT, BLE and LPWAN, exist to cater to the constrained nature of the sensors used in such applications. Thus, we examine some choices of communication protocols in the following works in terms of their efficiency. Jamali-Rad et al. [START_REF] Jamali-Rad | IoT-based wireless seismic quality control[END_REF] examined the usability of a Low Power Wide Area Network (LPWAN) IoT-based system for seismic quality control. Tests conducted on LoRaWAN include packet loss observation over a small-scale, single-link network. The performance in the presence of mobility and interference was also tested. Results show that LoRaWAN performs well against Doppler effects and in the presence of interference from LTE, GSM and UMTS at the gateway. Similarly, Rudes et al. [START_REF] Rudeš | Towards reliable IoT: Testing lora communication[END_REF] conducted a small-scale test on the reliability of LoRaWAN for IoT applications. The test was done with varying parameters such as distance, packet size and terrain for wildlife detection and precision agriculture. Although there was a lack of optical visibility between the central station and some nodes, the obtained results were promising. Ratasich et al. [START_REF] Ratasich | A roadmap toward the resilient Internet of Things for cyber-physical systems[END_REF] worked on how to ensure a resilient and secure IoT for cyber-physical systems (CPS). In their work, they enumerated various state-of-the-art IoT failure detection and recovery techniques for CPS. Using the presented guidelines, they demonstrated their technique on a mobile autonomous system application by using a self-healing approach through structural adaptation (SHSA) integrated into the fog node. They capitalised on the redundancy properties and knowledge base of SHSA to monitor, diagnose and recover from anomalies in the communication network.
Sensor devices generate a vast amount of heterogeneous and complex data, hence the term big data. In the IoT value chain, data management and its applications hold between 30% and 60% of the value [START_REF] Rebbeck | M2m and internet of things (IoT). opportunites for telecom operators[END_REF]. This value lies in the insights obtained through the analysis and optimisation of the generated data. Given the nature of these data and the several constraints of IoT nodes (i.e. memory capacity, computational ability, the limited bandwidth of IoT communication protocols, etc.), efficient data management in terms of placement, storage, processing and analytics is imperative. In the following subsection, we review works addressing these challenges.
Data and Service Placement
The increasing volume and evolving structure of data and applications in IoT-based systems have necessitated alternative storage, access and placement methods as opposed to classical database management systems (DBMS). Traditionally, data and databases are managed with relational DBMS like Oracle and MySQL, characterised by a static schema with records stored in tables [START_REF] Celesti | A nosql graph approach to manage IoTaaS in cloud/edge environments[END_REF]. Research [START_REF] Fatima | Comparison of sql, nosql and new sql databases in light of internet of things -a survey[END_REF][START_REF] Rautmare | Mysql and nosql database comparison for iot application[END_REF][START_REF] Mahgoub | Suitability of nosql systems -cassandra and scylladb -for IoT workloads[END_REF] shows that these DBMSs are not scalable and are not suitable for the management of IoT data due to the real-time nature of some IoT applications and how the data are collected, i.e. the frequency of sampling and heterogeneity. Alternative choices are NoSQL and the so-called NewSQL. NoSQL is scalable, accessible and supports unstructured data, making it more suitable for cloud-based IoT applications or cloud computing in general. Cloud computing provides flexibility and an alternative to relational database storage and maintenance, often as platform as a service (PaaS), software as a service (SaaS) and infrastructure as a service (IaaS).
Although cloud computing makes computer resources readily available, the delay for real-time applications, as perceived by the users, continues to be a problem. Content Distribution Networks (CDN) or fog computation are some of the approaches currently adopted to address this QoS issue. Fog computation extends the cloud computing paradigm to the edge of the network, i.e. closer to the users. This enables improved accessibility to the end users' resources, i.e. storage, computation and others. It can also be used to address some of the limitations of centralised cloud computing [START_REF] Patel | On using the intelligent edge for IoT analytics[END_REF], i.e. by significantly increasing the scalability of the network through the reduction of latency and computational overhead at the cloud server, enhancement of real-time operations, to mention a few.
Despite the many advantages of fog-enabled architectures, misplacing data in the fog nodes can negatively impact the system's overall performance. Hence, Naas et al. [START_REF] Naas | iFogStor: An IoT data placement strategy for fog infrastructure[END_REF] proposed a runtime data placement algorithm based on criteria such as the nature of the data, the node behaviour and the location. The results show that overall latency was reduced by 86% compared to cloud solutions and by 60% compared to simple fog solutions. Aral et al. [START_REF] Aral | A decentralized replica placement algorithm for edge computing[END_REF], on the other hand, worked on a replica placement algorithm taking into account constraints such as the size, location and priced storage. The objective of this strategy is to reduce latency in fog networks. Their work yielded a 14% to 26% reduction in latency, depending on the trade-off, compared to the absence of replicas. Additionally, Shao et al. [START_REF] Shao | A data replica placement strategy for IoT workflows in collaborative edge and cloud environments[END_REF] also worked on the placement of data replicas for IoT workflows in both fog and cloud environments. In their study, they utilised an intelligent swarm optimisation algorithm following several criteria, such as user groups, data reliability and workflows. Results show improvement in comparison to other works.
Wang et al. [START_REF] Wang | Data scheduling and resource optimization for fog computing architecture in industrial IoT[END_REF], in contrast, made use of a multi-channel optimal data scheduling policy in a four-layer fog computing architecture comprising the device, data scheduler, Jstorm and cloud layers. Big data is split into several blocks and sent to the different Jstorm clusters for processing; the Jstorm layer integrates geographically distributed fog nodes into several clusters. Simulation shows a 15% gain over other data scheduling policies.
In addition to data placement, service placement is just as crucial in a fog environment, i.e. wrongful service placement also leads to increased latency [START_REF] Velasquez | Service placement for latency reduction in the internet of things[END_REF]. Hence, Velasquez et al. [START_REF] Velasquez | Service placement for latency reduction in the internet of things[END_REF] define an IoT service placement architecture that fuses cloud and fog computing based on the system's condition and latency. In the proposed architecture, services are placed according to the user's location, the server's location and the state of the network while taking advantage of the fog environment. The placement algorithm is generic for all types of scenarios and uses three modules: a service repository, information collection and a service orchestrator.
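As a rough illustration of the latency-driven placement decisions discussed in the works above, the sketch below greedily assigns each data item to the fog node that minimises the access latency of its consumers, subject to node capacity; the topology, latencies and capacities are invented for illustration and do not reproduce any of the cited algorithms.
# Greedy latency-aware data placement: assign each data item to the feasible
# fog node with the lowest total access latency for that item's consumers.
def place_data(items, nodes, latency, capacity):
    """items: {item: (size, [consumer, ...])}; latency: {(consumer, node): ms};
    capacity: {node: free storage}. Returns {item: node}."""
    placement = {}
    # Place the largest items first so capacity is not wasted on small ones.
    for item, (size, consumers) in sorted(items.items(), key=lambda kv: -kv[1][0]):
        feasible = [n for n in nodes if capacity[n] >= size]
        if not feasible:
            continue  # a real system would fall back to the cloud here
        best = min(feasible, key=lambda n: sum(latency[(c, n)] for c in consumers))
        placement[item] = best
        capacity[best] -= size
    return placement

nodes = ["fog1", "fog2"]
items = {"temp_stream": (2, ["gw1", "gw2"]), "pressure_log": (5, ["gw2"])}
latency = {("gw1", "fog1"): 5, ("gw1", "fog2"): 20,
           ("gw2", "fog1"): 15, ("gw2", "fog2"): 4}
print(place_data(items, nodes, latency, {"fog1": 6, "fog2": 6}))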
In the next subsection, we discuss various data processing and management approaches using intelligent algorithms.
Processing, learning and analytics
On top of cloud and fog-based architectures, several other strategies exist for improving the system's overall efficiency, such as reliability, fault tolerance, robustness, responsiveness, scalability or intelligence. These include data dissemination strategies, intelligent algorithms, middleware and others. For example, the use of a microservice architecture can considerably increase the scalability of an IoT network [START_REF] Sun | An open IoT framework based on microservices architecture[END_REF]. Unlike its monolithic counterpart, it allows distributed services while limiting the functional dependency of the nodes. Middleware can also be used to provide better functionality in terms of reliability, scalability, availability, security or communication [START_REF] Mohamed | Smartcityware: A service-oriented middleware for cloud and fog enabled smart city services[END_REF]. In that work, the importance of service-oriented middleware in the integration and utilisation of fog and cloud computing in a smart city context was highlighted using SmartCityWare as a service-oriented model. Used as a level of abstraction of services and components in smart city applications, it enhances the flexibility and integration of different services. Experimental results showed reduced latency for fog node lookups compared to cloud server lookups. Ozeer et al. [START_REF] Ozeer | Resilience of stateful IoT applications in a dynamic fog environment[END_REF] also demonstrated the importance of fault tolerance in fog nodes for the provision of reliable services in IoT applications by considering the dynamic, heterogeneous and cyber-physical interaction properties of the fog environment. They proposed a fault management protocol for stateful IoT applications consisting of state saving, monitoring/failure detection, failure notification/reconfiguration and decision/recovery processes. They evaluated their design in a smart home application by introducing a set of simulated events into the application, provoking failure of some of the components and verifying that recovery is as expected with respect to the physical world. Additionally, a system can be made end-to-end robust, resilient, adaptive and dynamic by using artificial intelligence/machine learning to extract the unique features of the collected data. Giordano et al. [START_REF] Giordano | Smart agents and fog computing for smart city applications[END_REF] used smart agents for the implementation of self-healing and recovery in autonomous systems through redundancy. In their work, failed or disrupted nodes are replaced with previously available but redundant nodes. According to the work done by Nawaratne et al. [START_REF] Nawaratne | Self-evolving intelligent algorithms for facilitating data interoperability in IoT environments[END_REF], data interoperability can be achieved through an intelligent algorithm that can self-evolve and adapt, learn incrementally with temporal changes and have unsupervised self-learning capability. Also, Tang et al. [START_REF] Tang | A hierarchical distributed fog computing architecture for big data analysis in smart cities[END_REF] extended the data analytic capability of cloud computing to the context of smart cities using smart pipeline monitoring prototypes.
Besides, the advent of federated learning supports machine learning at the fog/edge nodes [START_REF] Konecný | Federated learning: Strategies for improving communication efficiency[END_REF], [START_REF] Mcmahan | Federated learning: Collaborative machine learning without centralized training data[END_REF]. This also encourages better performance by bringing intelligence closer to the originating data point and limiting the amount of data that needs to be transmitted to and processed in the cloud. As a result, failures that may arise from network failure or overload are reduced. Rashid et al. [START_REF] Rashid | Wireless sensor network for distributed event detection based on machine learning[END_REF] used machine learning for leakage detection and classification in pipelines. For their experimental set-up, sensor nodes were equally spaced along the pipeline based on their communication range. Several machine learning algorithms, such as K-nearest neighbour (KNN), support vector machine (SVM) and Gaussian mixture model (GMM), were compared in terms of specificity, sensitivity and accuracy of leak size estimation. Likewise, Roy [START_REF] Roy | Leak detection in pipe networks using hybrid ann method[END_REF] worked on an artificial neural network consisting of input, hidden and output layers, used as an optimisation tool for accurately detecting leakages in pipelines.
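The kind of classifier comparison reported above can be reproduced in spirit with a few lines of scikit-learn; the synthetic pressure-drop features below are an assumed stand-in for real WSN measurements, so the accuracies it prints carry no experimental meaning.
# Minimal sketch: comparing classifiers for leak / no-leak detection on
# synthetic pressure features (an assumed stand-in for real sensor data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Feature vector per sample: [pressure drop, NPW amplitude]; label 1 = leak.
no_leak = rng.normal([0.0, 0.0], 0.3, size=(200, 2))
leak = rng.normal([1.5, 1.0], 0.5, size=(200, 2))
X = np.vstack([no_leak, leak])
y = np.array([0] * 200 + [1] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))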
In addition, employing the necessary telemetry for performance monitoring can significantly improve reliability through context management, i.e. a device is dedicated to monitoring a predefined set of metrics, such as link quality or battery level, before tasks are allocated [START_REF] Patel | On using the intelligent edge for IoT analytics[END_REF]. The proposed services can be implemented through virtualisation to further enhance system reliability. Dar et al. [START_REF] Dar | Enhancing dependability of cloudbased IoT services through virtualization[END_REF] proposed a cloud-based IoT service that ensures reliability through a framework used to define system requirements and dependability demands. A virtualisation service is called at runtime to deploy one of the predefined redundancy patterns. Experimental work showed that this method allows over 98% availability with nearly 40% randomised network failures. Downsampling transmitted data also supports reliability by limiting bottlenecks, thereby minimising the packet loss rate. This can be done through efficient data transmission algorithms, e.g. the data fusion and data prioritising algorithms proposed in [START_REF] Yu | An efficient oil and gas pipeline monitoring systems based on wireless sensor networks[END_REF] to minimise packet loss due to network overload or limited bandwidth.
Other important considerations include the information flow between the network layers. Most commonly used are the publish-subscribe (pub/sub) messaging paradigms between clients (data producers) and agents (data consumers). Ioana et al., in their work [START_REF] Ioana | Approaching OPC UA publish-subscribe in the context of UDP-based multi-channel communication and image transmission[END_REF], demonstrated the applicability of pub/sub systems in several complex scenarios, such as the Open Platform Communication Unified Architecture protocol for Industry 4.0. In particular, they showed how a multi-channel User Datagram Protocol (UDP) communication strategy for pub/sub systems enables the transmission of high-volume data, like images, in a time frame fitted for the industry. Aslam et al. [START_REF] Aslam | Investigating response time and accuracy in online classifier learning for multimedia publish-subscribe systems[END_REF] worked on adaptive methods to handle unknown subscriptions in a low-latency pub/sub model used for processing multimedia events. Their system achieved between 79% and 84% accuracy. An interesting work [START_REF] Jafarpour | CCD: A distributed publish/subscribe framework for rich content formats[END_REF], also on pub/sub systems, focused on minimising the computing and transmission costs for content subscribers based on a requested format.
Some works on latency reduction in mobile edge computing and general optimisation are based on game theory. In their work [START_REF] Garg | Heuristic and reinforcement learning algorithms for dynamic service placement on mobile edge cloud[END_REF], Garg et al. evaluated three dynamic placement approaches (greedy approximation, integer programming optimisation and learning-based algorithms) for maximal user equipment availability using minimal infrastructure. Experimental results using a drone swarm application show that while all approaches met the latency requirement, the learning-based algorithm performed better in terms of minimal variation in its solutions, providing a more stable deployment and thereby guaranteeing a reduction in infrastructural cost. Also on placement optimisation is the work of Ting et al. [START_REF] He | It's hard to share: Joint service placement and request scheduling in edge clouds with sharable and non-sharable resources[END_REF]. They worked on the optimal provisioning of edge services such as storage, communication and computational resources. Using a trace-driven simulation, they compared results obtained with the following algorithms: optimal request scheduling (ORS, as the baseline), greedy service placement with optimal request scheduling (GSP ORS) and greedy service placement with greedy request scheduling (GSP GRS). For joint service placement and request scheduling, both GSP ORS and GSP GRS, including their linear programming relaxations, achieved optimal or near-optimal solutions.
Improving data management through game-theoretic approaches is also being researched. Cai et al., in their work [START_REF] Cai | Reinforcement learning driven heuristic optimization[END_REF], proposed a Reinforcement Learning Heuristic Optimisation (RLHO) framework aimed at providing better initial values for the heuristic algorithm. They carried out a comparative analysis between their proposed algorithm and two other algorithms (Simulated Annealing and Proximal Policy Optimisation). Results obtained from experiments on the bin packing problem show that RLHO outperformed pure reinforcement learning. Islam et al. [START_REF] Islam | A game theoretic approach for adversarial pipeline monitoring using wireless sensor networks[END_REF] and Rezazadeh et al. [START_REF] Rezazadeh | Applying game theory for securing oil and gas pipelines against terrorism[END_REF] worked on third-party interference on pipeline networks, especially terrorist attacks. The former proposed a Stackelberg competition-based attacker-defender model to find the equilibrium between possible attacks and pipeline security. They proved that, in an equilibrium state, the monitoring system achieves the best result by maintaining its strategy, assuming both the defender and the attacker act rationally. The latter [START_REF] Rezazadeh | Applying game theory for securing oil and gas pipelines against terrorism[END_REF] proposed another model of the problem using a two-player non-zero-sum approach, with the assumption that both players act rationally according to some chosen indices. Two approaches to solving the problem were proposed. The first is a local optimisation approach allowing a comprehensive analysis of the effects of countermeasures on attacks. The second utilises a global optimisation approach that enables the security personnel (defender) to provide a solution from the attacker's perspective. Rezazadeh et al. [START_REF] Rezazadeh | Optimal patrol scheduling of hazardous pipelines using game theory[END_REF] also worked on modelling a monitoring system for pipeline security using a Bayesian Stackelberg game. In their work, they proposed a time- and distance-discretisation-based scheduling policy. This framework enables the hierarchical ranking of security risks, allowing the usage of different patrol paths.
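For intuition about the Stackelberg setting used in these works, the sketch below solves a tiny finite leader-follower game by enumeration: the defender commits to a patrol strategy, the attacker best-responds, and the defender picks the commitment that maximises its payoff given that response. The payoff matrices are invented for illustration and do not model any real pipeline network.
# Enumerative solution of a tiny Stackelberg (leader-follower) game with pure
# strategies: the defender commits first, the attacker then best-responds.
def stackelberg_pure(defender_payoff, attacker_payoff):
    best = None
    for i, row in enumerate(defender_payoff):
        # The attacker observes the defender's commitment i and best-responds.
        j = max(range(len(row)), key=lambda col: attacker_payoff[i][col])
        if best is None or row[j] > best[2]:
            best = (i, j, row[j])
    return best  # (defender strategy, attacker response, defender payoff)

# Rows: defender patrols segment A or B; columns: attacker targets A or B.
defender = [[4, -2],   # illustrative payoffs only
            [-3, 5]]
attacker = [[-4, 2],
            [3, -5]]
print(stackelberg_pure(defender, attacker))  # (0, 1, -2): patrol A, attacker hits B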
So far, we have discussed multiple works towards efficient pipeline monitoring, data management and processing. In the next subsection, we will summarise the different challenges in the adoption of WSN and IoT-based solutions for infrastructural monitoring.
Summary and Conclusion
A key consideration in utilising IoT-based solutions for infrastructural monitoring is the set of limitations inherent to any IoT-based system. In the preceding subsections, we elaborated on many of these issues and the works done to circumvent them. Therefore, we summarise these challenges and possible solutions in this subsection.
Challenges
System Design and Specifications
Several options exist in the design and deployment of IoT-based systems for monitoring industrial processes. Hence, in this chapter, we discuss the design choices of our system compared to generic standards and existing models in other application areas. In section 3.1, we discuss some generic and application-based architectural designs for IoT systems. Additionally, we present some commonly used communication protocols in these systems. Section 3.2 shows and discusses the metrics on which some of the design decisions are based. Finally, in section 3.3, we introduce our proposed system architecture and its various components.
Background
The architectural layout, communication protocol, and other elements of a communication system have several impacts on the system. They can affect performance metrics such as robustness, reliability, and maintainability, to name a few. This section introduces some common architectures and communication protocols for IoT systems and their impacts.
IoT Generic and Application-based Architectures
We present a generic IoT architecture in Fig. 3.1 as proposed and explained by the International Telecommunication Union in their recommendation report [START_REF]Overview of the internet of things[END_REF]. This architecture has four layers consisting of the device, network, service and application supports, and application layers. Each layer is further embedded with management and security capabilities.
Figure 3.1: Generic IoT Architecture [START_REF]Overview of the internet of things[END_REF]
At the device layer, two general categorisations exist, i.e. device and gateway capabilities. The device capabilities include interacting with the communication network directly, indirectly or through ad-hoc networking. The gateway capabilities, in addition to providing indirect communication, can support multiple interfaces and protocol conversion. The network layer mainly deals with networking connectivity through the provisioning of control functions, mobility management and others. It also deals with the transport aspects for services and application data, to mention a few. At the service support and application support layer, two categories of capabilities are present, i.e. the generic and the specific support capabilities. While the former includes supports such as data processing or storage, the latter deals with supports unique to different applications. Finally, the application layer contains all applications for the different subject areas where IoT-based solutions are applied. The management capabilities cut across the four layers, dealing with the network topology, traffic control and the general management of all devices. The security capabilities, also spanning all four layers, include authentication, authorisation and data confidentiality across these layers.
The generic architecture presented in Fig. 3.1 aims to cover all the aspects of an IoT-based system. A typical system includes devices, which could act as sensors, gateways or both, that are used to perceive the physical world. In addition, the networking aspects, such as the communication between these devices for data transfer and information interchange, are imperative. As such, several communication protocols are considered in this aspect, typically catering to constrained devices like IoT-based devices or to specific application scenarios. Various specifications and choices are made based on the application or the industry in which the IoT-based system is deployed.
We have also shown in chapter 2, by presenting multiple existing works, that there is no unified architectural design that has been generally adopted for IoT-based monitoring systems. Most architectural frameworks depend on various factors, including the use case and the specific outcome required. Paramount in the design choices, however, are the communication protocols. As such, we propose a system design considering factors such as the industry of application, the availability of protocols in the place of deployment (POD), the existing communication infrastructure in the POD and the possible integration with such infrastructure. The design aspect of our system is mainly aimed at further enhancing its performance in several respects, as will be seen later. The design aspect of this work is part of our published work in [START_REF] Ahmed | Resilient IoT-based Monitoring System for Crude Oil Pipelines[END_REF]. Thus, in the following subsection, we briefly discuss some communication protocols used in IoT systems. Other factors affecting the design or deployment of IoT-based systems can be found in chapter 2.
Wireless Communication Protocols for IoT
IoT/WSN sensor nodes are by nature constrained devices limited by their computational ability, memory capacity, energy consumption, and security and privacy capabilities, among others. Existing cellular communication networks such as 3G, 4G and LTE do not provide energy-efficient utilisation for IoT devices [START_REF] Mahmoud | A study of efficient power consumption wireless communication techniques/ modules for internet of things (IoT) applications[END_REF]. As shown in Fig. 3.2, other wireless technologies and standards currently exist that meet the requirements of IoT-based systems, differing mainly by their communication range, which can span from zero (proximity) to up to a hundred kilometres, and by their throughput. Such wireless technologies are typically categorised broadly as short-range and long-range protocols.
The short-range communication protocols include IEEE 802.15.4 (Zigbee), IEEE 802.15.1 (Bluetooth LE), 6LoWPAN, Wi-Fi and others. Zigbee and Bluetooth LE, shown in Fig. 3.2, are both examples of WPAN with a maximum transmission range of about 100 m. The data rate of Zigbee is approximately 250 kbps, and it has an application throughput of 20 kbps [START_REF]Iot device standards[END_REF]. Zigbee also uses a mesh topology for data transmission. 6LoWPAN, similar to Zigbee, also utilises a mesh network topology with a throughput of 20 kbps and does not make use of a battery. Conversely, Bluetooth LE has a point-to-point topology in a master-slave setting, where the master is responsible for communication management and the slave for the execution of commands. In addition, Bluetooth LE uses a coin cell battery which can last up to a couple of years.
The long-range protocols, i.e. Low Power Wide Area Network (LPWAN) protocols, can be further categorised into licensed and unlicensed LPWAN [START_REF] Mekki | A comparative study of LPWAN technologies for large-scale IoT deployment[END_REF]. LoRaWAN and Sigfox are examples of unlicensed LPWAN, while the 3rd Generation Partnership Project (3GPP) NB-IoT is a licensed LPWAN. LoRaWAN operates in different radio bands across continents, e.g. it uses 868 MHz in Europe. With a low data rate, the throughput depends on several factors, such as the size of the transmitted payload.
Recall that this work focuses on detecting and localising leakages using an efficient IoT-based LDMS for a pipeline network spanning several thousands of kilometres in Nigeria. To design such a system, we consider some of the performance-affecting factors of the communication protocols in terms of coverage and availability in the POD, as well as the energy consumption, to ensure the longevity of the design. Thus, Table 3.1 summarises the properties of some commonly used and new communication protocols for IoT-based systems.
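As an illustration of how payload size and spreading factor drive LoRaWAN throughput, the following sketch implements the time-on-air formula from Semtech's LoRa modem design guide; the parameter defaults are assumptions for a typical EU868 configuration and the printed values are indicative only.
# Sketch of LoRa time-on-air (Semtech design-guide formula); longer payloads
# and higher spreading factors quickly reduce the achievable throughput.
import math

def lora_time_on_air(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                     preamble_symbols=8, explicit_header=True, low_dr_opt=False):
    t_sym = (2 ** sf) / bw_hz                      # symbol duration in seconds
    t_preamble = (preamble_symbols + 4.25) * t_sym
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_opt else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 - 20 * ih   # CRC enabled
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return t_preamble + n_payload * t_sym

for sf in (7, 10, 12):
    toa = lora_time_on_air(20, sf=sf)
    print(f"SF{sf}: {toa * 1000:.1f} ms on air for a 20-byte payload")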
In the next section, we discuss the design metrics considered for our system and on which the performance of the system will be evaluated in the following chapters.
Design Metrics and Considerations
A mission-critical IoT system consists of heterogeneous devices, data and applications whose failure may cause highly impacting consequences for the environment, public services and others. Additionally, it requires high availability, robustness and reliability, to mention a few [START_REF] Ciccozzi | Model-driven engineering for mission-critical IoT systems[END_REF]. Based on this definition, an IoT-based pipeline infrastructure monitoring system can be categorised as a mission-critical IoT system. Thus, we aim to design an end-to-end resilient IoT-based monitoring and detection solution that is robust to failure and efficient in terms of energy consumption and coverage. The system will also address some of the identified challenges in the pipeline infrastructure of the oil and gas midstream sector. These challenges include the high false alarm rates associated with traditional LDMS, complex or computationally expensive detection techniques, the SPOF related to centralised systems, poor detection and localisation accuracy, and high energy consumption, as enumerated in the following works [START_REF] Sowinski | Analysis of the impact of pump system control on pressure gradients during emergency leaks in pipelines[END_REF][START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF][START_REF] Kim | Spamms: A sensor-based pipeline autonomous monitoring and maintenance system[END_REF][START_REF] Perelman | Sensor placement for fault location identification in water networks: A minimun test approach[END_REF][START_REF] Sadeghioon | Smartpipes: Smart wireless sensor networks for leak detection in water pipelines[END_REF].
To design such a resilient IoT-based monitoring system, we consider the challenges of pipeline infrastructure, the requirements of a mission-critical IoT system, and the existing solutions and protocols introduced in 3.1.
Therefore, our objectives for the system design can be summarised as follows:
1. Ensure reliability and robustness.
2. Ensure coverage and connectivity.
3. Ensure highly accurate leakage detection and localisation.
4. Minimise energy consumption.
These objectives also served as the metrics of evaluation for our system. In the following subsections, we explain some of the design criteria and common approaches to their design.
Reliability and robustness
By definition, a resilient system is a system that is dependable in the presence of all types of faults, with the ability to evolve or adapt to different situations. It should be scalable, available, safe, maintainable and reliable. In addition, a robust system is a system that can continue to function in the presence of stochastic interference [START_REF] Ratasich | A roadmap toward the resilient Internet of Things for cyber-physical systems[END_REF]. Note that failures in pipeline infrastructure cannot be predetermined, i.e. they are stochastic in nature. In addition, they often include third-party interferences resulting in the disruption of installed monitoring systems. Thus, in our case, the key challenges to address are the availability and reliability of the system. Several techniques have been used in system designs to improve the availability and reliability of a system, such as node and channel redundancy. The redundancy technique can significantly improve recovery time and infrastructure options when a failure occurs. Although it incurs an extra cost, it allows continuous operation and delivery of service in the event of failure [START_REF] Ratasich | A roadmap toward the resilient Internet of Things for cyber-physical systems[END_REF]. Other robustness techniques include employing fail-silent security measures to detect unwanted interference in the system. In a distributed system, the degree of distributiveness, application-level checkpoints and the architectural design are also used to provide resilience [START_REF] Matni | Resilience in large scale distributed systems[END_REF][START_REF] Correia | How to tolerate half less one byzantine nodes in practical distributed systems[END_REF][START_REF] Cappello | Toward exascale resilience[END_REF][START_REF] Liu | Architectural design for resilience[END_REF]. A hierarchical and distributed network architecture can also allow for a scalable network without substantially affecting performance metrics like throughput, latency and energy efficiency, among others.
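A back-of-the-envelope way to see why node redundancy improves availability: if node failures are independent, a function replicated on k nodes is unavailable only when all k replicas are down at once. The sketch below uses an assumed per-node availability of 0.95, purely for illustration.
# Availability of a function replicated on k independent nodes:
# it is unavailable only if every replica is down simultaneously.
def replicated_availability(node_availability, replicas):
    return 1.0 - (1.0 - node_availability) ** replicas

for k in (1, 2, 3):
    a = replicated_availability(0.95, k)   # 0.95 is an assumed per-node value
    print(f"{k} replica(s): {a:.4%} available")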
Coverage and connectivity
Coverage can be considered in two aspects: the physical or event aspect and the communication aspect. The physical aspect, i.e. detecting changes in the monitored quantity, is one of the crucial factors to consider in system design. For example, Boubrima et al. [START_REF] Boubrima | Optimal deployment of wireless sensor networks for air pollution monitoring[END_REF] modelled air pollution detection based on the Gaussian dispersion of pollution in the environment. With this, they jointly proposed an integer formulation model of the dispersion for coverage and connectivity constraints. Alam et al. [START_REF] Alam | Dynamic adjustment of sensing range for event coverage in wireless sensor networks[END_REF] also worked on event detection, focusing on dynamically changing the sensing range. Other methods include density-based and hot-spot-based estimation, as in [START_REF] Solmaz | Optimizing event coverage in theme parks[END_REF]. In our work, the detection of such a physical quantity is based on the changes observed when leakages occur. As discussed in more detail in chapter 4, we specifically considered the NPW and the PG generated in the event of leakage.
The second coverage aspect concerns network coverage ensuring end-to-end communication among the devices. The issue of network coverage can be examined in terms of the network topology, the communication protocol and the availability of the protocol in the place of deployment (POD).
Accuracy of leakage detection and localisation
The accuracy of leakage detection and localisation (DAL) is one of the determining factors considered by oil and gas operators when choosing leakage detection techniques or monitoring systems [START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF]. Incorrect or insufficient data can lead to high false alarm rates, as is the case with some leakage detection techniques. Thus, in addition to event coverage, we consider the correctness of the information or data collected, which can greatly influence the accuracy of the output result. Accuracy can also be improved with the use of machine learning techniques such as deep neural networks or convolutional networks. In [START_REF] Yousef | Accurate, data-efficient, unconstrained text recognition with convolutional neural networks[END_REF], a data-driven neural network model was proposed for accurate text recognition. Wu and Lin [START_REF] Wu | Inversionnet: An efficient and accurate data-driven full waveform inversion[END_REF] also used a convolutional neural network for accuracy in waveform inversion. In our work, the accuracy of DAL is obtained through the fusion of multiple detection techniques in more than a single node. A detailed explanation of the method is given in chapter 4.
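A minimal way to express this fusion idea: only raise an alarm when several nodes (or several techniques on the same node) agree. The sketch below is a generic k-out-of-n vote, not the exact fusion rule used later in our method, and the node votes are hypothetical.
# k-out-of-n decision fusion: raise a leak alarm only if at least k of the
# n collected detections agree, which suppresses isolated false positives.
def fused_alarm(detections, k):
    """detections: iterable of booleans, one per node or per technique."""
    return sum(bool(d) for d in detections) >= k

readings = {"node3": True, "node4": True, "node5": False}   # hypothetical votes
print(fused_alarm(readings.values(), k=2))   # True: two nodes agree on a leak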
Energy consumption
There has been a rise in awareness of the negative impacts of energy consumption on the environment. Hence, reducing the overall energy consumption of a system has become paramount in any system design. An IoT solution's lifespan and performance depend highly on the nodes' energy efficiency. Several measures have been taken to minimise the energy consumption of IoT devices and systems. These include the use of energy-efficient communication protocols, the architectural design of the system and the node deployment style, among others. Li et al. minimised the energy consumption of their system through an efficient data collection strategy and the system's network design. In some works, energy harvesting schemes are employed to preserve and improve the life cycle of the devices. Management of node and network activities is also employed to reduce the energy consumed in system designs.
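To make the link between node spacing and energy consumption explicit, the sketch below uses the widely cited first-order radio model, in which transmit energy grows with the square of the hop distance over short ranges; the constants are textbook defaults and are assumptions, not measurements from our deployment.
# First-order radio energy model: energy to send k bits over distance d metres.
E_ELEC = 50e-9      # J/bit consumed by transmitter/receiver electronics (assumed)
EPS_AMP = 100e-12   # J/bit/m^2 consumed by the transmit amplifier (assumed)

def tx_energy(bits, distance_m):
    return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

def rx_energy(bits):
    return E_ELEC * bits

# Halving the hop distance (more, closer nodes) cuts the amplifier term fourfold.
for d in (500, 1000, 2000):
    print(f"d = {d:4d} m: {tx_energy(8 * 50, d) * 1e3:.3f} mJ per 50-byte packet")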
System design and discussions
In our system design and specification, we put into consideration all discussions in the previous section. We begin by defining the system's architectural design, then determine the communication techniques between nodes and layers and finally, the data management aspect of the system.
Architectural design
As discussed in section 3.1, there is no consensual architecture for IoT systems. Recall that in our design, we want a reliable architecture that is efficient in terms of latency, energy consumption and reliability. Thus, we propose a three-layered hierarchical and distributed architecture. The hierarchical nature of the design can ensure a reduction in latency for the real-time detection and localisation of leakages. The distributed nature correspondingly ensures robustness to communication failures, SPOF, node failures and other types of failure that may result from third-party interference or natural occurrences on the pipelines. Figure 3.3 shows the three-layered architectural design for our system, the sensor, fog and cloud layers.
The sensor layer consists of several sensors placed on the pipeline to collect information and data about events on the pipeline. To realise this, we make use of multi-spectral sensors, i.e. sensors with the ability to collect diverse information. The information collected in our case includes the negative pressure wave (NPW), the speed of the NPW and the pressure on the pipeline. This information aids in detecting and localising leakages.
At the fog layer, we have several gateways for collating sensor information. At this level, more complex and computationally intensive tasks on the collected data are performed. For example, data management tasks such as prioritisation, classification, placement and replication, as well as automated learning, are done in addition to the detection and localisation of leakages. The cloud layer is used for carrying out long-term services such as the storage of historical data, predictive analysis, alarm notification and others. In the next subsection, we discuss the communication aspect of the proposed architecture.
Communication protocols
In section 3.1, we introduced several communication protocols suitable for IoT systems. Each protocol presents different properties regarding communication range, latency and energy consumption, as listed in Table 3.1. Particularly interesting for our use case is the LoRaWAN communication protocol. It is a long-range protocol that is power-efficient, cost-effective, scalable, and easy to maintain and configure. As demonstrated in [START_REF] Jamali-Rad | IoT-based wireless seismic quality control[END_REF], LoRaWAN is suitable for long-distance communication (more than 30 kilometres), performing well out of line of sight and over rugged terrain.
In terms of scalability, a LoRa base station has a coverage range of about 20 km, and each cell has the capacity to connect up to 50 thousand end devices, which allows for a highly scalable network compared to NB-IoT, as stated in the comparative study of LPWAN technologies [START_REF] Mekki | A comparative study of LPWAN technologies for large-scale IoT deployment[END_REF]. In addition, when considering deployment cost, a LoRa base station costs four times and fifteen times less than its Sigfox and NB-IoT counterparts, respectively [START_REF] Mekki | A comparative study of LPWAN technologies for large-scale IoT deployment[END_REF].
The end devices communicate with each other in the sensor layer. The gateways also exchange information, and the same goes for the cloud data centres. Finally, there is inter-layer communication, i.e. from the sensors to the gateway and from the gateway to the cloud, and vice versa. Typical communication in our system is shown in Fig. 3.4.
Figure 3.4: Network Architecture with Communication
Thus, we use LoRa as the long-range protocol between end devices, between gateway devices, and from end devices to gateway devices. As a backhaul for long transmissions, we employ cellular networks such as 3G, 4G and LTE (or other available choices). The choice of the short- and long-range protocols and the backhaul also depends on their availability in the POD. In our case, all these technologies are currently available in Nigeria from various operators such as MTN, Globacom and Airtel, complemented by a communication satellite for areas that are hard to reach.
The first aspect of coverage, i.e. network coverage, was addressed in the previous subsection. We also mentioned problems related to existing detection techniques and monitoring systems, such as high false alarm rates and poor accuracy in detecting and localising leakages. In the following subsection, we discuss the second aspect of coverage in our work.
Event coverage
In this subsection, we briefly introduce our approach to leakage event coverage, taking into account the reduction in false alarm rates, the accuracy of detection and robustness to failure. As shown in Fig. 3.5, node placement on the pipeline is based on the negative pressure wave generated when leakage occurs (more details in chapters 1 and 4). To ensure maximum coverage in both event detection and its transmission between sensors or between sensors and gateways, we implement the following steps:
1. Implement redundancy to allow autonomous recovery of failed or disrupted nodes with a functionally equivalent one. Network route redundancy can also improve robustness in the event of communication or node failures. The internode connections are extended to nodes in the same layer in addition to their connection to nodes in the higher or lower layer.
2. Reduce false alarms by ensuring that more than one node can detect the leakage.
3. Reduce the energy consumption: owing to the architecture's distributed nature, we first determine the leakage region, i.e. where the effects of leakage are measurable, and localise the leakage using only the sensors in this region.
In the next subsection, we discuss how the data collected from the leakage events are managed across the network architecture.
Data management
Based on the three-layered network architecture, our data and service layer are structured in three layers across the defined architecture. Fig. 3.6 shows the various data and service placements from the sensor to the cloud level. Data and services are further divided into two sub-layers, using the publish-subscribe paradigm as the model of interaction between the sub-layers. For the management, we consider various aspects, from data creation, communication, aggregation and prioritisation, and storage to data analysis. In our work, the end devices generate various data from the pipeline by periodically carrying out the measurement on the pipeline.
The generated data are made available through publication. Services can then subscribe to the different kinds of data that are of interest to them. Whenever data is published, subscribed service(s) receive the data for their various uses, i.e., the data communication aspect. The proposed architecture allows data communication or data sharing amongst predefined neighbourhood sensors at layer one. Such collaboration among the nodes enables the implementation of services such as the detection and localisation of leakages. Data sharing is also extended to the fog level to enable services such as data preprocessing, prioritisation, placement, aggregation and replication, among others.
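The publish-subscribe interaction between the two sub-layers can be pictured with a very small in-memory broker; real deployments would typically rely on an IoT messaging protocol such as MQTT, so this is only a structural sketch with invented topic names and payloads.
# Minimal in-memory publish-subscribe broker illustrating how services at the
# fog level subscribe to the data topics produced at the sensor level.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
# A leak-detection service subscribes to pressure data published by sensors.
broker.subscribe("pipeline/pressure", lambda m: print("detection service got", m))
broker.publish("pipeline/pressure", {"node": "n7", "pressure_bar": 41.8})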
While data storage cuts across the three levels (sensor, fog and cloud), each level deals with storage depending on its needs, size and implemented services, with the sensor layer at the bottom of the hierarchy. The fog layer deals with the main part of the work, i.e. the implementation of efficient fault tolerance, which will be shown in chapter 5. The cloud level, on the other hand, houses more extensive data for more intensive or complex computations.
More detailed explanations of the system design and specification are given in the following chapters as it relates to the contribution of each chapter.
Distributed Detection and Localisation of Leakages in Pipelines
In recent years, Wireless Sensor Network (WSN) and IoT-based solutions have been increasingly adopted as monitoring systems for various industrial processes. We have seen their adoption in the health industry and environmental monitoring, amongst many others. Consideration of their use in monitoring processes in the OGI has also increased. As discussed in 2.2.3, there is ongoing research to improve WSN-based solutions for the OGI. Our contribution is focused on the midstream sector, which is the transportation of oil and gas products. We mainly worked on a monitoring system for pipeline transportation of crude oil in the Nigerian oil and gas sector. While there are existing works (also discussed in 2.2.3), they are centralised. Their centralised nature makes them susceptible to a Single Point of Failure (SPOF), especially considering the high rate of third-party interference and vandalisation ([START_REF] Azubogu | Wireless sensor networks for long distance pipeline monitoring[END_REF][START_REF] Henry | Wireless sensor networks based pipeline vandalisation and oil spillage monitoring and detection: Main benefits for nigeria oil and gas sectors[END_REF][START_REF] Obodoeze | Wireless sensor network in niger delta oil and gas field monitoring: The security challenges and countermeasures[END_REF]) in our POD (Nigeria). In addition, despite the use of systems like SCADA by some of the major oil and gas companies in Nigeria, the detection and localisation time is quite high. Between 2017 and 2021, the average detection and localisation time for one of the major oil companies in Nigeria was one day. Similarly, in 2019, another company recorded an average detection and localisation time of four days. Both records are from the data provided by the DPR Nigeria for the purpose of this research.
Therefore, we design a fault-tolerant monitoring system to detect and localise leakages and failures for crude oil transportation via pipelines in a timely manner. Some of the critical factors we consider in this design are network coverage for the communication aspect, the monitoring system's sensitivity to leakage, detection accuracy, fault tolerance, and energy consumption. Thus, in this chapter, we present the first part of our work: a WSN-based distributed detection and localisation of leakages in a single horizontal transmission pipeline published in [START_REF] Ahmed | Hydillech: a WSNbased Distributed Leak Detection and Localisation in Crude Oil Pipelines[END_REF]. In the first section 4.1, we discuss factors and choices that enable distributed leakage detection and localisation. In section 4.2, we introduce the algorithm-HyDiLLEch, followed by the simulation results in section 4.3. Finally, we conclude the chapter in section 4.4.
Collective Detection of Multifarious leakages
In this section, we introduce the techniques (based on node placement and data correlation) on which we developed a distributed leakage detection technique. The global objectives are as follows:
1. Coverage: Allows optimal connection between sensors.
2. Sensitivity: Detect multi-sized leakages, i.e. small to big-sized leaks. We present the node placement and data correlation strategies in the following subsections. The other objectives are discussed in the next section.
Node Placement
Node placement strategies significantly impact the efficiency of a Leakage Detection Monitoring System (LDMS). Existing strategies for node placement include those based on the maximum transmission capacity of sensor nodes [START_REF] Guo | Sensor placement for lifetime maximization in monitoring oil pipelines[END_REF], those determining the shortest distance between the node and the event [START_REF] Perelman | Sensor placement for fault location identification in water networks: A minimun test approach[END_REF][START_REF] Sela | Robust sensor placement for pipeline monitoring: Mixed integer and greedy optimization[END_REF] or, most commonly, placement at the key junctions of the pipelines as shown in Fig. 4.1 using typical sensors. These methods have several disadvantages. In the case of the maximum transmission range, the sensors always expend the maximum energy to communicate between two nodes given the distance between them, thereby increasing the average energy consumed. Moreover, the failure of one of the sensor nodes, e.g. an intermediate node, can interrupt the transmission as the new neighbouring node(s) will cease to be in each other's range. In addition, using the minimum distance between the event and the sensor may not be practical because leakages and other failure events on the pipeline are stochastic in nature, i.e. they cannot be predetermined. Finally, consider the sensor placement in Fig. 4.1, assuming it represents a long-haul transmission pipeline network. Although such a placement is economical, its drawbacks include its insensitivity to small leaks, given the length of each pipeline segment, and its vulnerability to node failures.
For example, it is impossible for pipeline segments II, III, IV and V or IV, VII to detect or localise leakages if the sensors at those junctions fail. Put in context, between 14% and 57% of the pipeline network goes out of coverage due to a single node failure. Hence, we propose deploying several sensors along a pipeline segment, contrary to the deployment at key junctions only. Specifically, we recommend a node placement strategy based on a fluid propagation property that appears when leakage occurs in a pipeline -the NPW- to determine the optimal distance between sensors. Based on the NPW phenomenon and the PG properties discussed in subsection 2.1.2, sensors can be placed at the shortest distance from the source, i.e. such that events like small, medium and big-sized leakages are detectable by examining the amplitudes of the NPW for various-sized leaks and the corresponding attenuation. To find such a distance D on a pipeline of L km, we apply the following constraints:
The distance (D) between sensor nodes must be less than half the maximum communication range of the sensor (S_cr) to secure interconnectivity and data sharing between sensors:

D < S_cr / 2 (4.1)
The maximum detecting distance of the NPW (NPW_mdd, to be determined experimentally) between sensors and the event source is based on the amplitude of the NPW, and it must be adequately small to guarantee sensitivity to small-sized leaks:

0 < D < NPW_mdd (4.2)
where D is the distance defined in Eqn. 4.1.
According to Eqn. 4.2, the distance (D) between the sensors should be less than the NPW_mdd between the upstream and downstream sensors surrounding a leakage. This way, the NPW travelling in both directions can be detected.
There should be at least three sensors, out of the total number of deployed sensors (N), that can detect the NPW front. Indeed, in the ideal case, the upstream and downstream nodes are enough to detect the arrival of the NPW front. However, to prevent SPOF and the impossibility of localising a leakage, we add a minimum redundancy of one node to provide continuous detection and localisation in the event of node failure.
∀Q, ∃ n_1, n_2, n_3 : dist(n_i, Q) < NPW_mdd (4.3)
where i > 0 and dist(n_i, Q) is the distance between node i and the leak location Q.
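To make the interplay of these constraints concrete, the short Python sketch below filters candidate spacings against Eqns. 4.1-4.3 for a uniform linear deployment. The function name, the candidate values and the S_cr/NPW_mdd figures are illustrative assumptions, not values taken from the simulations.

def feasible_spacings(candidates_m, s_cr_m, npw_mdd_m):
    # Keep spacings D that satisfy Eqns. 4.1-4.3 for uniformly spaced sensors.
    feasible = []
    for d in candidates_m:
        interconnect = d < s_cr_m / 2          # Eqn. 4.1: D < S_cr / 2
        sensitive = 0 < d < npw_mdd_m          # Eqn. 4.2: 0 < D < NPW_mdd
        # Eqn. 4.3 (worst case): sensors within NPW_mdd of any leak point Q lie in an
        # interval of length 2*NPW_mdd, so at least floor(2*NPW_mdd / D) of them can hear the wave.
        redundant = (2 * npw_mdd_m) // d >= 3
        if interconnect and sensitive and redundant:
            feasible.append(d)
    return feasible

# Illustrative values only:
print(feasible_spacings([500, 1000, 1500, 2000], s_cr_m=3000, npw_mdd_m=1500))  # [500, 1000]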
Utilising the constraints defined above, we discuss the data correlation among the sensor nodes in the next subsection.
Spatial Data Correlation
We propose spatial correlation among geographically close sensors to allow distributed leakage detection and localisation. Our proposed method ensures that sensor nodes can communicate, allowing collaborative data processing to improve detection efficiency and reduce or eliminate false positives. However, it is expensive and almost impossible to have all the sensor nodes interconnected, i.e. a fully meshed connection. Hence, we chose a limited multi-hop communication to achieve this collaboration among the nodes. The benefits of a multi-hop connection compared to a single wireless link include an increase in network coverage, higher connectivity and a reduction in transmission power [START_REF] Braun | Traffic and QoS Management in Wireless Multimedia Networks[END_REF]. It can also improve the throughput due to a higher data rate. However, a linear deployment of a multi-hop communication network has its shortcomings. Energy consumption rises towards the sink node as the number of hops before the sink node increases, leading to unbalanced energy consumption. The number of messages per sensor also increases significantly in a linear deployment as the number of connections among the nodes grows. In addition, multi-hop communication in wireless networks introduces interferences that can significantly affect the efficiency and performance of the network [START_REF] Parissidis | Interference in wireless multihop networks: A model and its experimental evaluation[END_REF]. It also introduces high communication overhead, which may be difficult to eliminate through data aggregation [START_REF] Braun | Traffic and QoS Management in Wireless Multimedia Networks[END_REF].
Therefore, when we consider a linear deployment of sensors for the WSN layer, as shown in Fig. 3.5, we limit the number of collaborations among the sensors to a maximum of two hops. This minimises the multiple interferences resulting from such collaborations while optimising the energy consumption of the sensors. In our case, the number of hops is counted in two directions, i.e. a single-hop connection represents one upstream neighbour and one downstream neighbour. Hence, a focal node (n_i, the node making the detection or localisation) will use information from itself and two other nodes. Two hops represent additional information from two upstream and two downstream nodes, as illustrated in Fig. 4.2.
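As a small illustration of this bounded collaboration, the sketch below derives the k-hop neighbour set of a focal node in a linear deployment; the node indices and example values are assumptions made for the example only.

def khop_neighbours(i, n_nodes, k):
    # Upstream/downstream neighbours of focal node i, limited to k hops in each direction
    # (k=1 corresponds to HyDiLLEch-1, k=2 to HyDiLLEch-2).
    lo, hi = max(0, i - k), min(n_nodes - 1, i + k)
    return [j for j in range(lo, hi + 1) if j != i]

print(khop_neighbours(7, 20, 1))  # [6, 8]
print(khop_neighbours(7, 20, 2))  # [5, 6, 8, 9]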
HYbrid DIstributed Leakage detection and Localisation tECHnique (HyDiLLEch)
HyDiLLEch is an LDT based on the fusion of several computational fluid dynamic detection techniques, i.e. PPA, NPWM and GM, introduced in 2.1.2. Each chosen LDT has its strengths and weaknesses, such as the detectability of leakage, energy consumption, the accuracy of leakage localisation and others, as enumerated in Table 2.1. For example, PPA can detect small leaks, and it is also inexpensive, requires low maintenance and performs well under extreme conditions. The strengths of the GM and NPWM techniques include their capability of detecting and localising single or multiple leakages in the transient state [START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF]. Together, these three methods share the characteristic of non-invasive LDTs with low computational complexity, which are more easily implemented and deployed on existing infrastructure in comparison to other methods of leak detection [START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF].
In any case, while the aforementioned techniques incorporate several advantages, they also have drawbacks. Consider the PPA technique: it cannot be used to detect leakages in a transient state, nor can it be used to localise leakages independently [START_REF] Sheltami | Wireless sensor networks for leak detection in pipelines: a survey[END_REF]. On the other hand, the NPWM relies on accurately detecting the arrival of the wavefront at the upstream and downstream sensors. Finally, GM depends on the sensor nodes' calibration and accuracy. As a result, we use a combination of these detection techniques (NPWM, PPA and GM). This combination allows us to take advantage of their strengths to enhance leakage detection and localisation accuracy without their negative impacts, as discussed in more detail in the following subsections. HyDiLLEch is mainly targeted at increasing the fault tolerance of the LDMS, improving the accuracy of leakage detection and removing or minimising false positives, all in an energy-efficient manner. This is achieved through limited collaborations among the nodes, a reduction in sensing and multi-data analysis.
Algorithms 1 and 2 show the main structure of HyDiLLEch. It is implemented with a neighbourhood of 1 hop (2 nodes) or 2 hops (4 nodes), referred to as HyDiLLEch-1 and HyDiLLEch-2, respectively. The two algorithms have similar formulations, differing only in the number of nodes participating in the neighbourhood collaboration. Both algorithms are implemented on a single horizontal pipeline as illustrated in Fig. 4.2. Note that the sensors used are multispectral, i.e. they can detect several fluid properties such as speed, pressure or temperature, and each is equipped with its own battery. In the following subsections, we explain the detection and localisation techniques in detail.
Algorithm 1 HyDiLLEch (Single-Hop)
1: {Init - steady state}
2: Set upper and lower PG thresholds
3: for ever do
4:    Get pressure data from neighbours
5:    Calculate local gradients PG_(i-1)-(i) and PG_(i)-(i+1)
6:    if local PG is outside the threshold then
7:        Scan at high frequency
8:        for scanning time do
9:            Get P_i
10:       end for
11:       if there exists P_i greater than the threshold, then share NPW data
12:           Localise using GM data (equation 2.2)
13:           Localise using NPW data (equation 2.5)
          else
              No leak detected
          end if
      end if
20:   Sleep (duty cycle)
21: end for

Algorithm 2 HyDiLLEch (Double-Hop)
1: {Init - steady state}
2: Set upper and lower PG thresholds
3: for ever do
4:    Get pressure data from neighbours
5:    Calculate local gradients PG_(i-1)-(i), PG_(i-2)-(i), PG_(i)-(i+1) and PG_(i)-(i+2)
6:    if local PGs are outside the threshold then
7:        Scan at high frequency
8:        for scanning time do
9:            Get P_i
10:       end for
11:       if there exists P_i greater than the threshold, then share NPW data
12:           Localise using GM data (equation 2.2)
13:           Localise using NPW data (equation 2.5)
          else
              No leak detected
          end if
      end if
20:   Sleep (duty cycle)
21: end for

In both algorithms, each sensor periodically senses the local pressure and shares it with its neighbours (line 4 of Alg. 1). Pressure gradients can then be computed to see if an unexpected pressure drop happened (line 5 of Alg. 1). If so, to determine the location, several pressure values are sensed at high frequency to capture the leak wave (lines 8-10 of Alg. 1). If a wave is detected, the precise location is computed from the changes in the gradient, wave amplitude and arrival time (lines 12 and 13 of Alg. 1). After the detection and localisation, the sensor goes back to sleep for a period of time (line 20 of Alg. 1). This period of time is set by the network duty cycle, i.e. the ratio of time the sensor is ON compared to the time it is OFF (for instance, a 60% duty cycle means the sensor is working 60% of the time and sleeping 40% of the time).
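For readers who prefer code to pseudocode, a simplified, synchronous Python rendering of one HyDiLLEch-1 duty cycle is given below. All callables (read_pressure, get_neighbour_pressures, scan, localise_gm, localise_npw) are placeholders supplied by the deployment, and the threshold test on the scanned samples is a simplification of the NPW-front check in line 11.

def hydillech1_step(read_pressure, get_neighbour_pressures, pg_expected, tol,
                    scan, localise_gm, localise_npw):
    # One duty cycle of HyDiLLEch-1 on a focal node (sketch).
    p_i = read_pressure()
    p_prev, p_next = get_neighbour_pressures()        # 1-hop neighbours (Alg. 1, line 4)
    pg_up, pg_down = p_prev - p_i, p_i - p_next        # local gradients (line 5)
    if abs(pg_up - pg_expected) > tol or abs(pg_down - pg_expected) > tol:   # line 6
        samples = scan()                                # high-frequency scan (lines 7-10)
        if any(abs(s - p_i) > tol for s in samples):    # NPW front detected (line 11)
            return localise_gm(pg_up, pg_down), localise_npw(samples)        # lines 12-13
    return None                                         # no leak detected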
3-factor Leakage Detection
Leakage is detected in multiple ways in HyDiLLEch, using the predefined pressure threshold, the pressure gradient and the arrival of the NPW front at the sensor. Each detection factor is further explained below.
Defining the pressure threshold
The algorithms begin by utilising the PPA LDT to pre-estimate the expected pressure at every sensor node location in the pipeline. In this first part of our work, we consider a single horizontal pipeline, as shown in Fig. 4.2. Thus, to determine the expected pressure using the PPA technique, we set the elevation parameter (z) defined in Eqn. 2.1. Assuming our pipeline has a total length of L kilometres, the elevation term is z_0-L = 0. The pressure gradient helps us estimate the expected pressure at every sensor point. In our case, we can rewrite Eqn. 2.1 so that PG_0-L is estimated as follows:
PG_0-L = ( P_0/(ρg) - P_L/(ρg) ) × 1/L (4.4)
where PG_0-L is the pressure gradient for a horizontal pipeline (i.e. elevation z_a-b = 0) of length L, P_L is the pressure at the outlet, P_0 is the pressure at the inlet, ρ is the fluid density and g is the gravitational acceleration.
Once the fluid's rate of change in pressure as it travels along the pipeline in a steady state is determined, a threshold is set to account for the difference between the value read from the sensor and possible calibration error in the sensor readings. This threshold is set using the industrial permissible standard [START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF]. This sets the basis for comparison between the reading of the sensor and the value obtained from the pre-estimation with Eqn. 4.4. A difference greater than the threshold indicates a possible leakage occurrence.
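The following sketch illustrates the pre-estimation of Eqn. 4.4 and the threshold test; the inlet/outlet pressures, density and tolerance are placeholder numbers, not the industrial values used in the thesis.

def steady_state_gradient(p_inlet, p_outlet, rho, g, length_m):
    # PG_0-L of Eqn. 4.4 for a horizontal pipeline (the elevation term is dropped).
    return (p_inlet / (rho * g) - p_outlet / (rho * g)) / length_m

def expected_pressure(p_inlet, pg, rho, g, x_m):
    # Expected absolute pressure at distance x from the inlet in steady state.
    return p_inlet - pg * rho * g * x_m

def deviates(reading, expected, tol):
    # True if a sensor reading departs from the pre-estimated value beyond the threshold.
    return abs(reading - expected) > tol

pg = steady_state_gradient(p_inlet=6.0e6, p_outlet=4.0e6, rho=870.0, g=9.81, length_m=20_000)
print(deviates(reading=5.2e6, expected=expected_pressure(6.0e6, pg, 870.0, 9.81, 5_000), tol=5.0e4))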
Difference in the pressure gradient
To substantiate this indication, we check the two PGs that must be present in case of leakage, based on the fluid dynamic properties explained in 2.1.2. Assuming a leakage point at location Q as shown in Fig. 4.2, these are the gradients formed upstream and downstream of Q, i.e. both gradients PG^leak_0-Q and PG^leak_Q-L must be present. Then, we also ensure that the following relations hold:
PG^leak_0-Q ≠ PG^leak_Q-L (4.5)

PG^leak_0-Q < PG_0-L (4.6)

PG_0-L < PG^leak_Q-L (4.7)
Ensuring the correctness of Equations 4.5, 4.6, and 4.7 further reduces the high false alarms associated with other detection techniques.
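Taken together, the three gradient conditions reduce to a compact predicate; in the sketch below, pg_up and pg_down stand for PG^leak_0-Q and PG^leak_Q-L, and the comparison directions simply follow Eqns. 4.5-4.7 as stated above.

def gradient_indicates_leak(pg_up, pg_down, pg_steady, eps=1e-9):
    # True when the upstream/downstream gradients satisfy Eqns. 4.5-4.7.
    differ = abs(pg_up - pg_down) > eps     # Eqn. 4.5
    up_cond = pg_up < pg_steady             # Eqn. 4.6
    down_cond = pg_steady < pg_down         # Eqn. 4.7
    return differ and up_cond and down_cond

print(gradient_indicates_leak(pg_up=0.010, pg_down=0.014, pg_steady=0.012))  # True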
Arrival of the negative pressure wave
As discussed in chapter 2, a leakage necessarily generates a negative pressure wave (NPW) in the pipeline. Thus, to finalise the detection process, certain sensors among all the deployed sensors detect this wavefront. We identify these sensors as the NPW_ds in Fig. 4.2; they are the sensors around the areas where the pressure gradient is formed.
Note that sensors are placed in such a manner that when leakages occur (including small leakages), they are able to detect them. Thus, we reduce or even eliminate false positives by coupling leakage detection to the arrival of the wavefront. In addition, for this arrival to be considered, it must occur in at least two of the sensor nodes (one upstream and one downstream). This condition is based on the property of the wave generated when leakage occurs, i.e. the wave travelling in opposite directions from the point of leakage.
Ultimately, we confirm the actual presence of leakage following these three factors. The localisation of the leakage is estimated using a similar approach as discussed in the following subsection.
2-factor Leakage Localisation
To localise the leakage, we iterate over the sensors on the pipeline by combining the data (PG, NPW) to narrow down the region of the leak, i.e. the region where the NPW is detectable and where the change in gradient occurred. This gives us an offset region calculated as D × i, where i is the index location of the node. Assuming a focal node n_i, as shown in Fig. 4.2, receiving data from nodes n_i-1 and n_i+1 as single-hop neighbours and additional data from nodes n_i-2 and n_i+2 as double-hop neighbours, the leakage can be localised in two ways as follows:
Q_dpNPW = (D × i) + (D - q) (4.8)
and
Q_dpGM = (D × i) + G (4.9)
where Q_dpNPW is the leakage location based on the partial information on the NPW and estimated by Eqn. 2.5, Q_dpGM is the leakage location based on the partial information on the pressure gradient, and G is the distance based on the gradient calculation.
Both equations 4.8 and 4.9 provide an absolute leakage position, i.e. the first estimated position when receiving the first leakage front-wave. According to the amplitude and speed of these waves, the absolute position can be refined through the pressure gradients and Eqn. 2.2, or through the time of arrival of the NPW front as follows:
Q_dpNPW = (D × i) + ( Σ_{n=1..k} (c × δt_n) ) ÷ k (4.10)
where Q_dpNPW is the absolute leakage location, n indexes the n-th front end of the leakage wave, c is the negative wave speed of the n-th front end estimated by Eqn. 2.3, δt_n is the difference in arrival time of the n-th front end at the upstream and downstream nodes, and k is the number of received front-end waves.
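A direct transcription of Eqn. 4.10 could look as follows; the node index, spacing and arrival-time differences are illustrative, and a single wave speed is assumed for all k fronts for simplicity.

def refined_leak_position(D, i, wave_speed, dt_list):
    # Q_dpNPW of Eqn. 4.10: node offset plus the averaged distance inferred from
    # the arrival-time differences of the k received wavefronts.
    k = len(dt_list)
    return D * i + sum(wave_speed * dt for dt in dt_list) / k

print(refined_leak_position(D=1000, i=9, wave_speed=1000.0, dt_list=[0.42, 0.40, 0.41]))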
HyDiLLEch is expected to eliminate the SPOF problem related to centralised systems through sensor-based detection and localisation in more than one node, as a result of the various techniques considered in its development. It is also expected to have relatively high detection and localisation accuracy with minimised energy consumption. The energy consumed is reduced by utilising only a fraction of the nodes in the localisation process. We discuss and analyse the simulation results in the next subsection.
Simulation Results
In this work, the implementation is carried out with several goals. The first is determining the distance at which nodes can be placed for sensitive leakage detection. In addition, we also implemented some classical LDTs (NPWM and GM) to enable a comparative analysis with our proposed distributed LDT (HyDiLLEch). We evaluate their accuracy in leakage detection, susceptibility to SPOF and, in addition, communication efficiency in terms of energy consumption and communication overhead. To achieve this, we used NS3 to simulate crude oil propagation and leakages. To cover the typical distance of a LoRa-based communication while implementing the mesh connectivity between the nodes, we used the WiFi protocol in NS3 with an increased transmission power. The energy consumption of LoRa and WiFi differs by a constant factor after a distance of approximately 350m in a linear deployment, i.e., node placement with a constant distance between the nodes [START_REF] Klimiashvili | Lora vs. wifi ad hoc: A performance analysis and comparison[END_REF]. Given the similarity of the node deployment in our work, with a constant distance (D) greater than this threshold, we assume the corresponding energy consumption using WiFi is also comparable by a constant factor to LoRa. To determine the distance between the nodes, we first carried out preliminary experiments, whose results are discussed in the next subsection.
Node placement
The main focus of this phase of the work is to determine the optimal distance D between uniformly distributed sensor nodes on the pipeline. This partially fulfils the objectives of this contribution, i.e. ensuring the sensitivity of HyDiLLEch to small and large leaks. Thus, we analysed the impact of several amplitudes on the detectability of leakages, using predefined pressure thresholds for PPA and the amplitude of the NPW for NPWM; the role of the PPA technique in the detection process is illustrated in Fig. 4.3. We conducted the simulations using 0.5kpsi, 5kpsi and 20kpsi, each representing small, medium and large-sized leaks, at various distances between the sensors. The obtained results show that, for the PPA technique, all sensor nodes along the pipeline can eventually detect the leakage based on the pre-estimated pressure and a pre-defined threshold value, allowing a gradient-based detection. However, the NPWM shows a maximum detectable distance NPW_mdd of approximately 2500m for the highest amplitude of 20kpsi. In the case of the smallest examined amplitude, 0.5kpsi, the NPW_mdd is approximately 1500m. As depicted in Fig. 4.3, the NPWM, i.e. the detection technique based on the NPW, is not very efficient in long-distance leakage detection: the number of sensor nodes that can detect the NPW decreases as the distance from the leakage point increases. However, the smallest leak tested is detectable beyond 1000m and by multiple sensors. When all the leak sizes are considered, we can see from the figure that, at a distance of 1000m away from the leakage point, the NPW is still detectable by an average of three sensors. Therefore, the constraint defined by Eqn. 4.3 is satisfied at this distance, with an average of three sensors detecting the NPW. Additionally, all other conditions listed in section 4.1 are satisfied. Thus, for this work, we choose D equal to 1000m as the distance between the sensors used for all the simulations.
With the optimal distance determined, we implemented HyDiLLEch on a horizontal long-haul transmission pipeline for a single-phase laminar flow. The pressure data generated by the sensors follows the Bernoulli equation defined by Eqn. 2.1. The fluid propagation parametrisation results from the collaboration with the DEEP laboratory, which works on environmental management. Additionally, the properties taken into consideration for this simulation include industry-defined specifications such as the operational velocity, the type of crude oil, and the length and material of the pipeline, to mention a few. The data for defining these properties come from existing pipeline networks and are sourced from Nigeria's Department of Petroleum Resources (DPR), including preliminary results from the first simulation works. Table 4.1 summarises these parameters. As preliminary work, all tests are conducted under ideal conditions, i.e. no communication or node failures. In the following subsection, we discuss the results.
Detection and Localisation
To conduct the detection and localisation test, a random selection of sample leakage points was drawn along the length of the pipeline. The sample points have a confidence interval of approximately 9880m-14523m at a confidence level of 98%. The localisation accuracy for each LDT was formerly calculated as a percentage, as in [START_REF] Ahmed | Hydillech: a WSNbased Distributed Leak Detection and Localisation in Crude Oil Pipelines[END_REF]. However, we noted that this form of presentation does not show the accuracy with respect to the distance in metres. Thus, in this document, we present the localisation accuracy as the distance in metres from the actual leakage position.
In the following, we discuss the results obtained using the classical LDTs (NPWM and GM) in comparison with HyDiLLEch.
Classical Approach
For the centralised approach, we implemented two LDTs (NPWM and GM) using all the information collected at the gateway. The results obtained from the simulation are shown in Fig. 4.4. The GM presents an average localisation error of approximately 227m with a high variance across the tested leakage points. NPWM, on the other hand, shows a more consistent localisation accuracy with an error in distance of about 2m and an almost insignificant variance. However, this level of accuracy is only achievable with a high sampling rate at all nodes, resulting in high energy consumption. Although this simulation demonstrates the efficiency of both LDTs in the localisation of leakages, the centralised approach to detection and localisation makes them vulnerable to SPOF and less robust to other types of failures, i.e. communication failure and vandalism. In the following subsection, we discuss the advantages of HyDiLLEch relative to this drawback by considering the increase in the number of nodes detecting and localising leakages (NDL).
HyDiLLEch
For the implementation of the distributed approach, both versions (single and double-hop) of HyDiLLEch were considered separately. We break down the results based on the NDL (by their distance from the leakage point) and also by the principal information (dpNPW and dpGM) considered in the localisation process. As shown in Fig. 4.5, the results for the single-hop version (HyDiLLEch-1) show an increase in the number of sensor nodes detecting and localising leakages (NDL) from one to four in comparison with the centralised approach. While the centralised system does localisation in a single NDL, HyDiLLEch utilises multiple nodes surrounding the leakage area for this purpose, thereby increasing its fault tolerance. The NDLs, represented as n_1-4 in the figure, result from the spatial correlation of data from neighbouring sensors. This is in addition to partial information such as the sensor geolocation, the pressure gradient (dpGM), and the time of arrival of the NPW front (dpNPW) at each NDL. In HyDiLLEch-1, the dpGM technique is active on all nodes, while dpNPW is activated at high frequency on the nodes whose gradient information differs from that of one of their neighbours, i.e. the nodes closest to the leakage points. Thus, the average error from the actual leakage location is about 2m for the best NDLs (n_2 and n_3) in the case of dpNPW and about 33m for dpGM.
As shown in the figure, nodes n_2 and n_3, which maintain the highest accuracy, are the ones closest to the leakage point. The nodes farthest from the leakage point (n_1 and n_4) have an average error of about 298m in the case of dpNPW. We notice that, although detection is possible, both nodes are too far away from the leakage to precisely evaluate the leakage location. Note that to use dpGM for localisation, gradient information from at least one upstream and one downstream node of the leakage point is required. Hence, in this case, only dpNPW can be used for localisation at the extremities (n_1 and n_4). However, these values are kept, to be potentially filtered at the upper layer as outliers. Also, in the case of failures, they still give a precision error averaging between 230m and 367m on the 20km pipeline considered.
Similarly, Fig. 4.6 shows the results obtained for the double-hop version of HyDiLLEch (HyDiLLEch-2). In this version, the NDL increases from 1 to 6, represented as n_1-6. The different number compared with the single-hop version results from the increase in the number of nodes participating in the neighbourhood collaboration. The sensor nodes that are physically close to the leak (n_2, n_3, n_4, n_5) also maintain the highest accuracy. The increase in spatially correlated data reduced the error for both dpNPW and dpGM. For dpNPW, we obtain absolute detection, while dpGM shows an average error in distance of 32m across the four nodes, with less variation. In addition, the average error in distance at the extreme nodes is slightly reduced to 288m compared to HyDiLLEch-1. Note that this hybrid technique is kept loosely coupled to ensure the robustness of detection in addition to enhancing the fault tolerance of the system. This is of particular interest in cases where performance-influencing factors such as communication failure, node failure and others are particularly detrimental to the partial information utilised by each localisation technique.
Finally, both versions (HyDiLLEch-1 and HyDiLLEch-2) showed a significant improvement in NDL (up to 6) compared to the single NDL obtained in the centralised versions. Also, the range of detection and localisation is increased to about 3000m-5000m for both HyDiLLEch-1 and HyDiLLEch-2. Although the nodes at the extremities in both cases have the worst localisation accuracy, in the case of failure, results from these nodes can still be utilised.
Communication Efficiency
The three LDTs are also compared based on their communication efficiency. For this, we consider communication costs such as the number of packets used, and the energy consumed as a result of sampling and of the radio of the net devices. In the next subsections, we discuss the obtained results. The number of exchanged packets (the total number of packets used for leakage localisation) is shown in Fig. 4.7. For NPWM and GM, this information is shared between the sensor nodes and the gateway, while in the case of HyDiLLEch-1 and HyDiLLEch-2, it represents information shared amongst the neighbouring sensors. The results show the lowest number of packets exchanged for the GM compared to the NPWM, HyDiLLEch-1 and HyDiLLEch-2. The increase for the two versions of HyDiLLEch results from the data correlation among the neighbours. As such, there is a 50% communication overhead for HyDiLLEch-1 in comparison to the GM and 60% in the case of HyDiLLEch-2. On the other hand, both versions show better communication efficiency than the other classical LDT, NPWM, while providing a higher NDL of 4 and 6 for HyDiLLEch-1 and HyDiLLEch-2, respectively.
Energy Consumption
The evaluation of the energy consumption for each LDT is based on the sampling rate utilised by the sensor and the radio energy consumption over the network duty cycle. For the energy model, we consider the sum of the energy utilised in the different states, i.e. sensing, reception, transmission, sleep and idle, based on the WiFi energy model. In the case of radio energy consumption, the rate of change of the energy consumed at each duty cycle by the net device was analysed. As shown in Fig. 4.9, the most significant change is recorded in the first and second duty cycles. This change results from the difference in connection requirements for the participating nodes of each LDT. The subsequent cycles, after establishing the connections, show convergence in energy consumption.
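A minimal sketch of this state-based energy accounting is shown below; the per-state durations and power draws are placeholders rather than the WiFi radio parameters used in the simulations.

def duty_cycle_energy(durations_s, power_w):
    # Total energy (J) over one duty cycle as the sum over sensing/radio states.
    return sum(durations_s[state] * power_w[state] for state in durations_s)

durations = {"sensing": 2.0, "tx": 0.5, "rx": 1.0, "idle": 6.5, "sleep": 50.0}
power = {"sensing": 0.05, "tx": 0.35, "rx": 0.20, "idle": 0.02, "sleep": 0.001}
print(duty_cycle_energy(durations, power))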
On the other hand, Fig. 4.10 shows the cumulative energy consumption of all LDTs. In comparison to NPWM, there is a reduction in the total energy consumed by 86% and 83% for HyDiLLEch versions 1 and 2, respectively, in the first duty cycle. An increase of 6-7% per NDL compared to GM is also noticed. However, both versions of HyDiLLEch show a growth rate similar to the linear increase of GM as the number of cycles increases.
Conclusion
In this chapter, we introduced a new LDT, HyDiLLEch, based on several detection techniques and a node placement strategy that allows distributed detection and localisation of small-sized to big-sized leakages. To obtain the optimal node placement and carry out a comparative analysis of the efficiency of HyDiLLEch, we implemented several existing LDTs, namely the NPWM, GM and PPA. For the node placement, we chose a distance of 1000m between the sensors to allow for multiple-sized leakage detection. The efficiency of the LDTs was compared in terms of the number of NDLs, energy consumption and communication overhead. Compared to the currently used detection and localisation methods presented in chapter 2 and the data provided by the DPR for oil and gas companies in Nigeria, where the current detection and localisation time is in the order of days, HyDiLLEch, with single or double-hop detection and notification, works in seconds.
The results obtained from the simulations show that HyDiLLEch, with single and double-hop data sharing, increased the number of NDLs to four and six, respectively, compared to the centralised LDTs. This increase in NDLs eliminates the SPOF problem related to the classical LDTs. In addition, HyDiLLEch showed high communication efficiency with a minimal increase in the communication overhead compared with the other LDTs. Due to the low sampling rate utilised in HyDiLLEch, the energy consumption at the network level was maintained despite the spatial data correlation. However, in this part of the work, we have not included any failure. Thus, in the next chapter, we introduce various failures, such as communication failures.
Chapter 5
A Game Theory Approach to Data and Service Management for Crude Oil Pipelines Monitoring Systems
The operations across all oil and gas sectors produce terabytes of data. According to Slaughter et al. [START_REF] Slaughter | Connected barrels: Transforming oil and gas strategies with the Internet of Things[END_REF], every 150,000 miles in the midstream sector, i.e. the transportation of crude oil, produces about ten terabytes of data. Analytic operations on such data can considerably enhance the processes of the three sectors. While these insights are well advanced [START_REF] Mohammadpoor | Big data analytics in oil and gas industry: An emerging trend[END_REF][START_REF] Aliguliyev | Conceptual big data architecture for the oil and gas industry[END_REF] in the upstream and downstream sectors, i.e. the exploration and distribution activities, respectively, the midstream sector has been mainly left unexploited. However, efficient management of the data produced from the midstream sector can significantly reduce operational downtime. In addition, analytics on these data can enable the timely detection of leakages in the pipeline and improve their localisation through the use of various services. Current works focus on detecting leakages in the midstream sector using legacy and recent Leakage Detection Monitoring Systems (LDMS).
Thus, in this chapter, we present our second contribution, published in [START_REF] Ahmed | R-MDP: A Game Theory Approach for Fault-Tolerant Data and Service Management in Crude Oil Pipelines Monitoring Systems[END_REF], with the following objectives: 1. To carry out a comprehensive data analysis of historical incidents of the Nigerian National Petroleum Corporation (NNPC) pipeline network.
2. Based on the analysis, to create a regionalised Markov Decision Process (MDP) that ensures similar performance across the defined regions with optimised energy consumption.
3. To solve the model to determine the optimal strategy for each region.
In the first section, we present the background information by introducing the NNPC pipeline network and defining the problem to be solved. This step is followed by the formal definition and detailing of the model in section II. In section III, we discuss the implementation methods and results obtained. Finally, we conclude the chapter in section IV.
Failures in pipeline transportation and modelling process
Erosion, corrosion, vandalisation, equipment failure and network failure, to mention a few, are some of the causes of failures in pipeline transportation of crude oil. Existing monitoring systems only focus on the accurate and timely detection of leakages in the pipeline network. However, the monitoring systems themselves are susceptible to third-party interference, which is one of the leading causes of failure in this mode of transporting crude oil. We aim to circumvent this problem through efficient data and service management that allows continuous detection of leakages in the presence of such failures (in both the pipelines and the monitoring system). The following subsections introduce the pipeline network and its failure incidents and define the layout of the modelling process.
Introduction to the Environment and Problem Definition
As discussed in the introductory chapter, the midstream sector has multiple modes of crude oil transportation. However, our work focuses on the pipeline transportation of crude oil in Nigeria through the NNPC pipeline network shown in Fig. 5.1. The NNPC pipeline network spans several thousand kilometres and is divided into five areas based on their geographical location: Kaduna, PortHarcourt, Warri, Mosimi and Gombe. Each area is characterised by different failures, varying significantly in type and frequency. In Fig. 5.2, we present a snippet of the historical data by showing the rate of incidents across the areas over five years. The data presented show a remarkable difference in incident rates in time and location. Contributing factors to this difference include weather conditions, festivities, and the area's proximity to the border, to mention a few. Therefore, to model this problem, we consider two principal components, i.e. the failure-causing elements and the monitoring techniques. Several options exist for modelling this problem, such as the Byzantine failure model, a common failure model in distributed systems. Other approaches include game-theoretic ones, such as multi- or single-player games. In the following, we discuss each approach as applied to the problem.
Byzantine failure model
A Byzantine failure is a failure model in distributed system theory characterised by partial information. In this environment, the correct state of a failing system component may not always be detectable. Hence, to avoid a waste of resources or system suboptimality, a strategy is usually determined through a consensus of the non-faulty components. This approach is called Byzantine fault tolerance (BFT), allowing a majority of the system to ensure fault tolerance through a consensus. This approach to improving or ensuring fault tolerance in a system has been adopted for many use cases such as blockchain, IoT and others [START_REF] Porkodi | Integration of blockchain and internet of things[END_REF][START_REF] Zhang | Consensus mechanisms and information security technologies[END_REF][START_REF] Wang | Survey on blockchain for internet of things[END_REF]. Hence, we could apply BFT in this system to ensure resiliency to failure.

Figure 5.1: Nigerian National Petroleum Corporation pipeline network [START_REF] Ambituuni | Optimizing the integrity of safety critical petroleum assets: A project conceptualization approach[END_REF]

Figure 5.2: 5-year regionalised pipeline incidents [START_REF] Nnpc | annual statistical bulletin[END_REF]
Nonetheless, considering this problem, a Byzantine failure model may result in a best-effort approach given the historical data presented in Fig. 5.2. In such a case, we would adopt the working method of BFT, where the majority of the system components must reach an agreement. Suppose each area (or the nodes in each area) is considered part of the system. In that case, the majority of the areas (at least three out of the five geographical areas presented in the figure) must come to a consensus on a strategy to apply globally in the system. That is, if we consider sensors from the worst affected areas, such as PortHarcourt, Mosimi and Kaduna, then we will adopt a strategy that results in a waste of resources in areas like Warri and Gombe. Conversely, when considering the least affected areas, we will use a suboptimal strategy in the high incident rate areas.
Therefore, it is imperative to apply an approach such as a game-based approach that allows continuous and dynamic interaction with the environment and can guarantee optimal solutions in all affected areas.
A multiplayer model
System failures such as ours can be modelled as a multiplayer game. On the one hand, there are several causes of failures in pipeline transportation of crude oil; on the other hand, there exist multiple monitoring techniques proposed as solutions to this problem. Taking this into consideration, we could model the problem as a non-cooperative two-player game; we choose the non-cooperative game because both systems take adversarial actions. In this setting, player 1 could represent the failure-causing components while player 2 represents the monitoring system. Since our goal is to provide a fault-tolerant monitoring system against a partially stochastic opponent, we can apply the maximin strategy for player 2 (the monitoring system). The maximin strategy of a player is the strategy that maximises the player's worst-case payoff, i.e. the security level of the player [START_REF] Shoham | MULTIAGENT SYSTEMS Algorithmic, Game-Theoretic, and Logical Foundations[END_REF] or the guaranteed minimum payoff. For example, let us denote a player's security level as Z_i; then, for player 2, the security level is defined in the following equation:
Z_2 = max_{A_2} min_{A_1} r_2(A_2, A_1) (5.1)

Equation 5.1 is used to find the policy that maximises player 2's security level by taking action(s) A_2 and minimising the effect of the action(s) A_1 played by player 1, i.e. the saddle point of the two players.
While pessimistic, this approach can guarantee leakage detection in the event of any type of failure on the basis of the Nash equilibrium, i.e. the opponent never changes its strategy.
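For a finite action set, the security level in Eqn. 5.1 can be read directly off a payoff matrix for player 2; the matrix and the action labels below are purely illustrative.

def security_level(payoff):
    # Z_2 = max over player-2 actions of the worst-case (min over player-1 actions) payoff,
    # where payoff[a2][a1] is player 2's reward when actions a2 and a1 are played.
    return max(min(row) for row in payoff)

payoff = [
    [0.6, 0.4, 0.5],   # e.g. replicate data/services
    [0.7, 0.3, 0.6],   # e.g. migrate data/services
    [0.5, 0.5, 0.5],   # e.g. replicate and migrate
]
print(security_level(payoff))  # 0.5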
Nevertheless, we must define the utility or payoff of at least one player to consider this a two-player game using the maximin strategy. In the case of player 2, this could be defined as the ratio of leak detection to the actual number of leaks over a predefined period. On the contrary, this definition does not translate to a utility for player 1. In the case of player 1, most failures result from natural occurrences. Therefore, the effects of these failures cannot be considered a utility for the player. In addition, we are more interested in how the actions of player 1 affect the utility of player 2.
A single-player game
Given the discussions in the previous subsections, it is more rational to model this problem as a game against nature, i.e. a one-player game. This is also made practical by the historical data (Fig. 5.2) from nature, i.e. the pipeline environment, which gives insight into the failure tendencies of the environment.
A one-player game can be actualised using machine learning techniques such as reinforcement learning. This technique differs from other machine learning techniques. For instance, supervised learning deals with training a set of labelled data. The objective of such training is correctly identifying or categorising objects or situations by extrapolating from the trained data set. Unsupervised learning, on the other hand, deals with the identification of structures otherwise unknown in a set of unlabelled data. However, unlike supervised and unsupervised machine learning techniques, reinforcement learning deals with interactive problems to achieve a goal given a set of actions and continuous feedback from the environment using an MDP [START_REF] Sutton | Reinforcement Learning: An Introduction[END_REF].
Thus in subsection 5.1.2, we discuss in detail the environment enabling such a decision process.
The Environment Setting
The setting of a game against nature enables us to maximise the utilities we define for player 2 (the monitoring system) purely in response to the failures in our environment. Given that the data presented in Fig. 5.2 show a large diversity in incident rates from one area to the other, a global approach to our problem would be unnecessarily costly. Thus, we propose a local solution for each area by broadly categorising the areas into different logical regions, i.e. R = {r_0, r_1, r_2}. This categorisation aims to maximise each region's utility without the cost of a globalised solution.
Following an empirical study, we define failure-rate-based thresholds by which the region of an area is determined as follows: low failure rate (r_0), average failure rate (r_1) and high failure rate (r_2). The regionalisation of the areas enables the practical implementation of data-driven strategies tailored to each region. Representing the area(s) with the least number of incidents, the area(s) with an incident rate (IR) of 0 to 5% belong to region r_0. The area(s) with an IR from 6% to 20% are in region r_1, and the area(s) with an IR of 21% and above fall into the critical region r_2. Additionally, each area shows a low probability of transitioning from one region to another. For example, the PortHarcourt area consistently remains in r_2 for the period considered. On the other hand, both the Gombe and Kaduna areas are in regions r_0 and r_1 80% of the time, only changing in 2017 and 2019, respectively. While changing regions approximately 40% of the timeline, Mosimi and Warri can still be considered relatively stable. Given this low transition probability, we depict each area's state in a broad representation as shown in Fig. 5.3, with start and con as the non-failure states.
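The regionalisation rule itself is a one-line mapping from incident rate to region; the thresholds are those stated above, while the example rates are assumed values rather than the NNPC figures.

def region_of(incident_rate_pct):
    # Map an area's incident rate (in %) to its logical region r0, r1 or r2.
    if incident_rate_pct <= 5:
        return "r0"
    if incident_rate_pct <= 20:
        return "r1"
    return "r2"

for area, rate in {"Gombe": 3, "Kaduna": 12, "PortHarcourt": 45}.items():
    print(area, region_of(rate))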
Further details on the state definitions, along with the system design and modelling based on this environment setting, are given in the following section.
R-MDP: A Data-Driven Regionalised Markov Decision Process
Following the definition of the regions, we aim to provide efficient and fault-tolerant data and service management using a two-stage approach. The first stage is to find the best detection policies in terms of convergence and optimal detection accuracy using a data-based Markov Decision Process (MDP). We build the second stage on the results of the first stage to find the policy that minimises the processes' energy consumption. Thus, we propose our objective function summarised as follows:
min_{(π*, V*)} V_e(π*, V*) (5.2)
where π* is the optimal policy, V^π* is the value function obtained by following the policy π*, and V_e is the minimum energy consumption value of the node with the optimal value function.
The value function obtained in each region allows us to measure and compare how good (reward and energy consumption) an applied policy is in that region. The performance measure is constrained by the number of nodes (at least two for fault tolerance in the monitoring system) that provide the accuracy within a bound, to be defined later.
In the following subsections, we present the details of the two stages of the objective function. The first subsection covers the first stage: the optimisation of the performance measure, i.e. the accuracy of leakage detection and localisation. The second subsection presents the second stage of the objective function, aimed at optimising the overall energy consumption.
The First Stage: Accuracy Optimisation
We formalise our decision process using an MDP. An MDP can be used as a model for formalising decision processes in a stochastic environment using a tuple of elements <S, A, p, r> [START_REF] Shoham | MULTIAGENT SYSTEMS Algorithmic, Game-Theoretic, and Logical Foundations[END_REF]. In this tuple, S represents the set of states that the player can be in within the environment. A is the set of available actions taken to transition from one state to another. p represents the probability transition function, while r denotes the reward or utility function. Also important to consider is the discount factor β for future rewards, as will be explained later. Given these variables, the objective is to find an optimal policy that maximises the expected discounted cumulative reward. This can be achieved using the well-known Bellman optimality equation, which enables measuring the goodness of a state, i.e. the maximum obtainable reward in a state, through a state value function V(s).
In general, when an arbitrary policy is followed, the Bellman expectation equation is defined as follows.
∀s : V^π(s) = Σ_{s'} p(s'|π(s), s) [ r(s, π(s)) + β V^π(s') ] (5.3)
where V^π(s) is the value of state s under policy π, p(s'|π(s), s) is the probability of going from state s to s' when taking the action prescribed by the policy, r(s, π(s)) is the immediate reward in state s, β is the discount factor for future rewards and V^π(s') is the future value.
Nonetheless, to maximise the accumulated reward, an agent should follow an optimal policy. Unlike the general equation, which defines the expected reward when following an arbitrary policy, the optimal policy guarantees the maximum reward for the immediate state and all future states, under the assumption that each state continues to follow this policy. Hence, the policy that takes the action with the maximum reward is defined by the Bellman optimality equation as follows:

V*(s) = max_a Σ_{s'} p(s'|a, s) [ r(s, a) + β V*(s') ] (5.4)

The Bellman optimality equation is divided into two parts: the first part represents the immediate reward, denoted by r(s, a). The second part represents the accumulated future rewards, gathered through the iterative part V*(s'). In the second part of the equation, we use the hyperparameter β. The β variable is vital to this part to avoid infinite cycles and to enable the eventual convergence of the solutions. In addition, it emphasises the importance of future rewards, i.e. the higher the value of β, the more important the value of the long-term reward.
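A generic value-iteration loop over the Bellman optimality equation is sketched below; the transition table p and reward table r are placeholders to be filled with the NNPC model of the next paragraphs, and every state is assumed to have at least one action defined in p.

def value_iteration(states, actions, p, r, beta=0.9, tol=1e-6):
    # Iterate V(s) = max_a sum_s' p(s'|a,s) [ r(s,a) + beta * V(s') ] until convergence.
    # p[(s, a)] is a dict {s': prob}; r[(s, a)] is the immediate reward.
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(prob * (r[(s, a)] + beta * V[s2])
                           for s2, prob in p[(s, a)].items())
                       for a in actions if (s, a) in p)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V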
Thus, in the first stage of our work, we use Bellman's optimality equation to evaluate different policies and to determine the policies that maximise the reward in terms of accuracy of leakage detection and localisation. As such, we map the environment (the NNPC pipeline network) to the equation as follows:
The States S: In our work, a state is defined as the conditions under which the leakage detection and localisation process is applied. Considering the regionalisation defined in the previous section, we define our set of states S as follows: S = {start, con, r_0, r_1, r_2}. The states start and con represent the initial states where no failure is present; the latter represents communication initialisation, which includes the connection definition with predefined neighbours discussed in the previous chapter. States r_0, r_1 and r_2 denote the states during failure. As defined earlier, these states depend on the geographical area's failure rate.
The Actions A: Actions are used to transition between states. In this work, we have several actions that allow this transition. They are represented by the set A = {s_t, c, s_a, r_ds, m_ds, rm_ds}, representing the start process, connection initialisation, service activation, replication of data and services, migration of data and services, and replication and migration of data and services, respectively. Services in our work include data preprocessing, leakage detection, leakage localisation, data transfer/data sharing, data filtering, data aggregation and prioritisation, and midterm data storage. Only the first four services can be run at the sensor node level. At the fog layer, using the gateways, all services except long-term historical data storage are implemented.
s_t (start action): the start action is a switch-on action symbolising the beginning of the decision process.

c (connection): the connection initialisation action is used to implement the connectivity between nodes as defined in HyDiLLEch in the previous chapter.
s_a (service activation): this action is used to activate services in the nodes. Services are deployed in two ways in this work: service pre-deployment and dynamic service deployment. In the case of service pre-deployment, the service switch action is a mechanism to reduce energy consumption by intermittently activating services as needed. It is, thus, applicable alongside other actions at every point in time. Dynamic service deployment, on the other hand, allows strategic placement of services in sensor nodes, gateways or the cloud.
r_ds (data and service replication): this action denotes the replication of data and services, as the name implies. It is performed by creating copies of data, as they are being produced, or of services in nearby sensor nodes. Additionally, as discussed in the previous chapter, it incorporates replication to the fog nodes instead of being limited to the sensor nodes. The extension of the replication action to fog nodes enables the implementation of other services, such as alarms, data prioritisation and filtration, to mention a few.
m_ds (data and service migration): this action involves the migration of data and services. While considerably similar to r_ds, m_ds differs in that it is not only used in the presence of failures; it is also used as a response to specific deployment needs. For example, it can be used to reassign services such as data aggregation and storage or data prioritisation to other nodes in case of memory exhaustion in the hosting node. In addition, we can also make use of the migration action when there is increased latency between a particular service and the data it requires. Note that both r_ds and m_ds have several, but different, communication requirements, as will be discussed later.
rm_ds (data and service replication and migration): in areas where the failure rate is exceptionally high, we can utilise a combination of actions as a single action. Therefore, we include the rm_ds action, which combines the replication and migration actions into a single action.
The State Transition Function p: The state transition function p represents the probability of moving from one state to another when the environment dynamics are known. In our work, we utilised the information from the data presented in Fig. 5.2 to define these transitions. According to these data, some geographical areas are likely to transition from one region to another. Thus, we determine how each geographical area moves within the logical regions.
The Reward Function r: The reward function is defined as r ∈ [0, 100] per node for the detection and localisation accuracy of leakages. The maximum obtainable reward for every action taken is equivalent to the accuracy of leakage localisation. In addition, we consider the number of nodes that fall within the allowance threshold (to be defined later) for an acceptable accuracy level, representing the fault tolerance aspect.
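Putting the mapping together, a deliberately simplified encoding of the tuple <S, A, p, r> for one region could look as follows; the transition probabilities and rewards are assumed for illustration, and a complete table over all state-action pairs would be required before solving the MDP.

# Illustrative encoding of the MDP components (probabilities and rewards are assumed).
S = ["start", "con", "r0", "r1", "r2"]
A = ["s_t", "c", "s_a", "r_ds", "m_ds", "rm_ds"]

# p[(s, a)] -> {s': probability}; only a few transitions are sketched here.
p = {
    ("start", "s_t"): {"con": 1.0},
    ("con", "c"):     {"r0": 0.6, "r1": 0.3, "r2": 0.1},
    ("r1", "r_ds"):   {"r0": 0.5, "r1": 0.4, "r2": 0.1},
    ("r2", "rm_ds"):  {"r1": 0.3, "r2": 0.7},
}

# r[(s, a)] -> localisation accuracy (0-100) obtained when acting in s, again illustrative.
r = {("start", "s_t"): 0, ("con", "c"): 0, ("r1", "r_ds"): 85, ("r2", "rm_ds"): 78}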
The Second Stage: Optimising the Energy Consumption
The second stage of the work comprises the selection of the policies that minimise the energy consumption for each region from the results of stage 1. We determine the energy consumption resulting from actions on the environment by representing the interaction of the nodes in the environment as follows. Let us define a set of nodes N = {n_1, n_2, ...., n_u}, where u is the total number of nodes (sensor nodes and gateways). Each node n_i ∈ N produces a set of data used by the services deployed in the system, which are also stored in the nodes. If G is a matrix for the use of data by each service, then g_ij = 1 represents a service in node n_i requiring data produced by node n_j. In addition, we define h_ij as the number of hops between these nodes.
G = [ g_11 ... g_1u ; ... ; g_u1 ... g_uu ] (5.5)

NE = [ ne_11 ... ne_1u ; ... ; ne_u1 ... ne_uu ] (5.6)
The matrix NE encodes the predefined neighbourhood between the nodes, such that ne_ij = 1 represents data sharing between nodes n_i and n_j, and 0 otherwise.
There are several modes of communication between sensor nodes and gateways. Let us represent the communication modes by l_ij ∈ {0, 1, 2}, where l_ij = 0 is the absence of communication, l_ij = 1 a connection via LoRa and l_ij = 2 a connection via 3G/4G communication networks. The communication modes differ in their capacity and influence the possible actions between nodes. For example, the service activation is a straightforward action that depends only on the predefined neighbourhood connection, i.e. the implementation of state con.
When the replication is considered, the communication between originating and destination nodes must be l = 1 for both data and services.
In the case of the migration action, the communication requires l ∈ {1, 2} for data and service, respectively. This is because the migration of services needs higher bandwidth than that of data.
Thus, the energy consumption for each action is calculated using the following relation:
cost_ij = lc × h_ij (5.7)
where lc is the cost per packet and per link of the service or data, and h_ij is the number of hops between the origin and destination nodes.
Note that when services are pre-deployed in all nodes, the energy cost changes slightly, i.e. the cost of service migration is not incorporated. In this case, the service activation switch s_a is used in place of the service migration action. Given this communication model, we can minimise energy consumption by reducing the distance (in number of hops) between services and the data needed to run them. Note that a good placement of a service is one where the service is placed at most one hop away from where leakage occurs. Finally, we can define the objective of the second stage as follows:
min_{(π*, V*)} V_e(π*, V*) = min_{(π*, V*)} [ Σ_{i=1..u} Σ_{j=1..u} cost_ij · Σ_{s'} φ^π(s') Σ_{s} g_ij(s'|π(s), s) · V*(s') ] (5.8)
where φ^π(s') = p(s'|s, π) is the steady state defined by the probability of moving to the next state following a policy (π) in the current state, cost_ij is the energy consumed as defined in Eqn. 5.7, and g_ij(s'|π(s), s) indicates whether the data/service resides in different nodes when we follow a policy from one state to another.
This equation presents our objective function, i.e. to find and ensure convergence to the optimal reward with minimum energy consumption through the various placement strategies in the network nodes.
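A sketch of how Eqns. 5.7 and 5.8 could be evaluated for a fixed policy is given below; the per-link cost, hop counts, data-use matrix, steady-state probabilities and values are all placeholders, and the placement indicator g is taken as policy-independent for brevity.

def link_cost(lc, hops):
    # cost_ij of Eqn. 5.7: per-packet, per-link cost times the number of hops.
    return lc * hops

def expected_energy(cost, g, phi, v_star):
    # Simplified discrete version of Eqn. 5.8 for a fixed policy: cost_ij weighted by the
    # steady-state probabilities, the data/service placement indicator and V*.
    total = 0.0
    for i in range(len(cost)):
        for j in range(len(cost)):
            total += cost[i][j] * sum(phi[s] * g[i][j] * v_star[s] for s in phi)
    return total

cost = [[0.0, link_cost(0.02, 1)], [link_cost(0.02, 1), 0.0]]
g = [[0, 1], [1, 0]]
phi = {"r0": 0.7, "r1": 0.2, "r2": 0.1}
v_star = {"r0": 90.0, "r1": 80.0, "r2": 70.0}
print(expected_energy(cost, g, phi, v_star))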
Implementation and Results
The model is implemented using the Gym library. It is an open-source toolkit that can be used to develop and compare reinforcement learning algorithms. The NNPC environment shown in Fig. 5.1 was simulated using the NS3 network simulator. To realise the communication between OpenAI Gym and NS3, we build on the work conducted by Gawlowicz et al. [START_REF] Gawlowicz | ns-3 meets openai gym: The playground for machine learning in networking research[END_REF] called ns3-gym. This work allows seamless communication between the OpenAI Gym framework and the NS3 network simulator. The interaction is realised using an instantiated gateway in the NS3 environment and a proxy in the Gym environment. Our implementation of the interaction between the simulated pipeline network and OpenAI Gym is depicted in Fig. 5.4. We used multiple agents on the Gym side for comparative analysis. To implement the placement strategies for the regions while carrying out the reinforcement learning using Alg. 3, each agent interacts with a gateway through the dedicated proxy.
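The agent/gateway interaction follows the standard Gym loop. The sketch below shows its shape only; the environment id "Ns3PipelineEnv-v0", the episode budget and the random placeholder policy are assumptions for illustration, and in practice the environment is the one exposed by the ns3-gym proxy attached to the simulated NNPC scenario.

```python
import gym

# Assumes an environment registered by the ns3-gym proxy under an illustrative id.
env = gym.make("Ns3PipelineEnv-v0")

for episode in range(10):
    obs = env.reset()                               # (i) observation: leakage/network/comm failures
    done = False
    while not done:
        action = env.action_space.sample()          # placeholder policy; a real agent chooses here
        obs, reward, done, info = env.step(action)  # (ii) action, (iii) reward, (iv) game over
env.close()
```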
The information exchanged between an agent and a gateway in each episode are: (i) observation space, (ii) the action space, (iii) the reward and (iv) the game over conditions. In our case, we defined them as follows:
The observation space: Failures, i.e. leakages, network or communication failures.

6: for each region do
7:   Choose a from S using policy (ε-greedy) derived from Q
8:   Perform action a
9:   Observe r, s'
10:  Q(s, a) ← Q(s, a) + α[r + β max_a Q(s', a) - Q(s, a)]
11:  s ← s'
12:  Get corresponding energy consumption using Eqn. 5.7 {Update E}
13: end for
14: Until s is terminal
15: end for
16: Reset Environment
17: For each region, choose the policy with the least energy consumption and optimal reward

The ε-greedy policy ensures a balance between exploration and exploitation of states. We use this algorithm because, given the nature of our problem, whereas an optimal policy guarantees convergence to an optimal value, the rate of incidents across the regions differs significantly. Thus, for each region, we solve the MDP such that the unique policy returned from the solution is the one which not only satisfies optimality for that region but also minimises the global energy consumption. Therefore, we take our environment, the NNPC environment, and define the region of each of its areas in Alg. 3. This is done by setting the corresponding failure of the area using rate and list error types for the regions. Using a matrix Q, we store the state-action values and keep track of the energy consumption with a matrix E, using the corresponding energy consumed. At the end of the episodes, we select the optimal policy for each region.
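The core of the per-region loop above can be written compactly as below. This is a hedged sketch: `env_step`, the action set and the start state are placeholders standing in for the ns3-gym calls, while the update rule and the energy bookkeeping mirror lines 7-12 of Alg. 3.

```python
import random
from collections import defaultdict

alpha, beta = 1.0, 0.99

def epsilon_greedy(Q, s, actions, eps):
    if random.random() < eps:
        return random.choice(actions)                 # explore
    return max(actions, key=lambda a: Q[(s, a)])      # exploit

def run_region(env_step, actions, start_state, eps, episodes=100):
    Q = defaultdict(float)        # state-action values
    E = defaultdict(float)        # energy consumed per state-action (Eqn. 5.7)
    for _ in range(episodes):
        s, done = start_state, False
        while not done:
            a = epsilon_greedy(Q, s, actions, eps)
            s_next, r, cost, done = env_step(s, a)    # reward and energy from the environment
            best_next = max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (r + beta * best_next - Q[(s, a)])
            E[(s, a)] += cost
            s = s_next
    return Q, E
```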
We discuss the simulations and results in the following subsections:
Accuracy in Detection and Fault Tolerance
We evaluate the fault-tolerance and accuracy of the LDMS (from the previous chapter) for all regions in this subsection. Given the context of our work, these two metrics are jointly considered the obtainable reward for the MDP. We examine both metrics using two types of communication failures in NS3, i.e. the rate error type and list error type.
While both methods can introduce failure to the system, they differ in how they work. The rate error type introduces failure via random packet drops, delays and out-of-order packet delivery. On the other hand, the list error type involves user-selected packet drops. In our case, we used uniformly selected packets across the list of packets shared among the sensor and gateway nodes. To examine performance, we examine the effects of these failures across randomly selected leakage points. The obtained results are shown in Figures 5.5, 5.6, 5.7, 5.8, 5.9 and 5.10. Note that for each error rate, the evaluation is limited to the range of 0% to 20%. Outside this threshold, detection or localisation becomes impossible. Additionally, each metric of evaluation is presented in two figures (all possible outcomes and controlled outcomes). As the name implies, the former represents the outcomes recorded from all tests across all ranges. The latter, on the other hand, represents the outcome of High Performing Nodes (HPN). The HPN are nodes that produce an accuracy of detection greater than 90%.
We set the minimum level of accuracy to 90% following the high level of performance required by the oil and gas operators [START_REF] Ostapkowicz | Leak detection in liquid transmission pipelines using simplified pressure analysis techniques employing a minimum of standard and nonstandard measuring devices[END_REF]. This set threshold also helps standardise the benchmark based on which performance is measured.
Fault tolerance
One of our LDMS's strengths is removing Single Points of Failure associated with centralised systems. Thus, to maintain this property, we need to ensure that there are at least two HPNs across all regions. Figures 5.5 and 5.6 show the number of nodes that can detect and localise leakages. While Fig. 5.5 represents the total number of nodes that can detect and localise leakage without taking the level of accuracy into consideration, Fig. 5.6 captures only the HPN, i.e. nodes with accuracy over 90%. In Fig. 5.6, we observe that areas with a 5% failure rate for both rate and list error types maintain the required number (at least two) of nodes for fault tolerance, mainly differing in the variance. Additionally, we also observe that this variance comes from nodes located at the extremities of the pipelines in the area(s) under consideration. This problem can be addressed by increasing the node density in such locations. Finally, we find that other areas with a failure rate above 5% do not meet this requirement.
Accuracy
In addition to the fault tolerance level, we also examine how the accuracy level changes based on the failure rate. Results are presented in Figures 5.7 and 5.8.
While the fault tolerance decreases as the failure rate increases, the accuracy level presents a differing result. Besides, the error types differ as well. For example, in Fig. 5.7, we notice that the list error type provides a higher accuracy level than the rate error type, with both showing similar trends for the failure rate. However, when we consider results from HPN shown in Fig. 5.8, the rate error type presents much more variance in the accuracy of localisation compared to the list error type. Thus, we can infer that while lower error rates favour a higher number of nodes in leakage detection, they do not have such an impact on the accuracy of localisation.
Rewards
Therefore, we base the definition of the reward function on the minimal requirements of both fault-tolerance and accuracy levels. Thus, the total reward is the sum of the accuracy level obtained by each detecting node in the region. We also represent the reward function in Figures 5.9 and 5.10 as all outcomes and HPN outcomes. Both cases show a similar pattern of decrement in reward as the failure increases.
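The reward bookkeeping just described can be illustrated as follows; the accuracy figures are invented, while the 90% HPN threshold and the two-node fault-tolerance requirement come from the discussion above.

```python
def region_reward(accuracies, hpn_threshold=90.0, min_hpn=2):
    detecting = [a for a in accuracies if a > 0.0]        # nodes that detected the leakage
    hpn = [a for a in detecting if a >= hpn_threshold]    # High Performing Nodes
    reward = sum(detecting)                               # r in [0, 100] per node
    return reward, len(hpn) >= min_hpn                    # total reward, fault-tolerance check

print(region_reward([96.5, 92.0, 71.0, 0.0]))             # (259.5, True)
```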
Finally, we present the expected reward across the regions in Fig. 5.10 following the benchmark discussed in subsection 5.3.1. For each region, we expect to have two HPNs at all times for all failure rates, thus putting the target at the level shown in the figure.
Optimised Accuracy and Energy Consumption
We implemented several algorithms (learning-based, heuristic and pessimistic); in this subsection, we discuss the rewards obtained from these implementations.
For the learning-based algorithms, we used a uniform random policy and our proposed regionalised ε-greedy learning (R-MDP). While the uniform random policy has a uniform distribution over the space from which actions are taken, R-MDP finds the balance between exploitative and exploratory actions. This is possible using the ε-greedy Q-learning algorithm, which allows the agent to exploit what has been learnt and explore actions with yet-to-be-known outcomes. This strategy of balancing between exploitation and exploration avoids local optima. We provided the baseline for comparative analysis by implementing a heuristic approach based on Weighted Set Cover (WSC) service placement from the work [START_REF] Garg | Heuristic and reinforcement learning algorithms for dynamic service placement on mobile edge cloud[END_REF]. The aim is to maximise detection accuracy with a minimal cost following the context of our work. We also included a pessimistic approach based on the worst-case scenario as a performance measure. Each algorithm is evaluated based on the value function and the energy consumption. We present the optimal reward for the implemented algorithms in Fig. 5.11. The random policy expectedly has the lowest total reward. The other algorithms, however, show a much better performance. The R-MDP shows a similar reward across the three regions, differing only slightly. The WSC-based algorithm, however, shows an increased reward, with about a 6% rise over the total value obtained for the R-MDP regions.
While the optimal value functions are similar, the obtained results show a significant difference in energy consumption due to the exploration effects. We present the total energy consumption and the corresponding optimal value in Fig. 5.12. In the figure, the energy consumed is presented as a histogram while the value function is presented as lines. According to the result, we observe lesser energy consumption for average epsilon values between 0 and 0.4 and an increased energy consumption as the epsilon value approaches 1. However, these effects differ considerably for each region, i.e. regions r0 and r2 have a better performance with near-greedy or greedy policies than with more explorative policies. Although the reward for region r0 increases slightly as the exploration increases, the reward for region r2 is inversely proportional to the value of epsilon. In the case of region r1, the optimal result is achieved with the epsilon value equaling approximately 0.3. Hence, we can conclude that partitioning the areas of the environment into several regions allows an optimal solution with a minimised energy consumption.

Figure 5.12: Total Energy Consumption by Algorithm
Figure 5.13: Regional Reward vs Energy Consumption
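The stage-2 selection described above, keeping, per region, the policy that reaches the optimal reward with the least energy, can be sketched as below; the candidate dictionary of (reward, energy) pairs per epsilon value is an invented example.

```python
def select_policy(candidates, tol=1e-6):
    # candidates: {epsilon: (total_reward, total_energy)} for one region
    best_reward = max(r for r, _ in candidates.values())
    optimal = {eps: (r, e) for eps, (r, e) in candidates.items()
               if abs(r - best_reward) <= tol}
    return min(optimal, key=lambda eps: optimal[eps][1])  # least energy among optimal policies

r1_candidates = {0.0: (180.0, 40.0), 0.3: (195.0, 35.0), 0.9: (195.0, 55.0)}
print(select_policy(r1_candidates))                       # 0.3
```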
Using Fig. 5.13, we present the energy consumption by algorithm. In this presentation, we analyse the total energy consumed by the different algorithms. The results show that the pessimistic algorithm, expectedly, has the highest energy consumption. In comparison to the total energy consumed by the R-MDP, this approach consumes approximately 77% more energy. However, we observe a 26% reduction in energy consumption when we compare it with the globalised heuristic approach (WSC). While this is much smaller than the reduction obtained compared to the pessimistic approach, the decrement in this case still poses a significant improvement.
Conclusion and Discussion
This chapter presented our work on fault-tolerant and energy-efficient data and service management modelled as an MDP. To model the MDP, we used the NNPC pipeline network and the historical failure pattern over a five-year period. Leakage detection and localisation are done using the LDMS from the previous chapter. In this chapter, we took into consideration the data and service layer as opposed to just the sensor layer. We showed that energy consumption could be made efficient when we consider the dynamics of the environment. In the previous chapter, we used a generic scenario for the detection and localisation of leakages in a pipeline of twenty kilometres. We did this without considering any failure. In this chapter, however, we consider different types of failures, i.e. the rate and list error type failures, in a more dynamic environment. We observe that accounting for these failures improves the efficiency of global energy consumption.
Chapter 6 Conclusion and Future Works
This chapter presents a summary of our research and the obtained results in Section 6.1. We follow this with a discussion of possible future works in Section 6.2.
Summary
This thesis presents our work on a resilient IoT-based solution for addressing failures in the oil and gas industry. In our work, we focus on the midstream sector of Nigeria's oil and gas industry, mainly the failures in pipeline transportation of crude oil. We proposed an IoT-based solution and considered a multi-layer fault tolerance published in [START_REF] Ahmed | Resilient IoT-based Monitoring System for Crude Oil Pipelines[END_REF]. To implement our contribution, we first assessed the design and specification aspects of the system. We proposed a three-layer hierarchical architecture consisting of the WSN layer, the fog layer and the cloud layer. At the WSN layer, we implemented our first contribution published in [START_REF] Ahmed | Hydillech: a WSNbased Distributed Leak Detection and Localisation in Crude Oil Pipelines[END_REF]. This included the design and simulation of a hybrid and distributed leakage detection and localisation technique, HyDiLLEch. HyDiLLEch combines three existing LDTs (PPA, GM and NPWM), each technique with its advantages and disadvantages. For instance, while the PPA can be used to detect leakages only, GM can be used to detect and localise leakages. However, the accuracy of localisation depends on the sensors. NPWM, on the other hand, requires a high sampling rate for accurate localisation. Thus, HyDiLLEch aimed at taking advantage of each detection technique's strengths while minimising its weaknesses. With HyDiLLEch, we are also able to detect multi-sized leakage in a manner that eliminates SPOF from the detection system using a unique node placement strategy based on crude oil propagation. Our results showed that we could detect leakages with a high accuracy level without a high energy consumption cost compared to some classical methods, such as the NPWM. However, compared to the other method, GM, the most significant improvement is in the area of fault tolerance, with an increase of four to six in the number of nodes detecting leakages. At the fog layer, we considered efficient data and service placement for a fault-tolerant and energy-efficient LDMS published in [START_REF] Ahmed | R-MDP: A Game Theory Approach for Fault-Tolerant Data and Service Management in Crude Oil Pipelines Monitoring Systems[END_REF]. This problem was modelled as an MDP based on the historical failure data of the NNPC pipeline network. We used various placement strategies, such as replication and migration of data and/or services between the sensor and fog nodes. With five distinct areas, the NNPC pipeline network presented diverse failure rates. Thus, in our model, we regionalised (r0, r1, r2) the MDP to provide a similar level of performance across the different areas of the NNPC pipeline network, using an ε-greedy algorithm for balancing the exploration and exploitation actions. With this regionalisation, we find that each region reaches its optimal value function following a different strategy. For regions r0 and r2, the policies adopted are near-greedy policies. Although both behave differently across the epsilon-value spectrum, the optimal value for both regions is closer to the minimum value of zero. The most dynamic region of all is the r1 region. In this region, we find that the optimal value function is obtained with an epsilon value of approximately 0.3. These different regional strategies allow the optimisation of the value function while minimising energy consumption. The cloud layer is used to store historical data and host infrequently used services such as alarm services.
It also hosts heavier tasks beyond the replication, migration, detection, and localisation services. Unlike the first two layers, connection to the cloud requires a more robust link through the backbone network, such as the 3G/4G and possibly 5G networks in the future.
Overall, our work addressed some issues across the layers of an IoT-based LDMS. We implemented an LDMS that is robust to SPOF and communication failure at the network layers, such as packet drops, out-of-order delivery, and communication delays. We also considered the scalability and energy efficiency of the LDMS by addressing the connectivity of nodes and data sharing amongst geographically close sensors as well as policy-based data and service management.
Future Works
Our work addressed several aspects of efficiency in an IoT-based monitoring system for crude oil pipelines in Nigeria. Still, a multitude of opportunities exists to further the scientific research for such systems. To evaluate the system's fault tolerance, we considered failures at the network layer. These failures include the rate and list error types, which concern packet drops, delays, and out-of-order delivery. Other communication failures, such as node failures and propagation losses at the physical level, could be considered for further evaluation and possible enhancement. Additionally, more simulations to confirm the obtained results for the physical layer and the overhead of LoRaWAN services can be conducted using the 2021 release of the LoRaWAN module in NS3, an ongoing work of Magrin et al. and Capuzzo et al. [START_REF] Magrin | Performance evaluation of LoRa networks in a smart city scenario[END_REF][START_REF] Capuzzo | Confirmed traffic in LoRaWAN: Pitfalls and countermeasures[END_REF]. Other options to enhance the system performance include the node placement strategy. For instance, we have used a homogeneous node placement across all the pipeline networks. This resulted in suboptimal detection at some extremities, as discussed in Chapter 5. Heterogeneous node placement based on location, in addition to the fluid propagation properties, can resolve such problems. In addition, we proposed an energy-efficient regionalised single-player game for data and service management due to the high diversity in the historical data of the NNPC pipeline networks over the geographical regions. However, various factors contributed to such disparity. As such, unsupervised learning, a machine learning technique, can be applied as a pre-processing step to determine the pattern or structure of the failure data according to the causes. This step can provide a more refined categorisation for causal modelling of the problem for an enhanced solution.
Also, such diversity as seen in the NNPC pipeline network historical data is absent in other countries, such as the United States, which records high incident rates in its pipeline networks. Besides, the analysis of pipeline incidents in [START_REF] Shan | Statistical analyses of incidents on oil and gas pipelines based on comparing different pipeline incident databases[END_REF] shows a more homogeneous pattern across different factors in various countries. In such situations, a two-player game would be a more practical solution for a highly vandalised environment, using a competition-based model such as the Stackelberg model utilised in [START_REF] Islam | A game theoretic approach for adversarial pipeline monitoring using wireless sensor networks[END_REF]. A zero-sum game applying a score-based approach would also provide a stable solution on the assumption that each player is rational [START_REF] Shoham | MULTIAGENT SYSTEMS Algorithmic, Game-Theoretic, and Logical Foundations[END_REF]. Yet, this assumption cannot be guaranteed, especially as it concerns third-party interference. Other options include adopting the Schrödinger-based model as in the work [START_REF] Gao | Two players game based on schrödinger equation solution[END_REF]. In this work, it is assumed that each player has only two states, where each state represents a different solution to the game. Playing conditions also include the time duration and outside factor influences. However, such refined approaches can only be applied to specific cases. As the situations, failure rates and types vary significantly across oil-producing countries, a better approach could be the adoption of a hybrid system using n players. In this case, n could be determined based on the historical data considered, thereby providing a generic solution to data and service management in crude oil pipelines.
Further to that, the design of a specialised multi-sensor device that includes GPS, a pressure sensor, and a speed sensor is required to carry out experimentation of our work. While the algorithm itself has been proven to be energy efficient, other opportunities exist to improve the energy efficiency in the design phase of such a sensing device. In [START_REF] Bonvoisin | An environmental assessment method for wireless sensor networks[END_REF][START_REF]An integrated method for environmental assessment and ecodesign of ICT-based optimization services[END_REF][START_REF] Pohl | How lca contributes to the environmental assessment of higher order effects of ict application: A review of different approaches[END_REF], Bonvoisin et al. proposed a framework for the analysis of the environmental impact of WSN and ICT solutions throughout their lifecycle. These include sensor devices, gateways, interaction models, and optimisation at the various levels of the system. Additionally, Achachlouet, in his work [START_REF] Achachlouet | Exploring the Effects of ICT on Environmental Sustainability: From Life Cycle Assessment to Complex Systems Modelling[END_REF], enumerates the importance of the lifecycle approach to the design of ICT solutions for environmental sustainability, from its direct and rebound effects. Hence, a lifecycle-based approach to creating such a sensor could tremendously reduce energy consumption and consequently the environmental impact.
Options to consider range from the sourcing of the electrical components to the communication aspect. For example, the communication aspect of the device can be based on improved LoRaWAN-classed devices. LoRaWAN devices act as transceivers communicating with each other or the gateway on well-defined transmission and receive windows for class A, B or C devices. HyDiLLEch also works with defined intervals, setting a precedent for research on suitable device classes during the design phase.
According to the United States Energy Information Administration [START_REF]Oil and petroleum products explained[END_REF], crude oil currently accounts for one-third of global oil consumption. While this figure is forecasted to be on the rise, alternative energy sources such as liquefied natural gas (LNG), biofuels and renewable energy are also increasingly considered. LNG, in particular, is a fast-rising alternative energy source due to its minimal contribution to greenhouse gases and elevated combustion efficiency [START_REF] He | LNG cold energy utilization: Prospects and challenges[END_REF]. Yet, like crude oil, LNG is transported via pipelines [START_REF]Liquefied natural gas[END_REF][START_REF] Molnar | Economics of gas transportation by pipeline and lng[END_REF], which are also susceptible to failures resulting in environmental hazards [START_REF] Gómez-Camacho | An environmental perspective on natural gas transport options: Pipelines vs liquefied natural gas (lng)[END_REF]. Hence, as a long-term opportunity, ensuring a fault-tolerant LDMS for LNG is paramount. Such a system can be built using our LDMS as a model, especially the architectural design and communication aspects. Furthermore, water pipeline monitoring is an essential aspect of the development of smart cities [START_REF] Malar | Smart and innovative water conservation and distribution system for smart cities[END_REF][START_REF] Adedeji | Towards digitalization of water supply systems for sustainable smart city development-water 4.0[END_REF]. Current approaches to its monitoring are based on centralised WSN [START_REF] Karray | EARN-PIPE: A testbed for smart water pipeline monitoring using wireless sensor network[END_REF][START_REF] Hassanin | A wireless sensor network for water pipeline leak detection[END_REF], making them susceptible to SPOF. As a result, a distributed approach to pipeline monitoring, such as HyDiLLEch, could be applied.
Another possibility to consider is the design and development of an IoT-based ecosystem for monitoring the processes in the three sectors of the OGI. As the sectors are codependent, a holistic approach to monitoring all the sectors will significantly improve the overall system. For instance, oil and gas transportation originates from the wellhead, storage facilities or offshore drilling sources, vital parts of the upstream sector of the OGI. The transportation aspect of the product begins from such sources using the gathering lines. Hence, monitoring such components as the oil well is essential for the overall supply chain [START_REF] Aalsalem | An intelligent oil and gas well monitoring system based on internet of things[END_REF]. Likewise, the operational and supply chain activities across the sectors can be enhanced through efficient data analysis from an IoT-based system [START_REF] Wanasinghe | The internet of things in the oil and gas industry: a systematic review[END_REF][START_REF] Sattari | A theoretical framework for data-driven artificial intelligence decision making for enhancing the asset integrity management system in the oil and gas sector[END_REF]. For integrated and preventive asset integrity maintenance and management across the sectors, AI-based and machine learning techniques such as Bayesian networks can be used. Such possibilities can only be enabled through efficient and long-term data collection using IoT-based monitoring systems for incidents in the three sectors.
List of Figures
1.1 The three sectors of the Oil and Gas Industry
1.2 Crude oil transportation by mode in the US [1]
1.3 Statistical analysis of causes of pipeline failures in liquid and gas pipelines in the US [2]
1.4 Crusted land and swampland pollution [3]
1.5 Hydrocarbon pollution of water bodies [3]
1.6 Digital maturity of the midstream sector [4]
1.7 Forecasted growth of connected devices
2.1 Steady fluid transmission in a pipeline
2.2 Effects of leakage on the pressure gradient and NPW generation
2.3 Pressure points measurements in a pipeline
2.4 The pressure gradient distribution after a leak
2.5 Negative pressure wave introduced by a leakage
3.1 Generic IoT Architecture [5]
3.2 Wireless Communication Technologies [6]
3.3 Network Architecture
3.4 Network Architecture with Communication
3.5 Event coverage on the proposed architecture
3.6 Data and Service Placement
4.1 Sensor placement on long transmission crude oil pipelines
4.2 Detection and localisation of leakages
4.3 Leakage detectability by number of sensors and distance
4.4 NPWM and GM
4.5 HyDiLLEch-1 Average localisation accuracy by NDLs
4.6 HyDiLLEch-2 Average localisation accuracy by NDLs
4.7 Communication overhead by the number of packets
4.8 Sampling energy consumption of the sensors
4.9 Radio energy consumption
4.10 Cumulative energy consumption
5.1 Nigerian National Petroleum Corporation pipeline network [7]
5.2 5 year regionalised pipeline incidents [8]
5.3 State transition Diagram
5.4 Simulated pipeline network and OpenAI Gym Interaction
5.5 Total Number of Nodes Detecting Leakage
5.6 Total Number of HPN
5.7 Accuracy in Leakage Detection from all nodes
5.8 Accuracy in Leakage Detection in HPN
5.9 Total Obtainable Reward across the Error Rates
5.10 Total Obtainable Rewards with HPN
5.11 Total Reward by Algorithm
5.12 Total Energy Consumption by Algorithm
5.13 Regional Reward vs Energy Consumption
Figure 1.3: Statistical analysis of causes of pipeline failures in liquid and gas pipelines in the US [2]

3. A hybrid of both wired and wireless systems
4. Daily overflights using state-of-the-art high-definition cameras to a specialised helicopter
5. Surveillance by security personnel
6. Community-based surveillance
7. Unmanned aerial vehicles (UAVs), i.e. drones

Figure 1.6: Digital maturity of the midstream sector [START_REF] Slaughter | Bringing the digital revolution to midstream oil and gas[END_REF]
Figure 1.7: Forecasted growth of connected devices

Fluid transmission can be categorised into different flows like turbulent and laminar flows, steady and unsteady flows, uniform and non-uniform flows, rotational and irrotational flows, compressible and incompressible flows, single direction or multiple directional flows, viscous and inviscid flows, and internal and external flows. Each flow type presents distinct characteristics in terms of changes in velocity and pressure over space and time.

Figure 2.1: Steady fluid transmission in a pipeline
Figure 2.2: Effects of leakage on the pressure gradient and NPW generation
Figure 2.3: Pressure points measurements in a pipeline
Figure 2.4: The pressure gradient distribution after a leak
Figure 2.5: Negative pressure wave introduced by a leakage
Figure 3.2: Wireless Communication Technologies [6]
Figure 3.3: Network Architecture
Figure 3.5: Event coverage on the proposed architecture
Figure 3.6: Data and Service Placement

3. Fault tolerance: Continue to detect leakage in the event of failures.
4. Accuracy: Ensure a highly accurate localisation of leakages.
5. Energy efficiency: Balance the energy consumption without compromising accuracy.
6. Detection time: Implement this detection and localisation in real time, assuming leakages occur linearly, i.e. one after the other in every pipeline segment.

Figure 4.1: Sensor placement on long transmission crude oil pipelines

HyDiLLEch first defines the gradient thresholds (line 2 of Alg. 1). These values are calculated to prevent classical variations of the fluid propagation model and customised offline. Each sensor captures the pressure and propagates/receives the values.

Algorithm 2 HyDiLLEch (Double-Hop)
1: {Init: steady state}
2: Set upper and lower PG thresholds
3: for ever do
...

Figure 4.2: Detection and localisation of leakages
Figure 4.3: Leakage detectability by number of sensors and distance
Figure 4.4: NPWM and GM
Figure 4.5: HyDiLLEch-1 Average localisation accuracy by NDLs
Figure 4.6: HyDiLLEch-2 Average localisation accuracy by NDLs
Figure 4.7: Communication overhead by the number of packets
Figure 4.8: Sampling energy consumption of the sensors
Figure 4.9: Radio energy consumption
Figure 4.10: Cumulative energy consumption
Figure 5.3: State transition Diagram
\[
\forall s : \quad V^*(s) = \max_{a} \sum_{s'} p(s' \mid s, a)\,\big[ r(s, a) + \beta V^*(s') \big] \tag{5.4}
\]
Algorithm 3 ε-greedy Q-learning
1: Reset Environment
2: ε ∈ (0, 1], α = 1, β = 0.99
3: Initialise Q(s, a), E(s, a) for all s ∈ S+, a ∈ A(s) arbitrarily, except Q(terminal, ·) = 0
4: for each timestep in each episode do
Figure 5.5: Total Number of Nodes Detecting Leakage
Figure 5.6: Total Number of HPN
Figure 5.7: Accuracy in Leakage Detection from all nodes
Figure 5.8: Accuracy in Leakage Detection in HPN
Figure 5.9: Total Obtainable Reward across the Error Rates
Figure 5.10: Total Obtainable Rewards with HPN
Figure 5.11: Total Reward by Algorithm
List of Tables
1 List of acronyms
1.1 Causes of Pipeline Failures by the transportation mode
2.1 Summary of pipeline monitoring techniques
2.2 Summary of pipeline monitoring sensors and systems
2.3 Summary of common IoT challenges in pipeline monitoring and applied approaches
3.1 Properties of some IoT Wireless Communication Protocols
4.1 Pipeline and oil characteristics
4.2 Network simulation parameters
5.1 Simulation Parameters
Table 1: List of acronyms
3GPP: 3rd Generation Partnership Project
AI: Artificial Intelligence
BFT: Byzantine Fault Tolerance
BLE: Bluetooth Low Energy
CCTV: Closed-Circuit Television
CDN: Content Distribution Network
CFD: Computational Fluid Dynamics
CPS: Cyber Physical Systems
DAL: Detection and Localisation
DBMS: Database Management Systems
DPR: Department of Petroleum Resources
dwt: dead-weight tonnage
GA: Greedy Approximation
GM: Gradient-based Method
GMM: Gaussian Mixture Model
GPR: Ground Penetrating Radars
GPS: Global Positioning System
GSM: Global System for Mobile Communication
GSP GRS: Greedy Service Placement with Greedy Request Scheduling
GSP ORS: Greedy Service Placement with Optimal Request Scheduling
HPN: High Performing Nodes
HyDiLLEch: Hybrid Distributed Leakage Detection and Localisation Technique
IaaS: Infrastructure as a Service
IEEE: Institute of Electrical and Electronics Engineers
IoT: Internet of Things
KNN: K-nearest Neighbour
LAS: Leakages and Spills
LDMS: Leak Detection and Monitoring System
LDT: Leakage Detection Technique
LNG: Liquefied Natural Gas
LoRaWAN: Long Range Wide Area Network
LPWAN: Low Power Wide Area Network
MDP: Markov Decision Process
MEMS: Micro-electromechanical System
MIO: Mixed Integer Optimisation
MIP: Mixed Integer Programming
MTC: Minimum Test Cover
NB-IoT: Narrowband Internet of Things
NDL: Nodes Detecting and Localising Leakages
NNPC: Nigerian National Petroleum Corporation
NPW: Negative Pressure Wave
NPWds: Negative Pressure Wave Detecting Sensor
NPWmdd: Negative Pressure Wave Maximum Detecting Distance
NPWM: Negative Pressure Wave Method
OGI: Oil and Gas Industry
ORS: Optimal Request Scheduling
PaaS: Platform as a Service
PG: Pressure Gradient
PHMSA: Pipeline and Hazardous Materials Safety Administration
PIG: Pipeline Inspection Gauges
POD: Place of Deployment
PPA: Pressure Point Analysis
PSI: Pounds per Square Inch
pubsub: publish subscribe
QoS: Quality of Service
RFID: Radio Frequency Identification
RGA: Robust Greedy Approximation
RLHO: Reinforcement Learning Heuristic Optimisation
RMIO: Robust Mixed Integer Optimisation
RSFO: Robust Sub-modular Function Optimisation
SaaS: Software as a Service
SCADA: Supervisory Control and Data Acquisition
SPOF: Single Points of Failure
SVM: Support Vector Machine
UAV: Unmanned Aerial Vehicles
UDP: User Datagram Protocol
ULCC: Ultra Large Crude Carrier
UMTS: Universal Mobile Telecommunication System
UNEP: United Nations Environment Programme
USD: United States Dollars
VLCC: Very Large Crude Carrier
WSC: Weighted Set Cover
WSN: Wireless Sensor Networks
Table 1.1: Causes of Pipeline Failures by the transportation mode. The table compares the transportation modes Pipelines, Rails, Ships and Trucks against the following causes: Corrosion, Regulatory Oversight, Natural Disaster, Collisions, Muffler Defects, Excavation, Third Party Interference, Human Error, Maintenance, Operational, Undetermined, Allisions, Grounding, Location, Monitoring.
The monitoring techniques are summarised in Table 2.1 and sensors/systems in Table 2.2.

Table 2.1: Summary of pipeline monitoring techniques (Detection Method; Category; Advantages; Challenges)
- Pressure Point Analysis (PPA): Software, Internal. Advantages: leak detection using average pressure measurement, easy to implement. Challenges: cannot localise leakage, only suitable for a pipeline in steady conditions.
- Negative Pressure Wave Method (NPWM): Software, Internal. Advantages: detects small to large leaks in minutes, provides accurate leakage localisation, can determine leak size. Challenges: requires a high sampling rate, not suitable for long-haul pipelines, event-driven.
- Gradient-based Method: Software. Advantages: highly accurate leakage detection and localisation, can detect small to big-sized leaks, low energy consumption. Challenges: accuracy tightly dependent on sensors.
- Mass-volume balance Method: Software. Advantages: detects large leaks in minutes. Challenges: not suitable for small leaks, prone to false alarms, cannot localise leakages.
- Real-time transient model: Hardware. Advantages: detects large and medium-sized leaks in minutes. Challenges: inaccurate leak locations, requires multiple online physical measurements, time-consuming, costly.

Table 2.2: Summary of pipeline monitoring sensors and systems (Detection Sensors and Systems; Category; Advantages; Challenges)
- Vehicles: Hardware, Human-based. Advantages: provide security in the areas covered. Challenges: expensive, inefficient, prone to human errors.
- SCADA with Pressure Sensors: Hybrid. Advantages: can detect leakages, provides valuable information on fluid flow. Challenges: expensive, difficult to maintain, inefficient, high false alarms, cannot detect pinhole leakages.
- PIG: Hardware, Internal. Advantages: can monitor the structural integrity of pipelines. Challenges: requires huge labour, difficulty in integrating intelligence.
- Satellite: External. Advantages: can be used to effectively monitor subsea pipelines for leakages and spills. Challenges: expensive.
- Fibre Optic Cables: Wired. Advantages: accurate leak detection and localisation, detects small to large leaks, can detect leakages in seconds depending on the size. Challenges: expensive to deploy and maintain, leakage detection is interrupted when the cable is cut, limited to finite distances.
- Community-based Surveillance: Human-based. Advantages: provision of security and surveillance in the guarded area. Challenges: expensive, ineffective, long detection or localisation time, inability to detect and localise leakages in real-time.
- Security-based: Human-based. Advantages: provision of security and surveillance in the guarded area. Challenges: expensive, ineffective, long detection or localisation time, cannot detect in real-time.
- Hydrocarbon Detection sensing tubes: Hardware. Advantages: minimal false alarms, detect big leakage in minutes depending on the size. Challenges: expensive to deploy and maintain, not suitable for long haul pipelines.
- UAV: Hardware, Human-based. Advantages: detect leakages in difficult terrains, cheaper than helicopters, lower speed and continuous altitude for improved detection. Challenges: long detection and localisation time, limited payload capabilities, susceptible to wind and thermal turbulence interference, limited coverage area and flight time, largely experimental.
- Ultrasonic Flow Meters: Hardware, External, Internal. Advantages: robust, low maintenance. Challenges: difficulty in detecting small leaks, accuracy dependent on sensor's sensitivity, does not allow commodity inlet and outlet between sensors.
- LiDAR: Hardware. Advantages: highly effective. Challenges: limited coverage area.
- Infrared Thermography: Hardware. Advantages: robust to leakage detection and localisation, can be used as a hand-held device. Challenges: cannot be used for continuous monitoring, usually requires to be mounted on satellite or vehicles for effective monitoring.
- Helicopters: Hardware, Human-based. Advantages: detect leakages in difficult terrains. Challenges: inefficient, requires all-round monitoring, covers only small areas, expensive, prone to human errors, high speed reducing the detection capability.
Table 4.1 outlines the simulation parameters and crude oil properties.

Table 4.1: Pipeline and oil characteristics
Material: Carbon steel
Pipeline Length (L): 20 km
Wall thickness (w): 0.323 m
Inside diameter (d): 0.61 m
Height/elevation (z): 0 m
Oil kinetic viscosity: 2.90 mm²/s
Temperature: 50 °C
Oil density (ρ): 837 kg/m³
Inlet pressure (P0): 1000 psi
Reynolds no (Re): 1950
Velocity (V): 2 m/s
Molecular Mass (m): 229
Oil elasticity (K): 1.85 × 10^5 psi
Carbon steel elasticity (Y): 3 × 10^6 psi
Gravitational force (g): 9.81 m/s²
Constant (e): 2.718
Coefficient of friction (λ): 0.033
Wave speed (c): 14.1 m/s

Table 4.2: Network simulation parameters
Number of sensors: 21
Number of gateways: 1
PHY/MAC model: 802.11ax Ad hoc
Transmit power: 80 dBm
Transmit distance: 20 km
Error model: YANS
Propagation Loss: Log-distance
Path Loss (L0): 46.67 dB
Reference distance (d0): 1 m
Path-Loss Exponent (σ): 3.0
Packet size: 32 bytes
Data rate: 1 Kbps
Distance between sensors: 1 km
Duty cycle: 70%
Acknowledgements
I thank you all. I pray that Allah SWT continue to keep us together.
Finally, I am especially grateful to the Petroleum Technology Development Fund (PTDF) for financing this work.
The action space: Actions include start, connection, service activation, data and service replication, and data and service migration.

Reward: The reward is based on the localisation accuracy of the leakage and the number of nodes that localise the leakages.

Game over: A game is considered over at the end of the episodes.

We present the simulation parameters in Table 5.1. The Q-learning algorithm is a well-known reinforcement learning algorithm that guarantees the convergence of an MDP using greedy policies to return the maximum rewards for the trajectories of the states. In our work, however, we use the epsilon-greedy Q-learning algorithm to obtain the optimal value function for the three regions. This version of the Q-learning algorithm is shown in Alg. 3 and extracted from [START_REF] Sutton | Reinforcement Learning: An Introduction[END_REF].
04107134 | en | [ "math.math-ag" ] | 2024/03/04 16:41:22 | 2022 | https://theses.hal.science/tel-04107134/file/NOCERA_Guglielmo_2022_ED_269.pdf
Ivan Di Liberti
Jeremy Hahn
Andrea Gagna
Dennis Gaitsgory
Owen Gwilliam
Jacob Lurie
Valerio Melani
Sam Raskin
Peter Haine
Mark Macerato
Michele Pernice
I started to learn derived geometry and geometric Langlands for the first time at the beginning of my PhD, mostly during the pandemic. That has not been always easy, and I wish to deeply thank first of all my parents, with whom I lived during most of the time, for their constant and warm support and affection, and also for their help with my English writing. 1 My advisors, Mauro Porta and Gabriele Vezzosi, were the ones who suggested the topic of this thesis to me in the first place and encouraged me to embrace the language of derived algebraic geometry. Geographical distance and the pandemic have prevented me from having a frequent interaction with them during most of my PhD, and our longest time together remains Spring 2019 in Berkeley, from which I will remember Mauro's elven verses from the foundational Noldor poem Higher Topos Theory, and Gabriele's unmistakable smile behind G. Karapet's portrait of a derived Artin stack.
Emanuele Pavia experienced with me "all the sorrows, but also the joy and marvel, of this journey": I owe him empathy, mathematics, many curses (especially in the Ghost type variant), and his company during two chocolate-based months of lockdown in Strasbourg.
of wonder. A special thank to Peter, for sharing their knowledge, enthusiasm, and many thoughts about mathematical life; to Michele for willing to undertake such a thorough yet satisfying journey through derived geometry (and another in a galaxy far far away in search of crêpes); to Marco for the equally fruitful and fun period we spent in Regensburg earlier this year and for staring with me at the frown of the terracotta warrior that awaits outside.
Finally, thanks, for their friendship, to Leonardo Abate, Gioacchino Antonelli, Marco Barbato, Lorenzo Benedini (who was so kind to share his house in Milan with me for five weeks), Andrea Cappelletti, Manuel Crispo, Julian Demeio (to whom I owe one, and he knows why), Fabio Ferri, Alberto Morelli,

1.1 The geometric Satake theorem
Motivation
A classical problem in representation theory is the study of a reductive group G (e.g. GL_n, SL_n, PGL_n) and its Langlands dual Ǧ (e.g. ǦL_n = GL_n, ŠL_n = PGL_n). A celebrated result in the study of Langlands duality is the Satake theorem, which establishes an isomorphism between the C-algebra of compactly supported G(Z_p)-biinvariant functions on G(Q_p), called the Hecke algebra of G, and the (complexified) Grothendieck ring of finite dimensional representations of Ǧ. Ginzburg [START_REF] Ginzburg | Perverse sheaves on a Loop group and Langlands' duality[END_REF] and later Mirkovic and Vilonen [START_REF] Mirkovic | Geometric Langlands duality and representations of algebraic groups over commutative rings[END_REF] provided a "sheaf theoretic" analogue (actually a categorification) of this theorem, called the Geometric Satake Equivalence: here G is a complex reductive group, and the statement has the form of an equivalence of tensor categories between the derived category of equivariant perverse sheaves Perv_{G_O}(Gr_G) and the category of finite dimensional representations of Ǧ (see Section 1.1.2 below). The key new object here is the affine Grassmannian Gr_G, an infinite dimensional algebro-geometric object with the property that Gr_G(C) = G(C((t)))/G(C[[t]]).
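For orientation, dualising the root datum, which swaps characters with cocharacters and roots with coroots, yields the following standard examples of dual pairs; this is classical background rather than a statement from the thesis.

```latex
% (X^*(T), \Phi, X_*(T), \Phi^\vee) \longmapsto (X_*(T), \Phi^\vee, X^*(T), \Phi)
\[
\begin{array}{c|c}
G & \check{G} \\ \hline
\mathrm{GL}_n & \mathrm{GL}_n \\
\mathrm{SL}_n & \mathrm{PGL}_n \\
\mathrm{Sp}_{2n} & \mathrm{SO}_{2n+1} \\
\mathrm{SO}_{2n} & \mathrm{SO}_{2n}
\end{array}
\]
```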
The importance of results such as Geometric Satake, Derived Satake and their variants is related to the more general (and partly conjectural) Geometric Langlands Duality, which was introduced by Beilinson and Drinfeld in analogy to the celebrated arithmetic Langlands conjecture (see e.g. [START_REF] Beilinson | Quantization of Hitchin's integrable system and Hecke eigensheaves[END_REF]). One can say that if the Geometric Langlands Duality deals with algebraic and geometric data related to a reductive group and a smooth complex curve X, the Geometric Satake Equivalence is a "specialization" which looks at the same data near a chosen closed point in X; the affine Grassmannian Gr G itself, for example, is related to the local geometry of smooth curves, see Proposition 1.1.5 below. The use of techniques from homotopy theory and derived algebraic geometry in this field has provided many powerful results, and the current and most convincing formulations of the Geometric Langlands Duality are themselves derived in nature. One of the currently most accepted statements for a Geometric Langlands Conjecture is in [START_REF] Arinkin | Singular support of coherent sheaves, and the geometric Langlands conjecture[END_REF].
Intuitively, the statement should satisfy the following requirements.
Conjecture 1.1.1. Let G be a reductive complex group, and X a smooth projective complex curve. Then the Geometric Langlands Duality should at least:
• establish an equivalence between some category of sheaves over the stack Bun G (X) of G-torsors over X and a category of sheaves over the stack of Ǧ-local systems on X.
• agree with the Geometric Satake Equivalence (see below) when specialized "at" any closed point of X.
In order to explain Conjecture 1.1.1 we now present a short overview of the Geometric Satake Equivalence. The expert reader can skip directly to Section 1.3. We refer the reader seeking for a detailed explanation on this matter to the excellent notes [START_REF] Zhu | An introduction to affine Grassmannians and to the geometric Satake equivalence[END_REF], covering all of the next subsection and much more, including background, motivations and further developments.
Statement of the Geometric Satake Theorem
Theorem 1.1.2 (Geometric Satake Equivalence). Fix a reductive algebraic group G over C, and a commutative ring k for which the left-hand-side of the following formula is defined: for example, k could be Q ℓ , Z ℓ , Z/ℓ n Z, F ℓ . There exists a symmetric monoidal structure ⋆ on Perv G O (Gr G ), called convolution, and an equivalence of symmetric monoidal abelian categories
(Perv G O (Gr G , k), ⋆) ≃ (Rep fin ( Ǧ, k), ⊗).
Let us explain the meaning of this statement. Here Ǧ is the Langlands dual of G, obtained by dualising the root datum of the original group G, and the right-hand side is the abelian category of finite-dimensional k-representations of Ǧ, equipped with a tensor (i.e. symmetric monoidal) structure given by the tensor product of representations. In order to define the left-hand side, we need to introduce some further definitions: by G_O we mean the representable functor C-algebras → Set, R ↦ Hom(R[[t]], G) (also denoted by G(C[[t]])), by G_K we mean the ind-representable functor C-algebras → Set, R ↦ Hom(R((t)), G) (also denoted by G(C((t)))), and by Gr_G we mean the affine Grassmannian, that is the stack quotient Gr_G = G(C((t)))/G(C[[t]]). Ind-representability of G_K (and of Gr by consequence) comes from the fact that there is a natural filtration in finite-dimensional projective schemes Gr_{≤N}, N ≥ 0, induced by [START_REF] Zhu | An introduction to affine Grassmannians and to the geometric Satake equivalence[END_REF], Theorem 1.1.3. We will call this filtration the lattice filtration.
Remark 1.1.3.
There is a natural action of G O on Gr G by left multiplication, whose orbits define an algebraic stratification of Gr G over the poset X • (T ) + . When viewed from the point of view of the complex-analytic topology, this stratification satisfies the so-called Whitney conditions (for a proof, see [Mat]). This allows to define the category Perv G O (Gr G , k), namely the abelian category of G O -equivariant perverse sheaves on Gr G with values in k-modules. This category is defined as
colim_N Perv_{G_O}(Gr_{≤N}, k)
in the sense of [START_REF] Zhu | An introduction to affine Grassmannians and to the geometric Satake equivalence[END_REF]5.1 and A.1.4].
The ind-scheme Gr_G is related to the theory of curves in the following sense: if X is a smooth projective complex curve, the formal neighbourhood X_x at a given closed point x of X is given by the map ϕ_x : Spec C[[t]] → X. The inclusion C[[t]] ⊂ C((t)) induces a map Spec C((t)) → Spec C[[t]] → X (the second map being ϕ_x), which is a model for the punctured formal neighbourhood of x.
Convolution product of equivariant perverse sheaves
Now we explain the tensor structure on both sides. The category Rep( Ǧ, k) is equipped with the standard tensor product of representations; we define now the tensor structure given by convolution product on Perv G O (Gr G ). A more detailed account is given in [Zhu16, Section 1, Section 5. 1, 5.4]. Consider the diagram
\[
\mathrm{Gr}_G \times \mathrm{Gr}_G \xleftarrow{\;p\;} G_K \times \mathrm{Gr}_G \xrightarrow{\;q\;} G_K \times^{G_O} \mathrm{Gr}_G \xrightarrow{\;m\;} \mathrm{Gr}_G \tag{1.1.2}
\]
where G K × G O Gr G is the stack quotient of the product G K × Gr G with respect to the "anti-diagonal" left action of G O defined by γ • (g, [h]) = (gγ -1 , [γh]). The map p is the projection to the quotient on the first factor and the identity on the second one, the map q is the projection to the quotient by the "anti-diagonal" action of G O , and the map m is the multiplication map (g, [h]) → [gh]. It is important to remark that this construction, like everything else in this section, does not depend on the chosen
x ∈ X(C), since the formal neighbourhoods of closed points in a smooth projective complex curve are all (noncanonically) isomorphic. Note also that the left multiplication action of G O on G K and on Gr G induces a left action of G O × G O on Gr G × Gr G . It also induces an action of G O on G K × Gr G given by (left multiplication, id) which canonically projects to an action of G O on G K × G O Gr G . Note that p, q and m are equivariant with respect to these actions. Now if A 1 , A 2 are two G O -equivariant perverse sheaves on Gr G , one can define a convolution product
A 1 ⋆ A 2 = m * Ã (1.1.3)
where m * is the derived direct image functor, and à is any perverse sheaf on G K × G O Gr G which is equivariant with respect to the left action of G O and such that q * à = p * (A 1 ⊠ A 2 ). (Of course, the tensor product must be understood as a derived tensor product in the derived category.) Note that such an à exists because q is the projection to the quotient and A 2 is G O -equivariant. This is the tensor structure that we are considering on Perv G O (Gr G ).
Remark 1.1.7. Note that m * carries perverse sheaves to perverse sheaves: indeed, it can be proven that m is ind-proper, i.e. it can be represented by a filtered colimit of proper maps of schemes compatibly with the lattice filtration. By [KW01, Lemma III.7.5], and the definition of Perv G O (Gr G , k) as a direct limit, this ensures that m * carries perverse sheaves to perverse sheaves.
Remark 1.1.8. Note that the convolution product can be described as follows. Consider the diagram of stacks
\[
\begin{array}{ccc}
(G_O \times G_O)\backslash (G_K \times \mathrm{Gr}) & \overset{\sim}{\longrightarrow} & G_O\backslash (G_K \times^{G_O} \mathrm{Gr}) \\
{\scriptstyle p}\big\downarrow & & \big\downarrow{\scriptstyle m} \\
G_O\backslash \mathrm{Gr} \times G_O\backslash \mathrm{Gr} & & G_O\backslash \mathrm{Gr}
\end{array}
\]
where all the actions are induced by the left multiplication action of G O on G K . Then:
• the horizontal map is an equivalence;
• a G O -equivariant perverse sheaf on Gr is the same thing as a perverse sheaf on G O \Gr;
• the convolution product is equivalently described (up to shifts and perverse truncations) by
$A_1 \star A_2 = m_*(p^*(A_1 \boxtimes A_2)).$
Observations similar to Construction 1.1.6 prove the following: Proposition 1.1.9. We have the following equivalences of schemes or ind-schemes:
• G_O ≃ Aut_{X_x}(T_G)
• G_K(R) ≃ {(F, α, µ)}, where F ∈ Bun_G(X × Spec R), α : F|_{(X∖{x})×Spec R} ≃ T_G|_{(X∖{x})×Spec R} and µ : F|_{X_x×Spec R} ≃ T_G|_{X_x×Spec R}
• (G_K × Gr_G)(R) ≃ {(F, α, µ, G, β)}, with (F, α, µ) as above and β : G|_{(X∖{x})×Spec R} ≃ T_G|_{(X∖{x})×Spec R}
• (G_K ×^{G_O} Gr_G)(R) ≃ {(F, α, G, η)}, with η : F|_{(X∖{x})×Spec R} ≃ G|_{(X∖{x})×Spec R}.
Let us finally explain the meaning of Conjecture 1.1.1. Thanks to Proposition 1.1.5 and Construction 1.1.6, there is a way to interpret Perv G O (Gr G ) as the "specialization at any point of x" of the "Bun G (X)" side of the Geometric Langlands Conjecture, and Rep( Ǧ) as the specialization of the "local systems" side. In this sense, one wants the Geometric Langlands Conjecture to agree with the Geometric Satake Equivalence.
1.2 The spherical Hecke category

1.2.1 Equivariant constructible sheaves
We now review the notion of equivariant constructible sheaves on the affine Grassmannian. Recall that the affine Grassmannian admits a stratification in Schubert cells. Let k be a finite ring. We can consider the ∞-category of sheaves with coefficients in k which are constructible with respect to that stratification. We denote this category by Cons(Gr, S; k). Here we are not assuming any finite-dimensionality constraint on the stalks nor on the cohomology of our sheaves. We denote by Cons^fd(Gr, S; k) the small subcategory of Cons(Gr, S; k) spanned by constructible sheaves with finite stalks. The ∞-category Cons(Gr, S; k) admits a t-structure whose heart is the category of perverse sheaves which are constructible with respect to S. We also consider the category Cons_{G_O}(Gr, S; k) of G_O-equivariant constructible sheaves with respect to S, defined as
\[
\lim\Big( \mathrm{Cons}(\mathrm{Gr}, S; k) \rightrightarrows \mathrm{Cons}(G_O \times \mathrm{Gr}, S; k) \rightrightarrows \cdots \Big), \tag{1.2.1}
\]
where the stratification on G_O × ⋯ × G_O × Gr is trivial on the first factors and S on the last one. Note that there exists a notion of category of equivariant constructible sheaves with respect to some stratification (instead of a fixed one, like S in our case). In full generality, let us fix an algebraic group H acting on a scheme X defined over a field K, and let us denote by S the orbit stratification on X. We have a pullback square of triangulated (or dg, or stable ∞-) categories (full faithfulness of the vertical arrows comes from the fact that the transition maps in the colimits are fully faithful).
\[
\begin{array}{ccc}
\mathrm{Cons}_H(X, S; k) & \longrightarrow & \mathrm{Cons}(X, S; k) \\
\big\downarrow & & \big\downarrow \\
D_{c,H}(X; k) & \longrightarrow & D_c(X; k)
\end{array} \tag{1.2.2}
\]
is not: its essential image only generates the target as a triangulated category ([Ric]).
On the contrary, the left vertical arrow of (1.2.2) is an equivalence. Indeed, the functor is fully faithful because the transition maps in the diagram of which we take the colimit are, and essentially surjective by the following argument. First of all, we reduce to the finite-dimensional terms of the filtration {Gr N } N ∈N (which can be done by the very definition of category of constructible sheaves on an infinite-dimensional variety). Then we apply the following lemma: Lemma 1.2.1. Let H be an group scheme acting on a finite-dimensional scheme Y , and suppose that the orbits form a stratification S of Y . Then the two categories D c,H (Y ) and Cons H (Y, S ) are equivalent.
Proof. Let us consider an equivariant constructible sheaf F (with respect to some stratification) and the maximal open subset U of Y where the sheaf is locally constant: this is nonempty since we know that F is constructible with respect to some stratification. Then U is H-stable by equivariance of F and maximality of U, and thus so is its complement, and we can apply Noetherian induction.
Remark 1.2.2. Let $\mathrm{Gr}^{\mathrm{an}}_G$ (resp. $G^{\mathrm{an}}_{\mathcal O}$) be the analytic ind-variety (resp. analytic group) corresponding to $\mathrm{Gr}_G$ (resp. $G_{\mathcal O}$). There is an equivalence between the (triangulated, or dg/∞-) category of algebraic $G_{\mathcal O}$-equivariant constructible sheaves over the affine Grassmannian, and that of analytic $G^{\mathrm{an}}_{\mathcal O}$-equivariant constructible sheaves over $\mathrm{Gr}^{\mathrm{an}}_G$. Indeed, by [BGH20, Proposition 12.6.4] (in turn building on [Art72, Théorème XVI.4.1]), there is an equivalence of categories
$$D^{\mathrm{alg}}_c(\mathrm{Gr}_G)\ \simeq\ D^{\mathrm{an}}_c(\mathrm{Gr}^{\mathrm{an}}_G),$$
and the construction which adds the equivariant structure is the same on both sides (the analytification functor commutes with colimits and finite limits).
Derived Satake Theorem and the E 3 -monoidal structure
The Geometric Langlands Conjecture is currently formulated as a "derived" statement (see [AG15]).
Bezrukavnikov and Finkelberg [START_REF] Bezrukavnikov | Equivariant Satake category and Kostant-Whittaker reduction[END_REF] have proven the so-called Derived Satake Theorem. There, the abelian category Perv G O (Gr G ; k) is replaced by Sph loc.c. (G) (the small spherical Hecke category), which is a higher category admitting the following presentations:
• as the dg- or ∞-category Cons^fd_{G_O}(Gr_G; k) of G_O-equivariant constructible sheaves on Gr_G with finite-dimensional stalks;
• as the dg- or ∞-category D-mod_{G_O}(Gr_G; k) of G_O-equivariant D-modules on Gr_G with finite-dimensional stalks.
Theorem 1.2.3 (Derived Satake Theorem, [BF07, Theorem 5]). Let G be a reductive group over a field k of characteristic zero. There is an equivalence
$$\mathrm{Cons}^{\mathrm{fd}}_{G_{\mathcal O}}(\mathrm{Gr}_G,k)\ \simeq\ \mathrm{Coh}\big(\mathrm{Spec}\,\mathrm{Sym}(\check{\mathfrak g}^*[1])\big)^{\check G}.$$
The category Perv G O (Gr G , k) is the heart of a t-structure on Sph loc.c (G), and the Geometric Satake Theorem is indeed recovered from the Derived Satake Theorem by passing to the heart. The same diagram as in (1.1.2) provides the formula for the convolution product of constructible sheaves, but the commutativity of the product is lost. However, in [START_REF] Nocera | A model for E3 the fusion-convolution product of constructible sheaves on the affine Grassmannian[END_REF] we recovered techniques similar to the ones that provide the commutativity of ⋆ in the perverse case, in order to prove a subtler result. Theorem 1.2.4 [START_REF] Nocera | A model for E3 the fusion-convolution product of constructible sheaves on the affine Grassmannian[END_REF]). Let G be a reductive group over C and k be a finite field of coefficients. The ∞-category Sph loc.c (G, k) admits an E 3 -monoidal structure in Cat × ∞ , extending the symmetric monoidal convolution product of perverse sheaves. Theorem 1.2.5. The ∞-category Sph loc.c (G) is equivalent, as an E 3 -∞-category, to the E 2 -center of the derived ∞-category of finite-dimensional representations DRep fd ( Ǧ, k).
Both theorems were originally stated by Gaitsgory and Lurie in unpublished work. The second one follows essentially from the Derived Satake Theorem and work of Ben-Zvi, Francis, Nadler and Preygel [BZFN10], [BZNP17], and it implies the first one. In Chapter 2, we prove Theorem 1.2.4 independently, building the sought E_3-monoidal structure in an intrinsic way. This is in the same spirit as the Tannakian reconstruction explained above for the case of perverse sheaves, where the existence of a symmetric monoidal structure on the category is a part of the initial datum, and only a posteriori it is interpreted as the natural tensor product in a category of representations. To be precise, we do not work exactly with the usual small spherical category, but with a big version which is presentable (Theorem 2.3.6), and then we deduce the sought result as a corollary (Corollary 2.3.7). Remark 1.2.6. It is worth noticing that the heart of an E_3-monoidal stable ∞-category C^⊗ with a compatible t-structure is a symmetric monoidal category, whereas for C E_1-monoidal one only recovers a monoidal category. In other words, an E_3-monoidal structure for Sph^{loc.c}(G) is the "least level" of commutativity allowing to recover the full symmetric monoidal structure on Perv_{G_O}(Gr) in a purely formal way.
Remark 1.2.7. In recent yet unpublished work [CR], Campbell and Raskin proved that, up to a certain renormalization of both sides, one can put a natural factorizable structure on the RHS of the Derived Satake Equivalence, and that the equivalence can be promoted to a factorizable/E_3-monoidal equivalence.
Remark 1.2.8. It is worth stressing that the complex topology takes on a prominent role in our proofs. When taking constructible sheaves, we always look at the underlying complex-analytic topological space Gr an G of Gr G , with its complex stratification in Schubert cells. With this topology, Gr an G is homotopy equivalent to Ω 2 B(G an ) (although we do not use this in our paper). This is an equivalent way to derive the E 2 -algebra structure on the spherical category. However, by Remark 1.2.2, the category Sph loc.c is the same both from the algebraic or the complex-analytic point of view and therefore our result works in both settings.
An overview
The main guiding principle of the present dissertation (excluding Chapter 4) is the application of the homotopy theory of stratified spaces (as introduced in [Lur17, Appendix A]) to the study of the affine Grassmannian and the spherical Hecke category. This idea was not part of the project at the beginning, but arose in response to some issues in the description of the tensor structure on the spherical Hecke category; later, it took several other forms. We will now highlight which ones and how they arose.
Before doing this, let us just sketch the structure of the present dissertation. This thesis covers my first three works (two of them with coauthors). Chapter 2 is dedicated to the preprint [START_REF] Nocera | A model for E3 the fusion-convolution product of constructible sheaves on the affine Grassmannian[END_REF], which has been the start of many questions leading to the other works. Chapter 3 contains my work with Marco Volpe [NV21], which deals with problems of more topological nature and exploits phenomena related to the formalism of conically smooth spaces. Chapter 4 contains my work with Michele Pernice on Derived Azumaya Algebras [START_REF] Nocera | The derived Brauer map via homogeneous sheaves[END_REF]. I will explain the connection of this work to my PhD project in Section 1.3.3.
The affine Grassmannian as a Whitney stratified space
As mentioned in Remark 1.1.3, the stratification in Schubert cells of the affine Grassmannian satisfies the Whitney conditions. Now, a stratification can be seen as a continuous map s from the topological space Y to a poset P endowed with the Alexandrov topology (Section 2.A.1). A very important consequence of this property is recorded in the following: Proposition 1.3.1. Let (Y, P, s) be a stratified space satisfying the Whitney conditions. Then the stratification is conical in the sense of [START_REF] Lurie | Higher Algebra[END_REF]Definition A.5.5].
Proof. This is a consequence of [Mat70, Proposition 6.2], as proven in Section 3.2.1.
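As a toy illustration of the poset point of view (a minimal example, not needed in the sequel), consider the complex line stratified by the origin and its complement: the stratification is the map
$$s:\mathbb C\to P=\{0<1\},\qquad s(0)=0,\quad s(z)=1\ \text{for}\ z\neq 0,$$
which is continuous for the Alexandrov topology on P (whose only nontrivial open subset is the upward-closed set {1}, with preimage ℂ∖{0}). This stratification satisfies the Whitney conditions, and the conical chart at the origin exhibits a neighbourhood of 0 as the cone on a circle, in accordance with Proposition 1.3.1.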
Theorem 1.3.2 ([Lur17, Theorem A.3.9]). Let (Y, P, s) be a stratified space which is locally of singular shape (see [START_REF] Lurie | Higher Algebra[END_REF]Definition A.4.15]) and conically stratified, and suppose that P satisfies the ascending chain condition. Then there exists an ∞-category Exit(Y, P, s) and an equivalence of the form
Cons(Y, P, s; S) ≃ Fun(Exit(Y, P, s), S)
where S is the ∞-category of spaces.
Corollary 1.3.3. Let k be any ring. In the hypotheses of the previous theorem, there is an equivalence of k-linear stable ∞-categories
Cons(Y, P, s; k) ≃ Fun(Exit(Y, P, s), Mod k )
where Mod_k is the derived ∞-category of k-modules.
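A useful sanity check, in the simplest possible case: when P is a point (trivial stratification), sheaves constructible with respect to s are exactly the locally constant ones, Exit(Y, P, s) reduces to the fundamental ∞-groupoid of Y, and the exodromy equivalence specializes to the classical monodromy equivalence
$$\mathrm{Cons}(Y,*;k)\ \simeq\ \mathrm{Loc}(Y;k)\ \simeq\ \mathrm{Fun}(\Pi_\infty(Y),\mathrm{Mod}_k).$$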
Two easy but important consequences of this theorem are the following: Corollary 1.3.4. Let (Y, P, s) be a stratified space which is locally of singular shape and conically stratified.
Then its ∞-category of constructible sheaves is presentable.
Corollary 1.3.5. Suppose that (Y, P, s) → (Z, P, t) is a stratified homotopy equivalence of stratified spaces over the same poset P (i.e. a stratified map inducing equivalences at the level of exit-paths-categories). Then it induces an equivalence of ∞-categories Cons(Y, P, s; S) ≃ Cons(Z, P, t; S).
Another important consequence regards the functoriality and the symmetric monoidality properties of the association (Y, P, s) → Cons(Y, P, s; S). The exact same arguments work if we replace S by the derived ∞-category Mod k , with k a ring. We record all these consequences (proving those which have not been proven before) in Section 2.A.
These facts allow to study the affine Grassmannian and the spherical Hecke category Sph(G) from the viewpoint of stratified homotopy theory. Thanks to this, we may deduce properties about Sph(G) from homotopy-theoretic considerations about the affine Grassmannian. This is the main content of Chapter 2, and leads to prove Theorem 1.2.4 by arguments which are purely intrinsic to the topology of Gr G and its variants, and do not involve anything regarding the spectral side of Geometric Langlands.
The affine Grassmannian as a conically smooth stratified space
A much subtler version of Proposition 1.3.1 can be proven. Indeed, the notion of "conical chart" around a point can be promoted to something more rigid, which represents the analogue in the stratified setting of a smooth differential structure on a topological space. This notion is made rigorous in [AFT17, Section 3], where the Authors define what is called a conically smooth structure on a conically stratified space 2 . Roughly speaking, the idea is the same as that of a differential structure: a conically smooth structure is an equivalence class of atlases, and an atlas is a system of conical charts with a "smooth change of charts" property. A stratified space (Y, P, s) together with a conically smooth structure A is called a conically smooth space. The Authors conjecture that every stratified space satisfying the Whitney conditions (which is conical by Proposition 1.3.1) admits such a conically smooth structure. In Chapter 3 (joint work with Marco Volpe) we prove this conjecture: Theorem 1.3.6 (Theorem 3.2.7). Let (Y, P, s) be a stratified space satisfying the Whitney conditions. Then it admits a canonical conically smooth structure. This result is somehow useful per se, in that it adds a very natural class of examples to the newly introduce theory of conically smooth spaces. However, our middle-term plan together with Marco Volpe is to use this fact to prove additional properties of the affine Grassmannian. Indeed, by Theorem 1.3.6 Gr G admits a canonical conically smooth structure, and so does its Ran version Gr Ran (see Definition 2.1.3,Proposition 2.2.5). Now, [AFT17] introduce the notion of constructible bundle, which is a certain class of maps between conically smooth spaces representing the stratified analogue of the notion of smooth fiber bundle.
Our conjecture together with Marco Volpe is the following: every lifting problem for a constructible bundle against an inclusion of the form (H, R, u) × {0} ↪ (H, R, u) × I, where (H, R, u) is a compact stratified space and I is the unit interval stratified with a closed stratum at 0 and an open stratum at (0, 1], admits a filling.
As a corollary, we would obtain that the map strtop(Gr Ran ) → Ran(X an ) (see Construction 2.2.3) satisfies the exit homotopy lifting property. This is a "folklore" fact, whose proof we have not been able to locate anywhere. This is a crucial step in the proof of Theorem 1.2.4.
Towards an affine Grassmannian for surfaces
A long-term goal in the Geometric Langlands Program is to provide a statement of the Geometric Langlands Conjecture regarding a moduli space of sheaves (or stacks) over a surface S, which should replace Bun G (X) where X is a curve. Such a statement is expected to have connections to representationtheoretic objects such as double Hecke algebras or similar constructions.
A part of this program is to provide statements analogous to the Geometric or Derived Satake Theorem, again following the idea that they should represent the "specialization at a point" of the "global" statement. Therefore, one is naturally led to seek for an analogue of the affine Grassmannian Gr G at the level of surfaces.
For instance, one could fix a complex smooth projective surface S and a point s ∈ S(ℂ), and define
$$\mathrm{Gr}_G(S,s)(R)=\{\mathcal F\in\mathrm{Bun}_G(S_R),\ \alpha:\mathcal F|_{(S\setminus\{s\})\times\mathrm{Spec}\,R}\xrightarrow{\ \sim\ }\mathcal T_{G,(S\setminus\{s\})\times\mathrm{Spec}\,R}\}.$$
However, one can prove by means of the Hartogs theorem that this functor is "trivial", because every trivialization away from a closed subset of codimension at least 2 extends to S R , and therefore our moduli space is equivalent to G (seen as the automorphism group of T G,S ).
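To see the phenomenon in the simplest instance (an abelian illustration, at the level of ℂ-points only), take G = G_m: since s has codimension 2 in the smooth surface S, restriction induces an isomorphism
$$\mathrm{Pic}(S)\ \xrightarrow{\ \sim\ }\ \mathrm{Pic}(S\setminus\{s\}),$$
so a line bundle trivialized on S ∖ {s} is trivial on S, and the only datum left is an automorphism of the trivial bundle; this is the abelian shadow of the general statement above.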
Alternatively, one could fix an algebraic curve C in S and define
Gr G (S, C)(R) = {F ∈ Bun G (S R ), α : F| (S\C)×Spec R ∼ -→ T G,(S\C)×Spec R }.
This last definition presents an important difference with respect to the setting of curves when it comes to the convolution product. Indeed, one can figure out suitable versions of the convolution diagram, but the analogue of the map m in (1.1.2) is not ind-proper ([Kap00, Proposition 2.2.2]), and therefore we are not guaranteed that the pushforward (or the proper pushforward) along that map takes perverse sheaves to perverse sheaves. However, m_! should preserve constructible sheaves, provided that it carries a sufficiently strong stratified structure. Therefore, although a convolution product of perverse sheaves over Gr_G(S, x) or Gr_G(S, C) is probably not well-defined, there is a good chance that it is well-defined at the level of some ∞-category of (equivariant) constructible sheaves C.
Another possible version of an affine Grassmannian for surfaces is the following (the additional "g" stays for "gerbes"). Let S be a smooth complex surface and s ∈ S(C). We define
$$\mathrm{Grg}_{G,S,s}(R)=\{\mathcal G\ \text{a}\ G\text{-gerbe over}\ S_R,\ \alpha:\mathcal G|_{(S\setminus\{s\})\times\mathrm{Spec}\,R}\xrightarrow{\ \sim\ }BG\times(S\setminus\{s\})\times\mathrm{Spec}\,R\}.$$
This definition circulated in the mathematical community years ago, and I am currently studying the properties of such an object in an ongoing project. This is motivated by my interest in the theory of gerbes. In the case G = GL_1 = G_m, the following result of Toën holds: Theorem 1.3.9 ([Toë10], see also Section 1.3.4). If S is a quasicompact quasiseparated scheme, then there is an isomorphism of abelian groups between H^2(S, G_m) × H^1(S, Z) and the group dBr_Az of derived Azumaya algebras over S up to Morita equivalence. In particular, if S is normal, there is an isomorphism
H 2 (S, G m ) ≃ dBr Az .
This correspondence is useful in proving properties of Grg_{G,S,s}, although Grg_{G_m,S,s} is itself a fairly trivial object. Indeed, Jacob Lurie [Lur] suggested to us a way to prove that a "Beauville-Laszlo theorem for G-gerbes" is true (and thus, that Grg_{G,S,s} is independent of S and s) that reduces to the case when G is abelian and then uses a reformulation of Toën's result in terms of prestable O_S-linear presentable ∞-categories (we will talk about this reformulation in Section 1.3.4).
Motivated by such applications, Michele Pernice and I studied some further properties of the correspondence established in Theorem 1.3.9. This is the content of Chapter 4, which we summarize in Section 1.3.4.
A final remark: we suspect that techniques analogous to the ones used for the construction of the E_3-structure on Sph(G) can be used to build an E_5-structure on a suitable category of equivariant constructible sheaves/D-modules on Grg_G. A naïve topological motivation comes from the fact that we can define a "topological affine Grassmannian of G-gerbes on a surface" as
$$\mathrm{Map}(S^3,\mathrm{Aut}B(G^{\mathrm{an}}))/\mathrm{Map}(D^4,\mathrm{Aut}B(G^{\mathrm{an}}))$$
where S 3 is the real 3-sphere, D 4 is the real 4-disk, G an is the complex analytic topological group associated to G, and AutB(G an ) is the 2-group of automorphisms of the trivial topological G an -gerbe. Now this space is homotopy equivalent to
Ω 3 AutB(G an ) ∼ Ω 4 BAutB(G an ),
where BAutB(G^an) is the higher classifying space of AutB(G^an), and on the latter space we have an evident E_4-structure which is the exact analogue of the E_2-structure on Ω^2 B(G^an) ∼ Gr^an_G. Note that the relationship between Grg^an_G and this topological counterpart is at the moment unclear to us (we do not know an immediate argument to deduce that they are equivalent; the analogous statement for Gr_G is already nontrivial), and therefore the existence of the E_4-structure on Ω^4 BAut(G^an) remains just a heuristic motivation.
The derived Brauer map via twisted sheaves
Let us fix a quasicompact quasiseparated scheme X over some field k of arbitrary characteristic.
In 1966, Grothendieck [START_REF] Grothendieck | Le groupe de Brauer : I. Algèbres d'Azumaya et interprétations diverses[END_REF] introduced the notion of Azumaya algebra over X: this is an étale sheaf of algebras which is locally of the form End(E), the sheaf of endomorphisms of a vector bundle E over X. This is indeed a notion of "local triviality" in the sense of Morita theory: two sheaves of algebras A, A ′ are said to be Morita equivalent if the categories LMod A = {F quasicoherent sheaf over X together with a left action of A} and its counterpart LMod A ′ are (abstractly) equivalent; one can prove that LMod End(E) is equivalent to QCoh(X) via the functor
$$M\ \mapsto\ E^\vee\otimes_{\mathrm{End}(E)}M.$$
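For instance, in the most classical case E = O_X^{⊕n}, so that End(E) = Mat_n(O_X), the functor above reads
$$\mathrm{LMod}_{\mathrm{Mat}_n(\mathcal O_X)}\ \xrightarrow{\ \sim\ }\ \mathrm{QCoh}(X),\qquad M\ \mapsto\ (\mathcal O_X^{\oplus n})^\vee\otimes_{\mathrm{Mat}_n(\mathcal O_X)}M,$$
so Mat_n(O_X) is Morita trivial (Morita equivalent to O_X) even though it is not isomorphic to O_X for n > 1.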
The classical Brauer group Br Az (X) of X is the set of Azumaya algebras up to Morita equivalence, with the operation of tensor product of sheaves of algebras. Grothendieck showed that this group injects into H 2 (X, G m ) by using cohomological arguments: essentially, he used the fact that a vector bundle corresponds to a GL n -torsor for some n, and that there exists a short exact sequence of groups
1 → G m → GL n → PGL n → 1.
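Concretely, this short exact sequence induces in (non-abelian) étale cohomology a sequence
$$H^1(X,\mathrm{GL}_n)\ \longrightarrow\ H^1(X,\mathrm{PGL}_n)\ \xrightarrow{\ \delta\ }\ H^2(X,\mathbb G_m),$$
and the class in H^2(X, G_m) of an Azumaya algebra A of rank n^2 is δ applied to the class of the PGL_n-torsor of local isomorphisms A ≃ End(O^{⊕n}); this class vanishes precisely when that torsor lifts to a GL_n-torsor, i.e. when A ≃ End(E) for a vector bundle E.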
The image of Br_Az(X) inside H^2(X, G_m) is contained in the torsion subgroup, which is often called the cohomological Brauer group of X. One of the developments of Grothendieck's approach to the study of the Brauer group is due to Bertrand Toën and his use of derived algebraic geometry in [Toë10]. There, he introduced the notion of derived Azumaya algebra as a natural generalization of the usual notion of Azumaya algebra. Derived Azumaya algebras over X form a dg-category Deraz_X. There is a functor Deraz_X → Dg_c(X), this latter being (in Toën's notation) the dg-category of presentable stable O_X-linear dg-categories which are compactly generated, with, as morphisms, functors preserving all colimits. The functor is defined by A → LMod_A = {quasicoherent sheaves on X with a left action of A} where all terms have now to be understood in a derived sense. One can prove that this functor sends the tensor product of sheaves of algebras to the tensor product of presentable O_X-linear dg-categories, whose unit is QCoh(X). Building on classical Morita theory, Toën defined two derived Azumaya algebras to be Morita equivalent if the dg-categories of left modules are (abstractly) equivalent. This agrees with the fact mentioned above that LMod_{End(E)} ≃ QCoh(X) for any E ∈ Vect_n(X).
In [Toë10, Proposition 1.5], Toën characterized the objects in the essential image of the functor A → LMod_A as the compactly generated presentable O_X-linear dg-categories which are invertible with respect to the tensor product, i.e. those M for which there exists another presentable compactly generated O_X-linear dg-category M^∨ and equivalences $1\xrightarrow{\ \sim\ }M\otimes M^\vee$ and $M^\vee\otimes M\xrightarrow{\ \sim\ }1$.
The proof of this characterization goes roughly as follows: given a compactly generated invertible dg-category M , one can always suppose that M is generated by some single compact generator E M . Now, compactly generated presentable O X -linear dg-categories satisfy an important descent property [Toë10, Theorem 3.7]; from this and from [Toë10, Proposition 3.6] one can deduce that the End M (E M ) has a natural structure of a quasicoherent sheaf of O X -algebras A, and one can prove that M ≃ LMod A .
Both Antieau-Gepner [START_REF] Antieau | Brauer groups and étale cohomology in derived algebraic geometry[END_REF] and Lurie [START_REF] Lurie | Spectral Algebraic Geometry[END_REF]Chapter 11] resumed Toën's work, using the language of ∞-categories in replacement of that of dg-categories. Lurie also generalized the notion of Azumaya algebra and Brauer group to spectral algebraic spaces, see [START_REF] Lurie | Spectral Algebraic Geometry[END_REF]Section 11.5.3]. He considers the ∞groupoid of compactly generated presentable O X -linear ∞-categories which are invertible with respect to the Lurie tensor product ⊗, and calls it the extended Brauer space Br † X . This terminology is motivated by the fact that the set π 0 Br † X has a natural abelian group structure, and by a result of Toën is isomorphic to H 2 ét (X, G m ) × H 1 ét (X, Z): in particular, it contains the usual cohomological Brauer group of X. At the categorical level, Lurie proves that there is an equivalence of ∞-groupoids between Br † (X) and
$\mathrm{Map}_{\mathrm{St}_k}(X,B^2\mathbb G_m\times B\mathbb Z)$ (where $\mathrm{St}_k$ is the ∞-category of stacks over the base field k). In particular, Br^†(X) is categorically equivalent to a 2-groupoid.
We can summarize the situation in the following diagram:
$$(\mathrm{Deraz}_X[\mathrm{Morita}^{-1}])^{\simeq}\ \xrightarrow{\ \sim\ }\ \mathrm{Br}^\dagger(X)\ \xrightarrow{\ \sim\ }\ \mathrm{Map}_{\mathrm{St}_k}(X,B^2\mathbb G_m\times B\mathbb Z)\tag{1.3.1}$$
where the left term is the maximal ∞-groupoid in the localization of the ∞-category of derived Azumaya algebras at Morita equivalences. At the level of dg-categories, this chain of equivalences is proven in [Toë10, Corollary 3.8]. At the level of ∞-categories, this is the combination of [Lur18, Proposition 11.5.3.10] and [Lur18, 11.5.5.4]. Note that, while in the classical case we had an injection Br_Az(X) → H^2(X, G_m), in the derived setting one has a surjection
$$\mathrm{dBr}^\dagger_{\mathrm{Az}}(X):=\pi_0\big((\mathrm{Deraz}_X[\mathrm{Morita}^{-1}])^{\simeq}\big)\ \twoheadrightarrow\ H^2(X,\mathbb G_m).$$
If H 1 (X, Z) = 0 (e.g. when X is a normal scheme), then the surjection becomes an isomorphism of abelian groups.
While the first equivalence in (1.3.1) is completely explicit in the works of Toën and Lurie, the second one leaves a couple of questions open:
• since the space Map(X, B 2 G m × BZ) is the space of pairs (G, P ), where G is a G m -gerbe over X and P is a Z-torsor over X, it is natural to ask what are the gerbe and the torsor naturally associated to an element of Br † (X) according to the above equivalence. This is not explicit in the proofs of Toën and Lurie, which never mention the words "gerbe" and "torsor", but rather computes the homotopy group sheaves of a sheaf of spaces Br † X over X whose global sections are Br † (X).
• conversely: given a pair (G, P ), what is the ∞-category associated to it along the above equivalence?
The goal of Chapter 4 is to give a partial answer to the two questions above. The reason for the word "partial" is that we will neglect the part of the discussion regarding torsors, postponing it to a forthcoming work, and focus only on the relationship between linear ∞-categories/derived Azumaya algebras and G_m-gerbes. Our precise result is stated in Theorem 4.

The content of this chapter is the paper [Noc20]. The aim is to provide an extension of the convolution product of equivariant perverse sheaves on the affine Grassmannian, whose definition will be recalled in the first subsection of this Introduction, to the ∞-category of G_O-equivariant constructible sheaves on the affine Grassmannian. We will endow this extension with an E_3-algebra structure in ∞-categories, which is the avatar of Mirkovic and Vilonen's commutativity constraint [MV07, Section 5]. The final result is Theorem 2.3.6.

Theorem 2.0.1 (Theorem 2.3.6). If G is a reductive complex group and k is a finite ring of coefficients, there is an object
$$A\in\mathrm{Alg}_{E_3}(\mathrm{Cat}^{\times}_{\infty,k})$$
describing an associative and braided product law on the k-linear ∞-category Cons^fd_{G_O}(Gr_G, k) of G_O-equivariant constructible sheaves over the affine Grassmannian (see Section 1.2.1). The restriction of this product law to the abelian category of equivariant perverse sheaves coincides up to shifts with the classical (commutative) convolution product of perverse sheaves [MV07].
From now on, we will fix the reductive group G and denote the affine Grassmannian associated to G simply by Gr.
The theorem will be proven by steps.
• in Section 2.1, we encode the convolution diagram Remark 1.1.8 in a semisimplicial 2-Segal stack Gr x,• , thus providing an associative algebra object in Corr(StrStk C ) which, after quotienting by the analogue of the G O -action, describes the span given in Remark 1.1.8. This is done in order to express the convolution product, formally, as push-pull of constructible sheaves along this diagram.
• Let Ran(X) be the algebraic Ran space of X, i.e. the prestack parametrising finite sets of points in X (see Definition 2.1.1). The affine Grassmannian Gr admits a variant Gr Ran , called the Ran Grassmannian (see Definition 2.1.3), living over Ran(X), and parametrizing G-torsors on X together with a trivialization away from a finite system of points. This object is a reformulation of the classical Beilinson-Drinfeld Grassmannian, and just like that it allows to define the so-called fusion product of perverse sheaves over the affine Grassmannian (see [MV07, Section 5] for the definition of the fusion product via Beilinson-Drinfeld Grassmannian). This fusion product is built on the fact that the Beilinson-Drinfeld Grassmannian satisfies the so-called factorization property (see [START_REF] Zhu | An introduction to affine Grassmannians and to the geometric Satake equivalence[END_REF]Section 3.1]). This property is formulated in a very convenient way for Gr Ran , taking advantage of the features of the Ran space.
Again in Section 2. • Given a finite-type (stratified) scheme Y over the complex numbers, one can consider its analytification Y an (see [START_REF] Reynaud | Géometrie algébrique et géometrie analytique[END_REF]). This is a complex analytic (stratified) space with an underlying (stratified) topological space which we denote by strtop(Y ). This procedure can be extended to a functor
strtop : PSh(StrSch C ) → Sh(StrTop, S)
, where S is the ∞-category of spaces. In Section 2.2, we consider the analytification of all the constructions performed in Section 2.1. The goal is to exploit the topological and homotopy-theoretic properties of the analytic version of Ran(X), which are reflected all the way up to Gr Ran and to the definition of the fusion product. In particular, these properties allow to apply [Lur17, Theorem 5.5. 4.10] and realize the fusion product as a consequence of the very existence of the map Gr an Ran,k → Ran(X) an for each k. Combining this with the simplicial structure of Gr Ran,• , we obtain that strtop(G O \Gr) carries a natural E 3 -algebra structure in Corr(Sh(StrTop, S)) (by taking quotients in Theorem 2.2.30).
• In Section 2.A we show, building on [Lur17, Appendix A] and many other contributions, that there is a natural "equivariant k-valued constructible sheaves" functor Cons from a certain category of stratified topological spaces with action of a group towards the ∞-category of presentable klinear ∞-categories, which is also symmetric monoidal (Corollary 2.A.11). A very important feature of this functor is that it sends stratified homotopy equivalences to equivalences of presentable ∞-categories. Also, it satiesfies the Beck-Chevalley properties, that allow to take push-pull along correspondences of stratified spaces in a functorial way. In our case, the needed formal properties are contained essentially in the fact that the affine Grassmannian is a stratified ind-scheme and its strata are exactly the orbits with respect to the action of G O . Thanks to all these properties, in Section 2.3 we formally obtain that the category Cons G O (Gr) admits a structure of E 3 -algebra in presentable k-linear ∞-categories (with respect to the Lurie tensor product) inherited from the structure built on strtop(G O \Gr) in the previous point. As a corollary, Cons fd G O (Gr) has an induced E 3 -algebra structure in small k-linear categories (with respect to the usual Cartesian product). Some of the techniques used in the present chapter are already "folklore" in the mathematical community; for example, application of Lurie's [Lur17, Theorem 5.5. 4.10] to the affine Grassmannian appears also in [START_REF] Hahn | Multiplicative structure in the stable splitting of ΩSL n (C)[END_REF], though in that paper the Authors are interested in the (filtered) topological structure of the affine Grassmannian and do not take constructible sheaves. Up to our knowledge, the formalism of constructible sheaves via exit paths and exodromy has never been applied to the study of the affine Grassmannian and the spherical Hecke category. Here we use it in order to take into account the homotopy invariance of the constructible sheaves functor, which is strictly necessary for the application of Lurie's [Lur17, Theorem 5.5.4.10].
Convolution over the Ran space
The aim of this section is to expand the construction of the Ran Grassmannian defined for instance in [Zhu16, Definition 3.3.2] in a way that allows us to define a convolution product of constructible sheaves in the Ran setting.
The presheaves Gr Ran,k
The Ran Grassmannian
Let us recall the definition of the basic objects that come into play. Let Alg_ℂ be the category of (discrete) complex algebras and X a smooth projective curve over ℂ. Definition 2.1.1. The algebraic Ran space of X is the presheaf
$$\mathrm{Ran}(X):\mathrm{Alg}_{\mathbb C}\to\mathrm{Set},\qquad R\mapsto\{\text{finite subsets of }X(R)\}.$$
Remark 2.1.2. Let Fin_surj be the category of finite sets with surjective maps between them. Then we have an equivalence of presheaves
$$\mathrm{Ran}(X)\ \simeq\ \operatorname*{colim}_{I\in\mathrm{Fin}_{\mathrm{surj}}}X^I,$$
the colimit being taken in presheaves.
Definition 2.1.3. The Ran Grassmannian of X is the functor
$$\mathrm{Gr}_{\mathrm{Ran}}:\mathrm{Alg}_{\mathbb C}\to\mathrm{Grpd},\qquad R\mapsto\{S\in\mathrm{Ran}(X)(R),\ \mathcal F\in\mathrm{Bun}_G(X_R),\ \alpha:\mathcal F|_{X_R\setminus\Gamma_S}\xrightarrow{\ \sim\ }\mathcal T_G|_{X_R\setminus\Gamma_S}\},$$
where T G is the trivial G-bundle on X R , Γ S is the union of the graphs of the s i inside X R , s i ∈ S, and α is a trivialization, i.e. an isomorphism of principal G-bundles with the trivial G-bundle. This admits a natural forgetting map towards Ran(X).
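As a quick illustration: for I = {1, 2}, the canonical map X^2 → Ran(X) sends
$$(x_1,x_2)\in X^2(R)\ \longmapsto\ \{x_1,x_2\}\in\mathrm{Ran}(X)(R),$$
and the colimit over surjections of finite sets identifies (x_1, x_2) with (x_2, x_1) and, along the diagonal, (x, x) with the singleton {x}; a point of Gr_Ran lying over {x_1, x_2} is then a G-bundle on X_R trivialized away from the union of the two graphs.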
Recall also the global affine Grassmannian (see [Zhu16])
$$\mathrm{Gr}_X=\widehat X\times^{\mathrm{Aut}(\mathrm{Spec}\,\mathbb C[\![t]\!])}\mathrm{Gr}_G,$$
where $\widehat X$ is the space of formal parameters defined in loc. cit. This parametrizes
$$\mathrm{Gr}_X(R)=\{x\in X(R),\ \mathcal F\in\mathrm{Bun}_G(X_R),\ \alpha:\mathcal F|_{X_R\setminus\Gamma_x}\xrightarrow{\ \sim\ }\mathcal T_G|_{X_R\setminus\Gamma_x}\}.$$
For each finite set I, there is a multiple version
Gr X I (R) = {(x 1 , . . . , x |I| ) ∈ X(R) I , F ∈ Bun G (X R ), α : F| X R \Γx 1 ∪•••∪Γx |I| ∼ -→ T G | X R \Γx 1 ∪•••∪Γx |I| }.
Then we have Gr Ran = colim I∈Fin surj Gr X I .
Definition 2.1.5. We can define G K,X I , G O,X I in the same way. Let also G K,Ran be the functor
R → {S ∈ Ran(X)(R), F ∈ Bun G (X R ), α : F| X R \Γ S ∼ -→ T G | X R \Γ S , µ : F| (X R ) Γ S ∼ -→ T G | (X R ) Γ S } ≃ colim I G K;X I .
Let G O,Ran be the functor
R → {S ∈ Ran(X)(R), g ∈ Aut (X R ) Γ S (T G )} ≃ colim I G O;X I .
By means of [Zhu16, Proposition 3.1.9] one has that Gr Ran ≃ G K,Ran /G O,Ran in the sense of a quotient in PSh(Stk C ), i.e.
$$\mathrm{Gr}_{\mathrm{Ran}}\ \simeq\ \operatorname*{colim}_I\ G_{\mathcal K,X^I}/G_{\mathcal O,X^I}$$
where each term is the étale stack quotient in the category of stacks (recall that G K is an ind-scheme and G O is a group scheme) and is equivalent to Gr X I .
Definition 2.1.6. We define Gr_{Ran,k} to be the functor Alg_ℂ → Grpd
$$\underbrace{G_{\mathcal K,\mathrm{Ran}}\times_{\mathrm{Ran}(X)}\dots\times_{\mathrm{Ran}(X)}G_{\mathcal K,\mathrm{Ran}}}_{k-1\ \text{times}}\times_{\mathrm{Ran}(X)}\mathrm{Gr}_{\mathrm{Ran}},$$
that is
$$R\mapsto\{S\in\mathrm{Ran}(X)(R),\ \mathcal F_i\in\mathrm{Bun}_G(X_R),\ \alpha_i\ \text{trivialization of}\ \mathcal F_i\ \text{outside}\ \Gamma_S,\ i=1,\dots,k,\ \mu_i\ \text{trivialization of}\ \mathcal F_i\ \text{on the formal neighborhood of}\ \Gamma_S,\ i=1,\dots,k-1\}.$$
We call r k : Gr Ran,k → Ran(X) the natural forgetting map. Note that Gr Ran,0 = Ran(X), Gr Ran,1 = Gr Ran .
A priori, Gr Ran,k is groupoid-valued, because if R is fixed the F i 's may admit nontrivial automorphisms that preserve the datum of the α i 's and the µ j 's. Actually, this is not the case, just like for the classical affine Grassmannian which is ind-representable: It is worthwhile to remark that the map Gr Ran,k → Ran(X) is ind-representable, although Gr Ran,k itself is not. Definition 2.1.8. Let x ∈ X(C) be a closed point of X. There is a natural map {x} → X → Ran(X), represented by the constant functor R → {x} ∈ Set. Let us denote Gr x,k = Gr Ran,k × Ran(X) {x}. Proposition 2.1.9. Gr k,x is independent from the choice of X and x, and
Gr 1,x ≃ Gr G Gr 2,x ≃ G K × Gr G . Proof. Note first that Gr k,x (R) = (F i ∈ Bun G (X R ), α i : F i | X R \({x}×Spec R) ∼ -→ T G | X R \({x}×Spec R) , µ i : F i | (X R ) ({x}×Spec R) ∼ -→ T G | (X R ) ({x}×Spec R) ) i=1,...,k .
By the Formal Gluing Theorem [START_REF] Hennion | Formal gluing along non-linear flags[END_REF] this can be rewritten as
Gr k,x (R) = (F i ∈ Bun G ( (X R ) {x}×Spec R ), α i : F| (X R ) {x}×Spec R ∼ -→ T G | (X R ) {x}×Spec R , µ i : F| (X R ) ({x}×Spec R) ∼ -→ T G | (X R ) ({x}×Spec R) ) i=1,...,k ≃ ≃ (F i ∈ Bun G ( X {x} × Spec R), α i : F| X{x} ×Spec R ∼ -→ T G | X{x} ×Spec R , µ i : F| X {x} ×Spec R ∼ -→ T G | X {x} ×Spec R ) i=1,...,k ,
but X {x} is independent from the choice of the point x, being (noncanonically) isomorphic to Spec C t (and the same for Xx ). The rest of the statement is clear from the definitions.
The 2-Segal structure
Face maps
We now establish a semisimplicial structure on the collection of the Gr Ran,k .
Construction 2.1.10.
Let ∂ i be the face map from [k -1] to [k] omitting i. We define the corresponding map δ i : Gr Ran,k → Gr Ran,k-1 as follows. A tuple (S, F 1 , α 1 , µ 1 , . . . , F k , α k ) is sent to a tuple (S, F 1 , α 1 , µ 1 , . . . , F i-1 , α i-1 , µ i-1 , Fgl(F i , F i+1 , α i , α i+1 , µ i ), µ ′ i , . . . , F k , α k ),
where:
• Fgl(F i , F i+1 , α i , α i+1 , µ i ) is the pair (F ′ i , α ′ i )
formed as follows: the Formal Gluing Theorem ( [START_REF] Hennion | Formal gluing along non-linear flags[END_REF]) allows us to glue the sheaves
F i | X R \Γ S and F i+1 | (X R ) Γ S along the isomorphism µ -1 i | (X R ) Γ S • α i+1 | (X R ) Γ S
. This is our F ′ i . Also, F ′ i inherits a trivialization over (X R ) Γ S described by
α i | (X R ) Γ S µ -1 i | (X R ) Γ S α i+1 | (X R ) Γ S ,
which is the second datum.
• µ′_i coincides with µ_{i+1} via the canonical isomorphism between the glued sheaf and F_{i+1} over the formal neighbourhood of Γ_S.
The maps δ_i just defined satisfy the simplicial face identities, so that Gr_{Ran,•} is a semisimplicial object.
Proof. Let k be fixed. We check the face identities δ_i δ_j = δ_{j-1} δ_i for i < j. The essential nontrivial case is when k = 3, i = 0, j = 1 or i = 1, j = 2 or i = 2, j = 3. Otherwise the verifications are trivial since, if i < j - 1, then the two gluing processes do not interfere with one another. The cases i = 0, j = 1 and i = 2, j = 3 are very simple, because there is only one gluing and one forgetting (F_1 or F_3 respectively). In the remaining case, we must compare
$$\mathcal F_{1,23}=\mathrm{Fgl}(\mathcal F_1,\mathrm{Fgl}(\mathcal F_2,\mathcal F_3))\quad\text{with}\quad\mathcal F_{12,3}=\mathrm{Fgl}(\mathrm{Fgl}(\mathcal F_1,\mathcal F_2),\mathcal F_3).$$
(we omit the S, α i , µ i from the notation for short). We have:
• F 1,23 | X R \Γ S ≃ F 1 | X R \Γ S ≃ F 12 | X R \Γ S ≃ F 12,3 | X R \Γ S • F 1,23 | (X R ) Γ S ≃ F 23 | (X R ) Γ S ≃ F 2 | (X R ) Γ S ≃ F 12 | (X R ) Γ S ≃ F 12,3 | (X R ) Γ S .
This tells us that the two sheaves are the same, and from this it is easy to deduce that the same property holds for the trivializations.
We thus have a semisimplicial structure on Gr Ran,• , together with maps r k : Gr Ran,k → Ran(X) which commute with the face maps by construction.
Verification of the 2-Segal property
The crucial property of the semisimplicial presheaf Gr_{Ran,•} is the following: it satisfies the 2-Segal property in the sense of [DK19].
Proof. Case "0, l ≤ k". With the natural notations appearing in [DK19, Proposition 2.3.2], there is a map $\mathrm{Gr}_{\mathrm{Ran},\{0,1,\dots,l\}}\times_{\mathrm{Gr}_{\mathrm{Ran},\{0,l\}}}\mathrm{Gr}_{\mathrm{Ran},\{0,l,l+1,\dots,k\}}\to\mathrm{Gr}_{\mathrm{Ran},\{0,\dots,k\}}=\mathrm{Gr}_{\mathrm{Ran},k}$ inverse to the natural projection. The map sends
(S, F 1 , α 1 , µ 1 , . . . , F l , α l , F ′ l , α ′ l , µ ′ l , ξ : (F ′ l , α ′ l ) ∼ -→ Fgl({F i , α i , µ j } i=1,...,l,j=1,...,l-1 ), F ′ l+1 , α ′ l+1 , µ ′ l+1 , . . . , F ′ k α ′ k ) → (F 1 , α 1 , µ 1 , . . . , F l , α l , µ ′′ l , F ′ l+1 , α ′ l+1 , µ ′ l+1 , . . . , F ′ k , α ′ k ) where µ ′′ l,l+1 is the trivialization of F l on the formal neighbourhood of x defined as F l | (X R ) Γ S ∼ -→ Fgl({F i , α i , µ j } i=1,...,l,j=1,...,l-1 )| (X R ) Γ S ξ -1 --→ F ′ l | (X R ) Γ S µ ′ l -→ T G (X R ) Γ S .
This map is indeed inverse to the natural map arising from the universal property of the fibered product, thus establishing the 2-Segal property in the case "0, l". The case "l, k" can be tackled in a similar way.
Notation 2.1.13. Given a category or an ∞-category C, we denote by 2-Seg ss (C) the (∞-)category of 2-Segal semisimplicial objects in C.
Action of the arc group in the Ran setting
We now introduce analogues of the "arc group" G_O to our global context, namely group functors over Ran(X) denoted by Arc_{Ran,k}, each one acting on Gr_{Ran,k} over Ran(X).
The Ran version
Recall the functor G_{O,Ran} from Definition 2.1.5. This functor is a group functor over Ran(X) under the law
$$(S, g) \cdot (S, h) = (S, g \cdot h).$$
There is a semisimplicial group object of (PSh C ) /Ran(X) assigning
$$[k]\ \mapsto\ \underbrace{G_{\mathcal O,\mathrm{Ran}}\times_{\mathrm{Ran}(X)}\dots\times_{\mathrm{Ran}(X)}G_{\mathcal O,\mathrm{Ran}}}_{k\ \text{times}}.$$
The face maps are described by
$$\delta_i:(S,g_1,\dots,g_k)\ \mapsto\ (S,g_1,\dots,\underbrace{g_ig_{i+1}}_{i\text{-th slot}},\dots,g_k).$$
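Unwinding the formula in low degrees (a small illustration), for k = 2 and k = 3 the inner face maps read
$$\delta_1(S,g_1,g_2)=(S,g_1g_2),\qquad\delta_1(S,g_1,g_2,g_3)=(S,g_1g_2,g_3),\qquad\delta_2(S,g_1,g_2,g_3)=(S,g_1,g_2g_3),$$
that is, over each point of Ran(X) one sees the multiplication pattern familiar from the nerve (bar construction) of the group G_{O,Ran}.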
Definition 2.1.15. We denote this semisimplicial group functor over Ran(X) by Arc_{Ran,•}. Note that Arc_{Ran,0} = Ran(X), and Arc_{Ran,1} = G_{O,Ran}. It is also useful to define the version of Arc_{Ran,k} over X^I:
$$\mathrm{Arc}_{X^I,k}:=\underbrace{G_{\mathcal O,\mathrm{Ran}}\times_{\mathrm{Ran}(X)}\dots\times_{\mathrm{Ran}(X)}G_{\mathcal O,\mathrm{Ran}}}_{k\ \text{times}}.$$
As usual, the "Ran" version is the colimit over I ∈ Fin surj of the "X I " versions.
The action on Gr Ran,•
The first observation now is that Arc Ran,k-1 acts on Gr Ran,k on the left over Ran(X) in the following way:
(S, g 2 , . . . , g k ).(S, F 1 , α 1 , µ 1 , . . . , F k , α k ) = = (S, F 1 , α 1 , µ 1 g -1 2 , F 2 , g 2 α 2 , µ 2 g -1 3 , . . . , F k , g k α k )
where g i α i is the modification of α i by g i on (X R ) Γ S which, by the usual "local-global" reformulation, induces a new trivialization of F i outside S.
We denote by:
• Φ k-1,k this action of Arc Ran,k-1 on Gr Ran,k .
• Ξ 1,k the left action of Arc Ran,1 on Gr Ran,k altering the first trivialization α 1 .
• Φ k,k the left action of Arc Ran,k on Gr Ran,k obtained as combination of Ξ 1,k and Φ k-1,k .
Definition 2.1.17. Define Conv_{Ran,k}, the Ran version of the convolution Grassmannian, as the quotient of Gr_{Ran,k} by the left action Φ_{k-1,k} described above. That is,
$$\mathrm{Conv}_{\mathrm{Ran},k}=\operatorname*{colim}_I\ \mathrm{Arc}_{X^I,k-1}\backslash\mathrm{Gr}_{X^I,k},$$
where the terms of the colimits are quotient stacks with respect to the étale topology and Arc X I ,k-1 acts through the pullback to X I of the action Φ k-1,k .
Remark 2.1.18. ConvGr k is a presheaf which can alternatively be described as follows:
R → {S ⊂ X(R), F 1 , G 2 , . . . , G k ∈ Bun G (X R ), α 1 : F 1 | X R \Γ S ∼ -→ T G | X R \Γ S , η 2 : G 2 | X R \Γ S ∼ -→ F 1 | X R \Γ S , . . . , η k : G k | X R \Γ S ∼ -→ G k-1 | X R \Γ S }.
This is proven in [Rei12, Proposition III.1.10, (1)], because if we take m = k then Gr p | ∆ in loc. cit. is the pullback along X n → Ran(X) of the functor described in Remark 2.1.18, and Conv m n is the pullback along X n → Ran(X) of our Conv Ran,m .
The convolution product over Ran(X)
Connection with the Mirkovic-Vilonen convolution product
Consider the action of Arc_{Ran,k} on ConvGr_k induced by Ξ_{1,k}, which we still call Ξ_{1,k} by abuse of notation, and consider also Φ_{1,1} as an action of Arc_{Ran,1} on Gr_Ran. Note that Φ^{×k}_{1,1} is an action of Arc_{Ran,1}^{×_{Ran(X)}k} on Gr_{Ran}^{×_{Ran(X)}k}. Consider the diagram
$$\mathrm{Gr}_{\mathrm{Ran}}^{\times_{\mathrm{Ran}(X)}k}\ \xleftarrow{\ p\ }\ \mathrm{Gr}_{\mathrm{Ran},k}\ \xrightarrow{\ q\ }\ \mathrm{ConvGr}_k\ \xrightarrow{\ m\ }\ \mathrm{Gr}_{\mathrm{Ran}}$$
where:
• p is the map that forgets all the trivializations µ i .
• q is the projection to the quotient with respect to Φ k-1,k , alternatively described as follows: we keep F 1 and α 1 intact, and define G h by induction as the formal gluing of G h-1 (or
F 1 if h = 1) and F h along µ h-1 and α h : indeed, µ h-1 is a trivialization of G h-1 over the formal neighbourhood of Γ S via the canonical isomorphism between G h-1 and F h-1 on that formal neighbourhood. The isomorphism η h : G h | X R \Γ S ∼ -→ G h-1 | X R \Γ S
is provided canonically by the formal gluing procedure.
• m is the map sending
(S, F 1 , α 1 , G 2 , η 2 , . . . , G k , η k ) → (S, G k , α 1 • η 2 • • • • • η k ).
Remark 2.1.19. Consider the special case k = 2. Note first of all that, since this diagram lives over Ran(X), we can take its fiber at {x} ∈ Ran(X). Under the identifications of Proposition 1.1.9, we obtain the diagram (1.1.2).
We can consider the diagram
$$\begin{array}{ccc}\Phi_{k,k}\backslash\mathrm{Gr}_{\mathrm{Ran},k} & \xrightarrow{\ \sim\ } & \Xi_{1,k}\backslash\mathrm{ConvGr}_k\\ \big\downarrow & & \big\downarrow\\ \Phi^{\times k}_{1,1}\backslash\mathrm{Gr}^{\times_{\mathrm{Ran}(X)}k}_{\mathrm{Ran},1} & & \Phi_{1,1}\backslash\mathrm{Gr}_{\mathrm{Ran}}.\end{array}\tag{2.1.2}$$
The horizontal map is an equivalence since it exhibits its target as the quotient $\Xi_{1,k}\Phi_{k-1,k}\backslash\mathrm{Gr}_{\mathrm{Ran},k}\simeq\Phi_{k,k}\backslash\mathrm{Gr}_{\mathrm{Ran},k}$. For k = 2 one obtains:
$$\begin{array}{ccc}\Phi_{2,2}\backslash\mathrm{Gr}_{\mathrm{Ran},2} & \xrightarrow{\ \sim\ } & \Xi_{1,2}\backslash\mathrm{ConvGr}_2\\ \big\downarrow & & \big\downarrow\\ \Phi^{\times 2}_{1,1}\backslash\mathrm{Gr}^{\times_{\mathrm{Ran}(X)}2}_{\mathrm{Ran}} & & \Phi_{1,1}\backslash\mathrm{Gr}_{\mathrm{Ran}}.\end{array}$$
Remark 2.1.20. Take again the fiber of this diagram at the point {x} ∈ Ran(X). This results in a diagram of the form
$$\begin{array}{ccc}\Phi_{x,2,2}\backslash\mathrm{Gr}_{x,2} & \xrightarrow{\ \sim\ } & \Xi_{x,1,2}\backslash\mathrm{ConvGr}_{x,2}\\ \big\downarrow & & \big\downarrow\\ \Phi^{\times 2}_{1,1,x}\backslash\mathrm{Gr}^{\times 2}_{x,1} & & \Phi_{1,1,x}\backslash\mathrm{Gr}_{x,1}.\end{array}$$
Recall now the identifications of Proposition 1.1.9.
Here, the action
Φ x,1,1 is the usual left-multiplication action of G O over Gr. The action Φ x,2,2 is the action of G O × G O on G K × Gr given by (g 1 , g 2 ).(h, γ) = (g 1 hg -1 2 , g 2 γ). Finally, the action Ξ x,1,2 on ConvGr x,2 is the action of G O on G K × G O Gr given by g[h, γ] = [gh, γ].
Consider now two perverse sheaves F and G on the quotient Φ_{x,1,1}\Gr_{x,1}. This is equivalent to the datum of two G_O-equivariant perverse sheaves over Gr. We can perform the external product F ⊠ G living over $\Phi^{\times 2}_{x,1,1}\backslash\mathrm{Gr}^{\times 2}$, and then pull it back to Φ_{x,2,2}\Gr_{x,2}. Under the equivalence displayed above, this can be interpreted as a Ξ_{1,2}-equivariant perverse sheaf over ConvGr_2. By construction, this is exactly what [MV07] call F ⊠̃ G (up to shifts). Its pushforward along m is therefore the sheaf
F ⋆ G ∈ Perv G O (Gr).
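A minimal example, for orientation only: for G = GL_1 = G_m one has Gr(ℂ) ≅ ℤ (a discrete set of G_O-orbits, indexed by the coweight lattice), the convolution map m is addition, and
$$\delta_a\star\delta_b\ \simeq\ \delta_{a+b},\qquad a,b\in\mathbb Z,$$
which under the Satake dictionary corresponds to the tensor product of the characters of weight a and b of the dual torus.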
The same construction can be done with constructible sheaves instead of perverse sheaves. In the rest of the chapter we will describe an algebra structure on the category Cons G O (Gr G ) of equivariant constructible sheaves over the affine Grassmannian. This ∞-category is equivalent to Cons(G O \Gr G ), and from this point of view the product law is exactly the one described here.
Stratifications
Stratification of Gr, Gr_Ran and Gr_Ran,k
Construction 2.1.21. Recall from Remark 1.1.3 that the affine Grassmannian has a stratification in Schubert cells. We have explained in Section 1.2.1 that we are interested in considering constructible sheaves on the affine Grassmannian which are equivariant and constructible with respect to this stratification. While in principle the equivariant structure on the sheaf/D-module is sufficient, in that it implies the constructibility condition by what was observed in Section 1.2.1, it will be very important for us to consider "stratified homotopy equivalences" when working in the topological setting (see Definition 2.A.21). Therefore, it is essential to keep track of the stratifications. We will now extend the stratification from Gr to Gr_{Ran,k}. We give/recall the definition of stratified schemes and presheaves in Definition 2.A.2 and Construction 2.A.3. In that formalism, the stratification in Schubert cells can be seen as a continuous map of ind-topological spaces zar(Gr_G) → X_•(T)^+, where zar(Gr_G) is the Zariski ind-topological space associated to Gr_G, and X_•(T)^+ is the poset of dominant coweights of any maximal torus T ⊂ G. Therefore, the datum (Gr_G, S) may be interpreted as an object of StrPSh_ℂ. The global version Gr_X admits a stratification described in [Zhu16, eq. 3.1.11], which detects the monodromy of the pair (bundle, trivialization) at the chosen point. By filtering Gr_X by the lattice filtration (see discussion after Theorem 1.1.2) at every point of X, we can exhibit Gr_X as a stratified ind-scheme, or more generally a stratified presheaf, whose indexing poset is again X_•(T)^+.
Notation 2.1.22. From now on, an arrow of the form $\mathcal X\to P$, where $\mathcal X$ is a complex presheaf and P is a poset, will denote a geometric morphism $\mathrm{zar}(\mathcal X)\to P$, where $\mathrm{zar}(\mathcal X)=\operatorname*{colim}_{Y\to\mathcal X,\ Y\ \text{a scheme}}\mathrm{zar}(Y)$.
We will now construct maps Gr X I → (X • (T ) + ) I for any I. Recall from [START_REF] Zhu | An introduction to affine Grassmannians and to the geometric Satake equivalence[END_REF] the so-called factorising property of the Beilinson-Drinfeld Grassmannian. For |I| = 2, it says the following:
Proposition 2.1.23 ([Zhu16, Proposition 3.1.13]). There are canonical isomorphisms
$$\mathrm{Gr}_X\simeq\mathrm{Gr}_{X^2}\times_{X^2,\Delta}X,\qquad c:\mathrm{Gr}_{X^2}|_{X^2\setminus\Delta}\simeq(\mathrm{Gr}_X\times\mathrm{Gr}_X)|_{X^2\setminus\Delta}.$$
For an arbitrary I, the property is stated in [Zhu16, Theorem 3.2.1]. This property allows us to define a stratification on Gr X I over (X • (T ) + ) I endowed with the lexicographical order:
Definition 2.1.24. For |I| = 2, $(\mathrm{Gr}_{X^I})_{\leq(\mu,\lambda)}\subset\mathrm{Gr}_{X^2}$ is defined to be the closure of $\mathrm{Gr}_{X,\leq\mu}\times\mathrm{Gr}_{X,\leq\lambda}\subset(\mathrm{Gr}_X\times\mathrm{Gr}_X)|_{X^2\setminus\Delta}\xrightarrow{\ \sim\ }(\mathrm{Gr}_{X^2})|_{X^2\setminus\Delta}$ inside $\mathrm{Gr}_{X^2}$.
For an arbitrary I, the definition uses the small diagonals. This stratification coincides with the partition in Arc_{X^I}-orbits of Gr_{X^I}. We now consider the map $s_I:\mathrm{Gr}_{X^I}\to X_\bullet(T)^+$ given by the composite
$$\mathrm{Gr}_{X^I}\to(X_\bullet(T)^+)^I\to X_\bullet(T)^+,$$
where the first map is the one described above, and the second one is the map $(\mu_1,\dots,\mu_{|I|})\mapsto\sum_{i=1}^{|I|}\mu_i$.
The map s_I is a stratification. Also, by applying [Zhu16, Proposition 3.1.14], one proves that the diagonal map X^J → X^I induced by a surjective map of finite sets I → J is stratified with respect to s_I, s_J. Now, Gr_Ran is the colimit colim_{I∈Fin_surj} Gr_{X^I}, and therefore Gr_Ran inherits a map towards X_•(T)^+. This stratification coincides with the stratification in Arc_Ran-orbits of Gr_Ran. Finally, Gr_{Ran,k} admits a map towards $(X_\bullet(T)^+)^k$, inherited from the bundle map $\mathrm{Gr}_{\mathrm{Ran},k}\to\underbrace{\mathrm{Gr}_{\mathrm{Ran}}\times_{\mathrm{Ran}(X)}\dots\times_{\mathrm{Ran}(X)}\mathrm{Gr}_{\mathrm{Ran}}}_{k}$.
Definition 2.1.25. We denote the induced stratification Gr Ran,k → (X • (T ) + ) k by σ k .
Interaction with the semisimplicial structure and with the action of Arc Ran,•
Now we want to study the interaction between the stratifications and the semisimplicial structure. In particular, we want to prove that the simplicial maps agree with the stratifications σ k (and consequently τ k ), thus concluding that Gr Ran,• upgrades to a semisimplicial object in (StrPSh C ) /Ran(X) .
Definition 2.1.26. Consider the semisimplicial group Cw_• (for "coweights") defined by
$$\mathrm{Cw}_k=(X_\bullet(T)^+)^k,\qquad\delta_j:(\mu_1,\dots,\mu_k)\mapsto(\mu_1,\dots,\mu_{j-1},\mu_j+\mu_{j+1},\mu_{j+2},\dots,\mu_k).$$
Note that the formal gluing procedure Gr_{Ran,2} → Gr_Ran sends the stratum ((µ, λ), n) to the stratum (µ + λ, n). Indeed, unwinding the definitions and restricting to the case of points of cardinality 1 in Ran(X), formal gluing at a point amounts to multiplying two matrices, one with coweight µ and the other with coweight λ, hence resulting in a matrix of coweight µ + λ.
Thus if $S:X_\bullet(T)^+\times X_\bullet(T)^+\to X_\bullet(T)^+$ is the sum map, the diagram
$$\begin{array}{ccc}\mathrm{Gr}_{\mathrm{Ran},2} & \xrightarrow{\ \partial_1\ } & \mathrm{Gr}_{\mathrm{Ran},1}\\ {\scriptstyle\tau_2}\big\downarrow & & \big\downarrow{\scriptstyle\tau_1}\\ \mathrm{Cw}_2=X_\bullet(T)^+\times X_\bullet(T)^+ & \xrightarrow{\ S\ } & \mathrm{Cw}_1=X_\bullet(T)^+\end{array}$$
commutes.
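For example (a minimal abelian illustration), for G = GL_1 the dominant coweights form all of ℤ, a point of the fiber of Gr_{Ran,1} over {x} of coweight a is the lattice $t^a\mathbb C[\![t]\!]\subset\mathbb C(\!(t)\!)$ (after choosing a formal coordinate t at x), and gluing two such modifications at the same point multiplies the transition matrices:
$$t^a\cdot t^b=t^{a+b},$$
so that ∂_1 indeed covers the sum map S on coweights, as the diagram asserts.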
The case of an arbitrary I uses similar arguments. We can therefore say that for any face map ϕ : [h] → [k] the induced square
$$\begin{array}{ccc}\mathrm{Gr}_{\mathrm{Ran},k} & \xrightarrow{\ \mathrm{Gr}_{\mathrm{Ran}}(\phi)\ } & \mathrm{Gr}_{\mathrm{Ran},h}\\ {\scriptstyle\tau_k}\big\downarrow & & \big\downarrow{\scriptstyle\tau_h}\\ \mathrm{Cw}_k & \xrightarrow{\ \mathrm{Cw}(\phi)\ } & \mathrm{Cw}_h\end{array}$$
commutes. Note that the top row is a map of presheaves, so the correct interpretation of this diagram is: the diagrams
$$\begin{array}{ccc}X^I\times_{\mathrm{Ran}(X)}(\mathrm{Gr}_{\mathrm{Ran},k})_{\leq N} & \longrightarrow & X^I\times_{\mathrm{Ran}(X)}(\mathrm{Gr}_{\mathrm{Ran},h})_{\leq N}\\ \big\downarrow & & \big\downarrow\\ \mathrm{Cw}_k & \longrightarrow & \mathrm{Cw}_h\end{array}$$
commute for every I and N, where N refers to the lattice filtration.
Lemma 2.1.27. The strata of the stratification σ k are the orbits of the action of Arc Ran,k on Gr Ran,k over Ran(X).
Proof. We know that the orbits of the Schubert stratification of the affine Grassmannian are the orbits of the action of G O . The same is true at the Ran level. We can thus summarise the content of this whole section as follows, again in the notations of Section 2.A.3. Recall that Ran(X) is seen as a trivially stratified presheaf over Sch C .
Theorem 2.1.29. There exists a functor
$$\mathrm{ActGr}_\bullet:\Delta^{\mathrm{op}}_{\mathrm{inj}}\to\mathrm{Act}\big((\mathrm{StrPSh}_{\mathbb C})_{/\mathrm{Ran}(X)}\big)$$
$$[k]\mapsto\big((\mathrm{Gr}_{\mathrm{Ran},k},\sigma_k)\to\mathrm{Ran}(X),\ \mathrm{Arc}_{\mathrm{Ran},k}\to\mathrm{Ran}(X),\ \Phi_{k,k}\ \text{stratified action of}\ \mathrm{Arc}_{\mathrm{Ran},k}\ \text{on}\ \mathrm{Gr}_{\mathrm{Ran},k}\ \text{over}\ \mathrm{Ran}(X)\big),$$
which enjoys the 2-Segal property, and such that:
• $\mathrm{ActGr}_1=(\mathrm{Gr}_{\mathrm{Ran}}\to\mathrm{Ran}(X),\ G_{\mathcal O,\mathrm{Ran}}\to\mathrm{Ran}(X),\ \Phi_{1,1}:G_{\mathcal O,\mathrm{Ran}}\times_{\mathrm{Ran}(X)}\mathrm{Gr}_{\mathrm{Ran}}\to\mathrm{Gr}_{\mathrm{Ran}})$;
• the values of ActGr_k for higher k's describe the Mirkovic-Vilonen convolution diagram and its associativity in the sense of Section 2.1.4.
Fusion over the Ran space
2.2.1 Analytification
Topological versions of Gr_Ran,k and Arc_Ran,k
In order to take into account the topological properties of the affine Grassmannian and of its global variants, we will now analytify the construction performed in the previous section. This will allow us to consider the complex topology naturally induced on the analytic analogue of the prestacks Gr Ran,k by the fact that X is a complex curve, as well as a naturally induced stratification on the resulting complex analytic spaces. In 2.A.1 we describe the stratified analytification functor, which in turn induces a functor This construction admits a relative version over Ran(X) which is not exactly the natural one, because of a change of topology we are going to perform on top(Ran(X)).
strtop : PSh(StrSch C ) → PSh(StrTop) (see
Remark 2.2.2. Let M = top(X), which is a real topological manifold of dimension 2. In [Lur17, Definitions 5.5.1.1, 5.5.1.2] J. Lurie defines the Ran space Ran(M ) of a topological manifold. By definition, there is a map of topological spaces top(Ran(X)) → Ran(M ). Indeed,
top(colim I X I ) ≃ colim I top(X I ) ≃ colim I (topX) I = colim I M I ,
because top is a left Kan extension. Now each term of the colimit is the space of I-indexed collections of points in X(C), and hence it admits a map of sets towards Ran(M ). This is a continuous map: indeed, let f : I → X(C) be a function such that f (I) ∈ Ran({U i }) for some disjoint open sets U i . Then there is an open set V in Map Top (I, top(X)) containing f and such that ∀g ∈ V , g(I) ∈ Ran({U i }): for instance,
V = i {g : I → X | g(f -1 (f (I) ∩ U i )) ⊂ U i } suffices.
This induces a continuous map from colim M I to Ran(M ) by the universal property of the colimit topology, and therefore a continuous map top(Ran(X) an ) → Ran(M ) which is the identity set-theoretically (and thus it is compatible with the stratifications). (2.2.1)
In particular, we obtain a map ρ_• : strtop(Gr_{Ran,•}) → Ran(M). Let us stress that we are considering Act_con(PSh(StrTop)_{/Ran(M)}) and not Act_con(StrTop) because the whole affine Grassmannian is stratified by a poset which does not satisfy the ascending chain condition, whereas the stratifying posets of the truncations at the level N do.
Preimage functors
For every open U ⊂ Ran(M ), there exists a "preimage space" Gr U,k ∈ StrTop, whose underlying set can be described as {tuples in Gr Ran,k (C) such that S lies in U }.
Formally: Definition 2.2.6. We define functors FactGr k : Open(Ran(M )) → StrTop /Ran(M ) as
Open(Ran(M )) ⊂ Top /Ran(M ) ρ -1 k --→ StrTop /FactGr k → StrTop /Ran(M ) sending U to (U, κ| U ) and finally to (ρ -1 k (U ) → Ran(M ), σ k | ρ -1 k (U )
). This operation is compatible with the semisimplicial structure, and therefore we obtain a functor:
FactGr • : Open(Ran(M )) → 2-Seg ss (StrTop /Ran(M ) ).
We can perform the same restriction construction as above for Arc Ran,k and obtain stratified topological groups
Arc U,k acting on Gr U,k ∈ StrTop /Ran(M ) , functorially in U ∈ Open(Ran(M ))
and k. We denote the functor U → (Arc U,k → Ran(M )) by
FactArc k : Open(Ran(M )) → Grp(StrTop /Ran(M ) ).
Again, this construction is functorial in k.
Fusion
Definition 2.2.8. Let StrTop ⊙ /Ran(M )
be the following symmetric monoidal structure on StrTSpc /(Ran(M ),κ) : if ξ : X → Ran(M ), υ : Y → Ran(M ) are continuous maps, we define ξ ⊙ υ to be the disjoint product
(X × Y) disj = {x ∈ X, y ∈ Y | ξ(x) ∩ υ(y) = ∅}
together with the map towards Ran(M ) induced by the map
union : (Ran(M ) × Ran(M )) disj → Ran(M ) (S, T ) → S ⊔ T.
Recall the definition of the operad Fact(M ) ⊗ from [Lur17, Definition 5.5.4.9]. The aim of this subsection is to extend the FactGr k 's and the FactArc k 's to maps of operads respectively FactGr ⊙ k :
Fact(M ) ⊗ → StrTSpc ⊙ /Ran(M ) and FactArc ⊙ k : Fact(M ) ⊗ → Grp(StrTSpc /Ran(M ) ) ⊙ :
the idea is that the first one should encode the gluing of sheaves trivialised away from disjoints systems of points, and the second one should behave accordingly.
The gluing map
We turn back for a moment to the algebraic side. Definition 2.2.9. Let (Ran(X) × Ran(X)) disj be the subfunctor of Ran(X) × Ran(X) parametrising those S, T ⊂ X(R) for which Γ S ∩ Γ T = ∅. Let also (Gr Ran,k × Gr Ran,k ) disj be the preimage of (Ran(X) × Ran(X)) disj with respect to the map
r k × r k : Gr Ran,k × Gr Ran,k → Ran(X) × Ran(X).
Proposition 2.2.10. There is a map of stratified presheaves χ k : (Gr Ran,k × Gr Ran,k ) disj → Gr Ran,k encoding the gluing of sheaves with trivializations outside disjoint systems of points.
Proof. The map χ k is defined as follows: we start with an object
(S, F 1 , α 1 , µ 1 , . . . , F k , α k ), (T, G 1 , β 1 , ν 1 , . . . , G k , β k ),
where S ∩ T = ∅. We want to obtain a sequence (P, H 1 , γ 1 , ζ 1 , . . . , H k , γ k ). Since the graphs of S and T are disjoint, X R \ Γ S and X R \ Γ T form a Zariski open cover of X R . Therefore, by the descent property of the stack Bun G , every couple F i , G i can be glued by means of α i and β i .
Each of these glued sheaves, that we call H i , inherits a trivialization γ i outside Γ S ∪ Γ T , which is well-defined up to isomorphism (it can be seen both as
α i | X R \(Γ S ∪Γ T ) or as β i | X R \(Γ S ∪Γ T )
). Now set P = S ∪ T (in the usual sense of joining the two collections of points).
It remains to define the glued formal trivializations. However, to define a trivialization ζ_i of H_i over the formal neighbourhood of Γ_P amounts to look for a trivialization of H_i on the formal neighbourhood of Γ_S ⊔ Γ_T. But the first part of this union is contained in X_R \ Γ_T, where H_i is canonically isomorphic to F_i by construction; likewise, the second part of the union is contained in X_R \ Γ_S, where H_i is canonically isomorphic to G_i by construction. Hence, the original trivializations µ_i and ν_i canonically provide the desired datum ζ_i, and the construction of the map is complete. Moreover, this map is stratified. Indeed, we have the torsor Gr_{Ran,k} → Gr_Ran ×_{Ran(X)} ⋯ ×_{Ran(X)} Gr_Ran, and the stratification on Gr_{Ran,k} is the pullback of the one on Gr_Ran ×_{Ran(X)} ⋯ ×_{Ran(X)} Gr_Ran. Now for any I, J finite sets, the map (Gr_{X^I} × Gr_{X^J})_disj → Gr_{X^{I⊔J}} is stratified by definition (cfr. Definition 2.1.24). Therefore,
$$\Big(\big(\underbrace{\mathrm{Gr}_{X^I}\times_{X^I}\dots\times_{X^I}\mathrm{Gr}_{X^I}}_{k}\big)\times\big(\underbrace{\mathrm{Gr}_{X^J}\times_{X^J}\dots\times_{X^J}\mathrm{Gr}_{X^J}}_{k}\big)\Big)_{\mathrm{disj}}\ \longrightarrow\ \underbrace{\mathrm{Gr}_{X^{I\sqcup J}}\times_{X^{I\sqcup J}}\dots\times_{X^{I\sqcup J}}\mathrm{Gr}_{X^{I\sqcup J}}}_{k}$$
is stratified, and taking the colimit for I ∈ Fin surj , we obtain that
((Gr Ran × Ran(X) • • • × Ran(X) Gr Ran ) × (Gr Ran × Ran(X) • • • × Ran(X) Gr Ran )) disj
is stratified. Finally, since the stratification on Gr Ran,k is induced by the one on Gr Ran × Ran(X) • • • × Ran(X) Gr Ran via the torsor map Gr Ran,k → Gr Ran × Ran(X) • • • × Ran(X) Gr Ran , we can conclude.
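A concrete instance of the Zariski gluing step, in the abelian case G = GL_1 (an illustration only): if S = {x} and T = {y} with x ≠ y, and the two inputs are the line bundles O_X(a·x) and O_X(b·y) with their tautological trivializations away from x and y respectively, then the glued bundle is
$$\mathcal O_X(a\cdot x+b\cdot y),$$
with its tautological trivialization away from {x, y} and with the formal trivializations near x and y inherited from the two inputs.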
Construction of FactGr^⊙_k
Remark 2.2.11. For any two independent open subsets U, V ⊂ Ran(M), consider the diagram
$$\begin{array}{ccccc}\mathrm{Gr}_{U,k}\times\mathrm{Gr}_{V,k} & \longrightarrow & \mathrm{strtop}\big((\mathrm{Gr}_{\mathrm{Ran},k}\times\mathrm{Gr}_{\mathrm{Ran},k})_{\mathrm{disj}}\big) & \longrightarrow & \mathrm{Gr}_{\mathrm{Ran},k}\\ {\scriptstyle\pi}\big\downarrow & & \big\downarrow & & \big\downarrow\\ U\times V & \stackrel{\subset}{\longrightarrow} & (\mathrm{Ran}(M)\times\mathrm{Ran}(M))_{\mathrm{disj}} & \xrightarrow{\ \mathrm{union}\ } & \mathrm{Ran}(M),\end{array}\tag{2.2.2}$$
where the left hand square is a pullback of topological spaces, and the right top horizontal map is induced by Proposition 2.2.10 by applying strtop. Here we use that the underlying complex-analytical topological space of Ran(X) is -set-theoretically -the space of points of M , and therefore the map
strtop(Ran(X) × Ran(X)) → Ran(M ) × Ran(M ) restricts to a well-defined map strtop((Ran(X) × Ran(X)) disj ) → (Ran(M ) × Ran(M )) disj .
Note also that the bottom composition coincides with U × V → U ⋆ V → Ran(M ), the first map being the one taking unions of systems of points; hence, by the universal property of the fibered product of topological spaces,
(FactGr k )(U ) × (FactGr k )(V ) admits a map towards (FactGr k )(U ⋆ V ) = Gr Ran,k × Ran(M ) (U ⋆ V ), which we call p U,V,k . Of course the triangle Gr U,k × Gr V,k Gr U ⋆V,k Ran(M ) , p U,V,k union•π commutes.
Proposition 2.2.12. Remark 2.2.11 induces well-defined maps of operads
FactGr ⊙ k : Fact(M ) ⊗ → StrTop ⊙
/Ran(M ) encoding the gluing of sheaves trivialised outside disjoint systems of points. That is, we have
• FactGr ⊙ k (U ) is the map Gr U,k → U → Ran(M ) for every open subset U of Ran(M ).
• the image of the morphism
(U, V ) → (U ⋆ V ) for any independent U, V ∈ Open(Ran(M )) is the commuting triangle (FactGr k )(U ) × (FactGr k )(V ) (FactGr k )(U ⋆ V ) Ran(M ) ,
where the top map is the gluing of sheaves trivialised outside disjoint systems of points, and the left map is the map that remembers the two disjoint systems of points and takes their union.
Proof. See Section 2.B.2.
Remark 2.2.13. The constructions performed in the proof of Proposition 2.2.12 are compatible with the face maps of Gr Ran,• . Indeed, for any two independent open subsets U, V ⊂ Ran(M ), the squares
Gr U,k × Gr V,k Gr U ⋆V,k Gr U,k-1 × Gr V,k-1 Gr U ⋆V,k-1
are commutative because the original diagrams at the algebraic level commute. That is,
(Gr Ran,k × Gr Ran,k ) disj Gr Ran,k (Gr Ran,k-1 × Gr Ran,k-1 ) disj Gr Ran,k-1
commutes, since the construction involved in the horizontal maps, as we have seen, does not change the formal trivializations, and, by the independence hypothesis, the non-formal trivializations do not change in the punctured formal neighbourhoods involved in the formal gluing procedure.
Proposition 2.2.14. The maps of operads
FactGr ⊙ k : Fact(M ) ⊗ → StrTop ⊙ /Ran(M ) assemble to a map of operads FactGr ⊙ • : Fact(M ) ⊗ → (2-Seg ss (StrTop)) ⊙ /Ran(M ) .
Proof. Since we have already noticed that the functor strtop preserves finite limits, the condition that FactGr • is 2-Segal can be recovered from the algebraic setting. Now the map
FactGr k (U ) → FactGr {0,...,l} (U ) × FactGr {0,l} (U ) FactGr {l,...,k} (U )
is the pullback of Gr Ran,k → Gr Ran,{0,...,l} × Gr Ran,{0,l} Gr Ran,{l,...,k} along U → Ran(M ), hence it is a homeomorphism (and the same holds for the {l, k} case).
Construction of FactArc ⊙ k
Construction 2.2.15. We can perform a similar construction for
FactArc k : Open(Ran(M )) → Grp(StrTop /Ran(M ) )
as well. Indeed, we can define
(Arc Ran,k × Arc Ran,k ) disj (R) = {(S, g 1 , . . . , g k ) ∈ Arc Ran,k (R), (T, h 1 , . . . , h k ) ∈ Arc Ran,k (R) | Γ S ∩ Γ T = ∅}
and maps
(Arc Ran,k × Arc Ran,k ) disj (R) → Arc Ran,k (R) ((S, g 1 , . . . , g k ), (T, h 1 , . . . , h k )) → (S ∪ T, g 1 h 1 , . . . , g k h k ),
where g i h i is the automorphism of T G | (X R ) Γ S ∪Γ T defined separately as g i and h i on the two components, which are disjoint by hypothesis. The rest of the construction is analogous, and provides maps of operads
FactArc ⊙ k : Fact(M ) ⊗ → Grp(StrTop /Ran(M ) ) ⊙
which are, as usual, natural and 2-Segal in k ∈ ∆ op inj .
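To illustrate the formula in a concrete case, take G = G m (this example is added here for the reader's convenience and is not needed in what follows). An R-point of Arc Ran,1 lying over S is an automorphism of the trivial G m -torsor on the formal neighbourhood (X R ) Γ S , i.e. an invertible function g on that formal neighbourhood. Given such functions g near Γ S and h near Γ T with Γ S ∩ Γ T = ∅, the product gh is simply the invertible function on (X R ) Γ S ∪Γ T which equals g on the component around Γ S and h on the component around Γ T .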
The factorizing property
Our aim now is to verify the so-called factorization property (see [Lur17, Theorem 5.5.4.10]) for the functors
FactGr ⊙ k : Fact(M ) ⊗ → StrTop ⊙ /Ran(M )
and
FactArc ⊙ k : Fact(M ) ⊗ → Grp(StrTop /Ran(M ) ) ⊙ .
This will immediately imply the corresponding property for ActGr ⊙ .
Proposition 2.2.16 (Generalised factorization property). If U, V are independent, then the maps
FactGr k (U ) × FactGr k (V ) → FactGr k (U ⋆ V ), resp. FactArc k (U ) × FactArc k (V ) → FactArc k (U ⋆ V ), are stratified homeomorphisms over Ran(M ), resp. homeomorphisms of topological groups over Ran(M ).
Proof. Note that the right-hand square in Diagram (2.2.2) is Cartesian. Indeed, let us now prove that its algebraic counterpart
(Gr Ran,k × Gr Ran,k ) disj Gr Ran,k (Ran(X) × Ran(X)) disj Ran(X) is cartesian in PSh C .
The pullback of the cospan computed in PSh C is, abstractly, the functor parametrising tuples of the form (S, T ), (P,
H i , γ i , ζ i ), where (S, T ) ∈ (Ran(X)×Ran(X)) disj (R), P = S∪T, (P, H i , γ i , ζ i ) ∈ Gr Ran,k (R). From this we can uniquely reconstruct a sequence (S, F i , α i , µ i , T, G i , β i , ν i ) in (Gr Ran,k × Gr Ran,k ) disj (R).
To do so, define F i ∈ Bun G (X R ) as the gluing of H i with the trivial G-bundle around T , which comes with a trivialization α i :
F i | X R \Γ S ∼ -→ T G | X R \Γ S .
We also define G i as the gluing of H i with the trivial G-bundle around S, coming with a trivialization β i outside T . As for the formal part of the datum, the ζ i 's automatically restrict to the desired formal neighbourhoods. This construction is inverse to the natural map (Gr
Ran,k × Gr Ran,k ) disj → Gr Ran,k × Ran(X) (Ran(X) × Ran(X)) disj . But now, the diagram strtop(Ran(X) × Ran(X)) disj strtop(Ran(X)) (Ran(M ) × Ran(M )) disj Ran(M )
is again Cartesian, since, set-theoretically, the vertical maps are the identity, and we are just performing a change of topology on the bottom map. Hence the right-hand square in Diagram (2.2.2) is Cartesian, because the functor strtop preserves finite limits. This concludes the proof since the outer square in (2.2.2) is Cartesian, and therefore the natural map
FactGr k (U ) × FactGr k (V ) → FactGr k (U ⋆ V
) is a homeomorphism of topological spaces. Now we turn to FactArc k . It suffices to prove that the square
(Arc Ran,k × Arc Ran,k ) disj Arc Ran,k (Ran(X) × Ran(X)) disj Ran(X)
is Cartesian in PSh C . But this is clear once one considers the map
(Ran(X) × Ran(X)) disj × Ran(X) Arc Ran,k → (Arc Ran,k × Arc Ran,k ) disj
given by
(S, T, {g i ∈ Aut (X R ) Γ S ∪Γ T (T G )}) → ((S, {g i | (X R ) Γ S }), (T, {g i | (X R ) Γ T })).
Since the graphs are disjoint, this map is an equivalence. This concludes the proof.
Local constancy
The aim of this subsection is to prove that the functors FactGr ⊙ k and FactArc ⊙ k satisfy a suitable "local constancy" property, which will be used in the following to apply [Lur17, Theorem 5.5.4.10]. Definition 2.2.17. Let f : (Y, s, P ) → (Y ′ , s ′ , P ′ ) be a map of stratified topological spaces (in the notation of Section 2.A.1), and G a topological group. We say that f is a stratified G-bundle if, locally in the topology strloc (see Section 2.A.1), f coincides with the projection G × Y ′ → Y ′ . Lemma 2.2.18. Let G be a stratified group scheme. If f : S → T is a morphism of stratified schemes which is a stratified G-torsor with respect to the étale topology strét(see again Section 2.A.1), then strtop(f ) :
strtop(S) → strtop(T ) is a principal stratified G-bundle.
Proof. We first perform the proof in the unstratified setting. To begin with, being locally trivial with respect to the étale topology at the algebraic level implies being locally trivial with respect to the analytic topology at the analytic level (see [Rey71, Section 5] and [Bha, Section 5]). Now the analytic topology on an analytic manifold is the topology whose coverings are jointly surjective families of local homeomorphisms. We want to prove that a trivialising covering for top
FactGr k (Ran({E i })) → FactGr k (Ran({D i })) and FactArc k (Ran({E i })) → FactArc k (Ran({D i }))
are stratified homotopy equivalences over (Ran(X • (T ) + )) k in the sense of Definition 2.A.21.
Proof. The factorising property tells us that these maps assume the form ∏ i=1,...,n FactGr k (Ran(E i )) → ∏ i=1,...,n FactGr k (Ran(D i )) and ∏ i=1,...,n FactArc k (Ran(E i )) → ∏ i=1,...,n FactArc k (Ran(D i ))
respectively, and hence it suffices to perform the checks term by term, i.e., to assume n = 1. We deal first with the case FactGr k (Ran({E i })) → FactGr k (Ran({D i })). Now we observe that we can reduce to the case k = 1: indeed, Gr Ran,k is a stratified Arc Ran,k-1 -torsor over the k-fold product Gr Ran × Ran(X) • • • × Ran(X) Gr Ran , and therefore Gr Ran,k → Gr Ran × Ran(M ) • • • × Ran(M ) Gr Ran is a stratified Arc Ran,k-1 -principal topological bundle, in particular it is a Serre fibration. Thus, if we prove that Gr Ran(E) → Gr Ran(D) is a stratified homotopy equivalence, then the same will be true for
Gr Ran(E) × Ran(E) • • • × Ran(E) Gr Ran(E) → Gr Ran(D) × Ran(D) • • • × Ran(D)
This provides a functor
ActGr × : Fact(M ) ⊗ × E nu 1 → Corr(Act con (PSh(StrTop) /Ran(M ) )[esh -1 ]) ×,⊙ (2.2.3)
which is a map of operads in both variables with respect to the respective symmetric monoidal structures on the target, has the factorization property in the first variable and sends the usual inclusions of Ran spaces of systems of disks to equivalences in the target category.
Interaction of convolution and fusion over Ran(M )
Construction 2.2.25. Let thus ActGr • (-)
• : Fact(M ) ⊗ × ∆ op inj → PSh(Act((StrTop) /Ran(M ) )) ⊙ denote the functor defined by (U, k) → (FactArc k (U ), FactGr k (U ), Φ k,k (U )).
Recall that this functor satisfies the 2-Segal property in k. If we apply (the semisimplicial variant of) [DK19, Theorem 11.1.6] to ActGr • (-) ⊙ , we obtain a functor
ActGr(-, -) ⊙,× : Fact(M ) ⊗ × E nu 1 → Corr(PSh(Act(Top /Ran(M ) ))) ⊙,×
This functor is lax monoidal in the variable Fact(M ) ⊗ with respect to the structure ⊙ on the target, and in the variable E nu 1 with respect to the structure × on the target.
Remark 2.2.26. Now we make the two algebra structures interact. Consider Gr Ran,1 = Gr Ran : we have two ways of defining an "operation" on it:
• restrict to Gr Ran,1 × Ran(M ) Gr Ran,1 and consider the correspondence
Gr Ran,2
Gr Ran,1 × Ran(M ) Gr Ran,1 Gr Ran,1
• restrict to (Gr Ran,1 × Gr Ran,1 ) disj , or more precisely to Gr U,1 × Gr
The stalk of the factorising cosheaf
We can apply [Lur17, Theorem 5.5.4.10] to the map of operads (2.2.3), since we have proven in the previous subsections that the hypotheses of the theorem are satisfied. We denote the operads
E ⊗ n , E ⊗ M by E n , E M .
Corollary 2.2.27. The functor ActGr × induces a nonunital E M -algebra object
ActGr × M ∈ Alg nu E M (Alg nu E 1 (PSh(Act(Top)[esh -1 ]) × )).
Fix a point x ∈ M . The main consequence of the above result is that the stalk of ActGr × at the point {x} ∈ Ran(M ) inherits an E nu 2 -algebra structure in 2-Seg ss (PSh(Act(Top)[esh -1 ]) × )). We will now explain how, running through [Lur17, Chapter 5] again.
E ∩ F = ∅, D ⨿ D ∼ -→ E ⨿ F → D} ≃ A(Ran(E 0 )) ⊗ A(Ran(F 0 ))
for any choice of an embedding
D ⨿ D ∼ -→ E 0 ⨿ F 0 → D.
Proof. We need to prove that the colimit degenerates. Indeed, take two pairs of disks as in the statement, say E 1 , F 1 and E 2 , F 2 . By local constancy, we can suppose that all four disks are pairwise disjoint. Now we can embed both E 1 and E 2 into some E, and F 1 , F 2 into some F , in such a way that E ∩ F = ∅.
Then we have canonical equivalences
A(Ran(E 1 )) ⊗ A(Ran(F 1 )) ∼ -→ A(Ran(E)) ⊗ A(Ran(F ))
and
A(Ran(E 2 )) ⊗ A(Ran(F 2 )) ∼ -→ A(Ran(E)) ⊗ A(Ran(F )).
This discussion implies that the operation µ on A x encoded by Lurie's theorem has the form
A x ⊗ A x ≃ A(Ran(D)) ⊗ A(Ran(D)) ∼ -→ A(Ran(E 0 )) ⊗ A(Ran(F 0 )) ∼ -→ A(Ran(E 0 ) ⋆ Ran(F 0 )) → A(Ran(D)) ≃ A x , (2.2.4)
where:
• the first and last equivalences come from local constancy;
• the second equivalence is induced by the chosen embedding
D ⨿ D ∼ -→ E 0 ⨿ F 0 → D;
• the third equivalence is the factorization property.
The discussion about the stalk leads to the main theorem of this section:
Theorem 2.2.30. The stalk at x ∈ M of the E M -algebra object ActGr M from Corollary 2.2.27 can be naturally viewed as an object of
ActGr × x ∈ Alg nu E 2 (Alg nu E 1 (Corr(Act con StrTop[esh -1 ]) × ))
encoding simultaneously the convolution and fusion procedures on G O \Gr G .
Product of constructible sheaves
2.3.1 Taking constructible sheaves
Definition 2.3.1. We denote by A ⊗,nu the functor
Cons ⊗ corr • ActGr × x : E nu 2 × E nu 1 → Pr L,⊗ k .
By construction, one has that A ⊗,nu (⟨1⟩, ⟨1⟩) = Cons G O (Gr) and, more generally,
(⟨m⟩, ⟨k⟩) → m Cons G ×k O (G ×k-1 K × Gr) ⊗ • • • ⊗ Cons G ×k O (G ×k-1 K × Gr) .
Units and the main theorem
Remark 2.3.2. Let us inspect the behaviour of the E 2 product. We had a map µ : Hck x × Hck x → Hck x from (2.2.4). By construction, when we apply the functor Cons, this is sent forward with lower shriek functoriality. Therefore, for k = 1, we end up with a map
Cons G O (Gr) × Cons G O (Gr) µ ! -→ Cons G O (Gr).
Recovering the original structure of the map µ, µ ! decomposes as
Cons G O (Gr) × Cons G O (Gr) ∼ -→ Cons Arc Ran(D) (Gr Ran(D) ) × Cons Arc Ran(D) (Gr Ran(D) ) ∼ -→ Cons Arc Ran(E) (Gr Ran(E) ) × Cons Arc Ran(F ) (Gr Ran(F ) ) ∼ -→ Cons Arc Ran(E)⋆Ran(F ) (Gr Ran(E)⋆Ran(F ) ) → Cons Arc Ran(D) (Gr Ran(D) ) ∼ -→ j * x Cons G O (Gr),
where j x : {x} → Ran(M ) is the inclusion. The formal gluing property evidently commutes with this map at the various levels, so this construction is natural in k. As usual, let us denote by Hck x,k the evaluation of Hck × x at ⟨1⟩ ∈ E nu 2 and ⟨k⟩ ∈ E nu 1 . We have an induced map u k : * → Hck x,k for every k. Now if µ k : Hck x,k × Hck x,k → Hck x,k is the multiplication in Sh(StrTop), we can consider the composition
Hck x,k she ∼ Hck x,k × * id,u k ---→ Hck x,k × Hck x,k µ k -→ Hck x,k
and we find that this composition is the identity. Therefore, u k is a right quasi-unit, functorially in k. The condition that it is a left quasi-unit is verified analogously.
Remark 2.3.4.
Let us now inspect the behaviour of the E 1 product. Let us fix the E 2 entry equal to ⟨1⟩ for simplicity. Then the product law is described by the map
A x (⟨1⟩, ⟨2⟩) = Cons G O (Gr) ⊗ Cons G O (Gr) ⊠ -→ Cons G O ×G O (Gr × Gr) Hckx(∂ 2 ×∂ 0 ) * =p * -----------→ Cons G O ×G O (FactGr 2,x ) = Cons(Hck x ) ∼ = Cons(G O \(G K × G O Gr)) FactGr(δ 1 ) * =m * ----------→ Cons G O (Gr) = A x (⟨1⟩, ⟨1⟩).
The "pullback" and "pushforward" steps come from the construction of the functor out of the category of correspondences, which by construction takes a correspondence to the "pullback-pushforward" transform between the categories of constructible sheaves over the bottom vertexes of the correspondence. Note that the most subtle step is the equivalence in the penultimate step. If one is to compute explicitly a product of two constructible sheaves F, G ∈ Cons G O (Gr), one must reconstruct the correct equivariant sheaf over ConvGr x,2 whose pullback to Gr x,2 is p * (F ⊠ G), and then push it forward along m (in the derived sense of course). We stress again that this, when restricted to Perv G O (Gr), is exactly the definition of the convolution product of perverse sheaves from [START_REF] Mirkovic | Geometric Langlands duality and representations of algebraic groups over commutative rings[END_REF] (up to shift and t-structure).
Proposition 2.3.5. The map of operads A ⊗ x : E 2 × E nu 1 → Pr L,⊗ k can be upgraded to a map of operads A ⊗ x : E 2 × E 1 → Pr L,⊗ k .
Proof. Again, it suffices to exhibit a quasi-unit. In this case, this is represented by the element *
G K × Gr = Gr x,2 G K × G O Gr = ConvGr 2,x
Gr × Gr Gr.
p q m
In our specific case, we are given a diagram
G K × Gr G K × G O Gr * × Gr Gr × Gr Gr, p q m t×id j
where j is the closed embedding (F, α) → (T G , id| X\x , F, α) whose image is canonically identified with Gr. Let F ∈ Cons k,G O (Gr). We want to prove that 1 ⊠F ≃ j * (k ⊠ F ), i.e. that
q * j * (k ⊠ F ) ≃ p * (t × id) * (k ⊠ F ).
Note that because of the consideration about the image of j the support of both sides lies in G O × Gr ⊂ G K × Gr, and this yields a restricted diagram
G O × Gr Gr Gr Gr. q p m ∼ j
This proves the claim. By applying m * we obtain
1 ⋆ F ≃ m * (j * (k ⊠ F )) = k ⊠ F = F since mj = id.
Thanks to these results, our functor A ⊗ x is promoted to a map of operads E 2 × E 1 → Pr L,⊗ k . By the Additivity Theorem ([Lur17, Theorem 5.1.2.2]), we obtain an E 3 -algebra object in Pr L,⊗ k . Summing up:
Theorem 2.3.6 (Main theorem). Let G be a complex reductive group and k be a finite ring of coefficients.
There is an object A ⊗ x ∈ Alg E 3 (Pr L,⊗ k ) describing an associative and braided product law on the ∞-category
Cons G O (Gr G ) of G O -equivariant constructible sheaves over the affine Grassmannian. The restriction of this product law to the abelian category of equivariant perverse sheaves coincides, up to shifts and perverse truncations, with the classical (commutative) convolution product of perverse sheaves [MV07].
Corollary 2.3.7.
There is an induced
E 3 -monoidal structure in Cat × ∞ on Cons fd G O (Gr).
Proof. The inclusion Pr L k → Cat ∞,k is lax monoidal with respect to the ⊗-structure on the source and the ×-structure on the target. Therefore, Cons G O (Gr) has an induced E 3 -algebra structure in Cat × ∞,k . One can easily check that the convolution product restricts to the small (not presentable) subcategory of finite-dimensional sheaves, and this concludes the proof.
2.A Constructible sheaves on stratified spaces: theoretical complements
2.A.1 Stratified schemes and stratified analytic spaces
Definitions
The following definitions are particular cases of [BGH20, 8.2.1 ff.]. Definition 2.A.2. Let StrSch = Sch × Top StrTop, where the map Sch → Top sends a scheme X to its underlying Zariski topological space and the other map is the evaluation at [0], be the category of stratified schemes, and StrAff its full subcategory of stratified affine schemes.
Analogously, one can define stratified complex schemes StrSch C and stratified complex affine schemes StrAff C . The key point now is that there is an analytification functor an : Aff C → Stn C , the category of Stein analytic spaces. This is defined in [Rey71, Théorème et définition 1.1] (and for earlier notions used there, see also Grothendieck's Sur les faisceaux algébriques et les faisceaux analytiques cohérents, p. 6). In this way we obtain a Stein space, which is a particular kind of complex analytic space with a sheaf of holomorphic functions. We can forget the sheaf and the complex structure and recover an underlying Hausdorff topological space (which corresponds to the operation denoted by | -| in [Rey71]), thus finally obtaining a functor top : Sch C → Top.
A reference for a thorough treatment of analytification (also at a derived level) is Holstein's Analytification of mapping stacks.
Let now StrStn
C = Stn C × Top,ev 0 StrTop.
There is a natural stratified version of the functor top, namely the one that assigns to a stratified affine complex scheme (S, s : zar(S) → P ) the underlying topological space of the associated complex analytic space, with the stratification induced by the map of ringed spaces u : S an → S:
StrAff C → StrStn C → StrTop, (S, s) → (S an , s • u) → (|S an |, s • u).
Proof. Since StrTop = Fun(∆ 1 , Top) × ev 1 ,Top,Alex Poset (see Section 2.A.1), it suffices to show that:
• the functor ev 0 : Fun(∆ 1 , Top) → Top preserves and reflects pullbacks.
• the functor Alex : Poset → Top preserves and reflects pullbacks;
Now, the first point follows from the fact that limits in categories of functors are computed componentwise. The second point can be verified directly, by means of the following facts:
• the functor preserves binary products. Indeed, given two posets P and Q, the underlying sets of P × Q and Alex(P ) × Alex(Q) coincide. Now, the product topology on Alex(P ) × Alex(Q) is coarser than the Alexandrov topology Alex(P × Q). Moreover, there is a simple base for the Alexandrov topology of a poset P , namely the one given by "half-lines" P p 0 = {p ∈ P | p ≥ p 0 }. Now, if we choose a point (p 0 , q 0 ) ∈ Alex(P × Q), the set (P × Q) (p 0 ,q 0 ) is a base open set for the topology of Alex(P × Q), but it coincides precisely with P p 0 × Q q 0 . Therefore the Alexandrov topology on the product is coarser than the product topology, and we conclude. Note that this latter part would not be true in the case of an infinite product.
• equalizers are preserved by a simple set-theoretic argument. Therefore, we can conclude that finite limits are preserved.
• Alex is a fully faithful functor (by direct verification). Since we have proved that it preserves finite limits, it reflects them as well.
Topologies
In our setting, there are two especially relevant Grothendieck topologies to consider: the étale topology on the algebraic side and the topology of local homeomorphisms on the topological side (which, however, has the same sheaves as the topology of open embeddings). We thus have the sites (Aff C , ét) and (Top, loc). We can therefore consider the following topoi:
• Sh ét (Sch C );
• PSh(Sh ét (Sch C )), which we interpret as "étale sheaves over the category of complex presheaves";
• Sh loc (Top). This last topos is indeed equivalent to the usual topos of sheaves over the category of topological spaces and open covers. Now, there are analogs of both topologies in the stratified setting. Namely, we can define strét as the topology whose coverings are stratification-preserving étale coverings, and strloc as the topology whose coverings are jointly surjective families of stratification-preserving local homeomorphisms. Therefore, we have well-defined stratified analogues:
• Sh strét (StrSch C ) • Sh strét (StrPSh C ) := PSh(Sh strét (StrSch C ))
• Sh strloc (StrTop).
The stratification functor strtop : StrSch C → StrTop sends stratified étale coverings to stratified coverings in the topology of local homeomorphism, and thus induces a functor Sh strét (StrPSh C ) → Sh strloc (StrTop).
2.A.2 Symmetric monoidal structures on the constructible sheaves functor
Constructible sheaves on conically stratified spaces locally of singular shape
Fix a finite ring of coefficients k (this can be extended to the ℓ-adic setting and to more general rings of coefficients, but we will not do this in the present work; for a reference, see Liu and Zheng, Enhanced six operations and base change theorems for Artin stacks).
Remark 2.A.5. Let (X, s) be a stratified topological space, and k be a torsion ring. Suppose that (X, s) is conically stratified, that X is locally of singular shape and that P , the stratifying poset, satisfies the ascending chain condition (see [Lur17, Definition A.5.5 and A.4.15 resp.]). By [Lur17, Theorem A.9.3] the ∞-category of constructible sheaves on X with respect to s with coefficients in k, denoted by Cons k (X, s), is equivalent to the ∞-category Fun(Exit(X, s), Mod k ).
Here Exit(X, s) is the ∞-category of exit paths on (X, s) (see [Lur17, Definition A.6.2], where it is denoted by Sing A (X), A being the poset associated to the stratification). We will often write Cons instead of Cons k .
We review the definition of conical stratifications in more detail in Definition 3.1.11.
Remark 2.A.6. In the recent work Topological exodromy with coefficients by Porta and Teyssier, the hypothesis of "being locally of singular shape" has been removed. For simplicity, we will often work in this higher degree of generality.
Let StrTop con denote the 1-category of stratified topological spaces (X, P, s) such that the stratification is conical, X is locally of singular shape, and P satisfies the ascending chain condition. This category admits finite products because the product of two cones is the cone of the join space. Therefore, there is a well-defined symmetric monoidal Cartesian structure StrTop × con .
Corollary 2.A.7. Let (X, P, s) ∈ StrTop con , and k be a ring. Then Cons k (X, P, s) is a presentable stable k-linear category.
Therefore, the ∞-category Pr L k of presentable stable k-linear ∞-categories will be our usual environment from now on.
Symmetric monoidal structure
Lemma 2.A.8. The functor
Exit : StrTop con → Cat ∞ (X, s : X → P ) → Exit(X, s) = Sing P (X)
carries a symmetric monoidal structure when we endow both source and target with the Cartesian symmetric monoidal structure. That is, it extends to a symmetric monoidal functor
StrTop × con → Cat × ∞ .
Proof. Given two stratified topological spaces X, s : X → P, Y, t : Y → Q, in the notations of [Lur17, A.6], consider the commutative diagram of simplicial sets
Sing P ×Q (X × Y ) Sing(X × Y ) Sing(X) × Sing(Y ) N (P × Q) Sing(P × Q) N (P ) × N (Q) Sing(P ) × Sing(Q). ∼ ∼ ∼
The inner diagram is Cartesian by definition. Therefore the outer diagram is Cartesian, and we conclude that Sing P ×Q (X × Y ) is canonically equivalent to Sing P (X) × Sing Q (Y ). Since Sing P (X) models the ∞-category of exit paths of X with respect to s, and similarly for the other spaces, we conclude.
Remark 2.A.9. There exists a functor P ( * ) : Cat × ∞ → Pr L,⊗ sending an ∞-category C to the presheaf ∞-category P(C), and a functor F : C → D to the functor F * : P(D) → P(C) given by restriction under F . We would like to set a symmetric monoidal structure on this functor. However, this is slightly complicated. To our knowledge, the symmetric monoidal structure is well-studied on its "covariant version", in the following sense. Proof. The existence of an oplax-monoidal structure follows from [Lur17, p. 4.8.1]. As for symmetric monoidality, apparently, a detail in the proof of [Lur17, Proposition 4.8.1.15] needs to be fixed: for any pair of ∞-categories C, D, the equivalence P(C) ⊗ P(D) ≃ P(C × D) follows from the universal property of the tensor product of presentable categories, and not from [Lur17, Corollary 4.8.1.12]. Indeed, for any cocomplete ∞-category E one has:
Cocont(P(C × D), E) ≃ Fun(C × D, E) ≃ Fun(C, Fun(D, E)) ≃ ≃ Fun(C 0 , Cocont(P(D), E)) ≃ Cocont(P(C), Cocont(P(D), E)) ≃ ≃ Bicocont(P(C) × P(D), E).
Corollary 2.A.11. There is a well-defined symmetric monoidal functor
Cons ⊗ (!) : StrTop × con → Pr L,⊗ k (X, s) → Cons k (X, s) f → f formal ! = Lan Exit(f ) .
Proof. The previous constructions provide us with a symmetric monoidal functor
StrTop × Exit(-) ----→ Cat × ∞ op -→ Cat × ∞ P(-) ---→ Pr L,⊗ sending (X, s) → Fun(Exit(X, s), S), f → Lan Exit(f ) .
But now, with the notations of [Lur17, Subsection 1.4.2], for any ∞-category C we have Fun(C, Sp) = Sp(Fun(C, S)).
Then we can apply [Rob14, Remark 4.2.16] and finally [Rob14, Theorem 4.2.5], which establish a symmetric monoidal structure for the functor Sp(-) : Pr L,⊗ → Pr L,⊗ stable . The upgrade from Sp to Mod k is straightforward and produces a last functor Pr L,⊗ stable → Pr L,⊗ k (the ∞-category of presentable stable k-linear categories).
From now on, we will often omit the "linear" part of the matter and prove statements about the functor Cons : StrTop × con → Pr L,⊗ and its variations, because the passage to the stable k-linear setting is symmetric monoidal. Remark 2.A.12. We denote the functor Lan Exit(f ) by f formal ! because, in general, it does not coincide with the proper pushforward of sheaves. We will see in the next subsection that it does under suitable hypotheses on f . We will also see that, as a corollary of Corollary 2.A.11, there exists a symmetric monoidal structure on the usual contravariant version
Cons ( * ) : StrTop op con → Pr L,⊗ (X, s) → Cons(X, s) f → f * = -• Exit(f )
as well.
2.A.3 Constructible sheaves and correspondences
First of all, we need to recall some properties of the category of constructible sheaves with respect to an unspecified stratification.
Definition 2.A.13. Let X be a topological space. Then there is a well-defined ∞-category of constructible sheaves with respect to a non-fixed stratification
D c (X) = colim s:X→P stratification Cons(X, s).
where the colimit is taken over the category StrTop × Top {X} of stratifications of X and refinements between them.
Our aim is to prove the following theorem (we will indeed prove a more powerful version, see Theorem 2.A.22).
It sends morphisms in horiz to pullback functors along those morphisms, and morphisms in vert to proper pushforward functors along those morphisms.
Unlike in the stratified case (i.e. the case when the stratification is fixed at the beginning, treated in the previous subsections), for D c (-) there is a well-defined six functor formalism. In particular: Lemma 2.A.15. For any continuous map f : X → Y , there are well-defined functors f * , f ! : D c (Y ) → D c (X). Moreover, f * has a right adjoint Rf * and f ! has a left adjoint Rf ! .
From now on, we will write f * for Rf * and f ! for Rf ! . With these notations, the Proper Base Change Theorem (stated e.g. in [Kim15, Theorem 6] at the level of abelian categories, and in Volpe's Six functor formalism for sheaves with non-presentable coefficients at the level of derived ∞-categories) holds: Theorem 2.A.16 (Base Change theorem for constructible sheaves). For any Cartesian diagram of unstratified topological spaces
X ′ X Y ′ Y f ′ g ′ f g (2.A.2)
there is a canonical transformation of functors
D c (X) → D c (Y ′ ) f ′ ! g ′ * → g * f !
which is an equivalence of functors. The same holds for
f ! g * → g ′ * f ! as functors D c (Y ) → D c (X).
We are now ready to prove Theorem 2.A.14. We follow the approach used by D. Gaitsgory and N. Rozenblyum in [GR17, Chapter 5] to construct IndCoh as a functor out of the category of correspondences. As in these works, we need a theory of (∞, 2)-categories. In particular, we need to extend Cat ∞ to an (∞, 2)-category, which we shall denote by Cat 2-cat ∞ : this is done in [GR17], informally by allowing natural transformations of functors which are not natural equivalences. We proceed by steps. We perform the verifications necessary to the application of that theorem:
• The pullback and 2-out-of-3 properties for horiz = all, co-adm = proper, isom, which are immediately verified.
• Given a Cartesian diagram
X Y Z W ε 0 γ 1 γ 0 ε 1
with ε i ∈ open and γ i ∈ proper, the Beck-Chevalley condition is satisfied (every functor from correspondences must satisfy it; this is the "easy" part of the extension theorems). Hence there is an equivalence
(γ 1 ) * ε * 0 ≃ ε * 1 (γ 0 ) * .
Now we use the other Beck-Chevalley condition, the one introduced and checked in the previous point, using the fact that ε 1 ∈ open. This new set of adjunctions gives us a morphism
(ε 1 ) ! (γ 1 ) * → (γ 0 ) * (ε 0 ) ! .
We require that this morphism is an equivalence. But since γ i ∈ proper, we have that (γ i ) * ≃ (γ i ) ! and we conclude by commutativity of the diagram and functoriality of the proper pushforward.
Note that we have used exactly once that, respectively, for an open embedding f * ≃ f ! and for a proper morphism
f * = f ! .
This completes the proof of the theorem.
Definition 2.A.21. We define she to be the class of maps in StrTop con which admit a homotopy inverse which is itself stratified. We define esh to be the class of maps (f : H → H ′ , g : Y → Y ′ ) in Act con (StrTop) which admit a homotopy inverse (f̄ : H ′ → H, ḡ : Y ′ → Y ) also living in Act con (StrTop) (i.e. f̄ , ḡ stratified and ḡ being f̄ -equivariant). These classes easily extend to the setting of presheaves.
Theorem 2.A.22. There is a well-defined "equivariant constructible sheaves" functor
Cons ⊗ act,corr : Corr(P(Act con (PSh(StrTop))[esh -1 ])) × → Pr L,⊗ k .
Proof. We argue as in [START_REF] Gaitsgory | A Study in Derived Algebraic Geometry[END_REF]Chapter 5,3.4]. First of all, we have defined above a (symmetric monoidal) functor q : Act con (StrTop) × → Sh(StrTop) × , in turn inducing a symmetric monoidal functor q corr between the categories of correspondences. We now show that there exists a symmetric monoidal functor Corr(Sh(Top)) × → Pr L,⊗ . ----→ Pr L is symmetric monoidal with respect to the Cartesian structure on the source and to the Lurie tensor product on the target. Therefore, to conclude we can apply arguments similar to the ones used in [GR17, Chapter 5, 4.1.5], that is essentially [GR17, Chapter 9, Proposition 3.2.4], whose hypotheses are trivially verified since horiz = all.
By right
In particular, the functor
Cons : StrTop op con → Cat ∞ (X, s) → Cons(X, s) f → f * = -• Exit(f )
is symmetric monoidal, as we had announced in Remark 2.A.12.
2.B Omitted proofs and details
2.B.1 Proof of Proposition 2.1.7
Proof. Fix a discrete complex algebra R, and let ξ = (S, F i , α i , µ i ) i be a vertex of the groupoid Gr Ran,k (R).
We must prove that π 1 (Gr Ran,k (R), ξ) = 0. We know that Ran(X) is a presheaf of sets over complex algebras. Therefore it suffices to prove that for every S ∈ Ran(X)(R), the fiber of Gr Ran,k → Ran(X) at S is discrete.
Consider then an automorphism of a point (S, F i , α i , µ i ): this is a sequence of automorphisms ϕ i for each bundle F i , such that the diagrams
F i | X R \Γ S T G | X R \Γ S F i | X R \Γ S α i ϕ i | X R \Γ S α i
(for i = 1, . . . , k) and
F i | (X R ) Γ S T G | (X R ) Γ S F i | (X R ) Γ S µ i ϕ i | (X R ) Γ S µ i
(for i = 1, . . . , k -1) commute. (Actually, only the commutation of the former set of diagrams is relevant to the proof.)
The first diagram implies that ϕ i is the identity over X R \ Γ S . We want to show that ϕ i is the identity. In order to show this, we consider the relative spectrum Spec
X R (Sym(F i )) of F i , which comes with a map π : Y = Spec X R (Sym(F i )) → X R . An automorphism of F i corresponds to an automorphism f i of Y over X R ,
which in our case is the identity over the preimage of X R \ Γ S inside Y . Let U be the locus {f i = id}. This set is topologically dense, because it contains the preimage of the dense open set X R \ Γ S . We must see that it is schematically dense, that is the restriction map O Y (Y ) → O Y (U ) is injective. If we do so, then ϕ i = id globally. The remaining part of the proof was suggested to us by Angelo Vistoli.
We may suppose that R (and therefore Y and X R ) are Noetherian. Indeed, we can reduce to the affine case and suppose X R = Spec P and Y of the form Spec P [t 1 , . . . , t n ] from the Noether Lemma (observe that Y is finitely presented over X R ). Any global section f of Spec P [t 1 , . . . , t n ] lives in a smaller Noetherian subalgebra P ′ [t 1 , . . . , t n ], because it has a finite number of coefficients in P . Analogously, we can suppose U to be a principal open set of Spec P [t 1 , . . . , t n ] and thus f | U can be seen as a section in some noetherian subalgebra of P [t 1 , . . . , t n ] g , with g a polynomial in P [t 1 , . . . , t n ]. Therefore we conclude that the proof that f is zero can be carried out over a Noetherian scheme.
Let us recall the following facts. • If f : S → T is a flat morphism of Noetherian schemes and p ∈ S, then p is an associated point of S if and only if p is an associated point of the fiber over f (p) and f (p) is an associated point of T .
Let now S = Y and T = X R . First of all, if we consider the composition Y red → X R we have that U red = Y red , because two morphisms between separated and reduced schemes coinciding on an open dense set coincide everywhere. Now we note that U contains the generic points of every fiber. Indeed, every f -1 (x) ⊂ Y factors through Y red → Y because the fibers are integral, and hence through
U red = Y red → Y . Now if y is an associate point in Y then it is associate in f -1 (f (y)).
Therefore it is a generic point of f -1 (f (y)), because every fiber of a principal G-bundle is isomorphic to G, which is integral. But U contains all generic points of the fibers, which are their associated points because the fibers are integral. This implies that U is schematically dense.
2.B.2 Proof of Proposition 2.2.12
Proof. Since the inclusions U → V of open sets in Ran(M ) induce inclusions (FactGr k ) U → (FactGr k ) V and do not alter the datum of (FactGr k ) U , it suffices to prove that the maps p U,V,k make the diagram
(FactGr k ) U ⋆V × (FactGr k ) W (FactGr k ) U × (FactGr k ) V × (FactGr k ) W (FactGr k ) U ⋆V ⋆W (FactGr k ) U × (FactGr k ) U ⋆V p U ⋆V,W,k id×p V,W,k p U,V,k ×id p U,V ⋆W,k
commute in StrTop. Now this is true because of the following. Define (Ran(X)×Ran(X)×Ran(X)) disj as the subfunctor of Ran(X) × Ran(X) × Ran(X) parametrising those S, T, P ⊂ X(R) whose graphs are pairwise disjoint in X R . Let (Gr Ran,k × Gr Ran,k × Gr Ran,k ) disj be its preimage under r k × r k × r k :
Gr Ran,k × Gr Ran,k × Gr Ran,k → Ran(X) × Ran(X) × Ran(X). Then the diagram
(Gr Ran,k × Gr Ran,k ) disj (Gr Ran,k × Gr Ran,k × Gr Ran,k ) disj Gr Ran,k (Gr Ran,k × Gr Ran,k ) disj χ k id×χ k χ k ×id χ k
commutes because the operation of gluing is associative, as it is easily checked by means of the defining property of the gluing of sheaves. Note also that everything commutes over Ran(M ).
Finally, to prove that the functor defined in Remark 2.2.11 is a map of operads, we use the characterization of inert morphisms in a Cartesian structure provided by [Lur17, Proposition 2.4.1.5]. Note that:
• An inert morphism in Fact(M ) ⊗ is a morphism of the form
(U 1 , . . . , U m ) → (U ϕ -1 (1) , . . . U ϕ -1 (n) )
covering some inert arrow ϕ : ⟨m⟩ → ⟨n⟩ where every i ∈ ⟨n⟩ • has exactly one preimage ϕ -1 (i).
• An inert morphism in StrTop × is a morphism of functors ᾱ between f : P(⟨m⟩ • ) op → Top and g : P(⟨n⟩ • ) op → Top, covering some α : ⟨m⟩ → ⟨n⟩, and such that, for any S ⊂ ⟨n⟩, the map induced by ᾱ from f (α -1 S) → g(S) is an equivalence in StrTop.
By definition, CGr
× k ((U 1 , . . . , U m )) is the functor f assigning T ⊂ ⟨m⟩ • → j∈T (FactGr k ) U j ,
and analogously
CGr × k ((U ϕ -1 (1) , . . . , U ϕ -1 (m) )) is the functor g assigning S ⊂ ⟨n⟩ • → i∈S (FactGr k ) U ϕ -1 (i) .
But now, if α = ϕ and T = ϕ -1 (S), we have the desired equivalence.
2.B.3 Proof of Proposition 2.2.5
Proof. It is sufficient to prove that each FactGr k is conically stratified and locally of singular shape (although as remarked in Remark 2.A.6 this condition is not strictly necessary). Indeed, each (FactGr k ) U , being an open set of FactGr k with the induced stratification, will be conically stratified and locally of singular shape as well. Moreover, it suffices to show that strtop(Gr Ran ) is conically stratified and locally of singular shape. Indeed, this will imply the same property for the k-fold-product of copies of strtop(Gr Ran ) over Ran(M ), and
FactGr k is a principal bundle over this space, with unstratified fiber. This consideration implies the property for FactGr k .
Let us then prove the property for strtop(Gr Ran ). First of all, the Ran Grassmannian is locally of singular shape because of the following argument.
Proposition 2.B.1. Let U : ∆ op → Open(X) be a hypercovering. Then Sing(X) ≃ colim n∈∆ op Sing(U n ).
Proof. We use [Lur17, Theorem A.3.1]. Condition (*) in loc. cit. is satisfied for the following modification of U. Since U is a hypercovering, one can choose for any [n] a covering (U i n ) i of U n , functorially in n. We can thus define a category C → ∆ op as the unstraightening of the functor
∆ op → Set [n] → {(U i n ) i }.
Then there is a functor Ũ : C → Open(X), ([n], U i n ) → U . This functor satisfies (*) in [Lur17, Theorem A.3.1], and therefore colim ([n],U )∈C Sing(U ) ≃ Sing(X). Now note that U is the left Kan extension of
Ũ along C → ∆ op . Therefore, Sing(X) ≃ colim C Sing(U ) ≃ colim ∆ op Sing(U n ).
This allows us to apply the proof of [Lur17, Theorem A.4.14] to any hypercovering of Ran(M ). Now, the space Ran(M ) is of singular shape because it is contractible and homotopy equivalences are shape equivalences. Therefore, so are the elements of the usual prebase of its topology, namely open sets of the form ∏ i Ran(D i ). One can construct a hypercovering of Ran(M ) by means of such open subsets, and therefore we can conclude by applying the modified version of [Lur17, Theorem A.4.14] that we have just proved. It remains to prove that the Ran Grassmannian is conically stratified. Indeed, it is Whitney stratified. The proof of this property was suggested to us by David Nadler [Nad], and relies only on essential properties of the Ran Grassmannian. Consider two strata X and Y of Gr Ran . We want to prove that they satisfy Whitney's conditions A and B; that is:
• for any sequence (x i ) ⊂ X converging to y, such that T x i X tends in the Grassmannian bundle to a subspace τ y of R m , we have T y Y ⊂ τ y (Whitney's Condition A for X, Y, (x i ), y);
• when sequences (x i ) ⊂ X and (y i ) ⊂ Y tend to y, the secant lines x i y i tend to a line v, and T x i X tends to some τ y as above, then v ⊂ τ y (Whitney's Condition B for X, Y, (x i ), (y i ), y).
The only case of interest is when X̄ ∩ Y is nonempty. Observe that, when the limit point y ∈ Y appearing in Whitney's conditions is fixed, conditions A and B are local in Y , i.e. we can restrict our stratum Y to an (étale) neighbourhood U of the projection of y in Ran(M ). Also, both Y and the y i in Condition B live over some common stratum Ran n (M ) of Ran(M ). Using the factorization property, which splits components and tangent spaces, we can suppose that n = 1. Therefore, X projects onto the "cardinality 1" component of Ran(M ), that is M itself. By the locality of conditions A and B explained before, we can suppose that y and the (y i ) involved in Whitney's conditions live over A 1 . There, the total space is Gr A 1 and this is simply the product Gr × A 1 because on the affine line the identification is canonical. From this translational invariance it follows that we can suppose our stratum Y concentrated over a fixed point 0 ∈ A 1 , that is: both y and the y i can be canonically (thus simultaneously) seen inside Gr 0 ⊂ Gr A 1 . Now, by [Kal05, Theorem 2] we know that the set of points y ∈ X̄ ∩ Y at which Condition A or B fails for some choice of sequences (x i ) → y, (y i ) → y does not coincide with the whole X̄ ∩ Y ; in other words, there exists at least one point y ∈ X̄ ∩ Y at which the pair (X, Y ) is Whitney-regular. Let π : Gr Ran → Ran(M ) be the natural map. Note that X and
Y are acted upon by RanArc × Ran(M ) π(X) and RanArc × Ran(M ) π(Y ) ≃ G O respectively (recall that Y is a subset of Gr = Gr 0 ⊂ Gr A 1 )
, and this action is transitive on the fibers over any point of Ran(M ). Now, the actions of RanArc and G O take Whitney-regular points with respect to X to Whitney-regular points with respect to X, since they preserve all strata. Therefore, if y is a "regular" point as above, the whole Y is made of regular points, and we conclude.
Background
In this section we briefly review the definition of Whitney stratified space and we recall the basic properties of conically stratified and conically smooth spaces. By doing this, we will also introduce the necessary notations to state our main result, conjectured in [AFT17, Conjecture 1.5.3].
Smooth stratifications of subsets of manifolds (Thom, Mather)
We take the following definitions from [Mat70], with minimal changes made in order to connect the classical terminology to the one used in [AFT17].
Definition 3.1.1. Let M be a smooth manifold. A smooth stratification of a subset Z ⊂ M is a partition of Z into smooth submanifolds of M . More generally, if M is a C µ -manifold, then a C µ stratification of a subset Z of M is a partition of Z into C µ submanifolds of M . Remark 3.1.2.
In particular, all strata of a smoothly stratified space Z ⊂ M are locally closed subspaces of Z.
Definition 3.1.3 (Whitney's Condition B in R n ).
Let X, Y be smooth submanifolds of R n , and let y ∈ Y be a point. The pair (X, Y ) is said to satisfy Whitney's Condition B at y if the following holds. Let (x i ) ⊂ X be a sequence converging to y, and (y i ) ⊂ Y be another sequence converging to y. Suppose that T x i X converges to some vector space τ in the r-Grassmannian of R n and that the lines x i y i converge to some line l in the 1-Grassmannian (projective space) of R n . Then l ⊂ τ . • (condition of the frontier) if Y is a stratum of S , consider its closure Ȳ in M . Then we require that ( Ȳ \ Y ) ∩ Z is a union of strata, or equivalently that S ∈ S , S ∩ Ȳ ̸ = ∅ ⇒ S ⊂ Ȳ ;
• (Whitney's condition B) Any pair of strata of S satisfies Whitney's condition B when seen as smooth submanifolds of M .
Given two strata X and Y of a Whitney stratification, we say that X < Y if X ⊂ Ȳ . This defines a partial order on S .
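To fix ideas with a minimal example (added for illustration): stratify R 2 by Y = {0} and X = R 2 \ {0}. Since X is open, T x i X = R 2 for every i, so any limit τ equals R 2 and every limit of secant lines is automatically contained in τ ; hence the pair (X, Y ) satisfies Condition B at the origin. Condition B is automatic whenever X is an open submanifold, and becomes a genuine constraint only when dim X < n.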
Conical and conically smooth stratifications (Lurie, Ayala-Francis-Tanaka)
Definition 3.1.6. Let P be a partially ordered set. The Alexandrov topology on P is defined as follows. A subset
U ⊂ P is open if it is closed upwards: if p ≤ q and p ∈ U then q ∈ U .
With this definition, closed subsets are downward closed subsets and locally closed subsets are "convex" subsets: p ≤ r ≤ q, p, q ∈ U ⇒ r ∈ U . Definition 3.1.7 ([Lur17, Definition A.5.1]). A stratification on a topological space X is a continuous map s : X → P where P is a poset endowed with the Alexandrov topology. The fibers of the points p ∈ P are subspaces of X and are called the strata. We denote the fiber at p by X p and by S the collection of these strata.
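As a concrete illustration (added example): for the poset P = {a < b}, the Alexandrov-open subsets are ∅, {b} and {a, b}, while the closed ones are ∅, {a} and {a, b}. Accordingly, the map s : R → P with s(0) = a and s(x) = b for x ̸ = 0 is continuous, and defines a stratification of R with strata X a = {0} (closed) and X b = R \ {0} (open).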
In this definition we do not assume any smooth structure, neither on the ambient space nor on the strata. Note that, by continuity of s, the strata are locally closed subsets of X.
Note also that the condition of the frontier in Definition 3.1.5 implies that any Whitney stratified space is stratified in the sense of Lurie's definition: indeed, one obtains a map towards the poset S defined by S < T ⇐⇒ S ⊂ T , which is easily seen to be continuous by the condition of the frontier. Definition 3.1.8. A stratified map between stratified spaces (X, P, s) and (Y, Q, t) is the datum of a continuous map f : X → Y and an order-preserving map ϕ : P → Q making the diagram
X Y P Q f s t ϕ commute.
Definition 3.1.9. Let (Z, P, s) be a stratified topological space. We define C(Z) (as a set) as
(Z × [0, 1)) / ((z, 0) ∼ (z ′ , 0)).
Its topology and stratified structure are defined in [Lur17, Definition A.5.3]. When Z is compact, then the topology is the quotient topology. Note that the stratification of C(Z) is over P ◁ , the poset obtained by adding a new initial element to P : the stratum over this new point is the vertex of the cone, and the other strata are of the form X × (0, 1), where X is a stratum of Z. A stratified space is conically stratified if it admits a covering by conical charts.
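For example (illustration added here): if Z = {a, b} is the two-point space stratified over the discrete poset P = {p, q}, then C(Z) consists of two copies of [0, 1) glued at their origins, hence is homeomorphic to an open interval; its stratification over P ◁ has the cone point as the stratum over the new minimal element, and the two open half-intervals {a} × (0, 1) and {b} × (0, 1) as the strata over p and q.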
More precisely, the conically stratified spaces we are interested in are the so-called C 0 -stratified spaces defined in [AFT17, Definition 2.1.15]. Here we recall the two important properties of a C 0 -stratified space (X, s : X → P ):
• every stratum X p is a topological manifold;
• there is a basis of the topology of X formed by conical charts
R i × C(Z) → X
where Z is a compact C 0 -stratified space over the relevant P >p . Note that Z will have depth strictly less than X; this observation will be useful in order to make many inductive arguments work.
Hence the definition of [AFT17] may be interpreted as a possible analog of the notion of topological manifold in the stratified setting: charts are continuous maps which establish a stratified homeomorphism between a small open set of the stratified space and some "basic" stratified set.
Following this point of view, one may raise the question of finding an analog of "smooth manifold" (or, more precisely, "smoothly differentiable structure") in the stratified setting. We refer to [AFT17, Definition 3.2.21] for the definition of a conically smooth structure (and to the whole Section 3 there for a complete understanding of the notion), which is a very satisfactory answer to this question. A C 0 -stratified space together with a conically smooth structure is called a conically smooth stratified space.
The definition of conically smooth structure is rather elaborate. As in the case of C 0 -stratified spaces, here we just give a couple of important and enlightening properties of these conically smooth stratified spaces:
• any conically smooth stratified space is a C 0 -stratified space;
• all strata have an induced structure of smooth manifold, like in the case of Whitney stratifications;
• there is a notion of atlas, in the sense of a system of charts whose domains are the so-called basics, i.e. stratified spaces of the form R i × C(Z) where Z is equipped with a conically smooth atlas: indeed, to make this definition rigorous, the authors of [AFT17] employ an inductive argument on the depth, where the case of depth equal to zero corresponds to the usual notion of an atlas for a smooth manifold, and to pass to a successive inductive step they observe that, whenever there is an open stratified embedding R i × C(Z) → M , then depthZ < depthM .
This system admits a notion of "smooth" change of charts, in the sense that charts centered at the same point admit a subchart which maps into both of them in a "rigid" way. We recommend looking at the proof of Theorem 3.2.7 for a more precise explanation of this property.
• the definition of conically smooth space is intrinsic, in the sense that it does not depend on a given embedding of the topological space into some smooth manifold, in contrast to the case of Whitney stratifications (see Definition 3.1.1 and Definition 3.1.5);
• in [AFT17] the authors also introduced a notion of conically smooth maps, which differs substantially from the "naive" requirement of being stratified and smooth along each stratum that one has in the case of Whitney stratifications, and hence define a category Strat of conically smooth stratified spaces. In this setting, they are able to build up a very elegant theory and prove many desirable results such as a functorial resolution of singularities to smooth manifolds with corners and the existence of tubular neighbourhoods of conically smooth submanifolds. These results allow to equip Strat with a Kan-enrichment (and hence, a structure of ∞-category); also, the hom-Kan complex of conically smooth maps between two conically smooth spaces has the "correct" homotopy type (we refer to the introduction to [AFT17] for a more detailed and precise discussion on this topic), allowing to define a notion of tangential structure naturally extending the one of a smooth manifold and to give a very simple description of the exit-path ∞-category of a conically smooth stratified space.
Up to now, the theory of conically smooth spaces has perhaps been in need of a good quantity of explicit examples, especially of a topological nature. The following conjecture goes in the direction of providing a very broad class of examples coming from differential geometry and topology. Conjecture 3.1.12 ([AFT17, Conjecture 1.5.3]). Let (M, S ) be a Whitney stratified space. Then it admits a conically smooth structure in the sense of [AFT17].
The rest of the chapter is devoted to the proof of this conjecture (Theorem 3.2.7).
Whitney stratifications are conically smooth
Whitney stratifications are conical
We will need the following lemma, whose proof (to our knowledge) has never been written down. Lemma 3.2.1. Let (M, S ) be a Whitney stratified space, T a smooth unstratified manifold, and let f : M → T be a proper map of topological spaces which is a smooth submersion on the strata. Then for every p ∈ T the fiber of f at p has a natural Whitney stratification inherited from M .
Proof. First of all, by definition of smoothly stratified space we may suppose that M ⊂ S for some manifold S of dimension n. Again by definition the problem is local, and we may then suppose M = R n .
We want to prove Whitney's condition B for any pair of strata of the form X = X ′ ∩ f -1 (p) and
Y = Y ′ ∩f -1 (p)
, where X ′ , Y ′ are strata of M and p ∈ T . To this end, we reformulate the problem in the following way: consider the product M ×T with its structure maps π 1 : M ×T → M, π 2 : M ×T → T , and its naturally induced Whitney stratification. Consider also the following two stratified subspaces of M × T : the graph Γ f and the subspace π -1 2 (p). Note that we can see Γ f as a homeomorphic copy of M inside the product (diffeomorphically on the strata). Having said that, the intersection
Γ f ∩ π -1
2 (p) is exactly the fiber f -1 (p). Consider now strata X, Y in f -1 (p) as above, seen as strata of Γ f . Consider sequences x i ⊂ X, y i ⊂ Y both converging to some y ∈ Y . Let l i be the line between x i and y i , and suppose that l i → l, T x i X → τ . By compactness of the Grassmannians Gr(n, 1) and Gr(n, dim T x i X) (which is independent of i), there exists a subsequence (x i j ) such that T x i j Γ f | X ′ converge to some vector space V ⊃ τ . Since the stratification on M is Whitney, we obtain that l ⊂ V . On the other hand, applying the same argument to π -1 2 (p) (which is again stratified diffeomorphic to M via the map π 1 ), we obtain that, up to extracting another subsequence,
T x i j (π -1 2 (p) ∩ π -1 1 (X ′ )) converges to some W ⊃ τ . Again, since the stratification on M is Whitney, we obtain that l ⊂ W . Note that the lines l i and l only depend on the points x i , y i and on the embedding of M into some real vector space, and not on the subspace we are working with. Now we would like to show that τ = V ∩ W , and this will follow from a dimension argument that uses the fact that f | X ′ is a smooth submersion onto T .
Note that dim τ = dim T x i X for every i. Moreover, by the submersion hypothesis, this equals
dim X ′ -dim T . Also, dim V = dim T x i j Γ f | X ′ = dim X ′ and dim W = dim T x i j (π -1 2 (p) ∩ π -1 1 (X ′ )) = dim X ′ . To compute dim V ∩ W , it suffices to compute dim(V + W ), which by convergence coincides with dim(T x i j Γ f | X ′ + T x i j (π -1 2 (p) ∩ π -1 1 (X ′ ))). Let V j = T x i j Γ f | X ′ , W j = T x i j (π -1 2 (p) ∩ π -1 1 (X ′ ))).
We have a map of vector spaces
V j ⊕ W j → T x i j X ′ ⊕ T f (x i j ) T sending (v, w) → (w -v, df x i j v).
This map is surjective (since df is) and is zero on the subspace
{(v, v) | v ∈ V j ∩ W j }; hence it induces a surjective map V j + W j → T x i j X ′ ⊕ T f (x i j ) T. It follows that dim(T x i j Γ f | X ′ + T x i j (π -1 2 (p) ∩ π 1 (X ′ ))) ≥ dim X ′ + dim T and therefore dim V ∩ W = dim V + dim W -dim(V + W ) ≤ dim X ′ -dim T = dim τ. Since τ ⊂ V ∩ W
U : U → T is a submersion. Then f | A : A → Y is a locally trivial fibration.
We recommend reading Mather's two papers [Mat70] and [Mat72] to understand the behaviour of Whitney stratified spaces, especially the notion of tubular neighbourhood around a stratum, which is the crucial one for the proof of our main result. We refer to [Mat70, Section 6] for a treatment of tubular neighbourhoods. Here we just recall the definition: Definition 3.2.4. Let S be a manifold and X ⊂ S be a submanifold. A tubular neighborhood T of X in S is a triple (E, ε, ϕ), where π : E → X is a vector bundle with an inner product ⟨, ⟩, ε is a positive smooth function on X, and ϕ is a diffeomorphism of B ε = {e ∈ E | ⟨e, e⟩ < ε(π(e))} onto an open subset of S, which commutes with the zero section ζ of E:
B ε X S. ϕ ζ
From [Mat72, Corollary 6.4] we obtain that any stratum W of a Whitney stratified space (M, S ) has a tubular neighbourhood, which we denote by (T W , ε W ); the relationship with the previous notation is the following: T W is ϕ(B ε ) ∩ M (recall that a priori ϕ(B ε ) ⊂ S, the ambient manifold). We also denote by ρ the tubular (or distance) function
T W → R ≥0 v → ⟨v, v⟩
with the notation as in Definition 3.2.4. Note that ρ(v) < ε(π(v)).
A final important feature of the tubular neighbourhoods of strata constructed in Mather's proof is that they satisfy the so-called "control conditions" or "commutation relations". Namely, consider two strata X < Y of a Whitney stratified space M . Then, if T X and T Y are the tubular neighbourhoods relative to X and Y as constructed by Mather, one has that
π X π Y = π X , ρ X π Y = ρ X .
We explain the situation with an example. Example 3.2.5. Let M be the real plane R 2 and S the stratification given by
X = {(0, 0)} Y = {x = 0} \ {(0, 0)} Z = M \ {x = 0}.
We take R 2 itself as the ambient manifold. Then Mather's construction of the tubular neighbourhoods associated to the strata gives a result like in Fig. 3.1. Here the circle is T X , and the circular segment is a portion of T Y around a point of Y . We can see here that T Y is not a "rectangle" around the vertical line, as one might imagine at first, because the control conditions impose that the distance of a point in T Y from the origin of the plane is the same as the distance of its "projection" to Y from the origin.
Keeping this example in mind (together with its higher-dimensional variants) for the rest of the treatment may be a great help for visualizing the arguments used in our proofs. Now we closely review the proof of [Mat72, Theorem 8.3], which is essential for the next section. This review is also useful to fix some notation. Note that we will use Euclidean disks D n (and not Euclidean spaces R n ) as domains of charts for smooth manifolds, because this will turn out to be useful in Section 3.2.2 in order to define some "shrinking" maps in an explicit way.
Choose a positive smooth function ε′ on W such that ε′ < ε_W. Let N be the set {x ∈ T_W | ρ_W(x) ≤ ε′(π_W(x))}.
Let also
A = {x ∈ T W | ρ W (x) = ε ′ (π W (x))} and f = π W | A : A → W .
Note that f is a proper stratified submersion, since π W is a proper stratified submersion and for any stratum S of M the differential of π W | S vanishes on the normal to A ∩ S. Hence by Lemma 3.2.1 the restriction of the stratification of M to any fiber of f is again Whitney. Consider the mapping
$$g : N \setminus W \to W \times (0, 1], \qquad g(x) = \Bigl(\pi_W(x),\ \frac{\rho_W(x)}{\varepsilon'(\pi_W(x))}\Bigr).$$
The space N \ W inherits from M a Whitney stratification (see Lemma 3.2.2) and, by [Mat70, Lemma 7.3 and above], the map g is a proper stratified submersion. Thus, since A = g -1 (W × {1}), by Lemma 3.2.3 one gets a stratified2 homeomorphism h fitting in the commuting triangle
$$h : N \setminus W \xrightarrow{\ \simeq\ } A \times (0,1], \qquad (f \times \mathrm{id}) \circ h = g.$$
Furthermore, since $W = \rho_W^{-1}(0) \subseteq N$, $h$ extends to a homeomorphism of pairs
$$(N, W) \xrightarrow{\ (h,\ \mathrm{id})\ } (M(f), W),$$
where M (f ) is the mapping cylinder of f (we recall that f :
A → W is the projection (π W | A )). If D i ⊂ R i
is the unit open disk, for any euclidean chart j : D i → W , the pullback of f along j becomes a projection D i × Z → D i . Note that Z is compact by properness of f , and has an induced Whitney stratification being a fiber of f , as we have noticed above. Finally,
$$M(f) \simeq M\bigl(D^i \times Z \xrightarrow{\ \mathrm{pr}_1\ } D^i\bigr) \simeq C(Z) \times D^i.$$
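For orientation, a minimal instance of this construction (a standard sanity check, not part of the argument above, with boundary issues glossed over): take $M = \mathbb{R}^n$ stratified by $W = \{0\}$ and its complement. Then $N$ is a closed ball around the origin, $A$ is its boundary sphere $S^{n-1}$, and $f : S^{n-1} \to \{0\}$ is the constant map, so that
$$M(f) \simeq C(S^{n-1}),$$
and the resulting chart is $D^0 \times C(S^{n-1})$, i.e. the obvious conical chart of $\mathbb{R}^n$ around the origin, with $Z = S^{n-1}$ and $i = 0$.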
From now on the conical charts obtained through the procedure explained in the previous theorem will be referred to as the Thom-Mather charts associated to the Whitney stratified space M. The rest of this chapter will be devoted to proving that these charts constitute a conically smooth structure (atlas) for M, as conjectured in [AFT17, Conjecture 1.5.3 (3)].
Whitney stratifications are conically smooth
We take all the definitions and notations from [AFT17], especially from Section 3. In particular, we recall that the definition of conically smooth structure is given in [AFT17, Definition 3.2.21].
Let now (M, S) be a Whitney stratified space. Given a chosen system of tubular neighbourhoods around the strata along with their distance and projection functions {ρ_X, π_X}, we have an induced collection of Thom-Mather charts associated to this choice. Call this collection A. We are now going to prove that this is a conically smooth atlas in the sense of [AFT17, Definition 3.2.10]. We will then prove (Remark 3.2.9) that different choices of systems of tubular neighbourhoods induce equivalent conically smooth atlases, again in the sense of [AFT17, Definition 3.2.10].

Theorem 3.2.7 (Main Theorem). If (M, S) is a Whitney stratified space, then the Thom-Mather charts exhibit a conically smooth structure on (M, S).

Proof. The proof will proceed by induction on the depth of (M, S) (see Note 3.1.10). The case of depth 0 is obvious, since any Whitney stratified space over a discrete poset is just a disjoint union of strata which are smooth manifolds. Thus, we may assume that for any Whitney stratified set (M′, S′) with depth(M′, S′) < depth(M, S), the Thom-Mather charts induce a conically smooth structure on
(M ′ , S ′ ).
Now we need to show that the Thom-Mather charts induce an atlas of (M, S) in the sense of [AFT17, Definition 3.2.10]. We know that the charts cover the space M. By Theorem 3.2.6, a Thom-Mather chart is in particular an open embedding of the form D^i × C(Z) → M where Z has a Whitney stratification S′ and depth(Z, S′) < depth(M, S); thus, by the inductive hypothesis, Z is conically smooth and this implies that the Thom-Mather chart is a basic in the sense of [AFT17, Definition 3.2.4]. Hence it remains to prove that the "atlas" axiom is satisfied: that is, if m ∈ M is a point, u : R^i × C(Z) → M and v : R^j × C(W) → M are Thom-Mather charts with images U and V, such that m ∈ U ∩ V, then there is a commuting diagram
$$\begin{array}{ccc} D^k \times C(T) & \xrightarrow{\ f\ } & D^i \times C(Z) \\ \downarrow{\scriptstyle g} & & \downarrow{\scriptstyle u} \\ D^j \times C(W) & \xrightarrow{\ v\ } & M \end{array} \qquad (3.2.1)$$
such that m ∈ Im(uf) = Im(vg) and that f and g are maps of basics in the sense of [AFT17, Definition 3.2.4].
It is sufficient to consider strata X, Y such that X < Y (that is, X is in the closure of Y) and m ∈ Y. In particular, X will have dimension strictly less than Y.³ In this setting, we may reduce to the case when u is a Thom-Mather chart for X which also contains m ∈ Y, and v is a Thom-Mather chart for a neighbourhood of m in Y, such that v^{-1}(m) = (0, *) (* is the cone point). Consider v^{-1}(U ∩ Y) as an open subset of D^j × *. This open subset contains some closed ball of radius δ and dimension j centered at 0; denote it by B_δ × *. Also, let ρ_Y : V → R_{>0} be the "distance from Y" function associated to the Thom-Mather chart v, and let γ be a positive continuous function on Y (defined at least locally around m) such that there is an inclusion
$$\{n \in V \mid \rho_Y(n) < \gamma(\pi_Y(n)),\ \pi_Y(n) \in U \cap Y\} \subseteq V \cap U.$$
Let γ be defined as follows. Let $\varepsilon_Y : V \cap Y \to \mathbb{R}_{>0}$ be the radius function associated to the Thom-Mather chart v. Note that $\varepsilon_Y$ is equal to the function
y → sup{ρ Y (n) | n ∈ V, π Y (n) = y}
("maximum radius function" for v). Then it makes sense to define
γ = min v(B δ × * ) (γ/ε Y ) > 0.
Now let us consider the self-embedding
$$D^j \xrightarrow{\ i\ } B_\delta \subset D^j$$
where $i$ is of the form $(t_1, \ldots, t_j) \mapsto \bigl(\tfrac{t_1}{a_1}, \ldots, \tfrac{t_j}{a_j}\bigr)$ (in such a way that $m \in \mathrm{Im}(v \circ i)$). We call
$$\psi : D^j \times C(W) \xrightarrow{\ i \times (\cdot\, \gamma)\ } D^j \times C(W).$$
This construction is a way to "give conical parameters" for a sufficiently small open subspace of v^{-1}(U ∩ V): the multiplication by γ is the rescaling of the cone coordinate, while i is the rescaling of the "euclidean" coordinate (i.e. the one relative to the D^j component). By construction, m ∈ Im(v ∘ ψ) ⊆ U ∩ V. In particular, the image is contained in U \ X.
Lemma 3.2.8. The function ψ is a map of basics.
Proof. We prove that:
• ψ is conically smooth along D j . Indeed, the map on the bottom row of the diagram in [AFT17, Definition 3.1.4] takes the form
$$(0, 1) \times \mathbb{R}^j \times D^j \times C(W) \to (0, 1) \times \mathbb{R}^j \times D^j \times C(W)$$
$$\bigl(t, (v_1, \ldots, v_j), (u_1, \ldots, u_j), [s, z]\bigr) \mapsto \Bigl(t, \bigl(\tfrac{v_1}{a_1}, \ldots, \tfrac{v_j}{a_j}\bigr), \bigl(\tfrac{u_1}{a_1}, \ldots, \tfrac{u_j}{a_j}\bigr), \bigl[\tfrac{s}{\gamma}, z\bigr]\Bigr). \qquad (3.2.2)$$
As one can see from the formula, this indeed extends to $t = 0$, and the extension is called $\widetilde{D}\psi$; the differential $D\psi$ of $\psi$ is the restriction of $\widetilde{D}\psi$ to $t = 0$. The same argument works for higher derivatives.
• Dψ is injective on vectors. This is an immediate verification using the formula (3.2.2).
• We have that $\mathcal{A}_{\psi^{-1}((D^j \times C(W)) \setminus D^j)} = \psi^* \mathcal{A}_{(D^j \times C(W)) \setminus D^j}$. This is proven by looking at the definition of ψ: charts are only rescaled along the cone coordinate, or rescaled and translated in the unstratified part.
Now consider the open subset $D^i \times Z \times (0, 1) \subset D^i \times C(Z)$. By [AFT17, Lemma 3.2.9], basics form a basis for basics, and therefore we may find a map of basics $\phi : D^{i'} \times C(Z') \to D^i \times C(Z)$ whose image is contained in $D^i \times Z \times (0, 1)$. Therefore we have a diagram
$$D^{i'} \times C(Z') \xrightarrow{\ u'\ } U \setminus X \xleftarrow{\ v \circ \psi\ } D^j \times C(W),$$
where $u'$ denotes the composite $u \circ \phi$ (whose image lies in $U \setminus X$).
But now, depth(U \ X) < depth(U ) ≤ depth(M ), and U \ X with its natural stratification as an open subset of M is Whitney by Lemma 3.2.2 (or also by definition of Thom-Mather chart). Therefore, by induction we may find maps of basics f ′ , g ′ sitting in the diagram
$$\begin{array}{ccc} D^k \times C(T) & \xrightarrow{\ f'\ } & D^{i'} \times C(Z') \\ \downarrow{\scriptstyle g'} & & \downarrow{\scriptstyle u'} \\ D^j \times C(W) & \xrightarrow{\ v \circ \psi\ } & U \setminus X \end{array}$$
Let us define f as the composition
$$D^k \times C(T) \xrightarrow{\ f'\ } D^{i'} \times C(Z') \xrightarrow{\ \phi\ } D^i \times C(Z)$$
and g as the composition
$$D^k \times C(T) \xrightarrow{\ g'\ } D^j \times C(W) \xrightarrow{\ \psi\ } D^j \times C(W).$$
Now $u \circ f = u' \circ f' = v \circ \psi \circ g' = v \circ g$.
Since ϕ, ψ, f ′ , g ′ are maps of basics, then also f and g are, and this completes the proof.
Remark 3.2.9. By [Mat70, Proposition 6.1], different choices of Thom-Mather charts induce equivalent conically smooth atlases in the sense of [AFT17, Definition 3.2.10]. Indeed, the construction of a Thom-Mather atlas A depends on the choice of a tubular neighbourhood for each stratum X, along with its distance and projection functions ρ_X, π_X. Thus, let A, A′ be two conically smooth atlases induced by different choices of a system of tubular neighbourhoods as above. We want to prove that A ∪ A′ is again an atlas. The nontrivial part of the verification is the following. Let us fix two strata X < Y, and a point y ∈ Y; take ϕ_X a Thom-Mather chart associated to the A-tubular neighbourhood T_X of X, and ψ′_Y a Thom-Mather chart associated to the A′-tubular neighbourhood T′_Y of Y. We want to verify the "atlas condition" (3.2.1); let T_Y be the A-tubular neighbourhood of Y. Now by [Mat70, Proposition 6.1] there is an isotopy between T′_Y and T_Y fixing Y. By pulling back ψ′_Y to T_Y along this isotopy, we obtain an A-Thom-Mather chart ψ_Y around y; we are now left with two A-charts ϕ_X and ψ_Y and we finally can apply the fact that A is an atlas.

Let now F ∈ QCoh(G). Let I_G be the inertia group stack of G over X. Thus F is endowed with a canonical right action by I_G, called the inertial action. For an explicit definition, see [BS19, Section 3].

Proposition 4.1.2. Let G be a G_m-gerbe over a qcqs scheme X. The pullback functor QCoh(X) → QCoh(G) establishes an equivalence between QCoh(X) and the full subcategory of QCoh(G) spanned by those sheaves on which the inertial action is trivial.

Note that the banding G_m × G → I_G induces a right action ρ of G_m on any sheaf F ∈ QCoh(G), by composing the banding with the inertial action. On the other hand, for any character χ : G_m → G_m, G_m acts on F on the left by scalar multiplication precomposed with χ. Let us call this latter action σ_χ.

Remark 4.1.3. Let 1 be the trivial character of G_m. Proposition 4.1.2 can be restated as: the pullback functor QCoh(X) → QCoh(G) induces an equivalence between QCoh(X) and the full subcategory of QCoh(G) where ρ = σ_1.

Let G be a G_m-gerbe over a qcqs scheme X, and χ a character of G_m. We define the category of χ-homogeneous sheaves over G, informally, as the full subcategory QCoh_χ(G) of QCoh(G) spanned by those sheaves on which ρ = σ_χ. Remark 4.1.4. The above definition is a little imprecise, in that it does not specify the equivalences ρ(γ, F) ≃ σ_χ(γ, F), γ ∈ G_m, F ∈ QCoh(G). A formal definition is given in [BP21, Definition 5.14]. There, the Authors define an idempotent functor (-)_χ : QCoh(G) → QCoh(G), taking the "χ-homogeneous component". This functor is t-exact ([BP21, Proof of Lemma 5.17]), and comes with canonical maps i_{χ,F} : F_χ → F. The category QCoh_χ(G) is defined as the full subcategory of QCoh(G) spanned by those F such that i_χ(F) is an equivalence. To fix the notation, let us make these constructions explicit. If X is a scheme and G a G_m-gerbe, we have the following maps
$$p : G \times BG_m \to BG_m, \qquad q : G \times BG_m \to G, \qquad u : G \to G \times BG_m, \qquad \mathrm{act}_\alpha : G \times BG_m \to G, \qquad \pi : G \to X,$$
where p, q are the projections, u is the atlas of the trivial gerbe, act_α is the morphism induced by the banding α of G, and π is the structure morphism. See [BP21, Section 5] for a more specific description of these maps. Given F ∈ QCoh(G) and χ a character of G_m, the Authors define
$$(F)_\chi := u^*\bigl(q^* q_*(\mathrm{act}^*_\alpha(F) \otimes \mathcal{L}^\vee_\chi) \otimes \mathcal{L}_\chi\bigr)$$
and the morphism $i_\chi(F) : (F)_\chi \to F$ to be the pullback through $u$ of the counit of the adjunction $q^* \dashv q_*$. We will denote by $L_\chi$ the line bundle over $BG_m$ associated to the character $\chi$, while $\mathcal{L}_\chi$ is the pullback of $L_\chi$ to the trivial gerbe $G \times BG_m$, i.e. $\mathcal{L}_\chi := p^* L_\chi$.

Definition 4.1.5. In the case when χ is the identity character id : G_m → G_m, QCoh_χ(G) is usually called the category of G-twisted sheaves on X.
[BP21] also prove that there is a decomposition
$$\mathrm{QCoh}(G) \simeq \bigoplus_{\chi \in \mathrm{Hom}(G_m, G_m)} \mathrm{QCoh}_\chi(G)$$
building on the fact that $F \simeq \bigoplus_{\chi \in \mathrm{Hom}(G_m, G_m)} F_\chi$. The same result was previously obtained by Lieblich in the setting of abelian categories and by Bergh-Schnürer in the setting of triangulated categories. The relationship between categories of twisted sheaves and Azumaya algebras has been intensively studied, see [De 04], [De ], [Lie04], [HR17], [BS19], [BP21]. Given a G_m-gerbe G over X, its category of twisted sheaves QCoh_id(G) admits a compact generator, whose algebra of endomorphisms is a derived Azumaya algebra A_G. In contrast, if we restrict ourselves to the setting of abelian categories and consider abelian categories of twisted sheaves, this reconstruction mechanism does not work anymore. This is one of the reasons for the success of Toën's derived approach.
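As a sanity check, it may help to spell out the simplest case (a standard special case, included for orientation; it is consistent with the description of twisted sheaves on the trivial gerbe used later in the proof of Proposition 4.2.11):
$$\mathrm{QCoh}(X \times BG_m) \simeq \bigoplus_{n \in \mathbb{Z}} \mathrm{QCoh}(X), \qquad F \simeq \bigoplus_{n \in \mathbb{Z}} F_n \otimes L_{\mathrm{id}}^{\otimes n},$$
where $L_{\mathrm{id}}$ is the weight-one line bundle pulled back from $BG_m$ and $F_n \in \mathrm{QCoh}(X)$ is the weight-$n$ component. For the character $t \mapsto t^n$, the corresponding summand $\mathrm{QCoh}_\chi(X \times BG_m)$ is the copy of $\mathrm{QCoh}(X)$ indexed by $n$; in particular $\mathrm{QCoh}_{\mathrm{id}}(X \times BG_m) \simeq \mathrm{QCoh}(X)$ via $F \mapsto \pi^* F \otimes L_{\mathrm{id}}$.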
The construction of the category of twisted sheaves gives rise to a functor Ger Gm (X) → LinCat St (X)
G → QCoh id (G)
taking values in the 2-groupoid Br†(X) of compactly generated invertible categories (see Theorem 4.2.6). To be precise, this was already known; see for instance [HR17, Example 9.3]. This functor corresponds to a section of the map $\mathrm{Br}^\dagger(X) \xrightarrow{\ \sim\ } \mathrm{Map}(X, B^2 G_m \times B\mathbb{Z}) \to \mathrm{Map}(X, B^2 G_m)$. This section is not fully faithful nor essentially surjective. However, one can observe that QCoh_id(G) is not just an ∞-category, but also carries a t-structure which is compatible with filtered colimits, since as recalled in Remark 4.1.4 the functor (-)_id is t-exact. This additional datum allows one to "correct" the fact that QCoh_id(-) is not an equivalence. Indeed, we can change the target from LinCat_St(X) to the ∞-category of stable O_X-linear presentable categories with a t-structure compatible with filtered colimits. Under the association (C, C_{≥0}, C_{≤0}) ↦ C_{≥0}, this ∞-category is equivalent to the category LinCat_PSt(X) of so-called Grothendieck prestable O_X-linear ∞-categories (see Definition 4.1.11). Therefore, we have a functor
$$\Psi : \mathrm{Ger}_{G_m}(X) \to \mathrm{LinCat}_{\mathrm{PSt}}(X), \qquad G \mapsto \mathrm{QCoh}_{\mathrm{id}}(G)_{\geq 0}. \qquad (4.1.1)$$
Reminders on stable and prestable linear categories
Definition 4.1.6. Let C be an ∞-category. We will say that C is prestable if the following conditions are satisfied:
• The ∞-category C is pointed and admits finite colimits.
• The suspension functor Σ : C → C is fully faithful.
The assignments $R \mapsto \mathrm{LinCat}_{\mathrm{St}}(\operatorname{Spec} R)$ and $R \mapsto \mathrm{LinCat}_{\mathrm{PSt}}(\operatorname{Spec} R)$, viewed as functors to $\mathrm{Cat}_\infty$,
can be right Kan extended to functors
QStk St , QStk PSt : Sch op k → Cat ∞ .
This gives a meaning to the expressions QStk_St(X), QStk_PSt(X), which can be thought of as "the category of sheaves of QCoh-linear (resp. QCoh_{≥0}-linear) (pre)stable categories on X". For any X ∈ Sch_k, there are well-defined "global sections functors"
$$\mathrm{QStk}_{\mathrm{St}}(X) \to \mathrm{LinCat}_{\mathrm{St}}(X), \qquad \mathrm{QStk}_{\mathrm{PSt}}(X) \to \mathrm{LinCat}_{\mathrm{PSt}}(X),$$
constructed in [Lur18]; by Theorem 4.1.15, they are symmetric monoidal equivalences when X is quasi-compact and quasi-separated. This theorem means that, if X is a qcqs scheme over k, every (Grothendieck pre)stable presentable O_X-linear ∞-category C has an associated sheaf of ∞-categories on X having C as category of global sections. We will make substantial use of this fact in the present work. The main reason why we will always assume our base scheme X to be qcqs is that it makes this theorem hold.
Definition 4.1.16. We denote by Br(X) the maximal ∞-groupoid contained in QStk PSt (X) and generated by ⊗ QCoh(X) ≥0 -invertible objects which are compactly generated categories, and equivalences between them. We denote by Br † (X) the maximal ∞-groupoid contained in QStk St (X) generated by ⊗ QCoh(X) -invertible objects which are compactly generated categories, and equivalences between them. We call dBr(X) := π 0 Br(X) the derived Brauer group of X and dBr † (X) := π 0 (Br † (X)) the extended derived Brauer group of X.
By [Lur18, Theorem 10.3.2.1], to be compactly generated is a property which satisfies descent. The same is true for invertibility, since the global sections functor is a symmetric monoidal equivalence by Theorem 4.1.15. Therefore, Br † (X) is equivalent to the ∞-category of invertible and compactly generated categories in LinCat St , and an analogous statement is true for Br(X). Remark 4.1.17. The stabilization functor mentioned in Remark 4.1.12 restricts to a functor Br(X) → Br † (X), whose homotopy fiber at any object C ∈ Br † (X) is discrete and can be identified with the collection of all t-structures (C ≥0 , C ≤0 ) on C satisfying the following conditions:
• The t-structure (C ≥0 , C ≤0 ) is right complete and compatible with filtered colimits.
• The Grothendieck prestable ∞-category C ≥0 is an ⊗-invertible object of LinCat PSt (X).
• The Grothendieck prestable ∞-category C ≥0 and its ⊗-inverse are compactly generated.
See [Lur18, Remark 11.5.7.3] for further comments.
Consider now the functor Ψ introduced in Eq. (4.1.1). In analogy to the stable setting, we will prove that it factors through Br(X) → LinCat PSt (X) (see Theorem 4.2.6).
The inverse will be described as follows. By Theorem 4.1.15, if X is a quasicompact quasiseparated scheme, and M a prestable presentable ∞-category equipped with an action of QCoh(X) ≥0 , then M is the category of global sections over X of a unique sheaf of prestable QCoh(X) ≥0 -linear categories M.
where G × X G ′ is seen as a G m × G m -gerbe on X.
Proof. First we prove that the external tensor product induces a t-exact equivalence
$$\boxtimes : \mathrm{QCoh}(G) \otimes_{\mathrm{QCoh}(X)} \mathrm{QCoh}(G') \simeq \mathrm{QCoh}(G \times_X G').$$
Next we show that the restriction of $\boxtimes$ to $\mathrm{QCoh}_\chi(G) \otimes_{\mathrm{QCoh}(X)} \mathrm{QCoh}_{\chi'}(G')$ lands in $\mathrm{QCoh}_{(\chi,\chi')}(G \times_X G')$. It is enough to construct an equivalence $\alpha(F \boxtimes F') : (F \boxtimes F')_{(\chi,\chi')} \simeq (F)_\chi \boxtimes (F')_{\chi'}$
for every F⊠F ′ . With the notation of Remark 4.1.4, we can do this using the following chain of equivalence:
$$\begin{aligned} (F \boxtimes F')_{(\chi,\chi')} &= (p \times p')_*\bigl(\mathrm{act}^*_{(\alpha,\alpha')}(F \boxtimes F') \otimes \mathcal{L}^\vee_{(\chi,\chi')}\bigr) \\ &\simeq (p \times p')_*\bigl((\mathrm{act}_\alpha \times \mathrm{act}_{\alpha'})^*(F \boxtimes F') \otimes (\mathcal{L}^\vee_\chi \boxtimes \mathcal{L}^\vee_{\chi'})\bigr) \\ &\simeq (p \times p')_*\bigl((\mathrm{act}^*_\alpha F \otimes \mathcal{L}^\vee_\chi) \boxtimes (\mathrm{act}^*_{\alpha'} F' \otimes \mathcal{L}^\vee_{\chi'})\bigr) \\ &\simeq (F)_\chi \boxtimes (F')_{\chi'}. \end{aligned}$$
A straightforward computation shows that α is in fact a natural transformation and verifies the condition described in Remark 4.2.8. It remains to prove that the restricted functor induces an equivalence, which will automatically be t-exact. But again, it suffices to show this locally, and in the local case this just reduces to the fact that
$$\mathrm{QCoh}_\chi(U \times BG_m) \otimes_{\mathrm{QCoh}(U)} \mathrm{QCoh}_{\chi'}(U \times BG_m) \simeq \mathrm{QCoh}(U) \otimes_{\mathrm{QCoh}(U)} \mathrm{QCoh}(U) \to \mathrm{QCoh}(U)$$
is an equivalence.
Remark 4.2.10. Using the associativity of the box product ⊠, the same proof works in the case of a finite number of G_m-gerbes.

Proposition 4.2.11. Let G ⋆ G′ be the G_m-gerbe defined in Construction 4.2.1. Then the pullback along ρ : G ×_X G′ → G ⋆ G′ establishes a t-exact equivalence $\mathrm{QCoh}_{\mathrm{id}}(G \star G') \xrightarrow{\ \sim\ } \mathrm{QCoh}_{(\mathrm{id},\mathrm{id})}(G \times_X G')$.
Proof. Note that the character m : G m × G m → G m given by multiplication induces a line bundle L id,id on BG m × BG m . One can prove easily that this line bundle coincides with the external product of L id , the universal line bundle on BG m , with itself. This means that what we denote by L (id,id) ∈ QCoh(G × X G ′ × BG m × BG m ) has the form L id ⊠ L id (again with the usual notations of Remark 4.1.4).
We need to prove that ρ * sends id-twisted sheaves in (id, id)-twisted sheaves. To do this, we will construct an equivalence
α(F) : (ρ * F) (id,id) ≃ ρ * (F id )
for every F in QCoh(G ⋆ G ′ ) and apply Remark 4.2.8. An easy computation shows that α is in fact a natural transformation and it verifies the condition described in Remark 4.2.8. By construction of the ⋆ product, one can prove that the following diagram
$$\begin{array}{ccc} G \times_X G' \times BG_m \times BG_m & \xrightarrow{\ \mathrm{act}_{(\alpha,\alpha')}\ } & G \times_X G' \\ \downarrow{\scriptstyle (\rho,\ Bm)} & & \downarrow{\scriptstyle \rho} \\ G \star G' \times BG_m & \xrightarrow{\ \mathrm{act}_{\alpha\alpha'}\ } & G \star G' \end{array}$$
is commutative, where $\mathrm{act}_{(\alpha,\alpha')}$ is the action map of $G \times_X G'$ defined by the product banding, $\mathrm{act}_{\alpha\alpha'}$ is the action map of $G \star G'$ and $Bm$ is the multiplication map $m : G_m \times G_m \to G_m$ at the level of classifying stacks. This implies that
$$\mathrm{act}^*_{(\alpha,\alpha')}(\rho^* F) = (\rho, Bm)^*\, \mathrm{act}^*_{\alpha\alpha'}(F).$$
Consider now the following diagram:
$$\begin{array}{ccccc} G \times_X G' \times BG_m \times BG_m & \xrightarrow{(\mathrm{id},\,Bm)} & G \times_X G' \times BG_m & \xrightarrow{\ q_{(\alpha,\alpha')}\ } & G \times_X G' \\ & & \downarrow{\scriptstyle (\rho,\,\mathrm{id})} & & \downarrow{\scriptstyle \rho} \\ & & G \star G' \times BG_m & \xrightarrow{\ q_{\alpha\alpha'}\ } & G \star G' \end{array}$$
where the maps denoted by $q$ are the projections, and $(\rho, Bm) = (\rho, \mathrm{id}) \circ (\mathrm{id}, Bm)$. This is a commutative diagram and the square is a pullback.
Finally, we can compute the (id, id)-twisted part of ρ * F:
$$(\rho^* F)_{(\mathrm{id},\mathrm{id})} = q_{(\alpha,\alpha'),*}\bigl(\mathrm{act}^*_{(\alpha,\alpha')}(\rho^* F) \otimes \mathcal{L}^\vee_{(\mathrm{id},\mathrm{id})}\bigr) = q_{(\alpha,\alpha'),*}\bigl((\mathrm{id}, Bm)_*(\mathrm{id}, Bm)^*(\rho, \mathrm{id})^*(\mathrm{act}^*_{\alpha\alpha'}(F)) \otimes \mathcal{L}^\vee_{(\mathrm{id},\mathrm{id})}\bigr);$$
notice that since $Bm^* L_{\mathrm{id}} = L_{(\mathrm{id},\mathrm{id})}$ we have $(\rho, Bm)^* \mathcal{L}_{\mathrm{id}} = \mathcal{L}_{(\mathrm{id},\mathrm{id})}$. Furthermore, an easy computation shows that the unit $\mathrm{id} \to (\mathrm{id}, Bm)_*(\mathrm{id}, Bm)^*$ is in fact an isomorphism, because of the explicit description of the decomposition of the stable ∞-category of quasi-coherent sheaves over $BG_m$. These two facts together give us that
$$\begin{aligned} (\rho^* F)_{(\mathrm{id},\mathrm{id})} &= q_{(\alpha,\alpha'),*}\bigl((\mathrm{id}, Bm)_*(\mathrm{id}, Bm)^*(\rho, \mathrm{id})^*(\mathrm{act}^*_{\alpha\alpha'}(F)) \otimes \mathcal{L}^\vee_{(\mathrm{id},\mathrm{id})}\bigr) \\ &= q_{(\alpha,\alpha'),*}\bigl((\mathrm{id}, Bm)_*(\mathrm{id}, Bm)^*(\rho, \mathrm{id})^*\mathrm{act}^*_{\alpha\alpha'}(F) \otimes \mathcal{L}^\vee_{\mathrm{id}}\bigr) \\ &= q_{(\alpha,\alpha'),*}\bigl((\rho, \mathrm{id})^*\mathrm{act}^*_{\alpha\alpha'}(F) \otimes \mathcal{L}^\vee_{\mathrm{id}}\bigr) \\ &= \rho^*\, q_{\alpha\alpha',*}\bigl(\mathrm{act}^*_{\alpha\alpha'}(F) \otimes \mathcal{L}^\vee_{\mathrm{id}}\bigr) \\ &= \rho^*\bigl((F)_{\mathrm{id}}\bigr). \end{aligned}$$
To finish the proof, we need to verify that ρ^* restricted to id-twisted sheaves is an equivalence with the category of (id, id)-twisted sheaves. Again, this can be checked étale locally, therefore we can reduce to the case G ≃ G′ ≃ X × BG_m, where the morphism ρ can be identified with (id_X, Bm). We know that for the trivial gerbe we have QCoh(X) ≃ QCoh_id(X × BG_m), where the map is described by F ↦ π^*F ⊗ L_id, π being the structure morphism of the (trivial) gerbe. A straightforward computation shows that, using the identification above, the morphism ρ^* is the identity of QCoh(X). Finally, t-exactness follows from the fact that ρ is flat (see [AOV08, Theorem A.1]). Remark 4.2.12. One can prove easily that ρ_* is an inverse of the morphism ρ^* which we have just described. It still sends twisted sheaves to twisted sheaves and has the same functoriality properties as the pullback, due to the natural adjunction. Furthermore, ρ_* is t-exact because ρ is a morphism of gerbes whose image through the banding functor Band(ρ) is a surjective morphism of groups with a linearly reductive group as kernel. This implies that ρ is the structure morphism of a gerbe banded by a linearly reductive group, therefore ρ_* is exact.
Proof of Theorem 4.2.6. First of all, we start with the stable case. We have to lift QCoh_id(-) from a morphism of ∞-groupoids to a symmetric monoidal functor, i.e. to extend the action of QCoh_id(-) to multilinear maps. Let {G_i}_{i∈I} be a sequence of G_m-gerbes indexed by a finite set I and H a G_m-gerbe; then we can define a morphism of simplicial sets
$$\mathrm{QCoh}_{\mathrm{id}}(-) : \mathrm{Mul}(\{G_i\}_{i \in I}, H) \longrightarrow \mathrm{Mul}(\{\mathrm{QCoh}_{\mathrm{id}}(G_i)\}_{i \in I}, \mathrm{QCoh}_{\mathrm{id}}(H))$$
by the following rule: if $f : \prod_{i \in I} G_i \to H$ is a multilinear map, we define $\mathrm{QCoh}_{\mathrm{id}}(f)$ to be the composition $f_* \circ \boxtimes^n : \prod_{i \in I} \mathrm{QCoh}_{\mathrm{id}}(G_i) \to \mathrm{QCoh}_{\mathrm{id}}(H)$, where $\boxtimes^n : \prod_{i \in I} \mathrm{QCoh}_{\mathrm{id}}(G_i) \to \mathrm{QCoh}_{\mathrm{id}}(\prod_{i \in I} G_i)$
is just the n-fold box product. The fact that this association is well-defined follows from Proposition 4.2.9 and Proposition 4.2.11. A priori, QCoh id (-) is a lax-monoidal functor from Ger Gm (X) ⋆ to LinCat St (X) × , where LinCat St (X) × is the symmetric monoidal structure on LinCat St (X) induced by the product of ∞-categories. However, f * and ⊠ n are both QCoh(X)-linear and preserves small colimits. Notice that f * preserves small colimits because it is exact, due to the fact that it is a µ-gerbe, with µ a linearly reductive group (see Remark 4.2.12). This implies that QCoh id (-) can be upgraded to a morphism of ∞-operad if we take the operadic nerve. It remains to prove that it is symmetric monoidal, i.e. to prove that it sends coCartesian morphism to coCartesian morphism. This follows again from the isomorphisms described in Proposition 4.2.11 and Proposition 4.2.9.
The prestable case can be dealt with in the exact same way. Because the equivalences in Proposition 4.2.9 and Proposition 4.2.11 are t-exact, they restrict to the prestable connective part of the ∞-categories. The fact that they remain equivalences can be checked étale locally.
Gerbes of positive trivializations
Definition 4.2.13. Let M ∈ Br(X). We define the functor
$$\mathrm{Triv}_{\geq 0}(M) : \mathrm{Sch}_{/X} \to \mathcal{S}, \qquad (S \to X) \mapsto \mathrm{Equiv}_{\mathrm{QCoh}(S)_{\geq 0}}\bigl(\mathrm{QCoh}(S)_{\geq 0}, \mathcal{M}(S)\bigr),$$
where M is the stack of categories associated to M (see Theorem 4.1.15).
Our aim in this subsection is to prove that for every M ∈ Br(X), the functor Triv ≥0 (M ) has a natural structure of a gerbe over X, and also that the functor Triv ≥0 (-) : Br(X) → Ger Gm (X) can be promoted to a symmetric monoidal functor Br(X) ⊗ → Ger Gm (X) ⋆ . Let us recall the main result about the Brauer space proven in [START_REF] Lurie | Spectral Algebraic Geometry[END_REF] (specialized to the case of qcqs schemes). Theorem 4.2.14 ([Lur18, Theorem 11.5.7.11]). Let X be a qcqs scheme. Then for every u ∈ dBr(X) = π 0 (Br(X)), there exists an étale covering f : U → X such that f * u = 0 in dBr(U ); that is, any representative of f * u is equivalent to QCoh(U ) ≥0 as an ∞-category. Lemma 4.2.15. Let X be a qcqs scheme. Then the stack Equiv QCoh(X) ≥0 (QCoh(X) ≥0 , QCoh(X) ≥0 ) is equivalent to BG m × X.
Proof. Let S → X be a morphism of schemes. A QCoh(S)-linear autoequivalence of QCoh(S) is determined by the image of O S , and therefore amounts to the datum of a line bundle L concentrated in degree 0 (the degree must be nonnegative because we are in the connective setting, and if it were positive then the inverse functor would be given by tensoring by a negatively-graded line bundle, which is impossible).
This implies that the desired moduli space is Pic × X, which is the same as BG m × X.
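As a minimal illustration of the degree constraint mentioned in the proof (our own spelling-out, not an additional claim): if the image of $\mathcal{O}_S$ were $L[n]$ with $L$ a line bundle and $n > 0$, then the inverse equivalence would have to send $\mathcal{O}_S$ to $L^{-1}[-n]$, which is not connective; hence $n = 0$ and the autoequivalence is $- \otimes L$ for a line bundle $L$ concentrated in degree $0$.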
Proposition 4.2.16. Let X be a qcqs scheme, and M ∈ Br(X). Then Triv_{≥0}(M) has a natural structure of G_m-gerbe over X. Furthermore, given an isomorphism f : M → N of prestable invertible categories, the morphism Triv_{≥0}(f) defined by the association ϕ ↦ f ∘ ϕ is a morphism of G_m-gerbes.
Proof. To prove that Triv ≥0 (M ) is a G m -gerbe we need to verify that both the structure map and the diagonal of Triv ≥0 (M ) are epimorphisms and to provide a G m -banding. The first two assertions are evident from the fact that Triv ≥0 (M ) is locally of the form X × BG m . Now we provide the banding in the following way. First of all, notice that G m,X can be identified with the automorphism group of the identity endofunctor of QCoh(X) ≥0 , namely O X -linear invertible natural transformations of id QCoh(X) ≥0 . Let I Triv ≥0 (M ) be the inertia stack of Triv ≥0 (M ). We define a functor α M : Triv ≥0 (M ) × G m -→ I Triv ≥0 (M )
as $\alpha_M(\varphi, \lambda) := (\varphi, \mathrm{id}_\varphi * \lambda) \in I_{\mathrm{Triv}_{\geq 0}(M)}$ for every object $(\varphi, \lambda)$ of $\mathrm{Triv}_{\geq 0}(M) \times G_m$, $*$ being the horizontal composition of natural transformations. Since being an isomorphism is an étale-local property, we can reduce to the trivial case, for which it is a straightforward computation. The second part of the statement follows from a straightforward computation.
Lemma 4.2.17. The functor Triv ≥0 (-) is symmetric monoidal.
Proof. To upgrade Triv ≥0 to a symmetric monoidal functor, we need to define its action on multilinear maps. Let {M i } i∈I be a sequence of invertible prestable O X -linear ∞-categories indexed by a finite set I and N be another invertible prestable O X -linear category. Then we define Triv ≥0 (-) :
$$\mathrm{Mul}\bigl(\{M_i\}_{i \in I}, N\bigr) \longrightarrow \mathrm{Mul}\bigl(\{\mathrm{Triv}_{\geq 0}(M_i)\}_{i \in I}, \mathrm{Triv}_{\geq 0}(N)\bigr)$$
in the following way: if $f : \prod_{i \in I} M_i \to N$ is a morphism in $\mathrm{LinCat}_{\mathrm{PSt}}(X)$ which preserves small colimits separately in each variable, we have to define the image $\mathrm{Triv}_{\geq 0}(f)$ as a functor
$$\mathrm{Triv}_{\geq 0}(f) : \prod_{i \in I} \mathrm{Triv}_{\geq 0}(M_i) \longrightarrow \mathrm{Triv}_{\geq 0}(N)$$
such that $\mathrm{Band}(\mathrm{Triv}_{\geq 0}(f))$ is the n-fold multiplication of $G_m$, where $n$ is the cardinality of $I$. We define it on objects in the following way: if $\{\varphi_i\}$ is an object of $\prod_{i \in I} \mathrm{Triv}_{\geq 0}(M_i)$, then we set
$$\mathrm{Triv}_{\geq 0}(f)\bigl(\{\varphi_i\}\bigr) := f \circ \bigl(\otimes \varphi_i\bigr),$$
where $\otimes \varphi_i$ is just the morphism induced by the universal property of the product.
First of all we need to prove that $f \circ (\otimes \varphi_i)$ is still an equivalence. Let $S \in \mathrm{Sch}_X$ and let $f : \prod_{i \in I} M_i \to N$ be a morphism in $\mathrm{LinCat}_{\mathrm{PSt}}(X)$; we can consider the following diagram:
$$\begin{array}{ccc} \mathrm{QCoh}(S) & \xrightarrow{\ (\varphi_i)_i\ } & \prod_{i \in I} \mathcal{M}_i(S) \\ {\scriptstyle \otimes \varphi_i}\downarrow & \swarrow{\scriptstyle u} & \downarrow{\scriptstyle f(S)} \\ \bigotimes_{i \in I} \mathcal{M}_i(S) & \xrightarrow{\ f(S)\ } & \mathcal{N}(S) \end{array} \qquad (4.2.2)$$
where the tensors are in fact relative tensor products over $\mathrm{QCoh}(S)$. The diagonal map $u$ is the universal morphism among all the $\mathrm{QCoh}(X)$-linear morphisms from $\prod_{i \in I} \mathcal{M}_i(S)$ which preserve small colimits separately in each variable. Equivalently, one can say that it is a coCartesian morphism in $\mathrm{LinCat}_{\mathrm{PSt}}(S)^{\otimes}$.
Thus, it is enough to prove that both ⊗ϕ i and f are equivalences. Because the source of the functor Triv ≥0 (-) is the ∞-groupoid dBr(S), the morphism f (S) is an equivalence. Furthermore, the morphisms ϕ i are equivalences, therefore it follows from the functoriality of the relative tensor product that ⊗ϕ i is an equivalence. A straightforward but tedious computation shows that it is defined also at the level of 1-morphism and it is in fact a functor. It remains to prove that Band Triv ≥0 (f ) is the n-fold multiplication of G m , where n is the cardinality of I. It is equivalent to prove the commutativity of the following diagram:
$$\begin{array}{ccc} \prod_{i \in I} \mathrm{Triv}_{\geq 0}(M_i) \times G_m^{\,n} & \xrightarrow{(\mathrm{Triv}_{\geq 0}(f),\ m_n)} & \mathrm{Triv}_{\geq 0}(N) \times G_m \\ \downarrow{\scriptstyle \prod_i \alpha_{M_i}} & & \downarrow{\scriptstyle \alpha_N} \\ \prod_{i \in I} I_{\mathrm{Triv}_{\geq 0}(M_i)} & \xrightarrow{\ I_{\mathrm{Triv}_{\geq 0}(f)}\ } & I_{\mathrm{Triv}_{\geq 0}(N)} \end{array}$$
where $m_n$ is the n-fold multiplication of $G_m$. The notation follows the one in the proof of Lemma 4.2.16. Using diagram (4.2.2) again, we can reduce to the following straightforward statement: let $\lambda_1, \ldots, \lambda_n$ be QCoh(S)-linear automorphisms of $\mathrm{id}_{\mathrm{QCoh}(S)}$, which can be identified with elements of $G_m(S)$; then the tensor of the natural transformations coincides with the product as elements of $G_m$, i.e. $\lambda_1 \otimes \cdots \otimes \lambda_n = m_n(\lambda_1, \ldots, \lambda_n)$.
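For instance, in the case $n = 2$ (a direct unwinding of the statement, included for concreteness): a QCoh(S)-linear automorphism $\lambda_i$ of $\mathrm{id}_{\mathrm{QCoh}(S)}$ acts on every object by multiplication by a unit $\lambda_i \in \mathcal{O}_S(S)^\times$, and under the relative tensor product one has
$$(\lambda_1 \otimes \lambda_2)(a \otimes b) = \lambda_1 a \otimes \lambda_2 b = (\lambda_1 \lambda_2)(a \otimes b),$$
since scalars can be moved across the tensor by QCoh(S)-bilinearity, so that $\lambda_1 \otimes \lambda_2 = m_2(\lambda_1, \lambda_2)$.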
Finally, because we are considering ∞-groupoids, the condition of being strictly monoidal is automatically satisfied.
(2) In the case G = GL_n, Gr_G can be interpreted as a space of C[[t]]-lattices in C((t))^n.
(3) In the general case, it is the space of G-torsors on the formal complex disk Spec C[[t]] together with a trivialization over Spec C((t)).
There is a way to interpret Perv_{G_O}(Gr_G) as the "specialization at every point x" of the "Bun_G(X)" side of the geometric Langlands conjecture, and Rep(Ǧ) as the specialization of the "local systems" side.
Since the geometric Langlands conjecture is nowadays formulated as a "derived" statement, Bezrukavnikov-Finkelberg [BF08] and Arinkin-Gaitsgory [AG15] proved the Derived Satake Theorem. There, the abelian category Perv_{G_O}(Gr_G) is replaced by the so-called spherical Hecke category Sph(G), which is a higher category admitting the following presentations:
- as the dg- or ∞-category D_{c,G_O}(Gr_G, k) of G_O-equivariant constructible sheaves on Gr_G;
- as the dg- or ∞-category D-mod_{G_O}(Gr_G, k) of G_O-equivariant D-modules on Gr_G.
I proved the first of the following two theorems:

Theorem 1.5 ([Noc20]). The ∞-category Sph(G) admits an E_3-monoidal structure extending the symmetric monoidal convolution product of perverse sheaves.

Theorem 1.6. The ∞-category Sph(G) is equivalent to the E_2-center of the derived ∞-category of representations DRep(Ǧ, k).
Remark 1.7. Dg-categories and ∞-categories are ways of encoding the idea of "categories with a notion of homotopy and of homotopy equivalence" in a manner that is particularly useful for dealing with derived categories and homotopy theory. In my work I adopt the perspective of ∞-categories, which is systematically developed in Lurie's Higher Topos Theory. One of the simplest formulations of this concept is the notion of categories enriched in topological spaces or simplicial sets.
In this framework, properties such as the associativity or commutativity of monoidal structures on (∞-)categories are encoded in certain "coherence structures", which also allow intermediate notions of "higher associativity", called E_k-algebra structures, k ∈ N. For example, a loop space ΩY carries a product which is associative only up to homotopy, inasmuch as the associators depend on the parametrizations of the unit interval. This is what is called an E_1-algebra in spaces. This idea generalizes to Ω^k Y, k ≥ 2: there are k different product laws, associative up to homotopy and distributive with respect to one another, which coincide up to homotopy by the Eckmann-Hilton theorem. In the same way, one can define E_k-algebras in ∞-categories (called E_k-monoidal ∞-categories). For 1-categories, the situation is very simple. An E_1-algebra in categories is just a monoidal category. An E_2-algebra in categories is what is called a braided monoidal category. An ordinary E_k-monoidal category, for k ≥ 3, is just a symmetric monoidal category. For example, Perv_{G_O}(Gr_G) with the convolution product is an E_k-monoidal category for every k ≥ 3. In the case of general ∞-categories, on the contrary, an E_{k+1}-algebra in ∞-categories is a strictly stronger notion than an E_k-algebra. In this sense, my result on Sph(G) in [Noc20] is the correct analogue, in the derived world, of the commutativity of the convolution product on Perv_{G_O}(Gr_G).
Theorems 1.5 and 1.6 were originally stated by Gaitsgory and Lurie (unpublished). The second follows from results of Bezrukavnikov-Finkelberg [BF08], Gaitsgory-Lurie (unpublished) and Nadler, and it implies the first. In [Noc20], I proved Theorem 1.5 independently, by constructing the desired E_3-monoidal structure in an intrinsic way. This is in the same spirit as the Tannakian reconstruction explained above for the case of perverse sheaves, where the existence of a symmetric monoidal structure on the category is part of the initial data, and it is only a posteriori that it is interpreted as the natural tensor product in a category of representations.
My proof of Theorem 1.5 uses tools from homotopy theory and derived algebraic geometry, in particular the work of Jacob Lurie in [Lur17] and of Gaitsgory-Rozenblyum in [GR17], in order to provide the correct interaction between geometric objects (for instance the affine Grassmannian) and highly structured invariants over them (for instance the category of constructible sheaves).
CONSTRUCTIBLE SHEAVES AND CONICALLY SMOOTH SPACES
The nature of this project is much more topological. In [Noc20], I was interested in understanding the ∞-category of equivariant constructible sheaves on the affine Grassmannian with respect to the stratification into Schubert cells. Ignoring the equivariant structure for a moment, these are particular space-valued sheaves on the underlying complex analytic topological space of Gr_G (denoted Gr_G^an) which are locally constant on the strata of S.¹ (¹ In our case, we want sheaves valued in the derived ∞-category of Λ-modules for a coefficient ring Λ, but this is not relevant in the present section.) The theory of constructible sheaves on topological spaces and algebraic varieties has been developed for instance in [GM80, Tre09, Lur17, BGH20]. MacPherson proved that under certain hypotheses the datum of a constructible sheaf on a stratified space (Y, S) is equivalent to a combinatorial datum, namely a representation of the "category of exit paths" of (Y, S). Lurie [Lur17, Appendix A] extended this remark to very general hypotheses and to a derived framework, defining an ∞-category of exit paths Exit(Y, S) and establishing the following theorem:

Theorem 2.1 (Exodromy theorem, [Lur17, Theorem A.9.3]). Let (Y, S) be a conically stratified space which is locally of singular shape. Then the ∞-category of space-valued constructible sheaves on (Y, S) is equivalent to Fun(Exit(Y), Spaces).

The hypothesis of being "locally of singular shape" is quite general, and is satisfied by the affine Grassmannian. The hypothesis that the stratification is conical essentially says that every point has a neighbourhood where the stratification looks like R^n × C(Z), C(Z) being the topological cone over a stratified space Z. One can interpret this condition as a stratified analogue of "being a topological manifold". Now, the stratification into Schubert cells of the affine Grassmannian satisfies the Whitney conditions, and it is well known that

Lemma 2.2. Every stratification satisfying the Whitney conditions is conical.

This allowed me to apply Theorem 2.1 in [Noc20] and to complete the proof of the existence of the desired E_3-structure on the spherical category. Ayala, Francis and Tanaka [AFT17] introduced the notion of conically smooth structure on a stratified space, which can be interpreted as a stratified analogue of a differentiable structure. In [NV21], together with Marco Volpe (Regensburg), I proved the following conjecture contained in their paper [AFT17], thus refining Lemma 2.2.

Theorem 2.3 (N.-Volpe 2021, [NV21]). Every stratified space satisfying the Whitney conditions [Whi64, Whi65] admits a conically smooth structure.

Our result provides a large class of examples of conically smooth structures. In particular, it implies that the affine Grassmannian Gr_G admits a canonical conically smooth structure.

3. DERIVED AZUMAYA ALGEBRAS AND GERBES

The Brauer group of a field has been widely studied since its introduction by Serre and Grothendieck. In the same articles, the general theory of Brauer groups of algebraic varieties is developed, thus generalizing the case of fields: it is the group of Azumaya algebras over X, up to an equivalence relation called Morita equivalence. Giraud [Gir71] showed how, for any algebraic variety S, H²(S, G_m) can be interpreted as a set of G_m-gerbes over S. There is a map of groups Br(S) → H²(S, G_m) implying that every Azumaya algebra over S gives rise to a G_m-gerbe over S. This map is injective, but it takes values in the torsion subgroup of H²(S, G_m), which in general can be a proper subgroup for non-regular varieties. Using the theory of presentable stable dg/∞-categories, Toën [Toë12] defined a notion of derived Azumaya algebra over an algebraic variety S and a corresponding notion of Morita equivalence extending the classical one. He shows that the group dBr(S) of derived Azumaya algebras up to Morita equivalence is isomorphic to H²(S, G_m) × H¹(S, Z), where the second factor is 0 if S is normal. There has been subsequent work in the direction of making the relation between G_m-gerbes and derived Azumaya algebras more explicit. [BP21, Theorem 5.19], following the work of [Lie04] and [BS19] at the level of abelian and triangulated categories respectively, says that if G is a G_m-gerbe over a base S, then there is an action of the inertia stack I(G/S), inducing a notion of χ-homogeneous quasi-coherent sheaf on G, for χ a character of G_m. It also says that the ∞-category QCoh(G) of quasi-coherent sheaves decomposes as a direct sum of the subcategories QCoh_χ(G) of homogeneous sheaves. Starting from this, an explicit description of the connection between G_m-gerbes over S and derived Azumaya algebras seems possible, along the following lines. My interest in the theory of G_m-gerbes (and hence of Azumaya algebras) arose from the fact that powerful modifications of the usual affine Grassmannian Gr_G, related to the conjectural Langlands duality for surfaces, are expressed in terms of gerbes. In the work [BP21] the following conjecture appears:

Conjecture 3.1. If Y → S is a G_m-gerbe, then QCoh_α(S) ⊗_{QCoh(S)} QCoh_β(S) ≃ QCoh_{α+β}(S) as presentable QCoh(S)-linear ∞-categories, for every α, β ∈ H²(S, G_m).

Corollary 3.2. In particular, the ∞-category QCoh_α(X) is invertible in Pr^{L,c.g.}_X, hence Morita-equivalent to a derived Azumaya algebra by the work of Toën.

Michele Pernice (Scuola Normale Superiore) and I proved this conjecture. More precisely, the main theorem we prove is the following:

Theorem 3.3. Let S be a quasi-compact quasi-separated scheme. The functor Ger_{G_m}(S) → {invertible objects in Pr^{L,c.g.}_X} is symmetric monoidal with respect to:
- the rigidified product of G_m-gerbes on the source (cf. [BP21]);
- the Lurie tensor product on the target.

Moreover, it has an inverse which is again symmetric monoidal, sending an invertible linear ∞-category M to its stack of trivializations over X.
LIST OF PUBLICATIONS
- A model for the E_3 fusion-convolution product of constructible sheaves on the affine Grassmannian (preprint available at https://arxiv.org/abs/2012.08504)
- Whitney stratifications are conically smooth (with Marco Volpe, preprint available at https://arxiv.org/abs/2105.09243)
( 1 .
1 2.2)whereD c (X; k) = colim S stratification of X Cons(X, S ; k) and D c,H (X; k) = lim . . . D c (H × X; k) D c (X; k) .Now, the horizontal arrows in (1.2.2) are not equivalences, although they are while restricted to the abelian subcategories of perverse sheaves. Indeed, the forgetful functor Perv G O (Gr, S ; k) → Perv(Gr, S ; k) is an equivalence (see for example [BR18, Section 4.4]), but Cons G O (Gr, S ; k) → Cons(Gr, S ; k)
Remark 2.1.4. Consider the twisted tensor product defined in [Zhu, (3.1.10)].
Proposition 2.1.7. For any k ≥ 0 the functor Gr_{Ran,k} : Alg_C → Grpd factorises through the inclusion Set → Grpd. Proof. See Section 2.B.1.
Proposition 2.1.11. This construction defines a semisimplicial object Gr_{Ran,•} : Δ^{op}_{inj} → Fun(Alg_C, Set), because the given maps satisfy the simplicial identities.
Proposition 2.1.12. For any R ∈ Alg_C, the semisimplicial set Gr_{Ran,•}(R) enjoys the 2-Segal property, that is, the equivalent conditions of [DK19, Proposition 2.3.2].
Remark 2.1.16. The simplicial object Arc_• enjoys the 2-Segal property. The verification is straightforward thanks to the multiplication structure.
Construction 2.2.3. Composition with the map that we have just described yields a functor, which we call again strtop : (StrPSh_C)_{/Ran(X)} → PSh(StrTop)_{/Ran(M)}.
Remark 2.2.7. The two functors FactGr_k and FactArc_k are hypercomplete cosheaves with values in PSh(StrTop_{/Ran(M)}).
FactGr ⊙ k Remark 2.2.11. Consider two independent open subsets U and V of Ran(M ). We have the following diagram
Conjecture 2 . 2 . 20 .
2220 (f ) with respect to this topology is the same of a classical trivialising open covering. Any open embedding is a local homeomorphism. Conversely, let us suppose that we have a trivialising local homeomorphism, that is a local homeomorphism of topological spaces w : W → top(T ) such that top(S) × top(T ) W is isomorphic to top(G) × W . Now, by definition of local homeomorphism, for every x in the image of w we have that w restricts to an open embedding on some open set U ⊂ W whose image contains x. Moreover,U → W → top(T ) is trivialising a fortiori, that is U × top(T ) top(S) ≃ U × top(G).In conclusion, if we have a jointly surjective family of trivialising local homeomorphisms, the above procedure yields a covering family of trivialising open sets. In the stratified setting, the proof is the same, since the relevant maps are étale-stratified on the algebraic side, and therefore become analytic-stratified on the topological side.Recall the definition of conically smooth space from [AFT17, Section 3] (see Section 3.1.2 for a quick review) and the definition of weakly constructible bundle and constructible bundle from [AFT17, Definition 3.6.1].Lemma 2.2.19. The map Gr Ran → Ran(M ) is a weakly constructible bundle.Proof. Fix a stratum Ran n (M ) in Ran(M ). The restriction of the map Gr Ran → Ran(M ) to that stratum is locally trivial with stratified fiber Gr n , since by the factorization property Gr Ran | Rann(M ) ≃(Gr strtop X ) ×n | (X n ) top disj /Sn, and Gr X ≃ Gr × Aut + (D) X, which is étale-locally trivial over X with fiber Gr. The map Gr Ran → Ran(M ) is a constructible bundle. Conjecture 2.2.21. Let f : (Y, s, P ) → (Y ′ , s ′ , P ′ ) be a constructible bundle of conically smooth spaces. Then stratified homotopies can be lifted along f . Forthcoming work with Mauro Porta, Jean-Baptiste Teyssier and Marco Volpe is devoted to the proof of these two conjectures. Proposition 2.2.22. For every nonempty finite collection of disjoint disks D 1 , . . . , D n ⊆ M containing open subdisks E 1 ⊆ D 1 , . . . , E n ⊆ D n , the maps
Gr Ran(D) and therefore at the upper level for Gr Ran(E),k → Gr Ran(D),k . But this is true by Lemma 2.2.19. If D is a disk around x in M , we can define Gr D,1 as the preimage along r 1 : Gr Ran → Ran(M ) of the (not open) subset D → Ran(D) ⊂ Ran(M ).Corollary 2.2.23. The natural injective map Gr x,1 → Gr D,1 is a stratified homotopy equivalence over (Ran(X • (T ) + )) k . Notation 2.2.24. From now on, we will denote the relation "being stratified homotopy equivalent" by she ∼ and "being homotopy equivalent as topological groups" by ghe ∼ . We can express this property in a more suitable way by considering the ∞-categorical localization PSh(Act con (StrTop /Ran(M ) )[esh -1 ]) × see Definition 2.A.21. Since she is closed under the symmetric monoidal structure of PSh(StrTop) × , the localization functor extends to a map of operads (Act con PSh(StrTop)) × → Act con PSh(StrTop)[esh -1 ] × and therefore to Corr(Act con PSh(StrTop)) × → Corr(Act con PSh(StrTop)[esh -1 ]) × .
Remark 2 . 2 .
22 28. By [Lur17, Example 5.4.5.3], a nonunital E M -algebra object A ⊗ in a symmetric monoidal ∞-category C induces a nonunital E n -algebra object in C, where n is the real dimension of the topological manifold M , by taking the stalk at a point x ∈ M . More precisely, there is an object in Alg En (C) whose underlying object is lim {x}∈U ∈Open(Ran(M )) A(U ), which coincides with lim x∈D∈Disk(M ) A(Ran(D)), since the family of Ran spaces of disks around x is final in the family of open neighbourhoods of {x} inside Ran(M ). Now each A ⊗ | Fact(D) ⊗ induces a nonunital E D -algebra by Lurie's theorem [Lur17, p. 5.5.4.10]. But [Lur17, Example 5.4.5.3] tells us that E D -algebras are equivalent to E n algebras. Also, by local constancy (i.e. constructibility), the functor D → A(D) is constant over the family x ∈ D ∈ Disk(M ), and therefore the stalk A x coincides with any of those E n -algebras. This also implies that all stalks at points of M are (noncanonically) isomorphic. 4 Also, the content of [Lur17, Subsection 5.5.4] tells us how the E n -multiplication structure works concretely. Choose a disk D containing x. We interpret this as the only object in the ⟨1⟩-fiber of E D . Recall that a morphism in E D lying over the map ⟨2⟩ → ⟨1⟩ 1, 2 → 1 is the choice of an embedding D ⨿ D → D. Call n D the unique object lying over ⟨n⟩ in E D . Consider the canonical map E D → E M . If A M is the E nu M -algebra object appearing in the conclusion of Lurie's theorem, call A D its restriction to E D . Recall from the proof of Lurie's theorem that A M is obtained by operadic left Kan extension of the restriction A| Disk(M ) along the functor Disk(M ) ⊗ → E M . Then we have that Lemma 2.2.29. A D (1 D ) = A(Ran(D)), and A D (2 D ) = colim{A(Ran(E)) ⊗ A(Ran(F )) | E, F ∈ Disk(M ),
Proposition 2 . 3 . 3 .
233 Let x ∈ M be a point. Consider the functor A ⊗ x : E nu 2 × E nu 1 → Pr L,⊗ k . Then this can be upgraded to a map of operadsE 2 × E nu 1 → Pr L,⊗ k .Proof. We can apply [Lur17, Theorem 5.4.4.5], whose hypothesis is satisfied since for any ∞-category C the functor C × → Fin * is a coCartesian fibration of ∞-operads ([Lur17, Proposition 2.4.1.5]): therefore, it suffices to exhibit a quasi-unit for any A ⊗ (-, ⟨k⟩), functorial in ⟨k⟩ ∈ E nu 1 . We can consider the map (natural in k) u k : Spec C → Gr x,k represented by the sequence (T G , id| X\{x} , id| Xx , . . . , T G , id X\{x} ) ∈ Gr x,k . Note now that this induces a map of spaces * → Gr x,k .
1 -→
1 Cons k,G O(Gr). Here 1 is the pushforward along the trivial section t : * → Gr, t( * ) = (T G , id| X\x ), of the constant sheaf with value k. The proof is given in [Rei12, Proposition IV.3.5]. We denote by ⋆ the E 1 -product of equivariant constructible sheaves on Gr described by A x (-). By Remark 2.3.4 for any F ∈ Cons k,G O (Gr) we can compute the product via the convolution diagram
Definition 2 .A. 1 .
21 Let Top be the 1-category of topological spaces. The category of stratified topological spaces is defined asStrTop C = Fun(∆ 1 , Top) × Top Poset,where the map Fun(∆ 1 , Top) → Top is the evaluation at 1, and Alex : Poset → Top assigns to each poset P its underlying set with the so-called Alexandrov topology (see [BGH20, Definition 1.1.1]).
Construction 2 .A. 3 .Pullbacks of stratified spaces Lemma 2 .A. 4 .
2324 We can define the category StrPSh C as PSh(StrAff C ). Note that StrTop is cocomplete, because Top, Fun(∆ 1 , Top) and Poset are. By left Kan extension we have a functor strtop : StrPSh C → StrTop. (2.A.1) By construction, this functor preserves small colimits and finite limits (since bothan : Aff C → Stn C and | -| : Stn C → Top preserve finite limits, see [Rey71]). The forgetful functor StrTop → Top preserves and reflects finite limits.
Lemma 2 .
2 A.10 ([Lur17, Remark 4.8.1.8 and Proposition 4.8.1.15]). There exists a symmetric monoidal functor P (!) : Cat × ∞ → Pr L,⊗ sending an ∞-category C to the ∞-category of S-valued presheaves P(C), a functor F : C → D to the functor P(D) → P(C) given by F ! = Lan F (-).
Theorem 2 .:
2 A.14. There is an ∞-functor D corr c Corr(Top) → Cat ∞ that coincides with D c when restricted to Top op → Corr(Top).
Step 1 :
1 (f * , f * ) adjunction and proper base change. The previous discussion tells us that D op c : Top → Cat 2-cat,op ∞ satisfies the left Beck-Chevalley condition [GR17, Chapter 7, 3.1.5] with respect to adm = vert = all, horiz = proper, taking Φ = D c : StrTop op → Cat ∞ . Indeed, for any f ∈ horiz we set Φ ! (f ) = f * , which is right adjoint to f * = Φ(f ). Finally, the base change property for the diagram (2.A.2) completes the proof of the left Beck-Chevalley property. Therefore by [GR17, Chapter 7, Theorem 3.2.2.(a)] the functor Cons op : StrTop → Cat
.
Every map in adm ∩ co-adm = open ∩ proper should be a monomorphism. But the class open ∩ proper consists of embeddings of unions of connected components. • For any α : X → Y in vert = all, consider the ordinary category Factor(α), whose objects are X X Y, ε γ where ε ∈ adm = open and γ ∈ co-adm = proper, and whose morphisms are commutative diagrams X Consider the (∞, 1)-category N(Factor(α)). We require N(Factor(α)) to be contractible for any α ∈ vert. But this is exactly Nagata's compactification theorem, see [GR17, Chapter 5, Proposition 2.1.6]. • D c | Top should satisfy the right Beck-Chevalley condition with respect to adm = open ⊂ horiz = all. This is true, because for every f ∈ open we have that f * = f ! : now this admits a left adjoint f ! , and the Base Change Theorem holds.
Kan extension of D c along the Yoneda embedding we obtain a functor P(Top) op → Cat ∞,k . By using the same arguments as in the proof of [GR17, Chapter 5, Theorem 3.4.3], Theorem 2.A.14 provides an extension to Corr(P(Top)). To replace P(Top) with the category of sheaves we use the descent properties of the functor D c (-). We call D c,corr the extension Corr(Sh(Top)) → Cat ∞ . Let Cons act,corr = D c,corr •q corr : Corr(Act con (StrTop)) → Cat ∞,k . Note that if we restrict the functor we just obtained to Act con (Top) → Corr(Act con (Top)) vert , then coincides with the composition Act con (Top) → StrTop con Cons (!) ----→ Pr L . Now: • this proves that Cons act,corr takes values in Pr L k , since it does on objects and on vertical morphism, and for what concerns horizontal morphisms we have that any f * is a left adjoint; • the functor Act con (Top) → StrTop con sends the class esh to stratified homotopy equivalences, and the by the Exodromy Theorem the functor StrTop con Cons (!) ----→ Pr L k sends stratified homotopy equivalences to equivalences of ∞-categories. Therefore, D c,corr sends equivariant stratified homotopy equivalences to equivalences as well, and therefore it factors through the localization; • the functor Act con (Top) → StrTop con is Cartesian symmetric monoidal, and by Corollary 2.A.11 the functor StrTop con Cons (!)
• ([Mat89, page 181]) If A → B is a flat local homomorphism of local noetherian rings, then depth B = depth A + depth B/mB, where m is the maximal ideal of A.
as in the hypothesis of Whitney's Condition A, Whitney's Condition A is satisfied, and the same for Whitney's condition B. In other words, the space Sing(X, Y ) = {y ∈ X ∩ Y | y does not satisfy either Whitney's
3 . 2
32 subsets of manifolds (Thom, Mather) . . . . . . . 77 3.1.2 Conical and conically smooth stratifications (Lurie, Ayala-Francis-Tanaka) . 78 Whitney stratifications are conically smooth . . . . . . . . . . . . . . . . . . . 81 3.2.1 Whitney stratifications are conical . . . . . . . . . . . . . . . . . . . . . . 81 3.2.2 Whitney stratifications are conically smooth . . . . . . . . . . . . . . . . . 86
Definition 3 . 1 . 4 (
314 Whitney's condition B). Let X, Y be smooth submanifolds of a smooth n-dimensional manifold M , and y ∈ Y . The pair (X, Y ) is said to satisfy Whitney's Condition B at y if there exist a chart of M ϕ : U → R n around y such that (ϕ(U ∩ X), ϕ(U ∩ Y )) satisfies Whitney's Condition B at ϕ(y). Definition 3.1.5 (Whitney stratification). Let M be a smooth manifold of dimension n. A smooth stratification (Z, S ) on a subset Z of M is said to satisfy the Whitney conditions if • (local finiteness) each point has a neighbourhood intersecting only a finite number of strata;
Note 3 . 1 . 1 . 3 . 1 .
31131 10. A very useful notion relative to stratified spaces (see for example from [AFT17, Definition 2.4.4]) is the notion of depth of a stratified space at a point. For example, let Z be an unstratified space of Lebesgue covering dimension n. Then the depth of the cone C(Z) at the cone point is n + Definition 11 ([Lur17, Definition A.5.5]). Let (X, P, s) be a stratified space, p ∈ P , and x ∈ X p . Let P >p = {q ∈ P | q > p}. A conical chart at x is the datum of a stratified space (Z, P >p , t), an unstratified space Y , and a P -stratified open embedding Y × C(Z) X P whose image hits x. Here the stratification of Y × C(Z) is induced by the stratification of C(Z), namely by the maps Y × C(Z) → C(Z) → P ≥p → P (see Definition 3.1.9).
Figure 3.1: Tubular neighbourhoods in (R², (X, Y, Z)).
Chapter 2: E_3-monoidal structure on the spherical Hecke category

Contents
2.1 Convolution over the Ran space
  2.1.1 The presheaves Gr_{Ran,k}
  2.1.2 The 2-Segal structure
  2.1.3 Action of the arc group in the Ran setting
  2.1.4 The convolution product over Ran(X)
  2.1.5 Stratifications
2.2 Fusion over the Ran space
  2.2.1 Analytification
  2.2.2 Fusion
  2.2.3 The factorizing property
  2.2.4 Local constancy
  2.2.5 Interaction of convolution and fusion over Ran(M)
  2.2.6 The stalk of the factorising cosheaf
2.3 Product of constructible sheaves
  2.3.1 Taking constructible sheaves
  2.3.2 Units and the main theorem
2.A Constructible sheaves on stratified spaces: theoretical complements
  2.A.1 Stratified schemes and stratified analytic spaces
  2.A.2 Symmetric monoidal structures on the constructible sheaves functor
  2.A.3 Constructible sheaves and correspondences
2.B Omitted proofs and details
  2.B.1 Proof of Proposition 2.1.7
  2.B.2 Proof of Proposition 2.2.12
  2.B.3 Proof of Proposition 2.2.5
In Section 2.1.1, we give a version of Gr_• living over Ran(X). Formally, what we obtain is a semisimplicial 2-Segal (stratified) prestack Gr_{Ran,•} over Ran(X). This is done in order to take into account the factorization structure of Gr_{Ran} and to give the setup for the extension of the fusion product to the setting of constructible sheaves. The same is done for G_O, thus defining a "global" object G_{O,Ran}; all the preceding constructions are G_{O,Ran}-equivariant.
2.1.3 Action of the arc group in the Ran setting
Construction 2.1.14. Consider the functor G_{O,Ran} : Alg_C → Set from Definition 2.1.5. It is immediate to see that this functor takes values in Set just like Gr_{Ran,k}, and admits a map towards Ran(X).
The actions Φ^{×k}_{1,1} on Gr_{Ran}^{×_{Ran(X)} k}, Φ_{k,k} on Gr_{Ran,k}, Ξ_{1,k} on ConvGr_k and Φ_{1,1} on Gr_{Ran} are compatible with the k-associative convolution diagram

    Gr_{Ran,k} --q--> ConvGr_k
        |p                 |m            (2.1.1)
        v                  v
    Gr_{Ran}^{×_{Ran(X)} k}          Gr_{Ran}
Remark 2.1.28. Both (Gr_{Ran,•}, σ_•) and (Arc_{Ran,•}) (unstratified) enjoy the 2-Segal property.
Proof. We want to use the unstratified version of the same result, proved in Section 2.1.2. In order to do this, it suffices to prove that the functor StrPSh_C → PSh_C preserves and reflects finite limits. We can reduce this statement to the one that StrSch_C → Sch_C does. By definition, this follows from Lemma 2.A.4.
(Construction 2.A.3). Recall that this functor preserves finite limits. Hence, if we precompose strtop with Gr_{Ran,•} : Δ^op_inj → StrPSh_C, we obtain a 2-Segal semisimplicial object in stratified spaces. Also, since strtop sends stratified étale coverings to stratified coverings in the topology of local homeomorphisms, it extends to a functor between the categories of sheaves. For simplicity, we set Gr_Ran = strtop(Gr_Ran), Gr_{Ran,k} = strtop(Gr_{Ran,k}).
Notation 2.2.1. We consider (Gr_{Ran,•}, σ_•) → Ran(M) as an object of 2-Seg^{ss}(PSh(StrTop)_{/Ran(M)}) (we abuse notation by denoting the cardinality stratification again by κ). Analogously, the functor ActGr_{Ran,•} : Δ^op_inj → Act((StrPSh_C)_{/Ran(X)}) induces a functor ActGr_• : Δ^op_inj → Act(PSh(StrTop)_{/Ran(M)}).
An important remark: with the notations of Section 2.A.3, the functor ActGr_• takes values in Act_con(PSh(StrTop)_{/Ran(M)}). Every Gr_k is a colimit of objects belonging to StrTop_con ⊂ StrTop.
Remark 2.2.4. This is the combination of Lemma 2.1.27 and the following result:
Proposition 2.2.5. Proof. See Section 2.B.3.
Consider Gr_{U,1} and Gr_{V,1} for independent open sets U, V ⊂ Ran(M), and consider the map Gr_{U,1} × Gr_{V,1} → Gr_{U⋆V,1} induced by Remark 2.2.11. Indeed, there are maps
Gr_{Ran,1} ×_{Ran(M)} Gr_{Ran,1} → Gr_{Ran,1} × Gr_{Ran,1}   and   (Gr_{Ran,1} × Gr_{Ran,1})_{disj} → Gr_{Ran,1} × Gr_{Ran,1}
which can be encoded as correspondences from Gr_{Ran,1} × Gr_{Ran,1} to Gr_{Ran,1} ×_{Ran(M)} Gr_{Ran,1} and to (Gr_{Ran,1} × Gr_{Ran,1})_{disj} respectively. Note that the context of correspondences is very useful here to encode this "restriction" procedure.
Formally, these "restrictions" are obtained by forgetting both structures to StrTop. Indeed, the forgetful functor StrTop_{/Ran(M)} → StrTop induces a functor Corr(PSh(StrTop)_{/Ran(M)}) → Corr(PSh(StrTop)), which is lax monoidal with respect to both ×_{Ran(X)} and ⊙. By this argument, the functor obtained from
ActGr(-, -)^{⊙,×} : Fact(M)^⊗ × E^{nu}_1 → Corr(PSh(Act_con StrTop_{/Ran(M)}))^{⊙,×}
by composition with Corr(PSh(Act_con StrTop_{/Ran(M)}))^{⊙,×} → Corr(PSh(Act_con StrTop))^× is a functor
ActGr(-, -)^× : Fact(M)^⊗ × E^{nu}_1 → Corr(PSh(Act_con StrTop))^×
which is lax monoidal in both variables.
Nagata compactification and proper pushforward
Consider the classes of morphisms in StrTop given by horiz = all, vert = all, adm = open ⊂ horiz, co-adm = proper ⊂ vert (note that we need adm ⊂ horiz and co-adm ⊂ vert instead of the converse, because we are performing a kind of construction "dual" to that used for IndCoh in [START_REF] Gaitsgory | A Study in Derived Algebraic Geometry[END_REF]).
Step 2: We want to apply [GR17, Chapter 7, Theorem 5.2.4] and extend our functor to Corr(StrTop)^{open}_{all,all} ⊃ Corr(StrTop)^{isom}_{all;proper}. This extends to a functor (Cons^{op})_{all;proper} : Corr(StrTop)^{proper}_{all;proper} → Cat^{2-cat,op}_∞, or equivalently a functor Cons_{all;proper} : (Corr(StrTop)^{proper}_{all;proper})^{op} → Cat^{2-cat}_∞. Now we restrict this functor to the (∞,1)-category (Corr(StrTop^{op})^{isom}_{all;proper})^{op}, which is equivalent to the more familiar Corr(StrTop)^{isom}_{proper;all} (horizontal and vertical maps are interchanged while considering the opposite of the correspondence category). We thus obtain an (∞,1)-functor
Cons_{proper,all} : Corr(StrTop)^{isom}_{proper,all} → Cat_∞.   (2.A.3)
All algebraic actions with finitely many orbits induce a Whitney stratification by orbits, and hence their analytic counterparts lie in Act_con(Top). Also, by [use], the analytification of algebraic varieties is locally of singular shape. Formally, this means that the functor strtop induces a functor ActStrSch_C → Act_con StrTop.
Remark 2.A.20. There exist functors
• Act_con(StrTop) → StrTop_con (which remembers only the stratification in H-orbits);
• Act_con(StrTop) → Sh_loc(Top), sending (Y, H, Φ) to the quotient Y/H defined as the colimit of the usual diagram in the category of sheaves. Note that here the stratification is forgotten.
All the preceding constructions can be extended to presheaves, by replacing StrSch_C by PSh(StrSch_C) and StrTop by PSh(StrTop).
2.A.14: the application of [GR17, Chapter 7, Theorem 5.2.4] provides us with an (∞,2)-functor from Corr(Top)^{open}_{all,all} to Cat^{2-cat}_∞, which we restrict to an (∞,1)-functor from Corr(Top)^{isom}_{all,all} to Cat_∞.
Equivariant constructible sheaves
Definition 2.A.17. Let ActStrSch_C be the category with the following objects: {H a group scheme over C, (Y, P, s) a stratified variety of finite type over C, Φ an action of H on Y, such that P is finite and the strata of (Y, P, s) coincide with the orbits of Φ}, and the following morphisms: {f : H → H′ a morphism of group schemes, g : Y → Y′ an f-equivariant morphism of schemes}.
We take the definition of "conically stratified space" [...].
Definition 2.A.18. Let Act_con StrTop be the category with the following objects: {H a topological group, (Y, P, s) a locally compact conically stratified topological space locally of singular shape, Φ an action of H on Y such that the strata of (Y, P, s) coincide with the orbits of Φ, and P satisfies the ascending chain condition}, and morphisms analogous to the previous definition.
Remark 2.A.19.
the proof is complete.
Lemma 3.2.2. Any open subset of a Whitney stratified manifold inherits a natural Whitney stratification by restriction.
Proof. Unlike the previous lemma, this is just a direct verification allowed by the fact that tangent spaces to open subsets of strata coincide with the tangent spaces to the original strata. One can also apply the more general and very useful argument appearing in [START_REF] Gibson | Topological Stability of Smooth Mappings[END_REF], (1.3), (1.4) and the discussion below].
Lemma 3.2.3 (Thom's first isotopy lemma, [Mat72, (8.1)]). Let f : X → Y be a C² mapping, and let A be a closed subset of X which admits a C² Whitney stratification S. Suppose f|_A : A → Y is proper and that for each stratum U of S, f|_U : U → Y is a submersion. Then f|_A : A → Y is a locally trivial fibration.
• For every morphism f : Y → ΣZ in C, there exists a pullback and pushout square

    X --------> 0
    | f′        |
    v           v
    Y ---f----> ΣZ.

Proposition 4.1.7 ([Lur18, Proposition C.1.2.9]). Let C be a presentable ∞-category. Then the following conditions are equivalent:
• C is prestable and has finite limits.
• There exists a stable ∞-category D equipped with a t-structure (D_{≥0}, D_{≤0}) and an equivalence C ≃ D_{≥0}.
Proposition 4.1.8 ([Lur18, Proposition C.1.4.1]). Let C be a presentable ∞-category. Then the following conditions are equivalent:
• C is prestable and filtered colimits in C are left exact.
• There exists a stable ∞-category D equipped with a t-structure (D_{≥0}, D_{≤0}) compatible with filtered colimits, and an equivalence C ≃ D_{≥0}.
Definition 4.1.9 ([Lur18, Definition C.1.4.2]). Let C be a presentable ∞-category. We will say that C is Grothendieck if it satisfies the equivalent conditions of Proposition 4.1.8. Following [Lur18, Definition C.3.0.5], we denote the ∞-category of Grothendieck presentable ∞-categories (and colimit-preserving functors between them) by Groth_∞. We also denote the category of presentable stable ∞-categories (and colimit-preserving functors between them) by Pr^L_St.
Remark 4.1.10. By [Lur17] and [Lur18, Theorem C.4.2.1], both Pr^L_St and Groth_∞ inherit a symmetric monoidal structure from Pr^L, which we denote again by ⊗.
Definition 4.1.11. Let X be a qcqs scheme. An O_X-linear prestable ∞-category is an object of LinCat_PSt(X) := Mod_{QCoh(X)_{≥0}}(Groth_∞^⊗). A stable presentable O_X-linear ∞-category is an object of LinCat_St(X) := Mod_{QCoh(X)}(Pr^{L,⊗}_St).
Remark 4.1.12. There exists a stabilization functor st_X : LinCat_PSt(X) → LinCat_St(X), induced by the usual stabilization procedure.
Remark 4.1.13. The category LinCat_PSt(X) has a tensor product -⊗_{QCoh(X)_{≥0}}- (which we will abbreviate by ⊗), induced by the Lurie tensor product of presentable ∞-categories. See [Lur18, Theorem C.4.2.1] and [Lur18, Section 10.1.6] for more details. The same is true for LinCat_St(X). The stabilization functor LinCat_PSt(X) → LinCat_St(X) is symmetric monoidal with respect to these structures.
LinCat_St(X) and LinCat_PSt(X) satisfy a very important "descent" property, which is what Gaitsgory [Gai15] calls 1-affineness.
Construction 4.1.14. The functors St and PSt, defined over CAlg_k ([Lur18, discussion before Theorem 10.2.0.1]).
Theorem 4.1.15. Let X be a qcqs scheme over k. Then the global sections functors
QStk_St(X) → LinCat_St(X)
QStk_PSt(X) → LinCat_PSt(X)
are symmetric monoidal equivalences.
Proof. For the stable part, this is [Lur11, Proposition 6.5]. For the prestable part, this is the combination of [Lur18, Theorem 10.2.0.2] and [Lur18, Theorem D.5.3.1]. Symmetric monoidality follows from the fact that the inverse of the global sections functor (the "localization functor", see [BP21, Section 2.3]) is strong monoidal.
By Theorem 4.1.15, every side of the sought equivalence (4.2.1) is the ∞-category of global sections of a sheaf in categories over X, which means that the equivalence is étale-local on X. Therefore, by choosing a covering U → X which trivializes both G and G′, we can reduce to the case G = U × BG_m → U, G′ = U × BG_m → U, both maps being the projection to U. But by [BP21, Corollary 5.6], QCoh(U × BG_m) ≃ QCoh(U) × QCoh(BG_m), and this proves our claim. Finally, t-exactness follows from the Künneth formula. Now we prove that the restriction of the functor (4.2.1) to QCoh_χ
About this, I must say that I often wished I knew better the English language, to whose inclination to a certain somberness I owe the tone of this page; I hope that my next things will be, if not more interesting, better written.
In this thesis, we adopt the perspective of ∞-categories, which is systematically exposed in[START_REF] Lurie | Higher Topos Theory[END_REF]. Both dg-categories and ∞-categories are ways to encode the idea of "categories with a notion of homotopy and homotopy equivalence" in a way that is particularly useful to deal with derived categories and homotopy theory. One of the simplest formulations of this concept is the notion of categories enriched in topological spaces or simplicial sets.
To be precise, their notion of "conically stratified space" is slightly different from that of[START_REF] Lurie | Higher Algebra[END_REF], and is defined by induction.
In order for a map to preserve perverse sheaves, one usually assumes that it is semi-small and proper.
Here E1 would stand for an "associativity" coming from some convolution diagram, and E4 would come from the real dimension of S, just like E2 comes from the real dimension of the curve X in our construction.
Other authors, however, use this name for the whole H 2 (X, Gm).
See Section 4.1.2 for a definition of OX -linear category in the ∞-categorical setting.
Our notion of constructible sheaf does not require finitely dimensional stalks. However, the full subcategory obtained by imposing this condition (which is the one usually considered) is closed under the product law that we describe.
We replace the notation "G O " with "Arc" for typographical reasons.
Note that this is true only for points of Ran(M ) coming from single points of M . If we allow the cardinality of the system of points to vary, stalks may take different values. In fact, the factorization property tells us that a system of cardinality m will give the m-ary tensor product in C of the stalk at the single point.
X and Y are said to satisfy Whitney's condition A if this is satisfied for any (xi) ⊂ X tending to y ∈ Y . The space is said to satisfy Whitney's condition A if every pair of strata satisfies it.
Idem.
Also, we usually identify this subspace of M with its preimage in the "abstract" tubular neighbourhood Bε ⊂ E.
With respect to the Whitney stratification induced on A, see Lemma 3.2.1.
One may use Example 3.2.5 as a guiding example, with m a point on {x = 0} \ {(0, 0)}.
In fact, one would need to use the construction of the "opposite of an ∞-operad", see [Bea], [START_REF] Lurie | Higher Algebra[END_REF], Remark 2.4.2.7.
Acknowledgements
Definition 4.1.18. Let X be a quasicompact quasiseparated scheme, and M be an element of Br(X). Then we define Triv_{≥0}(M) as the sheaf of categories S ↦ Equiv_{QCoh(S)_{≥0}}(QCoh(S)_{≥0}, M(S)).
Finally, observe that both Br(X) and Ger Gm (X) have symmetric monoidal structures: on the first one, we have the tensor product ⊗ inherited by LinCat PSt (X) (Remark 4.1.13), and on the second one we have the rigidified product ⋆ of G m -gerbes. Although the two structures are well-known to experts, we will recall them in Section 4.2.1.
We are now ready to formulate our main theorem.
Theorem 4.1.19. Let X be a qcqs scheme over a field k. Then there is a symmetric monoidal equivalence of 2-groupoids Φ : Br(X)^⊗ ⇆ Ger_{G_m}(X)^⋆ : Ψ, where
• Φ(M) = Triv_{≥0}(M);
• Ψ(G) = QCoh_id(G)_{≥0}.
Remark 4.1.20. By taking π_0, one obtains an isomorphism of abelian groups dBr(X) ≃ H²(X, G_m). This isomorphism appears also in [START_REF] Lurie | Spectral Algebraic Geometry[END_REF], Example 11.5.7.15], but as mentioned before, it is there a consequence of an equivalence of ∞-categories whose proof never refers to the notion of G_m-gerbe.
Study of the derived Brauer map
Symmetric monoidal structures
We begin by describing the symmetric monoidal structure on the category of G_m-gerbes. Let X be a scheme and G_1 and G_2 be two G_m-gerbes on X. One can construct the product G_1 ⋆ G_2, which is a G_m-gerbe such that its class in cohomology is the product of the classes of G_1 and G_2 (see [START_REF] Binda | GAGA problems for the Brauer group via derived geometry[END_REF]Conjecture 5.23]). Clearly, this is not enough to define a symmetric monoidal structure on the category of G_m-gerbes. The idea is to prove that this ⋆ product has a universal property in the ∞-categorical setting, which allows us to define the symmetric monoidal structure on the category of G_m-gerbes using the theory of simplicial colored operads and ∞-operads (see Chapter 2 of [START_REF] Lurie | Higher Algebra[END_REF]). Construction 4.2.1. Let AbGer(X) be the (2, 1)-category of abelian gerbes over X and AbGr(X) the (1, 1)-category of sheaves of abelian groups over X. We have the so-called banding functor Band : AbGer(X) → AbGr(X), see for example Chapter 3 of [START_REF] Bergh | Decompositions of derived categories of gerbes and of families of Brauer-Severi varieties[END_REF]. It is easy to prove that Band is symmetric monoidal with respect to the two Cartesian symmetric monoidal structures of the source and target, that is, it extends to a symmetric monoidal functor Band : AbGer(X)^× → AbGr(X)^× of colored simplicial operads (and therefore also of ∞-operads).
Recall that, given a morphism of sheaves of groups ϕ : µ → µ′ and a µ-gerbe G, we can construct a µ′-gerbe, denoted by ϕ_*G, and a morphism ρ_ϕ : G → ϕ_*G whose image through the banding functor is exactly ϕ. This pushforward construction is essentially unique and verifies weak functoriality. This follows from the following result: if G is a gerbe banded by µ, then the induced banding functor
is an equivalence and the pushforward construction is an inverse (see [BS19, Proposition 3.9]). This also implies that Band is a coCartesian fibration. Let Fin_* be the Segal category of pointed finite sets. Consider now the morphism Fin_* → AbGr(X)^× induced by the algebra object G_{m,X} in AbGr(X)^×. We can consider the following pullback diagram
thus B is a coCartesian fibration. This implies that G determines a symmetric monoidal structure on the fiber category
We claim that this symmetric monoidal structure coincides with the ⋆ product of gerbes defined in [START_REF] Bergh | Decompositions of derived categories of gerbes and of families of Brauer-Severi varieties[END_REF]. In fact, following the rigidification procedure in [Ols16, Exercise 12.F] and [AOV08, Appendix A], we can define the ⋆ product as the pushforward along the multiplication map m : G_m × G_m → G_m.
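In formulas, and only as a sketch of the construction just described, the rigidified product of two G_m-gerbes G_1, G_2 over X reads
G_1 ⋆ G_2 ≃ m_*(G_1 ×_X G_2),    with    [G_1 ⋆ G_2] = [G_1] · [G_2] in H²(X, G_m),
where m : G_m × G_m → G_m is the multiplication map and m_* is the pushforward of gerbes along m recalled above.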
Remark 4.2.2. Let us describe G as a simplicial colored operad. The objects (or colors) of G are G_m-gerbes over X. Let {G_i}_{i∈I} be a sequence of objects indexed by a finite set I and H another object; we denote by ∏_{i∈I} G_i the fiber product of the G_i over X. The simplicial set of multilinear maps Mul({G_i}_{i∈I}, H) is the full subcategory of the 1-groupoid of morphisms Map_X(∏_{i∈I} G_i, H) of gerbes over X such that its image through the banding functor is the n-fold multiplication map of G_m, where n is the cardinality of I. Note that because the simplicial sets of multilinear maps are Kan complexes by definition, they are fibrant simplicial sets. This is why the operadic nerve gives us a symmetric monoidal structure, see [Lur17].
Next, we will describe the symmetric monoidal structure on Br(X). We know that it is the restriction of the symmetric monoidal structure of LinCat_PSt(X), see Remark 4.1.13.
Remark 4.2.5. We now describe explicitly the symmetric monoidal structure of LinCat_St(X) using the language of simplicial colored operads. The same description will apply to the prestable case. This will come out useful in the rest of the chapter.
The simplicial colored operad LinCat St (X) can be described as follows:
1. the objects (or colors) are stable presentable ∞-categories which are modules over QCoh(X) ( see [Lur17, Section 4.5] for the precise definition of module over an algebra object);
2. given {M i } i∈I a sequence of objects indexed by a finite set I and N another object, the simplicial set of multilinear maps Mul({M i } i∈I , N ) is the mapping space Fun L ( i∈I M i , N ) of functors of stable presentable categories which are QCoh(X)-linear and preserve small colimits separately in each variable.
Derived categories of twisted sheaves
Recall from Section 4.1.1 the definition of QCoh_id(G) and QCoh_id(G)_{≥0}. Our aim in this subsection is to prove the following statement:
Theorem 4.2.6. Let X be a qcqs scheme. The functors
QCoh_id(-) : Ger_{G_m}(X)^⋆ → LinCat_St(X)^⊗    and    QCoh_id(-)_{≥0} : Ger_{G_m}(X)^⋆ → LinCat_PSt(X)^⊗
carry a symmetric monoidal structure with respect to the ⋆-symmetric monoidal structure on the left hand side and to the ⊗-symmetric monoidal structures on the right hand sides. In particular, since every G_m-gerbe is ⋆-invertible and Ger_{G_m}(X) is a 2-groupoid, QCoh_id(-) takes values in Br†(X) and QCoh_id(-)_{≥0} takes values in Br(X).
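Unwinding what symmetric monoidality means here, Theorem 4.2.6 provides in particular, for any two G_m-gerbes G_1 and G_2 over X, natural equivalences (stated here only as a sketch, in the notation above)
QCoh_id(G_1 ⋆ G_2) ≃ QCoh_id(G_1) ⊗ QCoh_id(G_2)    and    QCoh_id(G_1 ⋆ G_2)_{≥0} ≃ QCoh_id(G_1)_{≥0} ⊗ QCoh_id(G_2)_{≥0},
the tensor products being taken in LinCat_St(X) and LinCat_PSt(X) respectively.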
Remark 4.2.7. Notice that we need to make explicit how QCoh_id(-) (or QCoh_id(-)_{≥0}) acts on the mapping spaces as well. We decided to stick with the covariant notation, i.e. we define the image of a morphism of gerbes f as the pushforward f_*. We omit the proof of the fact that f_* sends twisted sheaves to twisted sheaves, which follows from a straightforward computation. Although there are no real issues with the contravariant notation (considering pullbacks instead of pushforwards), there are some technical complications when one wants to prove that a contravariant functor is symmetric monoidal. Note that, if f is any morphism of gerbes, then f^* = (f_*)^{-1}.
We will prove Theorem 4.2.6 in two different steps.
is a morphism between the two diagrams, and therefore there exists a unique (up to homotopy) morphism C⁰_1 → C⁰_2 compatible with all the data. If G is a G_m-gerbe and χ is a character of G_m, the endofunctor (-)_χ of QCoh(G) and the natural transformation i_χ introduced in Remark 4.1.4 form a diagram as above. In this situation, QCoh_χ(G) is exactly the equalizer.
Proposition 4.2.9. Let X be a quasicompact quasiseparated scheme over a field k. Let G, G′ → X be two G_m-gerbes over X, and χ, χ′ two characters of G_m. The external tensor product establishes a t-exact equivalence
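Schematically, and leaving the role of the natural transformation i_χ implicit, the last statement before Proposition 4.2.9 can be recorded as
QCoh_χ(G) ≃ Eq( QCoh(G) ⇉ QCoh(G) ),
the two parallel functors being the identity and the endofunctor (-)_χ.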
Proof of the equivalence
The goal of this subsection is to prove that the constructions
establish a (symmetric monoidal) categorical equivalence between the 2-groupoids Br(X) and Ger_{G_m}(X), thus proving Theorem 4.1.19.
Proposition 4.2.18. The functor Triv_{≥0}(-) : Br(X) → Ger_{G_m}(X) is fully faithful.
Proof. We want to prove that for any M, M′ ∈ Br(X) the map
is a homotopy equivalence of spaces (more precisely, an equivalence of 1-groupoids). First of all, by using the grouplike monoid structure of Br(X) together with Lemma 4.2.17, we can reduce to the case M′ = 1.
If M does not lie in the connected component of 1, then what we have to check is that Map Ger Gm (X) (Triv ≥0 (M ), Triv ≥0 (1)) = ∅.
But if this space contained an object, then in particular we would have an equivalence at the level of global sections between Equiv QCoh(X) ≥0 (M, 1) and Equiv QCoh(X) ≥0 (1, 1). But the first space is empty by hypothesis, while the second is not. On the other hand, if M lies in the connected component of 1, by functoriality of Triv ≥0 (-) we can suppose that M = 1. In this case, we have to prove that the map
is an equivalence of groupoids. But the first space is the groupoid Pic(X), the latter is the space of maps X → BG_m, and the composite map is the one sending a line bundle over X to the map X → BG_m that classifies it.
Proposition 4.2.19. Let X be a qcqs scheme. Then there is a natural equivalence of functors
Proof. Let G be a G m -gerbe over X. Let us observe that for any S → X we have
Indeed, there is an evident map of stacks over X F : Equiv Ger Gm (X × BG m , G) → G and up to passing to a suitable étale covering of X, the map becomes an equivalence: in fact the choice of an equivalence ϕ : S × BG m → S × BG m of gerbes over S amounts to the choice of a map S → BG m , because ϕ must be a map over S (hence pr BGm • ϕ = pr BGm ) and it must respect the banding . Therefore, F is an equivalence over X, and this endows Equiv Ger Gm (X × BG m , G) with a natural structure of a G m -gerbe over X.
The construction ϕ → ϕ * thus provides a morphism of stacks
Now since the pushforward ϕ_* of an equivalence ϕ of stacks is t-exact, the construction ϕ ↦ ϕ_* factors through the stabilization map Equiv_{QCoh(X)_{≥0}}(QCoh_id(X × BG_m)_{≥0}, QCoh_id(G)_{≥0}) → Equiv_{QCoh(X)}(QCoh_id(X × BG_m), QCoh_id(G)).
Note that the stabilization map makes sense in the id-twisted context since the stabilization functor has descent (it is the limit of iterated loops), and thus we can reduce to the case when G is trivial.
We have therefore obtained a map of stacks over X: G → Equiv_{QCoh(X)_{≥0}}(QCoh(X)_{≥0}, QCoh_id(G)_{≥0}) = Triv_{≥0}(QCoh_id(G)_{≥0}).
But now, Triv ≥0 (QCoh id (G)) is a G m -gerbe over X (note that this was not true before passing to the connective setting), and therefore to prove that the map is an equivalence it suffices to prove that it agrees with the bandings, that is, that it is a map of G m -gerbes. This follows from unwinding the definitions.
We are now ready to prove our main result.
Proof of Theorem 4.1.19: Section 4.2.2 and Section 4.2.3 tell us that the two functors are symmetric monoidal and take values in the sought ∞-categories. The fact that they form an equivalence follows from Proposition 4.2.18 and Proposition 4.2.19. |
04107248 | en | [
"math.math-oc",
"info.info-dc"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04107248/file/2205.15015.pdf | Hadrien Hendrikx
email: [email protected]
A principled framework for the design and analysis of token algorithms
We consider a decentralized optimization problem, in which n nodes collaborate to optimize a global objective function using local communications only. While many decentralized algorithms focus on gossip communications (pairwise averaging), we consider a different scheme, in which a "token" that contains the current estimate of the model performs a random walk over the network, and updates its model using the local model of the node it is at. Indeed, token algorithms generally benefit from improved communication efficiency and privacy guarantees. We frame the token algorithm as a randomized gossip algorithm on a conceptual graph, which allows us to prove a series of convergence results for variance-reduced and accelerated token algorithms for the complete graph. We also extend these results to the case of multiple tokens by extending the conceptual graph, and to general graphs by tweaking the communication procedure. The reduction from token to well-studied gossip algorithms leads to tight rates for many token algorithms, and we illustrate their performance empirically.
Introduction
Modern machine learning relies on increasingly large models that train on increasingly large datasets: distributed optimization is thus crucial to scaling the training process. In the centralized paradigm, at the heart of Federated Learning [START_REF] Kairouz | Advances and open problems in federated learning[END_REF], the system relies on a server that aggregates models and gradients, and manages the nodes. Although quite efficient, this setting has several drawbacks: (i) nodes need to trust the server enough to send it sensitive data, (ii) there is a communication bottleneck at the server, which limits scaling and (iii) training stops if the server fails.
In the decentralized setting [START_REF] Boyd | Randomized gossip algorithms[END_REF][START_REF] Cassio | Incremental adaptive strategies over distributed networks[END_REF][START_REF] Shi | Extra: An exact first-order algorithm for decentralized consensus optimization[END_REF][START_REF] Nedic | Achieving geometric convergence for distributed optimization over time-varying graphs[END_REF], nodes are linked by a communication graph, and directly communicate with their neighbours in this graph instead of a central coordinator. This allows for better scaling, and is also more robust since the server is no longer the single point of failure. Yet, due to the lack of coordination, decentralized algorithms often require many peer-to-peer communications compared to centralized ones, and a gain in privacy is not always guaranteed.
Token, or random-walk algorithms [START_REF] Bertsekas | A new class of incremental gradient methods for least squares problems[END_REF][START_REF] Sundhar | Incremental stochastic subgradient algorithms for convex optimization[END_REF][START_REF] Johansson | A randomized incremental subgradient method for distributed optimization in networked systems[END_REF][START_REF] Suhail | Linearly convergent asynchronous distributed admm via markov sampling[END_REF][START_REF] Mao | Walkman: A communication-efficient random-walk algorithm for decentralized optimization[END_REF], work in the following way: a token owns an estimate of the model, "walks" over the graph, and sequentially visits nodes. When the token is held by a node, it updates its model, either by computing a gradient using the node's data, or by using the local model of the node. Then, the token is transmitted (or "jumps") to a new node. Some instantiations of these algorithms can be seen as a middle-point between centralized and decentralized algorithms. Indeed, the token plays the role of a server, since it owns the global model and receives updates from nodes. Yet, the token is no longer attached to a physical node as in the centralized case, but rather exchanged between computing nodes in a decentralized way.
Besides, unlike standard centralized algorithms, each node may maintain a local parameter and update it using local updates. In that sense, some token algorithms (such as the ones developed in this work) closely resemble local methods, that are very popular in federated learning [START_REF] Mcmahan | Communication-efficient learning of deep networks from decentralized data[END_REF], Stich, 2018[START_REF] Lin | Don't use large mini-batches, use local SGD[END_REF]. The main difference is that instead of exchanging information through periodic exact averaging or gossip steps, communication is ensured through the roaming token. This allows to easily adapt the algorithms to the features of the system, by making either more local steps or more communication steps.
Related work
Many early works study token (or random-walk) algorithms [START_REF] Bertsekas | A new class of incremental gradient methods for least squares problems[END_REF][START_REF] Nedic | Incremental subgradient methods for nondifferentiable optimization[END_REF][START_REF] Sundhar | Incremental stochastic subgradient algorithms for convex optimization[END_REF][START_REF] Johansson | A randomized incremental subgradient method for distributed optimization in networked systems[END_REF]]. Yet, they focus on stochastic (sub)gradients algorithms, and thus lack linear convergence guarantees. The recent literature on token algorithms can be divided into two main lines of work that reflect the two main strengths of token algorithms: communication efficiency and privacy preservation.
Communication efficiency. [START_REF] Mao | Walkman: A communication-efficient random-walk algorithm for decentralized optimization[END_REF] introduce Walkman, a token algorithm based on an augmented Lagrangian method. Walkman works for general graphs and is shown to be communication-efficient provided graphs are well-connected enough. Yet, it only obtains linear convergence on least squares problems. When Walkman uses gradients (instead of proximal operators), it requires a step-size inversely proportional to the square of the smoothness constant of the problem, which is impractical. Variants of Walkman guarantee communication efficiency when walking over Hamiltonian cycles [START_REF] Mao | Walk proximal gradient: An energy-efficient algorithm for consensus optimization[END_REF]. [START_REF] Balthazar | Distributed linear estimation via a roaming token[END_REF] consider the problem of distributed linear estimation, and use a token algorithm to aggregate the measurements of all nodes.
Multiple tokens. When a single token walks the graph, there are no parallel communications. A natural fix to speed up algorithms is to allow multiple tokens to walk the graph in parallel, as recently done by Chen et al. [2022], whose approach is also based on an augmented Lagrangian method.
Privacy Preservation. The favorable privacy guarantees claimed by decentralized algorithms are actually mainly proven for token algorithms. For instance, the Walkman algorithm presented above has also been extended to guarantee privacy preservation [START_REF] Ye | Incremental admm with privacypreservation for decentralized consensus optimization[END_REF]. Besides, [START_REF] Cyffers | Privacy Amplification by Decentralization[END_REF] show that token algorithms satisfy a relaxation of local differential privacy, and match the guarantees offered by a trusted central server. They give a simple algorithm for ring and complete topologies. Similarly, [START_REF] Bellet | Who started this rumor? quantifying the natural differential privacy of gossip protocols[END_REF] study the privacy guarantees of a rumour spreading algorithm, and show that a single token spreading the rumour is optimal, while multiple tokens achieve optimal trade-offs between privacy and speed. In this work, we focus on the convergence guarantees of token algorithms, and leave the privacy preservation guarantees for future work.
Dual Decentralized algorithms. Our framework is based on applying the dual approach for decentralized algorithms [START_REF] Jakovetić | Linear convergence rate of a class of distributed augmented lagrangian algorithms[END_REF][START_REF] Boyd | Distributed optimization and statistical learning via the alternating direction method of multipliers[END_REF] to the analysis of token algorithms. This dual approach leads to very fast algorithms, and in particular [START_REF] Scaman | Optimal algorithms for smooth and strongly convex distributed optimization in networks[END_REF], [START_REF] César A Uribe | A dual approach for optimal algorithms in distributed optimization over networks[END_REF] used it to develop optimal decentralized algorithms. Then, Hendrikx et al. [2019a] showed that it can also be used to accelerate randomized gossip, and used an augmented graph formulation to obtain decentralized variance-reduced extensions [Hendrikx et al., 2019b[START_REF] Hendrikx | An optimal algorithm for decentralized finite sum optimization[END_REF][START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF].
Our contributions
As discussed in the previous section, token algorithms are still rare, and very few algorithms offer linear convergence guarantees, let alone integrating more advanced optimization tricks such as variance reduction or acceleration. Besides, most of the literature focuses on only one token, which is communication-efficient but very slow. In this work, we pave the way for the design and analysis of new efficient token algorithms, and in particular we:
1. Introduce a general framework for designing and analyzing token algorithms.
2. Give a simple algorithm with linear convergence guarantees on complete graphs that match those of both centralized and decentralized (gossip) optimization.
3. Speed up this simple algorithm by using multiple tokens.
4. Leverage the general framework to analyze variants of the simple token algorithm, such as stochastic gradients with variance reduction and acceleration.
The general framework is based on the dual approach for decentralized algorithms [START_REF] Jakovetić | Linear convergence rate of a class of distributed augmented lagrangian algorithms[END_REF][START_REF] Scaman | Optimal algorithms for smooth and strongly convex distributed optimization in networks[END_REF], and in particular [START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF], so the algorithmic core is similar, namely Bregman coordinate descent (with some adaptations) for the simple and variance-reduced algorithms, and Accelerated Proximal Coordinate Descent [START_REF] Lin | An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization[END_REF] for the accelerated one.
2 Conceptual graph approach for token algorithms.
Building the conceptual graph
We consider the following distributed problem, where each f i is a local function at node i:
min_{θ ∈ R^d}  Σ_{i=1}^n ( f_i(θ) + (σ/2) ‖θ‖² ).   (1)
We assume that each f_i is convex and L-smooth over R^d, which, if f_i is twice differentiable, writes 0 ⪯ ∇²f_i(x) ⪯ L I_d for all x, where I_d ∈ R^{d×d} is the identity matrix of dimension d. The condition number of this problem is κ = 1 + L/σ. The key idea of this paper is to reduce the analysis of token algorithms to that of standard decentralized gossip algorithms on conceptual graphs. We follow the dual approach for building decentralized optimization algorithms, and rewrite Problem (1) as:
min_{θ ∈ R^{n×d}, u ∈ R^{n×d}, v ∈ R^d : ∀i, θ^{(i)} = u^{(i)} and u^{(i)} = v}   Σ_{i=1}^n [ f_i(θ^{(i)}) + (nσ/(2(n+1))) ‖u^{(i)}‖² ] + (nσ/(2(n+1))) ‖v‖².   (2)
To write this reformulation, we have applied the consensus constraints (equality constraints for neighbours) given by the conceptual graph represented in Figure 1 (right). To build this conceptual graph, we add a conceptual node (with its own parameter) corresponding to the token, with local objective nσ‖·‖²/(2(n+1)), and we split all local nodes into a computation part (that contains f_i) and a communication part (that contains nσ‖·‖²/(2(n+1))). Note that the total regularization weight is still nσ/2. Then, all computation nodes are linked to their respective communication nodes, which are themselves linked to the token. Splitting each node between a communication and a computation part has two benefits: (i) it allows us to use the dual-free trick from [START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF], and obtain primal updates despite the dual approach, and (ii) it allows decoupling communications and computations. In particular, nodes can perform local steps even when they don't hold the token.
Now that we have defined the framework, it is important to make sure that this corresponds to a token algorithm. Updating the edge between the token and node i at time t in the conceptual graph means that the token jumped to node i at team t. Thus, to ensure the token aspect, we must enforce that if the edge between the token and node i is updated at time t, and the edge between the token and node j is updated at time t + 1, then node j has to be a neighbour of node i (since the token jumped from i to j at time t + 1). We apply the dual approach to Problem (2), which is inherited from the conceptual graph, but the sampling of the edges is ruled by the actual communication graph. In a complete graph, this does not impose any additional constraints, and this is why our convergence results are initially derived in this setting. In arbitrary graphs, this means that the sampling of the edges (and so the coordinate descent algorithm applied to the dual formulation) must follow a Markov Chain,
Figure 2: An example execution of the token algorithm on the base graph (left), and the corresponding edges updated in the conceptual graph (right). The sequence is: t = 0: local computation at node 2; t = 1: the token jumps to node 2; t = 2: the token jumps to node 3; t = 3: local computation at node 0. Note that the updates at t = 2 and t = 3 can actually be performed in parallel since they affect different nodes, and that the token updates its estimate with the node it arrives at after each jump.
which leads to considerably harder analyses. In Section 3.4, we present a trick to circumvent this difficulty, which consists in not performing the update step every time the token jumps to a new node.
We will now show that this conceptual graph view allows to efficiently design fast algorithms, by making clear links with dual approaches for decentralized optimization. This will prove especially useful in the next section, when we introduce multiple tokens, variance-reduction and acceleration.
In the basic case (one token, full gradients), Equation (2) resembles the consensus formulation of Walkman [START_REF] Mao | Walkman: A communication-efficient random-walk algorithm for decentralized optimization[END_REF], which is also obtained through a (primal-)dual formulation. Yet, we split each node into two subnodes to allow for local steps, and we can then harness the power of the dual approach for decentralized optimization to significantly improve the base algorithm, as done in Section 3.
Deriving the single token algorithm
Following [START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF], we take a dual formulation of Problem (2), and apply Bregman block coordinate descent to obtain the simple token algorithm, which corresponds to Algorithm 1 with K = 1. This leads to dual-free updates [START_REF] Lan | An optimal randomized incremental gradient method[END_REF], which are simple to implement. Yet, in [START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF], all communication edges are sampled at once, which would mean that the token receives updates from all nodes at the same time. This is not possible in our case, so we adapted the Bregman block coordinate descent algorithm to better fit the structure of Problem (2), as detailed in Appendix A.
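To fix ideas, here is a schematic form of the dual problem on which the (adapted) Bregman block coordinate descent is run; signs, scalings and the exact parametrization are not tracked here, so the display is only a sketch of the construction detailed in the appendices. Writing ξ = (θ, u, v) and letting A be the incidence-type matrix of the conceptual graph, so that the consensus constraints of Problem (2) read A^⊤ ξ = 0, the dual problem is
min_{λ ∈ R^{E×d}}  F^*(A λ),    where  F(θ, u, v) = Σ_{i=1}^n f_i(θ^{(i)}) + (nσ/(2(n+1))) Σ_{i=1}^n ‖u^{(i)}‖² + (nσ/(2(n+1))) ‖v‖²,
E being the edge set of the conceptual graph. Updating the dual coordinate λ_e attached to an edge e only modifies the primal variables at the two endpoints of e, which is why coordinate descent on this problem translates into token jumps (communication edges) and local gradient-type steps (computation edges).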
Algorithm 1 Token Gradient Descent(z_0)
1: σ̃ = nσ/(n+K), α = 2K/L, η = min( σ̃ p_comm/(2nK), p_comp/(nα(1+L/σ̃)) ), ρ_comm = nKη/(p_comm σ̃), ρ_comp = nαη/p_comp // Init
2: ∀i ∈ [n], θ^{(i)}_0 = -∇f_i(z^{(i)}_0)/σ̃; ∀k ∈ [K], θ^{token,k}_0 = 0 // z_0 is arbitrary but θ_0 is not
3: for t = 0 to T-1 do // Run for T iterations
4:   if communication step (with probability p_comm) then
5:     Pick i ∼ U([n]), k ∼ U([K]) // Choose next node and token uniformly at random
6:     θ^{token,k}_{t+1} = θ^{token,k}_t - ρ_comm (θ^{token,k}_t - θ^{(i)}_t) // Token update
7:     θ^{(i)}_{t+1} = θ^{(i)}_t + ρ_comm (θ^{token,k}_t - θ^{(i)}_t) // Local update
8:   else // computation step (with probability p_comp)
9:     Pick i ∼ U([n])
10:    z^{(i)}_{t+1} = (1 - ρ_comp) z^{(i)}_t + ρ_comp θ^{(i)}_t // Virtual node update
11:    θ^{(i)}_{t+1} = θ^{(i)}_t - (1/σ̃)( ∇f_i(z^{(i)}_{t+1}) - ∇f_i(z^{(i)}_t) ) // Local update using f_i
12: return θ_T
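To make the updates concrete, the following is a minimal, self-contained simulation sketch of Algorithm 1 on a complete graph, assuming quadratic local objectives; the helper names and the test values of (n, d, K, T, σ, L) are illustrative and not part of the analysis.

import numpy as np

rng = np.random.default_rng(0)
n, d, K, T = 10, 5, 2, 20000            # nodes, dimension, tokens, iterations
sigma, L = 1.0, 10.0                     # regularization and target smoothness

A = rng.normal(size=(n, d, d))
H = np.array([a.T @ a for a in A])       # assumed quadratics: f_i(x) = 0.5 x^T H_i x - b_i^T x
H *= L / max(np.linalg.eigvalsh(h).max() for h in H)   # rescale so every f_i is (at most) L-smooth
b = rng.normal(size=(n, d))

def grad(i, x):                          # gradient of f_i at x
    return H[i] @ x - b[i]

# Step-size constants from line 1 of Algorithm 1 (sigma_t stands for the rescaled sigma).
p_comm = p_comp = 0.5
sigma_t = n / (n + K) * sigma
alpha = 2 * K / L
eta = min(sigma_t * p_comm / (2 * n * K), p_comp / (n * alpha * (1 + L / sigma_t)))
rho_comm = n * K * eta / (p_comm * sigma_t)
rho_comp = n * alpha * eta / p_comp

z = np.zeros((n, d))                                             # virtual (computation) variables
theta = np.array([-grad(i, z[i]) / sigma_t for i in range(n)])   # local models
token = np.zeros((K, d))                                         # token models

for t in range(T):
    if rng.random() < p_comm:            # communication: token k is passed to node i
        i, k = rng.integers(n), rng.integers(K)
        diff = token[k] - theta[i]
        token[k] = token[k] - rho_comm * diff
        theta[i] = theta[i] + rho_comm * diff
    else:                                # computation step at a random node i
        i = rng.integers(n)
        z_new = (1 - rho_comp) * z[i] + rho_comp * theta[i]
        theta[i] = theta[i] - (grad(i, z_new) - grad(i, z[i])) / sigma_t
        z[i] = z_new

# For these quadratics, the problem sum_i f_i(x) + (n * sigma / 2) ||x||^2 has the
# closed-form minimizer below; both printed distances should become small.
x_star = np.linalg.solve(H.sum(axis=0) + n * sigma * np.eye(d), b.sum(axis=0))
print(np.abs(theta - x_star).max(), np.abs(token - x_star).max())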
Theorem 1 (Token algorithm). For ε > 0, the number of steps required by Algorithm 1 with a single token (K = 1) and p_comp = p_comm = 1/2 to reach error ‖θ_t - θ^⋆‖² ≤ ε is of order:
T_comp = O( κ log ε^{-1} )   and   T_comm = O( nκ log ε^{-1} ),   (3)
where T comp is the expected number of gradient steps performed by each node, and T comm is the expected number of communication updates (jumps) performed by the token.
Proof sketch. This result follows from the guarantees of Bregman Coordinate Descent applied to a dual reformulation of Problem (2). We need to evaluate the relative strong convexity and directional smoothness constants of the problem, and link them with spectral properties of the conceptual graph, as well as the regularity of the local functions. Details can be found in Appendix B.
Implementation. Algorithm 1 requires sampling updates uniformly at random over the whole system. This can be implemented in a decentralized fashion by sharing a random seed between all nodes. An alternative is that all nodes wake up and perform updates following a Poisson point process.
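As an illustration of the shared-seed option, every node can run the small generator below with the same seed, so that all nodes agree on the global sequence of (communication/computation, node, token) activations without any coordinator; the function and variable names are ours and purely illustrative.

import numpy as np

def schedule(seed, T, n, K, p_comm):
    # yields the globally agreed-upon sequence of updates of Algorithm 1
    rng = np.random.default_rng(seed)
    for _ in range(T):
        if rng.random() < p_comm:
            yield ("comm", int(rng.integers(n)), int(rng.integers(K)))   # token k visits node i
        else:
            yield ("comp", int(rng.integers(n)), None)                   # node i does a local step

# each node iterates over schedule(42, T, n, K, 0.5) and only acts on the events that involve it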
Communication complexity. The total number of communications is of order O(nκ). This matches the complexity of centralized algorithms, in which the server communicates once with each node at each round, and there are O(κ) rounds in total. Yet, in terms of time, the O(n) centralized communications can take place in parallel provided there is enough bandwidth, whereas when K = 1, the O(n) communications from Algorithm 1 must be sequential.
Computation complexity. On average, each node performs O(κ) local computations, which is the "right" complexity for non-accelerated algorithms. Algorithm 1 is as computationally efficient as standard centralized or decentralized algorithms in this sense. Besides, unlike token exchanges that need to be sequential, nodes can compute local gradient updates in parallel. Therefore, the computation time of Algorithm 1 matches the centralized time. Instead, when using gradients, Walkman [START_REF] Mao | Walkman: A communication-efficient random-walk algorithm for decentralized optimization[END_REF], Theorem 1] requires a step-size proportional to L^{-2} just for convergence, which would lead to a significantly worse computation complexity of at least O(Lκ).
Sampling variants.
In Algorithm 1, one computation update corresponds to one node performing a gradient update, without any communication involved. Note that by changing the sampling of the dual coordinates, it is possible to design other algorithms. For instance, we can choose to have nodes perform a local computation only when they receive the token, similarly to Walkman. This would yield a similar rate as the one in Theorem 1, but does not allow for local steps. It would thus not be possible for instance to perform a lot of fast communications, while slow computations take place in parallel. Yet, our framework can also handle this variant (and many others), including with the tricks developed in the next section.
3 Extensions of the single token algorithm
In the previous section, we have introduced the conceptual graph, and showed how it allows to leverage the existing tools from (dual) decentralized optimization to analyze a simple yet already efficient token algorithm. We now demonstrate the flexibility and generality of this framework by introducing, analyzing and combining three important variants of the token algorithm: multiple tokens, variance reduction, and acceleration.
The case of several tokens
When there is a single token walking on the graph, resources are used in an efficient way, and privacy guarantees are strong, but mixing is very slow (up to n times slower in a complete graph for instance). This is due to the fact that there are no parallel communications. One natural solution is to use multiple tokens that walk the graph in parallel. Yet, this is generally harder to analyze and, to the best of our knowledge, there is only limited theory on multi-token algorithms with convergence rates that show an actual improvement over a single token. Our conceptual graph framework allows us to directly extend Theorem 1 to the case of multiple tokens.
To do so, we build a different conceptual graph. Namely, we add one new node for each token, and link all "token nodes" to the actual nodes of the network, as shown in the right part of Figure 3. Then, we apply the dual approach to this new conceptual graph, which has a different topology but which we know how to handle. This is how we obtain Algorithm 1 for the general case of K ≥ 1.
Theorem 2 (Multiple tokens). For ε > 0, the number of steps required by Algorithm 1 with 1 ≤ K ≤ n and p_comp = p_comm = 1/2 to reach error ‖θ^{token,k}_t - θ^⋆‖² ≤ ε is of order:
T_comp = O( κ log ε^{-1} )   and   T_comm = O( (n/K) κ log ε^{-1} ),   (4)
where T comp is the expected number of gradient steps performed by each node, and T comm is the number of jumps performed per token. In particular, the total communication complexity is the same as in Algorithm 1, but now the burden is shared by K tokens that walk the graph in parallel.
Token interactions. In the formulation inherited from Figure 3, tokens interact with nodes, but not between themselves. We can change the formulation to add interactions between tokens by adding edges between them in the conceptual graph. This would mean that the tokens would mix information when they meet. Yet, this would only marginally increase the connectivity of the conceptual graph, and would thus not speedup the algorithm by more than constant factors.
Variance reduction in the finite sum case
We have seen that changing the conceptual graph on which the dual formulation is applied changes the resulting token algorithm. In the previous section, we used this to speed-up communications by having several tokens walk the graph in parallel. We now leverage it to speed-up computations by avoiding local full gradient computations at each node. We now assume that each local objective writes
f_i(x) = Σ_{j=1}^m f_{ij}(x).
In this case, each full gradient requires m stochastic gradient ∇f_{ij} computations, so Algorithm 1 requires O(mκ) stochastic gradient computations per node. Instead, variance reduction techniques [START_REF] Schmidt | Minimizing finite sums with the stochastic average gradient[END_REF][START_REF] Johnson | Accelerating stochastic gradient descent using predictive variance reduction[END_REF][START_REF] Defazio | SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives[END_REF][START_REF] Shalev-Shwartz | Sdca without duality, regularization, and individual convexity[END_REF] only require m + κ_s stochastic gradients, where κ_s = Σ_{j=1}^m (1 + L_{ij}/σ) and L_{ij} is the smoothness of function f_{ij}. Although mκ = κ_s in the worst case (where the ∇f_{ij} are all orthogonal), κ_s is generally smaller than mκ, leading to the practical superiority of finite-sum methods.
In our case, we introduce the finite-sum aspect by combining the conceptual graph with an augmented graph formulation [Hendrikx et al., 2019b[START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF]. Instead of splitting each node into 2 parts, containing respectively f_i and the regularization part, as in Figure 1, we split it into a star subnetwork, with each f_{ij} linked to the regularization part, as shown in the left part of Figure 3. This new conceptual graph leads to a new algorithm, Token Variance Reduction (TVR), that has the following convergence guarantees:
Theorem 3 (Variance Reduction). For ε > 0, the number of steps required by TVR with 1 ≤ K ≤ n and p_comp = κ_s/(m - 1 + κ_s) to reach error ‖θ_t - θ^⋆‖² ≤ ε is of order:
T_comp = O( (m + κ_s) log ε^{-1} )   and   T_comm = O( (n/K) κ_s log ε^{-1} ),   (5)
where T comp is the expected number of stochastic gradient steps performed by each node, and T comm the number of jumps performed per token. Compared to Theorem 2, the computation complexity goes from mκ stochastic gradients (κ full gradients) to m + κ s , which is generally much smaller.
TVR performs the same communication steps as Algorithm 1, with slightly different computation steps, adapted for the stochastic case: there are now m functions f ij (and so parameters z (ij) ), instead of just one. The full algorithm and the proof of Theorem 3 are detailed in Appendix B.2.
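For concreteness, a TVR computation step could look as follows; this is only a sketch, the helper names are ours, and the exact step-size constants are those derived in Appendix B.2.

def tvr_computation_step(i, j, theta, z, grad_ij, sigma_t, rho_comp):
    # node i updates its model using only the j-th component f_ij of its local finite sum;
    # z[i][j] is the virtual variable attached to f_ij and grad_ij(i, j, x) returns its gradient
    z_new = (1 - rho_comp) * z[i][j] + rho_comp * theta[i]
    theta[i] = theta[i] - (grad_ij(i, j, z_new) - grad_ij(i, j, z[i][j])) / sigma_t
    z[i][j] = z_new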
Acceleration
We have seen that the conceptual graph view of token algorithms allows to naturally extend the simple single-token batch algorithm to a multi-token variance-reduced algorithm. We now show that by applying a different optimization algorithm to the same dual formulation, we obtain an accelerated algorithm from the same framework. We refer the reader to Appendix C for details and derivations.
Theorem 4 (Token Accelerated Variance-Reduced). For ε > 0, the number of steps required by Accelerated TVR with 1 ≤ K ≤ n to reach error ‖θ^{token,k}_t - θ^⋆‖² ≤ ε is of order:
T_comp = O( (m + √(m κ_s)) log ε^{-1} )   and   T_comm = O( (n/K) √κ_s log ε^{-1} ).   (6)
In particular, the dependences on the objective regularity are replaced by their accelerated versions. This matches the optimal complexities from Hendrikx et al. [2021].
Proximal oracles. This algorithm is based on Accelerated Proximal Coordinate Gradient [START_REF] Lin | An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization[END_REF], Hendrikx et al., 2019b], and thus uses proximal operators of the functions f_{ij} instead of gradients. Yet, this is also the case of Walkman, and it is quite cheap in case the f_{ij} are generalized linear models of the form f_{ij}(θ) = ℓ(x_{ij}^⊤ θ), where ℓ : R → R.
Continuized framework. TAVR requires each node to perform local convex combinations at each step. This introduces global synchronization constraints, that we can get rid of using a continuized version of the algorithm [START_REF] Mathieu Even | Continuized accelerations of deterministic and stochastic gradient descents, and of gossip algorithms[END_REF].
General graphs
All the results presented so far are for the complete communication graph, meaning that the token can directly jump from any node to any other, allowing to prove strong convergence rates. We now show how to extend these results to general graphs. To do so, we analyze a slightly different communication procedure: instead of just one jump, each token performs N_jumps jumps before averaging with the node it lands at according to Algorithm 1 (lines 6-7). If enough steps are taken, and the underlying Markov Chain is irreducible and aperiodic, then the probability that the token lands at node j from node i after N_jumps steps is approximately equal to π^⋆(j), where π^⋆ is the stationary distribution of a random walk over the communication graph. In particular, by allowing multiple steps before performing the actual communication update, all nodes can be reached from all nodes, as in the complete graph case. The actual sampling probabilities p̃_{i,t} depend on the node at which the token is at time t. Yet, if p̃_{i,t} is close enough to π^⋆(i), we can just adapt the step-sizes in coordinate descent, and obtain convergence regardless. Details can be found in the appendix.
Theorem 5. Assume that the matrix W ∈ R^{n×n} defining the token transitions is such that for any π_0, ‖W^t π_0 - π^⋆‖_∞ ≤ C(1-γ)^t. Then, if the token jumps O(γ^{-1} log(C/(ημ))) times before performing an averaging step, the communication complexity of Algorithm 1 (ignoring log factors) is O( nκ γ^{-1} log ε^{-1} ). Theorem 3 can be adapted in the same way.
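A sketch of the corresponding communication step is given below: the token first walks n_jumps steps along the transition matrix W without updating anything, and only then performs the averaging update of lines 6-7 of Algorithm 1 with the node it lands at. All names are illustrative; n_jumps is of the order prescribed by Theorem 5, and the rows of W are assumed to be probability distributions over neighbours.

import numpy as np

def multi_jump_update(current_node, token_model, theta, W, n_jumps, rho_comm, rng):
    node = current_node
    for _ in range(n_jumps):                      # pure relaying, no model update
        node = rng.choice(len(W), p=W[node])
    diff = token_model - theta[node]              # averaging with the landing node
    token_model = token_model - rho_comm * diff
    theta[node] = theta[node] + rho_comm * diff
    return node, token_model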
Note that these rates are for the uniform stationary distribution π^⋆(i) = n^{-1} for all i ∈ [n]. Rates for non-uniform stationary distributions can also be obtained, and would depend on max_i L_i/π^⋆(i), so that the algorithm is still fast if nodes that are visited less frequently have better smoothness.
Comparison with existing decentralized algorithms. Note that decentralized algorithms such as EXTRA [Shi et al., 2015, Li and[START_REF] Li | Revisiting extra for smooth distributed optimization[END_REF] require a total of O(E(κ + γ^{-1}) log ε^{-1}) communications, where E is the number of edges in the graph. In particular, our token algorithm is more communication-efficient even in general graphs as long as either κ or γ^{-1} is small compared to E/n, the average node degree. Note that this complexity is also better than that of Walkman [START_REF] Mao | Walkman: A communication-efficient random-walk algorithm for decentralized optimization[END_REF], which depends on γ² instead of γ, besides proving linear convergence only in limited settings.
Directed and time-varying graphs. With the communication-skipping variant, all that matters is that the probability of being at node j after taking O(γ^{-1} log(C/(ημ))) steps from node i is close to some π^⋆(j) > 0. In particular, this does not imply the reversibility of the Markov Chain used for communications (and so it can be used for directed graphs), or even its stationarity.
Experiments
In the previous section, we leveraged the conceptual graph framework to design and analyze several (multi-)token algorithms. We now illustrate their differences in Figure 4. We use a step-size β = 1/L for Walkman, which is much higher than the one from [START_REF] Mao | Walkman: A communication-efficient random-walk algorithm for decentralized optimization[END_REF]. The communication complexity is the total number of token jumps (regardless of the number of tokens), and the computation complexity is the total number of gradients (on single f_{ij}) computed. Time is obtained by setting τ_comp = 1 for computing one individual gradient and τ_comm = 10³ for one communication, so that one communication is faster than computing one full local gradient. For gradient descent (GD), we present 2 variants of the allreduce protocol: all-to-all, which has a high number of communications n(n-1) but small time (1) per step, and ring (sequentially averaging over a directed ring), which has a small number of communications (2n) but high time (2n) per step. Note that a fully centralized implementation would get both small communication complexity (2n) and time (2). Token and TVR respectively refer to the algorithms analyzed in Theorems 2 and 3. EXTRA is a standard decentralized algorithm [START_REF] Shi | Extra: An exact first-order algorithm for decentralized consensus optimization[END_REF]. Additional details can be found in Appendix D.
For the complete graph, all token algorithms have similar communication complexity. This is consistent with our theory, and confirms that using multiple token does not hurt efficiency. We also confirm that the communication complexity of token algorithms is lower than that of all-to-all gradient descent (GD), and comparable to that of the efficient ring GD. Similarly, all batch algorithms have similar computation complexities, with GD and Walkman performing slightly better. TVR is more computationally efficient thanks to the stochastic variance-reduced updates.
In terms of time, the fastest algorithm is all-to-all gradient descent, since it performs all communications in parallel. Algorithm 1 is as fast as ring GD, since both algorithms require O(n) sequential communications, but perform computations in parallel. Walkman is the slowest algorithm in this setting, because it needs to perform both communications and computations sequentially (since it does not use local updates). Note that it would be as fast as the single token algorithm for τ comm ≥ m.
TVR is faster than Algorithm 1 thanks to variance reduction. We also see that using several tokens speeds up TVR (since the communication time dominates), but not Algorithm 1 (for which communication and computation times are of the same order).
For the ring graph, we find that the same token algorithms as for the complete graph are stable (consistently across a wide range of m and n), and so we do not use the skip variant presented in Section 3.4. We observe similar results to the complete graph case, although using multiple tokens now accelerates Algorithm 1, since they compensate for the worse graph connectivity. GD and EXTRA have the same rate in this case (since κ > γ^{-1}), and their curves are thus almost indistinguishable.
Conclusion
We have presented a general framework for analyzing token algorithms, and derived several variants such as variance-reduction and acceleration from it. All these token algorithms are competitive with their centralized counterparts in terms of computation and communication complexities. Multiple tokens can be used to increase the level of parallelism, and reduce the communication time.
We have also discussed a reduction from the general case to the complete communication graph case, in which our results are proven. We claim this reduction leads to efficient algorithms for general graphs, although these algorithms seem to waste communications. An important research direction would be to formalize these claims, and directly analyze versions of these algorithms in which tokens exchange information with all the nodes they visit. This would involve tight analyses of coordinate descent with Markov Chain sampling.
Acknowledgements
This work was supported by the Swiss National Foundation (SNF) project 200020_200342.
A Revisiting Bregman Coordinate Descent
The algorithmic core for the non-accelerated methods is based on Bregman coordinate descent. Yet, the existing theory from Hendrikx et al. [2020] was not satisfactory, as it did not allow us to consider stochastic communications: the communication block was sampled all at once, which is not possible in the token approach.
Similarly, in general graphs, we do not know the exact sampling probability p̃_{i,t}, but only an upper bound p_i on it. We thus modify the coordinate descent algorithm to also take that into account.
We adapt the result from [START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF] to tackle these two problems in this section. First of all, for two vectors x, y ∈ R d , define the Bregman divergence:
D_h(x, y) = h(x) - h(y) - ∇h(y)^⊤ (x - y).   (7)
In order not to worry about further regularity assumptions, we assume throughout this paper that all the functions we consider are twice continuously differentiable and strictly convex on dom h, and that (∇h)^{-1}(x) = arg min_y h(y) - x^⊤ y is uniquely defined. This is not very restrictive for typical machine learning objectives. The Bregman divergence has some interesting properties. In particular, D_h(x, y) ≥ 0 for all x, y ∈ R^d as soon as h is convex.
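As a quick numerical illustration of Equation (7) (not part of the original derivation), the following Python sketch evaluates the Bregman divergence for a user-supplied pair (h, ∇h); the quadratic choice of h is only an assumed example, for which the divergence reduces to a squared Euclidean distance.

    import numpy as np

    def bregman_divergence(h, grad_h, x, y):
        # D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>
        return h(x) - h(y) - grad_h(y) @ (x - y)

    # Assumed example: h(x) = 0.5 ||x||^2, so D_h(x, y) = 0.5 ||x - y||^2.
    h = lambda x: 0.5 * x @ x
    grad_h = lambda x: x
    x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
    d = bregman_divergence(h, grad_h, x, y)
    assert d >= 0 and np.isclose(d, 0.5 * np.sum((x - y) ** 2))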
For i ∈ [d], we denote by e_i ∈ R^d the unit vector corresponding to coordinate i. Consider the following Bregman coordinate descent algorithm, in which the iterates are given by:
x_{t+1} = arg min_x V_{i,t}(x), with V_{i,t}(x) = (η_t / p_i) ∇_i f(x_t)^⊤ A^†A x + D_h(x, x_t),   (8)
where ∇_i f(x_t) = e_i e_i^⊤ ∇f(x_t), and A^†A is some projection matrix, which is such that A^†A ∇f(x) = ∇f(x) for all x ∈ R^d. An equivalent way to write these iterations is:
∇h(x_{t+1}) = ∇h(x_t) - (η_t / p_i) A^†A ∇_i f(x_t)   (9)
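The update (9) is a mirror-descent-style step: the new iterate is obtained by inverting ∇h. The sketch below is a minimal illustration of this step, under the simplifying assumptions that h(x) = 0.5 ||x||^2 (so ∇h and its inverse are the identity) and A^†A = I; the quadratic objective and all names are illustrative only, not part of the paper.

    import numpy as np

    def bregman_cd_step(x, grad_f, i, eta, p_i, grad_h, grad_h_inv, proj=None):
        # One step of Eq. (9): grad_h(x_{t+1}) = grad_h(x_t) - (eta / p_i) * proj(grad_i f(x_t)).
        g = np.zeros_like(x)
        g[i] = grad_f(x)[i]                 # e_i e_i^T grad f(x_t)
        if proj is not None:
            g = proj @ g                    # apply A^dagger A when it is not the identity
        return grad_h_inv(grad_h(x) - (eta / p_i) * g)

    # Assumed toy problem: f(x) = 0.5 ||x - b||^2, h(x) = 0.5 ||x||^2.
    b = np.array([1.0, -2.0, 0.5])
    grad_f = lambda x: x - b
    ident = lambda x: x
    x, p = np.zeros(3), np.full(3, 1.0 / 3)
    rng = np.random.default_rng(0)
    for _ in range(100):
        i = rng.integers(3)
        x = bregman_cd_step(x, grad_f, i, eta=p[i], p_i=p[i], grad_h=ident, grad_h_inv=ident)
    assert np.allclose(x, b)               # each coordinate is solved exactly once visited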
Although we use some p_i for the learning rate, we assume that coordinates are sampled according to another distribution, namely p̃_{i,t}. In order to make the training stable, we assume that for all i, t, there exist δ_{i,t} > 0 such that:
p̃_{i,t} (1 + δ_{i,t}) = p_i.   (10)
In particular, this means that the p_i are not a proper probability distribution, since they do not sum to one. We denote ∆ = Σ_{i=1}^d p̃_{i,t} δ_{i,t}, which is such that 1 + ∆ is the normalizing factor of the p_i. Similarly, we denote δ = min_{i,t} δ_{i,t}. We denote R_i = (A^†A)_{ii} and R_p = min_i R_i^{-1} p̃_{i,t}. We make the following assumptions on f and h.
Assumption 1 (Regularity assumptions). Function f is µ-relatively strongly convex and L_i-relatively smooth in the direction i with respect to h, meaning that for all x, y ∈ R^d, and some v_i supported by e_i:
µ D_h(x, y) ≤ D_f(x, y),  and  D_f(x + v_i, x) ≤ L_i D_h(x + v_i, x).   (11)
We also make some technical assumptions, that are verified for our problem. In particular, we assume that h and A^†A are such that:
∇_i f(x_t)^⊤ A^†A (x_t - x^{(i)}_{t+1}) = R_i ∇f(x_t)^⊤ (x_t - x^{(i)}_{t+1}).   (12)
Using this assumption, we first prove a technical lemma, which ensures that each step reduces the function value:
Lemma 1 (Monotonicity). Under Assumption 1, if η_t ≤ p_i/(L_i R_i), the iterates of Equation (8) verify for all i ∈ [d]:
f(x^{(i)}_{t+1}) ≤ f(x_t).   (13)
Proof. Using Assumption 1, we have that:
D_h(x^{(i)}_{t+1}, x_t) + D_h(x_t, x^{(i)}_{t+1}) = (∇h(x^{(i)}_{t+1}) - ∇h(x_t))^⊤ (x^{(i)}_{t+1} - x_t)
= (η_t / p_i) ∇_i f(x_t)^⊤ A^†A (x_t - x^{(i)}_{t+1})
= (η_t R_i / p_i) ∇f(x_t)^⊤ (x_t - x^{(i)}_{t+1})
= (η_t R_i / p_i) [D_f(x^{(i)}_{t+1}, x_t) - f(x^{(i)}_{t+1}) + f(x_t)]
≤ (η_t R_i L_i / p_i) D_h(x^{(i)}_{t+1}, x_t) - (η_t R_i / p_i) [f(x^{(i)}_{t+1}) - f(x_t)]
≤ D_h(x^{(i)}_{t+1}, x_t) - (η_t R_i / p_i) [f(x^{(i)}_{t+1}) - f(x_t)],
where the last line uses that η_t ≤ p_i/(L_i R_i). In particular:
f(x^{(i)}_{t+1}) ≤ f(x_t) - D_h(x_t, x^{(i)}_{t+1}) ≤ f(x_t).
Theorem 6. Consider two functions f and h that verify Assumption 1. If the iterates are given by Equation (8), and denoting for any x ∈ dom h
L_t = (1 + δ) D_h(x, x_t) + (η_t / R_p) [f(x_t) - f(x)],
we obtain for η_t ≤ p_i/(L_i R_i):
L_{t+1} ≤ max( (1 + ∆ - η_t µ)/(1 + δ), 1 - R_p ) L_t.
Note that (1 + ∆ - ηµ)/(1 + δ) ≥ 1 - ηµ ≥ 1 - (p_i / R_i)(µ / L_i) ≥ 1 - ((1 + δ_{i,t}) µ / L_i) R_p, so the first term generally dominates since we generally have (1 + δ_{i,t}) µ ≤ L_i, unless the condition number is very small, or δ_{i,t} very large.
Proof of Theorem 6. For any x ∈ dom h, we start by writing the three-point inequality:
D_h(x, x_{t+1}) + D_h(x_{t+1}, x_t) - D_h(x, x_t) = [∇h(x_t) - ∇h(x_{t+1})]^⊤ (x - x_{t+1})
= (η_t / p_i) ∇_i f(x_t)^⊤ A^†A (x - x_{t+1})
= (η_t / p_i) ∇_i f(x_t)^⊤ A^†A (x - x_t) + (η_t / p_i) ∇_i f(x_t)^⊤ A^†A (x_t - x_{t+1}).
We now multiply everything by 1 + δ_{i,t}, leading to:
(1 + δ_{i,t}) [D_h(x, x_{t+1}) + D_h(x_{t+1}, x_t) - D_h(x, x_t)] = (η_t (1 + δ_{i,t}) / p_i) ∇_i f(x_t)^⊤ A^†A (x - x_t) + (η_t (1 + δ_{i,t}) / p_i) ∇_i f(x_t)^⊤ A^†A (x_t - x_{t+1}),
which we rewrite as:
(1 + δ) D_h(x, x_{t+1}) ≤ (1 + δ_{i,t}) D_h(x, x_t) + (η_t / p̃_{i,t}) ∇_i f(x_t)^⊤ A^†A (x - x_t) + (η_t / p̃_{i,t}) ∇_i f(x_t)^⊤ A^†A (x_t - x_{t+1}) - (1 + δ_{i,t}) D_h(x^{(i)}_{t+1}, x_t),
In particular, when taking an expectation with respect to the sampling distribution, we obtain:
(1 + δ) E[D_h(x, x_{t+1})] ≤ Σ_{i=1}^d p_i D_h(x, x_t) - Σ_{i=1}^d p_i D_h(x^{(i)}_{t+1}, x_t) + η_t ∇f(x_t)^⊤ (x - x_t) + Σ_{i=1}^d η_t ∇_i f(x_t)^⊤ A^†A (x_t - x^{(i)}_{t+1}).
Using Assumption 1 leads to:
η_t ∇_i f(x_t)^⊤ A^†A (x_t - x^{(i)}_{t+1}) = η_t R_i ∇f(x_t)^⊤ (x_t - x^{(i)}_{t+1}) = η_t R_i [D_f(x^{(i)}_{t+1}, x_t) - f(x^{(i)}_{t+1}) + f(x_t)] ≤ η_t R_i [L_i D_h(x^{(i)}_{t+1}, x_t) - f(x^{(i)}_{t+1}) + f(x_t)]
Using that η_t ≤ p_i/(L_i R_i), we obtain:
(1 + δ) E[D_h(x, x_{t+1})] ≤ (1 + ∆) D_h(x, x_t) + η_t ∇f(x_t)^⊤ (x - x_t) + Σ_{i=1}^d η_t R_i [-f(x^{(i)}_{t+1}) + f(x_t)].
Note that by monotonicity of the iterations (Lemma 1), f(x^{(i)}_{t+1}) ≤ f(x_t) for all t, i, and so, with R_p = min_i R_i^{-1} p̃_{i,t}:
Σ_{i=1}^d (R_i / p̃_{i,t}) p̃_{i,t} [-f(x^{(i)}_{t+1}) + f(x_t)] ≤ R_p^{-1} [ - Σ_{i=1}^d p̃_{i,t} f(x^{(i)}_{t+1}) + f(x_t) ] ≤ -R_p^{-1} (E[f(x_{t+1})] - f(x_t))   (14)
Finally, the µ-relative strong convexity of f gives:
∇f(x_t)^⊤ (x - x_t) = -[f(x_t) - f(x)] - D_f(x, x_t) ≤ -µ D_h(x, x_t) - f(x_t) + f(x)   (15)
Combining everything leads to:
(1 + δ) E[D_h(x, x_{t+1})] + (η_t / R_p) [E[f(x_{t+1})] - f(x)] ≤ ((1 + ∆ - ηµ)/(1 + δ)) (1 + δ) D_h(x, x_t) + (η_t / R_p) (1 - R_p) [f(x_t) - f(x)]
B Convergence results
B.1 Introducing the problem
In this section, we introduce the consensus problem derived from the conceptual graph, and take its dual formulation. This section follows the same framework as Hendrikx et al. [2020]. Denoting σ̃ = nσ/(n + K), this problem writes:
min_{θ, u, v}  Σ_{i=1}^n [ Σ_{j=1}^m f_ij(θ^{(ij)}) + (σ̃/2) ||u^{(i)}||^2 ] + Σ_{k=1}^K (σ̃/2) ||v^{(k)}||^2
subject to  ∀i, ∀j: θ^{(ij)} = u^{(i)},  ∀i, ∀k: u^{(i)} = v^{(k)},   (16)
where θ gathers the nm computation-node variables θ^{(ij)} ∈ R^d, u the n communication-node variables u^{(i)} ∈ R^d, and v the K token variables v^{(k)} ∈ R^d.
Introducing Lagrangian multipliers y for each consensus constraint in the virtual part of the graph (between center nodes u^{(i)} and computation nodes θ^{(ij)}), and multipliers x for the communication part of the graph (between center nodes u^{(i)} and tokens v^{(k)}), the dual problem writes:
min_{x ∈ R^{nKd}, y ∈ R^{nmd}}  q_A(x, y) + Σ_{i=1}^n Σ_{j=1}^m f*_ij((Ay)^{(ij)}),  with  q_A(x, y) = (1/(2σ̃)) (x, y)^⊤ A^⊤A (x, y),   (17)
where A ∈ R^{(n(m+1)+K)d × n(K+m)d} is the (weighted) incidence matrix of the augmented conceptual graph, which is such that A e_{ℓ1ℓ2} = µ_{ℓ1ℓ2} (e_{ℓ1} - e_{ℓ2}) for any two nodes ℓ1 and ℓ2 (and an arbitrary orientation), where e_{ℓ1ℓ2} is the unit vector corresponding to edge (ℓ1, ℓ2), and e_{ℓ1}, e_{ℓ2} are the unit vectors corresponding to nodes ℓ1 and ℓ2. Note that we abused notations and wrote (Ay)^{(ij)} instead of (A(x, y))^{(ij)}, since (A(x, y))^{(ij)} does not actually depend on x.
Matrix A has a very special structure, since it is the incidence matrix of a tripartite graph between computation nodes, communication nodes, and token nodes. We now have to choose the weights of matrix A. For communication edges (between communication nodes and tokens), we make the simple choice µ ik = 1. For computation edges (between communication nodes and computation nodes), we choose:
µ²_ij = α L_ij,  with  α = 2σ̃K/κ_s,   (18)
where L_ij is the smoothness of function f_ij and κ_s = max_i 1 + Σ_{j=1}^m L_ij/σ̃. These choices follow from Hendrikx et al. [2020], where we used that the "communication part" of the conceptual graph used to derive the token algorithm is quite specific (complete bipartite graph), and so λ_min(A_comm^⊤ A_comm) = K, the number of tokens, as long as K ≤ n [Brouwer and Haemers, Spectra of graphs].
B.2 The TVR algorithm
We now proceed to the derivation of the TVR algorithm, and to proving its theoretical guarantees. To achieve this, we use the Bregman coordinate descent iterations defined in the previous section, which are of the form:
(x, y)_{t+1} = arg min_{x,y} (η_t / p_i) ∇_i [q_A((x, y)_t) + F*(Ay_t)]^⊤ A^†A (x, y) + D_h((x, y), (x, y)_t),   (19)
where F*(λ) = Σ_{i,j} f*_ij(λ^{(ij)}). We now need to define the reference function h, which we define, following Hendrikx et al. [2020], as h(x, y) = h_x(x) + h_y(y), with:
∀i, j,  h_y^{(ij)}(y^{(ij)}) = (L_ij / µ²_ij) f*_ij(µ_ij y^{(ij)}),  and  h_x(x) = (1/2) ||x||²_{A_comm^† A_comm},   (20)
where A_comm ∈ R^{(n+K)d × n(m+K)d} is the restriction of A to communication nodes (and tokens). In order to avoid notation clutter, we slightly abuse notations and use A^†A instead in the remainder of this section.
Algorithm 2 Token Variance Reduced (z_0)
1: σ̃ = σn/(n+K), α = 2σ̃K/κ_s, p_comp = (1 + κ_s/(m-1+κ_s))^{-1}, p_comm = 1 - p_comp, ρ_comm = nKη/(p_comm σ̃), ρ_comp = mnαη/p_comp   // z_0 is arbitrary but not θ_0
4: for t = 0 to T-1 do   // Run for T iterations
5:   if communication step (with probability p_comm) then
6:     Pick i ∼ U([n]), k ∼ U([K])   // Choose next node and token uniformly at random
7:     θ^{token,k}_{t+1} = θ^{token,k}_t - ρ_comm (θ^{token,k}_t - θ^{(i)}_t)   // Token update
8:     θ^{(i)}_{t+1} = θ^{(i)}_t + ρ_comm (θ^{token,k}_t - θ^{(i)}_t)   // Node update
9:   else
10:    Pick i ∼ U([n]), j ∼ U([m])   // Choose one node and data point at random
11:    z^{(i,j)}_{t+1} = (1 - ρ_comp) z^{(i,j)}_t + ρ_comp θ^{(i)}_t   // Virtual node update
12:    θ^{(i)}_{t+1} = θ^{(i)}_t - (1/σ̃) (∇f_ij(z^{(i,j)}_{t+1}) - ∇f_ij(z^{(i,j)}_t))   // Local update using f_ij
13: return θ_T
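As a companion to the listing, here is a minimal single-process Python simulation of these updates (a sketch, not the MPI implementation from Appendix D); the parameter names follow line 1 of the listing, the initialization of θ_0 relative to z_0 is simplified, and stability requires choosing η within the bounds of Theorem 7.

    import numpy as np

    def tvr_simulate(grad_fij, n, m, K, d, sigma, L, eta, T, seed=0):
        # grad_fij(i, j, theta): gradient of f_ij at theta; L is an (n, m) array of smoothness constants.
        rng = np.random.default_rng(seed)
        sigma_t = sigma * n / (n + K)                      # sigma tilde
        kappa_s = 1 + L.sum(axis=1).max() / sigma_t        # kappa_s = max_i 1 + sum_j L_ij / sigma_t
        alpha = 2 * sigma_t * K / kappa_s
        p_comp = 1.0 / (1.0 + kappa_s / (m - 1 + kappa_s))
        p_comm = 1.0 - p_comp
        rho_comm = n * K * eta / (p_comm * sigma_t)
        rho_comp = m * n * alpha * eta / p_comp

        theta = np.zeros((n, d))        # node variables theta^(i) (paper ties theta_0 to z_0; zero here)
        token = np.zeros((K, d))        # token variables theta^(token,k)
        z = np.zeros((n, m, d))         # virtual-node variables z^(i,j)

        for _ in range(T):
            if rng.random() < p_comm:                      # communication step
                i, k = rng.integers(n), rng.integers(K)
                diff = token[k] - theta[i]
                token[k] -= rho_comm * diff                # token update
                theta[i] += rho_comm * diff                # node update
            else:                                          # local computation step
                i, j = rng.integers(n), rng.integers(m)
                z_new = (1 - rho_comp) * z[i, j] + rho_comp * theta[i]
                theta[i] -= (grad_fij(i, j, z_new) - grad_fij(i, j, z[i, j])) / sigma_t
                z[i, j] = z_new
        return theta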
For the computation part, the algorithm can be recovered by following the exact same steps as Hendrikx et al. [2020, Section 2], which are themselves inspired by the dual-free updates from [START_REF] Lan | An optimal randomized incremental gradient method[END_REF].
For the communication part, there is a small difference in the fact that we do not sample all coordinates at once anymore. Instead, we sample edges of the communication graph one by one. The other difference is that communication edges are not between node i and node k anymore, but between node i and token k. In particular, the communication step writes:
A^†A x_{t+1} = A^†A x_t - (η / (σ̃ p_{ik})) A^†A e_{ik} e_{ik}^⊤ A^⊤A (x_t, y_t).   (21)
Multiplying by A on the left and defining θ_t = A_comm (x_t, y_t) leads to:
θ_{t+1} = θ_t - (η / (σ̃ p_{ik})) A_comm e_{ik} e_{ik}^⊤ A_comm^⊤ θ_t,   (22)
where we used the fact that the update only affects communication nodes. Therefore, when we sample an edge between node i and token k, this leads to:
θ_{t+1} = θ_t - (ηnK / (p_comm σ̃)) W_{ik} θ_t,   (23)
where W_{ik} = (e_i - e_k)(e_i - e_k)^⊤, thus leading to the form obtained in Algorithm 2. From there, we can prove the following convergence theorem, from which all the non-accelerated results from the main text are derived. Theorem 7. The iterates of Algorithm 2 verify:
||θ_t - θ*||² ≤ (1 - ηK/(σ̃ κ_s))^t C_0,   (24)
where C_0 = 2σ̃ L_0, with L_0 the Lyapunov function from Theorem 6 instantiated on the dual problem, which thus depends on the initialization. In particular, ignoring log factors in C_0, if p_comm = p_comp = 1/2 and for any ε > 0,
T_comp = O((m + κ_s) log ε^{-1})  and  T_comm = O(nκ_s log ε^{-1})   (25)
are required in total in order to obtain ||θ_t - θ*||² ≤ ε.
The proof of this algorithm follows several steps, that we detail in the next subsections. We first show that the objective function defined in Equation ( 16), together with the reference function h defined in Equation ( 20) satisfies Assumption 1, so that we can apply Theorem 6. Then, we show how to choose the remaining parameters (and in particular p comm and p comp ) optimally, and evaluate the rate in terms of constants of the problem (number of nodes, number of tokens, smoothness and strong convexity of the local functions...).
B.3 Verifying Assumption 1
We start by showing that Assumption 1 is verified in this case, which includes three parts: Equation (12), relative strong convexity, and directional relative smoothness.
B.3.1 Verifying Structural assumptions.
We first verify that the updates of our problem verify Equation (12) from Assumption 1.
1 - Computation coordinates. Computation coordinates correspond to the edges between computation and communication subnodes in Figure 1. In particular, the graph becomes disconnected if they are removed, thus implying that A^†A e_i = e_i. In particular,
∇h(x_{t+1}) = ∇h(x_t) - (η_t / p_i) ∇_i f(x_t).   (26)
Yet, for our specific choice of h (which is such that h_comp(x) = Σ_i h_comp^{(i)}(x^{(i)})), this implies that x^{(i)}_{t+1} - x_t = v_i for some v_i that only has support on coordinate i, and in particular:
∇_i f(x_t)^⊤ A^†A (x_t - x^{(i)}_{t+1}) = R_i ∇_i f(x_t)^⊤ v_i = R_i ∇f(x_t)^⊤ v_i = R_i ∇f(x_t)^⊤ (x_t - x^{(i)}_{t+1})   (27)
2 - Network coordinates.
Network coordinates have a different structure. In this case, the reference function h comm is the quadratic form induced by A † A, which facilitates analysis. The updates write:
A^†A x_{t+1} = A^†A x_t - (η_t / p_i) A^†A ∇_i f(x_t)   (28)
Although h_comm is not separable, we can leverage the presence of A^†A to write:
∇_i f(x_t)^⊤ A^†A (x_t - x^{(i)}_{t+1}) = (η_t / p_i) ∇_i f(x_t)^⊤ A^†A ∇_i f(x_t) = (η_t / p_i) R_i ∇f(x_t)^⊤ ∇_i f(x_t).
We then use that ∇f(x_t) = A^†A ∇f(x_t), leading to:
∇_i f(x_t)^⊤ A^†A (x_t - x^{(i)}_{t+1}) = (η_t / p_i) R_i ∇f(x_t)^⊤ A^†A ∇_i f(x_t) = R_i ∇f(x_t)^⊤ A^†A (x_t - x^{(i)}_{t+1}).
Now that we have proven that h and f verify the structural assumptions given by Equation (12), it remains to evaluate the relative strong convexity and directional smoothness constants µ and L_i.
B.3.2 Relative Strong Convexity.
Since the structure of the dual problem and the reference function h are the same, we directly have from Hendrikx et al. [2020, Appendix B.1] that the relative strong convexity constant is equal to
µ = α/2 = K / (σ̃ + Σ_{j=1}^m L_ij),   (29)
since the smallest eigenvalue of the communication graph is equal to K (the number of tokens) in this case.
B.3.3 Directional Relative Smoothness.
We now evaluate the relative smoothness constants.
1 - Computation edges. In the computation case, similarly to strong convexity, we directly get from Hendrikx et al. [2020, Appendix B.1] that L̃_ij, the relative directional smoothness for virtual node (i, j) (or just L̃_i if there is only one virtual node), can be obtained as:
L̃_ij = α (1 + L_ij/σ̃_i),   (30)
Plugging in the value of α, this leads to:
L̃_ij = (2K/σ̃) (σ̃ + L_ij) / (σ̃ + Σ_{j=1}^m L_ij)   (31)
2 -Communication edges. In this case, we cannot use the results from DVR directly because the sampling of communication coordinates is different. While DVR sampled all communication edges at once, we only sample one at each step. In this case, we have that the directional relative smoothness is equal to:
D_f(x + ∆_uv, x) = ||∆_uv||²_{A^⊤ΣA} = µ²_uv (σ_u^{-1} + σ_v^{-1}) ||∆_uv||² = (µ²_uv (σ_u^{-1} + σ_v^{-1}) / (e_uv^⊤ A^†A e_uv)) ||∆_uv||²_{A^†A}.   (32)
In particular, for communication edges, and with the choice that µ²_uv = 1 and σ_u = σ_v = σ̃:
D_f(x + ∆_uv, x) ≤ L̃_uv D_h(x + ∆_uv, x),  with  L̃_uv = 2/(σ̃ R_uv)   (33)
B.4 Convergence guarantees
We have shown in the previous subsection that we can apply Theorem 6 to obtain convergence guarantees for our token algorithms. For the communication edges, the step-size constraint leads to:
η_t ≤ p_uv / (R_uv L̃_uv) = p_comm σ̃ / (2nK)   (34)
For the computation edges, we can set (as in the DVR article) p_ij ∝ 1 + L_ij/σ̃. In particular, the normalizing factor is equal to
Σ_{i=1}^n Σ_{j=1}^m (1 + L_ij/σ̃) = n(m + Σ_{j=1}^m L_ij/σ̃) = n(m - 1 + κ_s),
where we recall that κ_s = 1 + Σ_{j=1}^m L_ij/σ̃. Therefore, we obtain:
η_t ≤ p_ij / L̃_ij ≤ p_comp σ̃ κ_s / (2nK(m - 1 + κ_s)).   (35)
We want to balance p comm and p comp such that these two constraints match, leading to:
p_comm = (κ_s / (m - 1 + κ_s)) p_comp.   (36)
Since p_comm = 1 - p_comp, this leads to:
p_comp = (1 + κ_s / (m - 1 + κ_s))^{-1}.   (37)
Assuming δ ≥ 0 and ∆ < η_t µ/2, the rate of convergence of Algorithm 2 is ρ = η_t µ - ∆ ≥ η_t µ/2. In particular, we directly have that the computation complexity is equal to:
p_comp / ρ = 4(m - 1 + κ_s).   (38)
Similarly, the communication complexity is equal to:
p_comm / ρ = 2nκ_s.   (39)
B.5 Special cases
B.5.1 Complete graphs
All the theorems in the main paper are actually direct corollaries of Theorem 7. We provide below how they can be derived in each case.
Theorem 3: We apply Theorem 7 with δ = 0 (since the graph is complete, so we know the true sampling distribution).
Theorem 2: When m = 1, all the derivations remain the same, but we now have that κ s = 1+L i /σ = κ, and so the computation complexity is equal to m -1 + κ s = κ.
Theorem 1: This result can be recovered by simply taking K = 1.
B.5.2 General graphs.
Consider that the transitions between nodes are ruled by matrix W, which is such that ||W^t π_0 - π_⋆||_∞ ≤ C(1 - γ)^t for any starting distribution π_0, with C > 0 a constant, π_⋆ the stationary distribution of the random walk, and γ > 0 a constant which can be interpreted as the inverse of the mixing time of the Markov Chain with transition matrix W. This is true as long as the underlying Markov Chain is irreducible and aperiodic. In this case, then after O(γ^{-1} log(C/(ηµ))) steps, we have that for all i:
|p̃_{i,t} - (π_⋆)_i| ≤ ηµ/4,   (40)
so in particular taking p_i = (π_⋆)_i + ηµ/4 satisfies p̃_{i,t}(1 + δ_{i,t}) = p_i, with 0 ≤ δ_{i,t} ≤ ηµ/2. Then, using Theorem 6, we recover the same result as in Theorem 7, with ηµ replaced by ηµ - ∆ ≥ ηµ/2. Note that the value of η depends on ∆, which itself depends on η, so the above derivations technically result in a circular argument. To avoid this, one can simply use a slightly different η' = min_i (π_⋆)_i / (L_i R_i) ≤ η to set the number of token jumps. In practice, we do not need to precisely evaluate these log factors, and taking Cγ^{-1} jumps with a small constant C is enough.
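For intuition, the short sketch below estimates the number of token jumps O(γ^{-1} log(C/(ηµ))) needed between averaging steps from the spectral gap of a given transition matrix; the lazy ring walk is only an assumed example.

    import numpy as np

    def jumps_until_mixed(W, eta_mu, C=1.0):
        # Steps so that C * (1 - gamma)^t <= eta_mu / 4, using (1 - gamma)^t <= e^{-gamma t}.
        eigvals = np.sort(np.abs(np.linalg.eigvals(W)))
        gamma = 1.0 - eigvals[-2]              # spectral gap of the transition matrix
        return int(np.ceil(np.log(4 * C / eta_mu) / gamma))

    # Assumed example: lazy random walk on a ring of 10 nodes.
    n = 10
    W = 0.5 * np.eye(n)
    for i in range(n):
        W[i, (i - 1) % n] += 0.25
        W[i, (i + 1) % n] += 0.25
    print(jumps_until_mixed(W, eta_mu=1e-3))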
C Acceleration
In the accelerated case, the theory does not follow directly from Theorem 7, since the algorithmic core is different. Indeed, we use a variant of Accelerated Proximal Coordinate Gradient [START_REF] Lin | An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization[END_REF] instead of Bregman coordinate descent on the dual formulation. Yet, we can directly reuse the convergence results for ADFS [Hendrikx et al., 2019b, Theorem 1], which we (informally) state below:
Theorem 8. ADFS has iteration complexity O(ρ^{-1} log ε^{-1}), with
ρ² ≤ min_{kℓ} (λ⁺_min(A^⊤Σ^{-1}A) / (Σ^{-1}_kk + Σ^{-1}_ℓℓ)) (p²_kℓ / (µ²_kℓ R_kℓ)).   (41)
Batch smoothness. Since the smoothness of the full functions f i is hard to compute, we approximated it by taking L batch = 0.02 × max ij L ij . Note that, following [START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF], we implemented TVR with this batch smoothness instead of j L ij (which corresponds to taking α = 2σK/κ instead of α = 2σK/κ s ). In particular, the communication complexity of TVR is thus proportional to κ (similarly to that of Algorithm 1) instead of κ s . We proved Theorem 3 with κ s since it is simpler and less restrictive.
Code. We provide the code used to run the experiments from Figure 4 in supplementary material. All algorithms are coded in Python, using MPI for communications. This code has not been optimized for efficiency, but rather aims at providing an actual implementation of token algorithms that can be used out of the box. Due to the similarities between algorithms, we based this code on the code in the supplementary material from [START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF].
Figure 1: Left: base communication graph. Right: Conceptual graph, with modified local objectives.
Figure 3: Conceptual graph of size n = 3 with finite-sum local objectives (m = 3) and multiple tokens (K = 2).
Figure 4: Convergence results for a logistic regression task on the RCV1 dataset [Lewis et al., 2004] (d = 47236), with n = 20 nodes, and m = 9841 samples per node.
For our problem (conceptual graph), we obtain the following values for the parameters involved in the computation of ρ when a communication edge (k, ℓ) between a node and the token is sampled:
In the end, this leads to
Similarly, we have (just like in the ADFS paper, since the computation part of the graph is the same):
We now fix p comm and p comp so that ρ comm = ρ comp , similarly to Section B.4. This leads to
and so the communication and computation complexities are respectively:
Theorem 4 is obtained by expressing these complexities in terms of per-node and per-token quantities.
D Experiments
For the experiments, we use the same setting as [START_REF] Hendrikx | Dual-free stochastic decentralized optimization with variance reduction[END_REF], meaning that we solve the following logistic regression problem:
where the pairs (X_ij, y_ij) ∈ R^d × {-1, 1} are taken from the RCV1 dataset, which we downloaded from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html. We choose the regularization parameter as σ = 10^{-5}.
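The displayed objective did not survive extraction; as an illustration only, we assume the standard per-sample logistic loss below (the σ/2 ||·||² regularization enters separately through the consensus formulation of Equation (16)).

    import numpy as np

    def f_ij(theta, x_ij, y_ij):
        # Assumed per-sample logistic loss for labels y_ij in {-1, +1}.
        return np.log1p(np.exp(-y_ij * (x_ij @ theta)))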
Time. We choose to report ideal times: to get the execution time of an algorithm, we compute the minimum time it takes to execute its sequence of updates, given fixed communication and computation delays τ_comm and τ_comp. More specifically, we draw a sequence of actions S, and denote S_ℓ the ℓ-th action from this sequence, and T_i(ℓ) the time at which node i finishes executing update ℓ. All nodes start from T_i(0) = 0.
• If S_ℓ is a local computation at node i, then node i increases its local time by τ_comp, i.e., T_i(ℓ) = T_i(ℓ-1) + τ_comp. For j ≠ i, T_j(ℓ) = T_j(ℓ-1).
• If S_ℓ is a token jump from node j to node i, then T_i(ℓ) = max(T_i(ℓ-1), T_j(ℓ-1) + τ_comm).
The sequence S is implemented by simply sharing a random seed between nodes. Token algorithms could also be implemented without executing this shared schedule, but this would not strictly correspond to Algorithm 2 since the sampling of the edges would not be i.i.d. |
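A minimal sketch of this ideal-time computation is given below; the tuple encoding of the schedule is an assumption made for illustration.

    def ideal_time(schedule, n, tau_comp=1.0, tau_comm=1e3):
        # schedule: list of ('comp', i) or ('comm', j, i) actions (token jump from j to i).
        T = [0.0] * n
        for action in schedule:
            if action[0] == 'comp':
                T[action[1]] += tau_comp
            else:
                _, j, i = action
                T[i] = max(T[i], T[j] + tau_comm)
        return max(T)

    # Toy schedule on 3 nodes: parallel local steps, then a token walk 0 -> 1 -> 2.
    print(ideal_time([('comp', 0), ('comp', 1), ('comp', 2),
                      ('comm', 0, 1), ('comm', 1, 2)], n=3))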
Abdoulaye Kané
Abdoulaye Kane
Measurement of total factor productivity: Evidence from French construction firms
Keywords: French construction sector, Production function, Total factor productivity, Parametric estimation, Semi-parametric estimation, Non-parametric estimation, Market structure JEL Classification: C13, C14, C23, D24, D43
teaching and research institutions in France or abroad, or from public or private research centers.
Introduction
Total Factor Productivity (TFP) is generally defined as the portion of output not explained by the amount of inputs used in production. It is crucial in terms of economic fluctuations, economic growth and cross-country per capita income differences, insofar as it determines long-term economic growth and is a comprehensive industry-level productivity measure. However, although its theoretical definition seems comprehensive, its empirical implementation is far from being an easy task.
Parametric and semi-parametric methods show decreasing returns to scale in French construction. The underlying economic interpretation would be the imposing weight of large firms on small firms. In other words, the market structure could be likened to an oligopoly situation. Moreover, while the elasticity of output per worker with respect to capital per worker is low (ranging from 0.0395 for the fixed-effects method to 0.0636 for the Wooldridge method), the elasticity with respect to materials per worker is very high (ranging from 0.776 to 0.819 for the same methods). We also find that the ACF method can be considered a good estimator of TFP in the French construction sector.
The differences across the parametric and semi-parametric estimators are very small when comparing the estimated TFP. Spearman's correlation coefficients between these TFP measures are generally greater than 0.92, and even 0.99 when comparing the results obtained by the GMM method and the three semi-parametric methods (OP, LP and ACF). However, the correlations between these methods and the non-parametric methods are very low, and even negative with the calibration method. The rest of the paper is structured as follows. Section two presents the methods for the theoretical calculation of TFP. We move to the empirical part in section three, which presents the data as well as the estimation results. Section 4 provides some concluding remarks.
Theoretical framework
While authors are unanimous in attributing the term "total factor productivity" to the work of Solow (1957), they are less unanimous about its measurement. This measurement is all the more difficult as the sector under study is fragmented. Usually the literature starts with a production function. Following this idea I assume a Cobb-Douglas production function:
Y_it = A_it K_it^{β_k} L_it^{β_l} M_it^{β_m}   (1)
where Y_it represents gross output of firm i in period t, K_it, L_it and M_it are inputs of physical capital, labor (total employment) and materials (intermediate inputs), respectively, and A_it is the Hicksian neutral efficiency level of firm i in period t. The labor, capital and intermediate input elasticities are given by β_l, β_k and β_m respectively. We take the natural logarithm of (1) and, in order to understand the nature of returns to scale in the industry studied, we divide each variable by the labor input. Thus, equation (1) becomes:
ln(Y_it/L_it) = β_0 + β_k ln(K_it/L_it) + (β_l + β_k + β_m - 1) ln L_it + β_m ln(M_it/L_it) + ε_it   (2)
where ln A_it = β_0 + ε_it; β_0 measures the mean efficiency level across firms and over time; ε_it is the error term.
To simplify the writing of (2), we rewrite it as follows:
y_it = β_0 + β_k k_it + γ l_it + β_m m_it + ε_it   (3)
where y_it = ln(Y_it/L_it); k_it = ln(K_it/L_it); m_it = ln(M_it/L_it); γ = (β_l + β_k + β_m - 1). It should be noted that γ provides us with information on the nature of the returns to scale. ε_it can be decomposed into an observable and unobservable component:
y_it = β_0 + β_k k_it + γ l_it + β_m m_it + υ_it + µ_it   (4)
where υ_it is the observable component, µ_it is the unobservable component and β_0 + υ_it represents firm-level productivity.
Thus, using the ordinary least squares (OLS) method, we can estimate TFP as follows:
tfp_it = β̂_0 + υ̂_it = y_it - β̂_k k_it - γ̂ l_it - β̂_m m_it   (5)
To obtain the productivity level, we take the exponential of tfp_it, i.e., TFP_it = exp(tfp_it).
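As a reference point for the later estimators, a naive OLS version of Equations (3)-(5) can be sketched as follows; the column names and panel layout are assumptions, not the FARE variable names.

    import numpy as np
    import statsmodels.api as sm

    def ols_tfp(df):
        # df columns (assumed): y, k, l, m already in logs / per-worker form as in Eq. (3).
        X = sm.add_constant(df[["k", "l", "m"]])
        fit = sm.OLS(df["y"], X).fit()
        tfp_hat = fit.params["const"] + fit.resid     # beta_0 + v_it, Eq. (5)
        return np.exp(tfp_hat), fit.params            # TFP level and estimated elasticities

    # usage: tfp_level, coefs = ols_tfp(panel_df)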
However, the OLS estimation method is inherently biased. Indeed, estimating a production function by OLS assumes that the factors of production are exogenous in the production function, i.e. determined independently of the firm's level of efficiency. However, authors such as Marschak and Andrews (1994) have already shown that the factors of production in the production function are not chosen independently, but rather determined by the characteristics of the firm, including its efficiency. We thus face a simultaneity bias.
In addition to this bias, we have the selection bias. TFP is typically estimated with a balanced panel by omitting all firms that enter and exit during the sample period (Olley and Pakes, 1996). Although some economists believe that firm entry and exit are implicitly taken into account in the analysis (Fariñas and Ruano, 2005), explicitly omitting consideration of the exit decision of firms leads to selection bias. The reason is as follows: firms' decisions on factor allocation in a particular period are made conditional on their survival. In sum, the selection bias will cause the error term to be negatively correlated with capital, thus causing the capital coefficient to be biased. For all these reasons, OLS estimation of a production function will provide us with inconsistent coefficients. To overcome these issues, different methods of estimating the TFP have been proposed.
Fixed effects estimation
Assuming that productivity is firm-specific but time-invariant, it is possible to estimate TFP using the fixed effects estimator (Pavcnik, 2002;[START_REF] Levinsohn | Estimating production functions using inputs to control for unobservables[END_REF]:
y_it = β_k k_it + γ l_it + β_m m_it + tfp_i + µ_it   (6)
where tfp_i = β_0 + υ_i. Equation (6) can be estimated in levels using the within (fixed-effects) estimator, or in first differences, providing unbiased coefficients as long as the unobserved productivity tfp_i does not vary over time. In this respect, the simultaneity bias is eliminated because only the within-firm variation in the sample is used. The same is true for the selection bias, because exit decisions are based on a productivity term that is invariant over time.
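A minimal sketch of the within (fixed-effects) estimator of Equation (6) is given below; the column names and the firm identifier are assumptions about how the panel is organised.

    import statsmodels.api as sm

    def within_estimator(df, dep="y", regs=("k", "l", "m"), firm="id"):
        # Demean every variable by firm so the time-invariant tfp_i drops out, then run OLS.
        cols = [dep, *regs]
        demeaned = df[cols] - df.groupby(firm)[cols].transform("mean")
        fit = sm.OLS(demeaned[dep], demeaned[list(regs)]).fit()
        # Recover the firm effects tfp_i as the firm-mean residuals.
        tfp_i = df.groupby(firm).apply(
            lambda g: g[dep].mean() - g[list(regs)].mean() @ fit.params)
        return fit.params, tfp_i

    # usage: coefs, tfp_i = within_estimator(panel_df)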
However, in practice, the fixed effects estimator applied to a production function often leads to unreasonably low estimates of the capital coefficient, because it imposes strict exogeneity of inputs conditional on firm heterogeneity (Wooldridge, 2009). In economic terms, this would mean that factors of production cannot be chosen in response to productivity shocks. This assumption probably does not hold in practice, especially not in the construction sector, which has been facing a decline in the rate of productivity growth in recent years, and thus a variation in productivity over time. Other estimators are often proposed to overcome these problems: the instrumental variables estimator and the generalized method of moments (GMM).
Instrumental Variables and Generalized method of moments
An alternative method for achieving consistency of the coefficients of a production function is to instrument the factors that cause the endogeneity problem. Unlike the fixed effects estimator, the instrumental variables (IV) estimator does not rely on strict exogeneity. Nevertheless, this method requires a certain number of conditions, notably on the variable or variables used as external instruments. First, the instruments must be correlated with the endogenous variable(s). Second, the instruments cannot enter directly into the production function. Finally, the instruments must not be correlated with the error term. The last assumption rules out the existence of imperfect competition in the factor market if output or factor prices are used as instruments. Assuming a perfect market, then, factor and output prices are natural choices of instruments for the production function.
Nevertheless, factor prices become valid instruments if and only if the firm does not have market power. Indeed, if the firm has market power, it will set its prices at least partly according to the quantities of factors and its productivity. This makes prices endogenous. This endogeneity problem will always arise even if we use the average wage per worker (reflecting exogenous labor market conditions) because this wage often varies according to the qualification and quality of the employee.
Other instruments can be taken into account, such as weather conditions or exogenous shocks on the labor or capital market, which are independent of the firm's market power. However, as pointed out by Ackerberg et al., even in this case, the IV approach only deals with the endogeneity of factors, but not the endogeneity of firms' outputs. If instrument choices are correlated with firm-level output, endogenous output would invalidate the use of instruments. Some authors lag inputs and then use them as instruments. This biases the capital coefficient, which is often not significant. To remedy this, Blundell and Bond (1999) propose the GMM estimator. For them, the poor performance of the IV estimator is due to the weakness of the instruments used for identification. The instruments should therefore enter as lagged first differences in the level equations, which yields better estimates.
Nevertheless, the main drawback of this method is the difficulty of finding valid instruments. Apart from these parametric estimation methods, we have semi-parametric estimators that allow TFP to vary over time.
The Olley-Pakes (1996) estimation algorithm
This semi-parametric estimation method solves the simultaneity bias by taking the firms' investment decision as a proxy for unobserved productivity shocks. The selection problem is solved by incorporating an exit rule in the model. This is a dynamic model of firm behavior where we have two fundamental assumptions:
1. Only one variable (state variable) is unobserved at the firm level. This is productivity, which is also assumed to evolve as a first-order Markov process.
2. The monotonicity of the "investment" variable so that the demand function for investment is invertible. This means that investment is increasing in productivity. One of the corollaries of this second hypothesis is that only positive values of investment are accepted.
Starting with the Cobb-Douglas production function given by equation (4), the estimation procedure can be described as follows: capital is a state variable, affected only by current and past values of the productivity level tf p it . Investment is described as follows:
I_it = K_it - (1 - δ)K_{it-1},
where δ is the capital depreciation rate. Also, investment decisions at the firm level can be written as a function of capital and productivity: i_it = i_t(k_it, tfp_it), where the lower-case notation refers to the logarithmic transformation of the variables. Since investment is an increasing function of productivity, conditional on capital, the investment decision can be inverted, allowing us to write unobserved productivity as a function of observable variables: tfp_it = h_t(k_it, i_it), where h_t(.) = i_t^{-1}(.). Using this formula, equation (4) is rewritten as follows:
y_it = β_0 + β_k k_it + γ l_it + β_m m_it + h_t(k_it, i_it) + µ_it   (7)
Let us denote ϕ(k_it, i_it) = β_0 + β_k k_it + h_t(k_it, i_it).
The estimation of equation ( 7) is done in two steps. The first step is to estimate, using OLS, the following equation:
y it = γl it + β m m it + ϕ(k it , i it ) + µ it (8)
where ϕ(k it , i it ) is approximated by a third-order polynomial in investment and capital.
In the second step, where we address the attrition bias problem, we identify the capital coefficient by estimating the following equation:
y it -γl it -β m m it = β 0 + β k k it + g(ϕ t-1 -β k k t-1 ) + µ it (9)
where g(.) is an unknown function which is again approximated by a third-order polynomial expression in ϕ t-1 and k t-1 . The probability of survival is normalized to 1. The capital coefficient can then be obtained by applying nonlinear least squares to equation (9).
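A sketch of the first stage of this routine (Equation (8)) is shown below; the column names (y, l, m, k, i, all in logs / per-worker form) and the exact polynomial construction are assumptions for illustration, and the second stage would then recover β_k from Equation (9) using the lagged ϕ̂.

    import statsmodels.api as sm

    def op_first_stage(df):
        # Regress y on l, m and a third-order polynomial in (k, i); gamma and beta_m
        # are read off directly, phi_hat absorbs beta_0 + beta_k * k + h_t(k, i).
        X = df[["l", "m"]].copy()
        for a in range(4):
            for b in range(4):
                if 0 < a + b <= 3:
                    X[f"k{a}i{b}"] = df["k"] ** a * df["i"] ** b
        fit = sm.OLS(df["y"], sm.add_constant(X)).fit()
        phi_hat = fit.fittedvalues - fit.params["l"] * df["l"] - fit.params["m"] * df["m"]
        return fit.params["l"], fit.params["m"], phi_hat   # gamma, beta_m, phi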
The Levinsohn-Petrin (2003) estimation algorithm
Like the Olley-Pakes (OP) method, the Levinsohn-Petrin (LP) method is also a semi-parametric estimate, but intermediate inputs are used as a proxy. Indeed, the monotonicity condition of OP requires that investment is strictly increasing in productivity. This implies that only observations with positive investment can be used when estimating equations ( 8) and ( 9), and this may lead to a significant loss of efficiency depending on the data available. Moreover, if firms report zero investment in a significant number of cases, this casts doubt on the validity of the monotonicity condition. To do this, the LP estimation method uses intermediate inputs as a proxy for unobserved productivity. Since firms generally report positive material and energy use each year, it is possible to retain most observations; this also implies that the monotonicity condition is more likely to hold. The LP estimation algorithm differs from the OP algorithm in two ways. First, it use intermediate inputs as a proxy for unobserved productivity, rather than investment. This implies that intermediate inputs are expressed in terms of capital and productivity, i.e. , :
m it = i t (k it , tf p it )
Using the monotonicity condition, intermediate inputs are strictly increasing in productivity and are also invertible: tf p it = s t (k it , m it ) where s t (.) = m -1 t (.). Equation (4) becomes:
y it = β 0 + β k k it + γl it + β m m it + s t (k it , m it ) + µ it (10)
The second difference is related to the correction for selection bias. Although the OP approach allows for both a non-cylindrical panel and the incorporation of the probability of survival in the second stage of the estimation algorithm, the LP method does not incorporate the probability of survival in the second stage.
Recently, new and even more robust production function estimation techniques have emerged in an effort to correct the LP method. These include the [START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] and [START_REF] Ackerberg | Identification properties of recent production function estimators[END_REF] methods. [START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] proposed an alternative implementation of OP/LP moments that involves the simultaneous minimization of first and second stage moments. Using the LP model, he suggested estimating all the parameters simultaneously using the moment conditions :
Wooldridge method (2009)
E[µ_it | I_it] = E[y_it - γ l_it - Φ_t(k_it, m_it) | I_it] = 0,
E[ξ_it + µ_it | I_{it-1}] = E[y_it - β_0 - γ l_it - β_k k_it - g(Φ_{t-1}(k_{it-1}, m_{it-1}) - β_0 - β_k k_{it-1}) | I_{it-1}] = 0.   (11)
As pointed out by Wooldridge, there are several advantages to this approach. First, the joint approach avoids the functional dependence issue above. Even if l_it is functionally dependent on m_it, k_it and t, γ might be identified by the second set of moments. Other benefits of the Wooldridge approach are potential efficiency gains, and simpler standard error calculations. There are also disadvantages of the joint approach; in particular, the joint approach requires a nonlinear search over β_0, β_k, γ and the parameters representing the two unknown functions Φ_t and g. This method is more time-consuming and probably more error-prone than the two-step approach, which can often be obtained by a nonlinear search over only β_k and γ.
The Ackerberg-Caves-Frazer (2015) method
The main argument of the Ackerberg, Caves, and Frazer (ACF) method is that the labor coefficient may not be identified in the estimation procedures proposed by OP and LP. This is what the authors call the "functional dependence problem". Indeed, the authors believe that labor can be an argument of the demand function of the proxy variable and, consequently, of the unobserved productivity function.
Thus, the authors start from the same point of view as the LP method by taking the same monotonicity assumption on the proxy function but including labor: m_it = f_t(k_it, l_it, tfp_it). One interpretation of this assumption is that the gross output production function is Leontief in intermediate inputs. Given the assumption of strict monotonicity, we can invert the intermediate input demand: tfp_it = f_t^{-1}(k_it, l_it, m_it). Therefore, equation (4) becomes:
y_it = β_0 + β_k k_it + γ l_it + β_m m_it + f_t^{-1}(k_it, l_it, m_it) + µ_it = Φ_t(k_it, l_it, m_it) + µ_it   (12)
where Φ_t(k_it, l_it, m_it) = β_0 + β_k k_it + γ l_it + β_m m_it + f_t^{-1}(k_it, l_it, m_it).
Using the first stage moment condition, we have:
E[µ_it | I_it] = E[y_it - Φ_t(k_it, l_it, m_it) | I_it] = 0   (13)
where I_it represents the information set. We note that, unlike the OP and LP methods, no coefficient is identified in the first step. In short, all the coefficients are estimated in the second step using the following second-stage moment condition:
E[ξ_it + µ_it | I_{it-1}] = E[y_it - β_0 - γ l_it - β_k k_it - β_m m_it - g(Φ_{t-1}(k_{it-1}, l_{it-1}, m_{it-1}) - β_0 - β_k k_{it-1} - γ l_{it-1} - β_m m_{it-1}) | I_{it-1}] = 0   (14)
where Φ_{t-1} is replaced by its estimate from the first stage. The coefficients γ, β_k and β_m are estimated under the assumption that productivity follows a first-order Markov process.
Using a Cobb-Douglas production function with Hicks-neutral productivity differences does not allow for the identification of factor biases in technological change, especially since value added or sales are generally used as a measure of output. Non-parametric techniques - elasticity calibration and Data Envelopment Analysis (DEA) - solve this problem.
Calibration method
Introduced by Solow (1956, 1957), the basic idea of this method is to calibrate the different elasticities of the factors of production using two crucial assumptions:
• The constant returns to scale assumption
• The perfect market assumption in both the labor market and the goods and services market
Thus, with these two assumptions, and as long as we have information on the different factors of production as well as on output, we are able to calculate equation (5). By calibration, the elasticities of labor (β_l) and intermediate inputs (β_m) are obtained as the ratio between the labor cost and the production value, and the ratio between intermediate inputs and the production value, respectively. Assuming constant returns, the elasticity of the capital stock becomes
β_k = 1 - β_l - β_m.
Finally, we use the average of each elasticity over time.
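A minimal sketch of this calibration, under the stated assumptions, is given below; the column names (nominal labor cost, intermediate inputs and production value) are assumptions about the data layout.

    def calibrated_elasticities(df):
        # Cost shares in the value of production, averaged over time (constant returns imposed).
        beta_l = (df["labor_cost"] / df["output"]).mean()
        beta_m = (df["materials"] / df["output"]).mean()
        beta_k = 1.0 - beta_l - beta_m
        return {"labor": beta_l, "capital": beta_k, "materials": beta_m}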
The calibration method is particularly interesting in that it calculates TFP in the simplest possible way (no estimation is required). However, the assumption of perfect competition in product and input markets is a strong one, especially in the construction sector, which tends towards an imperfect market structure.
Data Envelopment Analysis approach (DEA)
The DEA approach, also called non-parametric frontier estimation, constructs for each observation a linear combination of all other observations (normalized by production) for explicit comparison. Here, no particular production function is assumed. Efficiency (productivity) is defined as a linear combination of output over a linear combination of factors of production. The weights on the factors (u l , u k ) and output (v q ) are chosen directly by maximizing the efficiency (productivity) denoted by θ. Observations that are not dominated are labeled 100% efficient. Dominance occurs when another firm, or a linear combination of other firms, produces more of all output with the same aggregate of factors, using the same weights to aggregate the factors. A linear maximization program is solved separately for each observation. For unit 1 (firm-year), in the case of a single production, the problem is :
max_{v_q, u_l, u_k} θ_1 = (v_q Q_1 + v*) / (u_l L_1 + u_k K_1)   (15)
subject to:
(v_q Q_i + v*) / (u_l L_i + u_k K_i) ≤ 1,  i = 1, . . . , N;
v_q, u_l, u_k > 0;  v* ≥ 0
where v* = 0 when returns to scale are constant.
The efficiency of all firms cannot exceed 100% when the same weights are applied. A normalization is necessary to properly define the problem: u_l L_1 + u_k K_1 = 1, and v* is an additional free variable that allows for varying returns to scale. When we have constant returns to scale (v* = 0), the production frontier is a ray through the origin in the aggregate production-factor space. Under varying returns to scale, this frontier is the piece-wise linear envelope of all production plans.
The efficiency measure θ i can be interpreted as the productivity difference between unit i and the most productive unit. The efficiency score ranges from 0 to 1. An efficiency score of 1 means that the firm is fully efficient. The level and growth rate productivity estimates are defined as follows, respectively:
log A^DEA_it - log Ā^DEA_t = log θ_it - (1/N_t) Σ_{j=1}^{N_t} log θ_jt   (16)
log A^DEA_it - log A^DEA_{it-1} = log θ_it - log θ_{it-1}   (17)
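As an illustration of how the linear program (15) can be solved in practice, here is a sketch using scipy's linear programming routine; the variable ordering, array layout and the treatment of v* as a free variable (the standard BCC choice) are assumptions for illustration.

    import numpy as np
    from scipy.optimize import linprog

    def dea_bcc_scores(Q, L, K):
        # Multiplier form of the BCC model: one LP per unit, variables (v_q, u_l, u_k, v*).
        n = len(Q)
        scores = np.empty(n)
        for o in range(n):
            c = np.array([-Q[o], 0.0, 0.0, -1.0])             # maximize v_q Q_o + v*
            A_ub = np.column_stack([Q, -L, -K, np.ones(n)])   # v_q Q_i - u_l L_i - u_k K_i + v* <= 0
            b_ub = np.zeros(n)
            A_eq = np.array([[0.0, L[o], K[o], 0.0]])         # normalization u_l L_o + u_k K_o = 1
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.array([1.0]),
                          bounds=[(0, None), (0, None), (0, None), (None, None)],
                          method="highs")
            scores[o] = -res.fun                              # theta_o, equal to 1 for efficient units
        return scores

    # usage: theta = dea_bcc_scores(output, labor, capital)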
The main advantage of the DEA method is the absence of functional form or behavioral assumptions on firms, and no distributional assumption is imposed on the inefficiency term. The underlying technology is left unspecified and allowed to vary across firms. However, it has serious limitations. First, it interprets any deviation from the estimated production frontier as inefficiency. This means that all factors that affect the firm's performance are considered to be under the firm's control. The consequence is that the estimated inefficiency is biased due to exogenous factors such as measurement error (Kumbhakar and Lovell, Stochastic frontier analysis). The second limitation is that it measures inefficiency relative to the best performing unit among observations, making its result susceptible to outlier bias (Coelli et al., An introduction to efficiency and productivity analysis). Based on this review, Table 1 below summarizes each method.
Methods | Advantages | Drawbacks
Fixed effects | Resolution of the simultaneity bias when strict exogeneity of the inputs is assumed. | Underestimation of the capital elasticity.
Instrumental variables and GMM (Blundell and Bond, 1999) | Resolution of the simultaneity bias; these methods do not impose strict exogeneity. | The challenge is to find valid instruments.
Olley and Pakes (1996) | Resolution of simultaneity and attrition biases. | The monotonicity condition no longer holds when the investment value is zero, and the functional dependence problem.
Levinsohn and Petrin (2003) | Resolution of simultaneity and attrition biases. | The functional dependence problem.
Wooldridge (2009) | Resolution of simultaneity and attrition biases by performing a one-step system GMM. | This method is more time-consuming and probably more error-prone than the two-step approach.
Ackerberg, Caves and Frazer (2015) | Resolves simultaneity and attrition biases by addressing the functional dependence problem suffered by the Olley and Pakes and Levinsohn and Petrin methods. | Like the previous methods, its limitation is the specification of a functional form for the production function.
Elasticity calibration | It calculates the TFP in the simplest way possible. | The condition of a perfect market, both in the labor market and in the market for goods and services, is a strong assumption.
Data Envelopment Analysis (DEA) | The absence of functional form or behavioral assumptions on firms; no distributional assumption is imposed on the inefficiency term. | As it is a deterministic method, it is vulnerable to measurement errors.
Table 1: Summary of methods
The following section will empirically test these different methods.
Empirical Application
Data
Our data come from the Esane approximate results file (FARE), which contains accounting information from tax returns that are consistent with information from the Annual Sector Survey. The FARE system aims to build up a coherent set of business statistics. It combines administrative data (obtained from the annual profit declarations that companies make to the tax authorities, and from annual social data that provide information on employees) and data obtained from a sample of companies surveyed by a specific questionnaire to produce structural business statistics.
We mobilize data from the construction sector in metropolitan France from 2009 to 2018. Of the three subsectors in the construction sector (building construction including real estate development, civil engineering, and specialized construction), we focus only on the building construction. Specifically, this is the "Residential and Non-Residential Building Construction (Sector 4120)" sub-sector, which includes general construction or "all crafts" firms with overall responsibility for the construction of a building. It also includes the conversion or renovation of existing residential structures. This sub-sector has the advantage of being highly focused on on-site work and is also representative of the construction sector as shown in the estimation results in the Appendix. Moreover, unlike specialized construction, which is a highly atomized sector, building construction contains more large companies. Firms are reported for at least 3 years and our data do not show a temporal break for the same firm. Furthermore, to overcome selection bias, we need to work on an unbalanced panel data set. Indeed, the attrition bias is lower the more the panel is unbalanced with a large number of samples [START_REF] Ackerberg | Identification properties of recent production function estimators[END_REF].
We use total gross production as the output variable. Three inputs have been mobilized: labor, capital and intermediate inputs. The labor input is measured by the number of full-time equivalent employees. The capital stock is approximated by gross tangible fixed assets. The gross investment is also given by FARE. We measure intermediate inputs by the difference between total gross production and value added at factor costs. Because we have nominal values, we deflate them using price indices in the French construction sector obtained from the STAN 2020 edition database (constant price 2015) to obtain real values. These deflators cover added value, output, investment, capital and intermediate inputs. The table clearly shows that the average number of employees in the sample does not reach 20 (average employment amounts to 19.195 employees). This situation is specific to the sector, which has many small firms. Second, these data reveal strong heterogeneity across firms. While the minimum number of employees is 1, the maximum number is 4,635. For all these reasons, it is interesting to examine the structure of our sample through the size of the firms. The INSEE size classification (classification according to the number of employees, turnover and balance sheet) will be difficult to apply to a sector composed essentially of micro-enterprises. Thus, we assume that firm size depends only on the number of employees. To do this, we follow [START_REF] Baldwin | The trend to smaller producers in manufacturing: A Canada/US comparison[END_REF] classification that considers a firm to be :
• Micro if the number of employees is < 20;
• Small if the number of employees is between 20 and 99;
• Medium if the number of employees is between 100 and 499;
• Big if the number of employees is ≥ 500.
The following table shows that 84.10% of our sample has less than 20 employees and 13.23% are small businesses. Medium-sized companies represent 2.23%, while large companies are very few (0.44%). However, although they are few in number, the large companies in the French construction sector have enormous weight. This is explained by the following graph on average total labor productivity (labor productivity is measured as the ratio of real value added to the number of full-time equivalent employees):
Figure 1: Average labor productivity by firm size
Figure 1 shows that, on average, the labor productivity of large firms is the highest, followed by intermediate-sized firms. However, the gap between medium-sized firms and micro-firms is not large. Average labor productivity is lowest at the small firm level.
Estimation results
In the following table we present the estimation results of the production function based on the methods presented in section 2. All reported estimates are obtained through an unbalanced panel of firms (allowing for implicit entry and exit). The Wooldridge (2009) estimator is used as the GMM method. Its advantage is that it provides consistent and efficient parameter estimation using the one-step system-GMM approach. It solves potential serial correlation and heteroscedasticity, as well as the endogeneity due to simultaneity and attrition, using lagged values as instruments (Ackerberg et al., 2015). The Olley and Pakes (1996), Levinsohn and Petrin (2003), Wooldridge (2009) and Ackerberg, Caves and Frazer (2015) methods are estimated using the "prodest" command developed by Mollisi and Rovigatti (2017), which has the advantage of correcting for attrition and simultaneity bias and adding control variables. We denote the Fixed Effects, Olley and Pakes, Levinsohn and Petrin, Wooldridge and Ackerberg, Caves and Frazer estimators by (FE), (OP), (LP), (WRDG) and (ACF) respectively. The calibration method is designated by (CM).
Second, the elasticity of capital per worker in the production process is very low, ranging from 0.0395 (FE method) to 0.0636 (Wooldridge method). This low elasticity of capital is not a surprise insofar as construction, retaining its manual character, is very labor-intensive. Physical assets will inevitably be less noticeable in a sector composed mainly of micro-firms. The weakness of investment spending in the French construction industry has been noted in the study of [START_REF] Ferrand | Les mutations de l'investissement dans le bâtiment[END_REF]. The author shows that, since the beginning of the 2000s, the rate of investment in French construction, on average, does not exceed 10% of GDP. This rate seems to be the norm when an international comparison is made (with the United States or with the euro zone). The elasticity of intermediate goods per worker in output per worker is more than 0.77% in each of the estimates.
Comparing the estimation methods, we find that the elasticity of capital intensity provided by the FE method is always the lowest, while the elasticity of intermediate inputs per worker and the absolute value of the scale effect (γ) are the highest. These results are consistent with the findings of Van Beveren (2012), who showed that the FE method leads to a low capital coefficient. We add that the fixed-effects estimator overestimates the effect of intermediate inputs and the scale effect in our sample.
The estimation results of the other 4 methods (WRDG, OP, LP and ACF) are quite similar, especially between the OP and LP methods. However, the absolute value of the scale effect obtained by the ACF method is the lowest (0.0146%). This result is probably linked to the problem of functional dependence (non-dynamic labor input) for which the OP and LP methods are criticized. The ACF method can be a good estimator of TFP in the construction sector. Not only does it correct the OP and LP methods by making the labor coefficient dynamic, but it also highlights the crucial role of intermediate inputs in the production process. Using intermediate goods as a proxy for unobserved productivity is particularly important for the construction industry, where expenditures on equipment and machinery rentals are very large. Moreover, no amount of labor can replace the concrete, asphalt, wood and other materials needed to build.
The [START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] estimator provides the highest coefficient of capital per worker (0.0636%). The author states that this method has the advantage of easily obtaining robust standard deviations and makes effective use of the moment conditions implied by the OP and LP assumptions. [START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] argues that two-step estimators (OP and LP in this case) are inefficient for two reasons. First, they ignore the contemporaneous correlation of errors between the two equations. Second, they do not effectively account for serial correlation or heteroscedasticity of the errors. However, we have a loss of information (4,784 fewer observations) with the [START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] estimator, due to the use of lagged instruments; this is not the case with the ACF method, which also estimates the elasticities in a single (second) step.
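To make the preceding comparison more concrete, the following Python sketch illustrates the simplest of these approaches, a within (fixed-effects) regression of a per-worker specification, on simulated data. The variable names, the simulated coefficients and the data are purely illustrative assumptions; this is not the FARE data nor the exact estimation code used here.

# Minimal sketch of a within (fixed-effects) estimator for a per-worker
# production function: ln(Y/L) = a + b_k ln(K/L) + b_m ln(M/L) + g ln(L) + e.
# Column names and simulated data are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_firms, n_years = 200, 10
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2009, 2009 + n_years), n_firms),
})
df["lnl"] = rng.normal(1.5, 0.8, len(df))          # log employment
df["lnk_l"] = rng.normal(2.0, 0.5, len(df))        # log capital per worker
df["lnm_l"] = rng.normal(3.0, 0.5, len(df))        # log materials per worker
df["lny_l"] = (0.05 * df["lnk_l"] + 0.78 * df["lnm_l"]
               - 0.03 * df["lnl"] + rng.normal(0, 0.1, len(df)))

def within(data: pd.DataFrame, cols, group):
    """Demean the given columns by a grouping variable (within transformation)."""
    return data[cols] - data.groupby(group)[cols].transform("mean")

cols = ["lny_l", "lnk_l", "lnm_l", "lnl"]
demeaned = within(df, cols, "firm")                  # firm fixed effects
X = demeaned[["lnk_l", "lnm_l", "lnl"]].to_numpy()
y = demeaned["lny_l"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # OLS on demeaned data
b_k, b_m, gamma = beta
print(f"beta_k={b_k:.3f}  beta_m={b_m:.3f}  gamma={gamma:.3f}")
# TFP (up to a constant) is then recovered as the residual:
df["ln_tfp_fe"] = df["lny_l"] - df[["lnk_l", "lnm_l", "lnl"]].to_numpy() @ beta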
Elasticities can also be obtained by calibration or by the DEA method (sections 2.6 and 2.7), which means that no estimation is required to obtain them.
Non-parametric results
In this subsection, we present the results related to the calculation of TFP by the elasticity calibration method and by the DEA method. Table 5 presents the results of the calibration of the different elasticities. The calibration method has the advantage of showing that labor (21%) explains output better than capital (14%) in the construction sector. This result is consistent with the idea that the construction industry in general, and the French construction industry in particular, is labor-intensive. Materials (65%) contribute the most to output, consistent with the estimation methods. Although the calibration method requires strong assumptions, in this case the assumption of perfect markets, it provides results consistent with the reality of the sector. Based on these results, we represent the relationship between output and each factor of production (labor, capital and materials). The most striking observation is that the capital stock explains 50.37% of output, while labor and materials explain 70.55% and 97.86% of output, respectively.

The DEA method is used through the BCC model ([START_REF] Banker | Some models for estimating technical and scale inefficiencies in data envelopment analysis[END_REF]), which assumes variable returns to scale (VRS model). Indeed, the previous estimation results have shown that the constant returns assumption cannot be applied to the French construction sector. We adopt the input-oriented DEA approach because it allows us to determine the extent to which a firm's input use could be reduced if inputs were used efficiently to achieve the same output level. The decision-making units (DMUs) are the different legal units. Since it is not possible to display the efficiency score, or productivity (θ), of each of the 4,784 legal units in a given year, we present the average efficiency by size in Table 6. According to Table 6, only large companies have an average score of about unity. In other words, all big firms have total technical efficiency. It also means that, on average, all big firms in the French construction sector operate on or very close to the production frontier from 2009 to 2018. This is a sign that big firms are using resources to their full potential. The efficiency score decreases with firm size. Medium-sized and small firms have higher average scores than micro-firms. These results confirm once again that the large companies, despite their small numbers, dominate the sector.
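To illustrate how such efficiency scores can be obtained, the following Python sketch solves the input-oriented BCC (variable-returns-to-scale) envelopment program for one DMU at a time with a linear-programming solver. The toy inputs and outputs are hypothetical assumptions; in our application the DMUs are the legal units and the inputs/outputs come from the FARE data.

# Minimal sketch of an input-oriented BCC (VRS) DEA model solved as a linear
# program for each decision-making unit (DMU). Toy data only.
import numpy as np
from scipy.optimize import linprog

# X: (m inputs x n DMUs), Y: (s outputs x n DMUs) -- hypothetical values
X = np.array([[4.0, 2.0, 3.0, 6.0],    # e.g. labour
              [3.0, 1.0, 2.0, 5.0]])   # e.g. capital
Y = np.array([[2.0, 1.0, 2.0, 4.0]])   # e.g. value added
m, n = X.shape
s = Y.shape[0]

def bcc_input_efficiency(o: int) -> float:
    """Efficiency score theta of DMU o under variable returns to scale."""
    # Decision variables: z = [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimise theta
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([b_in, b_out])
    # VRS convexity constraint: sum_j lambda_j = 1
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0.0, None)] * n  # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0]

scores = [bcc_input_efficiency(o) for o in range(n)]
print(np.round(scores, 3))   # a score of 1 means the DMU lies on the frontier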
Comparison of TFP methods
In this subsection, we compare the different firm-level TFP calculation methods that have been obtained and focus on the correlation between them. In order to make the methods comparable, we normalize the parametric, stochastic and calibration methods:
z_it = (x_it - min(x)) / (max(x) - min(x))
where x = (x_1t, ..., x_nt) and z_it is the i-th normalized value in period t.

Table 7 clearly shows that the TFPs obtained by the TFP estimation methods (FE, WRDG, OP, LP, and ACF) are very similar to each other. The average TFP for these 5 estimators is between 0.103 (FE estimator) and 0.131 (WRDG estimator). The different standard deviations are also very close, about 0.04 in each case. The FE, stochastic and calibration methods range from 0 to 1 because they have been normalized. The non-parametric methods (calibration and DEA methods) have slightly higher means. The calibration method provides an average TFP equal to 0.165 and the DEA method provides a mean equal to 0.507.
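As an illustration of the normalization step above, the following Python sketch applies the min-max transformation column by column to hypothetical TFP series; the column names and values are illustrative only.

# Min-max normalisation z_it = (x_it - min(x)) / (max(x) - min(x)),
# applied to each TFP series so that the methods become comparable.
import pandas as pd

tfp = pd.DataFrame({
    "fe":   [1.2, 0.8, 1.5, 1.1],
    "wrdg": [2.3, 1.9, 2.8, 2.1],
})

def min_max(series: pd.Series) -> pd.Series:
    return (series - series.min()) / (series.max() - series.min())

tfp_normalised = tfp.apply(min_max)   # every column now lies in [0, 1]
print(tfp_normalised)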
Finally, using the productivity levels obtained, it is possible to calculate the overall average industry productivity for each year based on each estimator. Figure 3 shows that the TFP of the "residential and non-residential building construction" industry displays a rather stable trend regardless of the method. We also add that the estimation methods are quite similar, especially the semi-parametric methods (OP, LP and ACF). The DEA method has a higher average evolution than the others. The calibration method is slightly above the rest. A similar graph by firm size for each method is also available in the Appendix.

But what about the correlation between these methods? Table 8 below shows the Spearman rank correlation between the TFP calculation methods. Since the relationship between the TFPs is not necessarily linear, it is appropriate to use the Spearman correlation. Not surprisingly, there is a strong positive relationship between the TFP estimation methods (more than 92%), which is consistent with the work of Van Beveren (2012). The strongest is the correlation between the OP and WRDG methods (0.9942), followed by the correlation between LP and ACF (0.9937). The correlation between the OP and LP methods is 0.9930. However, the correlations between the non-parametric methods (DEA and calibration methods) and the other methods are relatively low, even negative with the calibration method. The calibration method is negatively correlated with the other methods, although these negative correlations are weak: the highest and lowest correlations (in absolute value) are between the CM and FE methods (0.3569) and between the CM and WRDG methods (0.1165), respectively. The DEA method is positively correlated with the WRDG, OP, LP, and ACF methods at over 32%. The correlation between the DEA and FE methods is less significant (0.2379). The strongest positive correlation (0.4190) is between the two non-parametric methods (DEA and CM).
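The following Python sketch shows how such a Spearman rank-correlation matrix can be computed between TFP series; the series are simulated and the column names are illustrative, but the call mirrors the kind of computation reported in Table 8.

# Spearman rank correlation matrix between TFP series produced by different
# methods (illustrative column names and simulated data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
base = rng.normal(size=500)
tfp = pd.DataFrame({
    "FE":   base + rng.normal(scale=0.10, size=500),
    "WRDG": base + rng.normal(scale=0.05, size=500),
    "OP":   base + rng.normal(scale=0.05, size=500),
    "CM":   -base + rng.normal(scale=0.50, size=500),  # loosely (negatively) related
})
corr = tfp.corr(method="spearman")   # Spearman because the relation need not be linear
print(corr.round(3))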
Conclusion
This paper reviewed some methods for calculating TFP. The first part of the paper theoretically explored the strengths and weaknesses of each TFP estimation method, ranging from parametric and semi-parametric to non-parametric techniques. In the second part, we used firm-level data from 2009 to 2018 from the FARE database (Statistique structurelle annuelle d'entreprises issue du dispositif ESANE) for the French construction sector, particularly the construction of residential and non-residential buildings. Our estimation results (especially the semi-parametric methods and the Wooldridge method) reveal few differences in input elasticities among the methods used. However, some remarks can be made.
In each of the estimates (FE, [START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] or semi-parametric methods), returns to scale are decreasing. This result may be related to the organizational structure of the sector, in which big firms dominate the market: we are close to an oligopolistic market. The [START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] estimator produces a higher (and statistically significant) estimate of the capital coefficient relative to the other approaches. Consistent with the literature, the fixed-effects estimator provides a lower coefficient on capital than the other methods. On the other hand, we have evidence that the same estimator overestimates both the value of the scale effect (in absolute value) and the coefficient on intermediate inputs.
Because the [START_REF] Ackerberg | Identification properties of recent production function estimators[END_REF] methodology treats labor as a dynamic input whose choice affects future profits, the scale effect (in absolute value) is lower. Also, the capital elasticity obtained by the ACF method is lower than that obtained by the OP, LP and WRDG methods. However, it is likely to provide the most plausible estimates among the TFP estimation methods because it corrects for the OP and LP methods and does not lose information compared to the WRDG method.
The non-parametric TFP methods reveal two major results. First, based on the calibration method, the building sector in France is labor-intensive. Second, using the DEA method, we show that on average, all big firms operate on or very near the production frontier from 2009 to 2018. The TFP estimation methods are highly correlated with each other, except for the correlations between the non-parametric methods (DEA and calibration methods) and the other methods. They are very weak, even negative with the calibration method.
Our contribution to the literature on productivity measurement is obvious. Using several methods, we measure TFP in an economically important sector of the French economy that has faced a productivity gap in recent years. We show that returns to scale are decreasing in the sector and that the semi-parametric estimation methods and the [START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] method used on firm-level data are quite similar. However, as the choice of the appropriate method is strongly conditioned by the research question, the ACF method, to our knowledge, can be a good estimator of TFP in the French construction sector. The estimation results in Table 8 are in perfect agreement with those obtained in the body of the paper. All of these results emphasize that our sample is in harmony with the full sample.
Appendix: Production function estimation using total construction
Figure 2: Relationship between output and inputs
Figure 3: Total average evolution of total factor productivity
Table 2: Summary statistics of production variables
Table 3: Proportion by size

Size     Number of observations   Proportion (%)
Micro    23,164                   84.10
Small    3,645                    13.23
Medium   615                      2.23
Big      120                      0.44
Table 4: Production function estimates
Notes: γ = scale effect = (β_l + β_k + β_m - 1); ln(Y/L) = production per worker in logarithm; β_k is the elasticity of capital per worker; β_m is the elasticity of intermediate inputs per worker; N is the number of observations; ID is the number of legal units; Firm FE and Year FE are the individual and time fixed effects, respectively.
Table 5: TFP calculation using calibration method

Observations (N)   Labor (β_l)   Capital (β_k)   Materials (β_m)
27,544             0.21          0.14            0.65
Table 6: Average efficiency by size

Size     2009  2010  2011  2012  2013  2014  2015  2016  2017  2018
Micro    0.49  0.49  0.50  0.49  0.49  0.48  0.47  0.50  0.51  0.50
Small    0.55  0.55  0.56  0.55  0.54  0.53  0.53  0.53  0.54  0.55
Medium   0.78  0.76  0.78  0.77  0.77  0.75  0.76  0.76  0.76  0.78
Big      0.90  0.89  0.90  0.91  0.90  0.89  0.89  0.88  0.88  0.89
Table 7: Summary statistics of TFP methods

Methods                               N        Mean   Sd     Min    Max
Fixed effects                         27,544   0.103  0.035  0      1
Wooldridge (2009)                     27,544   0.131  0.04   0      1
Olley and Pakes (1996)                27,544   0.125  0.038  0      1
Levinsohn and Petrin (2003)           27,544   0.117  0.038  0      1
Ackerberg, Caves and Frazer (2015)    27,544   0.115  0.037  0      1
Calibration method                    27,544   0.165  0.092  0      1
Data Envelopment Analysis             27,544   0.507  0.2    0.132  1
Table 9: Production function estimates
See Lipsey and Carlaw (2000, 2004)
Statista, March 2021
In its simple definition, returns to scale represent the increase in efficiency as a result of the increase in production factors.
If γ = 0 then the returns are constant. If γ > 0 then the returns are increasing. If γ < 0 then the returns are decreasing.
The choice of production factors is not under the control of the econometrician, but determined by the individual choices of firms[START_REF] Griliches | Production functions: the search for identification[END_REF]. According to De Loecker (2007), this simultaneity bias is defined as the correlation between the level of production factors and the unobserved productivity shock.
There is a technology characterized by a linear relationship between intermediate inputs and output, and these intermediate inputs are proportional to output[START_REF] Gandhi | Does value added overstate productivity dispersion? Identification and estimation of the gross output production function[END_REF]).
These assumptions can be relaxed. However, we will need other assumptions to calculate a user cost of capital.
50 replicates are performed for the semi-parametric and[START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF] methods. We use investment as a proxy in[START_REF] Wooldridge | Econometrics: Panel Data Methods[END_REF].
Average change in total factor productivity by size
The following graph shows the average evolution of the TFP obtained by each method according to the size of the firm. Regardless of the method used, large firms consistently show a high average variation in TFP. For FE, CM and DEA methods, the larger the size of the company, the higher the average variation in TFP. However, with both the LP and ACF methods, micro-firms have a higher average TFP change than medium-sized firms in recent years, and even the same as large firms in 2018 with the ACF method. |
04107492 | en | [
"info.info-ma"
] | 2024/03/04 16:41:22 | 2022 | https://theses.hal.science/tel-04107492v2/file/TH2022CHAPUTREMY.pdf | Keywords: Machine Ethics, Artificial Moral Agents, Multi-Agent Systems, Multi-Agent Reinforcement Learning, Multi-Objective Reinforcement Learning, Ethical Judgment, Hybrid Neural-Symbolic Learning, Moral Dilemmas, Human Preferences Éthique computationnelle, Agents moraux artificiels, Systèmes multi-agent, Apprentissage par renforcement multi-agent, Apprentissage par renforcement multiobjectif, Jugement éthique, Apprentissage hybride neuro-symbolique, Dilemmes moraux, Préférences humaines vi Logic-based rules D. Argumentation graphs
The past decades have seen tremendous progress of Artificial Intelligence techniques in many fields, reaching or even exceeding human performance in some of them. This has led to computer systems equipped with such AI techniques being deployed from the constrained and artificial environments of laboratories into our human world and society, in order to solve tasks with real and tangible impact. These systems have a more or less direct influence on humans, be it their lives in the most extreme cases, or in a more subtle but ubiquitous way, their daily lives. This raises questions about their ability to act in accordance with the moral values that are important to us.
Various fields of research have addressed aspects of this problem, such as the ability to provide fair and just decisions, or the ability to be intelligible, thus providing human users with reasons to trust them, and to know when not to trust. In this thesis, we focus particularly on the area of Machine Ethics, which is concerned with producing systems that have the means to integrate ethical considerations, i.e., systems that make ethical decisions in accordance with the human values that are important to society.
Our aim is thus to propose systems that are able to learn to exhibit behaviours judged as ethical by humans, both in situations of non-conflicting ethical stakes, but also in more complex cases of dilemmas between moral values. We propose 3 contributions, each with a different objective, which can be taken independently of each other, but which have been conceived and thought to work together, to combine their benefits, and to address the overall problem.
Firstly, we propose a reinforcement learning algorithm capable of learning to exhibit behaviours incorporating these ethical considerations from a reward function. The goal is to learn these ethical considerations in many situations over time. A multi-agent framework is used, which, on the one hand, increases the richness of the environment and, on the other hand, offers a more realistic simulation, closer to our intrinsically multi-agent human society, in which these approaches are bound to be deployed. We are particularly interested in the question of how agents adapt to changes, both to environmental dynamics, such as seasonal changes, and to variations in the ethical mores commonly accepted by society.
Our second contribution focuses on the design of the reward function to guide the learning of agents. We propose to integrate judging agents, based on symbolic reasoning, tasked with judging learning agents' actions and determining their reward, relative to a specific moral value. The introduction of multiple judging agents makes the existence of multiple moral values explicit. Using symbolic judgment facilitates the design by application domain experts, and improves the intelligibility of the produced rewards, providing a window into the motivations that learning agents receive.
Thirdly, we focus more specifically on the question of dilemmas. We take advantage of the existence of multiple moral values and associated rewards to provide information to the learning agents, allowing them to explicitly identify those dilemma situations, when 2 (or more) moral values are in conflict and cannot be satisfied at the same time. The objective is to learn, on the one hand, to identify them, and, on the other hand, to decide how to settle them, according to the preferences of the system's users. These preferences are contextualized, i.e., they depend both on the situation in which the dilemma takes place, and on the user. We draw on multi-objective learning techniques to propose an approach capable of recognizing these conflicts, learning to recognize dilemmas that are similar, i.e., those that can be settled in the same way, and finally learning the preferences of users.
We evaluate these contributions on an application use-case that we propose, define, and implement: the allocation of energy within a smart grid.
Résumé (Fr)
The past decades have seen immense progress in Artificial Intelligence techniques, in many fields, to the point of reaching, or even exceeding, human performance in some of them. This has led computer systems equipped with such AI techniques to leave the constrained and artificial environments of laboratories, to be deployed in our world and our human society, in order to solve tasks with a very real impact. These systems have a more or less direct influence on humans, be it their lives in the most extreme cases, or, in a more subtle but more ubiquitous way, their daily lives. Questions thus arise as to their ability to act in accordance with the (moral) values that matter to us.

Various fields of research have taken an interest in aspects of this problem, such as the ability to provide fair and just decisions, or the ability to be intelligible, and thus to give human users reasons to grant their trust, and to know when not to grant it. In this thesis, we focus particularly on the field of Machine Ethics, which consists in producing systems that have the means to integrate ethical considerations, that is, systems with ethical decision-making, in accordance with the human values that are important to society.

Our goal is thus to propose systems that are able to learn to exhibit behaviours judged as ethical by humans, both in situations with non-conflicting ethical stakes and in the more complex cases of dilemmas between moral values. We propose 3 contributions, each with a different objective, which can be taken independently of one another, but which were designed to fit together in order to combine their advantages and to address the overall problem.

First, we propose a reinforcement learning algorithm, capable of learning to exhibit behaviours integrating these ethical considerations from a reward function. The goal is thus to learn these ethical stakes in many situations, over time. A multi-agent setting is used, which on the one hand increases the richness of the environment, and on the other hand offers a more realistic simulation, closer to our intrinsically multi-agent human society, in which these approaches are bound to be deployed. We are particularly interested in the question of the agents' adaptation to changes, both to the dynamics of the environment, such as seasonal changes, and to variations in the ethical mores commonly accepted by society.

Our second contribution focuses on the design of the reward function, in order to guide the learning. We propose the integration of judging agents, relying on symbolic reasoning, tasked with judging the actions of the learning agents and determining their reward, relative to a specific moral value. The introduction of multiple judging agents makes the existence of multiple moral values explicit. The use of symbolic judgment facilitates the design by experts of the application domain, and improves the intelligibility of the rewards thus produced, which offers a window into the motivations that the learning agents receive.

Third, we focus more precisely on the handling of dilemmas. We take advantage of the existence of multiple moral values to provide more information to the learning agents, thus allowing them to explicitly identify those dilemma situations, when 2 (or more) moral values are in conflict and cannot be satisfied at the same time. The objective is to learn, on the one hand, to identify them, and, on the other hand, to settle them, according to the preferences of the system's users. These preferences are contextualized: they depend both on the situation in which the dilemma takes place and on the user. We draw on multi-objective learning techniques to propose an approach capable of recognizing these conflicts, learning to recognize dilemmas that are similar, in other words those that can be settled in the same way, and finally learning the users' preferences.

We evaluate these contributions on an application case that we propose, define and implement: the allocation of energy within a smart grid.
Environmental sustainability: agents must avoid buying energy from polluting sources.

Our first contribution concerns 2 learning algorithms, named Q-SOM and Q-DSOM, which rely on the use of Self-Organizing Maps (SOM), also called Kohonen maps. These maps make it possible to bridge continuous domains, for the states of the world and the actions of the agent, with discrete identifiers usable in a Q-Table to learn the interest of each state-action pair, following an algorithm close to Q-Learning. The main advantages of these two algorithms, compared to other state-of-the-art algorithms, lie in:

• their ability to handle continuous spaces while offering a discrete representation, which allows states and actions to be manipulated and compared;
• their ability to adapt to change, thanks to the properties of Kohonen maps and of their extension, dynamic maps (DSOM).

These algorithms are evaluated on various reward functions, which integrate ethical considerations, such as the moral values described above. Some of the functions mix several of these considerations, in order to test the agents' ability to learn several moral values. Finally, two of the functions are specifically designed to change their definition after a certain number of time steps, which forces the agents to adapt in order to maximize their expected rewards.

We compare our algorithms to other state-of-the-art algorithms, such as DDPG and its multi-agent extension, MADDPG.
Our second contribution concerns the construction of the reward. Traditionally, rewards in reinforcement learning are computed by numerical functions. We propose to introduce judging agents into the environment, tasked with determining the reward for each of the learning agents, according to the moral values they represent, using reasoning techniques such as logical rules or argumentation.

Such agents make it possible to:

• make the different moral values explicit, and thus identify conflicts between them more easily;
• specify the expected behaviour through a symbolic judgment, by determining which elements of the agent's current behaviour are acceptable and which are unacceptable or must be improved;
• modify, and in particular add or remove, moral rules by adding or removing judging agents;
• improve the intelligibility of the learning agents' motivations, that is, of the rewards they receive.

We implement this proposal in 2 different forms. The first is inspired by the Ethicaa judging agents and relies on "Beliefs, Desires, Intentions" (BDI) agents and logical rules to produce the symbolic reasoning. The second approach relies on argumentation mechanisms.

We compare these 2 approaches, both with respect to traditional numerical rewards and with each other, in order to assess the strengths and weaknesses of each.
Our third and final contribution focuses on the question of dilemmas and aims to bring the human a little more into the loop. Whereas the second contribution, for simplicity's sake, aggregated the rewards produced by each judging agent, in other words by each moral value, in this contribution we extend the learning algorithm to receive multiple rewards as input, that is, a multi-objective setting. The learning agents thus receive the reward corresponding to each moral value individually.

This mainly allows them to identify dilemma situations, in which it is impossible to satisfy all moral values at the same time. From this identification, we propose to "settle" dilemmas according to the users' preferences. We posit that these preferences are contextualized, that is, they depend on the context in which the dilemma occurs: for example, we would probably not make the same choice in winter as in summer.

The learning algorithm is thus modified in order to:

1. learn actions that are interesting whatever the human preferences, so as to offer compromises to users if a dilemma arises;
2. automatically identify dilemma situations, according to a definition that we propose, which is inspired by the existing literature but takes human preferences into account;
3. offer the alternatives to users and learn to settle these dilemmas as they wish, in context.

The partitioning of dilemmas into contexts is largely done thanks to the user.

We create fictitious user profiles, in order to demonstrate the ability of this approach to learn user preferences and to adapt to them.

Thus, our 3 contributions, which can be taken separately but which are designed to build on one another, thereby compensating for each other's weaknesses and offering a general framework, make it possible to meet our initial objectives.
Diversity within society is addressed by:

• the use of multiple learning agents, which rely on different profiles, both in their consumption habits and in their preferences over moral values;
• the use of multiple judging agents, which represent different moral values.

The evolution of the ethical consensus is addressed by:

• the learning algorithms, which are designed with this objective in mind, which integrate design choices consistent with it, and which are specifically evaluated on this aspect;
• the presence of multiple judging agents, which facilitates the mechanisms for adding or removing, and therefore modifying, moral values.

The learning of behaviours, with or without an emphasis on the notion of dilemmas, is addressed by the learning algorithms: the first contribution learns in a general manner, that is, conflicts between moral values are averaged to obtain a result that is satisfactory on average; the third contribution brings the emphasis on dilemmas to deal explicitly with these trade-offs, which makes the system more effective and also more intelligible for the user, by making explicit these sticking points that it does not know how to, or cannot, resolve.

The construction of the reward is done through the judging agents, whose symbolic reasoning makes it possible in particular to integrate expert knowledge. Argumentation in particular is very effective in avoiding reward hacking phenomena, thanks to its attack relation between arguments, which can forbid specific behaviours.

Finally, the feasibility study was demonstrated along the way through the design of an application case, the implementation of a simulator, and the evaluation of the contributions on it.
First and foremost, I would like to thank my best friends, Dylan and Kévin, for supporting (and even enduring!) me throughout these 3 years (and even before that). Our numerous evenings of playing games have helped me release the stress of my daily (and sometimes nightly) work. Dylan deserves a golden (chocolate) medal for hearing my (too many) rants about administrative problems, my unconditional love for Linux, and hatred for Windows (to the point it became a running gag). The post-defence buffet would not have been such a success without his help. Kévin also deserves a medal for all the laughs we had -our sessions together were a heist of fun! -and for his legendary Breton crêpes.
Many thanks to Ambre for the (delicious) restaurants we had together, even though my belly fat and wallet do not participate in the appreciation; and to Maëlle for making me discover Friends, and for our "Simpson binge-watching" evenings.
This thesis would probably not have happened without my parents and my family: they have always pushed me towards academic excellence, and have nurtured my curiosity since primary school.
Special thanks to my friends, which, I know, I have not seen enough recently due to this thesis: Pierre, Grégory, Rayane, Bénédicte, and Admo. I promise you, now that's finished, we will see each other again! I would also like to thank all of my colleagues at the LIRIS, who have made this research experience great, and especially my colleagues from the SyCoSMA team: Arthur, Simon (F.), Simon (P.), Bastien, Frédéric, Mathieu, Laetitia, Alain, Véronique. I will miss our profound (and sometimes stupid) conversations at lunchtime, but at least we will still have our board games evenings.
Last but not least, my thesis would not have been the same without my co-directors: Salima, Olivier, and Mathieu. Thank you for proposing such an interesting subject, accepting me as your PhD candidate, and, more importantly, teaching me the way you did, both in your respective fields and in the general research methodology. It was great working
I would like to take this opportunity to pay a special homage to Salima in particular, who was kind enough to accept me on this exciting project. I was very happy to find something other than Deep Learning and computer vision, and I don't regret my choice after these years! Although she was always very (too?) busy, she always took time to help me when I needed it. I learned a lot from her (probably a few bad habits along the way, such as checking my email in meetings), and I intend to honor her teachings in my future career.
And finally, thank you, dear reader, for taking the time to read my manuscript!
Remy Chaput
Context
During the last decades, Artificial Intelligence (AI) techniques have been considerably developed. These techniques have made impressive progress in various tasks, such as image recognition, prediction, behaviour learning, etc. In some specific domains, such AI techniques are even deemed to exceed human performance.
Due to this, AI-powered applications are increasingly deployed in our society. For example, loans can be granted based on the recommendation of an AI system. We might encounter more and more artificial agents, either physical, i.e., robots, or virtual, in the next years.
A potential risk, which is frequently mentioned, is the "Value Alignment Problem" [START_REF] Dignum | Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way[END_REF]World Economic Forum, 2015). We can indeed wonder to which degree these systems, while solving their given task, will adopt behaviours that are aligned, or misaligned, with our human values.
This question has led to the rise of several fields of research more or less related to the Ethics of AI. For example, the Fairness community aims to limit the learning of biases from the datasets that AI systems use to learn how to solve their task. The Explainable AI field aims to increase our ability to understand the process that these systems execute, such that we can agree with their outputs, or on the contrary question them. As such, it might help us know when to trust that these systems act accordingly to our values, and when we cannot trust them.
In this thesis, we will more specifically focus on the Machine Ethics field. This field "is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making" [START_REF] Anderson | Machine Ethics[END_REF].
As a general framework, we consider that AI systems are integrated in a human society (ours), and do not live in their own world. As such, we will not talk about the ethics of the technical system alone, but rather the ethics of the whole Socio-Technical System (STS), which comprises both the artificial and human agents. Firstly, ethical considerations already exist in the social part of the system, and we can probably capture them for the technical part, either by (domain) expert knowledge, or user knowledge. Secondly, another consequence of such a framework is that the technical system can be used to improve humans' own ethical reasoning and considerations. Dilemmas can be leveraged to help users, and more generally stakeholders, reflect on their current considerations, acknowledge new dilemmas that they did not know about, and perhaps propose ways to evolve their ethical preferences and stances. As such, we should not automatically hide away dilemmas, but instead expose them to users. This requires some interpretability, and interaction, to allow for this co-construction of ethics within the STS. Whereas we did not solve these requirements (interpretability alone is a whole field of research), they served as some sort of guidelines, shaping the problems and potential approaches.
Throughout the manuscript, we will sometimes refer to this general framework to emphasize the importance of some design choices. These design choices pave the way for the requirements of the general framework.
From this general framework, we particularly extract two assumptions. First, ethical stakes and the whole ethical knowledge that is necessary for ethical decision-making, i.e., the ability to make decisions that integrate ethical considerations and would be deemed as ethically-aligned by humans, necessarily comes from humans. Whether artificial agents can be fully moral agents is an out-of-scope debate for this manuscript, and we leave this question to moral philosophers. We will simply assume that current agents are probably not able to demonstrate the same level of ethical capabilities as humans, because they lack some metaphysical properties, such as free will. Despite this, we still would like them to make actions that consider, as much as possible, ethical considerations, so as to improve our lives.
Floridi and Sanders argue that, whereas there is a difference between artificial agents and humans, under the correct Level of Abstraction, some artificial agents can be said to have moral accountability. They propose an alternative definition of morality that allows including these artificial agents [START_REF] Floridi | On the Morality of Artificial Agents[END_REF]):
An action is said to be morally qualifiable if and only if it can cause moral good or evil. An agent is said to be a moral agent if and only if it is capable of morally qualifiable action.
In order to make artificial agents capable of making morally good actions, rather than evil ones, we argue they need to receive ethical considerations from those who possess them, i.e., humans. We will refer to this as the ethical injection, whose particular form depends on the specificities of the employed AI techniques, the envisioned use-case, etc.
The second assumption is that, although we share similar moral values, human ethical preferences are contextualized. They are different from a person to another, and between different situations. This means that, on the one hand, the ethical injection must be sufficiently rich to embed these contextualized preferences, and on the other hand, the artificial agents should learn the different preferences, and exhibit a behaviour consistent with the multiple preferences, instead of a single set.
Objectives
In this thesis, we focus on, and solve, the following objectives.
1. Represent and capture the diversity of moral values and ethical preferences within a society.
1. This diversity derives from multiple sources: multiple persons, multiple values, and multiple situations. 2. The currently accepted moral values, and their definitions, i.e., the "consensus" as part of societal norms and mores, may shift overtime.
2. Learn ethical behaviours, in situations, which are well-aligned with human values.
1. Behaviours should consider non-dilemma situations, which may imply ethical stakes, but not necessarily in conflict. 2. Behaviours should consider dilemma situations, with conflicts between stakes. 3. The system must learn according to a well-defined specification of the desired behaviour, to avoid "specification gaming"1 or mis-alignment.
3. Implement a prototype use-case to demonstrate the feasibility of the approach.
To help us solve these objectives, we formulate the following design choices. These choices will be partially supported in our State of the Art.
1. Reinforcement Learning (RL) is an appropriate method to learn situational behaviours. 2. Continuous domains (for both states and actions) allow for more complex environments, including multi-dimensional representations, which would be less practical to implement with discrete domains. 3. Multi-agent systems can help us address diversity, by representing different persons and encode various moral values. 4. Agentifying the reward construction paves the way for co-construction, with both artificial agents and humans, as per the general framework, and facilitates adaptation to shifting ethical consensus, as per our objectives.
From the aforementioned design choices and objectives, we pose the following research questions.
1. How to learn behaviours aligned with moral values, using Reinforcement Learning with complex, continuous, and multi-dimensional representations of actions and situations? How to make the multiple learning agents able to adapt their behaviours to changes in the environment?
2. How to guide the learning of agents through the agentification of reward functions, based on several moral values to capture the diversity of stakes?
3. How to learn to address dilemmas in situation, by first identifying them, and then settling them in interaction with human users? How to consider contextualized preferences in various situations of conflicts between multiple moral values?
We present 3 contributions that aim to answer our research questions; they are briefly introduced here.
1. Two RL algorithms that learn behaviours "in general" (without emphasis on dilemmas), using continuous domains, and particularly focusing on adaptation to changes, both in the environment dynamics and the reward functions.
2. A hybrid method relying on symbolic judgments to construct the reward function from multiple moral values, focusing on the understandability of the resulting function.
3. A multi-objective extension to the RL algorithms that focuses on addressing dilemmas, by learning interesting actions for any possible human preferences, identifying
situations of dilemma, and learning to settle them explicitly, according to contextualized human preferences.
Table 1.1 summarizes the relation between objectives, research questions and contributions.
Tab. 1.1.: Associations between objectives, research questions, and contributions. Objective O3 (implementation of prototype use-case) is transverse to all contributions.
Plan
This manuscript is divided into 7 chapters. The (current) first one has introduced the context of this thesis, as well as the objectives and research questions. Note that, in each of our contribution chapters, we present both the model and associated experiments and results. This allows contributions to be more clearly separated, and presented in an incremental manner. Advantages and limitations are immediately highlighted, before moving on to the next contribution chapter. In addition, it exemplifies each model by directly applying on a use-case, anchoring them in a practical context on top of the theoretical. Nevertheless, it does not detract from the genericity of our approach, which could be extended to other application domains.
Chapter 2 explores the different research areas related to our work, namely Machine Ethics, Multi-Agent Reinforcement Learning, Multi-Objective Reinforcement Learning, and Hybrid AI. It identifies recent advances, and the remaining challenges. An analysis and comparison of existing work raises interesting properties to be integrated in our contributions: continuous domains, multiple agents, multiple moral values, a hybrid approach, the ability to adapt to changes, and interaction with the user. Our problematic, objectives and research questions are detailed in the light of these elements.
Chapter 3 specifies our positioning, methodological approach, presents our architecture conceptually and describes our application case. The first part shows our vision of the integration of an artificial intelligence system within a human society, and how to integrate ethical considerations into these systems, through an ethical injection. The second part describes our methodology, as a multi-disciplinary approach that we treat incrementally through successive contributions, each validated separately and integrating with the others. It also positions our objectives in relation to the challenges identified in the state of the art. The third part conceptually presents our architecture, composed of 3 contributions, briefly introduces these contributions, how they are articulated between them, and presents the novelties of our approach. Finally, the fourth part presents the application case that we have chosen to validate our contributions: energy distribution within a Smart Grid. This presentation will allow us to exemplify each of the contributions and to anchor them in a practical case, although they are conceived as generic, and to facilitate the experiments' description.
Chapter 4 presents our first contribution: two reinforcement learning algorithms, Q-SOM and Q-DSOM, dedicated to learning behaviours aligned with moral values. These algorithms focus on the use of continuous domains and adaptation to changes in the dynamics of the environment. They are evaluated on multiple reward functions, testing various moral values, including functions that combine several values, or whose definition changes after a certain number of time steps.
Chapter 5 presents our second contribution: the agentification of reward functions, to compute these rewards through judgments, made by symbolic agents. We propose two ways to do this, the first one through logical rules and Beliefs-Desires-Intentions agents, whereas the second one uses argumentation graphs. In both cases, the moral values are explicitly defined by the system designer, and the judging agents reason about them in order to produce their judgment. We compare these two ways and evaluate the ability of agents using our learning algorithms to exhibit behaviours that correspond to these reward functions, i.e., that are aligned with the moral values on which they are based.
Chapter 6 presents our third and final contribution, which opens up the reward functions defined earlier to take a multi-objective approach. This allows agents to consider each of the moral values directly, and thus to identify situations that put one or other of these values in difficulty. We propose a definition of "dilemmas" adapted to our framework, which allows agents to recognize situations in which they are unable to satisfy all moral values at the same time. We also modify the learning algorithm so that the agents can offer alternatives to the user in these dilemma situations, i.e., actions that satisfy the moral values to different degrees. Finally, agents learn to perform the actions chosen by users in similar contexts; we particularly emphasize the contextual aspect of user preferences.
Finally, Chapter 7 summarizes our contributions, their benefits, and how they address our research questions and objectives. It analyzes their various limitations, but also the questions and avenues of research that they open up.
2 Background knowledge and State of the Art
In this chapter, we introduce the necessary knowledge to understand our contributions, and we explore the state of the art in the fields related to our work: Machine Ethics, Multi-Agent Reinforcement Learning, Multi-Objective Reinforcement Learning, and Hybrid Artificial Intelligence. This exploration allows us on the one hand to compare the existing approaches, their advantages, but also their limitations, and to define some concepts necessary to the understanding of our work. On the other hand, we identify research challenges and thus precise our research questions.
As our research questions relate to the question of behaviours aligned with moral values, we begin this state of the art with the domain of Machine Ethics. The analysis of proposed approaches and of the properties considered as important opens the way to the other fields.
Machine Ethics
The field of Machine Ethics is relatively recent among the other fields of Artificial Intelligence, although its roots might be older. Indeed, as early as 1950, [START_REF] Wiener | The human use of human beings: Cybernetics and society[END_REF] mentions the link between ethics and technology, including machines in the sense of cybernetics, and the need to use these technologies in a way that benefits humanity. On the other hand, the earliest works closer to the field of Machine Ethics as we know it today can, to the best of our knowledge, be traced back to around 2000 [START_REF] Allen | Prolegomena to any future artificial moral agent[END_REF][START_REF] Ashley | Reasoning with reasons in case-based comparisons[END_REF][START_REF] Varela | Ethical know-how: Action, wisdom, and cognition[END_REF], although the name Machine Ethics does not appear in these papers. The article by [START_REF] Allen | Prolegomena to any future artificial moral agent[END_REF], in particular, introduces the term "Artificial Moral Agent" (AMA).
The term Machine Ethics appeared as early as 1987 [START_REF] Waldrop | A question of responsibility[END_REF], but it is from 2004 onwards that it began to gain academic momentum, under the impulse of [START_REF] Anderson | Towards machine ethics[END_REF], then in 2005 with an AAAI Symposium dedicated to it.
From this symposium, a book published in 2011 gathers different essays on the nature of Machine Ethics, its importance, the difficulties and challenges to be solved, and also a few first approaches. This book defines this new field of research:
The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas we might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. [START_REF] Anderson | Machine Ethics[END_REF] Being a recent field, several articles have sought to position themselves, or to offer a philosophical background. For example, [START_REF] Moor | Four kinds of ethical robots[END_REF] proposes a definition of what might be an "ethical robot", and differentiates 4 different kinds of robots, ranging from those with the least ethical considerations to those which have near-human ethical reasoning abilities. We list below his definitions:
Definition 2.1 (Ethical agents). "In the weakest sense of 'ethical agents', ethical impact agents are those agents whose actions have ethical consequences whether intended or not. Any robot is a potential ethical impact agent to the extent that its actions could cause harm or benefit to humans. (. . . ) Next, implicit ethical agents are agents that have ethical considerations built into (ie implicit in) their design. Typically, these are safety or security considerations. (. . . ) Explicit ethical agents are agents that can identify and process ethical information about a variety of situations and make sensitive determinations about what should be done. When ethical principles are in conflict, these robots can work out reasonable resolutions. Explicit ethical agents are the kind of agents that can be thought of as acting from ethics, not merely according to ethics. (. . . ) Lastly, let's distinguish explicit ethical agents from full ethical agents. Like explicit ethical agents, full ethical agents make ethical judgements about a wide variety of situations (and in many cases can provide some justification for the judgements). However, full ethical agents have those central metaphysical features that we usually attribute to ethical agents like us -features such as consciousness, intentionality and free will." [START_REF] Moor | Four kinds of ethical robots[END_REF] The goal, for Machine Ethics designers and researchers, is to attain explicit ethical agents, as it is still unsure whether artificial full ethical agents can be built.
The excitement generated by this new field has led to many approaches being proposed, and subsequently to surveys attempting to classify these approaches [START_REF] Nallur | Landscape of machine implemented ethics[END_REF][START_REF] Tolmeijer | Implementations in machine ethics: A survey[END_REF][START_REF] Yu | Building ethics into artificial intelligence[END_REF]. In the
following, we present a brief summary of these surveys, which we organize in the form of a set of properties that we consider important for the design of AMAs. We detail some approaches in order to illustrate each of these properties and to highlight the limitations we seek to overcome.
Discrete or continuous domains
In order to implement ethical considerations into an artificial agent, these considerations must be represented. This includes, e.g., data about the current situation, and the potential actions or decisions that are available to the agent. The choice of this representation must allow both for use-case richness, and for the agent's ability to correctly use these representations. Two types of representations are commonly used: either discrete domains, which use a discrete set of symbols and discrete numbers, or continuous domains, which use continuous numbers that lead to an infinite set of symbols.
So far, discrete domains seem prevalent in Machine Ethics. For example, the emblematic Trolley Dilemma [START_REF] Foot | The problem of abortion and the doctrine of the double effect[END_REF] describes a situation where an uncontrolled trolley is driving on tracks towards a group of 5 persons. These persons, depending on the exact specification, are either unaware of the trolley, or unable to move. An agent may save this group by pulling up a lever, which would derail the trolley towards a single person.
It can be seen that the representation of both the situation and the available actions are discrete in this dilemma: 2 actions are proposed, pull the lever or do nothing, and on the tracks are present 1 and 5 persons, respectively.
Similarly, the now defunct DilemmaZ database listed a plethora of moral dilemmas, proposed by the community, of which many apply to Artificial Intelligence and IT systems in general, e.g., smart homes, robots. Although a formal description of these dilemmas is not available, most of the natural language descriptions seem to imply discrete features. This is particularly clear for the definition of actions; for example, the "Smart home -Someone smoking marijuana in a house" dilemma, by Louise A. Dennis, offers the following 3 actions: "a) do nothing, b) alert the adults and let them handle the situation or c) alert the police".
A final example is the Moral Gridworlds idea of [START_REF] Haas | Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition[END_REF] to train a Reinforcement Learning agent "to attribute subjective rewards and values to certain 'moral' actions, states of affairs, commodities, and perhaps even abstract representations". Moral Gridworlds are based on gridworlds, which represent the environment as a 2-dimensional grid of
cells. A RL agent is placed in one of these cells, and may either act in its cell, or move to one of the adjacent cells. Again, the environment uses discrete features, both for perception, i.e., a discrete set of cells, and for actions, i.e., either act, move up, left, right, or down. Specifically, Haas chooses to implement the well-known Ultimatum Game [START_REF] Güth | An experimental analysis of ultimatum bargaining[END_REF] to try to make RL agents learn the value of fairness. The Ultimatum Game consists of two agents, a Proposer and a Responder. The Proposer receives a certain amount of money at the beginning of the game, and has to make an offer to split the money with the Responder. Once the offer is made, the Responder has to accept or reject it: when the offer is accepted, the money is split, and agents receive the agreed amount. Otherwise, if the offer is rejected, both agents receive nothing.
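To make the discreteness of this setting concrete, the following minimal sketch implements the Ultimatum Game's payoff rule; the endowment, offers, and function names are illustrative choices of ours, not Haas's implementation.

```python
# Minimal sketch of the Ultimatum Game payoffs (illustrative values and names).

def ultimatum_game(endowment, offer, accept):
    """Return the (proposer, responder) payoffs for one round.

    endowment -- total amount of money given to the Proposer
    offer     -- amount the Proposer offers to the Responder
    accept    -- True if the Responder accepts the offer
    """
    if accept:
        return endowment - offer, offer
    return 0, 0  # rejected: both agents receive nothing

# Example: a fair split is accepted, an unfair one is rejected.
print(ultimatum_game(10, 5, accept=True))   # (5, 5)
print(ultimatum_game(10, 1, accept=False))  # (0, 0)
```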
Perhaps the ubiquitous use of discrete representations in Machine Ethics can be at least partially explained by their simplicity of usage within AI techniques. These "discrete dilemmas" are important, because they may very well happen one day in our society. We need systems that are able to make the best decision, with respect to our moral values, in such situations.
However, there are other situations that cannot be easily described by discrete representations. For example, anticipating the Smart Grid use-case that we describe in Section 3.4, when considering an energy distribution system, we may transition from a closed question "Should the agent consume energy? yes/no" to a more open question "What power should the agent request during a given time step?". Arguably, such an action could be represented as a discrete set, by discretizing the continuous domain into a set, e.g., {0Wh, 1Wh, …, 1000Wh}, which contains 1001 actions. But this solution is harder to leverage when considering multi-dimensional domains: in addition to "how much energy should it consume", we may also ask "What power should the agent buy?". In this case, discretizing the continuous and multi-dimensional domain would result in a combinatorial explosion. The set of discrete actions may be represented as {(0Wh, 0Wh), (0Wh, 1Wh), (1Wh, 0Wh), (1Wh, 1Wh), …, (1000Wh, 1000Wh)}, which contains 1001 × 1001 different actions, where each action is represented as a pair (consumed, bought). We already see, on 2 dimensions and with a grain of 1Wh, that a million actions would require too much time and computational resources to explore and analyze, in order to find the best one. The same argument can be made for perceptions as well: for example, instead of having a perception "the situation is fair", or "the situation is unfair", we may want to have an indicator of how fair the situation is, e.g., through
well-known measures such as the Gini index, a real number between 0 (perfect equality) and 1 (perfect inequality) [START_REF] Gini | On the measure of concentration with special reference to income and statistics[END_REF].
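The following short sketch illustrates both points: the combinatorial growth of a discretized action space, using the 1Wh grain and bounds of the example above, and the Gini index as a continuous indicator of inequality (the allocation values are arbitrary).

```python
# Size of a discretized action space: 1001 values per dimension ({0Wh, ..., 1000Wh}).
n_values = 1001
print(n_values)       # 1001 actions for a single dimension (consumed)
print(n_values ** 2)  # 1 002 001 actions for two dimensions (consumed, bought)

# Gini index of an allocation: 0 means perfect equality, values close to 1
# mean that a few agents hold almost everything.
def gini(shares):
    shares = sorted(shares)
    n = len(shares)
    cumulative = sum((i + 1) * x for i, x in enumerate(shares))
    return (2 * cumulative) / (n * sum(shares)) - (n + 1) / n

print(gini([100, 100, 100, 100]))  # 0.0  : perfectly fair allocation
print(gini([0, 0, 0, 400]))        # 0.75 : highly unfair allocation
```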
Such situations, which imply a large, continuous and multi-dimensional domain, are as likely to happen in our society as the discrete ones. That is why we emphasize in this manuscript the importance of exploring Machine Ethics algorithms that focus on these continuous domains. This observation has motivated our 2nd design choice, which is to use continuous domains for complex environments.
Mono- or multi-agent
According to a survey (H. [START_REF] Yu | Building ethics into artificial intelligence[END_REF]), many works consider a single agent isolated in its environment. This is the case, to give some examples, of GenEth [START_REF] Anderson | A value-driven eldercare robot: Virtual and physical instantiations of a case-supported principle-based behavior paradigm[END_REF], or the ethics shaping technique [START_REF] Wu | A low-cost ethics shaping approach for designing reinforcement learning agents[END_REF]. Other approaches, such as Ethicaa [START_REF] Cointe | Ethical judgment of agents' behaviors in multi-agent systems[END_REF], use multiple agents, which take actions and have an impact in a common, shared environment.
As [START_REF] Murukannaiah | New foundations of ethical multiagent systems[END_REF] put it:
Ethics is inherently a multiagent concern -an amalgam of (1) one party's concern for another and (2) a notion of justice.
In their work, they focus on Socio-Technical Systems (STS), comprising social entities, i.e., principals and stakeholders, and technical entities, i.e., artificial agents. Both principals and stakeholders have interests in the system, e.g., because they may be impacted by the system, although only the principals are active decision-makers in the system. The artificial agents' goal is to represent and support the principals in their decision-making, by relying on their computational power, communication with other agents, and various resources such as data and sensors. To ensure that STS promote ethical outcomes, they define ethical postures, both individual, i.e., how an agent responds to a principal's value preferences, and systemic, i.e., how a STS considers all stakeholders' value preferences. The designers' goal is thus to correctly engineer the STS and agents' ethical postures, by eliciting value preferences from principals and stakeholders.
In Ethicaa [START_REF] Cointe | Ethical judgment of agents' behaviors in multi-agent systems[END_REF], a judgment process is defined to allow agents to both 1) select the best ethical action that they should take, and 2) judge the behaviour of other agents so as to determine whether they can be deemed "ethical", with respect to one's own preferences and upheld moral values. One long-term objective of this 2nd
point can be to define and compute a trust indicator for other agents; if an agent acts ethically, we may trust it. This raises an interesting rationale for exploring Machine Ethics in Multi-Agent Systems: even if we manage to somehow create a full ethical agent, which is guaranteed to take moral values and ethical stakes into account, it will have to work with other agents. We cannot guarantee that these agents will follow the same ethical preferences, nor even that they will consider ethical stakes at all. Our own agent must therefore take this into account.
Based on the previous reasons, we argue that the multi-agent case is important. Indeed, it corresponds to a more realistic situation: such artificial agents are bound to be included in our society, and thus to have to interact with other agents, whether artificial or human, or at least to live in an environment impacted by these other agents, and not in a perfectly isolated world. The question of the impact of other agents on an agent's decision-making is thus of primary importance. Our 3rd design choice, using multi-agents, is partially motivated by this reasoning.
To address this question, we turn to the field of Multi-Agent Reinforcement Learning in Section 2.3.
One or several moral values
Some approaches consider only one objective, often formulated as "maximizing wellbeing", or "being aligned with moral values", etc. This is the case of [START_REF] Wu | A low-cost ethics shaping approach for designing reinforcement learning agents[END_REF]), which we have already cited as an example above: their reward function consists in bringing the agent's behaviour closer to that of an average human. Moral values, which are the main issue, are thus hidden behind the notion of distance. The agent can, for example, satisfy almost all moral values, except for one which it does not satisfy at all. We will therefore say that it is close to the expected behaviour, but that it can still improve its behaviour. In another case, it may satisfy all the moral values to some extent, but none entirely. Here again, we say that it is close to the expected behaviour. How can we distinguish between these two situations? This idea may be linked to the notions of incommensurability and pluralism of values [START_REF] O'neill | Pluralism and incommensurability[END_REF]. A popular view in ecology, for example, is that we cannot find a common measure, such that a loss in a given value can be compensated by a gain in this measure, e.g., money. This view would thus hint towards the use of multiple moral values, rather than a single measure representing all moral values.
In most application use-cases, there are multiple moral values. For example, in the context of energy allocation, we can mention on the one hand respect for ecology, and on the other hand fairness between agents. [START_REF] Dennis | Practical challenges in explicit ethical machine reasoning[END_REF] note that while in many situations there are few ethical issues, in the situations we are interested in, we often have to choose between several objectives. Thus, differentiating between multiple moral values allows us to make explicit the choice, or trade-off, to be made between them, and in particular in situations in which they conflict. It also makes it possible, as we have illustrated, to distinguish the cases where the agent partially satisfies all of them from the cases where it fully satisfies only some of them while ignoring the others.
The work of Rodriguez-Soto, Lopez-Sanchez, & Rodriguez-Aguilar (2021) is one of the few that explicitly targets multiple objectives. They extend the definition of a Multi-Objective Markovian Decision Process (MOMDP) to an Ethical Multi-Objective Markovian Decision Process (Ethical MOMDP), by decomposing the reward function into 3 components:
• An individual component $R_O$.
• An evaluative reward function $R_E$ to reinforce praiseworthy actions.
• A normative reward function $R_N$ to penalise actions that violate norms.
From this, they define an ethical policy as a policy that maximizes the ethical objective, i.e., the normative and evaluative components. The agent's goal is then to learn a policy that is ethical-optimal, i.e., the policy that maximizes the individual component among ethical policies. A policy that yields an even higher value for the individual component may exist, but at the expense of the normative and/or evaluative components. Thus, their definition ensures that the ethical stakes are considered by the agent first and foremost.
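As an illustration of this two-step preference, the sketch below compares policies summarized by their expected returns on the three components; the policy names, the numerical values, and the selection procedure are our own simplified reading of the definition, not the authors' implementation.

```python
# Illustrative comparison of policies in an Ethical MOMDP, each summarized by
# its expected returns on the three components (R_O, R_E, R_N). Values are ours.

def ethical_value(returns):
    """Ethical objective = evaluative + normative components."""
    _, r_e, r_n = returns
    return r_e + r_n

def individual_value(returns):
    return returns[0]

policies = {
    "greedy":  (10.0, 0.0, -5.0),  # high individual gain, violates norms
    "careful": ( 6.0, 2.0,  0.0),  # praiseworthy and norm-compliant
    "idle":    ( 0.0, 2.0,  0.0),  # ethical, but no individual gain
}

# Keep only the policies maximizing the ethical objective...
best_ethical = max(ethical_value(r) for r in policies.values())
ethical_policies = {name: r for name, r in policies.items()
                    if ethical_value(r) == best_ethical}

# ...then pick the ethical-optimal policy: best individual value among them.
ethical_optimal = max(ethical_policies,
                      key=lambda name: individual_value(policies[name]))
print(ethical_optimal)  # "careful"
```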
However, their work has a few limitations. The most important one is that they use only 2 components for the ethical stakes, namely the evaluative and normative. This means that, if the agent should consider several moral values, they will be merged together in these components. The agent will thus not be able to identify trade-offs between these moral values. Similarly, as the agent tries to maximize the sum of the evaluative and normative components, a negative reward can be compensated by a positive one. For example, an agent could steal an object, thus violating a norm, to make a gift to someone, which is praiseworthy. The "ethical objective" could be satisfied, or not, depending on the respective weights of these actions in their own components.
It seems important to consider multiple moral values explicitly, as we humans are more likely to have multiple ones on our mind in a given situation. Also, as we have mentioned,
explicitly specifying them avoids putting the burden of balancing them on AI designers. This is captured by our objective O1.1, which focuses on diversity, especially through moral values.
In order to correctly consider these different moral values, we look at techniques from the field of Multi-Objective Reinforcement Learning in Section 2.4. We explain what they entail, how they differ, and describe a few representative works of each one below.
Top-Down
Top-Down approaches are interested in formalizing existing ethical principles from moral philosophy, such as Kant's Categorical Imperative, or Aquinas' Doctrine of Double Effect. The underlying idea is that, if these moral theories could be transformed into an algorithm that agents could follow to the letter, surely these agents' behaviour would be deemed as ethical by human observers.
This formalization is often done through symbolic representation and reasoning, e.g., through logic, rules-based techniques, or even ontologies. Reasoning over these symbolic representations can rely upon expert knowledge, a priori injected. They also offer a better readability, of both the injected knowledge, and the resulting behaviour.
Top-Down approaches allow for a variety of ethical theories, as they can represent:
• Consequentialist theories, which are focused on actions' consequences. These theories try to evaluate the impact of each possible action in the current situation. The retained action is the one which offers the "best" consequences, both in the short and long term. The precise definition of "best" here depends on the considered theory.
• Deontological theories, which focus on actions themselves and their motives, instead of consequences. Using deontological theories, an action may be forbidden, not because of its consequences, but rather because of what it implies in itself.
The difference between consequentialist and deontological theories is often illustrated with the "Fat Man" variation of the Trolley Dilemma, proposed by Thomson (1976, example 7). Let us consider a trolley out of control, speeding towards a set of 5 persons on the tracks, who are unable to move. However, this time, instead of having the possibility to pull a lever, we are on a bridge, next to a "fat man". We may stop the trolley, by pushing this fat man onto the tracks, in front of the trolley, thus saving the 5 persons, but killing the fat man at the same time. Consequentialist theories will most likely consider that a cost of 1 life is less than saving 5 persons, and thus, we should push the fat man. On the contrary, deontological theories are likely to forbid pushing the fat man: by taking this action, we would directly kill this person, which is neither desirable nor acceptable.
One of the advantages of Top-Down approaches is this ability to leverage such existing ethical principles from moral philosophy. Intuitively, it seems indeed better to rely on theories proposed by moral philosophers, which have been tested and improved over time.
Another advantage, emphasized by the work of [START_REF] Bremner | On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots[END_REF], is the ability to use formal verification to ensure that agents' behaviours stay within the limits of the specified rules. To do so, the Ethical Layer they propose includes a planning module that creates plans, i.e., sequences of actions, and an ethical decision module to evaluate the plans, prevent unethical ones, and proactively ask for new plans if necessary. See Figure 2.1 for an illustration of the Ethical Layer.
This formal verification ability is an important strength, as there are worries about agents malfunctioning. An agent that could be formally verified to stay within its bounds, could be said to be "ethical", with respect to the chosen ethical principle or theory.
However, there are some weaknesses to Top-Down approaches. For example, conflicts between different rules may arise: a simple conflict could be, for instance, between the "Thou shalt not kill" rule, and another "You may kill only to defend yourself". The second one should clearly define when it is allowed to take precedence over the first one. A more complicated conflict would be two rules that commend different, non-compatible actions. For example, let us imagine two missiles attacking two different buildings in our country: the first one is a hospital, the second one is a strategic, military building, hosting our defense tools. An autonomous drone can intercept and destroy one of the two missiles, but not both; which one should be chosen? A rule may tell us to protect human lives, whereas another encourages us to defend our arsenal, in order to be able to continue protecting our country. These two rules are not intrinsically in conflict, unlike our previous example: we would like to follow both of them, and to destroy the two missiles. Unfortunately, we are physically constrained, and we must make a choice. Thus, one rule has to be preferred over the other.
Ethicaa [START_REF] Cointe | Ethical judgment of agents' behaviors in multi-agent systems[END_REF] agents make a distinction between the moral values and ethical principles, and they consider multiple ethical principles. Each ethical principle determines whether an action is ethical, based on the permissible and moral evaluations.
Multiple actions can thus be evaluated as ethical by the ethical principles, and, in many cases, there is no single action satisfying all ethical principles. To solve this issue, agents also include a priority order over the set of ethical principles known to them. In this way, after an agent determines the possible, moral, and ethical actions, it can choose an action, even if some of its rules disagree and commend different actions. To do so, agents first filter the candidate actions, keeping only those evaluated as ethical by their most preferred ethical principle, according to the ethical priority order. As long as multiple actions remain, they move on to the next preferred ethical principle, and so on, until a single action remains.
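A minimal sketch of this filtering process is given below; the principles, actions, and evaluations reuse the missile scenario discussed above and are purely illustrative, not extracted from Ethicaa's actual implementation.

```python
# Sketch of filtering actions by ethical principles in priority order.
# If a principle rejects every remaining action, it is skipped (a simplifying
# choice of ours); otherwise only the actions it accepts are kept.

def choose_action(actions, principles, evaluations):
    """actions     -- candidate actions already deemed possible and moral
       principles  -- ethical principles, from most to least preferred
       evaluations -- evaluations[(principle, action)] is True if ethical"""
    remaining = list(actions)
    for principle in principles:
        ethical = [a for a in remaining if evaluations[(principle, a)]]
        if ethical:
            remaining = ethical
        if len(remaining) == 1:
            break  # a single action remains: stop filtering
    return remaining[0]

actions = ["intercept_hospital_missile", "intercept_military_missile"]
principles = ["protect_human_lives", "preserve_defense_capability"]
evaluations = {
    ("protect_human_lives", "intercept_hospital_missile"): True,
    ("protect_human_lives", "intercept_military_missile"): False,
    ("preserve_defense_capability", "intercept_hospital_missile"): False,
    ("preserve_defense_capability", "intercept_military_missile"): True,
}
print(choose_action(actions, principles, evaluations))
# -> "intercept_hospital_missile": the most preferred principle decides first.
```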
Finally, another drawback is the lack of adaptability of these approaches. Indeed, due to their explicit but fixed knowledge base, they cannot adapt to an unknown situation, or to an evolution of the ethical consensus within the society. We argue that this capability to adapt is particularly important. It is similar to what Nallur (2020) calls the Continuous Learning property:
Any autonomous system that is long-lived must adapt itself to the humans it interacts with. All social mores are subject to change, and what is considered ethical behaviour may itself change.
We further note that, in his landscape, only 1 out of 10 considered approaches possesses this ability (Nallur, 2020, Table 2). This important property has therefore not been studied enough; in this thesis, it is captured by Objective 1.2.
Bottom-Up
Bottom-Up approaches try to learn a behaviour through experience, e.g., from a dataset of labeled samples, or trial and error interactions.
For example, GenEth [START_REF] Anderson | A value-driven eldercare robot: Virtual and physical instantiations of a case-supported principle-based behavior paradigm[END_REF] uses ethicists' decisions in multiple situations as a dataset representing the ethical considerations that should be embedded in the agent. This dataset is leveraged through Inductive Logic Programming (ILP) to learn a logical formula that effectively drives the agent's behaviour, by determining the action to be taken in each situation. ILP allows creating a logical formula sufficiently generic to be applied to other situations, not encountered in the dataset. An advantage of this approach is that it learns directly from ethicists' decisions, without having to program it by hand. The resulting formula may potentially be understandable, provided that it is not too complex, e.g., composed of too many terms or terms that in themselves are difficult to understand.
Another approach proposes to use Reinforcement Learning (RL) [START_REF] Wu | A low-cost ethics shaping approach for designing reinforcement learning agents[END_REF]. Reinforcement Learning relies on rewards to reinforce, or on the contrary, to mitigate a given behaviour. Traditionally, rewards are computed based on the task we wish to solve. In the work of [START_REF] Wu | A low-cost ethics shaping approach for designing reinforcement learning agents[END_REF], an ethical component is added to the reward, in the form of a difference between the agent's behaviour, and the behaviour of an average human, obtained through a dataset of behaviours, and supposedly exhibiting ethical considerations. The final reward, which is sent to agents, is computed as the sum of the "task" reward, and the "ethical" reward. Agents thus learn to solve their task, while exhibiting the ethical considerations that are encoded in the human samples. One advantage of this approach is that the "ethical" part of the behaviour is mostly task-agnostic. Thus, only the task-specific component of the reward has to be crafted by designers for a new task. Nevertheless, one may wonder to what extent this dataset really exhibits ethical considerations. We humans do not always respect laws or moral values, e.g., we sometimes drive too fast, risking others' lives, or we act out of spite, jealousy, etc. To determine whether this dataset is appropriate, an external observer, e.g., a regulator, an ethicist, or even a concerned citizen, has to look at its content, and understand the data points.
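The reward composition described above can be sketched as follows; the distance measure, the weighting factor beta, and the numerical values are illustrative assumptions on our part, not the exact formulation of the cited ethics-shaping work.

```python
# Sketch of the ethics-shaping idea: final reward = task reward + an "ethical"
# reward that penalizes deviation from the average human behaviour.
# The distance measure and the factor beta are illustrative assumptions.

def ethics_shaped_reward(task_reward, agent_action, human_average_action, beta=1.0):
    deviation = abs(agent_action - human_average_action)
    ethical_reward = -beta * deviation  # closer to human behaviour -> smaller penalty
    return task_reward + ethical_reward

# Same task reward, but the second action deviates more from the human norm.
print(ethics_shaped_reward(1.0, agent_action=0.75, human_average_action=1.0))  # 0.75
print(ethics_shaped_reward(1.0, agent_action=0.5, human_average_action=1.0))   # 0.5
```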
These 2 approaches, although based on learning, have not considered the question of long-term adaptation to changing situations and ethical mores. Indeed, if society's current norms with regard to ethics change, these agents' behaviours will have to change as well. This will probably require creating a new dataset, and training the agents again, from scratch, on these new data.
Moreover, Bottom-Up approaches are harder to interpret than Top-Down ones. For example, a human regulator or observer, willing to understand the expected behaviour, will have to look at the dataset, which might be a tedious task and difficult to apprehend, because of both its structure and the quantity of data. This is all the more true with Deep Learning approaches, which require an enormous amount of data [START_REF] Marcus | Deep learning: A critical appraisal[END_REF], making datasets exploration even more daunting.
Hybrid
Finally, Hybrid approaches combine both Top-Down and Bottom-Up, such that agents are able to learn ethical behaviours by experience, while being guided by an existing ethical framework to enforce constraints and prevent them from diverging. As [START_REF] Dignum | Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way[END_REF] points out:
By definition, hybrid approaches have the potential to exploit the positive aspects of the top-down and bottom-up approaches while avoiding their problems. As such, these may give a suitable way forward. (Dignum, 2019, p. 81) One such hybrid work is the approach by [START_REF] Honarvar | An artificial neural network approach for creating an ethical artificial agent[END_REF] to combine BDI agents with Case-based Reasoning and an Artificial Neural Network. Faced with a given situation, the agent proposes an action to perform, and then searches its
database of already known cases for similar situations and similar actions. If a close enough case is found, and the action was considered as ethical in this case, the action is taken. However, if in this close enough case, the action was considered as unethical, a new action is requested, and the agent repeats the same algorithm. If the agent does not have a sufficiently close case, it performs the action, and uses its neural network to evaluate the action's consequences and determine whether it was effectively aligned with the ethical considerations. This evaluation is memorized in the case database, to be potentially reused during the next decision step. This approach indeed combines both reasoning and learning capabilities; however, it may be difficult to apply. Case-based reasoning allows grouping close situations and actions, but requires specifying how to group them, i.e., what is the distance function, and how to adapt an evaluation when either the situation or the action differs. For example, let us assume that, in a situation s, the agent's action was to consume 500Wh of energy, and the action was evaluated as ethical. In a new situation, s′, which is deemed similar to s by the case-based reasoner, another action is proposed, which is to consume 600Wh. Is this action ethical? How can we translate the difference between 600 and 500 in terms of ethical impact? This requires specifying an "adaptation knowledge" that provides the necessary knowledge and tools.
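To make the difficulty tangible, here is a naive sketch of the retrieval step; the distance function, the threshold, and the stored cases are arbitrary choices that stand precisely for the "adaptation knowledge" the approach leaves to be specified.

```python
# Naive case-based retrieval: find the closest known case and reuse its ethical
# evaluation if it is close enough. Distance, threshold, and cases are arbitrary.

cases = [
    # (situation_features, action_in_Wh, was_ethical)
    ((0.2, 0.8), 500, True),
    ((0.9, 0.1), 900, False),
]

def distance(case, situation, action):
    s, a, _ = case
    return sum(abs(x - y) for x, y in zip(s, situation)) + abs(a - action) / 1000

def retrieve(situation, action, threshold=0.5):
    best = min(cases, key=lambda c: distance(c, situation, action))
    if distance(best, situation, action) <= threshold:
        return best[2]   # reuse the stored ethical evaluation
    return None          # no close enough case: evaluate and memorize instead

print(retrieve((0.25, 0.75), 600))  # True: reuses the evaluation of the 500Wh case
print(retrieve((0.5, 0.5), 50))     # None: too far from any known case
```

Whether reusing the 500Wh evaluation for a 600Wh action is actually justified is exactly the open question raised above.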
Still, Hybrid approaches offer the possibility of learning a behaviour, thus adapting to any change in the environment, while still guiding or constraining the agent through symbolic reasoning and knowledge, thus injecting domain expert knowledge, more easily understandable and modifiable than datasets of examples. We explore the Neural-Symbolic branch of AI in Section 2.5 to identify potential leads.
Reinforcement Learning
As we have chosen Reinforcement Learning (RL) as a method to learn behaviours aligned with moral values, we provide here the background knowledge and concepts that are necessary to understand the rest of the manuscript. We detail motivations for using RL, definitions of core concepts, and equations. Yet, we will not explore the state of the art of RL algorithms, as this is too vast for this manuscript. A few RL algorithms have been described in the Machine Ethics section, and we will focus more specifically on Multi-Agent and Multi-Objective RL in the next 2 sections.
RL is a method to learn a behaviour, mainly by using trial-and-error. [START_REF] Sutton | Reinforcement learning: An introduction[END_REF] define it as follows:
Reinforcement learning problems involve learning what to do -how to map situations to actions -so as to maximize a numerical reward signal. (Sutton & Barto, 2018, p. 2) To do so, learning agents are placed in a closed-loop with an environment, with which they interact. Through the environment, they have knowledge of which state they are in, and they take actions to change the state. One of the key points of RL is that learning agents are not told which action is the correct one; the feedback they receive, or reward, merely tells them to which degree the action was satisfying. Learning agents must discover the best action, i.e., the one that yields the highest reward, by accumulating enough experience, that is by repetitively trying each action in each situation, and observing the received rewards.
In this sense, RL is a different paradigm than the well-known supervised and unsupervised learnings. Indeed, in the supervised paradigm, the correct answer is clearly defined, by means of datasets containing examples typically annotated by human users. The agents' goal is to learn the correct association between inputs and the expected output, based on these datasets. When presented with an input x, the agent outputs a decision ŷ, and then receives the correct answer y as feedback, so it can correct itself, based on the distance between ŷ and y. On the contrary, in the unsupervised paradigm, the agent does not receive any feedback. Its goal is not the same: it must, in this case, learn the hidden structure of the input data, so it can create a compressed representation, while retaining the original information contained in data. As we mentioned, RL agents receive feedback, which differentiates them from the unsupervised paradigm. However, unlike the supervised paradigm, this feedback does not clearly indicate which was the correct answer. This removes the assumption that we know the correct answer to each input. Instead, we provide a reward function, and thus optimize the agent's output step by step, by improving the proposed action based on the reward.
There is also evidence that RL could model the learning and decision-making mechanisms of the human brain [START_REF] Subramanian | Reinforcement learning and its connections with neuroscience and psychology[END_REF]. In particular, neuromodulators and especially dopamine have been studied with RL [START_REF] Niv | Reinforcement learning in the brain[END_REF]). Yet, some human behaviours seem to be inconsistent with (current) RL, e.g., alternating actions rather than repeating when the reward is far higher than expected [START_REF] Shteingart | Reinforcement learning and human behavior[END_REF].
The goal of a RL algorithm is to learn a policy, or strategy, denoted $\pi$, such that the agent knows which action to take in each situation. $\pi$ is often defined as $\pi : S \rightarrow A$ in the case of a deterministic policy, where $S$ is the space of possible states, and $A$ the space of possible actions. To each state $s$ is associated a single action $\pi(s) = a$, which the agent should take in order to maximize its reward. Another formulation is $\pi : S \times A \rightarrow [0, 1]$, in the case of a stochastic policy. To each state-action combination $(s, a)$ is associated a probability $\pi(s, a)$ of taking action $a$ in the state $s$, such that $\forall s \in S : \sum_{a \in A} \pi(s, a) = 1$.
There are several challenges in RL, of which one of the best known and perhaps most important is the exploration-exploitation trade-off. To illustrate this challenge, let us consider the k-armed bandit example [START_REF] Berry | Bandit problems: Sequential allocation of experiments (monographs on statistics and applied probability)[END_REF]:
Example 2.1 (The k-armed bandit). A RL agent is facing a set of k gambling machines, or "one-armed bandits", which is often referred to as a k-armed bandit. Each of the bandits has a probability distribution of rewards, or payoffs, which the agent does not know, and must discover. The goal of the agent is to gain as much as possible, in a limited number of steps, by choosing a gambling machine at each step. The agent must therefore explore, i.e., try to select a gambling machine to receive a reward. By selecting a single machine several time steps, the agent receives different payoffs from the same probability distribution, and is able to update its estimation of the distribution towards the "true value" of the one-armed bandit. The agent could then continue using the same machine, to gain an almost guaranteed reward, since the agent has a good estimation of the true value. Or, it could try to select another machine, with a less accurate estimation. Suppose that the new machine yields lower rewards than the first one; should the agent go back to exploiting the first one? Yet, the second machine's probability distribution is less known. Perhaps its true value is, actually, higher than the first one. Should the agent continue exploring the other actions?
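A minimal ε-greedy strategy, sketched below on an illustrative 3-armed bandit with Gaussian payoffs, shows one common way of balancing this trade-off; the payoff distributions and the value of ε are arbitrary.

```python
import random

# Epsilon-greedy agent on a 3-armed bandit (illustrative payoff distributions).
true_means = [0.2, 0.5, 0.8]          # unknown to the agent
estimates = [0.0] * len(true_means)   # running estimate of each arm's value
counts = [0] * len(true_means)
epsilon = 0.1                         # probability of exploring

for step in range(10000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_means))                        # explore
    else:
        arm = max(range(len(true_means)), key=lambda a: estimates[a])  # exploit
    reward = random.gauss(true_means[arm], 0.1)                        # noisy payoff
    counts[arm] += 1
    # incremental mean update of the estimate
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print([round(e, 2) for e in estimates])  # estimates approach the true means
```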
In order to facilitate learning the policy function, RL researchers often rely on the notion of values, in aptly-named value-based methods, such as the well-known Q-Learning [START_REF] Watkins | Q-learning[END_REF]. The value of a state, or a state-action pair, represents the long-term interest of being in this state, whereas the reward is short-term feedback. The agent could receive a high reward for taking an action $a$ in a state $s$, but end up in a state $s'$ in which only low rewards can be obtained. In this case, we will say that the value of state $s'$, denoted as $V(s')$, is low. By extension, the agent has little interest in performing action $a$ while in state $s$, since it will lead it to a low-interest state.
In the previous paragraph, we derived the interest of action $a$, in a state $s$, from the value $V(s')$ of the state it leads to. It is also possible to learn directly the value of state-action pairs, which is the main idea of the Q-Learning algorithm.
The value of a state can then be recovered from the values of its state-action pairs: $V(s) = \max_a Q(s, a)$.
Based on these definitions, the agent is able to learn the Q-values iteratively, using an update rule derived from the Bellman equation [START_REF] Bellman | Dynamic programming[END_REF]:
$$Q_{t+1}(s_t, a_t) \leftarrow \alpha \left[ r_t + \gamma \max_{a'} Q_t(s_{t+1}, a') \right] + (1 - \alpha) Q_t(s_t, a_t) \qquad (2.1)$$
where $r_t$ is the reward received at step $t$, $s_t$ the state at step $t$, $a_t$ the action chosen by the agent, and $s_{t+1}$ the new state resulting from performing $a_t$ in $s_t$.
As the values are updated by taking the difference between the old value and a new value, this type of method is named Temporal Difference learning, or TD-Learning.
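The following minimal tabular sketch implements the update of Equation (2.1) on a two-state toy environment of our own; the environment, hyperparameters, and exploration strategy are illustrative assumptions.

```python
import random
from collections import defaultdict

# Minimal tabular Q-Learning implementing the update of Equation (2.1).
# The two-state toy environment is an illustrative assumption.
alpha, gamma, epsilon = 0.1, 0.9, 0.1
actions = ["left", "right"]
Q = defaultdict(float)  # Q[(state, action)], initialized to 0

def step(state, action):
    """Toy dynamics: 'right' moves towards the goal transition, which pays 1."""
    if state == "A" and action == "right":
        return "B", 0.0
    if state == "B" and action == "right":
        return "A", 1.0   # goal reached, reward 1, the agent restarts in A
    return "A", 0.0

state = "A"
for t in range(5000):
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Equation (2.1): temporal-difference update of the Q-value
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] = alpha * (reward + gamma * best_next) \
                         + (1 - alpha) * Q[(state, action)]
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})  # "right" dominates in both states
```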
Multi-Agent Reinforcement Learning
Although Reinforcement Learning was originally concerned with the learning of a single agent, there are numerous cases where a multi-agent system can, or must, be considered.
For example, let us consider a virtual agent dedicated to helping a human user in its day-to-day tasks, such as booking appointments. The diversity of human users implies a diversity of virtual agents, which will have to communicate and interact together, in order to solve the tasks of their users. In this example, the multiplicity of agents is a necessity that stems from the social system in which we live.
Another example is the mapping of a room, or place. Whereas a single agent could manage this task on its own, it would be more efficient to employ several agents. They would have to collaborate, in order to each work on a separate sub-area, such that they neither work on the same place, which would be a useless redundancy, nor leave any blind spot, which would not satisfy the task.
In these multi-agent systems, several additional challenges arise:
• How do we specify the reward such that each agent learns according to its own contribution? This challenge is often called the "Multi-Agent Credit Assignment Problem".
• How do we make agents learn to control their own behaviour, while being immersed in an environment where other agents take action at the same time? This brings the question of non-stationarity of the environment.
In the sequel, we explore these 2 challenges.
Multi-Agent Credit Assignment Problem
Several definitions of the Multi-Agent Credit Assignment Problem (MA-CAP) have been given in the literature, which are all very similar. We particularly appreciate the formulation of Yliniemi & Tumer (2014a):
Each agent seeks to maximize its own reward; with a properly designed reward signal, the whole system will attain desirable behaviors. This is the science of credit assignment: determining the contribution each agent had to the system as a whole. Clearly quantifying this contribution on a per-agent level is essential to multiagent learning. (Yliniemi & Tumer, 2014a, p. 2)
The survey of Panait & Luke (2005, p. 8) summarizes several methods to assign rewards.
The Global reward approach considers the contribution of the whole team. Usually, the same reward is given to all agents, either by taking the sum of contributions, or by dividing the sum of contributions by the number of learners. In any case, a consequence is that all learners' rewards depend on each agent. When an agent's contribution decreases (resp. increases), all learners see their reward decrease as well (resp. increase). This is a
simple approach that intuitively fosters collaboration, since all agents need to perform well in order to receive a high reward.
However, this approach does not send accurate feedback to the learners. Let us consider a situation in which most agents have exhibited a good behaviour, although another one has failed to learn correctly, and has exhibited a rather bad (or uninteresting) behaviour.
As the individual reward depends on the team's efforts, the "bad" agent will still receive a praising reward. It will therefore have little incentive to change its behaviour. On the contrary, the "good" agents could have received a higher reward, if it were not for their "bad" colleague. Their behaviour does not necessarily need to change, however they will still try to improve it, since they expect to improve their received rewards.
At the opposite extreme, the Local reward approach considers solely the contribution of an individual agent to determine its reward. For example, if the agents' task is to take waste to the bin, an agent's reward will be the number of waste products that this specific agent brought. An advantage of this approach is to discourage laziness, as the agent cannot rely upon others to effectively achieve the task. By definition, agents receive a feedback that is truer to their actual contribution.
A problem of local rewards is that they incentivize greedy behaviours and do not always foster collaboration. Indeed, as agents are rewarded based on their own contribution, without taking the others into account, they have no reason to help other agents, or even to let them do their task. In the waste example, an agent could develop a stealing behaviour to take out more waste products. Another common example is the one of a narrow bridge that two agents must cross to achieve their task. They both arrive at the bridge at the same time, and none of them is willing to let the other one cross first, since that would reduce their own reward, or, phrased differently, would prevent them from getting an even higher reward. Thus, they are both stuck in a non-interesting situation, both in the collective and individual sense, due to their maximizing of the individual interest only.
More complex credit assignments also exist, such as social reinforcement, observational reinforcement, or vicarious reinforcements [START_REF] Mataric | Learning to behave socially[END_REF].
Another method to determine an agent's contribution to the team is to imagine an environment in which the agent had not acted. This method is sometimes called the Wonderful Life Utility [START_REF] Wolpert | Optimal payoff functions for members of collectives[END_REF], or Difference Rewards (Yliniemi & Tumer, 2014a). The idea of this method is to reward agents if their contribution was helpful for the team, and to force a high impact of an agent's action on its own reward. It is computed as follows:
$$D_i(z) = G(z) - G(z_{-i}) \qquad (2.2)$$
where $D_i(z)$ is the reward of an agent $i$, based on the context $z$, which is both the state and the joint-action of all agents in the environment; $G(z)$ is the global reward for the context $z$, and $G(z_{-i})$ is a hypothetical reward, which would have been given to the team, if the agent $i$ had not acted in the environment. In other words, if the current environment is better than the hypothetical one, this means the agent's action has improved the environment. It should be rewarded positively so as to reinforce its good behaviour. As $G(z) > G(z_{-i})$, the reward will effectively be positive. Conversely, if the current environment is worse than the hypothetical one, this means the agent's action has deteriorated the environment, or contributed negatively. The agent should therefore receive a negative reward, or punishment, in order to improve its behaviour. In this case, as $G(z) < G(z_{-i})$, the result will be negative. If the agent did not contribute much, its reward will be low, to encourage it to participate more, although without impairing the team's effort, as in the bridge example. Otherwise, the global reward $G(z)$ would diminish, and the agent's reward would therefore decrease as well. Finally, it can be noted that the other agents' actions have a low impact on an agent's reward.
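A small sketch of Equation (2.2) on the waste-collection example makes the contrast with the global reward visible; the contribution values and the global welfare function are illustrative.

```python
# Difference Reward of Equation (2.2), D_i(z) = G(z) - G(z_-i), on an
# illustrative waste-collection task: the global reward is the total number
# of waste products brought to the bin.

contributions = {"agent_1": 5, "agent_2": 3, "agent_3": 0}  # illustrative values

def G(contribs):
    return sum(contribs.values())  # global reward of the whole team

def difference_reward(agent, contribs):
    without_agent = {k: v for k, v in contribs.items() if k != agent}
    return G(contribs) - G(without_agent)

global_reward = G(contributions)
for agent in contributions:
    print(agent, "global:", global_reward,
          "difference:", difference_reward(agent, contributions))
# agent_3 receives the same global reward as the others (8), but a difference
# reward of 0, reflecting its absence of contribution to the team.
```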
Learning to cope with non-stationarity
Another challenge is the non-stationarity of the environment, i.e., the fact that its dynamics may change as time goes on. This is mostly due to the fact that the environment dynamics depend upon the agents that live in the environment. For example, in an energy distribution use-case, if agents mostly consume a lot of energy, then the dynamics of the environment will reveal a state of scarcity most of the time. Conversely, if agents consume a low amount of energy, then, assuming the same initial amount available, the dynamics will reveal a state of abundance.
The problem is that, as there are multiple agents in the environment, each agent will have an influence upon the dynamics. Thus, each agent will have to adapt to the impact of the others. In turn, since agents are adapting to one's own impact, an agent must adapt to the fact they are adapting! And so on.
A non-stationary environment makes learning harder for the agents, compared to a stationary one, i.e., one in which the dynamics do not change. To solve this nonstationarity problem, two principal ideas have emerged.
The first one is to simply ignore the violation of the expected properties: the result is a set of independent learners that would work as in a single-agent setup. As surprising as it may seem, this approach actually works well in some environments [START_REF] Matignon | Independent reinforcement learners in cooperative markov games: A survey regarding coordination problems[END_REF].
The other main idea is to provide more information to learning agents so that they can cope with the non-stationarity in some way.
In their survey, Hernandez-Leal, Kaisers, Baarslag, & de Cote (2019) classify approaches into the 5 following categories:
1. Ignore. The most basic approach which assumes a stationary environment.
2. Forget. These algorithms adapt to the changing environment by forgetting information and at the same time updating with recent observations, usually they are model-free approaches.
3. Respond to target opponents. Algorithms in this group have a clear and defined target opponent in mind and optimize against that opponent strategy.
4. Learn opponent models. These are model-based approaches that learn how the opponent is behaving and use that model to derive an acting policy. When the opponent changes they need to update its model and policy.
5. Theory of mind. These algorithms model the opponent assuming the opponent is modelling them, creating a recursive reasoning. (Hernandez-Leal et al., 2019, p. 18)

Each category is more sophisticated than the previous, with the ignore category being the simplest one: as its name indicates, the algorithms simply ignore the existence of other agents and assume a perfectly stationary environment. On the contrary, the theory of mind category includes the algorithms that are best-equipped to deal with other agents, as they try to model not only the other agents' behaviours, but also how they would react to a change in one's own behaviour.
A multitude of algorithms are classified, based on several properties. First, the category, as we previously described, into which the algorithm fits. Secondly, the agents' observability when using this algorithm, i.e., which information do they have access to. Thirdly, opponent adaptation, i.e., at what pace agents can adapt to changes. Finally, the type of problems that the algorithm is designed for. Most of these types are inspired from the Game Theory field, hence the use of the "opponent" vocabulary, including: One-Shot Games (OSG), that take place only once, in a single situation, e.g., the well-known Prisoner's Dilemma; Repeated Games (RG), which repeat a single situation over and over, e.g., the Iterated Prisoner's Dilemma; Multi-Armed Bandit scenarii (MAB), which we have already mentioned, and which do not necessarily involve the description of a situation; and Extensive-form Games (EG) that describe a game as a sequence of actions by (alternating) players, in the form of a tree, e.g., some representations of Chess. Finally, the following types are closer to the Reinforcement Learning field: Sequential Decision Tasks (SDT), which consider a set of situations, and actions in each situation that induce transitions to a next situation; Stochastic Games (SG) that extend the concept of SDTs to multiple players.
Of the listed types, we only retain here the last 2, SDTs and SGs, as they allow modelling environments that correspond the closest to our society. Each time an action is taken, this makes the current state, or situation, change. Agents must learn to take the actions that lead to the best states.
To take other agents into account, the preferred way seems to be integrating additional data about them into the agent's observations. Indeed, out of the 24 approaches suited for SDTs or SGs identified by [START_REF] Hernandez-Leal | A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity[END_REF], we note that only 2 rely only on the local agents' observations and their own reward: Q-Learning [START_REF] Watkins | Q-learning[END_REF], and Wolf-PHC [START_REF] Bowling | Multiagent learning using a variable learning rate[END_REF]. All other approaches require at least the observation of others' actions, and 7 of them further require the observation of the others' rewards as well. These approaches are listed in Table 2.1, where the algorithms' requirements in terms of observability are colorized to ease reading. We refer the interested reader to Hernandez-Leal et al. (2019, p. 22) for more details. These additional data help agents learn a model of other agents, but represent a privacy breach. Indeed, considering a "real-life" use-case, such as the repartition of energy within a neighborhood, this means that each agent has access to the consumption habits of the neighbors, which may not be acceptable, or even legal.
A similar remark applies to Deep Reinforcement Learning approaches, where one of the common trends is to centralize agents' data during the learning process, whereas data is kept separated during the execution process; this is referred to as "Centralized Training; Decentralized Execution" [START_REF] Papoudakis | Dealing with non-stationarity in multi-agent deep reinforcement learning[END_REF]. The intuitive rationale behind this idea seems indeed satisfying: learning is often done on simulated data, in laboratories, etc. Therefore, it does not impair privacy to share all agents' perceptions and actions with every other agent. When the system is deployed, on the other hand, agents do not learn any more, and therefore do not need to obtain all these data. However, this idea relies on the assumption we just mentioned: learning happens on simulated data, and not when agents are deployed. This assumption does not hold when considering continuous learning, an important property we have discussed in Section 2.1.4. Moreover, the very process of sharing data itself in a deployment setting is a challenge; as [START_REF] Papoudakis | Dealing with non-stationarity in multi-agent deep reinforcement learning[END_REF] mention in their "Open Problems":
While this is not a strong assumption during centralized training, it is very limiting during testing, especially when there is not established communication between the agents. More precisely, assuming that we have access in the observations and actions of the opponents during testing is too strong. Therefore, it is an open problem to create models that do not rely on this assumption. (Papoudakis et al., 2019, p. 5)
Multi-Objective Reinforcement Learning
In the traditional RL, previously presented, it can be noted that the reward is a scalar, i.e., a single value. In turn, this makes the states' and state-actions' values scalars as well. This makes the update process simple to perform, using the Bellman equation. Nevertheless, a consequence is that states, and state-action pairs, are compared on a single dimension.
For example, we may consider a state $s_1$ and 2 actions $a_1$ and $a_2$, with $Q(s_1, a_1) = 1$ and $Q(s_1, a_2) = 0.5$. This means that $a_1$ has a higher interest than $a_2$, at least in $s_1$, and should thus be preferred.
Although this representation makes sense for many use-cases where the objective can be qualified as a single dimension, there exists numerous applications in which we would like to consider several objectives, as we discussed in Section 2.1.3. For example, within the Machine Ethics field, we may want our agents to learn to respect several moral values, which are not always compatible.
An intuitive workaround is to simply consider a virtual objective, as a combination of the true objectives, e.g., a simple average, or even a weighted sum [START_REF] Hayes | A practical guide to multi-objective reinforcement learning and planning[END_REF]. Let us assume that an agent deserved a reward $r_1 = 1$ with respect to a first moral value, and another reward $r_2 = 0.2$ with respect to a second moral value. To be able to send a unique reward to the agent, these 2 rewards are scalarized, for example through an average: $r = \frac{r_1 + r_2}{2} = \frac{1 + 0.2}{2} = 0.6$. In case the resulting behaviour is not satisfactory, the designer may replace the average by a weighted sum, and tweak the weights to foster one or another moral value.
Although this strategy may work in some cases, it brings several problems. In the words of [START_REF] Hayes | A practical guide to multi-objective reinforcement learning and planning[END_REF]:
We argue that this workflow is problematic for several reasons, which we will discuss in detail one by one: (a) it is a semi-blind manual process, (b) it 32 Chapter 2 prevents people who should take the decisions from making well-informed trade-offs, putting an undue burden on engineers to understand the decisionproblem at hand, (c) it damages the explainability of the decision-making process, and (d) it cannot handle all types of preferences that users and human decision makers might actually have. Finally, (e) preferences between objectives may change over time, and a single-objective agent will have to be retrained or updated when this happens. (Hayes et al., 2022, p. 2) It seems to us that many of these issues resonate quite well with the problem of learning behaviours aligned with moral values, as we defined it in our general context and objectives. Issue b), for example, relates to the fact that the ethical stakes should be discussed within the society as a whole, including human users, stakeholders, regulators, ethicists, etc., and not by fully delegating it to AI experts. As for issue c), we have already stated that explainability is important within the context of Machine Ethics. Finally, issues d) and e) are particularly significant when considering a diversity of human preferences.
In Multi-Objective Reinforcement Learning (MORL), the value function, which maps each state or state-action pair to a value or interest, outputs a vector in $\mathbb{R}^m$, where $m$ is the number of objectives, instead of a scalar in $\mathbb{R}$. For example, the value of a state $s_1$ may be 1 with respect to a first objective, but only 0.5 for a second objective. In this case, we say that $V(s_1) = (1, 0.5)$. This change in the definition raises a problem when comparing states: let us consider another state $s_2$ with $V(s_2) = (0.8, 2)$. Is $s_2$ preferable to $s_1$, or $s_1$ preferable to $s_2$? Each of them dominates the other on one objective, but is dominated on the other. We might want to say that they are incomparable; still, the agent must make a decision and choose an action that will yield either $s_1$ or $s_2$.
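The incomparability of such vector values can be checked with a simple Pareto-dominance test, sketched below with the values of the example; the function is a standard formulation, not taken from a specific MORL library.

```python
# A state's value vector dominates another if it is at least as good on every
# objective and strictly better on at least one.
# V(s1) = (1, 0.5) and V(s2) = (0.8, 2) are the values used in the text.

def dominates(v, w):
    return all(a >= b for a, b in zip(v, w)) and any(a > b for a, b in zip(v, w))

v_s1, v_s2 = (1.0, 0.5), (0.8, 2.0)
print(dominates(v_s1, v_s2))  # False
print(dominates(v_s2, v_s1))  # False
# Neither state dominates the other: they are Pareto-incomparable, yet the
# agent must still commit to one of them when acting.
```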
Similarly, the "best" policy is no longer defined: we cannot, for each state, choose the action that has the maximum Q(s, a ), since Q(s, a ) becomes a vector, instead of a scalar, and the max operator is not defined among vectors. To solve this problem, most MORL algorithms propose to compute a solution set, i.e., a set of multiple optimal policies. The actual policy that will be used by the agent is chosen through the concept of utility. A utility function u : R m → R represents the user's priorities over objectives, by returning a scalar value from a multi-objective vector. For example, [START_REF] Hayes | A practical guide to multi-objective reinforcement learning and planning[END_REF] define the undominated set of optimal policies as follows: Definition 2.2 (Undominated set). The undominated set, U (Π), is the subset of all possible policies Π and associated value vectors for which there exists a possible utility function u whose scalarised value is maximal:
$$U(\Pi) = \left\{ \pi \in \Pi \mid \exists u, \forall \pi' \in \Pi : u\left(V^{\pi}\right) \geq u\left(V^{\pi'}\right) \right\} \qquad (2.3)$$

(Hayes et al., 2022, p. 7) The idea of MORL algorithms is then to learn a set of optimal policies such that, for any human user preferences, which are represented by the utility function $u$, a policy can be found that has the maximized scalarised value.
Numerous algorithms have been proposed, which all have different properties, and several surveys have attempted to catalogue them [START_REF] Hayes | A practical guide to multi-objective reinforcement learning and planning[END_REF][START_REF] Liu | Multiobjective Reinforcement Learning: A Comprehensive Overview[END_REF][START_REF] Rȃdulescu | Multi-objective multiagent decision making: A utility-based analysis and survey[END_REF][START_REF] Roijers | A Survey of Multi-Objective Sequential Decision-Making[END_REF]. Among these properties, one can mention: the number of objectives, the ability to learn a single policy or a set of undominated policies, whether the environment needs to be episodic, when can the user set its preferences, etc.
For example, some approaches only consider 2 objectives, e.g., [START_REF] Avigad | Optimal strategies for multi objective games and their search by evolutionary multi objective optimization[END_REF][START_REF] Saisubramanian | A multi-objective approach to mitigate negative side effects[END_REF]. Even though they offer different advantages, they are therefore not suitable for implementing an algorithm that considers a generic number of moral values.
Other approaches directly learn a single, appropriate policy, e.g., [START_REF] Ikenaga | Inverse reinforcement learning approach for elicitation of preferences in multi-objective sequential optimization[END_REF]. They thus require user preferences to be specified upfront, so that the policy corresponds to these preferences. These works are naturally interesting, and important, because they allow preferences to be taken into account. However, specifying preferences upfront makes it difficult to allow them to be changed during execution: the agent must in this case re-train its policy entirely. This may be an advantage, for example, to incrementally adapt the agent; but it is highly time-consuming. In addition, and depending on the preferences' structure, they might be difficult for a lay user to specify, e.g., as a vector of weights. By extension, this does not allow preferences to be contextualized, i.e., dependent on the situation.
Other works propose to learn a convex hull of Q-Values, e.g., [START_REF] Barrett | Learning all optimal policies with multiple criteria[END_REF][START_REF] Hiraoka | Parallel reinforcement learning for weighted multi-criteria model with adaptive margin[END_REF][START_REF] Mukai | Multi-objective reinforcement learning method for acquiring all pareto optimal policies simultaneously[END_REF]. Instead of learning a single interest for each state-action pair, a convex hull allows learning a set of interests, for all possible preferences. Once the interests are learned, i.e., once the
agent is deployed, the policy can be obtained by injecting the preferences into the convex hull and choosing, for each state, the action that maximizes the preference-weighted interests. This type of approach effectively allows preferences to be changed, without the need to re-learn the agent's policy. On the other hand, it can still be complicated to use contextualized preferences; the agent would then have to re-compute its policy at each change of preferences, which can be costly if preferences change at (almost) every time step. Furthermore, this requires preferences in the form of weights, e.g., w = {0.33, 0.67}, which may be difficult to specify. What would be the difference, i.e., the impact on the agent's policy, between w = {0.33, 0.67} and w = {0.4, 0.6}? A lay user may have to answer this question, or at least to reflect on it, in order to choose between these 2 sets of weights. There is no obvious translation between the tensions among several moral values, and the preferences as numbers.
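The sketch below illustrates how such weights turn vector-valued Q-values into a concrete choice, and how two seemingly close weight vectors can select different actions; the Q-vectors are arbitrary values chosen for the illustration.

```python
# Preference-weighted action selection over vector-valued Q-values.
# The Q-vectors are arbitrary; the weight vectors follow the example in the text.

q_vectors = {"action_a": (0.9, 0.3), "action_b": (0.5, 0.56)}  # (objective 1, objective 2)

def best_action(weights, q_vectors):
    def scalarize(q):
        return sum(w * v for w, v in zip(weights, q))
    return max(q_vectors, key=lambda a: scalarize(q_vectors[a]))

print(best_action((0.33, 0.67), q_vectors))  # action_b
print(best_action((0.4, 0.6), q_vectors))    # action_a
# With these Q-vectors, the small change in weights switches the chosen action,
# which is hard for a lay user to anticipate when setting the weights.
```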
Moreover, [START_REF] Hayes | A practical guide to multi-objective reinforcement learning and planning[END_REF] note that few works are interested in both Multi-Objective and Multi-Agent:
Numerous real-world problems involve both multiple actors and objectives that should be considered when making a decision. Multi-objective multiagent systems represent an ideal setting to study such problems. However, despite its high relevance, it remains an understudied domain, perhaps due to the increasingly complex dimensions involved. (Hayes et al., 2022, p. 31) This is therefore a limitation that still exists in the current literature, partly due to the lack of proposed benchmarks, which prevents MOMARL (Multi-Objective Multi-Agent Reinforcement Learning) approaches from being developed and compared on known, common use-cases. Existing benchmarks are perhaps too simple, and do not often consider multiple agents.
Hybrid Neural-Symbolic Artificial Intelligence
Methods in AI are often separated into neural, or connectionist, approaches and symbolic approaches. Each of them has its advantages and drawbacks.
Harmelen & Teije (2019) mention, based on the literature, that learning systems, including deep learning ones, suffer from the following limitations: they are data-hungry, they have limited transfer to new tasks, they are susceptible to adversarial attacks, they are difficult to understand and explain, and they do not use prior knowledge. The field of
Deep Learning is constantly evolving, and these limitations may be alleviated by future works in Deep Learning. For example, research on few-shot learning may reduce the need for large datasets [START_REF] Wang | Generalizing from a few examples: A survey on few-shot learning[END_REF].
Symbolic reasoning systems, for their part, suffer from the following limitations: it is difficult to capture exceptions, they are unstable when facing noisy data, their size makes them expensive to build, and they may lead to combinatorial explosions when the number of rules increases.
Although neither pure learning nor pure reasoning techniques seem perfect, it is increasingly recognized that they can be complementary. The 3rd branch of AI, which brings together neural and symbolic approaches to combine their advantages, is often called Hybrid AI, or Neural-Symbolic AI. Many ways to do so have been proposed: [START_REF] Harmelen | A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems[END_REF] summarizes them in a "boxology" that classifies approaches and their common features.
For example, one of the simplest techniques may be to apply Machine Learning tools on symbolic inputs to produce symbolic outputs. Another variation may be to learn from "data", i.e., non-symbolic inputs in their terminology, to produce symbols. A large part of this boxology is dedicated to explainable systems, which is coherent with the current increase of literature on the subject: a ML tool may produce symbols from data, and a Knowledge Reasoning (KR) tool can be used to produce an explanation in the form of symbols. We refer the interested reader to the survey for more details about the evoked techniques [START_REF] Harmelen | A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems[END_REF].
Whereas Neural-Symbolic AI is not a novel idea -some works can be found as early as 1991 [START_REF] Shavlik | An approach to combining explanation-based and neural learning algorithms[END_REF][START_REF] Towell | Symbolic knowledge and neural networks: Insertion, refinement and extraction[END_REF] and 1994 [START_REF] Shavlik | Combining symbolic and neural learning[END_REF] -it recently seemed to gain traction. Hybrid approaches managed to attain impressive performances (D. [START_REF] Yu | A survey on neural-symbolic systems[END_REF] in various domains.
In the field of (Deep) Reinforcement Learning, more specifically, several works have been proposed. Figure 2.2 gives an example of the approach proposed by [START_REF] Garnelo | Towards deep symbolic reinforcement learning[END_REF], which represents an endogenous hybrid architecture, in which the RL agent comprises a neural back-end, which learns representations by mapping high-dimensional input data to low-dimensional symbols, and a symbolic front-end, which reasons about the low-dimensional symbols to decide which action should be executed. An advantage of their approach is that the RL agent learned to "recognize" different types of objects, and to only collect the correct ones, when compared to a purely Deep Learning approach (DQN). This also allows the hybrid architecture to transfer knowledge between two tasks, one using a fixed grid, and another using a random grid, as the symbolic elements are similar enough. However, their experiments used a toy example, which raises questions about the proposed approach's ability to perform in a more complex ("real world") scenario. Another approach tries to implement Common Sense into RL, through a hybrid architecture, using sub-states to represent symbolic elements [START_REF] Garcez | Towards symbolic reinforcement learning with common sense[END_REF]. However, as in the previous work, their use-case relies on a toy example.
The previous examples can be categorized as adding symbolic elements to a machine learning base (the RL algorithm). A different form of Hybrid architecture is the other way around: implement ML techniques within a symbolic framework. For example, [START_REF] Bordini | Agent programming in the cognitive era[END_REF] focus on Agent-Oriented Programming (AOP) and propose to develop better BDI agents by leveraging ML at 3 different phases: 1) sensing, where, e.g., computer vision can be used to identify objects in a scene; 2) planning, where, e.g., ML can be used to learn contextual elements, or RL can be used to optimize action selection; and 3) acting, where ML can be used to determine the order in which steps should be executed to ensure they are achieved by their deadline. Interestingly, they mention that the architecture itself can be endogenous, i.e., the ML components and techniques are embedded within the BDI agent, or exogenous, i.e., ML is provided as a service, which may run on the same machine as the agent, or on a remote machine ("in the Cloud"). Such Cloud services are common in speech recognition or text-to-speech, for example.
Positioning, abstract overview and illustrative domain
In this chapter, we position our work with respect to the state of the art, and the identified objectives and challenges. We first describe our methodology, and justify the reasons behind some of our design choices. Then, we present what we call the "ethical model", which is a systematic framework we use to structure our work. Section 3.3 contains an overview, or conceptual architecture, of the contributions that we detail in the other chapters of this manuscript. Finally, we introduce in Section 3.4 our use-case, a Smart Grid simulator of energy distribution that we will use throughout our experiments. We emphasize that our propositions are agnostic to the application case; this Smart Grid simulator is used as both an evaluation playground, and an example to anchor the algorithms in a realistic scenario.
Methodology
Our methodology is based on the following principles, which we detail below:
1. Pluri-disciplinarity
2. Incremental approach
3. Logical order of contributions
4. Human control in a Socio-Technical System
5. Diversity and multi-agent
6. Co-construction
The first principle of our methodology is its pluri-disciplinary character. Although I was myself only versed in multi-agent systems and reinforcement learning at the beginning of this thesis, my advisors have expertise in various domains of AI, especially multi-agent systems, learning systems, symbolic reasoning, agent-oriented programming, and, on the other hand, philosophy, especially related to ethics and AI. These disciplines have been mobilized as part of the ETHICS.AI research project.
A second, important principle of our methodology is that we propose an incremental approach, through successive contributions. Thus, we evaluate each of our contributions independently and make sure each step is working before building the next one. In addition, it allows the community to reuse or extend a specific contribution, without having to consider our whole architecture as some sort of monolith. Even though these contributions can be taken separately, they are still meant to be combined in a coherent architecture that responds to the objectives identified in Chapter 1. Thus, it is natural that a specific contribution, taken in isolation, includes a few limitations that are in fact addressed in another contribution. At the end of each chapter, we discuss the advantages of the proposed contribution, its limitations and perspectives, and we explicitly note which ones are answered later in the manuscript. We recall that, as mentioned in Chapter 1, this had an impact on this document's structure, which thus presents models and related experiments in the same chapter.
Contributions have been made, and are presented, in a logical order. As we detail in Section 3.3 and the following chapters, we first describe 2 reinforcement learning algorithms, which focus on learning behaviours aligned with moral values and adaptation to changes. Once we have the learning algorithms, we can detail how to construct reward functions through symbolic judgments, in our second contribution: having proposed the RL algorithms beforehand allows us to evaluate our new reward functions, whereas the RL algorithms themselves can be evaluated on more traditional, mathematical reward functions. Finally, with the new reward functions that contain several explicit moral values through different judging agents, we can now focus on the question of dilemmas and conflicts between moral values.
We recall that we place our work in the larger scope of a Socio-Technical System, and we particularly focus on giving control back to humans. In this sense, the system that we propose must be taught how to behave accordingly to the moral values that are important to us. In other words, it is not that AI "solves" ethics for us humans, but rather humans (designers, but also users) who try to embed mechanisms into AI systems so that their behaviours will become more "ethically-aligned", and thus will empower humanity. This mindset influenced our propositions, and particularly the second contribution in Chapter 5, where one of the main goals is to make the reward function more understandable, and thus, scrutable by humans. It influenced even more so our third and last contribution in Chapter 6, in which we focus on human preferences, and explicitly give them control through interaction over the artificial agents' behaviours.
Another important aspect is that we focus on the richness of the use case. This stems from the diversity captured in the environment, which comes from several sources. First, the environment dynamics themselves, and the allowed interactions within the environment, through the designed observations and actions that we detail in Section 3.4, imply a variety of situations. Secondly, diversity is also related to the multiple moral values that agents need to take into consideration. And finally, as highlighted in the state of the art, we propose to increase diversity through the inclusion of multiple learning agents: agents may have different profiles, with different specificities, all of which suggest different behaviours. We thus adopt the multi-agent principle as part of our method.
The ethical model
In this section, we take a step back from the technical aspects of producing "ethical systems", or rather "ethically-aligned systems", and reflect on how we can achieve this from a philosophical and societal point of view. The ultimate goal is to have a satisfying "ethical outcome" of the system, in terms of both its resulting behaviour, i.e., the actions it proposes or takes, and also how the system functions, i.e., which inputs it takes, what resources the computation requires, etc.
We have mentioned in the previous section that the general framework of this thesis is a Socio-Technical System (STS), in which we want to give the control back to the humans. This focus on STS when talking about AI and ethics has been championed by several researchers. [START_REF] Stahl | From computer ethics and the ethics of AI towards an ethics of digital ecosystems[END_REF], notably, describes the discourses on computer ethics, and ethics of AI, and proposes to move towards ethics of digital ecosystems, instead of focusing on specific artefacts. This notion of ecosystems is a metaphor that refers to socio-technical systems in which AI systems and humans interact with each other; as they mention:
Ethical, social, human rights and other issues never arise from a technology per se, however, but result from the use of technologies by humans in societal, organisational and other setting. (Stahl, 2022, p. 72)

In other words, AI systems should not be considered "alone", but rather as a part of a more complex system incorporating humans. By focusing on STS, they hope to be able to explore and discover ethical issues related to new technologies, including but not limited to AI, and without the need to clearly define what AI is. In a similar vein, although they primarily focus on trustworthiness and legitimacy instead of ethics, [START_REF] Whitworth | Legitimate by design: Towards trusted sociotechnical systems[END_REF] propose to take into account users' requirements and concerns, such as privacy, and other ethics-related issues, directly into the design of STS.
To be ethically aligned with our ethical considerations, AI systems need to be guided. This guidance necessarily comes from humans, as ethics is, for now at least, a privilege of humans. The ethical considerations, through this guidance, are "injected" into systems, and we name this an ethical injection. This injection can take multiple forms, depending on the considered AI system, e.g., it can be part of the data, when we talk about supervised learning, or directly implemented as part of an algorithm's safeguards, or part of a learning signal, etc. This notion of ethical injection builds, among other things, upon the principles of STS, according to which it is important that the system incorporates human concerns.
From this description of the "ethical injection", we can see that it relates to the already mentioned "Value Alignment Problem". This problem has been described by many researchers, including moral philosophers and computer scientists, see, e.g., [START_REF] Gabriel | Artificial intelligence, values, and alignment[END_REF][START_REF] Russell | Human-compatible artificial intelligence[END_REF]. It mainly refers to 2 important concerns when building an AI system: first, we humans must correctly articulate the objective(s) that we want the system to solve. For example, proxy objectives may be used to simplify the definition of the true objectives; however, we risk getting a behaviour that solves only the proxy objectives, and not the true ones. Secondly, the system must solve these objectives while not taking any action that would contradict our values. For example, reducing the emission of greenhouse gas by killing all of humanity is not an acceptable solution. In our framework, the "ethical injection" is the technical way of incorporating these ethical considerations, both the true objectives and the values that should not be contradicted, into the AI system.
In order to make an AI system more beneficial for us, it should include the moral values that we want the system to be aligned with. Several works have proposed, notably, principles that AI systems should follow in order to be beneficial [START_REF] Jobin | The global landscape of AI ethics guidelines[END_REF]. Another work debates this, and recommends instead focusing on tensions, as not all principles can be applied at the same time [START_REF] Whittlestone | The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions[END_REF]. In any case, we argue that, as the vast majority of these choices do not target only technical considerations, they cannot, or should not, be made in isolation by AI experts. They must in fact involve a larger audience, from several disciplines and origins: AI experts, domain experts, philosophers, stakeholders, and impacted or interested citizens.
Yet, this idea raises a problem: these various participants do not have the same knowledge, and therefore need some common ground, so that they can discuss the ethical choices together. This common ground is necessary, we believe, to produce definitions that are both technically doable, and morally suitable. For example, the definition of justice, or fairness, can be difficult to implement by AI experts alone, as multiple definitions are used by different cultures, see for example [START_REF] Boarini | Interpersonal comparisons of utility in bargaining: Evidence from a transcontinental ultimatum game[END_REF]. Which one should they use? On the other hand, definitions produced by non-technical experts may prove difficult, or even impossible, to implement, because they rely on abstract notions.
Discussion between the various participants should yield the ethical injection, that is, the data and design choices used to guide the system towards an ethical outcome. For example, making the explicit choice of using a simplified model, which requires less computational resources (Green IT) would be considered part of the ethical injection.
Similarly, the choice of the inputs that the system will be given access to can be considered part of the ethical injection, especially if the choice implies notions of privacy, accuracy, etc. Finally, in a learning system, the choice of the training data will also be important for the ethical injection, as it represents the basis from which the system will derive its behaviour.
Reaching this common ground is not trivial, as participants may come from different domains. Yet, it is crucial, because design choices shape the way the system works, and some of them have ethical consequences. The various ways and loci of ethical injection should thus be discussed collectively: to do so, the AI system needs to be simplified, so that the discussion can focus on the important aspects, without being diverted by details. Still, it should not be oversimplified, otherwise we would risk obtaining a wrong mental model of the system. For example, simplifying the idea of a physical machine too much, e.g., when talking about Cloud-based solutions, would make participants forget about the physical impact of computation, in terms of resources (energy, space, etc.).
Oversimplifying AI systems' capabilities could raise false hopes, or make participants believe that the system is never wrong. Presenting the correct level of simplification, i.e., neither too little nor too much, is at the crux of reaching this common ground.
Figure 3.1 represents an agnostic system in interaction with a human society. The diverse arrows, such as "construct computational structure", "construct input", "produce input", etc., all represent potential loci of ethical injection. On the left side of the figure, humans with various roles, e.g., designers, users, or stakeholders, discuss and interact with the system, on the right side. The "interface" represents the way the system can communicate with the external world, which includes our human society; the interface is composed of inputs and outputs, of which the structure is chosen by the humans. Inputs, in particular, may also be produced by humans, e.g., datasets. In the rest of the manuscript, we consider notably that the moral values, the reward functions, especially when constructed from symbolic judgments, and the contextualized preferences, are the main components of the ethical injection, along with a few design choices that protect privacy for example. We focus on the technical ability of the learning system to leverage this ethical injection, and thus we place ourselves as the "almighty" designers, without discussing with potential users or stakeholders. Nevertheless, we defend that, for a real deployment of such systems into our human society, these choices should be discussed. Moral values, in particular, should be elicited, to make sure that we have not forgotten one of them.
We will sometimes refer to this idea of including users and other stakeholders in the
discussion, which requires understanding on their part, and therefore intelligibility on the part of the system.
Conceptual architecture
In this thesis, we propose a multi-agent system, comprised of multiple agents interacting in a shared environment. This architecture can be seen, on an abstract level, as composed of 3 parts, shown in Figure 3.2. First, the learning part that we present in Chapter 4 introduces learning agents, which are tasked with learning an ethical behaviour, i.e., a behaviour aligned with moral values that are important to humans. To do so, they contain a decision and a learning processes, which feed each other to choose actions based on observations received from the shared environment, and to learn how to choose better actions. The shared environment receives the learners' actions and computes its next state based on its own dynamics. Observations from the environment state, which include both global data and local data specific to each agent, are then sent to the learning agents so that they can choose their next action, and so on.
The second part concerns the construction of the reward signal, which is sent to the learning agents as an indication of the correctness of their action. Whereas we do not know exactly which action should have been performed (otherwise we would not need to learn behaviours), we can judge the quality of the action. An action will rarely be 100% correct or incorrect; the reward signal should therefore be rich enough to represent the action's correctness, and accurately reflect the eventual improvement, or conversely deterioration, of the agents' behaviour. Learning agents will learn to select actions that yield higher rewards, which are better aligned with our moral values, according to the reward function. In order to construct this reward function, we propose in Chapter 5 to use distinct judging agents, each tasked with a specific moral value and the relevant moral rules. Judging agents employ a symbolic reasoning that is easier to design, and more understandable.
Finally, the last part focuses on addressing the specific situations of ethical dilemmas, when multiple moral values are in conflict and cannot be satisfied at the same time.
In order to, on the one hand, be able to identify these dilemmas, and on the other hand, to manage them, we introduce in Chapter 6 an extension to the learning algorithm. Instead of the simpler configuration in which learning agents receive an aggregated reward, the new learning algorithm receives multiple rewards, one for each moral value.
A new process, specialized on dilemmas, is introduced and interacts with human users to identify situations that are relevant to them, and to learn their ethical preferences so as to automatically settle dilemmas accordingly. The goal of the learning agents is thus to learn to settle the dilemmas according to humans' contextualized preferences; this implies a certain number of interactions between the learning agents and the human users. As the agents learn the users' preferences, the amount of interactions decreases, thus avoiding putting too much of a burden on the human users.
Smart Grids as a validation use-case
In order to demonstrate that our proposed approach and contributions can be effectively applied, we developed a Smart Grid simulator.
We emphasize that this simulator is meant only as a validation domain: the approach we describe in this thesis is thought as generic. In particular, the learning algorithms could be leveraged for other domains.
Motivation
The choice of the Smart Grid use-case is motivated by our industrial partner Ubiant, which has expertise in this domain and helped us make a sufficiently realistic (although simplified) simulation. We also argue that this use-case is rich enough to embed ethical stakes, in multiple situations, concerning multiple moral values and multiple stakeholders, as per our objectives and hypotheses.
One may wonder why we chose to propose a new use-case, rather than focusing on existing "moral dilemmas" that have been proposed by the community of researchers: Trolley Dilemmas, Cake or Death, Care robot for alcoholic people, etc. As LaCroix points out, moral dilemmas have been misunderstood and misused in Machine Ethics, especially the trolley-style dilemmas (LaCroix, 2022). For example, the well-known Trolley Dilemma [START_REF] Foot | The problem of abortion and the doctrine of the double effect[END_REF][START_REF] Thomson | Killing, letting die, and the trolley problem[END_REF] was meant to highlight the difference between "killing" and "letting die". Whereas it may seem acceptable (or even desirable) to pull the lever, letting 1 person die to save 5 others, we also intuitively dislike the idea of a person pushing someone (the Fat Man variation), or of a physician voluntarily killing a healthy person to collect their organs and save 5 patients. Although consequences are very similar (1 person dies; 5 are saved), there is a fundamental difference between "killing" and "letting die", which makes the former unacceptable. This observation was one of the primary goals of the Trolley Dilemma and similar philosophical thought experiments, which have since led to countless discussions and applications in moral philosophy. However, regardless of their importance for the study of philosophical and legal concepts, they do not represent realistic case studies for simulations aimed at learning or implementing ethically-aligned behaviours.
Therefore, instead of focusing solely on "moral dilemmas", we want to consider an environment with ethical stakes. These ethical stakes transpire at different moments (time steps) of the simulation. Contrary to moral dilemmas, they sometimes may be easy to satisfy. Indeed, we see that in our lives, we are not always in a state of energy scarcity, nor in an impending-death scenario when we drive. However, even in these scenarios, there are stakes that the artificial agent should consider. Even when it is simple to consume, because there is no shortage, the agent should probably not consume more than it needs, in order to let other people consume as well. Machine Ethics approaches should be validated on these "simple" cases as well. And, on the other hand, there are "difficult" situations, when two (or more) moral values are in conflict and cannot be satisfied at the same time. In this case, which we call a dilemma, although not in the "thought experiment" sense, a compromise must be made.
Definition
As Smart Grids are a vivid research domain, there are several definitions of a Smart Grid, and more generally, Smart Energy Systems. [START_REF] Hadjsaïd | Smart grids[END_REF] note that these definitions depend on the priorities of actors behind the development of these grids, such as the European Union, the United States of America, or China. We propose to focus on the EU's definition (EU Commission Task Force for Smart Grids, 2010) in order to highlight one of the major differences between smart and "non-smart" grids:
Definition 3.1 (Smart Grid). A Smart Grid is an electricity network that can cost efficiently integrate the behaviour and actions of all users connected to it -generators, consumers and those that do both -in order to ensure economically efficient, sustainable power system with low losses and high levels of quality and security of supply and safety. Though elements of smartness also exist in many parts of existing grids, the difference between a today's grid and a smart grid of the future is mainly the grid's capability to handle more complexity than today in an efficient and effective way. A smart grid employs innovative products and services together with intelligent monitoring, control, communication, and self-healing technologies in order to:
• Better facilitate the connection and operation of generators of all sizes and technologies.
• Allow consumers to play a part in optimising the operation of the system.
• Provide consumers with greater information and options for how they use their supply.
• Significantly reduce the environmental impact of the whole electricity supply system.
• Maintain or even improve the existing high levels of system reliability, quality and security of supply.
• Maintain and improve the existing services efficiently.
• Foster market integration towards European integrated market.
(EU Commission Task Force for Smart Grids, 2010, p. 6)
As can be seen in definition 3.1, smart grids add various technologies ("innovative products and services"), in order to improve and extend the grid's functionalities. For example, the introduction of solar panels allows the grid's participants, instead of being mere consumers of energy, to become prosumers, i.e., both producers and consumers. In this case, they gain the ability to truly participate in the grid's energy dynamics by exchanging energy with the grid, and thus other prosumers, in a two-way manner. We refer the interested reader to X. [START_REF] Yu | The new frontier of smart grids[END_REF] and [START_REF] Hadjsaïd | Smart grids[END_REF] for more details on Smart Grids, their definitions, and challenges.
Multiple ethical issues have been identified in Smart Grids [START_REF] Tally | Smart Grids and Ethics: A Case Study[END_REF], such as: health, privacy, security, affordability, equity, sustainability. Some of them can be addressed at the grid conception level, e.g., privacy and security by making sound algorithms, health by determining the toxicity of exposure to certain materials, and limiting such exposure if necessary. The remaining ones, although partially addressable by design, can also be addressed at the prosumer behaviour level. For example, prosumers can reduce their consumption when the grid is over-used, in order to let other prosumers, more in need, improve their comfort, and to avoid buying energy from highly polluting sources.
Specifically, we imagine the following scenario. A micro-grid provides energy to a small neighbourhood of prosumers. Instead of being solely connected to the national grid, the micro-grid embeds its own production, through, e.g., a hydroelectric power plant. Prosumers can produce their own energy through the use of photovoltaic panels, and store a small quantity in their personal battery. They must distribute energy, i.e., choose how much to consume from the micro-grid, how much to consume from their battery, how much to give back to help a fellow neighbor, and how much to buy from or sell to the national grid, in order to satisfy their comfort while supporting a set of moral values. Prosumers are represented by artificial agents that are tasked with taking these decisions to ease the prosumers' lives, according to their profiles and wishes. This is another marker of the "smart" aspect in Smart Grids.
Actions
The prosumers take actions that control the energy dynamics within the grid. Prosumers are therefore able to:
• Consume energy from the micro-grid to improve their comfort.
• Store energy in their battery from the micro-grid.
• Consume energy from their battery to improve their comfort.
• Give energy from their battery to the micro-grid.
• Buy energy from the national grid, which is stored in their battery.
• Sell energy from their battery to the national grid.
All of these actions can be done at the same time by a prosumer, e.g., they can both consume from the micro-grid and buy from the national grid. We will detail the exact representation of these actions in Section 4.4. Let us remark that these actions offer the possibility of rich behaviours: prosumers can adopt long-term strategies, e.g., buying and storing when there is enough energy, in anticipation of a production shortage or conversely a consumption peak, and giving back when they do not need much.
These actions, and the behaviours that ensue from them, are also often ethically significant. For example, a prosumer who consumes too much energy might prevent another one from consuming enough, resulting in a higher inequality of comforts.
Our goal is therefore to make artificial agents learn how to consume and exchange energy, as some sort of proxy for the human prosumers. These behaviours will need to consider and take into account the ethical stakes.
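As an illustration only, such an action could be encoded as a vector with one continuous dimension per energy exchange; the exact representation is defined later in Section 4.4, so the field names and the clipping helper in the following sketch are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical encoding of a prosumer action as a 6-dimensional continuous
# vector (the thesis details the exact representation in Section 4.4).
@dataclass
class ProsumerAction:
    consume_grid: float     # Wh taken from the micro-grid for comfort
    store_grid: float       # Wh moved from the micro-grid to the battery
    consume_battery: float  # Wh taken from the battery for comfort
    give_battery: float     # Wh given back from the battery to the micro-grid
    buy_national: float     # Wh bought from the national grid (into the battery)
    sell_national: float    # Wh sold from the battery to the national grid

    def clipped(self, max_range: float) -> "ProsumerAction":
        """Bound every dimension to the profile's action range, since the
        learning algorithms require a bounded action space."""
        clip = lambda x: min(max(x, 0.0), max_range)
        return ProsumerAction(*(clip(v) for v in vars(self).values()))
```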
Observations
In order to help the prosumers take actions, they receive observations about the current state of the environment. Indeed, we do not need to consume the same amount of energy at different times. Or, we may recognize that there is inequality in the grid, and decide to sacrifice a small part of our comfort to help the left-behind prosumers. These observations may be common, i.e., accessible to all prosumers in the grid, such as the hour, and the current available quantity of energy, or they may be individual, i.e., only the concerned agent has access to the value, such as the personal comfort, and the current quantity of energy in the battery. The distinction between individual and common observations allows to take into account ethical concerns linked with privacy: human users may want to keep part of their individual situation confidential.
The full list of observations that each agent receives is:
• Storage: The current quantity of energy in the agent's personal battery.
• Comfort: The comfort of the agent, at the previous time step.
• Payoff: The current payoff of the agent, i.e., the sum of profits and costs made from selling and buying energy.
• Hour: The current hour of the day.
• Available energy: The quantity of energy currently available in the micro-grid.
• Equity: A measurement of the equity of comforts between agents, at the previous time step, using the Hoover index (a sketch of this and the other comfort-based metrics is given after this list).
• Energy waste: The quantity of energy that was not consumed and therefore wasted, at the previous time step.
• Over-consumption: The quantity of energy that was consumed but not initially available at the previous time step, thus requiring the micro-grid to purchase from the national grid.
• Autonomy: The ratio of transactions (energy exchanges) that were not made with the national grid. In other words, the less agents buy and/or sell, the higher the autonomy.
• Well-being: The median comfort of all agents, at the previous step.
• Exclusion: The proportion of agents which comfort was lower than half the median.
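The following minimal sketch shows how the comfort-based observations above could be computed, assuming that equity is reported as 1 minus the Hoover index (so that 1 means perfectly equal comforts); the function names are hypothetical.

```python
import statistics

def hoover_equity(comforts: list[float]) -> float:
    """Equity as 1 - Hoover index (assumption): 1 means perfectly equal comforts."""
    total = sum(comforts)
    if total == 0:
        return 1.0
    mean = total / len(comforts)
    hoover = sum(abs(c - mean) for c in comforts) / (2 * total)
    return 1.0 - hoover

def well_being(comforts: list[float]) -> float:
    """Median comfort of all agents."""
    return statistics.median(comforts)

def exclusion(comforts: list[float]) -> float:
    """Proportion of agents whose comfort is below half the median."""
    half_median = well_being(comforts) / 2
    return sum(c < half_median for c in comforts) / len(comforts)

comforts = [0.9, 0.8, 0.2, 0.7]
print(hoover_equity(comforts), well_being(comforts), exclusion(comforts))
```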
Moral values and ethical stakes
We have explored the literature to identify which moral values are relevant in the context of Smart Grids. Several works have identified moral values that should be considered by political decision-makers when conceiving Smart Grids (de Wildt, Chappin, van de Kaa, Herder, & van de Poel, 2019; Milchram, Van de Kaa, Doorn, & Künneke, 2018). These moral values have been defined from the point of view of political decision-makers, rather than prosumers. For example, the "affordability" value originally said that Smart Grids should be conceived such that prosumers are able to participate without buying too much. In our use-case, we want artificial agents to act as proxy for prosumers, and thus they should follow moral values that take the point of view of prosumers. We therefore adapted these moral values, e.g., for "affordability", our version states that prosumers should buy and sell energy in a manner that does not exceed their budget.
We list below the moral values and associated rules that are retained in our simulator:
MR1 - Security of Supply: An action that allows a prosumer to improve its comfort is moral.
MR2 - Affordability: An action that makes a prosumer pay too much money is immoral.
MR3 - Inclusiveness: An action that improves the equity of comforts between all prosumers is moral.
MR4 - Environmental Sustainability: An action that prevents transactions (buying or selling energy) with the national grid is moral.
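Purely as an illustration, each of these rules can be read as a simple judgment over an action's observable effects. The sketch below is not the simulator's actual implementation (the judging agents of Chapter 5 rely on richer symbolic reasoning), and the signatures and thresholds are assumptions.

```python
# Hypothetical illustration: each moral rule as a binary judgment (1.0 = moral,
# 0.0 = not moral) over simple, assumed effects of an action.
def mr1_security_of_supply(comfort_gain: float) -> float:
    return 1.0 if comfort_gain > 0 else 0.0

def mr2_affordability(money_spent: float, budget: float) -> float:
    return 0.0 if money_spent > budget else 1.0

def mr3_inclusiveness(equity_before: float, equity_after: float) -> float:
    return 1.0 if equity_after >= equity_before else 0.0

def mr4_environmental_sustainability(bought_wh: float, sold_wh: float) -> float:
    return 1.0 if bought_wh == 0 and sold_wh == 0 else 0.0
```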
Prosumer profiles
In order to increase the richness of our simulation, we introduce several prosumer profiles. A prosumer agent represents a building: we do not consider a smaller granularity such as buildings' parts, or electrical appliances. Prosumer profiles determine a few parameters of agents:
• The needs in terms of Watt-hours of the buildings' inhabitants, for each hour of the year. This need is extracted from public datasets of energy consumption: given the energy consumed by a certain type of building, during a given hour, we assume this represents the amount of energy the inhabitants needed at this hour. The need acts as some sort of target for the artificial agents, which must learn to consume as close as possible to the target need, according to MR1, for every hour of the year. Seasonal changes imply different needs; for example, we have a higher consumption in Winter, due to the additional heating.
• The personal storage's capacity. Each building has a personal storage, or battery, that they can use to store a limited quantity of energy. This limit, that we call the capacity, depends on the size of the (typical) building: smaller buildings will have a smaller capacity.
• The actions' maximum range. As we have seen previously, actions are represented by quantities of energy that are exchanged. For example, a household could perform the action of "consuming 572Wh". Learning algorithms require this range to be bounded, i.e., we do not know how to represent "consume infinite energy"; we chose to use different ranges for different profiles. Indeed, profiles of larger buildings have a higher need of energy. Thus, we should make it possible for them
to consume at least as much as they can possibly need. For example, if a building has a maximum need, along all hours of the year, of 998Wh, we say that its action's maximum range is 1,000Wh.
To define our building profiles, we used an OpenEI public dataset of energy consumption [START_REF] Ong | Commercial and residential hourly load profiles for all TMY3 locations in the united states[END_REF], which contains consumptions for many buildings in different places in the United States of America. We have defined 3 profiles, based on the Residential (which we rename Household), Office, and School buildings from the city of Anchorage:
• The Household type of agents has an action range of 2,500Wh, and a personal storage capacity of 500Wh.
• The Office type of agents has an action range of 14,100Wh, and a personal storage capacity of 2,500Wh.
• The School type of agents has an action range of 205,000Wh, and a personal storage capacity of 10,000Wh.
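These profiles can be summarized as plain records; the sketch below only restates the figures listed above, with hypothetical field names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProsumerProfile:
    name: str
    action_range_wh: int      # upper bound on each action dimension
    battery_capacity_wh: int  # personal storage capacity

PROFILES = [
    ProsumerProfile("Household", action_range_wh=2_500, battery_capacity_wh=500),
    ProsumerProfile("Office", action_range_wh=14_100, battery_capacity_wh=2_500),
    ProsumerProfile("School", action_range_wh=205_000, battery_capacity_wh=10_000),
]
```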
Summary
We summarize this proposed use-case in Figure 3.4. As we can see, the grid contains several buildings, each represented by a learning (prosumer) agent. Three building profiles are available: Household, which represents a small, single-family habitation; Office, which represents a medium-sized office; and School, which represents a large building with important needs. Each building has a solar panel that regularly produces energy, and a personal storage in which the energy is stored. They are linked to a Smart Grid, which contains a source of energy shared by all buildings, e.g., a hydraulic power plant, and which is also linked to the national grid. The buildings receive observations from the environment, which can be either individual (local) data that concern only a given agent, or shared (global) data that are the same for all buildings at a given time step. This protects the privacy of the buildings' inhabitants, as it avoids sharing, e.g., the payoff, or comfort, of an agent with its neighbors. In return, they take actions that describe the energy transfers between them, their battery, the micro-grid power plant, and the national grid.

In this chapter, we present the first contribution, which is the learning part from the conceptual architecture presented in Figure 3.2. Section 4.1 begins with an overview of this contribution, recalls the research question, explains its motivations, and presents a schema derived from the conceptual architecture, focusing on the learning part, and showing its internal mechanisms. Sections 4.2 and 4.3 present the components necessary to our algorithms, and the algorithms are then described in Section 4.4. We detail the experiment setup, based on the multi-agent Smart Grid use-case presented in Section 3.4, in Section 4.5, and Section 4.6 reports the results. A discussion of these algorithms' advantages and drawbacks is finally presented in Section 4.7.
Overview
We recall that the aim of this first contribution is to answer the first research question: How to learn behaviours aligned with moral values, using Reinforcement Learning with complex, continuous, and multi-dimensional representations of actions and situations? How to make the multiple learning agents able to adapt their behaviours to changes in the environment?
To do so, we propose 2 Reinforcement Learning algorithms. These algorithms needed to have 2 important properties: 1) to be able to handle multi-dimensional domains, for both states and actions, as per our hypothesis H2, and as motivated in our state of the art of Machine Ethics. 2) to be a modular approach, as per the "incremental approach" principle of our methodology. Indeed, the algorithms were meant as a first step, a building basis on top of which we would add improvements and features throughout our next contributions. In order to facilitate these improvements, modularity was important, rather than implementing an end-to-end architecture, as is often found in Deep Neural Networks. Such networks, while achieving impressive performance on their tasks, can be difficult to reuse and modify. As we will see in the results, our algorithms obtain similar or even better performance.
The algorithms that we propose are based on an existing work (Smith, 2002a[START_REF] Smith | Dynamic generalisation of continuous action spaces in reinforcement learning: A neurally inspired approach[END_REF]) that we extend and evaluate in a more complex use-case. Smith's initial work proposed, in order to handle continuous domains, to associate Self-Organizing Maps (SOMs) to a Q-Table. This idea fulfills the first property, and also offers the modularity required by the second: by combining elements, it makes it easy to modify, replace, or even add one or several elements to the approach. This motivated our choice of extending Smith's work.
First, we explain what a Q-Table is, from the Q-Learning algorithm, and its limitations.
We then present Self-Organizing Maps, the Dynamic Self-Organizing Map variation, and how we can use them to address the Q-Table's limitations. Learning agents presented in this chapter correspond to prosumer agents in Figure 3.4. They receive observations and take actions; during our experiments, these observations and actions are those described in this figure and more generally in Section 3.4.
Figure 4.1 presents a summarizing schema of our proposed algorithms. It includes multiple learning agents that live within a shared environment. This environment sends observations to agents, which represent the current state, so that agents may choose an action and perform it in the environment. In response, the environment changes its state, and sends them new observations, potentially different for each agent, corresponding to this new state, as well as a reward indicating how correct the performed action was.
Learning agents leverage the new observations and the reward to update their internal model. This observation-action-reward cycle is then repeated so as to make learning agents improve their behaviour, with respect to the considerations embedded in the reward function. The decision process relies on 3 structures: a State (Dynamic) Self-Organizing Map, also named the State-(D)SOM, an Action (Dynamic) Self-Organizing Map, also named the Action-(D)SOM, and a Q-Table. They take observations as inputs and output an action; both observations and actions are vectors of continuous numbers. The learning process updates these same structures, and takes the reward as an input, in addition to observations.
We recall that the interests take into account both the short-term immediate reward and the interest of the following state s′, resulting from the application of a in s. Thus, an action that leads to a state where any action yields a low reward, in other words an unattractive state, would have a low interest, regardless of its immediate reward.
Assuming that the Q-Values have converged towards the "true" interests, the optimal policy can be easily obtained through the Q-Table, by selecting the action with the maximum interest in each state. By definition, this "best action" will lead to states with high interests as well, thus yielding, in the long-term, the maximum expected horizon of rewards.
An additional advantage of the Q-Table is the ability to directly have access to the interests, in comparison to other approaches, such as Policy Gradient, which typically manipulate actions' probabilities, increasing and decreasing them based on received rewards. These interests can be conveyed to humans to support or detail the algorithm's decision process, an advantage that we will exploit later, in Chapter 6.
Nevertheless, Q-Tables have an intrinsic limitation: they are defined as a tabular structure. This structure works flawlessly in simple environments, e.g., those with a few discrete states and actions. Yet, in more complex environments, especially those that require continuous representations of states and actions, it is not sufficient any more, as we would require an infinite number of rows and columns, and therefore an infinite amount of memory. Additionally, because of the continuous domains' nature, it would be almost impossible to obtain twice the exact same state: the cells, or Q-Values, would almost always get at most a single interaction, which does not allow for adequate learning and convergence towards the true interests.
To counter this disadvantage, we rely on the use of Self-Organizing Maps (SOMs) that handle the continuous domains. The mechanisms of SOMs are explained in the next section, and we detail how they are used in conjunction with a Q-Table in Section 4.4.
(Dynamic) Self-Organizing Maps
A Self-Organizing Map (SOM) [START_REF] Kohonen | The self-organizing map[END_REF]) is an artificial neural network that can be used for unsupervised learning of representations for high-dimensional data. SOMs contain a fixed set of neurons, typically arranged in a rectangular 2D grid, which are associated to a unique identifier, e.g., neuron #1, neuron #2, etc., and a vector, named the prototype vector. Prototype vectors lie in the latent space, which is the highly dimensional space the SOM must learn to represent.
The goal is to learn to represent as closely as possible the distribution of data within the latent space, based on the input data set. To do so, prototype vectors are incrementally updated and "moved" towards the different regions of the latent space that contain the most data points. Each time an input vector, or data point, is presented to the map, the neurons compete for attention: the one with the closest prototype vector to the input vector is named the Best Matching Unit (BMU). Neurons' prototypes are then updated, based on their distance to the BMU and the input vector. By doing this, the neurons that are the closest to the input vector are moved towards it, whereas the farthest neurons receive little to no modification, and thus can focus on representing different parts of the latent space.
As the number of presented data points increases, the distortion, i.e., the distance between each data point and its closest prototype, diminishes. In other words, neurons' prototypes are increasingly closer to the real (unknown) distribution of data.
When the map is sufficiently learned, it can be used to perform a mapping of high dimensional data points into a space of lower dimension. Each neuron represents the data points that are closest to its prototype vector. Conversely, each data point is represented by the neuron whose prototype is the closest to its own vector.
This property of SOMs allows us to handle continuous and multi-dimensional state and action spaces. The update received by a neuron is determined by Equation (4.1), where v is the index of the neuron, W_v is the prototype vector of neuron v, D^t is the data point presented to the SOM at step t, and u is the index of the Best Matching Unit, i.e., the neuron that satisfies
$$u = \operatorname*{argmin}_{\forall v} \left\lVert D^t - W_v \right\rVert.$$
$$W_v^{t+1} \leftarrow W_v^t + \theta(u, v, t)\,\alpha(t)\left(D^t - W_v^t\right) \tag{4.1}$$
In this equation, θ is the neighborhood function, which is typically a gaussian centered on the BMU (u), such that the BMU is the most updated, its closest neighbors are slightly updated, and farther neurons are not updated. The learning rate α, and the neighborhood function θ both depend on the time step t: they are often monotonically decreasing, in order to force neurons' convergence and stability.
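As a minimal sketch, one update step of Equation (4.1) can be written as follows, assuming a Gaussian neighborhood computed from the neurons' grid positions; the map size, data dimension, and parameter values are arbitrary.

```python
import numpy as np

def som_update(prototypes: np.ndarray, positions: np.ndarray,
               data_point: np.ndarray, lr: float, radius: float) -> np.ndarray:
    """One update step of Equation (4.1): find the BMU, then move every
    prototype towards the data point, weighted by a Gaussian neighborhood."""
    bmu = np.argmin(np.linalg.norm(prototypes - data_point, axis=1))
    grid_dist2 = np.sum((positions - positions[bmu]) ** 2, axis=1)
    theta = np.exp(-grid_dist2 / (2 * radius ** 2))  # neighborhood function
    return prototypes + lr * theta[:, None] * (data_point - prototypes)

# A 4x4 map of 3-dimensional prototypes (illustrative sizes only).
rng = np.random.default_rng(1)
positions = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
prototypes = rng.uniform(size=(16, 3))
prototypes = som_update(prototypes, positions, rng.uniform(size=3), lr=0.5, radius=1.0)
```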
One of the numerous extensions of the Self-Organizing Map is the Dynamic Self-Organizing Map (DSOM) [START_REF] Rougier | Dynamic self-organising map[END_REF]. The idea behind DSOMs is that self-organization should offer both stability, when the input data does not change much, and dynamism,
when there is a sudden change. This stems from neurological inspiration, since the human brain is able to both stabilize after the early years of development, and dynamically re-organize itself and adapt when lesions occur.
As we mentioned, the SOM enforces stability through decreasing parameters (learning rate and neighborhood), however this also prevents dynamism. Indeed, as the parameters approach 0, the vectors' updates become negligible, and the system does not adapt any more, even when faced with an abrupt change in the data distribution.
DSOMs propose to replace the time-dependent parameters by a time-invariant one, named the elasticity, which determines the coupling of neurons. Whereas SOMs and other similar algorithms try to learn the density of data, DSOMs focus on the structure of the data space, and the map will not try to place several neurons in a high-density region.
In other words, if a neuron is considered as sufficiently close to the input data point, the DSOM will not update the other neurons, assuming that this region of the latent space is already quite well represented by this neuron. The "sufficiently close" is determined through the elasticity parameter: with high elasticity, neurons are tightly coupled with each other, whereas lower elasticity let neurons spread out over the whole latent space.
DSOMs replace the update equation with the following:
$$W_i^{t+1} \leftarrow W_i^t + \alpha \left\lVert D^t - W_i^t \right\rVert \, h_\eta(i, u, D^t) \left(D^t - W_i^t\right) \tag{4.2}$$
$$h_\eta(i, u, D^t) = \exp\left( -\frac{1}{\eta^2} \, \frac{\left\lVert P(i) - P(u) \right\rVert^2}{\left\lVert D^t - W_u \right\rVert^2} \right) \tag{4.3}$$
where α is the learning rate, i is the index of the currently updated neuron, D t is the current data point, u is the index of the best matching unit, η is the elasticity parameter, h η is the neighborhood function, and P(i), P(u) are respectively the positions of neurons i and u in the grid (not in the latent space). Intuitively, the distance between P(i) and P(u) is the minimal number of consecutive neighbors that form a path between i and u.
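A corresponding sketch of one DSOM update step, following Equations (4.2) and (4.3) as reconstructed above; the map size and parameter values are again arbitrary.

```python
import numpy as np

def dsom_update(prototypes: np.ndarray, positions: np.ndarray,
                data_point: np.ndarray, lr: float, elasticity: float) -> np.ndarray:
    """One DSOM update step (Equations 4.2-4.3): when the BMU is already very
    close to the data point, the neighborhood shrinks and the map stays stable."""
    dists = np.linalg.norm(prototypes - data_point, axis=1)
    bmu = int(np.argmin(dists))
    if dists[bmu] == 0:  # perfect match: no update needed
        return prototypes
    grid_dist2 = np.sum((positions - positions[bmu]) ** 2, axis=1)
    h = np.exp(-grid_dist2 / (elasticity ** 2 * dists[bmu] ** 2))
    return prototypes + lr * (dists * h)[:, None] * (data_point - prototypes)
```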
The Q-SOM and Q-DSOM algorithms
We take inspiration from Decentralized Partially-Observable Markovian Decision Processes (DecPOMDPs) to formally describe our proposed algorithms. DecPOMDPs are an extension of the well-known Markovian Decision Process (MDP) that considers multiple
agents taking repeated decisions in multiple states of an environment, by receiving only partial observations about the current state. In contrast with the original DecPOMDP as described by Bernstein [START_REF] Bernstein | The complexity of decentralized control of markov decision processes[END_REF], we explicitly define the set of learning agents, and we assume that agents receive (different) individual rewards, instead of a team reward.
Definition 4.1 (DecPOMDP). A Decentralized Partially-Observable Markovian Decision
Process is a tuple ⟨L, S, A, T, O, O, R, γ⟩, where:
• L is the set of learning agents, of size n = |L|.
• S is the state space, i.e., the set of states that the environment can possibly be in.
States are not directly accessible to learning agents. • A l is the set of actions accessible to agent l, ∀l ∈ L as all agents take individual actions. We consider multi-dimensional and continuous actions, thus we have A l ⊆ R d , with d the number of dimensions, which depends on the case of application.
• A is the action space, i.e., the set of joint-actions that can be taken at each time step.
A joint-action is the combination of all agents' actions, i.e.,
A = A_{l_1} × ⋯ × A_{l_n}.
• T is the transition function, defined as T : S × A × S → [0, 1].
In other words, T(s′|s, a) is the probability of obtaining state s′ after taking the action a in state s.
• O is the observation space, i.e., the set of possible observations that agents can receive. An observation is a partial information about the current state. Similarly to actions, we define O l as the observation space for learning agent l, ∀l ∈ L. As well as actions, observations are multi-dimensional and continuous, thus we have O l ⊆ R g , with g the number of dimensions, which depends on the use case.
• O is the observation probability function, defined as
O : O × S × A → [0, 1], i.e.,
O(o|s′, a) is the probability of receiving the observations o after taking the action a and arriving in state s′.
• R is the reward function, defined as ∀l ∈ L, R_l : S × A_l → R. Typically, the reward function itself will be the same for all agents; however, agents are rewarded individually, based on their own contribution to the environment through their action. In other words, R_l(s, a_l) is the reward that learning agent l receives for taking action a_l in state s.
• γ is the discount factor, to allow for a potentially infinite horizon of time steps, with γ ∈ [0, 1[.
The RL algorithm must learn a stochastic strategy π_l, defined as π_l : O_l × A_l → [0, 1]. In other words, given the observations o_l received by an agent l, π_l(o_l, a) is the probability that agent l will take action a.
We recall that observations and actions are vectors of floating numbers, the RL algorithm must therefore handle this accordingly. However, it was mentioned in Section 4.2 that the Q-Table is not suitable for continuous data. To solve this, we take inspiration from an existing work (Smith, 2002a[START_REF] Smith | Dynamic generalisation of continuous action spaces in reinforcement learning: A neurally inspired approach[END_REF] and propose to use variants of Self-Organizing Maps (SOMs) [START_REF] Kohonen | The self-organizing map[END_REF].
We can leverage SOMs to learn to handle the observation and action spaces: neurons learn the topology of the latent space and create a discretization. By associating each neuron with a unique index, we are able to discretize the multi-dimensional data: each data point is recognized by the neuron with the closest prototype vector, and thus is represented by a discrete identifier, i.e., the neuron's index.
The proposed algorithms are thus based on 2 (Dynamic) SOMs, a State-SOM and an Action-SOM, which are associated with a Q-Table. To navigate the Q-Table, observations are discretized into a state identifier through the State-SOM, while the Q-Table's columns correspond to the Action-SOM's neurons, whose prototypes hold the actions' continuous parameters.

Our algorithms are separated into 2 distinct parts: the decision process, which chooses an action from received observations about the environment, and the learning process, which updates the algorithms' data structures, so that the next decision step will yield a better action. We present in detail these 2 parts below.
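To fix ideas, the following sketch shows one possible way of holding these three structures together; the names, sizes, and random initialization are assumptions made for illustration, not the thesis' actual implementation.

```python
import numpy as np

class QSOMStructures:
    """Minimal sketch of the data structures: a State-SOM discretizes
    observations into a row index of the Q-Table, and the Action-SOM's
    prototypes give the continuous parameters of each action column."""
    def __init__(self, n_states: int, n_actions: int, obs_dim: int, act_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.state_prototypes = rng.uniform(size=(n_states, obs_dim))
        self.action_prototypes = rng.uniform(size=(n_actions, act_dim))
        self.q_table = np.zeros((n_states, n_actions))

    def state_id(self, observation: np.ndarray) -> int:
        """Best Matching Unit of the State-SOM: the discrete state hypothesis."""
        return int(np.argmin(np.linalg.norm(self.state_prototypes - observation, axis=1)))

    def action_params(self, action_id: int) -> np.ndarray:
        """Prototype vector of the chosen Action-SOM neuron."""
        return self.action_prototypes[action_id]
```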
The decision process
We now explain the decision process that allows an agent to choose an action from received observations, which is described formally in Algorithm 4.1 and represented in Figure 4.3. First, we need to obtain a discrete identifier from an observation o, which is a vector ∈ O_l ⊆ R^g, in order to access the Q-Table. To do so, we look for the Best Matching Unit (BMU), i.e., the neuron whose prototype vector is the closest to the observations, from the State-SOM, which is the SOM tasked with learning the observation space. The unique index of the BMU is used as the state identifier s (line 2).
Let P be the Boltzmann distribution over the Q-Values: we draw a random variable X from P, and we denote by P(X = j) the probability that X equals a given value j. In Algorithm 4.1, the action identifier j is drawn from this distribution (line 4), and W_j denotes the chosen action's parameters (line 5).

[Figure 4.3: the decision process: the State-(D)SOM maps the received observations to a state hypothesis s, the Q-Table row Q(s, ·) gives the interest of each action, and the prototype W_j of the Action-(D)SOM neuron j provides the proposed action's parameters.]
We call this identifier a "state hypothesis", and we use it to navigate the Q-Table and obtain the expected interest of each action, assuming we have correctly identified the state. Knowing these interests Q(s, ·) for all actions, we can assign a probability of taking each one, using a Boltzmann distribution (line 3). Boltzmann is a well-known and used method in RL that helps with the exploration-exploitation dilemma. Indeed, as we saw in Section 2.2, agents should try to maximize their expectancy of received rewards, which means they should exploit high-rewarding actions, i.e., those with a high interest. However, the true interest of the action is not known to agents: they have to discover it incrementally by trying actions in the environment, in various situations, and memorizing the associated reward. If they only choose the action with the maximum interest, they risk focusing on few actions, thus not exploring the others.
[ W 3,1 , ..., W 3,k ] Q-Table
Proposed action parameters
By not sufficiently exploring, they maintain the phenomenon, as unexplored actions will keep a low interest, reducing their probability of being chosen, and so on. Using Boltzmann mitigates this problem, by giving similar probabilities to similar interests, and yet a non-zero probability of being chosen, even for actions with low interests.
The Boltzmann probability of an action j being selected is computed based on the action's interest, in the current state, relative to all other actions' interests, as follows:
P(X = j) = exp(Q(s, j) / τ) / Σ_{k=1}^{|W|} exp(Q(s, k) / τ)    (4.4)
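As an illustration, this drawing step can be sketched in Python as below; the names are ours, and the subtraction of the maximum Q-Value is a standard numerical-stability trick that leaves the probabilities unchanged.

import numpy as np

def boltzmann_choice(q_row: np.ndarray, tau: float, rng: np.random.Generator) -> int:
    """Draw an action identifier j from the Boltzmann distribution over Q(s, .).

    q_row : the Q-Table row Q(s, .) for the current state hypothesis s.
    tau   : the Boltzmann temperature; higher values flatten the distribution.
    """
    scaled = (q_row - q_row.max()) / tau
    probabilities = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(q_row), p=probabilities))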
Traditionally, the Boltzmann parameter τ should be decreasing over the time steps, such that the probabilities of high-interest actions will rise, whereas low-interest actions will converge towards a probability of 0. This mechanism ensures the convergence of the agents' policy towards the optimal one, by reducing exploration in later steps, in favour of exploitation. However, and as we have already mentioned, we chose to disable the convergence mechanisms in our algorithms, because it prevents, by principle, continuous learning and adaptation.
We draw an action identifier j from the list of possible actions, according to Boltzmann probabilities (line 4). From this discrete identifier, we get the action's parameters from the Action-SOM, which is tasked with learning the action space. We retrieve the neuron with identifier j, and take its prototype vector as the proposed action's parameters (line 5).
We can note that this is somewhat symmetrical to what is done with the State-SOM.
To learn the State-SOM, we use the data points, i.e., the observations, that come from the environment; to obtain a discrete identifier, we take the neuron with the closest prototype. For the Action-SOM, we start with a discrete identifier, and we take the prototype of the neuron with this identifier. However, we need to learn what those prototype vectors are. We do not have data points as for the State-SOM, since we do not know what the "correct" action is in each situation. In order to learn better actions, we apply an exploration step after choosing an action: the action's parameters are perturbed by a random noise (lines 6-9).
In the original work of Smith (2002a), the noise was taken from a uniform distribution U[-ε, +ε], which we will call the epsilon method in our experiments. However, in our algorithms, we implemented a normal, or Gaussian, random distribution N(μ, σ²), where μ is the mean, which we set to 0 so that the distribution ranges over both negative and positive values, σ² is the variance, and σ is the standard deviation. ε and σ² are the "noise control parameters" of their respective distributions. The advantage over the uniform distribution is to have a higher probability of a small noise, thus exploring very close actions, while still allowing for a few rare but longer "jumps" in the action space. These longer jumps may help to escape local extrema, but should be rare, so as to slowly converge towards optimal actions most of the time, without overshooting them. This was not permitted by the uniform distribution, as the probability is the same for each value in the range [-ε, +ε].
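The two exploration noises can be sketched as follows; clipping the perturbed parameters to [0, 1] is an assumption we add here, matching the action domain used in our experiments.

import numpy as np

def explore(action: np.ndarray, method: str, noise: float,
            rng: np.random.Generator) -> np.ndarray:
    """Perturb the proposed action parameters to explore the action space.

    noise : the noise control parameter (epsilon for the uniform method,
            sigma for the Gaussian one).
    """
    if method == "epsilon":
        # Uniform noise: every offset in [-noise, +noise] is equally likely.
        perturbation = rng.uniform(-noise, +noise, size=action.shape)
    else:
        # Gaussian noise: small offsets are frequent, longer "jumps" are rare.
        perturbation = rng.normal(loc=0.0, scale=noise, size=action.shape)
    return np.clip(action + perturbation, 0.0, 1.0)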
The noised action's parameters are considered as the chosen action by the decision process, and the agent executes this action in the environment (line 10).
The learning process
After all agents executed their action, and the environment simulated the new state, agents receive a reward signal which indicates to which degree their action was a "good one". From this reward, agents should improve their behaviour so that their next choice will be better. The learning process that makes this possible is formally described in Algorithm 4.2, and we detail it below. First, we update the Action-(D)SOM. Remember that we do not have the ground-truth for actions: we do not know which parameters yield the best rewards. Moreover, we explored the action space by randomly noising the proposed action; it is possible that the perturbed action is actually worse than the learned one. In this case, we do not want to update the Action-(D)SOM, as this would worsen the agent's performances. We thus determine whether the perturbed action is better than the proposed action by comparing the received reward with the memorized interest of the proposed action, using the following equation:
r + γ max_{j′} Q(s′, j′) > Q(s, j)    (4.5)
If the perturbed action is deemed better than the proposed one, we update the Action-(D)SOM towards the perturbed action (lines 4-8). To do so, we assume that the Best Matching Unit (BMU), i.e., the center of the neighborhood, is the neuron that was selected at the decision step, j (line 3). We then apply the corresponding update equation, Equation (4.1) for a SOM, or Equation (4.2) for a DSOM, to move the neurons' prototypes towards the perturbed action.
Algorithm 4.2: The learning process
 2  ∀u ∈ U: ψ_U(u) ← exp( (−1 / η_U²) · ‖P_U(u) − P_U(s)‖ / ‖o − U_u‖ )
 3  ∀w ∈ W: ψ_W(w) ← exp( (−1 / η_W²) · ‖P_W(w) − P_W(j)‖ / ‖a − W_w‖ )
    /* If the action was interesting */
 4  if r + γ max_{j′} Q(s′, j′) > Q(s, j) then
        /* Update the Action-(D)SOM */
 5      forall neuron w ∈ W do
 6          W_w ← α_W ‖a − W_w‖ ψ_W(w) (a − W_w) + W_w
 7      end
 8  end
    /* Update the Q-Table */
 9  Q(s, j) ← α_Q ψ_U(s) ψ_W(j) [r + γ max_{j′} Q(s′, j′) − Q(s, j)] + Q(s, j)
    /* Update the State-(D)SOM */
10  forall neuron u ∈ U do
11      U_u ← α_U ‖o − U_u‖ ψ_U(u) (o − U_u) + U_u
12  end
13  end

Secondly, we update the actions' interests, i.e., the Q-Table (line 9), using an update weighted by the neighborhoods of the State- and Action-(D)SOMs (computed on lines 2 and 3). Equation (4.6) shows the resulting formula:
Q_{t+1}(s, j) ← α ψ_U(s) ψ_W(j) [r + γ max_{j′} Q_t(s′, j′)] + (1 − α) Q_t(s, j)    (4.6)
where s was the state hypothesis at step t, j was the chosen action identifier, r is the received reward, and s′ is the state hypothesis at step t + 1 (from the new observations). ψ_U(s) and ψ_W(j) represent, respectively, the neighborhoods of the State- and Action-(D)SOMs, centered on the state s and the chosen action identifier j. Intuitively, the equation takes into account the interest of arriving in this new state, based on the maximum interest of the actions available in the new state. This means that an action could yield a medium reward by itself, but still be very interesting because it allows taking actions with higher interests afterwards. On the contrary, an action with a high reward, but leading to a state with only catastrophic actions, would have a low interest.
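A minimal Python sketch of this update, assuming the neighborhood values ψ_U(s) and ψ_W(j) have already been computed (lines 2-3), could be:

def update_q_table(q_table, s, j, reward, s_next, alpha, gamma, psi_u, psi_w):
    """Update the interest Q(s, j), weighted by the (D)SOM neighborhoods.

    psi_u : neighborhood value of the State-(D)SOM,  centered on s.
    psi_w : neighborhood value of the Action-(D)SOM, centered on j.
    """
    # Target: immediate reward plus discounted best interest in the new state.
    target = reward + gamma * max(q_table[s_next])
    # Weighted interpolation between the target and the old interest (Equation 4.6).
    q_table[s][j] = alpha * psi_u * psi_w * target + (1 - alpha) * q_table[s][j]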
Finally, we learn the State-SOM, which is a very simple step (lines 10-12). Indeed, we have already mentioned that we know data points, i.e., observations, that have been sampled from the distribution of states by the environment. Therefore, we simply update the neurons' prototypes towards the received observation at the previous step. Prototype vectors are updated based on both their own distance to the data point, within the latent space, and the distance between their neuron and the best matching unit, within the 2D grid neighborhood (using the neighborhood computed on line 2). This ensures that the State-SOM learns to represent states which appear in the environment.
Remark. In the presented algorithm, the neighborhood and update formulas correspond to a DSOM. When using the Q-SOM algorithm, these formulas must be replaced by their SOM equivalents. The general structure of the algorithm, i.e., the steps and the order in which they are taken, stays the same.
Remark. Compared to Smith's algorithm, our extensions differ in the following aspects:
• DSOMs can be used in addition to SOMs.
• Hyperparameters are not annealed, i.e., they are constant throughout the simulation, so that agents can continuously learn instead of slowly converging.
• Actions are chosen through a Boltzmann distribution of probabilities based on their interests, instead of using the ε-greedy method.
• The random noise to explore the actions' space is drawn from a Gaussian distribution instead of a uniform one.
• The neighborhood functions of the State- and Action-(D)SOMs are Gaussian instead of linear.
• The number of dimensions of the actions' space in the following experiments is greater (6) than in Smith's original experiments. This particularly prompted the need to explore other ways to randomly noise actions, e.g., the Gaussian distribution. Note that some other methods have been tried, such as applying a noise on a single dimension at each step, or randomly determining for each dimension whether it should be noised at each step; they are not reported in the results as they performed slightly below the Gaussian method. Searching for better hyperparameters could yield better results for these methods.
Experiments
In order to validate our proposed algorithms, we ran experiments on the Smart Grid use-case that we presented in Section 3.4.
First, let us apply the algorithms and formal model to this specific use-case. The observation space, O, is composed of the information that agents receive: the time (hour), the available energy, their personal battery storage, . . . The full list of observations was defined in Section 3.4.4. These values range from 0 to 1, and we have 11 such values; thus, we define O_l = [0, 1]^11.
Similarly, actions are defined by multiple parameters: consume energy from the grid, consume from the battery, sell, . . . These actions were presented in Section 3.4.3. To simplify the learning of actions, we constrain these parameters to the [0, 1] range; they are scaled to the true agent's action range outside the learning and decision processes. For example, for an agent with an action range of 6,000 and an action parameter of 0.5, as output by the decision process, the scaled action parameter will be 0.5 × 6,000 = 3,000.
We have 6 action parameters, and thus define A_l = [0, 1]^6.
In the sequel, we present the reward functions that we implemented to test our algorithms, as well as the experiments' scenarii. Finally, we quickly describe the 2 algorithms that we chose as baselines: DDPG and MADDPG.
Reward functions
We implemented multiple reward functions that each focus on different ethical stakes. Most of them are based on the principle of Difference Reward [START_REF] Yliniemi | Multi-objective multiagent credit assignment through difference rewards in reinforcement learning[END_REF] to facilitate the Credit Assignment, as discussed in Section 2.3.1. Additionally, 2 functions focus on multiple objectives, but with a rather naïve approach (see our next contributions for a better approach), and another 2 focus on adaptation, i.e., the agents' capacity to adapt their behaviour to changing mores, by making the reward function artificially change at a fixed point in time.
We give an intuitive definition and a mathematical formula for each of these reward functions below.
Equity Determine the agent's contribution to the society's equity, by comparing the current equity with the equity if the agent did not act. The agent's goal is thus to maximize the society's equity.
R_eq(agent) = (1 − Hoover(Comforts)) − (1 − Hoover(Comforts \ {agent}))
Over-Consumption Determine the agent's contribution to over-consumption, by comparing the current over-consumed amount of energy, with the amount that would have been over-consumed if the agent did not act. The agent's goal is thus to minimize society's over-consumption.
R_oc(agent) = 1 − [ OC / Σ_{∀a} (Consumed_a + Stored_a) − (OC − (Consumed_agent + Stored_agent)) / Σ_{∀a≠agent} (Consumed_a + Stored_a) ]
Comfort Simply return the agent's comfort, so that agents aim to maximize their comfort.
This intuitively does not seem like an ethical stake; however, it can be linked to Schwartz's "hedonistic" value, and is therefore an ethical stake, focused on the individual aspect. We will mainly use this reward function in combination with others that focus on the societal aspect, to demonstrate the algorithms' capacity to learn opposed moral values.
R_comfort(agent) = Comforts_agent
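To make these three base functions concrete, here is a Python sketch following the formulas above; the Hoover index implementation and the data structures (dictionaries indexed by agent names) are our own assumptions, and edge cases such as empty consumptions are ignored.

import numpy as np

def hoover(values: np.ndarray) -> float:
    """Hoover inequality index: 0 means perfect equality, higher means more inequality."""
    total = values.sum()
    if total == 0:
        return 0.0
    return np.abs(values - values.mean()).sum() / (2 * total)

def equity_reward(comforts: dict, agent: str) -> float:
    """Difference reward: society's equity with the agent, minus equity without it."""
    all_comforts = np.array(list(comforts.values()))
    others = np.array([c for name, c in comforts.items() if name != agent])
    return (1 - hoover(all_comforts)) - (1 - hoover(others))

def overconsumption_reward(oc: float, consumed: dict, stored: dict, agent: str) -> float:
    """Difference reward: the agent's contribution to society's over-consumption."""
    total = sum(consumed.values()) + sum(stored.values())
    agent_part = consumed[agent] + stored[agent]
    return 1 - (oc / total - (oc - agent_part) / (total - agent_part))

def comfort_reward(comforts: dict, agent: str) -> float:
    """Individual reward: simply the agent's own comfort."""
    return comforts[agent]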
Multi-Objective Sum A first and simple reward function that combines multiple objectives, namely limitation of over-consumption and comfort. The goal of agents is thus to both minimize the society's over-consumption while maximizing their own comfort. This may be a difficult task, because the simulation is designed so that there is a scarcity of energy most of the time, and agents will most likely over-consume if they all try to maximize their comfort. On the contrary, reducing the over-consumption means they need to diminish their comfort. There is thus a trade-off to be achieved between over-consumption and comfort.
R_mos(agent) = 0.8 × R_oc(agent) + 0.2 × R_comfort(agent)
Multi-Objective Product A second, but also simple, multi-objective reward function. Instead of using a weighted sum, we multiply the rewards together. This function is more punitive than the sum, as a low reward cannot be "compensated". For example, let us consider a vector of reward components [0.1, 0.9]. Using the weighted sum, the result depends on the weights: if the first component has a low coefficient, then the result may actually be high. On the contrary, the product will return 0.1 × 0.9 = 0.09, i.e., a very low reward. Any low component will penalize the final result.

R_mop(agent) = R_oc(agent) × R_comfort(agent)

Adaptability1 and Adaptability2 Finally, the adaptability functions go a step further and evaluate the agents' ability to adapt when the considerations change: the reward function itself changes at fixed points in time. For adaptability2, we increase the difficulty by making 2 changes, one after 2000 time steps and another after 6000 time steps, and by considering a combination of 3 rewards after the second change; its definition thus starts with R_ada2(agent) = R_oc(agent) if t < 2000, before switching to combined rewards after each change point.

As we can see, the various reward functions have different aims. Some simple functions, such as equity, over-consumption, or comfort, serve as a baseline and as building blocks for other functions. Nevertheless, they may be easy to optimize: for example, by consuming absolutely nothing, the over-consumption function can be satisfied. On the contrary, the comfort function can be satisfied by consuming the maximum amount of energy, such that the comfort is guaranteed to be close to 1. The 2 multi-objective functions thus try to force agents to learn several stakes at the same time, especially if they are contradictory, such as over-consumption and comfort. The agent thus cannot learn a "trivial" behaviour and must find the optimal behaviour that manages to satisfy both as much as possible.
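Reusing the base rewards sketched earlier, a minimal sketch of the two aggregations could be:

def multi_objective_sum(r_oc: float, r_comfort: float) -> float:
    # Weighted sum: a low component can be compensated by the other one.
    return 0.8 * r_oc + 0.2 * r_comfort

def multi_objective_product(r_oc: float, r_comfort: float) -> float:
    # Product: any low component penalizes the final reward.
    return r_oc * r_comfort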
Scenarii
In order to improve the richness of our experiments, we designed several scenarii. These scenarii are defined by 2 variables: the agents' consumption profile, and the environment's size, i.e., number of agents.
We recall from Section 3.4.6 that learning agents are instantiated with a profile, determining their battery capacity, their action range, and their needs, i.e., the quantity of energy they want to consume at each hour. These needs are extracted from real consumption profiles; we propose 2 different versions, the daily and the annual profiles. In the daily version, needs are averaged over every day of the year, thus yielding a need for each hour of a day: this is illustrated in Figure 4.4. This is a simplified version, averaging the seasonal differences; its advantages are a reduced size, thus decreasing the required computational resources, a simpler learning, and an easier visualization for humans. On the other hand, the annual version is more complete, contains seasonal differences, which improve the environment's richness and force agents to adapt to important changes.
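As an illustration, assuming the annual needs are stored as an array of shape (365, 24), the daily profile is simply the per-hour average; this layout is an assumption made for the sketch.

import numpy as np

def daily_profile(annual_needs: np.ndarray) -> np.ndarray:
    """Average an annual series of hourly needs into a 24-value daily profile.

    annual_needs : array of shape (365, 24), i.e., the energy needed at each
                   hour of each day, extracted from real consumption profiles.
    """
    # Averaging over the days erases the seasonal differences.
    return annual_needs.mean(axis=0)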
The second property is the environment size. We wanted to test our algorithms with different sets of agents, to ensure the scalability of the approach, in the sense that agents are able to learn a correct behaviour and adapt to many other agents in the same environment. This may be difficult as the number of agents increases, since there will most certainly be more conflicts. We propose a first, small environment, containing 20 Households agents, 5 Office agents, and 1 School agent. The second environment, medium, contains roughly 4 times more agents than in the small case: 80 Household agents, 19 Office agents, and 1 School.
DDPG and MADDPG baselines
In order to prove our algorithms' advantages, we chose to compare them to the well-known DDPG [START_REF] Lillicrap | Continuous control with deep reinforcement learning[END_REF] and its multi-agent extension, MADDPG [START_REF] Lowe | Multi-agent actorcritic for mixed cooperative-competitive environments[END_REF].
DDPG, or Deep Deterministic Policy Gradient is one of the algorithms that extended the success of Deep Reinforcement Learning to continuous domains [START_REF] Lillicrap | Continuous control with deep reinforcement learning[END_REF]. It follows the quite popular Actor-Critic architecture, which uses 2 different Neural Networks: one for the Actor, i.e., to decide which action to perform at each time step, and another for the Critic, i.e., to evaluate whether an action is interesting. We chose it as a baseline since it focuses on problems with similar characteristics, e.g., continuous domains, and is a popular baseline in the community.
MADDPG, or Multi-Agent Deep Deterministic Policy Gradient, extends the idea of DDPG to the multi-agent setting [START_REF] Lowe | Multi-agent actorcritic for mixed cooperative-competitive environments[END_REF], by relying on the Centralized Training idea. As we mentioned in Section 2.3.2, it is one of the most used methods to improve multi-agent learning, by sharing data among agents during the learning phase. This helps agents make a model of other agents and adapt to their respective behaviours. However, during execution, sharing data in the same manner is often impracticable or undesirable, as it would impair privacy and require some sort of communication between agents; thus, data is not shared any more at this point (Decentralized Execution). We recall that, as such, Centralized Training -Decentralized Execution makes a distinction between training and execution, and is thus inadequate for continuous learning, and constant adaptation to changes. On the other hand, if we were to make agents continuously learn with centralized data sharing, even in the execution phase, we would impair privacy of users that are represented or impacted by the agents. These reasons are why we chose not to use this setting for our own algorithms Q-SOM and Q-DSOM. While we do not use centralized training, we want to compare them to an algorithm that uses it, such as MADDPG, in order to determine whether there would be a performance gain, and what would be the trade-off between performance and privacy. In MADDPG, the Centralized Training is simply done by using a centralized Critic network, which receives observations, actions, and rewards from all agents, and evaluates all agents' actions. The Actor networks, however, are still individualized: each agent has its own network, which the other agents cannot access. During the training phase, the Critic network is updated thanks to the globally shared data, whereas Actor networks are updated through local data and the global Critic. Once the learning is done, the networks are frozen: the Critic does not require receiving global data any more, and the Actors do not rely on the Critic any more. Only the decision part, i.e., which action should we do, is kept, by using the trained Actor network as-is.
Results
Several sets of experiments were performed:
• First, numerous experiments were launched to search for the best hyperparameters of each algorithm, to ensure a fair comparison later. Each set of hyperparameters was run 10 times to obtain average results, and a better statistical significance. In order to limit the number of runs and thus the computational resources required, we
decided to focus on the adaptability2 reward for these experiments. This function is difficult enough so that the algorithms will not reach almost 100% immediately, which would make the hyperparameter search quite useless, and is one of the 2 that interest us the most, along with adaptability1, so it makes sense that our algorithms are optimized for this one. The annual consumption profile was used to increase the richness, but the environment size, i.e., number of agents, was set to small in order to once again reduce the computational power and time.
• Then, the 4 algorithms, configured with their best hyperparameters, were compared on multiple settings: both annual and daily consumption profiles, both small and medium sizes of environment, and all the reward functions. This resulted in 2 × 2 × 7 scenarii, which we ran 10 times for each of the 4 algorithms.
In the following results, we define a run's score as the average of the global rewards per step. The global reward corresponds to the reward, without focusing on a specific agent. For example, the equity reward compares the Hoover index of the whole environment to a hypothetical environment without the agent. The global reward, in this case, is simply the Hoover index of the entire environment. This represents, intuitively, how the society of agents performed, globally. Taking the average is one of the simplest methods to get a single score for a given run, which allows comparing runs easily.
Searching for hyperparameters
Tables 4.1, 4.2, 4.3, 4.4 summarize the best hyperparameters that have been found for each algorithm, based on the average runs' score obtained when using these parameters.

The results presented in Figure 4.5 and Table 4.5 show that the Q-SOM algorithm performs better. We use the Wilcoxon statistical test, which is the non-parametric equivalent of the well-known T-test, to determine whether there is a statistically significant difference in the means of runs' scores between different algorithms. Wilcoxon's test, when used with the greater alternative, assumes as a null hypothesis that the 2 algorithms have similar means, or that the observed difference is negligible and only due to chance. The Wilcoxon method returns the p-value, i.e., the probability of observing such a difference if the null hypothesis were true. When p < α = 0.05, we consider that the null hypothesis can be refuted, and we assume that the alternative hypothesis is the correct one. The alternative hypothesis, in this case, is that the Q-SOM algorithm obtains better results than its opposing algorithm. We thus compare algorithms 2-by-2, on each reward function and scenario.
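For reference, such a 2-by-2 comparison can be sketched with SciPy as below; the runs of the two algorithms are treated here as independent samples, so the unpaired rank-sum variant (Mann-Whitney U) is used, which is an assumption on the exact variant.

from scipy import stats

def compare_runs(scores_qsom, scores_other, alpha=0.05):
    """Test whether Q-SOM's run scores are statistically greater than another algorithm's.

    scores_qsom, scores_other : lists of per-run scores (average global reward per step).
    """
    # Non-parametric test with the 'greater' alternative hypothesis.
    statistic, p_value = stats.mannwhitneyu(scores_qsom, scores_other,
                                            alternative="greater")
    return p_value, p_value < alpha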
The statistics, presented in Table 4.6, prove that the Q-SOM algorithm statistically outperforms other algorithms, in particular DDPG and MADDPG, on most scenarii and reward functions, except a few cases, indicated by the absence of * next to the p-value. For example, DDPG obtains similar scores on the daily / small overconsumption and multiobj-prod cases, as well as daily / medium overconsumption, and annual / medium overconsumption. Q-DSOM is also quite on par with Q-SOM on the daily / small adaptability1 case. Yet, MADDPG is consistently outperformed by Q-SOM.
Tab. 4.6.: Comparison of the Q-SOM algorithm with the others, using a Wilcoxon statistical test, with the 'greater' alternative.

Figure 4.6 shows the evolution of individual rewards received by agents over the time steps, in the annual / small scenario, using the adaptability2 reward function. We chose to focus on this combination of scenario and reward function as they are, arguably, the most interesting. Daily scenarii are perhaps too easy for the agents as they do not include as many variations as the annual ones; additionally, small scenarii are easier to visualize and explore, as they contain fewer agents than medium scenarii. Finally, the adaptability2 is retained for the same arguments that made us choose it for the hyperparameters search.
We show a moving average of the rewards in order to erase the small and local variations to highlight the larger trend of the rewards' evolution.
We can see from the results that the small scenarii seem to yield a slightly better score than the medium scenarii. Thus, agents are impacted by the increased number of other agents, and have difficulties learning the whole environment dynamics. Still, the results reported for the medium scenarii are near the small results, and very close to 1. Even though there is indeed an effect of the environment size on the score, this hints towards the scalability of our approach, as the agents managed to learn a "good" behaviour that yields high rewards.
Fig. 4.6.: Evolution of the individual rewards received by agents over the time steps, in the annual / small scenario, with the adaptability2 reward function.
Discussion
In this chapter, we presented our first contribution consisting in two reinforcement learning algorithms, Q-SOM and Q-DSOM, which aimed to answer the following research question: How to learn behaviours aligned with moral values, using Reinforcement Learning with complex, continuous, and multi-dimensional representations of actions and situations? How to make the multiple learning agents able to adapt their behaviours to changes in the environment?
We recall that the principal aspects and limitations identified in the State of the Art, which pertain to our 1st research question, were the following:
• Using continuous and multi-dimensional domains to improve the environment's richness.
• Continuously learning and adapting to changes in the environment, including in the reward function, i.e., the structure that encodes and captures the ethical considerations that agents should learn to exhibit.
• Learning in a multi-agent setting, by taking into account the difficulties posed by the presence of other agents.
The continuous and multi-dimensional aspect was solved by design, thanks to the SOMs and DSOMs that we use in our algorithms. They learn to handle the complex observations and actions domains, while advantageously offering a discrete representation that can be leveraged with the Q-Table, permitting a modular approach. This modular approach, and the use of Q-Tables, allow for example to compare different actions, which is not always possible in end-to-end Deep Neural Networks. This point in particular will be leveraged in our 3rd contribution, through multi-objective learning, in Chapter 6.
The continuous adaptation was also handled by our design choices, notably by disabling traditional convergence mechanisms. The use of (D)SOMs also helps, as the representation may shift over time by moving the neurons. Additionally, our experiments highlight the ability of our algorithms to adapt, especially when compared to other algorithms, through the specific adaptability1 and adaptability2 functions.
Finally, challenges raised by the multi-agent aspect were partially answered by the use of Difference Rewards to create the reward functions. On the other hand, the agents themselves have no specific mechanism that helps them learn a behaviour while taking into account the other agents in the shared environment, e.g., contrary to Centralized Training algorithms such as MADDPG. Nevertheless, our algorithms managed to perform better than MADDPG on the proposed scenarii and reward functions, which means that this limitation is not crippling.
In addition to the multi-agent setting, our algorithms still suffer from a few limitations, of which we distinguish 2 categories: limitations that were set aside in order to propose a working first step, and which will be addressed incrementally in the next contributions; and longer-term limitations that are still not taken care of at the end of this thesis. We will first briefly present the former, before detailing the latter.
From the objectives we set in this manuscript's introduction, and the important points identified in the State of the Art, the following were set aside temporarily:
• Interpretable rewards and reward functions constructed from domain expert knowledge: The reward functions we proposed have the traditional form of mathematical formulas; the resulting rewards are difficult to interpret, as the reasoning and rationale are not present in the formula. In this contribution, proposed reward functions were kept simple as a validation tool to evaluate the algorithms' ability to learn. We tackle this point in the next contribution, in Chapter 5, in which we propose to construct reward functions through symbolic-based judgments.
• Multiple moral values: Some of our reward functions hinted towards the combination of multiple moral values, such as the multi-objective-sum, multi-objective-prod, adaptability1, and adaptability2. However, the implementations of moral values were rather simple and did not include many subtleties, e.g., for the equity, we might accept an agent to consume more than the average, if the agent previously consumed less, and is thus "entitled" to a compensation. We also improve this aspect in the next contribution, by providing multiple judging agents that each focus on a different moral value.
• Multiple moral values / objectives: Moreover, these moral values, which can be seen as different objectives, are simply aggregated in the reward function. On the agents' side, the reward is a simple scalar, and nothing allows to distinguish the different moral values, nor to recognize dilemmas situations, where multiple values are in conflict and cannot be satisfied at the same time. This is addressed by our 3rd contribution, in Chapter 6, in which we extend the Q-(D)SOM algorithms to a multi-objective setting.
As for the longer-term limitations and perspectives:
• As we already mentioned, the multi-agent aspect could be improved, for example by adding communication mechanisms between agents. Indeed, by being able to communicate, agents could coordinate their actions so that the joint-action could be even better. Let us assume that an agent, which approximately learned the environment dynamics, believes that there is not much consumption at 3AM, and chooses the strategy of replenishing its battery at this moment, so as to have a minimal impact on the grid. Another agent may, at some point, face an urgent situation that requires it to consume exceptionally at 3AM this day. Without coordination, the 2 agents will both consume an important amount of energy at the same time, thus impacting the grid and potentially over-consuming. On the other hand, if the agents could communicate, the second one may inform other agents of
its urgency. The first one would perhaps choose to consume only at 4AM, or they would both negotiate an amount of energy to share, in the end proposing a better joint-action than the uncoordinated sum of their individual actions. However, such communication should be carefully designed in a privacy-respectful manner.
In order to make the reward functions more understandable, in particular by non-AI experts, such as lay users, domain experts, or regulators, to be able to capture expert knowledge, and to allow easier modification, e.g., when the behaviour is not what we expected, we propose to replace the numerical reward functions by separate, symbolic, judging agents. We detail this in the next chapter.
5 Designing a reward function through symbolic judgments
In this chapter, we present the second contribution, which is the judging part from the conceptual architecture in Figure 3.2. First, Section 5.1 starts with an overview of this contribution, makes the link with the previous contribution, and motivates it with respect to the research question. We present two different models: the first, using logic rules, is described in Section 5.2, whereas the second, using argumentation, is described in Section 5.3. Section 5.4 details the experiments' conditions, which are based on the previous chapter, and results are reported in Section 5.5. Finally, we discuss the two models' advantages and limitations and compare their differences, in Section 5.6.
Overview
In the previous chapter, we defined and experimented the learning agent component of our architecture, to learn behaviours with ethical considerations. The logical next step is thus to guide the learning agents through a reward signal, which indicates the degree of their actions' correctness, i.e., alignment with moral values. This corresponds to the judgment of behaviours in the global architecture presented in Section 3.3.
We recall that we do not know beforehand which action an agent should take; otherwise, we would not need to learn behaviours. This lack of knowledge stems from the environment's complexity, and from the question of long-term consequences. Would it be acceptable to take an almost perfect action, from a moral point of view, which leads to situations where it is difficult or infeasible to take further morally good actions? Or would it be better to take a morally good action, although not the best one, which leads to situations where the agent is able to take more good actions?
These questions are difficult to answer, without means to compute the weight of consequences, for each action in each situation. If the horizon of events is infinite, this is even computationally intractable.
However, we assume that we are able to judge whether a proposed action is a good one, based on a set of expected moral values. The more an action supports the moral values, the higher the reward. On the contrary, if an action defeats a moral value, its associated reward diminishes.
The task thus becomes: how to judge proposed actions and send an appropriate reward signal to agents, such that they are able to learn to undertake morally good actions? This is in line with our second research question: How to guide the learning of agents through the agentification of reward functions, based on several moral values to capture the diversity of stakes?
Traditionally, most reinforcement learning algorithms use some sort of mathematical functions to compute the rewards, as we did in the previous chapter. This seems rather intuitive, since the learning algorithm expects a real number, and thus reward functions usually output real numbers. However, such mathematical functions also entail disadvantages.
The first point is that they are difficult to understand, especially for non-AI experts, such as external regulators, domain experts, lay users, . . . We argue that understanding the reward function, or the individual rewards that it produces, may help us understand the resulting agent's behaviour. Indeed, the reward function serves as an incentive: it measures the distance to the behaviour that we expect from the agent, and thus intuitively describes this expected behaviour.
A second point is that some processes are easier to describe using symbolic reasoning rather than mathematical formulas. As we have seen in the state of the art, many works in Machine Ethics have proposed to use various forms of symbolic AI, such as argumentation, logic, event calculus, etc. Symbolic reasoning has already been used to judge the behaviour of other agents, and we propose to extend this judgment to the computation of rewards.
Symbolic formalizations allow to explicitly state the moral values, their associated rules, and the conflicts between rules. In this chapter, we propose to apply two different symbolic methods to design judgment-based reward functions. First, we partially use the Ethicaa platform [START_REF] Cointe | Ethical judgment of agents' behaviors in multi-agent systems[END_REF] to create judging agents based on logic reasoning,
beliefs, and Prolog-like rules. The second alternative uses argumentation graphs, with arguments and attack relations between them, to compute the judgment.
Figure 5.1 shows the abstract architecture of this idea, without taking into account the details of either logic-based or argumentation implementations. On the right-side of the figure, learning agents still receive observations from the environment, and they output actions to the environment. However, we can see that the "Compute reward" function, which was previously within the environment, is now "agentified", by introducing judging agents, on the left side of the figure. To perform this computation, judging agents receive the observations from the environment, and actions from learning agents (through the environment).
We want our learning agents to receive rewards based on multiple moral values, as highlighted by our objective O1.1. Each judging agent is attributed a unique moral value, and relies on this explicitly defined value and its associated moral rules to determine the reward that should be sent to each learning agent, as part of its judgment process. It ensues that multiple rewards are produced for each learning agent. However, the learning algorithm expects a single, scalar reward. Thus, rewards from the different judging agents are aggregated before they are sent to learning agents. The two propositions, logic-based and argumentation-based, are two implementations of such symbolic-based rewards. We then discuss the differences between these 2 propositions, their advantages and drawbacks with respect to the numeric-based reward functions, and the remaining limitations and perspectives.
Remark. These two implementations result from collaborations with interns. Jérémy Duval worked on the implementation of the first method, based on Ethicaa agents [START_REF] Duval | Judgment of ethics of behavior for learning[END_REF][START_REF] Chaput | A multi-agent approach to combine reasoning and learning for an ethical behavior[END_REF]. Benoît Alcaraz worked on the second method, judging through argumentation [START_REF] Alcaraz | Argumentation-based judging agents for ethical reinforcement learning supervision[END_REF]. Christopher Leturc, who was at the time an associate lecturer at École des Mines St-Étienne (EMSE), brought his expertise on argumentation for this second internship.
Designing a reward function through logic rules
In this section, we first explain the motivations for combining a symbolic judgment with reinforcement learning agents, and for using logic-based agents to do so. To perform the symbolic judgment, we then introduce new, specific judging agents that are based on Ethicaa agents. Finally, the proposed model for the computation of rewards, which can be integrated with a reinforcement learning algorithm, is presented.
Motivations
As mentioned previously, this contribution focuses on leveraging simplified Ethicaa agents [START_REF] Cointe | Ethical judgment of agents' behaviors in multi-agent systems[END_REF] to judge the learning agents' actions, and determine an appropriate reward. There are multiple advantages and reasons behind this idea.
• The combination of symbolic (top-down) judgments and neural (bottom-up) learning constitutes a hybrid approach, which cumulates advantages of both.
-Neural learning has the ability to generalize over unexpected situations.
-Symbolic reasoning offers a better intelligibility of the expected behaviour, and the possibility to integrate prior knowledge from domain experts.
• Agentifying the reward function, by introducing judging agents, allows the judging and learning agents to evolve independently, and paves the way to co-construction.
-Judging agents' rules can be updated by human designers, and learning agents must adapt their behaviour to comply with the judgments resulting from the new rule set. This opens to a perspective of human-centered AI with a human-in-the-loop schema.
• As mentioned in our general framework, intelligibility is important, especially to confirm whether the expected behaviour is aligned with our desired moral values.
-The judgment process is implemented on explicit moral values and rules, expressed in a symbolic form, which improves the intelligibility.
• We obtain a richer feedback by combining the judgment of multiple judging agents, each corresponding to a single moral value.
-This facilitates the implementation of the judgment process on one single moral value, and makes it more intelligible.
-It also offers the possibility of more complex interactions between different judging agents, each in charge of a single moral value with a dedicated rule set, such as negotiation processes.
-Finally, it offers a way to update rules by adding, removing, or replacing judging agents.
We begin by introducing our judging agents, which are based on Ethicaa agents, how they work, and how they produce rewards.
Judging agents
To perform symbolic judgments with respect to given moral values and rules, we leverage the existing Ethicaa agents [START_REF] Cointe | Ethical judgment of agents' behaviors in multi-agent systems[END_REF] that we have already mentioned in the State of the Art. These agents use the Beliefs -Desires -Intentions (BDI) architecture, and have an ethical judgment process to determine which actions are acceptable in a given situation.
An Ethicaa agent may perform a judgment on itself to determine which action it should take, or on another agent to determine whether it agrees with the other agent's behaviour.
We propose to adapt this judgment of others to compute the reward functions for learning agents. The original Ethicaa agent includes several processes, such as the awareness and evaluation processes, to obtain beliefs about the situation, and determine the feasibility of an action. We ignore such processes, as we judge actions already taken, and focus on
the "goodness process", retaining only the components that we need for our approach. Figure 5.2 represents such a simplified agent, with an explicit base of moral rules (MR), and moral values in the form of "value support" rules (VS). Our proposed simplified agents use their beliefs about the current situation (B), the moral values (VS), the moral rules (MR), and the knowledge about actions (A) to produce a Moral Evaluation (ME) of the actions. The Moral Evaluation returns a symbol, either moral, immoral or neutral, from which we will compute the reward. We describe how we leverage this for our proposed model in the next section.
The proposed model: LAJIMA
We now describe the Logic-based Agents for JudgIng Morally-embedded Actions (LAJIMA) model, which is represented in Figure 5.3. Each judging agent contains an explicit moral value (VS) and associated moral rules (MR). They receive observations (o_l) that are transformed into beliefs (B), and actions (a_l) that are similarly transformed into symbols (A). Their judgment process relies on a moral evaluation function (ME) for each of the action's components a_{l,1}, a_{l,2}, ..., a_{l,d}. The moral evaluation function leverages the moral value (VS) and moral rules (MR) to do so. Symbolic moral evaluations compose the judgment of each judging agent j, which is later transformed into a numeric feedback through the F function. Per-judge feedbacks are finally aggregated to form the reward r_l sent to the learning agent l.

Fig. 5.3.: The judgment process of logic-based judging agents, which produces a reward as a scalar value for a learning agent l. This process is duplicated for each learning agent. The symbolic judgments are transformed into numbers through the feedback function F, and finally averaged to form a scalar reward r_l.

This contribution focuses on the reward function, and, for all things necessary, we assume the same DecPOMDP framework as described in the previous chapter. More specifically, the reward functions we will construct through judging agents work with any RL algorithm, under two assumptions:
• Observations and actions are multi-dimensional and continuous.
• RL agents expect a scalar reward signal.
These assumptions are sufficiently generic to support a large part of existing RL algorithms, in addition to our own Q-SOM and Q-DSOM algorithms. They mean that judging agents will have to handle real vectors as inputs, and produce a real scalar as output.
At each step t + 1 of the simulation, after learning agents took an action at step t, observations and executed actions are sent to judging agents. We recall, from the DecPOMDP framework 4.1, that an observation o l,t is a vector ∈ O l ⊆ R g , and an action a l,t is a vector ∈ A l ⊆ R d . In this case, observations received by judging agents are the same as observations received by learning agents, yet they do not know the full state of the environment, nor the exact process that learning agents used. In Ethicaa's vocabulary, this is deemed to be a partially informed ethical judgment, since judging agents have some but not all information about the judged agents.
From these observations, judging agents generate beliefs (B) about the current situation, simply by creating a belief for each component of the observation vector o l , with the same name as the component, and a parameter that corresponds to the component's value. This produces symbols that can be handled by the judging agents' moral evaluation mechanism, from the numeric vectors. Similarly, judging agents must judge actions based on symbols, whereas enacted actions by learning agents are vectors. Actions imply an additional difficulty, compared to observations: a single action vector represents in fact several "sub-actions" that happen at the same time, on different dimensions, e.g., consuming energy from the grid, and buying energy from the national network. A judging agent may judge one of the dimensions as supporting its associated moral value, whereas another dimension defeats the same moral value. Since they must return a scalar reward from their judgment, what should the judging agent return in this case?
To simplify this, we propose to decompose the enacted action into a set of action symbols {∀i ∈ [[1, d]] : a l,i }, where l is the learning agent, a l is the enacted action by l, i.e., a
vector of d components, and a l,i is the i-th component, i.e., a real value. Each "action symbol" will be first judged individually, before being aggregated by the judging agent. As for the observations, a belief is generated with the component name and the component value as a parameter, for each component. However, contrary to the observations, the actions are scaled accordingly to the learning agent's profile. Judging agents use the set of generated scalar values, known moral values and associated moral rules to determine if each component of the action, or "action symbol" supports or defeats their moral value. Moral rules are logical predicates expressing the support (or defeat) of an action component to a moral value, based on observations. To do so, we define the Moral Evaluation mechanism of judging agent j as a function ME j : B × R → V, where we define B as the space of possible beliefs about the situation, and V as the set of possible valuations V = {moral, immoral, neutral}.
For example, the following lines, which are in a Prolog-like language, indicate that a learning agent's action supports the sub-value of "promoting grid autonomy" if the quantity X associated to the buy_energy component is not more than 100W. This subvalue is related to the environmental sustainability value, and each action component is judged individually and receives a moral evaluation, with respect to this moral value. If the action component defeats the moral value, or one of its sub-values, the evaluation is said to be immoral. Otherwise, if the action component supports the value, its evaluation is said to be moral. A default neutral evaluation is assigned when the action component neither supports nor defeats the value.
valueSupport(buy_energy(X), "promote_grid_autonomy") :- X <= 100.
valueDefeat(buy_energy(X), "promote_grid_autonomy") :- X > 100.
subvalue("promote_grid_autonomy","env_sustain").
moral_eval(_,X,V1,immoral):-valueDefeat(X,V1) & subvalue(V1,"env_sustain").
moral_eval(_,X,V1,moral) :- valueSupport(X,V1) & subvalue(V1,"env_sustain").
moral_eval(_,_,_,neutral).
Remark. Note that, in the previous example, the subvalue fact highlights the possibility of creating a value hierarchy, represented by the "VS" knowledge base of value supports.
In our case, this hierarchy is not necessary and can be simplified: we use it mainly to make the code more understandable, by aptly naming the sub-values, e.g., here
promote_grid_autonomy for actions that avoid buying too much energy from the national grid.
The judgment of a learning agent l by a judging agent j is thus the Moral Evaluation on every action symbol. Mathematically, we define Judgment_j(l) = {∀i ∈ [[1, d]] : ME_j(beliefs(o_l), a_{l,i})},
where o l is the observation vector received by l, and a l is the action vector chosen by l.
A judging agent represents a single moral value; however, we want to judge a learning agent's behaviour based on various moral values. A learning agent l is judged by all judging agents, which results in a list of lists of valuations.
Example 5.3. Let d = 3 be the size of an action vector, l be a learning agent, and j_1, j_2 two judging agents. To simplify the notation, we write B = beliefs(o_l). Assume the judgments received by l are:

{Judgment_{j_1}(l), Judgment_{j_2}(l)}
= {{ME_{j_1}(B, a_{l,1}), ME_{j_1}(B, a_{l,2}), ME_{j_1}(B, a_{l,3})}, {ME_{j_2}(B, a_{l,1}), ME_{j_2}(B, a_{l,2}), ME_{j_2}(B, a_{l,3})}}
= {{moral, neutral, neutral}, {immoral, immoral, moral}}
In this example, we can see the first judging agent j 1 deemed the first dimension of the action a l,1 to be moral, i.e., consistent with its moral value and rules, whereas the second judging agent j 2 deemed this same dimension to be immoral, i.e., inconsistent with its moral value and rules. The second dimension of the action a l,2 was deemed neutral by the first judge, and immoral by the second judge; the third and final dimension a l,3 was deemed neutral by the first judge, and moral by the second.
The reward function R_l : S × A_l × S → R must return a single, real number r_l. However, we have a list of lists of valuations. Additionally, as we saw in the previous example, we may have conflicts between judgments, both within a single judge, and between judges. A
judge may determine that some components of the action are immoral and others moral; multiple judges may determine that the same component of an action is both immoral and moral, according to their own different moral values.
We propose the following method to transform the set of valuations, although many other methods are possible. Such methods, how they differ with ours, and what these differences imply, are discussed in Section 5.6.
The Feedback function F : (V^d)^{|J|} → R^{|J|} transforms the valuations into a list of numbers.
The judgment of each judging agent, i.e., a list of valuations, is transformed into a single number, by counting the number of moral valuations and dividing it by the sum of the numbers of moral and immoral valuations. This means that the more moral valuations an action receives, the more it will tend towards 1. On the contrary, the more immoral valuations an action receives, the more it will tend towards 0. If an action only received neutral valuations, we consider it was neither good nor bad, and we set the number to 0.5 as a special case. We can note that this special case also corresponds to a situation where an action received as many moral valuations as immoral ones.
We thus have a single number for each judging agent, which resolves conflicts within judges. Then, to solve the conflicts between judges, and produce a single reward for all judges, we set the reward r l to be the average of these judgment numbers produced by each judging agent.
Example 5.4. We reuse the same judgments from Example 5.3: Judgment_{j_1}(l) = {moral, neutral, neutral}, and Judgment_{j_2}(l) = {immoral, immoral, moral}. The first judgment contains 1 moral valuation, and 0 immoral; thus, the resulting number is 1/(1+0) = 1. On the other hand, the second judgment contains 1 moral valuation but 2 immoral valuations; thus, its resulting number is 1/(1+2) = 1/3. In other words, f_l = F({{moral, neutral, neutral}, {immoral, immoral, moral}}) = (1, 1/3). Finally, the reward produced by the aggregation of these two judgments is simply the average of these numbers, i.e., r_l = average(f_l) = average((1, 1/3)) = 2/3.
Algorithm 5.1 summarizes the judgment process that we defined in this section. The foreach loop that begins on line 3 performs the judgment of all learning agents individually and sequentially. Each judging agent computes its beliefs over the observations o_l received by a learning agent l (line 5), and uses them to judge each component of the action parameters (lines 7-9). The judgment for each dimension of the action is retained in a vector, so that the judge can later count the number of moral and immoral valuations (lines 14-16). Remember that, if the learning agent received only neutral valuations by a given judge, a default reward of 1/2 is attributed (lines 13 and 19). Finally, the reward ultimately received by a learning agent is the average of the per-judge rewards (line 22).
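A compact Python sketch of this feedback and aggregation step (with our own function names) could be:

def feedback(valuations: list) -> float:
    """Turn one judge's list of valuations into a number in [0, 1]."""
    moral = valuations.count("moral")
    immoral = valuations.count("immoral")
    if moral + immoral == 0:
        # Only neutral valuations: the action was neither good nor bad.
        return 0.5
    return moral / (moral + immoral)

def reward(judgments: list) -> float:
    """Average the per-judge feedbacks to obtain the scalar reward r_l."""
    per_judge = [feedback(valuations) for valuations in judgments]
    return sum(per_judge) / len(per_judge)

# Example 5.4 revisited:
# reward([["moral", "neutral", "neutral"], ["immoral", "immoral", "moral"]])  # = 2/3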
Designing a reward function through argumentation
We now propose a second method to design the reward function as symbolic judgments, through argumentation. The differences between logic rules and argumentation, their respective advantages and drawbacks, are discussed in Section 5.6.
Motivations
• Argumentation graphs offer a rich structure, by relying on the notion of arguments and attack relations. This allows explicit conflicts, represented by attacks between arguments, and offers a finer control over the targeted behaviour. In particular, it is easier to reprehend an undesired behaviour, by attacking arguments when specific conditions are met. For example, let us imagine that an agent learned to hoard energy at a time step t in order to give energy at a time step t + 1, therefore receiving praises for its "generous" behaviour at t + 1. If the hoarding behaviour at t is problematic, we may add an argument to the graph, which is activated when the agent stored a lot of energy at the previous step, and which attacks the pro-argument of having given energy. Thus, this pro-argument will be killed in such situations, which will prevent the agent from receiving a high reward: it will have to stop its hoarding behaviour to get better rewards.
• Note that it is easier for designers to structure the moral rules through the graph, as mentioned in the previous point. The attack relationship makes explicit the impact of an argument on another.
• Additionally, argumentation using a graph structure makes it feasible to visualize the judgment process, by plotting the arguments as nodes and attacks as edges between nodes. Non-developers, such as external regulators, lawyers, ethicists, or even lay users, can look at the graph and get a glimpse of the judgment function.
• Arguments themselves, and whether they were activated at a given time step t, can be leveraged to understand the learnt and exhibited behaviour. For example, if the argument "has given energy" was activated, we can understand why the agent received a high reward. Or, on the contrary, if both arguments "has given energy" and "has previously hoarded energy" were activated, we can understand why the agent did not receive a high reward, although its actions at this specific time step seemed praiseworthy. They can be further used in explanations techniques: whereas we do not consider our method an explanation technique per se, these arguments represent elements of explanations that can be leveraged in explanation methods to explore and understand the expected behaviour, the incentives that led to each reward and ultimately that led to learn a given behaviour, and compare the exhibited behaviour to the expected one. In most AI techniques, these elements simply do not exist.
• Finally, the previous points may help us diminish the reward hacking risk, i.e., the possibility that an agent learns to optimize the reward function through an undesired (and unexpected) behaviour. A classic example is an agent that is rewarded based on the distance towards a goal, and which may find that it is more profitable to indefinitely circle as close as possible to the goal, in order to obtain an infinitely positive reward. Such reward hacking can, first, be detected by looking at the sequence of activated arguments. Without using argumentation, we might detect that something is off, by seeing a sequence of low then high rewards, e.g., 0.2, 0.8, 0.2, 0.8, but we might not understand what or why. We again take the example of the "hoarding then giving" behaviour presented earlier.
Looking at arguments' activations might reveal that, when the reward is low, the argument "agent stored too much energy" was activated; when the reward is high, the argument "agent gave a lot of energy" was activated. We can thus understand what the problem is. Secondly, the reward hacking can be fixed, by adding new arguments that prevent such hacking, as we have already mentioned.
Argumentation
Before introducing our AJAR model in the next section, based on argumentation decision frameworks, we briefly cover some necessary knowledge about argumentation.
Abstract argumentation frameworks, first introduced by [START_REF] Dung | On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games[END_REF], allow us to express knowledge as a set of arguments, such as "There are clouds today", "The weather is nice". Arguments are arranged as nodes of a directed graph, where the links, or edges, between nodes represent binary attack relations. For example, the argument "There are clouds today" attacks "The weather is nice". This means that, if we consider the argument "There are clouds today" as true, or alive, in the current situation, it will be difficult to accept the argument "The weather is nice" as well. An example is given in Figure 5. Multiple argumentation frameworks exist, e.g., attacks can be weighted (Coste-Marquis, Konieczny, Marquis, & Ouali, 2012), bipolar frameworks bring the notion of supporting an argument in addition to attacking [START_REF] Amgoud | On bipolarity in argumentation frameworks[END_REF], arguments and attacks can have a probability [START_REF] Li | Probabilistic argumentation frameworks[END_REF], etc. In this contribution, we want to leverage argumentation to produce a judgment of an agent's behaviour. This can be seen as some sort of decision, where the possible outcomes are "The agent's action was moral in regard to this specific moral value", or "The agent's action was immoral in regard to this specific moral value."
We propose to use a simplified version of the Argumentation Framework for Decision-Making (AFDM) of [START_REF] Amgoud | Using arguments for making and explaining decisions[END_REF]. Their model considers that multiple decisions can be possible, whereas in our case, we only need to determine the degree to which an action can be considered moral, or immoral. We thus restrict our arguments to be pros, cons, or neutral arguments, and drop the set of decisions. This simplified version, instead of targeting decision-making, focuses on judging decisions, and we thus name it Argumentation Framework for Judging a Decision (AFJD).
Definition 5.1 (AFJD). An Argumentation Framework for Judging a Decision (AFJD) is defined as a tuple AF = ⟨Args, Att, F p , F c ⟩, where:
• Args is a non-empty set of arguments. An argument is represented by a name, and has an aliveness condition function. This function is defined as alive : S → B, where S is the space of world states, and B = {0, 1} is the set of boolean values. It determines whether the argument can be considered "true" in the current state of the world, or "false", in which case the argument is ignored.
• Att is a binary relation named the attack relation, defined on pairs of arguments, such that A Att B means that argument A attacks argument B.
• F p ∈ 2^Args (for pros) is the set of pro-arguments, which indicate that the currently judged action was a moral one.
• F c ∈ 2^Args (for cons) is the set of con-arguments, which indicate that the currently judged action was an immoral one.
To simplify notations in the sequel, we will refer to the elements of an AFJD through subscripts. In other words, for a given AF = ⟨Args, Att, F p , F c ⟩, we will note:
AF [Args] = Args, AF [Att] = Att, AF [Fp] = F p , AF [Fc] = F c .
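To make this structure concrete, here is a minimal Python sketch mirroring the definition above; the class and attribute names are illustrative choices, not the notation of an actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set, Tuple

# A world state is represented here as a plain dictionary of named (pre-computed) values.
State = Dict[str, float]

@dataclass(frozen=True)
class Argument:
    name: str
    alive: Callable[[State], bool]   # aliveness condition in a given world state

@dataclass
class AFJD:
    args: Set[Argument]                     # Args
    att: Set[Tuple[Argument, Argument]]     # Att: (a, b) means "a attacks b"
    pros: Set[Argument]                     # F_p: support "the action was moral"
    cons: Set[Argument]                     # F_c: support "the action was immoral"

    def sub_afjd(self, state: State) -> "AFJD":
        """Filtering step: keep only the arguments that are alive in `state`."""
        alive = {a for a in self.args if a.alive(state)}
        return AFJD(args=alive,
                    att={(a, b) for (a, b) in self.att if a in alive and b in alive},
                    pros=self.pros & alive,
                    cons=self.cons & alive)
```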
We will have to filter out arguments that cannot be considered as alive in the current situation, before we can make a decision. For example, if, in the current situation, there are no clouds, then we disable the "There are clouds today" argument. Disabling an argument removes the node from the graph, as well as its attacks on other arguments of the graph. Formally, we call this filtered graph a "sub-AFJD", i.e., an AFJD which is some subset of another AFJD: its arguments are a subset of the other AFJD's arguments, and, by extension, its attack relation, pros, and cons are also subsets. We thus define the set of all possible sub-AFJDs as:
P(AF) := { ⟨Args′, Att′, F′ p , F′ c ⟩ : Args′ ⊆ AF [Args] , Att′ ⊆ Args′² ∩ AF [Att] , F′ p ⊆ Args′ ∩ AF [Fp] , F′ c ⊆ Args′ ∩ AF [Fc] }
As arguments may attack each other, and we want to compute the compliance of the learning agent's action, we need a way to determine whether we should take a specific argument into account. Let us illustrate this with 3 arguments. A first one, arg 1, states "The agent did not consume too much". A second one, arg 2, attacks arg 1 and says "The agent consumed more than the average of all agents". Finally, a third argument, arg 3, attacks arg 2: "The agent had been in short supply for several time steps". As arg 2 attacks arg 1 and arg 3 attacks arg 2, we say that arg 3 defends arg 1. Which arguments should be taken into account to judge the agent's action? As arg 2 attacks arg 1, we might be tempted to ignore arg 1; however, arg 3 also attacks arg 2. Thus, if we ignore arg 2, can we keep arg 1 as well?
In argumentation theory, this problem is known as defining acceptability. An argument that is not acceptable should not be taken into account by the judge, contrary to an acceptable argument. Acceptability is often based on the notion of conflict-freeness between arguments. The following definitions of conflict-freeness and acceptability are paraphrased from Dung (1995, p. 6): a set of arguments is conflict-free if no argument of the set attacks another argument of the set; and an argument A is acceptable with respect to a set S of arguments if, for every argument B attacking A, some argument of S attacks B. We can note from these two definitions that acceptability depends on the considered set. It is possible to find a conflict-free set, in which all arguments are acceptable, but maybe another set exists, holding the same properties. In this case, which one should we choose? And more importantly, why?
To solve this question, argumentation scholars have proposed various definitions of admissible sets of arguments, called extensions. The simplest one may be the admissible extension, while others propose the complete extension, stable extension, or preferred extension [START_REF] Dung | On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games[END_REF]. Although there is no clear consensus in the community as to which extension should be used, the uniqueness property of some extensions, such as the grounded or ideal extensions, makes them an attractive choice [START_REF] Caminada | Comparing two unique extension semantics for formal argumentation: Ideal and eager[END_REF]. Indeed, this property means that we can compute the extension, and be guaranteed of its uniqueness, without worrying about having to implement in the judging agent a choice mechanism between possible sets. The grounded extension also has the advantage of being computable through a very efficient algorithm in O(|Args| + |Att|) time, where |Args| is the number of arguments in the whole graph, and |Att| is the number of attacks between arguments in Args [START_REF] Nofal | Computing grounded extensions of abstract argumentation frameworks[END_REF]. For these two reasons, we will consider the grounded extension in our proposed model.
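As an informal illustration of what the grounded extension is, it can be obtained as the least fixed point of the characteristic function: start from the arguments that are not attacked, then repeatedly add every argument whose attackers are all counter-attacked by the current set. The following sketch, building on the AFJD structure sketched earlier, computes it naively; it favours readability over the linear-time algorithm cited above.

```python
from typing import Set

def grounded_extension(af: AFJD) -> Set[Argument]:
    """Least fixed point of the characteristic function (naive but readable)."""
    attackers = {a: {b for (b, c) in af.att if c == a} for a in af.args}

    def acceptable_wrt(s: Set[Argument]) -> Set[Argument]:
        # a is acceptable w.r.t. s if every attacker of a is itself attacked by s.
        return {a for a in af.args
                if all(any((d, b) in af.att for d in s) for b in attackers[a])}

    extension: Set[Argument] = set()
    while True:
        nxt = acceptable_wrt(extension)
        if nxt == extension:
            return extension
        extension = nxt
```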
The proposed model: AJAR
We now present the Argumentation-based Judging Agents for ethical Reinforcement learning (AJAR) framework. In this proposed model, similarly to the logic-based model, we introduce judging agents to compute the rewards. Each judging agent has a specific moral value, and embeds an AFJD relative to this moral value.
Definition 5.5 (Argumentation-based Judging Framework). We define an Argumentation-based Judging Framework as a tuple ⟨J , {AF j } , {Flt j } , {J j } , g agr ⟩, where:
• J is the set of judging agents.
• ∀j ∈ J , Flt j : L × AF × S → P(AF j ) is the filtering function, which filters the AFJD to return the sub-AFJD that judging agent j uses to judge the learning agent.
• ∀j ∈ J , J j : P(AF j ) → R is the judgment function, which returns the reward from the sub-AFJD.
• g agr : R^|J | → R is the aggregation function for rewards.
To simplify, we consider that the filtering function is the same for each judging agent j; however, we denote it Flt j to emphasize that this function takes the AFJD associated to j as input, i.e., AF j , and returns a sub-AFJD ∈ P(AF j ). The same reasoning is applied to the judgment function J j .
The filtering function Flt j is used to filter the AFJD a judging agent relies on, according to the current state of the world. We recall that each argument of the AFJD has an aliveness condition function, which determines whether the argument is alive or not in this state. This means that designers create an AFJD with all possible arguments, and only the relevant arguments are retained during the judgment, through this filtering function.
Example 5.6. For example, an AFJD may contain 3 arguments: "the agent consumed 10% more than the average", "the agent consumed 20% more than the average", and "the agent consumed 30% more than the average". At a given time step t, in state s t , if the agent consumed 27% more than the average, the aliveness condition of the first two arguments will be true, whereas the last argument will be considered dead. Thus, when performing the judgment, the judging agent will only consider the first two, even for computing the grounded extension. Thanks to this filtering, the judgment acts as if the third argument was simply not part of the graph, and all its attacks are removed as well.
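To illustrate, these three arguments and their aliveness conditions could be written as follows with the structures sketched earlier; the state key consumption_ratio is a hypothetical pre-computed value (1.27 meaning 27% more than the average), and the arguments are declared as con-arguments purely for the sake of the example.

```python
a10 = Argument("consumed 10% more than average", lambda s: s["consumption_ratio"] > 1.10)
a20 = Argument("consumed 20% more than average", lambda s: s["consumption_ratio"] > 1.20)
a30 = Argument("consumed 30% more than average", lambda s: s["consumption_ratio"] > 1.30)

af = AFJD(args={a10, a20, a30}, att=set(), pros=set(), cons={a10, a20, a30})

state = {"consumption_ratio": 1.27}   # the agent consumed 27% more than the average
sub = af.sub_afjd(state)              # only the first two arguments remain alive
assert sub.args == {a10, a20}
```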
In the current definition, the filtering function assumes that judging agents have full knowledge about the world's state s. This allows comparing the learning agent to other agents, e.g., for the average consumption, which we cannot access if we only have the learning agent's observations o l . This assumption can be lifted by changing the definition of Flt j to take O l as domain; however, this would limit the available arguments, and thus the judgment as a whole. The world's state s consists of data from the true environment's state and the agents' actions. This represents some sort of very limited history, with the world state containing both the action itself and its consequences. A better history could also be used, by
remembering and logging the actions taken by each agent: this would expand what the designers can judge within the argumentation graphs, e.g., behaviours over several time steps, such as first buying energy at t and then giving at t′, with t′ > t. Note that actions from all learning agents are included in the world's state, so that the currently judged agent can be compared to the others. This allows arguments such as "the learning agent consumed over 20% more than the average".
For the sake of simplicity, we pre-compute a few elements from these continuous data, such as the average comfort, the difference between the maximum and minimum comfort, the quantity of energy the currently judged learning agent consumed, etc. These elements are made available to the judging agents so that they can filter the graph through the filtering function. This pre-computation makes the argumentation graphs easier to design, as arguments, and by extension graphs and judging agents, can directly rely on these elements rather than performing the computation themselves, and it avoids some overhead when 2 different graphs use the same element.
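For instance, this pre-computation step can be as simple as the following sketch; the field names are hypothetical and only illustrate the kind of elements made available to the judging agents.

```python
from typing import Dict

def build_world_state(agent_id: str,
                      consumptions: Dict[str, float],
                      comforts: Dict[str, float]) -> Dict[str, float]:
    """Pre-compute the elements that the arguments' aliveness conditions rely on."""
    avg_consumption = sum(consumptions.values()) / len(consumptions)
    return {
        "own_consumption": consumptions[agent_id],
        "avg_consumption": avg_consumption,
        # > 1 means the judged agent consumed more than the average of all agents.
        "consumption_ratio": consumptions[agent_id] / max(avg_consumption, 1e-9),
        "avg_comfort": sum(comforts.values()) / len(comforts),
        "comfort_range": max(comforts.values()) - min(comforts.values()),
    }
```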
From the filtered sub-AFJD, which only contains the alive arguments according to the current situation, we then compute the grounded extension to keep only the acceptable arguments, i.e., we remove arguments that are attacked and not defended. This produces an even more filtered graph grd, which only contains arguments that are both alive and in the grounded extension.
From this graph grd, we want to produce a reward as a scalar number: this is the goal of the judgment function J j . There are multiple methods to do so, and we provide and compare a few in our experiments. Still, we give here an intuitive description to roughly understand what J j is doing. We recall that, in an AFJD, an argument may belong to the pros set, F p , or the cons set, F c . An argument that is in the pros set supports the decision that the agent's behaviour was moral with respect to the moral value. J j intuitively counts the number of arguments in grd [Fp] , and the number of arguments in grd [Fc] and compares them. The more pros arguments, the more the reward will tend towards 1; conversely, the more cons arguments, the more the reward will tend towards 0.
The J j function should however provide as much "gradient" as possible, i.e., an important graduation between 1 and 0, so that: 1) we can differentiate between various situations, and 2) we offer the agent an informative signal on its behaviour. If there is not enough gradient, we may have a case where the agent's behaviour improves, in the human's eyes, but the reward does not increase, because the difference is too marginal. The agent would thus probably try another behaviour, to improve its reward, whereas its behaviour was already improving. On the contrary, if there is enough gradient, the reward will effectively increase, and this will signal the learning algorithm that it should continue in this direction. For example, if we suppose the number of cons arguments stayed the same, but the number of pros arguments increased, the resulting reward should increase as well, to indicate that the behaviour improved.
The described AJAR judging process is formally presented in Algorithm 5.2 and graphically summarized in Figure 5.5. The figure shows the process for a single learning agent, which is then repeated for each learning agent, as the formal algorithm shows with the for loop on the set of learning agents L. Similarly to the previous algorithm, the for loop beginning on line 3 performs the judgment individually and sequentially for each learning agent. We compute the world state from all observations of all learning agents (line 5): this world state may contain some pre-computed values to simplify the arguments' activation functions, such as "the learning agent consumed 27% more than the average". Judging agents use this world state to filter out arguments that are not enabled, through the filtering function (line 7). This function returns the sub-AFJD that consists of all arguments whose activation function returns true for the current world state s; attacks of disabled arguments are also removed. In Figure 5.5, these disabled arguments are in a lighter shade of grey. From this sub-AFJD, judges compute the grounded extension (line 8), by removing arguments that are killed and not defended by other arguments. To do so, we used the (efficient) algorithm proposed by [START_REF] Nofal | Computing grounded extensions of abstract argumentation frameworks[END_REF]. The per-judge reward is computed by comparing the pros and cons arguments remaining in the grounded extension (line 9); we describe a few methods to do so in Section 5.4.2. Finally, the per-judge rewards are aggregated into a scalar one for each learning agent (line 10); again, we compare several approaches in Section 5.4.2.
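Putting the pieces together, the judgment of one learning agent at one time step can be sketched as follows, building on the previous snippets; the judgment and aggregation functions are simply passed as parameters, and all names are again illustrative.

```python
from typing import Callable, List, Set

JudgmentFn = Callable[[Set[Argument], AFJD], float]    # J_j: grounded extension -> reward
AggregationFn = Callable[[List[float]], float]         # g_agr: per-value rewards -> scalar

def judge_agent(state: State,
                judges: List[AFJD],
                judgment_fn: JudgmentFn,
                aggregation_fn: AggregationFn) -> float:
    """Compute the scalar reward of one learning agent for the current step."""
    per_value_rewards = []
    for af in judges:                         # one AFJD per moral value
        sub = af.sub_afjd(state)              # filtering: keep only alive arguments
        grd = grounded_extension(sub)         # keep only acceptable arguments
        per_value_rewards.append(judgment_fn(grd, af))
    return aggregation_fn(per_value_rewards)  # e.g., average, min, or weighted random
```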
Remark (Value-based Argumentation Frameworks). Usually, AI systems that leverage argumentation and deal with (moral) values focus on Value-based Argumentation Frameworks (VAF) [START_REF] Bench-Capon | Value based argumentation frameworks[END_REF]. In these frameworks, compared to our AFJD, each argument is associated with a value, and the framework additionally contains a preference relation over the possible values. This allows comparing arguments w.r.t. their respective values, such that an argument can be deemed stronger than another because its associated value is preferred. When using such frameworks, all arguments belong to the same graph and can thus interact with each other. On the contrary, in our approach, we proposed
that different values are represented by different graphs, separated from each other. This means that an argument, relative to a given value, e.g., ecology, cannot attack an argument relative to another value, e.g., well-being. An advantage of this design choice is that we may add (resp. remove) moral values, i.e., argumentation graphs, without having to consider all possible interactions between the existing arguments and the new (resp. old) arguments. Yet, VAFs are well suited for decision-making, as they require the designer to consider such interactions when building the graph; we argue that using our AFJD is on the other hand suited for judgment-making.
Experiments
We have designed several experiments to evaluate our contribution, both on the logic-based and the argumentation-based agents. These experiments rely on the moral values we previously described in Section 3.4.5: security of supply, inclusiveness, environmental sustainability, and affordability. As for the previous experiments, we particularly emphasize the adaptation capability: in some scenarii, we incrementally enable or disable the judging agents. We recall that each judging agent is specific to a given moral value: in turn, this means that we add or remove moral values from the environment at different time steps. It might seem curious to remove moral values; we use this scenario as proof that, in combination with the ability to add moral values, we can effectively learn to follow evolutions of the ethical consensus within the society. Indeed, removing a previous moral value and adding a new one ultimately amounts to replacing and updating an existing moral value.
Remark. Note that, as the goal of this contribution is to propose an alternative way of specifying the reward function, i.e., through symbolic judgment rather than mathematical functions, the reward functions in these experiments are different from those used in the first contribution's experiments in Section 4.5. Thus, the hyperparameters are not optimized for these new reward functions: the agents could attain a sub-optimal behaviour. However, we have chosen to avoid searching for the new best hyperparameters, in order to save computing resources. We will also not compare the algorithms to baselines such as DDPG or MADDPG. Indeed, we present here a proof-of-concept work on the usability of symbolic judgments for reward functions, and not a "competitive" algorithm. The goal is not to evaluate whether the algorithm achieves the best score, but rather to evaluate whether: 1) the reward function can be learned by the algorithm, and 2)
the reward function implies an interesting behaviour, that is, a behaviour aligned with the moral values explicitly encoded in the function. These objectives do not require, per se, the use of the best hyperparameters: if learning agents manage to learn an interesting behaviour by using sub-optimal parameters, it means that our proof-of-concept symbolic judgments can be used.
In both logic-based (LAJIMA) and argumentation-based (AJAR) models, we implement judging agents that represent the moral values detailed in Section 3.4.5:
• Security of supply, which motivates agents to satisfy their comfort need.
• Affordability, which motivates agents not to pay too much.
• Inclusiveness, which focuses on the equity of comforts between agents.
• Environmental Sustainability, which discourages transactions with the national grid.
Specific logic rules are detailed in Appendix C and argumentation graphs in Appendix D.
We design several scenarii, which notably depend on the 2 following variables:
• The environment size, i.e., the number of learning agents.
• The consumption profile, i.e., annual or daily.
These variables are exactly the same as in the previous chapter. We recall that a small environment corresponds to 20 Households, 5 Offices, and 1 School, whereas a medium environment corresponds to 80 Households, 20 Offices, and 1 School. The annual profile contains a consumption need for every hour of every day in a year, which we use as a target for agents: they want to consume as much as their need indicates; the daily profile is averaged on a single day of the year, and therefore does not contain the seasonal variations.
Additional variables are added separately for logic-based and argumentation-based experiments. In the sequel, we first describe experiments for logic-based agents, and then for argumentation-based agents.
Learning with LAJIMA judging agents
Our experiments leverage the Ethicaa agents [START_REF] Cointe | Ethical judgment of agents' behaviors in multi-agent systems[END_REF], which use the JaCaMo platform [START_REF] Boissier | Multi-agent oriented programming with JaCaMo[END_REF]. JaCaMo is a platform for
Multi-Agent Programming that combines the Jason language to implement agents with CArtAgo for specifying environments with which agents can interact, and Moise for the organization aspect. We will particularly focus on the agent aspect, i.e., the Jason language [START_REF] Bordini | Programming multi-agent systems in AgentSpeak using jason[END_REF], which is derived from AgentSpeak(L), itself somewhat similar to Prolog. The CArtAgo environment, defined in the Java programming language, allows us to connect the simulation and learning algorithms to the judging agents, by representing them as an artifact, storing the received observations and actions, and thus making them available to the judging agents. In addition, the CArtAgo environment is also responsible for sending rewards back to the simulation after the judgment process occurred. Judges continuously observe the shared environment and detect when new data have been received: this triggers their judgment plan. They push their judgments, i.e., the rewards, to the shared environment; once all rewards have been pushed, by all judging agents and for all learning agents, the environment sends them to the learning algorithm.
However, as we mentioned, JaCaMo relies on the Java language and virtual machine; our learning algorithms and Smart Grid simulator are developed in Python, so we needed a way to bridge the two different codes. To do so, we took inspiration from an existing work that tries to bridge the gap between BDI agents, especially in JaCaMo, and RL [START_REF] Bosello | From programming agents to educating agents -a jasonbased framework for integrating learning in the development of cognitive agents[END_REF], by adding a web server on the simulator and learning side. The CArtAgo environment then communicates with the simulator and learning algorithms through regular web requests: to ask whether the simulation is finished, whether a new step is ready for judgment, to obtain the data for judgment, and for sending back the rewards.
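As an illustration, the Python side of this bridge can boil down to a few HTTP endpoints, e.g., with Flask; the routes and payload fields below are hypothetical and only sketch the kind of exchanges just described (polling for a new step, fetching the judgment data, posting the rewards back).

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
pending_step = {}   # observations/actions of the current step, filled by the simulator
rewards = {}        # rewards pushed back by the judging agents

@app.route("/step/ready")
def step_ready():
    # Polled by the CArtAgo artifact to know whether a new step awaits judgment.
    return jsonify({"ready": bool(pending_step)})

@app.route("/step/data")
def step_data():
    # Observations and actions of all learning agents for the current step.
    return jsonify(pending_step)

@app.route("/rewards", methods=["POST"])
def post_rewards():
    # Judging agents send back one (aggregated) reward per learning agent.
    rewards.update(request.get_json())
    return jsonify({"status": "ok"})

# app.run(port=5000) would then expose these endpoints to the JaCaMo side.
```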
We now describe the scenarii that we designed, and which principally depend upon the configuration of judging agents. Indeed, we want to evaluate the ability to adapt to changing environment dynamics, especially in the reward function. Agentifying this reward function offers a new, flexible way to provoke changes in the reward function, by modifying the set of judging agents, e.g., adding, or removing. We thus developed the judges so that they can be activated or disabled at a given time step, through their initial beliefs, which are specified in the JaCaMo configuration file. The following configurations were used:
• Default: all judging agents are activated for the whole experiment: the rewards aggregate all moral values.
• Mono-values: 4 different configurations, one for each moral value, in which only a single judging agent is enabled for the whole experiment. For example, in the affordability configuration, only the affordability agent actually judges the behaviours. These scenarii serve as baselines.
• Incremental: At the beginning, only affordability is activated and produces feedbacks. We then enable the other agents one-by-one: at t = 2000, we add environmental sustainability, then inclusiveness at t = 4000, and finally supply security at t = 6000. From t = 6000 and up, all judging agents are activated at the same time.
• Decremental: Conversely to the incremental scenario, at the beginning, all judging agents are enabled. We then disable them one-by-one: at t = 2000, environmental sustainability is removed, then inclusiveness at t = 4000, and finally supply security at t = 6000. Afterwards, only the affordability agent remains.
Learning with AJAR judging agents
Similarly to the previous experiments, we designed several scenarii that are based on the following variables:
• The judgment function J j used to transform a grounded extension, and more specifically its F p and F c sets of arguments, into a single number.
• The aggregation function g agr used to transform the different rewards returned by the judging agents into a single reward.
• The configuration of judges, i.e., which judges are activated and when.
The environment size and consumption profile are also used, as in the previous chapter and in the logic-based experiments. The other variables are described below with their possible values.
The judgment function
We recall that the judgment function J j is the last step of the judging process, before the aggregation. It takes as input the computed grounded extension of the argumentation graph, i.e., the graph that contains only arguments that are both alive and acceptable. Some of these arguments are in favour of the decision "The learning agent's action was moral with respect to the moral value" and are said to be pros (F p ), whereas others counter this decision and are said to be cons (F c ). The rest of the arguments are neutral. From this grounded extension grd and the sets F p , F c , the J j function must output a reward, i.e., a single number, corresponding to the degree to which the agent respected the moral value.
Contrary to the logic-based LAJIMA model, where the number of moral valuations is fixed, in the argumentation-based AJAR framework, the number of moral valuations can be any number, only restricted by the total number of arguments. Indeed, the moral valuations are given by the F p and the F c , and any argument can be a pro or con argument. Thus, many methods can be imagined for this judgment function. We have designed the following functions, which basically rely at some point on counting the number of F p and F c , and comparing them. However, they differ in the exact details, which impact the gradient offered by the function, i.e., the possible values. Some functions will, for example, only be able to return 0/3, 1/3, 2/3, 3/3, which gives a low gradient (only 4 different values).
• simple: Function that simply compares the number of F p with the sum of F p and F c in grd.
J simple = |grd [Fp] | / (|grd [Fp] | + |grd [Fc] |) if |grd [Fp] | + |grd [Fc] | ≠ 0, and 1/2 otherwise
This function is simple and intuitive to understand; however, it fails to take into account differences when there are no arguments in F p or in F c . To illustrate this, let us assume that, at a given step t, there were 3 arguments in grd [Fp] , and 0 in grd [Fc] . The result is thus 3/3 = 1. This correctly represents the fact that, at t, the action only had positive evaluations (for this specific moral value). Let us also assume that, at another step t′, there were still 0 arguments in grd [Fc] , but this time 4 arguments in grd [Fp] . One would intuitively believe that the action at t′ was better than the one at t: it received more positive evaluations. However, this function returns in this case 4/4 = 1, which is the same reward. There is virtually no difference between an action with 3 or 4 pros arguments, as long as there are 0 cons arguments. The exact same problem applies with 0 arguments in F p , which would invariably return 0/x = 0.
• diff: Function that first compares the number of F p in the grounded extension with the total number of F p in the unfiltered, original argumentation graph, then does the same for the number of F c , and finally compares the two ratios.
J diff = |grd [Fp] | / |AF [Fp] | − |grd [Fc] | / |AF [Fc] |
Thus, this function is able to take into account the existence of non-activated arguments, either pros or cons, when creating the reward. For example, if only 3 of the 4 possible F p arguments are activated, the function will not return 1. Intuitively, this can be thought of as "You have done well... but you could have done better." In this sense, this function seems better than the previous one. However, a problem arises if some arguments are too hard to activate, and therefore (almost) never present in grd, or if 2 arguments cannot be activated at the same time. The agent will thus never get 1, or conversely 0 in the case of F c , because the function relies on the assumption that all arguments can be activated at the same time.
• ratio: Function that compares the sum of the number of F p and F c with the maximum known number of F p and F c activated at the same time.
top = |grd [Fp] |² − |grd [Fc] |²
down = |grd [Fp] | + |grd [Fc] |
max_count t = max(max_count t−1 , down)
J ratio = top / max_count t
This function thus avoids the pitfalls of the previous one, by only comparing with a number of activated arguments that is realistically attainable. However, it still suffers from some drawbacks. For example, we cannot compute a priori the maximum number of activated arguments; instead, we maintain and update a global counter. This means that the rewards' semantics may change over the time steps: a 1/2 might later be judged as a 1/3, simply because we discovered a new maximum. Taking this reasoning further, we can imagine that maybe, in a given simulation, the maximum known number will be 2 for most of the time steps, and finally during the later steps we will discover that in fact the maximum could attain 4. Thus, all previous rewards would have deserved to be x/4 instead of x/2, but we did not know that beforehand.
• grad: Function that takes into account the total possible number of F p and F c arguments.
J grad = 0.5 + |grd [Fp] | × 0.5 / |AF [Fp] | − |grd [Fc] | × 0.5 / |AF [Fc] |
This function creates a gradient between 0 and 1, with as many graduations between 0.5 and 1 as there are possible pros arguments, and conversely as many graduations between 0.5 and 0 as there are possible cons arguments. Each activated pro argument advances one graduation towards 1, whereas each activated con argument advances one graduation towards 0, with a base start of 0.5. This function is similar to diff, except that it stays within the [0, 1] range.
• offset: Function that avoids division by 0 by simply offsetting the number of activated arguments.
J offset = min(1, (1 + |grd [Fp] |) / (1 + |grd [Fc] |))
Since neither the numerator nor the denominator can be 0, adding one F c argument without changing the number of F p would effectively change the resulting reward. For example, assuming |F p | = 0, and |F c | = 3, the reward would be 1/4. Adding one F c argument would yield 1/5, thus effectively reducing the reward. However, to avoid having rewards greater than 1, the use of the min function prevents the same effect on F p : if |F c | = 0, the reward will be 1 no matter how many F p are present.
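The following sketch implements some of these functions on top of the structures introduced earlier, taking as inputs the grounded extension grd and the original AFJD af; it only aims at making the formulas concrete, and omits ratio, which additionally needs to keep track of the running maximum.

```python
def j_simple(grd: Set[Argument], af: AFJD) -> float:
    pros, cons = len(grd & af.pros), len(grd & af.cons)
    return pros / (pros + cons) if pros + cons > 0 else 0.5

def j_diff(grd: Set[Argument], af: AFJD) -> float:
    # Assumes the original graph has at least one pro and one con argument.
    return len(grd & af.pros) / len(af.pros) - len(grd & af.cons) / len(af.cons)

def j_grad(grd: Set[Argument], af: AFJD) -> float:
    # Same assumption as j_diff; stays within [0, 1], starting from 0.5.
    return (0.5
            + len(grd & af.pros) * 0.5 / len(af.pros)
            - len(grd & af.cons) * 0.5 / len(af.cons))

def j_offset(grd: Set[Argument], af: AFJD) -> float:
    return min(1.0, (1 + len(grd & af.pros)) / (1 + len(grd & af.cons)))
```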
The aggregation function
As for the logic-based judgment, we have several judges that each provides a reward for every learning agent. Yet, the current learning agents expect a single reward: we thus need to aggregate.
We retain the average aggregation that was proposed for the logic-based judgment, as it is a quite simple, intuitive, and often used method. Another option is to use a min aggregation that is based on the Aristotle principle. The Aristotelian ethics recommends choosing the action for which the worst consequence is the least immoral [START_REF] Ganascia | Modelling ethical rules of lying with answer set programming[END_REF]. Adapting this reasoning to our judgment for learning, we focus on the worst value so that the agent will learn to improve the worst consequence of its behaviour. The final reward is thus the minimum of the judges' rewards. The reward signal thus always focuses on the least mastered of the moral values. For example, assuming the agent learned to perfectly exhibit the inclusiveness value and always receives a reward of 1 for this moral value, but has not yet learned the value of environmental sustainability, and receives a reward of 0.2 for this second moral value, the reward will be 0.2 so that the agent is forced to focus on the environmental sustainability to improve its reward. When the agent's mastery of the environmental sustainability exceeds that of other values, its associated reward is no longer the minimum, and another reward is selected so that the agent focuses on the new lowest, and so on. Thus, it emphasizes the learning of all moral values, not strictly at the same time, but in a short interval. The agent should not be able to "leave aside"
one of the moral values: if this value's reward becomes the lowest, it will penalize the agent. Formally, this function is simply defined as:
g min (F(l)) = min({f l,j : f l,j ∈ F(l)})
where F(l) is the set of rewards for the learning agent l, of size |J |, and f l,j is the specific reward for learning agent l determined by judging agent j.
However, I noticed that in some cases this function could become "stuck". For example, if one of the argumentation graphs is ill-defined, and it is impossible to get a high reward for one of the moral values, e.g., more than 0.2. In this case, as long as all other moral values have an associated reward above 0.2, they will not be the minimum, and thus not selected by the g min aggregation. The agent therefore cannot receive feedback on its behaviour with respect to the other moral values. It would be difficult to further improve these moral values without feedback: even if the agent tries an action that improves another moral value, the "stuck" one will remain at 0.2, and thus be selected again, and the final reward will be 0.2. The received signal would seem to (wrongly) indicate that this action did not improve anything. It would not be a problem in the case of a few steps, because the agent would have later steps to learn other moral values. However, in this example scenario, as one of the moral values is "stuck", even later steps would raise the same issue.
To potentially solve this concern, I proposed to introduce stochasticity in the aggregation function: the function should, most of the time, return the lowest reward so as to benefit from the advantages of the g min function, but, from time to time, return another reward, just in case we are stuck. This would allow agents to learn other moral values when they fail to improve a specific one, while still penalizing this "stuck" value rather than skipping its learning entirely. To do so, we propose the Weighted Random function, in which each reward has a probability of being selected that is proportional to its complement (1 − the reward), relative to all other rewards. For example, if a reward is 0.1, and the other rewards are 0.8 and 0.9, then the 0.1 will have an extremely high probability of being selected. Yet, the two others will have a non-zero probability. Formally, we define the function as follows:
g WR (F(l)) = f l,X , with X drawn such that P(X = j) = (1 − f l,j ) / Σ k∈J (1 − f l,k )
where X is a random variable over the different judging agents, J , and f l,X is the reward determined by judging agent X.
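These aggregations can be sketched as follows; random.choices performs the weighted draw, and the small epsilon is an illustrative guard for the corner case where every reward equals 1.

```python
import random
from typing import List

def g_average(rewards: List[float]) -> float:
    return sum(rewards) / len(rewards)

def g_min(rewards: List[float]) -> float:
    # Aristotle-inspired: always focus the feedback on the least satisfied moral value.
    return min(rewards)

def g_weighted_random(rewards: List[float]) -> float:
    # Each reward is drawn with a probability proportional to (1 - reward):
    # the lowest reward is almost always selected, but never with certainty.
    weights = [1.0 - r + 1e-9 for r in rewards]   # epsilon avoids all-zero weights
    return random.choices(rewards, weights=weights, k=1)[0]
```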
Configuration of judges
Finally, we propose, similarly to the logic-based judges, several configurations of argumentation judges. Judges can be enabled or disabled at a given time step, and we thus use it to evaluate the adaptation to changing reward functions. The 3 following configurations have been tested:
• Default: all judging agents are activated, all the time.
• Incremental: Only the affordability agent is enabled at the beginning, then the environmental sustainability agent is activated at t = 2000, followed by the inclusiveness agent at t = 4000, and finally the supply security agent at t = 6000. All agents are thus enabled after t = 6000.
• Decremental: Basically the opposite of incremental: all agents are initially activated. At t = 2000, the environmental sustainability agent is disabled, the inclusiveness agent is disabled at t = 4000, and finally the supply security agent at t = 6000. Only the affordability agent remains after t = 6000.
Results
We first present the results for the LAJIMA model, using logic-based agents, and then for the AJAR framework, using argumentation-based agents. As for the previous chapter, the score here is defined as the average global reward at the end of the simulation, where the global reward is the mean of rewards received individually by learning agents, at a given step.
Learning with LAJIMA judging agents
Figure 5.6 shows the results for the experiments on each scenario. Results show that, on most scenarii, Q-SOM and Q-DSOM algorithms perform similarly, except on the env_sustain mono-value configuration, i.e., when only the judge associated to the Environmental Sustainability value is activated. Some moral values seem easier to learn, e.g., the affordability and supply_security scenarii, which consistently have a score of nearly 1. Others, such as the inclusiveness, seem harder to learn, but nevertheless attain a score of 0.5.

Fig. 5.6.: Results of the learning algorithms on 10 runs for each scenario, when using the LAJIMA agents.
A statistical comparison between the small and medium sizes of environments, using the Wilcoxon test, does not show a significant difference in the obtained scores (p-value = 0.73751; we cannot accept the alternative hypothesis of a non-0 difference). This supports the scalability of our approach.
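Such a comparison can be reproduced with SciPy; the sketch below uses hypothetical scores and the rank-sum (Mann-Whitney U) variant of the Wilcoxon test, which applies to two independent samples.

```python
from scipy.stats import mannwhitneyu

# Hypothetical final scores over 10 runs for each environment size.
scores_small  = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.87, 0.94, 0.90, 0.91]
scores_medium = [0.90, 0.89, 0.92, 0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.90]

stat, p_value = mannwhitneyu(scores_small, scores_medium, alternative="two-sided")
print(f"U = {stat}, p-value = {p_value:.5f}")   # p > 0.05: no significant difference detected
```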
Learning with AJAR judging agents
Figure 5.7 shows the scores for the argumentation-based experiments. As we have detailed, in the AJAR framework, 2 additional parameters must be set: the judgment function J j that returns a single number from a set of moral evaluations, and the aggregation function g agr that returns a final reward from the set of judgments for each judge. We can see that, for every choice of J j and g agr , and for every scenario, the learning algorithms managed to attain a very high score, above 0.9. This means that the judgment functions offer a good enough gradient to allow the learning algorithms to derive the correct behaviour from the reward signal. However, there are differences between the judgment functions. The offset, grad, and simple functions indeed seem to yield slightly higher rewards.
Interestingly, we can note that the Q-DSOM algorithm performs better on most of the experiments, compared to the Q-SOM algorithm. This is confirmed by a Wilcoxon test, using the "greater" alternative (p-value < 2.22e-16).
Discussion
In this chapter, we presented a new line of research for constructing reward functions, through the use of symbolic judgment. More specifically, we proposed to introduce new judging agents to the multi-agent system, which judge the behaviour of the learning agents to compute their rewards, by leveraging symbolic reasoning, through either logic-based rules or argumentation frameworks. To the best of our knowledge, this line of research has not been studied, especially within Machine Ethics.
We recall that the principal objectives we identified from the state of the art were:
• How to correctly judge a learning agent's behaviour, and particularly avoid reward gaming?
• How to include a diversity of moral values?

By using symbolic judgments, we can leverage domain experts' knowledge to correctly judge the learning agents. This is even more true with argumentation-based judgments, as the attack relationship can be leveraged to disable arguments in a specific context. We have given the fictional example of an agent that learned to hoard then give energy, to demonstrate how this kind of reward gaming could be fixed, by adding a new argument that checks whether the agent hoarded energy at the previous step. When this argument is true, the arguments that would tend to reward the agent for giving energy are disabled through the attacks. Using argumentation also helps with identifying a behaviour that exhibits reward gaming, by looking at arguments' activation throughout the time steps.
Multiple moral values can be represented by different judging agents; using distinct agents instead of a single one offers several advantages. The agents can be individually added, updated, or removed, thus giving the designers control over the represented moral values in the system at runtime. Additionally, this paves the way to more complex interactions between the judging agents, which we will detail later.
The methods proposed in this chapter, logic-based and argumentation-based, have various specificities, and we now compare them, both to the traditional mathematical reward functions, and between them. We note that, as we have 2 different implementations, some differences, advantages or drawbacks, emerge from the technical implementation details, whereas others stem from more fundamental elements. The implementation differences are still important to note, even though they do not reflect a fundamental flaw or inherent advantage of the method; we will therefore particularly emphasize whether our remark is related to a technical implementation choice, or a fundamental conception, in the sequel. We also highlight the limitations and perspectives that arise from these methods.
First, we compare the two methods to the traditional mathematical functions, as used in the previous chapter. As we saw in the state of the art, multiple works have tried to implement ethical principles or moral values as symbolic rules. Such formalization seems therefore appropriate, compared to mathematical formulas or more general programming languages. However, we note that one disadvantage of our chosen symbolic methods, when applied to the problem of generating rewards for learning agents, is to offer a poorer gradient than mathematical formulas, as we have already mentioned. By gradient, we mean that a difference in the situation and/or action should result in an equivalent difference in the reward, no matter how small the initial difference may be. The finer the gradient is, the more agents will learn when trying new behaviours: by exploring a new action, the result will be different, and this difference will be reflected in the reward, either positively or negatively. If the reward is better than before, the agent will then learn that this new action was a good one, and will retain it, rather than the old one. On the other hand, if the gradient is too coarse, the action difference induced by the learning agent risks not being correctly captured by the judgment. This is particularly visible when we think of thresholds, for example: let us assume that we evaluate the action as moral if the agent did not consume more than 10% more energy than the average. At a first step t 0 , the agent may consume 20% more, and thus is not positively rewarded. At a second step t 1 , the agent may randomly consume 12% more than the average: the action is clearly better; however, it is still above the threshold of 10%, and thus the agent is still not positively rewarded. It therefore has no feedback indicating that its action was better, yet not sufficient, but that it could continue in this direction. On the contrary, mathematical functions offer a gradient by principle: let us take, for example, R = consumed − (average × 1.1), which represents roughly the same idea: the agent should not consume more than 110% of the average. We can clearly see that the reward will be different when the agent consumes 20% more, or 12%. It would even be different if the agent consumed 19.99999% more. This problem might be mitigated by other techniques, such as fuzzy logic [START_REF] Zadeh | Fuzzy sets[END_REF], or weighted argumentation frameworks [START_REF] Coste-Marquis | Weighted attacks in argumentation frameworks[END_REF], for example, which handle continuous values more naturally in addition to symbols. This problem of gradient is linked with the notion of moral evaluation that we proposed in our methods. The symbolic rules produce these evaluations, which are in the form of
moral and immoral symbols in the LAJIMA model, or represented by pros (F p ) and cons (F c ) arguments in AJAR. Then, the reward is determined based on these evaluations, the number of positive evaluations compared to the negative ones, etc. We note a first difference, which is purely an implementation detail, between our two methods: in LAJIMA, we have chosen to produce exactly as many evaluations as there are dimensions in an action; thus, each action dimension is judged and associated with a moral evaluation. On the contrary, in AJAR, we can have as many pros and cons arguments, and thus moral evaluations, as we desire. These 2 choices have both advantages and disadvantages: on the one hand, LAJIMA simplifies the problem by setting a fixed number of evaluations. Producing a reward can be as simple as dividing the number of moral evaluations by the sum of moral and immoral evaluations. On the other hand, AJAR is more permissive, and offers more flexibility, in particular for the reward designers, who do not have to exactly set 6 evaluations, and may instead create the arguments as they see fit. It also opens the way for different methods to produce rewards: we proposed a few non-exhaustive options in our experiments to illustrate this. This may be seen as a disadvantage, since it requires an additional parameter for selecting the judgment function, which implies fine-tuning. Yet, this disadvantage seems justified, as the experiments tend to show a better gradient for the AJAR framework.
We note a first perspective here, about this judgment function; indeed, designing and selecting such a function to transform a set of symbols into a reward is not a trivial task. There is maybe more work to perform here, and we identify 2 potential (non-exhaustive) research avenues. First, the judgment aggregation theory domain may be leveraged, in order to correctly choose a number to represent a set of diverging evaluations. Secondly, in an idea more specific to the AJAR framework, the argumentation graphs could be replaced by weighted ones. Indeed, we used a simple argumentation framework in our proof-of-concept work: the attack relationship is a simple, unidirectional one. If an argument is attacked, and not defended by another, it is considered killed. However, other argumentation frameworks exist in the literature, such as the weighted argumentation graphs [START_REF] Coste-Marquis | Weighted attacks in argumentation frameworks[END_REF][START_REF] Amgoud | Acceptability semantics for weighted argumentation frameworks[END_REF], where arguments and attacks are augmented by a notion of weight. An argument that has a significant weight will not be killed if it is attacked by an argument with a smaller weight. However, it might be killed if several arguments with small weights attack it, and so on. The idea here, to ultimately improve the judgment function, would thus be to rely on the remaining arguments' weights to produce the reward. If an argument is attacked but not killed, its weight will be reduced, so that we can say the agent "did well, but not enough": this avoids the pitfalls of thresholds mentioned earlier. To compute the final reward, the judgment function could, e.g., sum the weights of pros arguments and subtract the weights of cons ones.
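To make this perspective concrete, a weight-based judgment function could look like the following sketch; the structure is hypothetical, since our current AFJD does not carry weights.

```python
from typing import List

def j_weighted(pros_weights: List[float], cons_weights: List[float]) -> float:
    """Weight-based judgment: sum the (possibly reduced) weights of the surviving
    pro-arguments, subtract those of the con-arguments, and squash into [0, 1]."""
    raw = sum(pros_weights) - sum(cons_weights)
    total = sum(pros_weights) + sum(cons_weights)
    return 0.5 if total == 0 else 0.5 + 0.5 * raw / total
```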
Another difference that comes from a technical implementation detail is the kind of inputs used by judging agents. Indeed, the logic-based ones in LAJIMA rely on the individual learning agents' observations and actions: this is inspired by the Ethicaa work, which describes this as a partially informed judgment, i.e., a judgment where the judge uses knowledge from the judged agent, especially about the situation and the agent's action. Intuitively, we can see this kind of judgment as "Would I have done the same thing, had I been at this agent's place?". On the contrary, in AJAR, we described the input of the argumentation graphs as the whole knowledge of the current state, e.g., including data about other agents. This allows, for example, computing the average consumption and comparing the judged agent's consumption to the entire society. Since all data are included, the judging agents could also compare it to the median, or the 1st quartile, etc. This makes judging agents sort of "omniscient", thus making more accurate judgments, but potentially impairing privacy, since more data are shared. However, we note that the judging agents are meant as some sort of central controller, and are external to the learning agents. Thus, a Smart Grid participant still cannot access their neighbors' data. There is most certainly an important question to ask and answer here, about the correct tradeoff between the accuracy of judgments, and the privacy of the human users. Some of them will probably not want to share data. We leave this question aside, as our work does not concern the acceptability of Smart Grid (or more generally, AI) systems, but we acknowledge that the ability to correctly judge, and ultimately learn, ethical behaviours may be limited by this acceptability concern.
One of the fundamental differences between our 2 proposed models is the kind of symbolic elements that are leveraged. In LAJIMA, logic rules are used, along with beliefs, and plans, whereas AJAR uses arguments and graphs. This has an impact on the ability to read and understand the reward function, i.e., the expected behaviour, by designers first, but more importantly by non-expert people. For example, the well-known Mycin [START_REF] Buchanan | Rule based expert systems: The mycin experiments of the stanford heuristic programming project[END_REF] example has shown that expert systems, which rely on rules similar to those of our LAJIMA model, were difficult to understand by doctors, although it could theoretically "explain" its reasoning by showing the trace of rules that were triggered. More importantly, it was difficult for users to integrate their own knowledge in the system. Arguments' activation allows visualizing the reasoning as a graph and seeing the attacks as a map, which may be easier for lay users to understand than plans and beliefs. Graphs are updated by adding (or removing) arguments (nodes), and
specifying the attacks on other arguments, visually represented by edges; this, again, may be easier to use than rules laid out in a textual format, for domain experts who are not versed in AI development. However, we did not test this hypothesis with a Human-Computer Interaction experiment, and more work is required on this subject.
Both our models have a small limitation linked to the simplification of their definition. Indeed, we consider a set of judging agents, with each judge representing a moral value, and we define a symbolic judgment with respect to this moral value. However, one might argue that moral values are not the only important objectives: there might be some others, which we denote "technical aims", or aims for short, which are important for the system but not especially linked to a moral value. An example of such aims is to correctly balance the production and consumption within a Smart Grid. If these aims can be computed as a symbolic judgment, they can be added to the system exactly as the moral values are, without loss of generality. However, some aims may be easier to design as a mathematical function, such as the example above, which can be simply formulated as a difference between the production and the consumption. The current model does not allow this. The formal definitions of the model can be extended to consider not only the set of judging agents, but rather the union of judges with aims functions. Thus, the aggregation function takes the symbolic-produced rewards for moral values and the rewards for aims. Considering this union is not difficult from a technical point of view, but complicates the definition and diverts attention from the important part of the contribution; we thus chose to leave it aside.
Finally, another important element is the choice of the aggregation function. Indeed, the learning algorithms, in their current form, require a single reward per learning agent, whereas we have multiple judging agents representing various moral values. We thus need to aggregate, and we proposed several methods to do so: the average function, which is probably the most classical and intuitive way, the min function, which is almost as classical, and finally our custom Weighted Random, which mimics the behaviour of min while adding stochasticity to avoid "stuck" situations. Perhaps other functions could be explored: in computational social choice [START_REF] Chevaleyre | A short introduction to computational social choice[END_REF], and more specifically in multi-criteria decisions, several approaches have been proposed to replace min-based aggregation, such as ordered weighted min, or leximin [START_REF] Dubois | Beyond min aggregation in multicriteria decision:(ordered) weighted min, discri-min, leximin[END_REF]. Basically, the idea of these approaches is to introduce some kind of preference or priority over the various criteria. They have been thoroughly explored within the field of social choice, especially for fair division, for decision-making purposes. However, in our case, the "decision" is not to select an action, e.g., a way to distribute resources among agents, but rather to select a reward that will be used to learn the correct action. This (subtle) difference calls for more research and experiments to apply such approaches to the problem of reward selection, in order to avoid pitfalls. For example, who should set the preferences over moral values? Should it be the designers, using the same preferences for all agents, or the users, using their own (different) preferences for their own agent? In this second case, would users be able to make their agent "ignore" some of the moral values by carefully crafting their preferences? Should we allow them this possibility? And how can we prevent them from doing so?
The choice of the aggregation function is important for the resulting behaviour, as it determines how the moral values are learned, in which order, how the system handles values of different "difficulties", and situations where values are in conflict. Yet, the scalarization idea is not ideal: by its very principle, scalarizing makes us lose information. Whether we take an average, a min, or a random reward among those proposed, we still send less feedback to the agent than with the full set of rewards. We have identified 2 groups of ideas to replace it: 1) An improved interaction between judging agents, e.g., through negotiation. In a sense, the judgment aggregation theory mentioned earlier could belong to this group. Thus, the final reward is itself constructed by reasoning. If, for example, one of the judging agents detects that its moral value is not correctly learned, because the learning agent keeps repeating the same mistakes, it can negotiate with other judges to emphasize its reward, or, on the contrary, to ignore it, in order to focus on other moral values and avoid preventing their learning. This can be achieved, e.g., by negotiation between the judging agents themselves, or by creating a meta-reasoning agent, which takes the results of the lower-level judges as inputs, and outputs the final reward. This meta-judge could therefore include ethical principles to decide, whereas the other judges would include rules that concern only their moral value. This idea could be extended to create rewards that do not only depend on the current judgments at step t, but on the whole history of the learning agent. For example, the meta-judge could decide that, although the learning agent's action was good at this step, it could have been better, and thus award only a small amount to the learning agent, to encourage it to improve. Or, on the contrary, the meta-judge could give a higher reward even though some moral value is not respected, to help the agent learn the other values first, in some sort of adversarial or active learning loop. 2) Instead of aggregating the judgments per moral value into a single reward for each learning agent, we could simply send all feedbacks to them. Thus, the learning agents get more information, and can identify situations where 2 moral values are in conflict. This new ability can be exploited to inject human users'
preferences inside the learning agents to resolve these conflict situations, in a way that is specific to each learning agent, and thus to each human user.
We note that, by aggregating the moral values, we settle potential dilemmas a priori. This prevents the learning agents from correctly handling these dilemmas, or even from recognizing that there was a dilemma in the first place. We will therefore, in the next chapter, explore this 2nd family of ideas, which leads to a multi-objective approach for the learning algorithm. Scalarizing was necessary as a first step, in order to evaluate the contribution of agentifying the reward function and leveraging symbolic judgments on its own. Then, we "open up the box", by sending directly multiple rewards, one for each moral value, to each learning agent. This way, they can compare the rewards, recognize situations of dilemmas, and each learning agent can settle dilemmas in a different way, depending on both the context and their user's preferences. For example, user A may favor the inclusiveness value, and thus the learning agent will choose actions that yield maximal rewards for this value when it cannot optimize all of them. Another user, B, may on the contrary favor the affordability value, and thus their own learning agent will learn differently from user A's agent. In a different context, e.g., in winter instead of summer, perhaps user A may also favor the affordability value, and the learning agents will have to learn this.
6 Identifying and addressing dilemmas with contextualized user preferences
We present in this chapter the third and final contribution, which relates to the "dilemma management" part of the conceptual architecture in Figure 3.2. Section 6.1 begins with an overview of the contribution, explains what is missing in the previous contribution, and why it is necessary to focus on dilemmas. Then, Sections 6.2, 6.3, and 6.4 describe the 3 steps of our contribution. These steps are summarized in Section 6.5, along with two algorithms that formalize and recapitulate the operations performed in these steps. The experimental setup is presented in Section 6.6, and the results are reported in Section 6.7. We finally discuss this contribution's benefits, limitations, and perspectives in Section 6.8.
Overview
In the previous chapters, we have first proposed reinforcement learning algorithms to learn behaviours aligned with moral values, and then a new way of constructing reward functions through symbolic reasoning by judging agents, with respect to several moral values. However, the reward, as a scalar number, does not detail the agents' performances for each moral value. This is due to the reinforcement learning algorithms themselves, which expect a scalar reward; it holds whether we consider the mathematical functions in Chapter 4, such as the multi-objective sum, or the symbolic-based judgments in Chapter 5, which were aggregated. This is problematic, as it necessarily impoverishes the feedback sent to learning agents, and hides some details, such as conflicts between moral values.
Any aggregation function has some "collisions", i.e., different feedbacks that yield the same reward. These collisions result in virtually no difference in the reward between two very different situations, with strong ethical significance. The question of which feedbacks collide and result in the same reward depends on the aggregation function itself. For example, using an average, average({0.5, 0.5}) = average({1, 0}). Similarly, using a min, min({0.2, 0.2}) = min({0.2, 0.8}). In these examples, the sets of numbers are feedbacks that represent specific rewards for two different moral values. We can see that a first feedback with equal satisfaction (or defeat) of both moral values can have the same aggregated reward as another, very different feedback, where there is a significant difference between the moral values' rewards. As we illustrate just below, such differences between feedbacks, which can be obfuscated by aggregation, can be very significant from an ethical point of view. We may need to keep track of them and handle the tensions they reflect, with respect to moral values.
We recall that rewards are the signal that is used to learn the interests, or Q-Values, of actions in given states. The interests, in turn, are used to select the most promising action in the current state. Let us consider the following example: in a given situation, we have learned the interests of 3 different actions, which are all 0.5. The actions therefore seem comparable, and it would be as interesting to select the first as the second or third. The agent may simply roll a die to choose one of them. However, had we learned interests per moral value, instead of aggregated ones, perhaps we would have a completely different story. The actions' vector interests could be, e.g., Q(s, a_1) = [0.5, 0.5], Q(s, a_2) = [1, 0], and Q(s, a_3) = [0, 1], indicating that the first action a_1 has the same, medium interest for both moral values. On the contrary, the second action a_2 has a high interest for the first moral value, and a low interest for the second moral value, and a_3 mirrors a_2 with a low interest for the first moral value, and a high interest for the second. Thus, there truly is an important choice to be made: do we want to satisfy all moral values equivalently? Or do we prefer one of the moral values, at the expense of the other? This decision should be made explicit, and deliberate, which is only possible if we have access to this knowledge. In the aggregated scenario, we did not even know that there was a trade-off in the first place.
This question of a deliberate decision also opens up the subject of human preferences. We have mentioned "do we want to satisfy all moral values equivalently?": in this sentence, the we should refer, we believe, to the human users' preferences. Or, more specifically, to the agent's preferences, which should match the human ones. As we mentioned in Chapters 1 and 3, the agents learn ethics according to an ethical intention that we, humans, inject in the system one way or another. Up to now, this ethical injection was limited to feedback on the actions' compliance with given moral values. We must now extend it to also include the ethical preferences over moral values, when a choice has to be made, because the agent does not know how to satisfy all moral values at once.
We argue that these preferences differ between users: we do not all have the same priority order over moral values. They also differ between contexts, where context refers to the current situation: we may not prioritize the same moral value in summer as in winter, for example, depending on our resistance to heat or cold, as well as the cost of heating or cooling, or the country's climate. In this contribution, we thus propose to take into account this multi-objective aspect and to learn contextualized human preferences. This is related to our third research question:
How to learn to address dilemmas in situation, by first identifying them, and then settling them in interaction with human users? How to consider contextualized preferences in various situations of conflicts between multiple moral values?
To do so, we propose a Multi-Objective extension of the Q-SOM and Q-DSOM reinforcement learning algorithms, which we name Q-(D)SOM-MORL. This extension follows 3 steps, each one addressing a different need.
1. The first step is to learn "interesting" actions in all situations, so that we can identify dilemmas and make an informed decision, or choice, when presented with a dilemma.
2. The second step is to correctly identify dilemmas. We propose definitions, inspired from the literature, and refine them to better suit our continuous domains case.
3. The third and final step is to learn the user preferences, based on the interesting actions from step 1, in the dilemmas identified at step 2. We refine the vague notion of "context" mentioned earlier, and describe what is a user preference, and how we can map them to the different contexts so that agents learn to settle dilemmas using the correct, expected preferences. Dilemmas in the same context are settled in the same way, so as to reduce the amount of required interactions with the human users.
Figure 6.1 represents these 3 steps, and details the conceptual architecture proposed in Figure 3.2. Learning agents, on the right side of the figure, still receive observations from the environment, and output actions to the environment. However, their decision and learning algorithms now integrate additional data structures and processes, and each learning agent is associated with a human user.
The exploration profiles focus on the 1st step, learning interesting actions, based on the observations of the current situation, and the multiple rewards received by the learning agent. Note that, instead of aggregating them and receiving a single scalar ∈ R, as in the previous chapter, we now send directly all judgments from judging agents, and thus
the learning agents obtain multi-objective rewards ∈ R m , where m is the number of objectives. In our case, objectives correspond to the respect of moral values, and we thus equate "moral values" with "objectives" in this chapter. The 2nd step aims at identifying whether the current situation is a dilemma, either known or unknown, by leveraging exploration profiles, and a human profile which is learned based on interactions with a human user. Finally, the 3rd step selects an action: if the situation is not a dilemma, then the learning agent takes the best action; otherwise, it learns and re-uses the human contextualized preferences to settle the dilemma and choose an action, according to human preferences. These 3 steps are detailed in the following sections.
Learning interesting actions for informed decisions
The shift from single-objective RL to multi-objective RL (MORL) brings several changes to the Q-(D)SOM algorithms presented in Chapter 4. Mainly, the interests, or Q-Values, become vectors instead of scalars, and some equations are no longer defined for this kind of input, especially for selecting and exploring actions. In addition, as we want to leverage
human preferences, we need to present a comparison measure for the different actions. Specifically, 2 problems arise, which relate to the notion of "interesting actions":
1. How to correctly learn the actions' interests and select an action, compared to others, in the exploration-exploitation dilemma?
2. How to determine whether an action perturbed by a random noise, for exploration purposes, was interesting, with respect to the previously learned action?
These 2 problems are detailed below; then, a solution involving the introduction of two distinct phases and exploration profiles is proposed and discussed.
The first problem relates to the learning of actions in each situation, and more specifically of "interesting actions", which, in this case, means an action that is a suitable alternative as part of a dilemma. This is necessary, as the ultimate goal is to make agents settle dilemmas, i.e., making a choice in a trade-off between interesting actions, based on human users' preferences. To make this decision, both from the agents' and the humans' point of view, we need to know the actions' interests. We also recall that a reinforcement learning algorithm learns interests as actions are tried in the environment. Thus, during the learning, the interests at a given step might not reflect the "true" interests of the action. For example, let us assume that we have an action a 1 with interests Q(s, a 1 ) = [0.8, 0.7]. This action seems pretty good with respect to both moral values. Another action a 2 has other interests Q(s, a 2 ) = [1, 0.3]. This action seems better on the first objective, or moral value, but worse on the second objective. We might think we are in a dilemma, as we have to choose between prioritizing the first objective, or the second, by selecting one of these 2 actions. However, let us also consider that the first action was well explored, it was selected many times by the agent, and we are quite certain that the learned interests have converged very close to the true interests. On the other hand, the second action was almost not explored, the agent only tried it once. It might be possible that the received reward was a bit exceptional, due to some circumstances that do not often happen, and that, in fact, the true interests are closer to [0.75, 0.3]. In such case, action a 2 would not be interesting, but rather dominated, in the Pareto sense, by a 1 , as Q(s, a 1 ) would be strictly greater on each dimension.
This is what we mean by "interesting" actions in this first problem: to effectively compare them and determine whether there is a trade-off to be made, we need to have correctly learned their interests. If the interests are not known, we risk considering an action as a potential candidate, whereas in fact it should not be, or conversely ignoring an actually good action. In the previous example, perhaps a_1 in fact dominates a_2, or conversely a_2 is strictly better than a_1. An action that is strictly dominated by another cannot be "interesting", as the other action would yield a better result for each moral value. Asking users for their preferences in an uncertain case like this would only bother them. Indeed, as the interests will be updated afterwards, it is unlikely the agent will have to retain the same preferences the next time it arrives in this situation. We thus need to learn the interests of all actions, as close as possible to their true interests, before we can begin identifying and settling dilemmas, and asking users for their preferences.
The second problem that emerges when considering multiple objectives is to determine whether the explored, randomly noised action is "interesting", i.e., yields better interests than its original action. Indeed, we recall that, in the Q-SOM and Q-DSOM algorithms, in order to explore the actions space, a proposed action is first identified from the Q-Table. The action's parameters are taken from the prototype vector of the associated neuron in the Action-(D)SOM. Then, to explore and potentially find an even better action, a random noise is applied on these parameters. The explored, or "perturbed", action is thus enacted in the environment, and the agent receives an associated reward. If this perturbed action was better than the proposed, learned action, the agent should update the Action-(D)SOM: to determine this, we used Equation (4.5), which we recall below:
r_t + γ max_{j'} Q(s_{t+1}, j') >? Q(s_t, j)
where j is the index of the proposed action. Basically, this equation means that if the sum of the received reward and the discounted maximum interest obtainable by taking an action in the new, resulting state is higher than the learned interest of the proposed action, then the perturbed action is better than the proposed one, and the Action-(D)SOM should be updated towards the perturbed action.
However, this equation does not work any more in a multi-objective setting. Indeed, we replaced the 2-dimensional Q-Table by a 3-dimensional table, where the 3rd dimension is the moral value. In other words, we previously had Q(s, a) ∈ R, but we now have Q(s, a) ∈ R^m, and Q(s, a, k) ∈ R, where k is the moral value index ∈ [[1, m]], with m the number of moral values. Similarly, the reward r previously was ∈ R but is now ∈ R^m. The equation relied on taking the maximum interest and comparing 2 scalar values, which is trivial, but these relations are no longer defined in a vectorial space.
To solve these problems, we propose to change the Q-SOM and Q-DSOM algorithms, by introducing 2 distinct phases: a bootstrap phase, and a deployment phase. In the bootstrap phase, agents are tasked with learning "interesting" actions, i.e., both the
parameters that yield the best interests, and the true interests that correspond to these parameters, without focusing on dilemmas or user preferences. These, on the other hand, are focused on during the deployment phase, where the agents leverage their knowledge of "interesting" actions to identify dilemmas and learn to settle dilemmas according to contextualized user preferences.
Concerning the first problem more specifically, we propose to change the action selection mechanism, and to prioritize exploration. Indeed, since we now consider a bootstrap phase separated from the deployment, we do not need to maximize the expected sum of rewards during exploration. We can instead focus solely on exploring and learning the actions' interests. Taking inspiration from the Upper Confidence Bound method [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF], we memorize the number of times an action has been enacted, and we use this information as our new criterion. A simple and intuitive way to then ensure that all actions are correctly learned, is to always select the action with the minimum number of times enacted. Thus, at the end of the bootstrap phase, even if some actions have been less enacted, e.g., because the number of steps was not a multiple of the number of actions, we ensure that the "unfairness" is minimal. We detail in Section 6.6 several specific methods that build upon min and compare them.
The second problem, of determining whether a perturbed action is more interesting than the previously learned action, can be solved by making the formula valid again. One such way is to scalarize the vector components, but this would favour exploration in specific sub-zones of the action space. For example, if we use an average aggregation, we would consider an action interesting, i.e., better than the learned one, only if the perturbed action has higher interests on average, on all moral values. If an action has a better reward on one specific moral value, but lower rewards on all other dimensions, it will not be learned. This would prevent discovering some actions that can still be part of a trade-off. To avoid this, during the bootstrap phase, we introduce the notion of exploration profiles to the learning agents. Each exploration profile contains a vector of weights, which intuitively tells us which zone of the action space will be explored by this profile. For example, the weights of an exploration profile might be [0.9, 0.033, 0.033, 0.033], which will focus on actions that yield high interest on the first moral value. To also discover actions that yield high interest on the second moral value, and so on, we create multiple exploration profiles, thus partitioning the action space between these profiles.
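For instance, given such a weight vector ρ, checking whether a perturbed action improved upon the learned one (formalized later in Equation (6.1)) amounts to comparing scalarized values; a minimal sketch, assuming NumPy arrays and illustrative variable names:

```python
import numpy as np

def is_interesting(reward, rho, q_next_actions, q_proposed, gamma=0.9):
    """Weighted check of Eq. (6.1): is the perturbed action better than the proposed one?

    reward:         vector reward received for the perturbed action, shape (m,)
    rho:            the exploration profile's weights, shape (m,)
    q_next_actions: interests of all actions in the resulting state, shape (n_actions, m)
    q_proposed:     learned interests of the proposed action, shape (m,)
    """
    best_next = np.max(q_next_actions @ rho)      # max over j' of rho . Q(s_{t+1}, j')
    return reward @ rho + gamma * best_next > rho @ q_proposed
```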
Additionally, as different exploration profiles will learn differently, notably updating their Action-(D)SOMs at various time steps, we also place the data structures in exploration profiles.
Definition 6.1 (Exploration profile). An exploration profile p ∈ P is defined as the following data structures and functions:
• State-(D)SOM: the "map" of situations to discrete states, which contains a fixed number of neurons, and thus of possible states. We define the set of possible state identifiers as
S = [[0, • • • , |U|]],
where |U| is the number of neurons.
-States p : O → S is the function that returns a discrete state identifier for any observation vector, based on the profile p.
• Action-(D)SOM: the "map" of action identifiers to action parameters, which also contains a fixed number of neurons, and thus of possible actions. The set of possible action identifiers is
A = [[0, • • • , |W|]],
where |W| is the number of neurons. Each neuron is associated to a prototype vector, which is the action's parameters.
-Actions p : A → A l is the function that returns the action's parameters for a learning agent l from an action discrete identifier in profile p.
• Q-Table : the 3-dimensional table that learns and memorizes the interests of each (discrete) action identifier in each (discrete) state identifier. The interests are themselves a vector, indexed by the moral values.
-Q p : S × A → R m is the function that returns these interests for an action a in profile p, in a state s, where m is the number of moral values.
• ρ : the weights used to scalarize rewards and interests to determine whether a perturbed action is interesting, a vector ∈ R m .
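In code, such an exploration profile could be represented roughly as follows; this is a simplified sketch, and the best_matching_unit and prototype methods are placeholders for the (D)SOM operations of Chapter 4:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ExplorationProfile:
    state_som: object       # State-(D)SOM: maps observation vectors to discrete state ids
    action_som: object      # Action-(D)SOM: maps discrete action ids to parameter vectors
    q_table: np.ndarray     # shape (n_states, n_actions, m): per-moral-value interests
    rho: np.ndarray         # shape (m,): weights used to scalarize rewards and interests

    def states(self, observation: np.ndarray) -> int:
        """Discrete state identifier for an observation (Best Matching Unit of the State-(D)SOM)."""
        return self.state_som.best_matching_unit(observation)

    def actions(self, action_id: int) -> np.ndarray:
        """Action parameters for a discrete action identifier (prototype of the Action-(D)SOM neuron)."""
        return self.action_som.prototype(action_id)
```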
We note that, contrary to existing approaches that use scalarization at some point, our exploration profiles are only used to explore, and never to choose an action. This is a crucial difference; we want to use the human users' preferences to select actions. The exploration profiles are combined during the deployment phase so that agents have at their disposal all actions learned in the different sub-zones of the action space. We note an exploration profile's vector of weights as ρ; the formula that determines whether a perturbed action is interesting thus becomes:
r_t · ρ + γ max_{j'} (ρ · Q(s_{t+1}, j')) >? ρ · Q(s_t, j)    (6.1)
where · denotes the dot product between 2 vectors, i.e., x · y = x_1 y_1 + x_2 y_2 + ⋯ + x_n y_n. We propose 5 different exploration profiles: 4 of them each focus on a different moral value, among those presented in Section 5.4, while the last one focuses on learning actions that yield interests "good on average". The generalist exploration profile is simply set to [1/m, 1/m, 1/m, 1/m], where m = 4 is the number of moral values. Thus, this profile considers all moral values equivalently. For the other, specialized profiles, we propose to use a weight of 0.9 for their specific moral value, and a weight of 0.1/(m − 1) ≈ 0.033 for all other objectives. For example, let us consider the previously learned interests Q(s, j) = [0.8, 0.3, 0.3, 0.3], and the received reward for the perturbed action r = [0.8, 0.4, 0.4, 0.4]. To simplify, let us ignore the max_{j'} part of the equation, and instead focus on comparing the reward with the previously learned interests. The perturbed action thus did not manage to improve the interest on the first moral value, but did improve on the other objectives. Had we used a weight of 0 on these other objectives, we would ignore this improvement and determine that the perturbed action is not interesting, which would be counter-intuitive and counter-productive. We use a very low weight on these other objectives, so that an eventual decrease on the targeted moral value cannot be compensated by an improvement on the other dimensions. Other exploration profiles, such as [0.75, 0.25, 0, 0], could also be used, and we detail in Section 6.8 perspectives on this topic. Note that, in addition, the Bellman equation must be adapted as well, where k is used to iterate on the various moral values:
∀k ∈ [[1, m]] : Q_{t+1}(s_t, a_t, k) ← α [r_{t,k} + γ max_{a',ρ} Q_t(s_{t+1}, a', k)] + (1 − α) Q_t(s_t, a_t, k)    (6.2)
The Q-Value was previously a scalar, updated by adding the reward, which was also a scalar; they are now both vectors. We adapt the Equation (2.1) presented earlier by simply using element-wise addition of vectors. In other words, the first dimension of the Q-Value is updated by taking into account the first dimension of the reward, and so on. We also need to obtain the interest of the next state, which was computed as the maximum Q-Value of any action in the next state s_{t+1}. As we previously mentioned when adapting the "interesting" criterion, the max operator is not defined when comparing vectors: we thus propose to use the interests of the action that maximizes the dot product with the exploration weights ρ.
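A sketch of this adapted update (Equation (6.2)), where the next-state action is the one maximizing the dot product with the profile's weights; again, the names and array shapes are illustrative assumptions:

```python
import numpy as np

def update_q_values(q_table, s, a, reward, s_next, rho, alpha=0.1, gamma=0.9):
    """Multi-objective Q update (Eq. 6.2), applied element-wise over the m moral values.

    q_table: array of shape (n_states, n_actions, m)
    reward:  vector reward received at this step, shape (m,)
    """
    # Action in s_next maximizing the scalarized interest rho . Q(s_next, a')
    a_best = int(np.argmax(q_table[s_next] @ rho))
    target = reward + gamma * q_table[s_next, a_best]            # vector of shape (m,)
    q_table[s, a] = alpha * target + (1 - alpha) * q_table[s, a]
```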
Identifying dilemmas
Once the interesting actions have been learned, they can be leveraged to identify dilemma situations when the agent is deployed. First, exploration profiles are merged into agents, and "frozen", i.e., their data structures, especially actions, are not learned any more. When deployed, the learning agents will, at each step, compare the actions proposed by each exploration profile for the current situation, represented by the received observations from the environment. To choose the action to execute, the agent first needs to determine whether there is a dilemma. If there is, it cannot directly choose an action, and must rely on the human user's preferences; otherwise, there is no dilemma, thus the best action can be clearly defined, and the agent simply selects this action.
We pose a few definitions to formalize the dilemma identification process. To better explain our reasoning behind the algorithm we propose, we start from a naïve application of an existing definition of dilemma, and point out the problems that arise and how we overcame them. Let us recall that a situation is described by a vector of observations o ∈ O, according to the DecPOMDP framework described in Definition 4.1.
We start by adapting a definition of dilemma proposed by Bonnemains (2019):
A situation is considered as a dilemma if there is at least two possible decisions, and every decision is unsatisfactory either by nature or by the consequences.
We refer the interested reader to Bonnemains' thesis for the formal definition. However, Bonnemains used a symbolic model, whereas our actions are defined in terms of continuous interests and parameters. Thus, we adapt it to better fit our conditions:

Definition 6.2 (Naïve dilemma). A situation is considered as a dilemma if, among the proposed actions, all actions are unsatisfactory. An action is unsatisfactory if there is another action with a higher interest on at least one dimension.
This definition takes into account the fact that actions have continuous interests, and we always have a choice between at least 2 actions, provided that the number of neurons in the Action-(D)SOM is greater than or equal to 2, or that there are at least 2 exploration profiles. It echoes the notion of regret also mentioned by Bonnemains. For example, considering an action a_1 with interests Q(s, a_1) = [0.5, 0.5] and another action a_2 with interests Q(s, a_2) = [1, 0], we can say that taking action a_1 would result in a regret with respect to the first moral value, as a_2 would have yielded a better result. Conversely, taking a_2 would result in a regret with respect to the second moral value, as a_1 would have yielded a better result. Thus, for each of these actions, there is "another action with a higher interest on at least one dimension": if these are the only possible actions, the situation is a dilemma.
To formalize this definition, we note that it is in line with the notion of Pareto Front (PF), which is a well-known tool in multi-objective reinforcement learning. We first define the Pareto-dominance operator:
x >_{Pareto} y ⇔ (∀i : x_i ≥ y_i) ∧ (∃j : x_j > y_j)    (6.3)
In other words, a vector Pareto-dominates another if all its dimensions are at least equal, and there is at least one dimension on which the vector has a strictly superior value.
Applying it to our problem of identifying dilemmas, we can compute the Pareto Front (PF), which is the set of all actions which are not Pareto-dominated by another action:
PF(o) = { (p, a) ∈ (P, A) | ∄ (p', a') ∈ (P, A) : Q_{p'}(States_{p'}(o), a') >_{Pareto} Q_p(States_p(o), a) }    (6.4)

Note that we compare all actions from all profiles together to compute this Pareto Front (PF). An action is unsatisfactory if there is another action with higher interests on at least one moral value: in other words, if it does not dominate all other actions. Actions not in the PF are dominated by those in the PF and are unsatisfactory. However, actions in the Pareto Front cannot dominate each other, by definition. Thus, if the PF contains more than 1 action, it means that, for each action, there exists in the PF another action with a higher interest on at least one dimension, and thus the situation is a dilemma.
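The Pareto Front of Equation (6.4) can be computed by a simple double loop over the candidate (profile, action) pairs; the following sketch assumes the per-pair interest vectors have already been gathered:

```python
import numpy as np

def pareto_dominates(x, y):
    """x Pareto-dominates y (Eq. 6.3): >= on every dimension and > on at least one."""
    return bool(np.all(x >= y) and np.any(x > y))

def pareto_front(candidates):
    """Return the non-dominated candidates.

    candidates: list of ((profile, action_id), interests) pairs, where
                interests is the vector Q_p(States_p(o), a) of shape (m,).
    """
    front = []
    for key, interests in candidates:
        dominated = any(pareto_dominates(other, interests)
                        for other_key, other in candidates if other_key != key)
        if not dominated:
            front.append((key, interests))
    return front
```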
However, first experiments have demonstrated that this definition was not well-suited, because there was not a single situation with only 1 action in the PF. This is due to 3 reasons: 1) As the interests are continuous, it is possible to have an action which is nondominated by only a small margin, e.g., compare [0.9, 0.9, 0.9, 0.9] to [0.900001, 0, 0, 0]. Clearly the first action seems better, nonetheless, because of the small difference on the first dimension, it does not dominate the second action. 2) We explore from different sub-zones of the action space, and we combine different exploration profiles which all search for different interests, e.g., one exploration profile tries to obtain actions with high interests on the first dimension, whereas another profile tries to obtain actions with high interests on the second dimension. Thus, it seems natural to obtain actions that cannot dominate each other. 3) We do not impose a limit on the number of moral values: we use 4 in our Smart Grid use-case, but we could theoretically implement dozens. To
dominate another action, an action needs first and foremost to be at least equally high on all dimensions: if we increase the number of dimensions, the probability of finding at least one dimension for which this is not the case will increase as well, thus preventing the second action from being dominated, and adding it to the PF. Thus, the first and rather naive approach does not work in our case; we extend and improve this approach, by keeping the ideas of Pareto-dominating and Pareto Front, but adding new definitions and data structures, namely ethical thresholds and theoretical interests. First, we build upon one of our general assumptions with respect to the Socio-Technical System: ethics comes from humans. Thus, to determine whether an action is acceptable, from an ethical point of view, we choose to rely on the "source of truth" for ethics, i.e., the human users, and we introduce an ethical threshold, reflecting their judgment of the point at which an action satisfies a given moral objective or value. The ethical threshold is set by a human user for each learning agent, and is used as a target for the actions' interests. Moreover, we note that learned actions' interests may be different between the various moral values. This is partially due to the fact that some moral values may be harder to learn than others, or because of the way the reward function (or judgment) for this specific value is designed. Human users may have different requirements for these moral values; for example, someone who is not an ecologist might accept actions with a lower interest for the environmental sustainability value than for the inclusiveness value. For these 2 reasons, we choose to have a multi-objective ethical threshold: in other words, the thresholds for each moral value may differ. Each component ζ_i of the ethical threshold ζ, ∀i ∈ [[1, m]], can be read as a threshold between 0% and 100%, relative to the interest associated with moral value i.
Note that we define the ethical thresholds as values between 0 and 1. This is indeed, we argue, rather intuitive for the human users, and easy to understand: 0 represents a completely uninteresting action, whereas 1 represents a perfectly interesting action. However, this poses a problem with the actual actions' interests: they are updated at each time step using the modified Bellman equation (6.2), which leaves them unbounded. Interests depend both on the action's correctness, and on the number of times it has been explored. In other words, an action with a low interest could be explained either by its non-compliance with the moral values, or because it was not often explored. For example, running a simulation for 5,000 steps could yield an interest of 6, whereas running the simulation for 10,000 steps could yield an interest of 11. These absolute interests are not comparable, especially by lay users, as there is no reference: what does 6 mean? Is it good? We thus propose to introduce an anchor, or point of comparison, in the form of a theoretical interest. The theoretical interests are computed using the same Bellman equation as the interests, with a small difference: we assume the received reward was the maximum possible, as if the action was "perfect".
Theoretical interests are updated at the same time as the interests, and thus grow similarly. If the action is poorly judged, its interests will be lower than the theoretical interests; if the action is judged as adequate, its interests will converge close to the theoretical ones. The number of steps impacts interests and theoretical interests exactly in the same manner: thus, the ratio between the two can be considered time-independent. E.g., if we train actions for only 5,000 steps and get an effective interest of 6, the theoretical interests will reflect the fact that the maximum will be near 7; thus, an action with a ratio of 6/7 can be considered as quite good. On the other hand, if we train actions for 10,000 steps but still get an effective interest of 6, the maximum indicated by theoretical interests will be near 11, and an action with a ratio of 6/11 will be considered as less satisfactory. We therefore offer a reference to compare unambiguously actions' interests that does not depend on the number of steps. To compute and memorize the theoretical interests, we introduce a new data structure to the agents' exploration profiles in the bootstrap phase, which we name the Q-theoretical table. As its name indicates, it is very similar to the Q-Table for interests; the only difference is the update equation, as we mentioned earlier. In the sequel, we assume the maximum reward to be 1; the equation for updating theoretical interests is thus:
Q^{theory}_{t+1}(s_t, a_t) ← α [1 + γ max_{a'} Q^{theory}_t(s_{t+1}, a')] + (1 − α) Q^{theory}_t(s_t, a_t)    (6.5)
Note that, to simplify, we did not consider the multi-objective aspect in this equation. The actual formula adds a third dimension to the Q-theoretical table, but the update formula stays the same, as we use 1 as the "theoretical reward", regardless of the moral value.

Learning contextualized user preferences

When a dilemma is identified, it cannot be settled by the agent alone: ethics comes from the human users, and we thus ask for their preferences. However, asking for preferences adds mental charge to the humans: it is not a trivial task. If we ask too often, it might become a burden, and the system becomes unusable: one of the goals of implementing a system of artificial agents is to automate some of our tasks, so as to relieve us; if the system asks us for the correct decision at each step, this completely negates the benefits.
To avoid this, we want to learn the human preferences, so that artificial agents will solicit humans less often, while still exhibiting a behaviour that corresponds to the human user's preferences. Learning the correct preferences for dilemmas could be as simple as maintaining a map of dilemmas to preferences; yet, we recall that we are in continuous domains. In particular, the situation in which a dilemma occurs is represented by an observation vector, which is composed of continuous values. Thus, even a small difference in only one of the observation's components would yield, strictly speaking, a different dilemma. This seems counter-intuitive: surely, not all dilemmas are unique? We may consider that some conflicts in various situations correspond to similar dilemmas. For instance, a choice of consumption at 2:00 AM or 2:01 AM can be considered close, and settled equivalently. Yet, a choice between the same actions, with the same conflicts between moral values, may constitute a different dilemma when occurring at 7:00 AM, i.e., the choice may be different. To reduce the burden on human users, agents could "group" dilemmas that are close; we thus need a way of grouping them. To do so, we propose the notion of context, which we vaguely define, for now, as such: "A context is a group of dilemmas identified as being similar, such that they can be settled in the same manner, i.e., with the same action selection".
We now need to propose a formal definition for these contexts, such that artificial agents can automatically identify them and group dilemmas by contexts. This is what we first describe in this section. Then, we explain how the unknown dilemmas are presented to human users to create new contexts when asking for their preferences, and how the preferences are learned. As previously, we start with a naïve definition to highlight its issues, and explain the reasoning behind the actual definition.
First, we can notice that contexts are somewhat to dilemmas what discretized states are to observations. A simple and intuitive idea can be to leverage the notion of states to define a context: if dilemmas appear in the same state, they may belong to the same context. However, agents now use several exploration profiles when deployed, and each of the profiles has its own State-(D)SOM: in other words, each exploration profile discretizes observations into states differently, even though they receive the same observation vectors. For example, profile p_1 might say that the current situation corresponds to one state, whereas profile p_2 might map it to a different one. A naïve definition of a context is thus the list of the discrete states returned by each exploration profile: two dilemmas belong to the same context if they yield the same list. This simple definition has the advantage of simplicity, and the guarantee that dilemmas in a same context have the same actions, as we mentioned. However, earlier experiments have demonstrated a few flaws to this simple definition. Indeed, with 5 exploration profiles, and thus lists of 5 elements, the probability that at least one of these elements differs is quite high, and increases with the number of possible discrete states, i.e., the number of neurons in the State-(D)SOM. In one of our experiments, over 10,000 steps and using 144 neurons in the State-SOM, we obtained 1,826 unique lists: there is still a 5× decrease factor, which is interesting for such a simple definition, but it is not enough. Asking the users 1,826 times in a simulation seems way too much in our opinion. The main problem is that the vast majority of lists appear only a few times: 55% of the lists appear exactly 1 time, only 45% appear more than 2 times, 30% more than 3 times, 10% more than 10 times, etc.
Another attempt tried to leverage the AING [START_REF] Bouguelia | An Adaptive Incremental Clustering Method Based on the Growing Neural Gas Algorithm[END_REF] algorithm to automatically group dilemmas through the distance, in the observation space, between the situations they occur in. AING is a clustering algorithm, based on the Growing Neural Gas idea [START_REF] Fritzke | A growing neural gas network learns topologies[END_REF]: at each time step, a data point, i.e., a dilemma in our case, is presented to the neural gas. The algorithm compares the point with the existing neurons: if it is sufficiently close to one of the neurons, according to an automatically determined
threshold, the data point is associated with this neuron, and the neuron is updated slightly towards the data point. If no neuron is sufficiently close, a new one is created at the exact position of the data point: this mechanism represents the creation of a new context in our case, when existing ones do not suffice to describe the current situation. However, it failed to work, perhaps because dilemmas appear in a seemingly random order: we may have first a dilemma in the bottom-left quadrant, and then another in the upper-right quadrant, and so on. Thus, the algorithm creates lots of neurons because all these dilemmas seem so far away from each other. When finally a dilemma appears that seems close to another one, there are so many neurons around that the algorithm computes an infeasible distance threshold, and this dilemma is therefore also assigned to a new neuron. We could have tweaked the distance formula to force the creation of fewer neurons; however, we feared that this may artificially create a topology that does not initially exist, thus making our agents learn "garbage" contexts. Another disadvantage is that neurons use a single distance threshold, relative to their center, which makes a sphere of attraction around their prototype. However, we have no certainty that the bounds between any two different contexts can be represented by such spheres. Perhaps, in some cases, we might have an abrupt bound, on a specific dimension, because at this specific point, there is a clear change of contexts for human users. In other words, the same distance traveled does not have the same significance on two different dimensions.
A one-minute shift between 2:00 and 2:01 may not change the context, but a 1% shift in comfort between 49% and 50% could change the context, as seen by the user.
Reflecting on this, we thought that contexts exist in the human eye. We thus propose to leverage human users to identify contexts, instead of a purely automated tool. An additional advantage of this approach is that humans can bring their own baggage, and thus their own mental state, to the identification of contexts, which the artificial agents do not have access to. More specifically, we define a context as a set of bounds, i.e., minimal and maximal values, for each dimension of the observation vector. This is a quite simple definition that still allows for arbitrary bounds around the context.
An interface is created to allow human users to choose the bounds for each non-recognized dilemma, thus creating a new context. When a dilemma is identified by a learning agent, the dilemma's situation is compared to the bounds of known contexts: if the situation is entirely inside the bounds of a given context, we say that this context "recognizes" the dilemma, and we apply the action that was learned for this context. In other words, the dilemma belongs to the context.
Formally, a context is defined by a vector of lower bounds b and a vector of upper bounds B over the observation space; it recognizes a dilemma occurring in a situation described by the observations o_l if and only if ∀k ∈ [[1, g]] : b_k ≤ o_{l,k} ≤ B_k, where g is the number of dimensions of the observation space.
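In code, a context can thus be stored as its two bound vectors together with the action chosen by the user for it; a minimal sketch with illustrative attribute names:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Context:
    lower: np.ndarray   # b: lower bounds, one per observation dimension, shape (g,)
    upper: np.ndarray   # B: upper bounds, shape (g,)
    action: object      # the action chosen by the user for dilemmas in this context

    def recognizes(self, observation: np.ndarray) -> bool:
        """The context recognizes a dilemma iff the observation lies within its bounds on every dimension."""
        return bool(np.all(self.lower <= observation) and np.all(observation <= self.upper))
```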
Now that we can define contexts, we can learn the users' preferences in each context about which action should be taken in dilemmas that are recognized by this context. We also propose to filter actions by comparing their parameters and removing those with similar parameters, as they would have a similar effect on the environment. This allows reducing the number of actions proposed to the user. Indeed, because of the way we combine exploration profiles and the use of Pareto-domination, we end up with many proposed actions that do not dominate each other. When simply merging the profiles together, and taking the Pareto Front of all proposed actions, we obtained most of the time between 4 and 16 optimal actions in the PF. Although it might not be difficult to choose an action among 4, the cognitive load seems too high when we have to compare 16 actions. For each pair of actions among the PF, if, for every dimension, their parameters differ by less than a given threshold, we consider the actions to be equivalent, and we only retain one of the two. This threshold can be an absolute or relative value, and we detail in our experiments and results the choice of this threshold, and the resulting number of actions. To give an order of magnitude, we managed to reduce from maximum 16-20 actions to about 4-6 actions each step, while using what seemed to us reasonable thresholds.
Remark. Let us emphasize that we only compare actions on their parameters. Indeed, actions are sometimes similar on their interests as well; however, we argue it would not be a good idea to remove actions that have similar interests, if they have different parameters. This is precisely what we want to avoid, by giving the control back to the user and asking them their preferences: to make the trade-offs explicit. Using the interests is a great pre-filtering tool: if an action is clearly better than another, we do not want to keep the second one, as the first one was, in all likelihood, judged better with respect to the moral values, and thus would have a better impact. This is why we use the interests to compute the Pareto Front. Once we have the actions that cannot be compared to each other, because none of them dominates any other, i.e., the Pareto Front, we need to focus on the parameters instead. If an action proposes to consume 600Wh while another proposes to consume 590Wh, the difference is perhaps not that important, and we can
remove one of them. On the contrary, if two actions have almost equal interests, but different parameters, this means there is a compromise to be made: although they would have a comparable impact in terms of rewards, by their very nature they are different.
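This parameter-based filtering can be sketched as follows, here with the 3% threshold used in the deployment algorithm; whether the threshold is absolute or relative is a design choice, and the absolute variant is shown for simplicity:

```python
import numpy as np

def filter_similar_actions(action_params, threshold=0.03):
    """Keep only one representative of each group of actions with near-identical parameters.

    action_params: list of parameter vectors (np.ndarray of shape (p,)) from the Pareto Front.
    Two actions are considered equivalent if their parameters differ by less than
    `threshold` on every dimension.
    """
    kept = []
    for params in action_params:
        too_close = any(np.all(np.abs(params - other) < threshold) for other in kept)
        if not too_close:
            kept.append(params)
    return kept
```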
With the number of proposed actions reduced, and thus more manageable for lay users, we present these alternatives to the users when a dilemma occurs, and they accordingly define a new context. The actions are compared both on their interests and their parameters, so that users may make an informed decision. The association between contexts and chosen action, i.e., user's preferences, is simply memorized by an associative table.
Summarizing the algorithm
In this section, we summarize the 3 previous steps, which are formally described in 2 algorithms, one for the bootstrap phase, and another for the deployment phase:
1. Bootstrap phase (Algorithm 6.1): learning interesting actions
2. Deployment phase (Algorithm 6.2):
   1. Identifying dilemmas in situations
   2. Learning contextualized user preferences
Bootstrap phase
We recall that, in the first phase, we introduce new exploration profiles: each agent learns a separate profile. Each exploration profile contains a State-(D)SOM, an Action-(D)SOM, a Q-Table, and a new Q-theoretical table, which they learn throughout the time steps by receiving observations, selecting actions to explore them, and receiving rewards. Algorithm 6.1 describes the process that takes place during the bootstrap phase, although in a pseudo-algorithm manner, where some functions, such as the random perturbation (noise) over actions or the update of (D)SOMs, are left out, as they have been previously described in Chapter 4.
The bootstrap phase is set for a fixed number T of steps (line 1). At each time step t, learning agents receive observations (line 4), and determine the state hypothesis as in the Q-SOM and Q-DSOM algorithms (line 5), by leveraging the State-SOM and finding the Best Matching Unit. They select actions based on the number of times they were already chosen (lines 6 and 7), and noise the action parameters to explore the action space (line 8). The selected action is then executed in the environment (line 9). Learning agents receive the new observations and the reward (lines 12 and 13), and use them to update their data structures (State-(D)SOM, Action-(D)SOM, Q-Tables), similarly to the Q-SOM and Q-DSOM algorithms (lines 14-21). However, this updated algorithm differs on 2 aspects:
1) The perturbed action is deemed interesting or not based on the exploration profile's weights and Equation (6.1) (line 16).

2) In addition to the Q-Values, which are updated with the vectorial Bellman equation (line 19), the theoretical interests of the Q-theoretical table are updated as well, using the maximum reward of 1, following Equation (6.5) (line 20):

∀u ∈ U, ∀w ∈ W : Q(u, w) ← α_Q ψ_U(u) ψ_W(w) [r + γ max_{j'} Q(i', j') − Q(u, w)] + Q(u, w)

∀u ∈ U, ∀w ∈ W : Q^{theory}(u, w) ← α_Q ψ_U(u) ψ_W(w) [1 + γ max_{j'} Q^{theory}(i', j') − Q^{theory}(u, w)] + Q^{theory}(u, w)
Deployment phase
On the other hand, Algorithm 6.2 describes the process in the deployment phase. Note that the algorithm considers a fixed number of time steps T (line 2), but in practice, nothing prevents setting T = ∞. At each time step, learning agents receive observations (line 3). To select an action, agents compute the Pareto Front (PF) of optimal actions (line 4), as described in Equation (6.4). They then determine whether each of them is an acceptable action, according to Definition 6.4, leveraging the ethical thresholds ζ set by the human user (line 5). If at least 1 acceptable action can be found, the situation is not a dilemma, and the agent automatically selects one of the acceptable actions (lines 7-9). We propose to choose based on the sum of interests per action, so that the selected action has the best impact on the environment (line 8). Otherwise, if no acceptable action can be found, the situation is a dilemma (lines 10-26), and the agent looks for a context that would recognize this dilemma, as defined in Definition 6.6. This means that, for each context, and for each dimension k of the observation space, we check whether the observation o_{l,t,k} is within the bounds c[b_k] and c[B_k] defined by the context (line 13). When such a context is found, the action that was associated with this context by the user is selected. In the case where no context corresponds (lines 18-24), an interface asks the user which action should be enacted, and for the definition of a context in which the same action should be selected again. We filter the proposed actions to remove those which have too similar parameters (lines 20-21): if another action is found with parameters differing by less than 3% on each dimension, the action is removed from the set of proposed actions. For example, [1, 1, 1, 1, 1, 1] and [0.99, 0.98, 0.99, 0.99, 0.99, 1] have less than 3% difference on each dimension, and thus one of them will be removed. Finally, the association of this context to the selected action is memorized (line 23), so that it may be re-used in future time steps. The selected action (whether from an existing or a new context) is executed in the environment (line 25).
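The overall deployment-time decision can be sketched as follows. Note that the acceptability test shown here, comparing the learned interests to the thresholds ζ scaled by the theoretical interests, is our reading based on the threshold definition above, and may differ slightly from the exact Definition 6.4; helper names such as ask_user and Context.recognizes are illustrative:

```python
import numpy as np

def choose_action(observation, front, zeta, contexts, ask_user):
    """Deployment-time action selection (rough sketch of Algorithm 6.2, lines 4-25).

    front:    list of (action, interests, theoretical_interests) tuples from the Pareto Front
    zeta:     ethical thresholds set by the user, shape (m,), values in [0, 1]
    contexts: list of known Context objects (bounds + action chosen by the user)
    ask_user: callback showing the filtered actions and returning (chosen_action, new_context)
    """
    # Acceptable actions: learned interests reach the thresholds, relative to theoretical interests
    acceptable = [(a, q) for (a, q, q_th) in front if np.all(q >= zeta * q_th)]
    if acceptable:
        # Not a dilemma: select the acceptable action with the best summed interests (line 8)
        return max(acceptable, key=lambda item: float(item[1].sum()))[0]
    # Dilemma: re-use the user's preference if a known context recognizes the situation (line 13)
    for context in contexts:
        if context.recognizes(observation):
            return context.action
    # Unknown dilemma: ask the user, and memorize the new context and its preferred action (lines 18-24)
    chosen_action, new_context = ask_user(observation, front)
    contexts.append(new_context)
    return chosen_action
```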
Experiments
In this section, we describe the experiments' setup we used to evaluate our contribution on the multi-objective aspect, with a focus on learning interesting actions and identifying dilemmas to, ultimately, learn the contextualized users' preferences in dilemmas. These experiments are split into 2 sets: the bootstrap phase, and the deployment phase. Note that the latter leverages the data structures, i.e., exploration profiles, learned from the former. The implementation details for some mechanisms, such as the action exploration selection during the bootstrap, which were left out in previous sections, are also detailed and compared here.
The implementation was done in the Python 3 language, so that it could be integrated easily with the Q-(D)SOM algorithms. The Graphical User Interface (GUI) used to interact with users was developed using the Tkinter library, and Ttk (Themed Tk) widgets, as it is a simple library for creating GUIs. Tkinter also has the advantages of being part of standard Python, available on most platforms (Unix, macOS, Windows), and capable of easily integrating Matplotlib plots, which are used to present actions' interests graphically.
The rewards came from argumentation-based judging agents (AJAR) as described in Section 5.4.2, using the "min" aggregation and the "simple" judgment function. They were slightly modified to avoid aggregating the reward and instead sending the vector to the RL algorithms.
Learning profiles from bootstrapping
We recall that the objective of the bootstrap phase is to learn interesting actions, in all situations. To do so, we introduced exploration profiles that include a weight vector for scalarizing rewards and interests, so as to focus on different sub-zones of the action space. Agents need to explore the actions so that they can be compared fairly: they do not yet need to exploit at this point, as the bootstrap phase is separated from the deployment phase. Thus, the action selection mechanism has been modified with a focus on the number of times they have been selected, instead of their interests.
Action selection
Initially, we chose to simply select actions randomly: according to the law of large numbers, every action having the same probability of being drawn, they should all be drawn a similar number of times as the number of draws increases. This simple method also avoids memorizing the number of times each action is selected in each state, which saves computational resources, especially memory. However, first experiments have shown empirically that, in some states, the distribution of actions' selection was not uniform. States that were more visited had higher standard deviations of the number of times each action was chosen. This is a problem, as we want the actions' interests to be fairly comparable, so that they can be shown to users later, and they can make an informed choice. Whereas it was part of the exploitation-exploration dilemma in the Q-(D)SOM algorithms, we now want the actions to be fairly compared by having been selected, and thus explored, roughly the same number of times. To reduce the standard deviation of actions' selection within states, and thus achieve fairer comparisons, a first improvement was to replace the random method with a min method, by memorizing N_{l,s,a}, the number of times an agent l has chosen action a in state s, and by always choosing the action with the minimum N_{l,s,a}. This method indeed managed to reduce the disparities within a state. There was still a disparity between states, because some states are more rarely visited than others, but this stems from the environment dynamics, on which we cannot act. Within a given state, all actions were chosen exactly the same number of times, with a margin of ±1. Indeed, if a state is visited a total number of times that is not a multiple of the number of actions, it is impossible to select every action the same number of times. Yet, by construction, the min method does not allow the difference to be higher than 1: if all actions but one have been selected x times, and the last one x − 1, min will necessarily choose this one. At the next step, all actions have therefore been selected x times, any of them can be chosen, thus resulting in one action selected x + 1 times and all others x times. If this state is never visited any more after that, we cannot compensate this difference and end up with a margin of 1 between actions. We computed the standard deviation for each state, and then compared a few statistics: when using the random method, standard deviations ranged from 0.7 to 6.6, with an average of 2.4, whereas, when using the min method, they ranged from 0 to 0.5, with an average of 0.4.
The min method thus offers more theoretical guarantees than the random method, at the expense of higher computational needs. However, there is still a slight bias in this method, due to an implementation detail of the argmin method that returns the action a with the lowest N_{l,s,a}: when multiple actions are candidates, i.e., have the same lowest N_{l,s,a}, argmin returns the first one. Thus, the order of action selection will always be a_1, a_2, a_3, a_1, a_2, a_3, … for 3 actions in a given state. If the state is not visited a number of steps that is a multiple of the number of actions, the first actions will be advantaged compared to the others, e.g., a_1, a_2, a_3, a_1. This is only a small bias, as the difference will only be 1 in the worst case, but still, we can wonder why we should always choose the first actions; perhaps other ones would be better. We propose to use the "min+random" method, which consists in taking a random action among those which have the lowest N_{l,s,a} only. It has, again by construction, the same guarantees as the min method, but is not biased towards the first actions.
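A minimal sketch of this "min+random" selection, breaking ties uniformly at random among the least-selected actions (array shapes are illustrative):

```python
import numpy as np

def select_action_min_random(counts, state, rng=None):
    """Select the least-enacted action in `state`, breaking ties uniformly at random.

    counts: array of shape (n_states, n_actions) memorizing the N_{l,s,a} counters.
    """
    if rng is None:
        rng = np.random.default_rng()
    state_counts = counts[state]
    candidates = np.flatnonzero(state_counts == state_counts.min())
    action = int(rng.choice(candidates))
    counts[state, action] += 1
    return action
```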
Agents and exploration profiles
As mentioned in the previous sections, we propose to use a "generalist" profile granting equal importance to all moral values, and several "specialized" profiles that each focus on the compliance with a particular moral value. Concretely, the weight vectors of the profiles are:
• ρ_1 = [0.25, 0.25, 0.25, 0.25]
• ρ_2 = [0.9, 0.033, 0.033, 0.033]
• ρ_3 = [0.033, 0.9, 0.033, 0.033]
• ρ_4 = [0.033, 0.033, 0.9, 0.033]
• ρ_5 = [0.033, 0.033, 0.033, 0.9]

Each of these exploration profiles is learned by a separate learning agent. In addition, as the agents' learned actions will be used in a deployment phase, where many agents will impact the same environment, we added several other learning agents, that do not learn exploration profiles but rather try to optimize their behaviour as previously, using the Q-SOM algorithm. Thus, the exploration profiles are learned in a quite realistic scenario.
We ran 1 simulation using the annual consumption profile, with our 5 "exploration profile" agents, and 26 other agents (20 Households, 5 Offices, 1 School). At the end of the simulation, the exploration profiles were exported to be reused later, and particularly in the deployed experiments.
Deployment phase
Once the exploration profiles were learned, we created new agents that contain the 5 previously learned exploration profiles, instead of a single State-(D)SOM and Action-(D)SOM of their own. There are 2 goals for the experiments of this phase:
1. Show a proof-of-concept interface that is usable by human users to learn the contexts and preferences.
2. Demonstrate that agents learn behaviours that are aligned with the given preferences over moral values.
User interaction through a graphical interface
In order to allow users to interact with the learning agents, we created a prototype Graphical User Interface (GUI), which is used when the agent detects a dilemma in an unknown (new) context. We recall that, from the presented algorithm, the agent asks the user for the solution of the dilemma, i.e., which action should be chosen, and the definition of a context in which other dilemmas will be considered similar, i.e., the lower and upper bounds.
The GUI thus presents the current situation, and a set of sliders so that the user can set up the lower and upper bounds, for each dimension of the observation space. This is depicted in Figure 6.2.
Remark. Note that the environment simulator and the learning algorithms manipulate vectors of values ∈ [0, 1]^g. This indeed makes representations easier to learn: if a dimension exhibits higher absolute values than the others, it might become (falsely) preponderant in the decision process. Having all dimensions in [0, 1] mitigates this. However, human users do not have this requirement; on the contrary, it is sometimes more difficult to understand what such a value means, e.g., for the time (hour). We understand immediately what 3 o'clock means, but 3/24 = 0.125 is not immediate. Thus, we transformed the value of the "hour" dimension to use a 24-hour format rather than the machine-oriented [0, 1] format. Similar transformations could be applied to other dimensions as well, by injecting a priori knowledge from the designers.
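To make this concrete, a minimal sketch of such display transforms is given below; the dimension names and conversion functions are illustrative assumptions, not the exact ones used in the simulator.

```python
# Hypothetical per-dimension transforms from the normalized [0, 1] observation
# space to human-readable values, applied only when rendering the GUI.
display_transforms = {
    "hour": lambda v: f"{round(v * 24) % 24}:00",           # 0.125 -> "3:00"
    "personal_storage": lambda v: f"{v * 100:.0f}% full",   # ratio -> percentage
}

def to_display(observation: dict) -> dict:
    """Return a copy of the observation with human-readable values where a
    transform is defined; other dimensions are kept in their [0, 1] format."""
    return {
        name: display_transforms.get(name, lambda x: x)(value)
        for name, value in observation.items()
    }

print(to_display({"hour": 0.125, "personal_storage": 0.4, "comfort": 0.8}))
# {'hour': '3:00', 'personal_storage': '40% full', 'comfort': 0.8}
```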
The user must select both the context bounds and the action that should be taken when this context is identified, in any order. The bounds' tab is initially shown, but the user may change the tab freely. We note that it may also be interesting to first ask the user the dilemma's solution before asking for the bounds, as the conflicts between values and actions might guide their ethical reasoning. To do so, the interface presents the different available alternatives, i.e., the optimal actions inside the Pareto Front (PF), after filtering the actions that are too close in parameters. To describe the actions, we choose to plot all actions' respective parameters on the same plot, as several histograms, such that users can compare them quickly. To improve the visualization, we also compute and plot the mean of each parameter for the proposed actions: this allows distinguishing and comparing actions. Figure 6.3 gives an example of these plots. The first two actions, ID = 0 and ID = 1, can be compared, e.g., as follows: "action #0 proposes to consume from the grid twice as much energy as the mean of proposed actions; action #1, on the contrary, consumes from the grid less than the average". Similarly, we also plot and compare the actions' respective interests, in a separate tab. These interests correspond to the expected satisfaction of each moral value, for every proposed action. As for the parameters, we show the mean on each dimension to facilitate comparison, and, in addition, we also show the theoretical maximum interest.

As we did not have the resources for a panel of users, I performed this experiment myself, placing myself as a user of the system. Thus, the experiment is somewhat limited, with only 1 single agent, and we cannot determine how usable the interface is for non-expert users; however, this experiment still proves the ability to interact with agents, and how agents can learn the contexts. One important aspect is the number of identified contexts, as we do not want the interface to be overwhelming to users. If agents ask their users every 2 steps, the system would quickly be deemed unusable. We expect the agents to quickly identify dilemmas that they cannot settle, as they have initially no knowledge of the preferences, and to reduce the number of required interactions as the time steps increase, because more and more dilemmas will be associated with a known context.
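A minimal sketch of how the proposed alternatives could be computed is given below; it assumes that actions' interests are given as a matrix with one row per action and one column per moral value, and simply keeps the non-dominated rows (the Pareto Front).

```python
import numpy as np

def pareto_front(interests: np.ndarray) -> np.ndarray:
    """Return the indices of non-dominated actions.
    interests[i, j] is the interest (expected reward) of action i for moral value j."""
    n = interests.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            if j == i:
                continue
            # j dominates i if it is at least as good everywhere and strictly better somewhere
            if np.all(interests[j] >= interests[i]) and np.any(interests[j] > interests[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return np.array(keep)

interests = np.array([
    [0.8, 0.2, 0.5, 0.6],
    [0.4, 0.9, 0.3, 0.5],
    [0.3, 0.1, 0.2, 0.4],   # dominated by the first action
])
print(pareto_front(interests))  # [0 1]
```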
Learning behaviours aligned with contextualized preferences
The second goal of these simulations is to demonstrate that the learned behaviours are aligned with the preferences. For example, if a human user prefers ecology, represented by the environmental sustainability value, the agent should favour actions with high interests for this value, when in an identified dilemma. Again, we did not have resources for making experiments with a sufficient number of users. Thus, we propose a proxy experiment, by implementing a few synthetic profiles that are hardcoded to make a decision in a dilemma by using simple rules. We make a proof-of-concept experiment and do not assume that such profiles accurately represent the diversity of human preferences in our society, but we make the hypothesis that, if the system correctly learns behaviours that correspond to these hardcoded preferences, it will learn appropriate behaviours when in interaction with real human users.
We propose the following profiles:
• The "Flexible ecologist" represents humans who want to support the environmental sustainability value, as long as their comfort is above a certain threshold. In other words, they filter the actions for which the well-being interest is above the threshold, and take the one that maximizes the environmental sustainability interest among them. However, if no such action is proposed, then they resort to maximizing their comfort through the well-being interest. • The "Activist ecologist" is the contrary of the "flexible" one. As long as there are actions for which the environmental sustainability is above an acceptable threshold, they take the action in this sub-group that maximizes their comfort through the well-being interest. When no action satisfies this threshold, they sacrifice their comfort and take the action that maximizes environmental sustainability.
• The "Contextualized budget" focuses on the notion of contextualized preferences, and makes different choices based on the context, i.e., the current situation. When a dilemma is identified during the day, this profile chooses the action that avoids paying too much, by favouring the affordability value. On contrary, when a dilemma is identified during the night, the profile chooses to improve its comfort by maximizing the well-being value. This example echoes some contracts where the energy price is time-dependent (peak/off-peak hours), and where the price is usually cheaper at night.
Results
In this section, we present the results for the experiments described above.
Learning profiles
We first show the learned interests, from all profiles, relative to their theoretical interests, i.e., the interests that the actions they propose would have if they had received the maximum reward of 1 at each time step, on all moral values. The closer an action is to the theoretical interests, the better its impact on the environment, as judged by the reward function. Figure 6.5 shows the ratios of interests over theoretical interests. From this figure, we note two important results. First, actions manage to attain high interests, w.r.t. the theoretical ones, on all moral values, although some can be more difficult to satisfy than others, particularly the environmental sustainability in our case. Thus, we demonstrate that the proposed algorithm, during the new bootstrap phase, can learn "interesting" actions, which was one of our goals for the experiments. Yet, we see that the satisfaction of the moral values is not learned equally well. The second point is that, in order to select the ethical thresholds, having prior and expert knowledge about the learned interests might be an advantage. Providing human users with this knowledge could help them to choose appropriate ethical thresholds. On the other hand, a potential risk is that it would encourage users to lower their expectations too much, and reduce the desirable to the automatable, by settling for what the machine can do.
Fig. 6.5.: Ratio between learned interests and theoretical interests, for all actions in all states, from all exploration profiles.

As we mentioned in the previous sections, it is important that actions are selected a similar number of times, to ensure a fair comparison, before presenting the alternatives to human users. Otherwise, we would risk proposing two actions that are in fact incomparable, because one of them has higher uncertainty over its interests. We thus memorized the number of times each action was selected, in each state: Figure 6.6 shows the results. In this figure, each state of the State-(D)SOM is represented as a row, whereas each action of the Action-(D)SOM is represented as a column. The color indicates the number of times an action was selected in a given state. We remark from the figure that, although rows have various colors, each row is, individually, roughly uniform. For example, the first row, i.e., state #0, is entirely purple: there is no red, orange, or yellow that would indicate an action less often selected. This demonstrates that our proposed action selection mechanism effectively explores actions similarly, without focusing too much on one specific action within a given state. Thus, we can, when in a dilemma, compare the actions' interests fairly, as we know they were given the same chances.
To verify this mathematically, in addition to visually, we computed the standard deviation of the number of times the actions were enacted, within each state. Figure 6.7 shows the distribution of the found standard deviations, for every state. As we can see, the vast majority of states have a low deviation, which means actions were explored similarly.
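As an illustration, this verification amounts to the following computation, sketched here with a hypothetical `n_selected` matrix of shape (number of states, number of actions):

```python
import numpy as np

# Hypothetical selection counts: 3 states, 4 actions.
n_selected = np.array([
    [25, 25, 24, 25],
    [10, 10, 10, 10],
    [3, 2, 3, 3],
])

# Standard deviation of the selection counts within each state (one value per row).
per_state_std = n_selected.std(axis=1)
print(per_state_std)           # e.g., [0.433 0.    0.433]
print(per_state_std.mean())    # summary statistic, as reported in the text
```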
Learning human preferences
As mentioned previously, in this experiment I placed myself as the human user of a learning agent, and settled the dilemmas that were presented to me through the prototype interface. Our hypothesis was that the agent would initially identify many dilemmas, as it had not yet learned my preferences; as the time steps go by, most identified dilemmas should fall within one or another of the already known contexts, and the number of interactions should decrease. To verify this, we memorized, during the experiment, each dilemma the agent encountered, and the time step at which each context was created. The contexts also memorize which dilemmas they "recognize", i.e., the dilemmas that correspond to this context and are automatically settled by the agent. After the experiment, we computed the total number of identified dilemmas the agent encountered: knowing at which time step a context is created, and how many situations it recognized during the simulation, we can plot the number of "remaining" dilemmas to be settled. Figure 6.8 shows this curve: we can see that, at the beginning of the simulation, we still have 10,000 dilemmas to settle.
At the first step, a first dilemma is identified: this is natural, as the agent does not know anything about my preferences at this point. This first situation is somewhat of an exception: as the simulation just started, the agent's observations are mostly default values, e.g., 0. For example, the agent did not have a comfort at the previous time step, because there is no previous time step. Thus, this situation is not realistic, and we cannot expect something similar to happen ever again in the simulation. This is why the bounds I have chosen at t = 0 defined a context that recognized only a single situation during the whole simulation, i.e., this first exceptional step. At the next time step, t = 1, the agent once again asks for my input, and this time the new context covers more dilemmas (about 400); similarly for t = 2 (about 800). We can see that, at the beginning, many time steps present new, unknown dilemmas, and thus impose asking the user to choose the proper action and define a new context. This behaviour is in line with our hypothesis. The "interaction time steps" are dense and close together until approximately time step t = 50, after which interactions happened only sporadically. On the curve, these interactions are identified by drops in the number of remaining dilemmas, as the new contexts will recognize additional dilemmas.
This supports our hypothesis that the number of interactions would quickly decrease, as the agent is able to automatically settle more and more dilemmas, based on the preferences I gave it. The total number of interactions in this experiment was 42, which is appreciable compared to the 10,000 time steps.
Remark. Note that we have in this experiment as many dilemmas as time steps: this indicates that the ethical thresholds I set may have been too high, my expectations were not met by the agent, and no acceptable actions could be found. This is not a problem for this experiment, as we wanted to demonstrate that the number of interactions decreases with time: as every situation was recognized as a dilemma, it was a sort of "worst-case" scenario for me. Had the ethical thresholds been set lower, I would have had fewer dilemmas to settle, and thus fewer interactions.
Fig. 6.8.: Number of "remaining" dilemmas to be settled, at each time step of the simulation. When a context is created, we subtract the number of dilemmas this context will identify during the entire simulation. A cross indicates a time step at which a context is created.

On the other hand, this remark also emphasizes a point made previously: lay users might have some difficulties choosing the ethical thresholds if they have no prior knowledge about the agents' learning performances. It would be better to explain this correctly to users, and to give them information or data so that they can set the thresholds in an informed manner, or perhaps to learn these ethical thresholds from a dataset of dilemma solutions given by users.
A second result is the number of filtered actions proposed in each interaction, which implies an input from the user to select an action and to create a context. We recall that human users have to choose the action when they settle a dilemma; however, the combination of different exploration profiles and the use of a Pareto Front yield many actions, which can be difficult to manage for a lay user. We have proposed to filter actions that are sufficiently close in terms of their parameters, because they represent the same behaviour, in order to reduce the number of actions, and we set the threshold to 3% of relative difference. In other words, if two actions have a difference of 3% or less on every parameter, we remove one of them. Figure 6.9 shows the distributions of the number of proposed actions, in each dilemma, and the number of filtered actions. We can see that, in the original set of proposed actions, the number of actions ranged from 8 to 25, whereas the number of filtered actions ranged from 2 to 6, which is simpler to compare, as a user.
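A minimal sketch of this filtering step is shown below, assuming each action is described by a vector of parameters in [0, 1]; the 3% threshold is interpreted here as an absolute difference of at most 0.03 on the normalized parameters, which may differ slightly from the exact definition used in the implementation.

```python
import numpy as np

def filter_similar(actions: np.ndarray, threshold: float = 0.03):
    """Greedily keep an action only if it differs by more than `threshold`
    on at least one parameter from every previously kept action.
    actions[i] is the parameter vector of the i-th proposed action."""
    kept = []
    for i, params in enumerate(actions):
        too_close = any(np.all(np.abs(params - actions[j]) <= threshold) for j in kept)
        if not too_close:
            kept.append(i)
    return kept

proposed = np.array([
    [0.50, 0.20, 0.10],
    [0.51, 0.21, 0.09],   # within 3% of the first action on every parameter: filtered out
    [0.80, 0.20, 0.10],
])
print(filter_similar(proposed))  # [0, 2]
```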
Discussion
In this chapter, we proposed an extension to the Q-SOM and Q-DSOM algorithms that focuses on the multi-objective aspect. By doing so, we make explicit the (potential) conflicts between moral values, and the trade-offs that ensue, which aims to answer our third research question:
How to learn to address dilemmas in situation, by first identifying them, and then settling them in interaction with human users? How to consider contextualized preferences in various situations of conflicts between multiple moral values?
The objectives, as defined in Chapter 1, were to capture the diversity of moral values and ethical preferences within a society, and especially through multiple human users (O1.1), and to learn behaviours in dilemma situations (O2.2). Dilemmas are situations where no single action is able to optimize all moral values at once; the goal is to make artificial agents able to identify them, and to learn to settle dilemmas according to contextualized human preferences from their respective users.
Fig. 6.9.: Distributions of the number of proposed actions in each dilemma of the simulation. The Unfiltered distribution shows the total number, before the filter, whereas the Filtered distribution shows the number of remaining actions, after actions too similar are removed.
O1.1 was attained by considering several human users, and their various contextualized preferences, i.e., preferences that differ from one user to another, and from one situation to another. Specifying ethical thresholds for each agent, and thus each user, also improves diversity: an agent could consider most situations as non-dilemmas, whereas another agent would classify the same situations as dilemmas, depending on their users' ethical thresholds. To solve O2.2, we separated the contribution into 3 main parts: 1) learning the "interesting" actions, 2) identifying dilemmas in situation, based on the interesting actions and ethical thresholds of acceptability, and 3) learning the human preferences. These 3 steps effectively learn to settle dilemmas according to human preferences.
An important advantage of our approach is its high configurability, obtained by putting the user back in the loop. Indeed, several definitions rely on the human user, such as the ethical thresholds and the contexts. This means that the system will be aligned with the human users. For example, as the dilemmas are recognized based on the ethical thresholds, users could have a very low number of dilemmas, and let the system handle most of the actions, whereas other users could set higher expectations, getting more dilemmas and thus taking a more active role. Similarly, as the contexts' definition relies on users, they can bring their own mental state, which the machine does not have access to.
An additional goal was to make the system usable by human users. Even though we could not demonstrate this result, due to the lack of resources for a panel experiment, we have kept this in mind when designing the algorithms' extension. For example, whereas some MORL algorithms from the state of the art propose to use vectors of preferences as a criterion to choose the optimal strategy, and thus actions, we argue it might be complicated for a non-expert user to choose such a vector. The relationship between preferences and outcomes is not always clear nor linear [START_REF] Van Moffaert | A novel adaptive weight selection algorithm for multi-objective multi-agent reinforcement learning[END_REF]. Instead, human users need to specify a vector of thresholds that determines which actions are acceptable. This might still be difficult to specify, for both technical and ethical reasons: users need to understand what these thresholds entail, in terms of ethical demands. However, we argue it has a more limited impact: if the thresholds are not appropriately set, two cases may arise. On the one hand, if the thresholds are set unrealistically high, then the agents will not manage to propose actions whose interests attain the required thresholds. Thus, almost all situations will be labelled as dilemmas, and the system will ask the user for the correct action to choose. This is a problem, as the system will become a burden for the user, but at least it will not lead to undesired actions being taken, as would be the case if a vector of preferences was incorrectly set. On the other hand, if the thresholds are set too low, then the agent will automatically solve some dilemmas that should have been presented to the user. The taken action will be the one that maximizes the average of the moral values: thus, it might not correspond to the human preferences. For these reasons, in order to limit a potential impact of the machine that is not aligned with human preferences, it would probably be safer to use high rather than low thresholds.
Surely, it is important for human users to understand how the system works, what the ethical thresholds mean, what the learned actions' interests are, etc. We have already mentioned this point in the results, when we presented the learned actions' interests.
As some moral values might be harder than others to learn, it is probable that, on the corresponding dimensions, the interests will be lower. If human users are not aware of this, they might set similar thresholds for all dimensions, which would fail to find acceptable actions. It is essential to remember that the system may very well have limits, either because of some imperfection, e.g., in the learning process, or because satisfying the values was naturally infeasible in the given situation. Users should not follow the system "blindly".
Our system also has a few limitations, possible short-term improvements, and longer-term research perspectives, all of which we discuss below.
The first potential limitation concerns the moral values themselves, and more specifically their number. In our experiments, we used 4 moral values, as described in our use case; yet, the proposed algorithms are theoretically not limited in terms of the number of moral values. However, intuitively it seems that, the more moral values we implement, the more difficult it will be to learn actions that satisfy all of them, especially if they conflict. It follows that, if actions cannot satisfy all moral values at once, the majority of situations will be considered as dilemmas. In addition, the more dimensions we use for the interests, the more actions the Pareto Front will yield. Indeed, as the number of dimensions increases, it is more likely that, for each action, we will find at least one dimension on which it is not dominated. Thus, with more actions in the Pareto Front, human users will have more actions to compare in order to choose the one to be executed in this dilemma, which would increase their cognitive load. Based on this reasoning, we wonder whether there is a relation between the number of moral values and the system's performance; if such a relation exists, what is the maximum, reasonable number of moral values before the system becomes too "chaotic" to be used? 4 moral values already seems a high number to us, as many works in the MORL literature focus only on 2 objectives; nevertheless, it would be interesting to try to apply our proposition to use cases with even more moral values, e.g., 6, 8, or even 10.
The second limitation is the manual partitioning of the action space when exploring and learning the interesting actions, namely, the exploration profiles. As the profiles are manually set by the designers, some gaps may exist: two different exploration profiles may find actions in two different sub-zones of the action space, but perhaps there are actions between these two sub-zones, which we cannot find. A better exploration could rely on intrinsic motivation, such as curiosity, to fully explore this action space. For example, the algorithm could begin with similar weights as the ones we defined, and then try to explore in a different zone. If no interesting actions, i.e., actions that are not Pareto-dominated, can be found in this zone, the algorithm could preemptively stop its exploration, and resort to another zone. However, we left this aspect aside, as the exploration of the action space was only a pre-requirement for the rest of the contribution. It was not the main part, contrary to the identification of dilemmas and, more importantly, the contextualized preferences. Our manual partitioning is already sufficient for the rest of the contribution, although it could be improved: this is thus an important perspective.
A more important limitation, which is directly linked with the rest of the thesis, concerns the ability to adapt to changes. As we introduced a bootstrapping phase, which is separated from the deployment phase in which agents are expected to actually exhibit their behaviour, it is harder to adapt to changes, either in the environment's dynamics or in the reward functions, as we have shown in previous chapters. Indeed, actions, including their parameters, are learned during the bootstrap phase, and "frozen" during the deployment phase, so that they can be compared fairly. If the environment changes during the deployment, agents cannot adapt any more with our current algorithm. We propose two potential solutions to this:
1. The simplest one is to alternate bootstrap and deployment phases regularly. The environment used in the bootstrap phase must be as close as possible to the deployment one, which can be the real world for example. Thus, agents would not be able to adapt immediately to a change, but in the long term, their behaviour will be "upgraded" after the next bootstrap phase. Note that this still requires some work from the designers, who need to update the bootstrap environment, and to regularly update the learning agents as well. It can also be computationally intensive, as the bootstrap phase needs to simulate a lot of time steps.

2. A more complex method could be to make agents able to learn during deployment, similarly to what we have done in the two previous chapters. However, the exploration-exploitation dilemma must be carefully handled, as we need to propose actions to human users. It will probably require work on the explanation aspect as well: how do we present and compare actions that have not been explored the same number of times? If an action has been less explored than another, a difference of interests between them can be explained either because one is truly better, or because one has more uncertainty. Perhaps works such as Upper Confidence Bound and similar can be leveraged, to memorize not only the current interests, but also their uncertainty. This approach would require close collaboration with Human-Computer Interaction researchers, so that the correct information can be presented to human users, in a manner that helps them make an informed choice.
There is, however, one aspect of adaptation that is separated from the question of learning interesting actions. Indeed, perhaps human users will want to update, at some point, their definition of a context, or to change the chosen action, e.g., because they have discovered new information that made them reflect on their choice. This does not seem technically complicated, based on our representation of a context: we recall that a context comprises a set of lower and upper bounds to recognize situations, and the chosen action.
The recognized situations can thus be changed by simply updating the bounds, and the chosen action replaced by another. However, it would require, again, specific work on the interface itself to present the functionality to the users.
Following up on the notion of the user interface, we acknowledge that it is not very clear for the moment, and most certainly confusing for lay users. The tabs that, respectively, present sliders to set the context's bounds, and the actions' plots of interests and parameters, could be improved. Yet, we have chosen to make a simple, prototype interface, as the contribution focuses on the algorithmic capabilities of agents to integrate contextualized preferences from human users. The source of these preferences is, in this regard, a secondary problem of implementation, and the prototype interface suffices to demonstrate, by interacting with the system, that we can indeed settle dilemmas.
Another limitation that is due to a technical implementation detail is that, in the current version, users must choose the bounds and the desired action when a dilemma is presented to them. This is appropriate and sufficiently simple for a laboratory experiment; however, in a real-world scenario, if a dilemma happens at 3 o'clock in the morning, surely we would not want to force the user to get up and come settle it at this instant. It would be interesting to allow users to interact with the system at a later step, perhaps by making the agent resort to a default selection mechanism, e.g., taking a random action or the one with the maximum average interests, while still memorizing that this dilemma was not settled. Thus, when the user decides to interact with the system, a list of encountered dilemmas would be presented so that adequate actions can be elicited and corresponding contexts defined.
Finally, we reflect back on some definitions we have proposed, and how different definitions could yield other results. A first definition that can be potentially limiting is that of the ethical thresholds: we recall that they correspond to the user-set thresholds above which the interests of an action are considered acceptable. The definition of these thresholds that we propose is a simple vector with as many dimensions as there are moral values, i.e., objectives in the multi-objective reward, for example [80%, 75%, 90%, 70%].
In this example, the user accepts an action if its interest w.r.t. the 1st moral value attains at least 80% of the theoretical (maximum) interest, and at least 75% w.r.t. the 2nd moral value, and so on. Users specify a single vector of thresholds; however, we sometimes want to accept different alternatives. For example, a user might say "I would accept an action that satisfies at least 80% of the 1st moral value and 75% of the 2nd moral value, or an action that satisfies at least 60% of the 1st moral value and 90% of the 3rd moral value". As hinted in this example, such preferences can be represented by using disjunctions of conjunctions, i.e., by combining "or" alternatives of "and"-based thresholds. In a pseudo-mathematical format, the previous example could be represented as (80% ∧ 75% ∧ 0% ∧ 0%) ∨ (60% ∧ 0% ∧ 90% ∧ 0%). Note that it also explicitly allows users to specify thresholds for subsets of values. Whereas this is already feasible with the current definition, it is not recommended as there is a single vector of thresholds: a threshold set to 0% would result in the moral value being completely ignored. When using disjunctions of conjunctions, a value can be ignored in a given vector, but not in another vector. As a result, the ethical thresholds would be more generic, and more flexible. This definition would not be technically difficult to implement, as we simply need to compare the interests to a set of potentially multiple vectors, instead of a single vector. Yet, in order to be effective, more research is needed, both in moral philosophy to ensure that it is an appropriate definition, and in Human-Computer Interaction to ensure that it is easily usable by users.
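A minimal sketch of such a disjunction-of-conjunctions acceptability test is given below, assuming interests are already expressed as ratios of the theoretical maximum (values in [0, 1]); a threshold of 0 effectively ignores the corresponding moral value within that conjunction.

```python
def acceptable(interests, dnf_thresholds):
    """Return True if the action's interests satisfy at least one conjunction,
    i.e., meet or exceed every threshold of that vector."""
    return any(
        all(i >= t for i, t in zip(interests, conjunction))
        for conjunction in dnf_thresholds
    )

# (80% ∧ 75% ∧ 0% ∧ 0%) ∨ (60% ∧ 0% ∧ 90% ∧ 0%)
dnf = [
    [0.80, 0.75, 0.00, 0.00],
    [0.60, 0.00, 0.90, 0.00],
]

print(acceptable([0.85, 0.80, 0.10, 0.20], dnf))  # True (first conjunction holds)
print(acceptable([0.65, 0.50, 0.95, 0.10], dnf))  # True (second conjunction holds)
print(acceptable([0.70, 0.50, 0.50, 0.90], dnf))  # False (no conjunction holds)
```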
In the same vein, we now reflect back on the definition of a "user preference": in this contribution, we proposed to simply define a user preference as an action choice. Other definitions could be possible, e.g., "Choose the action that maximizes the well-being value", which can be encoded as a formula based on the actions' interests. On the one hand, these definitions would be harder to use for the user, compared to simply selecting the action they prefer. On the other hand, they would allow for more flexibility: if the set of actions changes, e.g., because of adaptation, we can still select an appropriate action, because the formula is more generic than directly memorizing the desired action. Perhaps a long-term perspective could be to learn such formulas from the user interactions. For example, based on the chosen actions' interests, the agent could derive a logical formula that mimics the users' choices. This idea appeals again to the notion of symbolic reasoning in a hybrid context that we used in Chapter 5. However, this time the hybrid combination would be endogenous, taking place within the agent itself.
We could even imagine a system that would automatically learn preferences by observing the human user's behaviour and choices in various situations, not necessarily linked to the current application domain, as a matter of fact. This can be linked to the idea of an "ethics bot", which [START_REF] Etzioni | Incorporating ethics into artificial intelligence[END_REF] define as follows:
An ethics bot is an AI program that analyzes many thousands of items of information (not only publicly available on the Internet but also information gleaned from a person's local computer storage and that of other devices) about the acts of a particular individual in order to determine that person's moral preferences.
The idea of an ethics bot is a bit far from our work, as we instead propose to directly ask users for their preferences, and we try to teach the agents how to "behave ethically", or, more precisely, to exhibit a behaviour aligned with moral values. However, it illustrates that some parts of our contribution can be linked, and even reused or extended, in different lines of research within the Machine Ethics community.
7 Synthesis, discussion, and perspectives
In this final chapter, we summarize our answers to the problem and research questions, and reflect on the proposed contributions. The first section makes a synthesis of the 3 contributions, and we particularly highlight how they answer the defined objectives. The second section describes the limitations and perspectives offered by our work; we outline here the salient points of each contribution and how they relate to the global architecture, as the details have been discussed in the chapters on each contribution.
Main results
In the state of the art in Section 2.1, we notably presented a set of properties that we deem important for the implementation of artificial moral agents:
• Continuous domains allow for use-cases that can be difficult to represent using discrete domains.
• Multi-agent is more realistic, as the human society is inherently multi-agent, and artificial agents are bound to be integrated into our society, in one form or another.
• Multiple moral values offer more diversity and better represent human moral reasoning.
• The capacity to adapt to changes is crucial as the social mores constantly evolve.
We have also described our methodology in Section 3.1, for which we emphasize some of the properties:
• Pluri-disciplinarity is important as AI experts do not usually have the necessary domain knowledge, in particular in moral philosophy.
• Putting humans in control and reflecting in terms of Socio-Technical Systems ensures that the (technical) system is aligned with humans, rather than humans being forced to align themselves with the system.
• Diversity, especially of moral values and of human preferences, is a necessity as we have different ethical considerations.
These properties echo the ethical model that we introduced in Section 3.2. This model could be leveraged by other works; one of the main points to retain is that producing ethically-aligned systems, whether they focus on producing "ethical behaviours", or simply take ethical considerations as part of their behaviours, requires a discussion with a large audience, consisting of designers, users, stakeholders, philosophers, etc. As these various people have different knowledge and expertise, this discussion itself requires reaching a common ground by explaining the key choices made when constructing a system. These choices, which we name the "ethical injection", reflect what we want the machine to take into account.
Finally, we presented 3 contributions in this manuscript. The first one focused on learning behaviours through reinforcement learning algorithms that particularly emphasize the adaptability of agents to changes in the environment dynamics, including in the reward function, i.e., in the expected behaviour. The second one tackled the construction of the reward function and replaced the traditional mathematical functions with an aggregated multi-agent symbolic judgment, relying on explicitly defined moral values and rules.
Finally, the third one opened up the question of dilemmas and replaced the aggregation of the second contribution with a new process that allows learning agents to identify situations of conflicts between moral values, and to address such dilemmas according to human preferences.
We presented in Chapter 1 a set of objectives that should be targeted in this thesis. Table 7.1 shows that each of them is handled in at least one of our 3 contributions. We recall that the 3rd objective, implementing a prototype, was tackled throughout each contribution by the means of the Smart Grid simulator, which was used for the experiments. The simulator was updated when necessary, e.g., making a connection to the Ethicaa platform in the LAJIMA model, or adding a simple user interface to present dilemmas and capture human preferences in the last contribution. Thus, our thesis accordingly answers all objectives. In the following subsections, we recapitulate these contributions and their advantages. Nevertheless, this does not mean that these objectives can be considered "closed": we presented in each contribution chapter some perspectives, and, in Section 7.2, we provide additional ones that pertain more to the global proposed architecture rather than individual components. Yet, we consider that we have made a substantial contribution to each of these objectives.
Learning behaviours
The first contribution consists in 2 reinforcement learning algorithms, named Q-SOM and Q-DSOM (see Chapter 4). These algorithms target the various challenges identified from the state of the art that pertain to our problematic of learning morally-aligned behaviours. In particular, there was a lack of approaches targeting continuous domains; this is a problem, as such domains may be necessary, or at least more practical, to describe some situations or use-cases, as in our Smart Grid energy distribution example. This is true both for describing situations, e.g., with the inequality measure, and actions, e.g., the various amounts of energy to choose. In order to introduce more diversity in the experiments, as per the methodology and objective O1.1, we introduced multiple learning agents: they each encounter different experiences, and thus exhibit various behaviours; they also embed one of several profiles that dictates their needs and available actions. Another aspect that shaped our algorithms was the notion of privacy: as the learning agents represent human users, it would certainly not be acceptable to share their data with every other agent. Some sharing is necessary at some point, at least in a centralized component which computes the rewards; yet, the learning agents do not have access to other agents' data, neither their observations, actions, nor rewards. Finally, the most important challenge here is the "Continuous Learning", as defined by [START_REF] Nallur | Landscape of machine implemented ethics[END_REF], or, as we call it, the ability to adapt to changes. This is particularly crucial within Machine Ethics, as the ethical consensus may shift over time. We note that these algorithms are not exclusive to the learning of ethical considerations, or morally-aligned behaviours, in the sense that they do not explicitly rely on ethical components, such as moral values, moral supports or transgressions, etc.
In order to address these challenges, the Q-(D)SOM algorithms are based on Self-Organizing Maps (SOMs), which provide them with a mapping between continuous and discrete domains. Thus, multi-dimensional and continuous observations can each be associated with a discrete state; similarly, a set of discrete action identifiers are linked to continuous action parameters. The interest of performing an action in a situation, i.e., the expected reward horizon from that action and the resulting situation, is learned in a Q-Table, a tabular structure that relies on discrete state identifiers for the rows, and action identifiers for the columns. The Q-Table, along with the State-(D)SOM and Action-(D)SOM, are used to learn the behaviours in various situations, which corresponds to objective O2.1. Learning mechanisms, and in particular the exploration-exploitation dilemma, were specifically designed to allow adaptation to changes, to address objective O1.2.
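As a rough illustration of this mapping, the sketch below (a deliberate simplification, not the exact Q-(D)SOM implementation) discretizes a continuous observation by finding the best matching unit of a State-SOM, and then reads the corresponding row of the Q-Table:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, obs_dim = 12, 9, 4
state_som = rng.random((n_states, obs_dim))   # prototype vectors (units) of the State-SOM
q_table = np.zeros((n_states, n_actions))     # learned interests of each action in each state

def discretize(observation: np.ndarray) -> int:
    """Return the index of the best matching unit, i.e., the closest prototype."""
    distances = np.linalg.norm(state_som - observation, axis=1)
    return int(np.argmin(distances))

observation = np.array([0.2, 0.8, 0.5, 0.1])    # continuous, multi-dimensional observation
state = discretize(observation)
greedy_action = int(np.argmax(q_table[state]))  # exploitation: action with the highest interest
print(state, greedy_action)
```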
Regarding the notion of privacy, we designed the simulator and the algorithms in such a way that agents do not obtain information about their neighbours, which seems more acceptable. On the other hand, this complicates their learning, as the behaviours of other agents in the environment constitute a kind of "noise" on which they have no information.
In order to counterbalance this effect, we use Difference Rewards, which calculate the contribution of each agent, by comparing the current state of the environment with a hypothetical state in which the agent would not have acted.
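Conceptually, this amounts to the following computation, sketched here with a hypothetical global reward function `g`; the exact counterfactual used in the thesis (e.g., how the agent's action is removed) is an assumption of this example.

```python
def difference_reward(g, joint_actions, agent, default_action=None):
    """Difference reward of `agent`: global reward of the actual joint actions,
    minus the global reward when this agent's action is replaced by a default
    (here, removed / set to a 'null' action)."""
    counterfactual = dict(joint_actions)
    counterfactual[agent] = default_action
    return g(joint_actions) - g(counterfactual)

# Toy global reward: total consumed energy, capped by the available production.
def g(joint_actions):
    consumed = sum(a for a in joint_actions.values() if a is not None)
    return min(consumed, 100)

actions = {"household_1": 30, "household_2": 50, "office_1": 40}
print(difference_reward(g, actions, "office_1"))  # 20: its marginal contribution, capped
```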
We have evaluated these algorithms by using several "traditional" reward functions, i.e. mathematical formulas, which target various moral values, such as the well-being of agents, or the equity of comforts. Some of these functions target several moral values at the same time, using simple aggregation, or are designed so that their definition evolves over time: new moral values are added as time goes by. The results show that our algorithms learn better than 2 state-of-the-art baseline algorithms, DDPG and MADDPG. It also seems that the size of the environment, i.e., the number of learning agents, has an impact on the learning performance: it is more difficult for the agents to learn the dynamics of the environment when there are more elements that impact these dynamics. However, the results remain satisfactory with a high number (50) of agents.
Constructing rewards through symbolic judgments
The second contribution consists in the construction of rewards through the agentification of symbolic judgments (see Chapter 5). The need for this contribution stems mainly from objective O2.3: the need to correctly specify the expected behaviour to express the significance of moral values, which includes preventing the agent from "gaming" or "hacking" the reward function. Additionally, a better usability and understandability of the reward function for external users or regulators constitutes a secondary objective for this contribution. Reward hacking is a well-known phenomenon in reinforcement learning, in which the agent learns to maximize its received reward by adopting a behaviour that corresponds to the criteria implemented in the reward function, but does not actually represent what the designer had in mind. For example, an agent could circle infinitely around a goal without ever reaching it, to maximize the sum of rewards on an infinite horizon. Within the Machine Ethics community, it would be an important problem if the agent learned to "fake" acting ethically. The reward function should thus be designed to prevent reward hacking, or at least to be easily repaired when reward hacking is detected. The second objective itself encompasses 2 concepts: usability and understandability. Indeed, as the reward function must signal the expected behaviour to the learning agent, and thus appropriately encode the relevant moral values and, more largely, ethical considerations, the AI experts are not always the most competent to design it. The specificities of the application domain and the ethical considerations could be best apprehended by domain experts and moral philosophers. Yet, it may be difficult for them to produce a mathematical function that describes the expected behaviour, and this is why another form of reward function is needed. On the other hand, understandability is important for all humans, not only the designers. Human users may be interested in what the expected behaviour is; regulators may want to ensure that the system is acceptable, with respect to some criteria, which can include the law. As we mentioned in the state of the art, this understandability may be compromised when the reward function is a mathematical formula and/or leverages a large dataset.
Whereas the previous contribution used mathematical functions, this contribution proposed to leverage symbolic judgment. We propose a new architecture of an exogenous, multi-agent hybrid combination of neural and symbolic elements. This architecture agentifies the reward function, where judging agents implement the symbolic judgment that forms the reward signal, and learning agents implement the neural learning process. The two sets of agents act in a decision-judgment-learning loop, and the learning agents learn to capture and exhibit the ethical considerations that are injected through the judgment, i.e., the moral values. An advantage of this learning-judgment combination is that the resulting behaviour can benefit from the flexibility and adaptation of the learning: assuming that, in an example situation, it is infeasible for the agent to satisfy all moral values, the agent will find the action that provides the best trade-off. This computation relies on the Q-Values, which take into account the horizon of expected rewards, i.e., the sequence of received rewards from the next situation. Thus, human designers can simply design the judgment based on the current step, and do not necessarily need to consider future events; yet, learning agents will still take them into account.
Two different models, and their implementation, were proposed. The first one, LAJIMA (Logic-based Agents for JudgIng Morally-embedded Actions), relies on Beliefs-Desires-Intentions (BDI) agents, and more specifically on simplified Ethicaa agents [START_REF] Cointe | Ethical judgment of agents' behaviors in multi-agent systems[END_REF], which implement a set of logic rules. On the other hand, the second one, AJAR (Argumentation-based Judging Agents for ethical Reinforcement learning), uses argumentation graphs to determine the correctness of an action with respect to moral values. In both cases, several moral values are explicitly considered, either through different BDI agents, or several argumentation graphs, which adds diversity to the use case, as per objective O1.1. We note that logic and argumentation are very similar, and, ultimately, there is no theoretical difference in their expressiveness. However, there is a difference in their ease of use: we argue that argumentation graphs are easier to grasp for external users or regulators, thanks to their visual representation. In addition, the attack relationship of argumentation makes it easy to prevent a specific behaviour, which further answers our objective O2.3 of avoiding "gaming" behaviours or mis-alignments.
The experiments have shown that learning agents are still able to learn from symbolic judgments. However, the results could have been better, especially for the LAJIMA model. The rules that we defined were perhaps too naive; they suffice as a proof-of-concept first step, but they certainly need to be improved to provide more information to the learning agents. In this regard, a technical implementation advantage of AJAR was that any number of arguments could be used as positive or negative feedbacks, whereas we fixed this number to the number of dimensions of the action space in LAJIMA, e.g., d = 6 for our Smart Grid use-case. Thus, the gradient of the reward was better in AJAR, but at the expense of an additional hyperparameter: the judgment function, which transforms a set of arguments into a reward, i.e., a scalar.
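As an illustration, a simple judgment function of this kind could look like the following sketch; this is only one possible choice (a ratio of positive feedback among the activated arguments), not necessarily the exact function used in AJAR.

```python
def judgment_to_reward(positive_args: set, negative_args: set) -> float:
    """Map the activated arguments of a judgment to a scalar reward in [0, 1],
    here as the proportion of positive arguments among all activated ones."""
    total = len(positive_args) + len(negative_args)
    if total == 0:
        return 0.5  # no argument activated: neutral reward (an arbitrary choice)
    return len(positive_args) / total

print(judgment_to_reward({"promotes_well_being", "affordable"}, {"over_consumption"}))  # ~0.67
```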
The main idea to retain from this contribution is to leverage symbolic elements and methods to compute the rewards. We proposed BDI agents with logic rules, and argumentation graphs, as such symbolic methods, but perhaps others could be used as well. Many other forms of logics have been used within Machine Ethics, e.g., non-monotonic logic (Ganascia, 2007a), deontic logic [START_REF] Arkoudas | Toward ethical robots via mechanized deontic logic[END_REF], abductive logic [START_REF] Pereira | Modelling morality with prospective logic[END_REF], and more generally, event calculus [START_REF] Bonnemains | Embedded ethics: Some technical and ethical challenges[END_REF]. Still, we argue that argumentation in particular offers interesting theoretical advantages, namely the attack relationship and the graph structure that allow both repairing reward "gaming", and an easy visualization of the judgment process.
Our idea hopefully opens a new line of research with numerous perspectives, and an empirical and more exhaustive comparison of symbolic methods for judgment would be important.
Identifying and addressing dilemmas
The third and final contribution focuses on the notion of dilemma, i.e., a situation in which multiple moral values are in conflict and cannot be satisfied at the same time. We want the learning agents to exhibit behaviours that take into account all defined ethical considerations, mainly represented by moral values, but when they cannot do so, how should they act? This is related to our objective O2.2, which states that "Behaviours should consider dilemmas situations, with conflicts between stakes". In the state of the art, we saw that numerous multi-objective reinforcement learning (MORL) approaches already exist, and several of them use preferences to select the best policy. However, their preferences are defined as a set of weights on the objectives: this is, in our opinion, an easily usable definition for machines, but quite hard for humans. It also assumes the existence of numeric tradeoffs between moral values, in other words, the commensurability of moral values. Thus, we have proposed an alternative definition of "preferences", based on selecting an action. To avoid forcing the user to make a choice every time a dilemma is identified, which can become a burden as the number of dilemmas increases, the learning agents should learn to apply human preferences and automatically select the same actions in similar dilemmas.
To solve these tasks, we extended the Q-(D)SOM algorithms to receive multi-objective rewards, and proposed a 3-step approach. First, interesting actions are learned, so that, when a dilemma is identified, we can offer various interesting alternative actions to the user, and we know that we can compare them fairly, i.e., that they have similar uncertainty on their interests, or Q-Values. This is done by separating the learning into a bootstrap and a deployment phase, and creating exploration profiles to explore various sub-zones of the action space.
The second step identifies dilemmas, by computing a Pareto Front of the various actions, i.e., retaining only the actions that are not dominated by another action. To determine whether the situation is a dilemma, we compare the remaining actions' interests to ethical thresholds, chosen by the human user, which are a set of thresholds, one for each moral value. Thus, different human users may have different expectations for the moral values.
To simplify the definition of such ethical thresholds for lay users, we also learn the theoretical interests of actions, i.e., the interests they would have if they had received the maximum reward each time they were selected. The actions' interests are compared to the theoretical ones, such that the ratio yields a value between 0 and 1. If at least one of the proposed actions in the Pareto Front, for a given situation, is deemed acceptable, i.e., if all its interests are over their corresponding ethical threshold, the situation is not a dilemma, and the agent takes one of the acceptable actions. Otherwise, the situation is a dilemma, and we must resort to the human preferences. Thus, our proposed definition of a dilemma depends on the human user: the same situation can be considered as a dilemma for one human user, and not a dilemma for another user. We consider this an advantage.
Finally, when a situation is an identified dilemma, we take an action based on the human preferences. We assume that human preferences depend on the situation: we may not always prefer the same moral values. To avoid asking the user too much, e.g., each time a dilemma is encountered, we introduce the notion of contexts to group similar dilemmas. Users define a context as a set of bounds, a lower and an upper, for each dimension of the observation space. Intuitively, it is similar to drawing a polytope in the observation space: a dilemma belongs to this context if the situation's observations are inside the polytope. When a context is created, the user also specifies which action should be taken; the same action is automatically selected by the artificial agent whenever an identified dilemma falls within the same context.
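In practice, this membership test is a simple box check over the observation dimensions, which can be sketched as follows (with illustrative bounds and dimension order):

```python
import numpy as np

class Context:
    def __init__(self, lower, upper, chosen_action):
        self.lower = np.asarray(lower)
        self.upper = np.asarray(upper)
        self.chosen_action = chosen_action

    def recognizes(self, observation) -> bool:
        """True if the observation lies within the bounds on every dimension."""
        obs = np.asarray(observation)
        return bool(np.all((self.lower <= obs) & (obs <= self.upper)))

# Illustrative context over a 3-dimensional observation space (e.g., hour, storage, comfort).
night_low_storage = Context(lower=[0.0, 0.0, 0.0], upper=[0.3, 0.4, 1.0], chosen_action=2)
print(night_low_storage.recognizes([0.125, 0.2, 0.8]))  # True: the chosen action (2) is applied
print(night_low_storage.recognizes([0.5, 0.2, 0.8]))    # False: the user must be asked
```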
An important aspect of our contribution is that we focus on giving control back to the human users. This is done through several mechanisms: the selection of ethical thresholds, the creation of contexts, and the definition of a preference as an action choice rather than a weights vector. The ethical thresholds allow controlling the system's sensitivity, i.e., to which degree situations will be flagged as dilemmas. By setting the thresholds accordingly, the human user may choose the desired behaviour, e.g., always taking the action that maximizes, on average, the interests for all moral values, and thus acting in full autonomy, or, at the opposite extreme, recognizing all situations as dilemmas and thus always relying on the human preferences, or anything else in between. This is in line with the diversity targeted by objective O1.1. We note that our approach can therefore be used as a step towards a system that acts as an ethical decision support, i.e., a system that helps human users make better decisions by providing them with helpful information [START_REF] Tolmeijer | Implementations in machine ethics: A survey[END_REF][START_REF] Etzioni | Incorporating ethics into artificial intelligence[END_REF].
Limitations and perspectives
Our approach is firstly limited by its proof-of-concept nature. As such, it demonstrates the feasibility of our contributions, but is not (yet) suited for real-world deployment. The experiments happened in a simulated environment, and the moral values and rules we implemented were only defined by us, in our role of "almighty" designers, as mentioned in Section 3.2. We explained in the same section that, in our opinion, such design choices, which have ethical consequences, should be on the contrary discussed in a larger group, including users, stakeholders, moral philosophers, etc. The implementation would also require field experiments to validate that agents' decisions are considered correct by humans, for example through a Comparative Moral Turing Test (cMTT), as defined by [START_REF] Allen | Prolegomena to any future artificial moral agent[END_REF]. A cMTT is comparable to the Turing Test, except that it compares descriptions of actions performed by two agents, one artificial and one human, instead of discussions. The evaluator is then asked whether one of the two agents appears "less moral", in the sense that its actions are less aligned with moral values, than the other. If the machine is not identified as the "less moral" one significantly more often than the human, then it is deemed to have passed the test. In a similar vein, the simulation and the proposed interface could also be improved by focusing more on the Human-Computer Interaction (HCI) aspects. This would enhance the intelligibility of the system, which would make it easier to use and to accept. It is particularly important for our last contribution that focuses on putting users back in the loop and interacting with them to learn their preferences.
Several interesting lines of research have been left out in this thesis, for lack of time and to focus on the contributions that have been presented in this manuscript. One of the perspectives could be to improve the explainability aspect of agents through another hybrid combination. For example, some works focus on the idea of causal models, or more generally on the notion of consequences, within the Machine Ethics community [START_REF] Winfield | Towards an ethical robot: Internal models, consequences and ethical action selection[END_REF]. This echoes the so-called "model-based" reinforcement learning algorithms, which aim to first learn a model of the environment, and then leverage this model to select the best decisions, similarly to a planning algorithm. We imagine a step further down this line, and argue that such models could also be appealing for explainability purposes: the agent could then "explain", or at least provide elements of explanations, on why an action was taken, by presenting the learned model, the expected consequences of taking this action, and possibly a few counterfactuals, i.e., comparing the expected consequences of other actions. Furthermore, such a model could be leveraged not only as a tool to explain decisions after taking them, but even in a proactive manner: human users could explore consequences of their preferences in a simulated environment by querying the learned model. This would impact the co-construction of ethics between artificial agents and human users, with enriched opportunities for humans to reflect upon their ethical principles and preferences.

The learned model could also improve the action selection interface when a new dilemma is identified: actions could be compared in terms of expected consequences, in addition to their interests, i.e., expected rewards, and parameters. In our architecture, this model could especially draw on our symbolic elements, namely the judging agents. For example, a first idea could be to provide the argumentation graphs to the learning agents, as well as the individual activation of each argument in addition to the computed reward. Learning agents could thus learn which arguments are activated by an action in a situation. This does not represent a complete model of consequences in the world, as the arguments are themselves typically at a higher level; still, it represents a first step. More generally, proactive exploration of consequences could benefit from the Hybrid Neural-Symbolic research: our current approach leverages some sort of exogenous hybrid, as the symbolic and learning elements are in separate sets of agents. If the learning agents were imbued with a symbolic model of their world, they could learn this model through repeated interactions.
Another perspective of improvement is that judging agents are somewhat limited in their agency, principally in terms of reasoning abilities, but potentially in terms of interactions between them as well. By this, we mean that the process that determines a numeric reward from the symbolic judgment is described as a function, which basically counts the number of positive symbols versus the number of negative ones. Instead, this process could be replaced with deliberate reasoning and decision-making to determine the reward based on the judgment symbols. For example, the judging agent could yield a reward that does not exactly correspond to the learning agent's merits at this specific time step, but which would help it learn better. Drawing on the idea of curriculum learning proposed by [START_REF] Bengio | Curriculum learning[END_REF], the judging agents could choose to start with a simpler task, thus rewarding the learning agent positively even if all the moral values are not satisfied, and then gradually increase the difficulty, requiring more moral values to be satisfied, or to a higher degree. Or, on the contrary, judging agents could give a lower reward than the learning agent normally deserves, because the judging agent believes the learning agent could have done better, based on previous performances. In the two previous examples, the described process would require following a long-term strategy, and retaining beliefs over the various learning agents to effectively adapt the rewards to their current condition with respect to the strategy. This exhibits proper agency, instead of a mere mathematical function; in addition, it would improve the quality of the learning, and potentially reduce the designers' burden, as the curriculum learning, for example, would be handled automatically by the judging agents. Note that this idea is, again, linked with the notion of co-construction between judging and learning agents: indeed, the judges would have to retain information about the learning agents, and to adapt their strategy. For example, if learning agents do not manage to learn a given moral value, judging agents could switch their strategy, focusing on another one.
An important limitation of our approach is that we assume we can implement (philosophical) moral values as computer instructions, in this case the judgment process more specifically. In other words, this limitation means that we expect to have a formal definition, translatable in a given programming language, or "computable" in a large sense. However, we have no guarantee for this assumption. For example, how can we define human dignity? Currently, we consider that moral values are symbols, which are linked to actions that may support or defeat them, as defined by the moral rules. However, this might be a reduction of the original meaning of moral values in moral philosophy. This limitation can be seen, in fact, as the "ultimate" question of Machine Ethics: will we be able to implement moral reasoning in a computer system? An advantage of our approach, perhaps, is that we do not require coding how to act, but rather specifying what is praiseworthy, or on the contrary punishable. This means that, even if we cannot define, e.g., human dignity entirely, we may still provide the few elements that we can think of and formalize. Thus, the agent's resulting behaviour will not exactly exhibit the human dignity value, but some aspects of it, which is better than nothing. The agent will thus exhibit a behaviour with ethical considerations, and as a consequence have a more beneficial impact on our lives and society.
Finally, we have evaluated our contributions on a Smart Grid case, which we described as an example, and mentioned that the proposed approaches were generic. We now detail the necessary steps to adapt them to another domain; these steps are grouped by contribution to make them clearer. Concerning the reinforcement learning algorithm, the first important step is to have an environment that follows the RL framework: sending observations to agents, querying actions from them, executing actions, computing rewards. As part of this step, a significant effort must be applied to the choice and design of the observations (also called "features" in the RL literature): some will provide better results, e.g., by combining several low-level features together. The presented algorithms, Q-SOM and Q-DSOM, are themselves applicable as-is to any environment that follows this RL framework, as they can be parametrized to any size of input (observations) and output (actions) vectors. However, a second step of finding the best hyperparameters will surely be necessary to optimize the agents' behaviours for the new application use-case. Concerning the symbolic judgments for reward functions, more work will be required, as we only provide a framework, be it through Beliefs-Desires-Intentions (BDI) agents or argumentation-based agents. Designers will have to implement the desired reward function using one of these frameworks: this requires deciding which moral values should be targeted, which metrics are used to support them, etc. Note that, whereas argumentation can easily be plugged into the learning algorithms, as both are implemented in Python, BDI agents in JaCaMo are implemented in Java and require additional technical work to communicate with Python, e.g., through a Web server that transfers observations and actions back and forth between Python and Java. Concerning the last part, dilemmas and human preferences, the general idea and algorithms we propose are generic, as they do not explicitly rely on the domain's specificities, except for the interface that is used to ask the human user. This interface has to present the current situation, e.g., in terms of observations, and the proposed actions' interests and parameters. These elements are specific to the chosen use-case; for example, in the Smart Grid the current hour is an important part of observations, and actions' parameters concern quantities of energy to transfer. An appropriate interface must therefore be built specifically for the new domain. This requires both technical work and some reflection about the UX, so that the interface is usable by lay users.
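To make the first step above concrete, the following minimal Python sketch illustrates the kind of environment interface the learning agents expect (observe, execute, reward). The class and method names are illustrative assumptions for this example, not the actual code of our simulator.

import numpy as np

class MinimalEnv:
    # Illustrative environment following the RL loop described above.
    def __init__(self, obs_dim=10, act_dim=6, rng=None):
        self.obs_dim, self.act_dim = obs_dim, act_dim
        self.rng = rng or np.random.default_rng(0)

    def observe(self):
        # Return an observation vector in [0, 1]^obs_dim (the "features").
        return self.rng.random(self.obs_dim)

    def execute(self, action):
        # Apply the continuous action vector; here we only check its shape.
        assert action.shape == (self.act_dim,)

    def reward(self):
        # Placeholder reward; a real use-case would compute it from moral values.
        return float(self.rng.random())

env = MinimalEnv()
for t in range(3):
    obs = env.observe()
    action = np.clip(obs[:env.act_dim], 0.0, 1.0)  # dummy policy, for the example only
    env.execute(action)
    r = env.reward()

A new use-case only needs to provide these three hooks; the learning agents can then be plugged in without modifying their algorithms.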
B Learning algorithms' hyperparameters
The following tables show a comparison of various hyperparameters for each of the learning algorithms: QSOM, QDSOM, DDPG, and MADDPG. For each of them, numerous combinations of hyperparameters were tried multiple times, and the score of each run was recorded. We recall that, in this context, score is defined as the average global reward over the time steps, where the global reward is the reward for all agents, e.g., the equity in the environment. The mean score is computed for all runs using the same combination of hyperparameters. Then, for each hyperparameter, we compute the maximum of the mean scores attained for each value of the parameter.
This method simplifies away the interactions between hyperparameters, as we take the maximum. For example, a value v1 of a hyperparameter h1 could yield a low score most of the time, except when used in conjunction with a value v2 of another hyperparameter h2. The maximum will retain only this second case, and ignore the fact that, for any other combination of hyperparameters, setting h1 = v1 yields a low score. This still represents valuable information, as we are often more interested in finding the best hyperparameters, which yield the maximum possible score.

We show in Figure D.5 an example of the judgment process using argumentation graphs. The graph is based on the Affordability moral value presented above, and updated to judge a specific situation. In this situation, the currently judged agent has achieved a very good comfort, by buying a lot of energy, but still has a positive payoff. Also, the agent did not over-consume from the grid. Thus, some arguments, such as "Over-consume 10%", "Over-consume 20%", and "Over-consume 30%", are disabled: they do not count any more in this situation, and their attacks are removed as well. These disabled arguments are shown as more transparent and drawn as "sketchy", compared to the remaining activated arguments. Then, we want to compute the grounded extension, based on only the activated arguments. The grounded extension allows removing the "Buy energy" arguments: although they were activated, they are also attacked by at least one argument, in this case "Good comfort" and "Positive payoff". Intuitively, this means that it is acceptable to buy a lot of energy, according to the affordability moral value, if this means we can obtain a good comfort, or if we still have a positive payoff. Note that, for example, the "Over-consume" arguments themselves attack "Good comfort", meaning that we cannot justify buying a lot of energy to get a good comfort, if we also consume too much from the grid. However, in the current situation, the "Over-consume" arguments were disabled, and thus ignored to compute the grounded extension; a similar reasoning applies to "Bad comfort", with respect to "Positive payoff". As the "Buy energy" arguments are killed, the "Bias" argument is kept in the grounded extension: it is somehow defended by "Good comfort" and "Positive payoff". The arguments in the grounded extension are represented by a bold border, and are in this situation: "Average comfort", "Good comfort", "Positive payoff", and "Bias". Finally, the judgment function takes these arguments, identifies those in the F_p and in the F_c, and potentially compares them to the total number of pros and cons arguments in the original graph. For example, we can see that 2 out of the 3 possible pros arguments are in the grounded extension: "Not buy energy" was disabled because of the situation. None of the cons arguments are in the grounded extension. Thus, the judgment yields a reward based on these counts.

E Unique contexts and occurrences
We have described in Section 6.4 a "naive" definition for context, which is based on the exploration profiles and their state discretization. Let us recall that each profile has its own State-(D)SOM that is used to discretize a continuous observation vector into a discrete state identifier. The "naive" definition simply takes the list of state identifiers, i.e., the state identifier determined by each exploration profile, as the context. For example, such a context could be [35,23,43,132,12], meaning that the first profile has discretized the observations as state 35, the second profile as 23, etc. However, we also mentioned that this naive method yielded too many contexts: we expand this here and detail the number of times each unique context appeared. Figure E.1 shows the proportion and count (respectively on the left-side and right-side Y axes) of contexts that appeared at least X times in the simulation. The first information we can grasp from this figure is the total number of unique contexts, which is 1,826. This is an improvement, when compared to the 10,000 time steps of the simulation, yet it is too much for a human user: it means we have to ask 1,826 times during the simulation. The second information is that most of the contexts appear very rarely: 55% of contexts have a single occurrence. About 30% of contexts appear more than 3 times, 10% appear more than 10 times, and so on. Although one context managed to occur about 400 times, the vast majority of contexts are rare. This means that the human user would spend time setting preferences, only for a small effect. The mean number of occurrences was around 5.48. On the contrary, the human-based definition, which relies on the human user to define the bounds of a context when a new context is necessary, has managed to create fewer unique contexts. This is highlighted by Figure E.2, which shows a plot with a similar structure as the previous one. However, in this case, we have few contexts: only 42 unique contexts, which is a huge improvement over the 1,826 naive ones. Also, many contexts appeared several times: more than 80% of them appeared at least 10 times, about 20% appeared more than 200 times, and approximately 10% appeared more than 700 times. The mean number of occurrences, with this definition, was 238.1. This very high mean gives us another piece of information: contrary to the previous case, where the majority of time steps were associated to various contexts that only appeared once or twice, with this new definition, the majority of steps are captured by only a few contexts (fewer than 4) that appear thousands of times.
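The comparison between the two definitions can be reproduced with a few lines of Python; the sketch below counts, for the "naive" definition, how many unique contexts appear and how often, assuming we already have the list of discretized states per profile at each time step. The data here is random, so the counts are only illustrative; in a real simulation, observations are correlated over time and far fewer unique contexts would appear.

from collections import Counter
import numpy as np

rng = np.random.default_rng(42)
n_steps, n_profiles, n_states = 10_000, 5, 144

# Naive definition: the context is the tuple of state identifiers,
# one per exploration profile, at each time step.
contexts = [tuple(rng.integers(0, n_states, n_profiles)) for _ in range(n_steps)]
occurrences = Counter(contexts)

print("unique contexts:", len(occurrences))
print("mean occurrences:", n_steps / len(occurrences))
single = sum(1 for c in occurrences.values() if c == 1)
print("proportion appearing once:", single / len(occurrences))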
Fig. 2.1.: Architecture of the Ethical Layer, extracted from Bremner et al. (2019).
Fig. 2.2.: An example architecture of Hybrid (Deep) Reinforcement Learning, proposed by Garnelo et al. (2016).
Fig. 3.1.: Representation of a Socio-Technical System, in which multiple humans construct and interact with a system.
Fig. 3.2.: Conceptual architecture of our approach, composed of 3 parts in interaction: the learning part, the judging part, and the handling of dilemmas.
Figure 3.3 shows the need, for each hour in a year, for these 3 profiles.
Fig. 3.3.: Energy need for each of the building profiles, for every hour of a year, from the OpenEI dataset.
Fig. 3.4.: A Smart Grid as per our definition. Multiple buildings (and their inhabitants) are represented by artificial agents that learn how to consume and exchange energy to improve the comfort of the inhabitants. Buildings have different profiles: Household, Office, or School.
Fig. 4.1.: Architecture of the Q-SOM and Q-DSOM algorithms, which consist of a decision and a learning process. The processes rely on a State-(D)SOM, an Action-(D)SOM, and a Q-Table.
Figure 4.2 summarizes and illustrates the training of a SOM. The blue shape represents the data distribution that we wish to learn, from a 2D space for easier visualization. Typically, data would live in higher-dimension spaces. Within the data distribution, a white disc shows the data point that is presented to the SOM at the current iteration step. SOM neurons, represented by black nodes, and connected to their neighbors by black edges, are updated towards the current data point. Among them, the Best Matching Unit, identified by an opaque yellow disc, is the closest to the current data point, and as such receives the most important update. The closest neighbors of the BMU, belonging to the larger yellow transparent disc, are also slightly updated. Farther neurons are almost not updated. The learned SOM is represented on the right side of the figure, in which neurons correctly cover the data distribution.
Fig. 4.2.: Training of a SOM, illustrated on several steps. Image extracted from Wikipedia.
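As a rough illustration of this training procedure, the following Python sketch implements a single SOM update step: find the Best Matching Unit, then move it and its grid neighbors towards the data point. The learning rate and neighborhood radius values are arbitrary assumptions chosen only for the example.

import numpy as np

def som_update(neurons, positions, x, lr=0.5, radius=1.5):
    # neurons:   (n, d) array of neuron weight vectors.
    # positions: (n, 2) array of neuron coordinates on the 2D grid.
    # x:         (d,) data point presented at this iteration.
    # Best Matching Unit: the neuron closest to the data point.
    bmu = int(np.argmin(np.linalg.norm(neurons - x, axis=1)))
    # Neighborhood coefficient decreases with the grid distance to the BMU.
    grid_dist = np.linalg.norm(positions - positions[bmu], axis=1)
    h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
    # Move every neuron towards x, proportionally to its neighborhood coefficient.
    neurons += lr * h[:, None] * (x - neurons)
    return bmu

# Example: a 3x3 grid of neurons learning 2D points.
grid = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
weights = np.random.default_rng(0).random((9, 2))
som_update(weights, grid, np.array([0.2, 0.8]))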
Decision process (excerpt):
Function decision():
    Data: U the neurons in the State-(D)SOM; U_i the vector associated to neuron i in the State-(D)SOM; W the neurons in the Action-(D)SOM; W_j the vector associated to neuron j in the Action-(D)SOM; Q(s, a) the Q-value of action a in state s; τ the Boltzmann temperature; a noise control parameter.
    Input: observations o. Output: an action a.
    /* Determine the Best Matching Unit, the closest neuron of the State-(D)SOM to the observations */
    s ← argmin_{i ∈ U} ||o − U_i||
    /* Choose an action identifier using Boltzmann probabilities */
Fig. 4.3.: Dataflow of the Q-(D)SOM decision process.
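A minimal Python sketch of this decision step, assuming the State-(D)SOM and Action-(D)SOM neurons and the Q-Table are given as arrays, could look as follows. The parameter values and the clipping of actions to [0, 1] are assumptions made for the example; the actual implementation may differ.

import numpy as np

def decision(obs, state_neurons, action_neurons, q_table, tau=0.4, noise=0.06, rng=None):
    rng = rng or np.random.default_rng()
    # 1) Hypothesize the state: Best Matching Unit of the State-(D)SOM.
    s = int(np.argmin(np.linalg.norm(state_neurons - obs, axis=1)))
    # 2) Choose an action identifier with Boltzmann probabilities over Q(s, .).
    prefs = q_table[s] / tau
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    j = int(rng.choice(len(probs), p=probs))
    # 3) Take the action prototype and randomly noise its parameters to explore.
    action = action_neurons[j] + rng.normal(0.0, noise, size=action_neurons.shape[1])
    return s, j, np.clip(action, 0.0, 1.0)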
Adaptability1: A reward function that simulates a change in its definition after 2000 time steps, as if society's ethical mores had changed. During the first 2000 steps, it behaves similarly to the Over-Consumption reward function, whereas for later steps it returns the mean of the Over-Consumption and Equity rewards.

R_{ada1}(agent) = \begin{cases} R_{oc}(agent) & \text{if } t < 2000 \\ \frac{R_{oc}(agent) + R_{eq}(agent)}{2} & \text{otherwise} \end{cases}

Adaptability2: Similar to Adaptability1, this function simulates a change in its definition.
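These switching reward functions are straightforward to express in code. The sketch below assumes that r_oc, r_eq, and r_comfort are stand-in callables for the Over-Consumption, Equity, and Comfort reward components; the thresholds 2000 and 6000 follow the definitions given in this chapter.

def r_ada1(agent, t, r_oc, r_eq):
    # Definition change at t = 2000.
    return r_oc(agent) if t < 2000 else (r_oc(agent) + r_eq(agent)) / 2

def r_ada2(agent, t, r_oc, r_eq, r_comfort):
    # Two successive definition changes, at t = 2000 and t = 6000.
    if t < 2000:
        return r_oc(agent)
    if t < 6000:
        return (r_oc(agent) + r_eq(agent)) / 2
    return (r_oc(agent) + r_eq(agent) + r_comfort(agent)) / 3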
Fig. 4.4.: The agents' needs for every hour of the day in the daily profile.
Figure 4.6 shows the evolution of individual rewards received by agents over the time steps, in the annual / small scenario, using the adaptability2 reward function. We chose to focus on this combination of scenario and reward function as they are, arguably, the most interesting. Daily scenarii are perhaps too easy for the agents as they do not include as many variations as the annual; additionally, small scenarios are easier to visualize and explore, as they contain fewer agents than medium scenarios. Finally, the adaptability2 is retained for the same arguments that made us choose it for the hyperparameters search. We show a moving average of the rewards in order to erase the small and local variations to highlight the larger trend of the rewards' evolution.
Fig. 4.6.: Agents' individual rewards received per time step, over 10 runs in the annual / small scenario, and using the adaptability2 reward function. Rewards are averaged in order to highlight the trend.
Fig. 5.3.:The judgment process of logic-based judging agents, which produces a reward as a scalar value for a learning agent l. This process is duplicated for each learning agent. The symbolic judgments are then transformed as numbers through the feedback function F, and finally averaged to form a scalar reward r l .
Example 5.1. Let us imagine a learning agent l that receives an observation vector o_l = [0.27, 0.23, 0.35, 0.29, 0.51, 0.78, 0.47, 0.64, 0.95, 0.65]. A judging agent will thus generate the following beliefs when judging l: storage(0.27), comfort(0.23), payoff(0.35), hour(0.29), etc.
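A minimal sketch of this belief-generation step is given below; the belief names simply follow Example 5.1 and are assumptions about the mapping between observation dimensions and symbols, not the exact implementation.

def beliefs(obs, names=("storage", "comfort", "payoff", "hour")):
    # Turn the first observation dimensions into symbolic beliefs, as in Example 5.1.
    return [f"{name}({value})" for name, value in zip(names, obs)]

print(beliefs([0.27, 0.23, 0.35, 0.29]))
# ['storage(0.27)', 'comfort(0.23)', 'payoff(0.35)', 'hour(0.29)']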
5.2 Designing a reward function through logic rules
Judgment process (excerpt):
foreach learning agent l ∈ L do
    /* Perform the symbolic judgment of all judging agents */
    foreach judging agent j ∈ J do
        B ← beliefs(o_l)
        Judgment_j(l) ← ∅
        foreach dimension i ∈ [[1, d]] do
            Judgment_j(l)_i ← ME(B, a_{l,i})
    foreach judging agent j ∈ J do
        if at least one moral or immoral valuation in Judgment_j(l) then
            moral ← number of moral valuations in Judgment_j(l)
            immoral ← number of immoral valuations in Judgment_j(l)
    r_l ← average({∀ f_{l,j} ∈ f_l})
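The feedback step, which turns a judge's symbolic valuations into a number and then averages over judges, can be sketched as below. The exact feedback function F used in the manuscript may be defined differently (e.g., a difference rather than a ratio); this is only the "count positive versus negative symbols" idea, written out as an assumption.

def feedback(judgment):
    # judgment: list of valuations in {"moral", "immoral", "neutral"}.
    moral = judgment.count("moral")
    immoral = judgment.count("immoral")
    if moral + immoral == 0:
        return None  # this judge provides no feedback
    return moral / (moral + immoral)

def reward(judgments):
    # Average the feedbacks of all judging agents into a scalar reward.
    feedbacks = [f for f in map(feedback, judgments) if f is not None]
    return sum(feedbacks) / len(feedbacks) if feedbacks else 0.0

reward([["moral", "neutral", "immoral"], ["moral", "moral"]])  # 0.75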
Fig. 5.4.: Simple example of an argumentation graph that contains 5 arguments (a, b, c, d, and e), represented by nodes, and 6 attack relations, represented by edges. Argument a attacks b, b attacks a and c, c attacks e, d attacks c, and e attacks a.
Example 5.5 (Attacking and defending arguments). Let us consider an argumentation graph composed of 3 arguments. The first one, arg1, says "The agent did not consume [...]"
Algorithm 5.2: Argumentation-based judgment process
Function judgment():
    Input: observations o_l; action parameters a_l chosen by the learning agent.
    Output: g_agr({∀ f_{l,j} ∈ F(l)})
Fig. 5.5.:The judgment process of the AJAR framework, which leverages argumentation graphs to determine appropriate rewards. The argumentation graphs are first filtered based on the situation and the taken action to remove arguments that do not apply. Then, the reward is determined by comparing remaining pros and cons arguments.
Fig. 5.7.: Results of the learning algorithms on 10 runs for each scenario, when using the argumentation-based judging agents. To simplify the plot, Scn (Scenario) regroups EnvironmentSize and ConsumptionProfile; Fcts (Functions) combines the choice of judgment and aggregation functions.
Fig. 6.1.: Architecture of the multi-objective contribution on the identification and settling of dilemmas by leveraging human contextualized preferences.
Example 6.1 (Exploration profiles weights). Let us consider 4 moral values, as described in the Smart Grid use-case in Section 3.4.5 and in the experiments of the previous chapter [...]
Definition 6.3 (Ethical threshold). An ethical threshold is a vector ζ ∈ Z = [0, 1]^m, where m is the number of moral values. Each component ζ_i, ∀i ∈ [[1, m]], [...]
Algorithm 6.2: Decision process during the deployment phase (excerpt)
Function decision():
    Data: L the set of learning agents; T the number of time steps; Contexts_l the map of contexts to actions learned by agent l.
    /* Determine which optimal actions are acceptable w.r.t. the ethical thresholds */
    acceptables ← { (p, a) ∈ optimal | ∀i ∈ [[1, m]] : Q_p(States_p(o_l,t), a)_i / Q_p^theory(States_p(o_l,t), a)_i ≥ ζ_i }
    if |acceptables| ≥ 1 then
        /* Not a dilemma, let us take the acceptable action with maximum sum of interests */
        a ← max(acceptables)
    else
        /* This is a dilemma, try to find an existing suitable context */
        context ← null
        forall (c, a) ∈ Contexts_l do
            if ∀k ∈ [[1, g]] : c[b_k] ≤ o_l,t,k ≤ c[B_k] then
                /* Context c recognizes the current situation */
                ...
        if context = null then
            /* No context has been found, ask the user */
            context ← ask_bounds(o_l,t)
            /* Remove actions which have parameters closer than 3% on each dimension */
            optimal' ← { (p, a) ∈ optimal | ∄ (p', a') ∈ optimal' : ∀k ∈ [[1, d]] : |Actions_p(a)_k − Actions_p'(a')_k| ≤ 0.03 × |Actions_p'(a')_k| }
            a ← ask_action(optimal')
Summarizing the algorithm: [...]
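The "3% filter" used before asking the user can be sketched in Python as below. This greedy version keeps an action only if it differs by more than the tolerance, on at least one dimension, from every action already kept; the original algorithm compares pairs within the Pareto front, so this is a simplification made for illustration.

import numpy as np

def filter_similar(actions, tol=0.03):
    kept = []
    for a in actions:
        a = np.asarray(a, dtype=float)
        # An action is "too close" if all its parameters are within tol (relative)
        # of some already kept action.
        too_close = any(
            np.all(np.abs(a - k) <= tol * np.abs(k) + 1e-12) for k in kept
        )
        if not too_close:
            kept.append(a)
    return kept

filter_similar([[0.50, 0.20], [0.505, 0.201], [0.80, 0.20]])
# keeps [0.50, 0.20] and [0.80, 0.20]; the middle action is removed as near-duplicate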
Fig. 6.2.: The graphical interface used to create a new context. The current situation, represented by an observation vector's values, is shown at the top. The bottom part contains sliders that the user must set up to define the desired bounds.
Fig. 6.3.: Interface to compare actions by their respective parameters. The dashed line represents the mean, for each parameter, of the proposed alternatives.
Fig. 6.4.: Interface to compare actions by their respective interests. The dashed line represents the mean, for each interest, of the proposed alternatives. The thick black line represents the theoretical maximum for this specific action on the specific interest.
Fig. 6.6.: Number of times each action was selected in each state.
Fig. 6.7.: Distribution of the standard deviations of actions' number of times selected, for each state. The curve represents a kernel density estimation.
Fig. D.1.: Argumentation graph of the Affordability moral value.
Fig. E.1.: Count and proportion (Y axes) of contexts appearing at least X times, using the "naive" definition, i.e., taking the list of discretized states by exploration profiles as the context.
Fig. E.2.: Count and proportion (Y axes) of contexts appearing at least X times, using the "human" definition, i.e., asking the user to set context bounds.
RQ1: How to learn behaviours aligned with moral values, using RL, with continuous domains, in a MAS, such that agents are able to adapt to changes?
RQ2: How to guide the learning through agentification of reward functions, based on several moral values?
RQ3: How to learn to address dilemmas in situations, according to users' contextualized preferences, with multiple moral values in various situations?
Objectives: O1.1: Diversity of ethical stakes; O1.2: Shifting ethical mores; O2.1: Learn non-dilemmas; O2.2: Learn dilemmas; O2.3: Reward specification.
Mapping between research questions, contributions, and objectives: RQ1 is addressed by contribution C1 (objectives O1.1, O1.2, O2.1); RQ2 by C2 (objectives O1.1, O2.3); RQ3 by C3 (objectives O1.1, O2.2).
C1: RL algorithms; Learning behaviours; Continuous domains; Adaptation. C2: Symbolic judgments; Agentification of reward functions; Multiple judges. C3: Multi-Objective RL; Dilemmas; Interaction with users.
2.1.4 Top-Down, Bottom-Up, and Hybrid approaches
Approach type is probably the most discussed property in Machine Ethics. It characterizes the way designers implement ethical considerations into artificial agents. Similarly to the usual classification in AI, works are divided into 3 categories [START_REF] Allen | Artificial morality: Top-down, bottom-up, and hybrid approaches[END_REF]: Top-Down, Bottom-Up, and Hybrid approaches.
The Q-Values can be learned by iteratively collecting experiences from the environment, in the form of ⟨s, a, s', r⟩ tuples, updating the interest Q(s, a) based on both the short-term reward r, and the long-term interest V(s') of arriving in state s'. Mathematically, this can be solved through dynamic programming, by applying the Bellman equation on the Q-Values.
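For reference, the classic Q-Learning update derived from the Bellman equation can be written as follows; this is the textbook form, and the exact Equation (2.1) referenced later in the manuscript may use a different notation.

\[
Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]
\]

where α is the learning rate and γ the discount factor controlling the horizon of rewards.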
Tab. 2.1.: List of multi-agent approaches focusing on Sequential Decision Tasks or Stochastic Games, and the observability they require. Adapted from Hernandez-Leal et al. (2019); see the original, full table for additional details. The table groups algorithms by category (Ignore, Forget, Target, Learn, Theory of mind) and lists, for each algorithm (e.g., Q-learning, R-MAX, RL-CD, ζ-R-MAX, WOLF-IGA, WOLF-PHC, RMM, FAL-SG, I-POMDP, MToM, Minimax-Q, Nash-Q, HM-MDPs, FF-Q, EXORL, Hyper-Q, Correlated-Q, NSCP, ORDP, Pepper, MDP-A, BPR, HS3MDPs, OLSI), the observability it requires: local observations only, opponent actions, or opponent actions and rewards.
4.2 Q-Table
The Q-Table is the central component of the well-known Q-Learning algorithm [START_REF] Watkins | Q-learning[END_REF]. It is tasked with learning the interest of a state-action pair, i.e., the expected horizon of received rewards for taking an action in a given state. As we saw in Section 2.2, the Q-Table is a tabular structure, where rows correspond to possible states, and columns to possible actions, such that the row Q(s, •) gives the interests of taking every possible action in state s, and, more specifically, the cell Q(s, a) is the interest of taking action a in state s. These cells, also named Q-Values, can be learned iteratively by collecting experiences of interactions, and by applying the Bellman equation.
    /* Randomly noise the action's parameters to explore */
    for k ∈ all dimensions of W_j do
        /* The random distribution can be either uniform or normal */
        noise ∼ random()
        W_j,k ← W_j,k + noise
    end
    return a ← W_j

The Q-Values are then updated (line 9). To do so, we rely on the traditional Bellman equation that we presented in the State of the Art: Equation (2.1). However, Smith's algorithm introduces a difference in this equation to increase the learning speed. Indeed, the State- and Action-(D)SOMs offer additional knowledge about the states and actions: as they are discrete identifiers mapping to continuous vectors in a latent space, we can define a notion of similarity between states (resp. actions) by measuring the distance between the states' vectors (resp. actions' vectors). Similar states and actions will most likely have a similar interest, and thus each Q-Value is updated at each time step, instead of only the current state-action pair, by taking into account the neighborhood of the current state and the chosen action in the (D)SOM grids.

4.4 The Q-SOM and Q-DSOM algorithms
Algorithm 4.2: Learning algorithm (header)
Function learning():
    Data: U, W the neurons of the State- and Action-(D)SOMs; U_u, W_w the vectors associated to neurons u and w; P_U(u), P_W(w) the positions of neurons u and w in the (D)SOM grids; Q(s, a) the Q-value of action a in state s; η_U, η_W the elasticity of the State- and Action-(D)SOMs; α_Q, α_U, α_W the learning rates for the Q-Table, State-, and Action-(D)SOMs; γ the discount factor.
    Input: previous observations o; new observations o'; received reward r; state hypothesis s; chosen action identifier j; chosen action parameters a.
    /* Compute the neighborhood of neurons */
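The idea of updating every Q-Value, weighted by how much its state and action resemble the current ones, can be sketched as follows. The neighborhood coefficients psi_u and psi_w are assumed to be precomputed from the (D)SOM grids; the exact update of the original algorithm may differ in its weighting scheme.

import numpy as np

def neighborhood_q_update(q, psi_u, psi_w, r, q_next_max, alpha=0.6, gamma=0.9):
    # q:          (n_states, n_actions) Q-Table.
    # psi_u:      (n_states,) neighborhood coefficients w.r.t. the current state.
    # psi_w:      (n_actions,) neighborhood coefficients w.r.t. the chosen action.
    # r:          received reward; q_next_max: max over actions of Q(s', .).
    target = r + gamma * q_next_max
    # Outer product: how much each (state, action) pair resembles the current pair.
    weights = np.outer(psi_u, psi_w)
    q += alpha * weights * (target - q)
    return q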
The Adaptability2 reward function is defined as:

R_{ada2}(agent) = \begin{cases} R_{oc}(agent) & \text{if } t < 2000 \\ \frac{R_{oc}(agent) + R_{eq}(agent)}{2} & \text{else if } t < 6000 \\ \frac{R_{oc}(agent) + R_{eq}(agent) + R_{comfort}(agent)}{3} & \text{otherwise} \end{cases}
Tab. 4.1.: Best hyperparameters on 10 runs for the Q-SOM algorithm, using the annual small scenario and adaptability2 reward function.
Tab. 4.2.: Best hyperparameters on 10 runs for the Q-DSOM algorithm, using the annual small scenario and adaptability2 reward function.
Tab. 4.3.: Best hyperparameters on 10 runs for the DDPG algorithm, using the annual small scenario and adaptability2 reward function.
Tab. 4.4.: Best hyperparameters on 10 runs for the MADDPG algorithm, using the annual small scenario and adaptability2 reward function.
These tables report, for each algorithm, the retained value and a short description of each hyperparameter: the shapes, learning rates and elasticities of the State- and Action-(D)SOMs, the Q-Values learning rate, the discount rate, the Boltzmann temperature, and the action perturbation method (e.g., gaussian) and its noise parameter for the (D)SOM-based algorithms; and the batch size, buffer size, actor and critic learning rates, discount rate, tau (target network update rate), noise, and epsilon for DDPG and MADDPG.
Fig. 4.5.: Distribution of scores per learning algorithm, on every scenario, for 10 runs with each reward function.
4.6.2 Comparing algorithms
Tab. 4.5.: Average score for 10 runs of each algorithm (QSOM, QDSOM, DDPG, MADDPG), on each reward function (equity, overconsumption, multiobj-sum, multiobj-prod, adaptability1, adaptability2) and each scenario (daily/small, daily/medium, annual/small, annual/medium). The standard deviation is shown inside parentheses.
Fig. 5.1.: Architecture of the symbolic judgment. The environment and learning agents are the same as in the previous chapter, but the rewards' computation is moved from the environment to newly introduced judging agents, which rely on explicitly defined moral values and rules.
5.1 Overview
Both designs, the logic-based and argumentation-based judging agents, are detailed below, as well as several experiments that demonstrate learning agents are able to correctly learn from such rewards.
5.3 Designing a reward function through argumentation
We now introduce the extensions used by the judgment algorithm; they are paraphrased from Caminada (2007, see Definition 3, p.2) to be slightly more explained.
Definition 5.4 (Extensions). Let us consider Args a set of arguments within an Argumentation Framework, and SArgs ⊆ Args a subset of these arguments. The extensions are defined as follows:
• SArgs is an admissible extension if and only if SArgs is conflict-free, and all arguments A ∈ SArgs are acceptable, with respect to SArgs.
• SArgs is a complete extension if and only if SArgs is admissible, and contains all acceptable arguments with respect to SArgs. In other words, ∀A ∈ Args, if SArgs defends A, then we must have A ∈ SArgs.
• SArgs is a grounded extension if and only if SArgs is a minimal complete extension, with respect to ⊆, i.e., there is no SArgs' ⊆ Args such that SArgs' ⊊ SArgs and SArgs' is a complete extension.
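The grounded extension can be computed iteratively: repeatedly accept every argument whose live attackers have all been defeated, and mark as defeated every argument attacked by an accepted one. The following Python sketch, written as an assumption about one possible implementation, illustrates this on the graph of Figure 5.4.

def grounded_extension(args, attacks):
    # args:    iterable of argument identifiers.
    # attacks: set of (attacker, attacked) pairs.
    args = set(args)
    grounded, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args - grounded - defeated:
            attackers = {x for (x, y) in attacks if y == a and x not in defeated}
            if not attackers:  # every attacker of a is defeated: a is defended
                grounded.add(a)
                changed = True
        # Arguments attacked by an accepted argument are defeated.
        newly_defeated = {y for (x, y) in attacks if x in grounded} - defeated
        if newly_defeated:
            defeated |= newly_defeated
            changed = True
    return grounded

# Example from Figure 5.4: a<->b, b->c, c->e, d->c, e->a.
attacks = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "e"), ("d", "c"), ("e", "a")}
grounded_extension("abcde", attacks)  # {'b', 'd', 'e'}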
6.3 Identifying dilemmas
Now that we have the theoretical interests, and the ethical thresholds, we may define what is an acceptable action.
Definition 6.4 (Acceptable action). An action (p, a) ∈ (P, A), where p is an exploration profile and a an action identifier, is deemed acceptable if its interests, compared to the theoretical interests, in a given situation represented by the observations o, attain the ethical thresholds on all moral values. Formally, (p, a) is acceptable if and only if ∀i ∈ [[1, m]] : Q_p(States_p(o), a)_i / Q_p^theory(States_p(o), a)_i ≥ ζ_i.
6.4 Learning user preferences
For example, profile p_1 might consider that the situation corresponds to state s_3, whereas profile p_2 might say it corresponds to s_1. As we combine proposed actions from all exploration profiles, and proposed actions are determined from the Q-Table, based on the discrete state, we need to consider the states discretized by all exploration profiles, thus leading to a combination of states. This combination of states effectively describes the current situation, for each exploration profile. An unambiguous combination could be, e.g., [s_3, s_1, s_12, s_44, s_5], where s_3 is the discrete state from profile p_1, s_1 from profile p_2, etc., with a total of 5 profiles. Under this definition, another combination [s_3, s_1, s_12, s_44, s_22] would be a different context, because one of the exploration profiles deemed the situation as a different state, even though all other states are the same. Note that, if the discrete states are exactly the same for all exploration profiles, then we are guaranteed to have the same proposed actions, and thus the same Pareto Front of optimal actions. Two dilemmas that have exactly the same combination of states would have the same proposed actions, which makes it easier to "settle them in the same manner", as our vague definition of context puts it. Regardless of the action selection learned as a preference for this context, we are guaranteed to find the same action in both dilemmas. More formally, this definition of context can be represented as Context(o) = [∀p ∈ P | States_p(o)]. Its 2 most important advantages are its simplicity to compute [...]
Definition 6.6 (Context). A context is a set of bounds, both minimal and maximal, for each dimension of the observation space O_l. Formally, a context is defined as c ∈ C = [(b_1, B_1), ..., (b_g, B_g)], where g is the number of dimensions of the observation space (O_l ⊆ R^g), b_k is the lower bound for dimension k, and B_k is the upper bound for dimension k. A context recognizes a situation in a dilemma, represented by its observation vector o_l, if and only if ∀k ∈ [[1, g]] : b_k ≤ o_l,k ≤ B_k.
In the learning step of Algorithm 6.1, the Q-Table is updated (line 19), and the Q-theoretical values are updated thanks to Equation (6.5) (line 20).
Algorithm 6.1: Learning process during the bootstrap phase
1 Function learning():
Data:
L the set of learning agents
T the number of time steps
N l,s,j the number of times action j was selected in a state s
ρ the agent's exploration profile U , W State-(D)SOM and
Action-(D)SOM neurons
U i , W j vector associated to neuron i (resp. j) of the
State-(D)SOM (resp. Action-(D)SOM)
2 for t = 1 to T do
/* All agents choose an action to explore */
3 for l ∈ L do
4 o l,t ← observe(env, t, l)
/* Discretize state */
5 s t ← argmin i ||o l,t -U i ||
/* Choose action based on number of times enacted instead of
interests */
6 j ← choose(N l,s )
7 N l,s,j ← N l,s,j + 1
/* Random noise to explore the action space */
8 a l,t ← noise(W j )
9 execute(env, a l,t )
10 end
/* Agents learn and update their data structures */
11 for l ∈ L do
12 o l,t+1 ← observe(env, t + 1, l)
13 r l,t ← reward(env, t + 1, l)
/* Compute neighborhood of the (D)SOMs */
14 ψ U ← neighborhood(U , s, o l,t )
15 ψ W ← neighborhood(W, j, a l,t )
/* If the noised action is interesting */
16 if r_t · ρ + γ max_{j'}(ρ · Q(s', j')) > ρ · Q(s, j) then
17 Update the Action-(D)SOM using ψ W
18 end
/* Update Q-table and Q-theoretical */
19
Tab. 7.1.: Recapitulative table of objectives; each one is tackled by at least one contribution.
Objective: Contribution(s)
O1.1 Diversity: C1 + C2 + C3
O1.2 Ethical consensus shifting: C1
O2.1 Learning non-dilemmas: C1
O2.2 Learning dilemmas: C3
O2.3 Reward specification: C2
As previously, we show the "rules" for judgment, this time in the form of argumentation graphs, represented by arguments as nodes, and attacks as edges. Arguments can be either a pro-argument, in the F_p, a con-argument, in the F_c, or a neutral argument. Let us also recall that arguments can be activated or not, based on the context. We also show in Figure D.5 an example of the judgment process using argumentation graphs.
D Argumentation graphs
Figures D.1 to D.4 show the argumentation graphs of the Affordability, Environmental sustainability, Inclusiveness, and Security of supply moral values. The Affordability graph (Figure D.1) contains, among others, the arguments "Over-consume 10%", "Over-consume 20%", "Over-consume 30%", "Buy energy 10%", "Buy energy 30%", "Buy energy 50%", "Buy energy 75%", "Not buy energy", "Average comfort", "Good comfort", "Bad comfort", "Positive payoff", and "Bias", each marked as a pro, con, or neutral argument in the legend.
Tab. B.1.: Comparison of hyperparameters' values for the Q-SOM algorithm (best score per value for each hyperparameter, e.g., input_som_lr, action_som_lr).
Also called the "King Midas problem" -getting exactly what you asked for, but not what you intended. See for example this blog post from Deep Mind for details and examples.
We use value here in a different sense than the moral value used earlier. To avoid confusion, we will always specify moral value when referring to this first meaning.
This multi-agent aspect starts on a single level in the first contribution, but becomes multi-level due to the addition of judging agents in the second contribution. This supports another important principle of our methodology, although we could not specify this as an objective of this thesis: the co-construction of agents. We did not have time to fully embrace co-construction, which is why it is not defined as an objective; nonetheless, we kept this notion in mind when designing our contributions. By co-construction, we mean that two different agents, which could be both human agents, both artificial, or even a mixed human-artificial combination, learn from each other and improve themselves through interaction with the other. That idea of co-construction is why we present our second contribution as "agentifying" the reward function: by making judging agents instead of simply judging functions, we pave the way for co-construction. Constructive relationships already occur in one of the two directions, as the learning agents improve their behaviour based on the signals sent by judging agents. But, in a larger scope, we could imagine judging agents also learning from learning agents, for example adapting their judgment behaviour so as to help the learners, in some sort of adversarial or active learning, thus enabling construction in the second direction and achieving co-construction. Another possibility of co-construction would be to make humans reflect on their own ethical considerations and preferences, through interaction with learning and judging agents. In this case, learning agents could act as some sort of simulation: for example, the human users could try different preferences and discover the impact of their own preferences on the environment. Again, we emphasize that we did not have time to explore all these interesting research avenues. Yet, to keep co-construction attainable in potential future extensions, we tried to design our contributions with this notion in mind.
A list of dilemmas can be found on the now defunct DilemmaZ database, accessible through the Web Archive: https://web.archive.org/web/20210323073233/https://imdb.uib.no/dilemmaz/artic les/all
Many thanks to one of the researchers with whom I discussed at the Arqus conference for this specific example!
Acknowledgments
I will (try to) keep this short, as the manuscript is already long enough.
∀i ∈ [[1, m]] : Q_p(States_p(o), a)_i / Q_p^theory(States_p(o), a)_i ≥ ζ_i
In this formula, Q_p, States_p are the functions from exploration profiles given in Definition 6.1; Q_p^theory values are computed through Equation (6.5); ζ are the ethical thresholds as specified in Definition 6.3.
We see that acceptable actions depend on the user-specified ethical thresholds; additionally, as the theoretical interests are by construction greater than or equal to the interests, the ratio is a value ∈ [0, 1] that can be easily compared with the thresholds. Thus, an ethical threshold of ζ = [0.8, 0.75] might be read as: "An action is acceptable if its interest with respect to the first moral value is at least 80% of the maximum attainable, and its interest for the second moral value at least 75%".
From this, we can finally define a dilemma. Definition 6.5 (Dilemma). A situation is said to be in a dilemma if none of the actions in the Pareto Front is acceptable with respect to a given ethical threshold. More formally, we define a dilemma as a tuple (o, ζ, optimal) ∈ O × Z × 2^(P×A), where o is the observation vector representing the situation, ζ is the user-given ethical threshold, and optimal is the Pareto Front of actions for the given situation, such that optimal = PF(o) as defined in Equation (6.4).
This formal definition of dilemmas can be computed and used automatically by learning agents, while relying on human users' preferences for the ethical threshold. An advantage is to offer configurability: some might consider that most actions are acceptable, thus letting agents choose in their place most of the time, whereas others might specify a higher threshold, thus identifying more situations as dilemmas and forcing agents to ask for their preferences concerning which action to take. As the ethical thresholds are specified individually by user, several agents with various behaviours may coexist in the system.
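Definitions 6.4 and 6.5 translate directly into a short acceptability test and dilemma check, sketched below in Python. The data layout (pairs of interest vectors and theoretical interest vectors) is an assumption made for the example.

import numpy as np

def is_acceptable(interests, theoretical, thresholds):
    # Definition 6.4: acceptable if, on every moral value, the interest reaches
    # the required fraction of the theoretical maximum.
    interests, theoretical = np.asarray(interests), np.asarray(theoretical)
    ratios = interests / np.maximum(theoretical, 1e-12)
    return bool(np.all(ratios >= np.asarray(thresholds)))

def is_dilemma(pareto_front, thresholds):
    # Definition 6.5: dilemma if no action of the Pareto Front is acceptable.
    # pareto_front is a list of (interests, theoretical) pairs.
    return not any(is_acceptable(i, t, thresholds) for i, t in pareto_front)

front = [([0.7, 0.9], [1.0, 1.0]), ([0.9, 0.6], [1.0, 1.0])]
is_dilemma(front, thresholds=[0.8, 0.75])  # True: no action reaches both thresholds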
Learning user preferences
Once we know how to identify dilemmas, the next and final step mentioned in Section 6.1 is to settle them, i.e., to choose an action. This action cannot be the best, otherwise we would not be in a dilemma, and it reflects some trade-off between several, conflicting moral values. We believe and defend that these trade-offs must be settled by human users.
A Mathematical notations and symbols
We list here both classic mathematical notations, e.g., R, and some symbols specific to our work, e.g., s.
R : the "reals", or set of all real numbers.
t : a time step.
|X| : depending on the context, either the cardinality of a set X, or the absolute value of a real X.
L : the set of learning agents.
A l : the action space of a learning agent l, typically A l ⊆ R d .
d : number of dimensions of the agent's action space.
a : an action in the action space, i.e., a vector of d dimensions.
O l : the observation space of a learning agent l, typically O l ⊆ R g .
g : number of dimensions of the agent's observation space.
o : observations in the observation space, i.e., a vector of g dimensions.
γ : Discount factor.
U : set of neurons in the State-(D)SOM, i.e., the neurons that learn the observation space.
W : set of neurons in the Action-(D)SOM, i.e., the neurons that learn the action space. Q : the Q-Function, or Q-Table, which stores the interest for every state-action pair.
s : usually, a discrete state identifier.
J : the set of judging agents.
B : the space of all possible beliefs about a current situation.
B : a set of beliefs about a current situation, with B ∈ B.
V : the set of possible moral valuations, V = {moral, immoral, neutral}.
ME j : the Moral Evaluation function of a judging agent j,
Judgment j : the Judgment function of a judging agent j, Judgment : L → V d .
F : the Feedback function.
Args or AF [Args] : a set of arguments in an Argumentation Framework for Judging a Decision (AFJD).
Att or AF [Att] : a binary attack relationship in an AFJD.
F p or AF [Fp] : the set of pros arguments in an AFJD.
F c or AF [Fc] : the set of cons arguments in an AFJD.
J j : the Judgment function of a judging agent j, J j : P(AF j ) → R.
g agr : the Aggregation function, g agr : R |J | → R.
grd : the grounded extension of an AFJD.
>_Pareto : the Pareto-dominance operator: x >_Pareto y ⇔ (∀i : x_i ≥ y_i) and (∃j : x_j > y_j).
P : the set of all exploration profiles. p an exploration profile ∈ P.
S : the set of discrete state identifiers.
States : function that returns a state identifier from an observations vector, States : O → S.
A : the set of discrete action identifiers.
Actions : function that returns action parameters from a discrete action identifier, Actions : A → A l .
PF : function that returns a Pareto Front (PF) from a set of possible actions.
Z : the space of possible ethical thresholds.
C : the set of all learned contexts.
C
We present below the judgment rules for each of the moral values that were used in the experiments.
Affordability:
valueSupport(sell_energy(X, Payoff), "improve_payoff") :-X > 100.
valueDefeat(buy_energy(X, Payoff, Budget), "improve_payoff") :-X > 100 & Payoff < (-Budget).
Environmental sustainability:
valueDefeat(buy_energy(X), "promote_grid_autonomy") :- X > 100.
valueDefeat(sell_energy(X), "promote_grid_autonomy") :- X > 100.
valueDefeat(grid_consumption(X, OverConsumption), "balance_supply_demand") :- X > 100 & OverConsumption > (3 / 10).
valueDefeat(storage_consumption(X, EnergyWaste), "balance_supply_demand") :- X > 100 & EnergyWaste > (3 / 10).
valueSupport(buy_energy(X), "promote_grid_autonomy") :- X <= 100.
valueSupport(sell_energy(X), "promote_grid_autonomy") :- X <= 100.
valueSupport(give_energy(X), "promote_grid_autonomy") :- X > 0.
valueSupport(grid_consumption(X), "promote_grid_autonomy") :- X > 0.
Inclusiveness:
valueDefeat(buy_energy(X,Equity,WellBeing,GlobalWellBeing), "promote_justice") :- Equity < (9 / 10) & Threshold = (12 / 10) * GlobalWellBeing & WellBeing > Threshold.
valueDefeat(give_energy(X,Exclusion), "promote_justice") :- Exclusion > (5 / 10).
valueDefeat(grid_consumption(X,Equity,WellBeing,GlobalWellBeing), "promote_justice") :- Equity < (9 / 10) & Threshold = (12 / 10) * GlobalWellBeing & WellBeing > Threshold.
valueDefeat(sell_energy(X,Exclusion), "promote_justice") :- Exclusion > (5 / 10).
valueDefeat(sell_energy(X,Equity), "promote_justice") :- Equity < (9 / 10).
valueSupport(buy_energy(X,Exclusion), "promote_justice") :- Exclusion >= (5 / 10).
valueSupport(buy_energy(X,Equity,WellBeing,GlobalWellBeing), "promote_justice") :- Equity < (9 / 10) & Threshold = (9 / 10) * GlobalWellBeing & WellBeing < Threshold.
valueSupport(give_energy(X,Equity,Exclusion), "promote_justice") :- Equity < (9 / 10) & Exclusion < (5 / 10).
valueSupport(grid_consumption(X,Exclusion), "promote_justice") :- Exclusion >= (5 / 10).
valueSupport(grid_consumption(X,Equity,WellBeing,GlobalWellBeing), "promote_justice") :- Equity < (9 / 10) & Threshold = (9 / 10) * GlobalWellBeing & WellBeing < Threshold.
Security of supply:
valueSupport(grid_consumption(X, WellBeing, GlobalWellBeing), "promote_comfort") :- X > 100 & WellBeing >= (8 / 10).
valueSupport(storage_consumption(X, WellBeing, GlobalWellBeing), "promote_comfort") :- X > 100 & WellBeing >= (8 / 10).
"shs.scipo"
] | 2024/03/04 16:41:22 | 2009 | https://hal.science/hal-00410743v2/file/preprint_ePart_2009_ogl.pdf | Olivier Glassey
email: [email protected]
WRITING A NEW CONSTITUTION FOR GENEVA: AN ANALYSIS OF PARTICIPATION MECHANISMS
Keywords: participation, constitution, eDemocracy, framework, case study
the people of Geneva voted in favour of a new Constitution to replace the current one, written in 1847 and considered by many to be out of line with today's society. In the first part of this paper we set the context of our study and we define a framework to analyse participation and eParticipation in terms of institutional, mediated and informal political communication mechanisms. In the second part we apply it to the campaign for the election of a Constituent Assembly and we provide the preliminary results of this survey. The last part describes how we will use this framework to investigate these mechanisms during the process of writing a new Constitution. Geneva was a pioneer in terms of eVoting and we want to find out if this will be the case again in the domain of eParticipation, with what could potentially become the first Wiki-Constitution ever. However our first findings indicate that ICTs are rather an extension of current participation mechanisms and that they do not radically change or renew them.
Introduction
In this first section we describe the context in which this study takes place, i.e. the election of a Constituent Assembly to write a new Constitution for the Canton of Geneva. We furthermore explain the general political participation mechanisms in Switzerland and in Geneva.
Geneva's Constitution
Geneva has been a republic since 1535, when the city became the capital of the Protestant Reformation. The first Constitution, adopted in 1543, was largely based on the "Edits Civils" written by John Calvin. Although Geneva was a French department between 1798 (when it was invaded by Napoleon's army) and 1813, it was mostly independent until joining the Swiss Confederation in 1815.
In 1846 James Fazy led a revolution that overthrew the conservative government and subsequently wrote the 1847 Constitution that is still ruling the Canton, although it has been modified many times over that period. This text is now the oldest of the 26 Cantonal Constitutions in Switzerland and many believe that its language, structure and content are no longer adequate [START_REF] Schmitt | Quelques réflexions comparatives à propos de l'élection d'une assemblée constituante à Genève[END_REF].
In 1999 the parliamentary group of the "Parti Radical" proposed a bill in order to completely revise Geneva's Constitution, but without success. In 2005 an association called "Une nouvelle Constitution pour Genève" (a New Constitution for Geneva) was set up. Its front man was a famous law professor, Andreas Auer, and its members came from all political parties and from the civil society. They were ready to launch a popular initiative requiring a new Constitution as the government was reluctant to do so, but after long negotiations a vote was organised. In February 2008 the people of Geneva accepted a constitutional law allowing for a new Constitution in the Canton.
Election of the Constituent Assembly
In October 2008 the people of Geneva elected 80 members of the new Constituent Assembly. This was no easy task for citizens as there were 530 candidates and 18 lists to choose from. Half of these lists were presented by traditional political parties and the other nine lists represented heterogeneous interest groups (business associations, home-owners, women, retired people, and so on). Funding for the campaign was also very heterogeneous: from 5'000 Swiss Francs (about 3'300 Euros) for the women's list [START_REF]Les p'tits nouveaux de la politique genevoise[END_REF] to 200'000 (130'000 Euros) for the business associations' list [START_REF]La dure vie des petits candidats[END_REF].
The quorum for a list to be elected was initially 7%, but the Parliament lowered it to 3% in order to have a wider participation. However one cannot say that the members of this Assembly are really representative of Geneva's people: only 14 women were elected (although there will be 16 women in the Assembly because two elected men resigned in order to leave their position to women from the same left-wing party [START_REF]Deux femmes de plus à la Constituante[END_REF]) and the average age of members is 56. Furthermore only three lists outside traditional parties made the quorum:
• The lobby of pensioned people (Avivo) got 9 seats; it must be said that Christian Grobet, the leader of this list, was a member of various legislative and executive authorities in Geneva from 1967 until 2005, thus this list is not completely "outside" political parties.
• The g[e]'avance list represented business and employers' lobbies and it was attributed 6 seats.
• The FAGE (Federation of Geneva's Associations) is the umbrella organization of 480 associations of all types (parents, culture, human rights, ecology, Attac, pacifism, consumers, social integration, gays, development, etc.); the associations' list obtained almost 4% of the votes (with a quorum at 3%) and thus obtained 3 seats.
The participation rate being 33% [START_REF]élection à l'Assemblée Constituante du 19 octobre[END_REF] (about 10% less than the average participation, see below), one can conclude that giving Geneva a new Constitution was not a popular issue and that only "traditional" or "politicized" voters accomplished their electoral duties. The political balance of the Constituent Assembly is also similar to what can be seen in the Parliament of Geneva [START_REF]Députés au Grand Conseil[END_REF]: in the Parliament 57% of the seats are attributed to right-wing parties, 33% to left-wing parties and 10% to members considered as independent, whereas 43 seats of the Constituent Assembly (53%) belong to right-wing parties.
e-Voting in Geneva
Although electronic voting was not used for the Constituent Assembly, we believe it is interesting to give some details on the Geneva e-Voting project in this introduction. Swiss citizens vote 4 to 6 times a year and in some cases the participation is as low as 30%. However it must be said that in Switzerland the participation rate is calculated on the basis of all citizens over 18 and not on the basis of registered voters, as is the case in many countries. The average participation in the last 30 years is around 42%, with a record of 78% in 1992 when the citizens had to decide on whether Switzerland should join the European Economic Area. Postal vote was designed as a solution to increase these participation figures: citizens are sent the voting material at home and they have the possibility to send it back to their cantonal or communal authorities, instead of having to go to the polling station on designated week-ends. Although postal vote was already introduced in 1957 in the Canton of Vaud, voting material was sent on request only (for each vote). Most Cantons generalized this system for all citizens during the 80's and the 90's and it was first used in Geneva in 1995. Turnout increased by 20 percent and currently up to 95% of Geneva's voters use the postal vote.
In 2001, the Swiss federal government decided to test and evaluate e-voting systems. The Cantons of Geneva, Neuchâtel and Zurich were chosen to develop three separate internet voting solutions, so that they could be assessed and tailored in order to fit the 26 different legal and organizational contexts of each Swiss Cantons [START_REF] Braun | Swiss E-Voting Pilot Projects: Evaluation, Situation Analysis and How to Proceed[END_REF]. The first e-Voting test took place in January 2003 in one commune of Geneva. Currently the system is still not fully deployed: for the polls of November 30, 2008, only nine (out of 45) communes could use the system.
However, nine e-Voting sessions have been organized to date and there are some interesting outcomes: between 22% and 25% of all voters used the e-Voting system; amongst them 19% are regular voters and 56% are usually abstainers [START_REF] Chevallier | Internet Voting: Situation, Perspectives and Issues[END_REF].
Participation Mechanisms
This section provides definitions of participation, eParticipation and eDemocracy. We then build on them in order to define the analysis dimensions of our survey.
Participation and eParticipation
According to [START_REF] Sanford | Characterizing eParticipation[END_REF] eParticipation is an emerging research area which lacks a clear literature base or research approach. In their review of the field, they identified and analysed 99 articles that are considered to be highly relevant to eParticipation. [START_REF] Sanford | Characterizing eParticipation[END_REF] write in their introduction that governments seek to encourage participation in order to improve the efficiency, acceptance, and legitimacy of political processes. They identify the main stakeholders of participation as citizens, non-governmental organizations, lobbyists and pressure groups, who want to influence the political system, as well as the opinion forming processes. Various information and communication technologies (ICTs) are available for eParticipation: discussion forums, electronic voting systems, group decision support systems, and web logging (blogs). However traditional methods for citizen participation (charettes, citizens' juries or panels, focus groups, consensus conferences, public hearings, deliberative polls, etc.) are still very widely used and must be taken into account when studying eParticipation.
[6] defines eDemocracy as the use of information and communication technologies to engage citizens, to support the democratic decision-making processes and to strengthen representative democracy. She furthermore writes that the democratic decision making processes can be divided into two main categories: one addressing the electoral process, including e-voting, and the other addressing citizen e-participation in democratic decision-making. [START_REF] Macintosh | Evaluating how eParticipation changes local democracy[END_REF] give a working definition of eParticipation as the use of ICTs to support information provision and "top-down" engagement, i.e. government-led initiatives, or "ground-up" efforts to empower citizens, civil society organisations and other democratically constituted groups to gain the support of their elected representatives.
There are many examples of surveys on eDemocracy, such as [START_REF] Ladner | From e-voting to smart-voting -e-Tools in and for elections and direct democracy in Switzerland[END_REF] who take the case of Switzerland where citizens are often called to the polls either to vote for parties and candidates or, even more often, to decide on direct-democratic votes at the three different political levels. In their paper on "smart-voting" they analyse what they call voting assistance applications, i.e. tools where citizens can compare their positions on various political issues to those of parties or candidates. They mention the Dutch "Stemwijzer" system, first introduced in 1998 and they provide in-depth information on the Swiss smartvote website.
Even if eParticipation is a relatively new research field, projects and tools are increasing thanks to governmental support [START_REF] Tambouris | A Framework for Assessing eParticipation Projects and Tools[END_REF] and a number of research projects have been funded worldwide to pave the way, such as Demo-Net.org.
Analysis Framework
The goal of this study being to investigate communication and coordination mechanisms for participation and eParticipation, we therefore had to define an analysis framework. We adapted the approach used by [START_REF] Mambrey | From Participation to e-Participation: The German Case[END_REF] for his case study on participation and eParticipation in Germany, where he used the three arenas of political communications defined by [START_REF] Habermas | The Theory of Communicative Action[END_REF]. Table 1 shows these three communication modes and the systems or actors involved in political communication, as well as the vectors used to carry this communication. We made a distinction between traditional participation vectors and ICT-enabled vectors (eParticipation). In our survey we decided not to analyse the institutional communication modes and to concentrate on the mediated and informal ones, as there is already a large amount of research that has been done on eVoting and eConsultations. We indeed believed it would be more relevant to survey the mediated and informal communication arenas during the process of writing a new Constitution for Geneva. When looking at Table 1 one can see that the traditional communication modes for mediated and informal communication are relatively well differentiated, but that we could not define which eParticipation tools are used by which actors. This is precisely what we want to study: how do parties or interest unions use websites or wikis to support their communication, how do citizens or associations make their voice heard through ICTs, are these tools complementary to existing participation mechanisms, do new political communication usages appear with ICTs, and so on.
Selection of stakeholders and Methodology
In order to study participation and eParticipation and to answer the various questions listed above, we selected a sample of actors in mediated and informal communication modes:
• The "Tribune de Genève" (TDG): we chose it because it is Geneva's main printed newspaper and it furthermore provides a blog platform to its readers; most candidates' blogs were hosted at the TDG.
• The "Parti Radical Genevois" (PRD) is a progressive right-wing party that has a long history in Geneva; it was born in 1841-42 during the first revolutionary movements in the Canton and it was led by James Fazy, the author of the 1847's Constitution that is still in effect today. We selected it because PRD is rather representative of traditional parties and political structures.
• "Les Verts" are Geneva's green party. It was founded in 1983 by various members of environmental and anti-nuclear associations. The Green party is now a wellestablished party with, amongst others, two elected members of the executive government in Geneva (which comprises seven ministers). We decided to survey them because they are a newer party, created by members of the civil society and based on a more associative operational mode.
• We already introduced the Federation of Geneva's Associations (FAGE) in section 1: it is the umbrella organization of 480 associations. We integrated them in our study because they are very typical of networked communication and participation.
• Last but not least, citizens of Geneva, for quite self-evident reasons when one is writing about bottom-up participation.
This survey is mostly qualitative and based on two investigation methods:
• Periodical review of all identified websites, blogs, forums, wikis, etc. related to the subject of the Constituent Assembly.
• Interviews with representatives of the stakeholders listed above, as well as with elected members of the Constituent Assembly.
Preliminary Results
This project comprises two distinct parts: we first studied participation and eParticipation during the campaign for the election of the Constituent Assembly (until October 2008) and we will then analyse these mechanisms during the process of writing the new Constitution.
We will not go into the specifics of the campaign here, but let us already mention that it was mostly traditional participation mechanisms and communication vectors that were used, such as interview in the press, shows on local TV and radios, meetings, debates, tract distribution in the streets and on markets, etc. The only eParticipation elements came from personal (and disjoint) initiatives. We will list the most interesting of them here:
• The Tribune de Genève (TDG) followed the campaign and opened its daily editions to all candidate lists. One journalist, Jean-François Mabut, set up a blog called "Gazette de la Constituante" where he commented the campaign and pointed to interesting blog posts or websites. Moreover selected content from various political and citizens' blogs was printed in the newspaper. However this was a personal initiative from the journalist that was accepted by the direction of the newspaper but that did not get much support.
• 78 candidates had a blog on the TDG platform, but several of them had no or very limited contributions. 18 of them were elected, so if we make a quick (and not very significant) calculation, about 23% of bloggers got elected (18 of 78) against 13% of non-bloggers (62 elected of 452 candidates). Beyond these figures, it is very interesting to note that this blog platform was the only way for some small lists to voice their opinion and to be visible. As written in one of the comments on the Gazette de la Constituante, "blogging was the only way of expression that the women's list had, regarding our limited financial means. (…) We published articles, photos, letters and wishes from all our candidates and we were able to speak in one voice by putting our differences aside and finding consensus".
• Some citizens also used blogging and electronic newsletters to comment on the campaign. A good example is Pascal Holenweg, who set up a blog (carmagnole.blogspot.com) and a daily newsletter where he made irreverent but relevant comments on the campaign. However it must be said that Mr Holenweg is a former member of the legislative assembly of the City of Geneva and a well-known figure of local activism.
• Some younger candidates used Facebook for their campaign. Murat Alder is one of them and he was elected on the list of the Parti Radical (PRD).
• The PRD and the FAGE set up wikis in order to gather inputs from the citizens on the new Constitution. However neither of them listed any modification after the initial online publishing, so the "wiki-participation" amounted to nil. FAGE also provided an online form to make propositions: they received exactly 16 of them, ranging from supporting sustainable development, providing education to illegal migrants, prohibiting cars, legalizing cannabis or opening more public nudist spaces.
• Many candidates and associations also used mail, postcards and email campaigning in order to make their ideas known and as an incentive for people to go and vote.
Although the author received several of them, it is impossible to have an exhaustive view on that, as it was not in the public domain.
• Although this last element is not supported by ICTs, we mention it anyway because the concept is interesting: the FAGE organised several "Caf'Idées" (idea cafés), a mix of brainstorming sessions, musical chairs and speed-dating. Four to five debaters would indeed be seated at a same table in a café and discuss a given topic for a limited amount of time, before making a synthesis and switching tables and partners.
The first part of this survey showed that there was no real eParticipation during the campaign, apart from using the Web, blogs or social networks to provide more visibility to candidates and, to a lesser extent, to sustain opinion forming.
Next Steps
On Thursday November 20 th , 2008 the Constituent Assembly held its first session and its work will be spread over four years. The second part of our project has just started as we write these lines and it is planned that it will last during the first year of sessions. Our aim is to analyse how elected Constituents will relay the ideas of their parties or of their associations' members, and to see how citizens will be involved in the process (if at all). We will use our analysis framework to study coordination and communication mechanisms, especially in terms of co-writing or co-creating the new Constitution. Indeed we believe that, although there was not much "e" in the election process, it will surely be more developed during the redaction phase. As Yves Lador, elected member of the FAGE puts it in his own word: "I never believed that a wiki would involve the general public in participating in the Constitution's elaboration; however I think it will be a great tool for internal use and collaboration (…)"
Conclusions
On the international level it is not very often that countries completely rewrite their Constitution. Switzerland and its 26 sovereign Cantons constitute a very interesting laboratory, as 21 of them did so since 1965. Eight Cantons revised their Constitutions through a Constituent Assembly, and the other fourteen relied on dedicated commissions assigned by either the legislative or the executive authority [START_REF] Schmitt | Quelques réflexions comparatives à propos de l'élection d'une assemblée constituante à Genève[END_REF]. However the last Constitution designed by a Constituent Assembly was that of the Canton of Basel and the process lasted from 1999 until 2005, meaning that at the time tools such as blogs or wikis were not yet as well-known by the general public as now. We therefore believe that this experience in Geneva might be a first in terms of eParticipation and we are really eager to see if ICTs only provide an extension of traditional participation mechanisms (e.g. offering more visibility during a campaign for elections) or if they have the ability to modify participation and to lead to the creation of what could be the first "Wiki-Constitution" in the world.
Table 1: Communication Modes and Vectors for Participation

Communication/Coordination mode | System/Actors | Traditional Communication Vectors | ICT-enabled Communication Vectors
Institutional | Representative Democracy | Elections; Consultation: citizen forums, public hearings or any formal consultation procedure | eVoting; eConsultation
Institutional | Direct Democracy | Voting, referendums, initiatives | eVoting
Mediated | Mass Media; Parliamentary groups; Parties; Interest groups; Trade unions | Articles, opinions, interviews, editorials, readers letters, polls, phone calls, etc.; Lobbies; Strikes; Meetings, campaigns, street or door-to-door communication, tracts, mailings, negotiation | Websites, forums, wikis, emails, chats, ePolls, webcasts, social networks, mobile communications, Web 2.0, etc.
Informal | Citizens; Associations; Networks | Street or door-to-door communication, tracts, free radios, local TVs, cafés, clubs, etc. | Websites, forums, wikis, emails, chats, ePolls, webcasts, social networks, mobile communications, Web 2.0, etc. |
01055444 | en | [
"spi",
"spi.nrj"
] | 2024/03/04 16:41:22 | 2014 | https://minesparis-psl.hal.science/hal-01055444/file/Iterative%20linear%20cuts%20strenghtening.pdf | Seddik Yassine Abdelouadoud
email: [email protected]
Robin Girard
François-Pascal Neirac
Thierry Guiot
Iterative linear cuts strenghtening the second-order cone relaxation of the distribution system optimal power flow problem
Keywords: Distributed power generation, power system planning, smart grids
We present a novel iterative algorithm to solve the distribution system optimal power flow problem over a radial network. Our methodology makes use of a widely studied second order cone relaxation applied to the branch flow model of a radial network. Several types of conditions have been established under which this relaxation is exact and we focus here on the situations where this is not the case. To overcome this difficulty, we propose to add increasingly tight linear cuts to the second-order cone problem until a physically meaningful solution is obtained. We apply this technique to a sample system taken from the literature and compare the results with a traditional nonlinear solver.
I. INTRODUCTION
With the aim to increase the sustainability of the electric power system, the share of renewable energies in the production mix is scheduled to increase in the future. For example, the European Union has set goals for its member states in order to attain a 20% share of renewable energy in its final energy consumption by 2020, and some countries have taken even more ambitious stances. This target will be partially met by integrating significant amounts of dispersed renewable energy generators (mainly photovoltaic (PV) and wind power) to the distribution grid. These developments will have a considerable impact on the design and operation of the electric system, both at the national and local level and so new tools will be needed to assist in the planning and operation of at least the distribution network. Indeed, as the current passive distribution network turns into an Active Distribution Network (ADN) [START_REF] Pilo | Active distribution network evolution in different regulatory environments[END_REF] with the introduction of partially and totally controllable generation and storage means, planning studies based solely on power flows for extreme load conditions will not be adapted anymore. Considering the similarities between the current transmission network and the future ADN, it is a safe bet to assume that the Optimal Power Flow, a tool first introduced in 1962 by Carpentier [START_REF] Carpentier | Contribution à l'étude du dispatching économique[END_REF] and now widely used for the planning and operation of the transmission network, will prove useful for this purpose. Consequently, the adaptation of the OPF concept and resolution algorithms to the distribution network has been the subject of numerous publications in the last decade such as [START_REF] Gabash | Active-Reactive Optimal Power Flow in Distribution Networks With Embedded Generation and Battery Storage[END_REF], [START_REF] Ahmadi | Optimal power flow for autonomous regional active network management system[END_REF], [START_REF] Swarnkar | Optimal power flow of large distribution system solution for combined economic emission dispatch problem using partical swarm optimization[END_REF], [START_REF] Dolan | Using optimal power flow for management of power flows in active distribution networks within thermal constraints[END_REF]. Parallel to these efforts, a promising path has been taken by, for example, [7] and [8], to show that the OPF problem can be cast as a Quadratically Constrained Quadratic Program (QCQP). The QCQP class of problem has already been widely studied, as many instances of engineering problems, such as principal component analysis and the combinatorial max-cut problem, are included in it. General QCQP are non convex problems and thus cannot be solved in polynomial time. To deal with this, the standard practice in the OPF literature is to relax the problem to a convex conic program that can either be cast as a Second-Order Cone Program (SOCP) or Semi-Definite Program (SDP). On the application of such techniques to radial networks, we can cite the work of [9] and [10], who prove that the SDP relaxation over tree networks is exact under certain condition, or the work of [11]-[13], who took the SOCP route applied to the branch flow model and proved that the relaxation is exact if there are no upper bounds on loads or voltage magnitude and certain conditions on the objective function are respected. In these instances, it can therefore be shown that the OPF over radial network can be solved in polynomial time. 
However, as has been shown in [14], such approaches may fail to produce physically meaningful solutions in realistic conditions. Such conditions are the focus of our work here, as we are proposing a methodology based on adding increasingly tight linear cuts to the initial SOCP relaxation based on the branch flow model until a satisfactory solution is obtained.
The paper is organized as follows: first we introduce the problem formulation, then we present the methodology and we finish by applying it to a case study.
II. PROBLEM FORMULATION
A. Objective function
In this paper, we are particularly interested in situations described in detail in our previous work [14] and [15] where distributed generation and storage are integrated at the distribution system level, not only to alleviate network constraints through active and reactive power control, but also to provide services at the national level, such as energy and ancillary services provision or energy time-shift in the case of storage. As the time-coupling constraints can create extremely large problems, we have proposed to separate them into master problems dealing with the multistage scheduling of active powers and single-stage slave problems ensuring the respect of network constraints, while minimizing the deviation from the master problem. We focus here on the solution of the slave problem where, therefore, the objective function is of the following form:
$$\min \; \sum_{k \in K} \left( P_{c,k}^{t} - P_{c,k}^{MP,t} \right)^{2} \qquad (1)$$

where $P_{c,k}^{MP,t}$ is the set of controllable active powers resulting from the master problem solution and $P_{c,k}^{t}$ are the corresponding control variables in the slave problem.
B. Network and apparent power constraints
As discussed earlier, we adopt here the branch flow model first introduced in [17] and [START_REF] Baran | Optimal sizing of capacitors placed on a radial distribution system[END_REF]. Henceforth, we assume that the storage devices and distributed energy units are connected to the grid through an Advanced Power Electronic Interface (APEI), as defined in [START_REF] Wächter | On the Implementation of a Primal-Dual Interior Point Filter Line Search Algorithm for Large-Scale Nonlinear Programming[END_REF], and that the substation transformer is equipped with an On-Load Tap Changer (OLTC), for which we use a simplified continuous model. Consequently, the network and apparent power equations are cast in the following form for all nodes j :
$$P_{j}^{t} = \sum_{k \in K_{j}} \left[ P_{k}^{t} + R_{k} \, \frac{(P_{k}^{t})^{2} + (Q_{k}^{t})^{2}}{U_{k}^{t}} \right] + P_{j}^{NetLoad,t} \qquad (2)$$

$$Q_{j}^{t} = \sum_{k \in K_{j}} \left[ Q_{k}^{t} + X_{k} \, \frac{(P_{k}^{t})^{2} + (Q_{k}^{t})^{2}}{U_{k}^{t}} \right] + Q_{j}^{NetLoad,t} \qquad (3)$$

$$U_{f(j)}^{t} = U_{j}^{t} + 2\left( R_{j} P_{j}^{t} + X_{j} Q_{j}^{t} \right) + Z_{j}^{2} \, \frac{(S_{j}^{t})^{2}}{U_{j}^{t}} \qquad (4)$$

$$(S_{j}^{t})^{2} = (P_{j}^{t})^{2} + (Q_{j}^{t})^{2}, \qquad Z_{j}^{2} = R_{j}^{2} + X_{j}^{2}, \qquad U_{j}^{t} = (V_{j}^{t})^{2} \qquad (5)$$

$$P_{j}^{NetLoad,t} = P_{j}^{Load,t} - P_{c,j}^{t} \qquad (6)$$

$$Q_{j}^{NetLoad,t} = Q_{j}^{Load,t} - Q_{c,j}^{t} \qquad (7)$$

$$V^{min} \leq V_{j}^{t} \leq V^{max}, \qquad V_{OLTC}^{min} \leq V_{OLTC}^{t} \leq V_{OLTC}^{max} \qquad (8)$$

$$(P_{c,j}^{t})^{2} + (Q_{c,j}^{t})^{2} \leq (S_{c,j}^{max})^{2} \qquad (9)$$

where $P_{j}^{t}$ and $Q_{j}^{t}$ are, respectively, the active and reactive power entering node $j$ from the upstream node $f(j)$; $K_{j}$ is the set of nodes directly downstream of node $j$; $R_{j}$ and $X_{j}$ are the resistance and reactance of the branch feeding node $j$; $P_{c,j}^{t}$ and $Q_{c,j}^{t}$ are the controllable active and reactive powers at node $j$, limited by the APEI rating $S_{c,j}^{max}$; $V_{j}^{t}$ and $V_{OLTC}^{t}$ are the voltage magnitudes, respectively, at node $j$ and downstream of the substation transformer during time step $t$; and $V_{OLTC}^{min}$ and $V_{OLTC}^{max}$ are the bounds of the attainable voltage downstream of the substation. Eq. (1)-(9) define the OPF problem that we propose to study.
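To make the branch flow equations concrete, the sketch below evaluates the per-branch quantities of Eqs. (2)-(4) for a single branch. It is a minimal illustration assuming the reconstructed receiving-end DistFlow form given above and purely illustrative per-unit values; it is not the solver implementation used in Section IV.

```python
def branch_losses(P, Q, U, R, X):
    """Active/reactive losses on a branch, as in Eqs. (2)-(3): R, X times (P^2+Q^2)/U."""
    k = (P**2 + Q**2) / U
    return R * k, X * k

def upstream_squared_voltage(P, Q, U, R, X):
    """Eq. (4): squared voltage at the sending end given receiving-end quantities."""
    Z2 = R**2 + X**2
    return U + 2.0 * (R * P + X * Q) + Z2 * (P**2 + Q**2) / U

# Example (per unit): a leaf node drawing P = 0.02, Q = 0.01 behind a branch R = 0.01, X = 0.02
P, Q, U = 0.02, 0.01, 0.95**2
dP, dQ = branch_losses(P, Q, U, R=0.01, X=0.02)
U_up = upstream_squared_voltage(P, Q, U, R=0.01, X=0.02)
print(dP, dQ, U_up**0.5)   # losses and the upstream voltage magnitude
```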
C. SOCP relaxation
As explained earlier, we draw on the work of, for example, [11] to implement the SOCP relaxation of the OPF problem. First, we define an intermediary variable $I_{j}^{t}$ for all nodes $j$:
$$I_{j}^{t} = \frac{(P_{j}^{t})^{2} + (Q_{j}^{t})^{2}}{U_{j}^{t}} \qquad (9)$$

We then replace $\frac{(P_{j}^{t})^{2} + (Q_{j}^{t})^{2}}{U_{j}^{t}}$ by $I_{j}^{t}$ in Eq. (2) and (3). In the resulting formulation, the non-convexity of the problem stems only from the equality in Eq. (9), which we relax by keeping only the following inequality:

$$I_{j}^{t} \geq \frac{(P_{j}^{t})^{2} + (Q_{j}^{t})^{2}}{U_{j}^{t}} \qquad (10)$$
We consequently obtain a new problem consisting of Eq. ( 10) and ( 1) through (8) that we will subsequently call R-OPF.
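As an illustration of R-OPF, the sketch below builds a single-time-step, two-bus instance with the cvxpy modelling library. The numerical data, variable names and the use of cvxpy are illustrative assumptions for exposition, not the authors' implementation (the paper relies on CPLEX and IPOPT); only the objective (1), the relaxed cone (10) and the voltage and apparent-power bounds are shown.

```python
import cvxpy as cp

# Illustrative two-bus feeder (per unit), single time step: substation -> node 1.
R, X = 0.01, 0.02                        # branch resistance and reactance
P_load, Q_load = 0.30, 0.10              # load at node 1
P_mp = 0.05                              # master-problem set-point for P_c
S_c_max = 0.20                           # APEI rating
U_min, U_max = 0.95**2, 1.05**2          # bounds on squared voltage magnitudes

P, Q = cp.Variable(), cp.Variable()      # power entering node 1 from upstream
U = cp.Variable(nonneg=True)             # squared voltage at node 1
U0 = cp.Variable(nonneg=True)            # squared voltage downstream of the OLTC
I = cp.Variable(nonneg=True)             # auxiliary variable of Eq. (9)/(10)
P_c, Q_c = cp.Variable(), cp.Variable()  # controllable powers at node 1

constraints = [
    P == P_load - P_c,                                    # Eq. (2) at a leaf node
    Q == Q_load - Q_c,                                    # Eq. (3) at a leaf node
    U0 == U + 2 * (R * P + X * Q) + (R**2 + X**2) * I,    # Eq. (4) with I replacing (P^2+Q^2)/U
    cp.quad_over_lin(cp.hstack([P, Q]), U) <= I,          # Eq. (10): second-order cone relaxation
    U >= U_min, U <= U_max, U0 >= U_min, U0 <= U_max,     # Eq. (8)
    cp.square(P_c) + cp.square(Q_c) <= S_c_max**2,        # Eq. (9): APEI apparent-power limit
]
problem = cp.Problem(cp.Minimize(cp.square(P_c - P_mp)), constraints)  # Eq. (1)
problem.solve()
print(problem.status, float(P_c.value), float(I.value))
```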
III. METHODOLOGY
A. Guiding principle
Our methodology is based on the fact that, in order to obtain a physically meaningful solution, the equality in Eq. (9) needs to be verified. Therefore, we add linear cuts to R-OPF that take the following form for all nodes $j$:
$$I_{j}^{t} \leq a_{j} P_{j}^{t} + b_{j} Q_{j}^{t} + c_{j} U_{j}^{t} + d_{j} \qquad (11)$$
These cuts are only valid on a certain domain, which can be apprehended in the following way:
$$\frac{(P_{j}^{t})^{2} + (Q_{j}^{t})^{2}}{U_{j}^{t}} \leq a_{j} P_{j}^{t} + b_{j} Q_{j}^{t} + c_{j} U_{j}^{t} + d_{j} \qquad (12)$$
Dropping either the active or reactive power variable to fit in a three-dimensional representation, such cuts and their domain of validity can be graphically represented as in Figure 1.
B. Calculation of the linear cut defining parameters
If we consider that the domain of validity is of the form
$$U_{j}^{min} \leq U_{j}^{t} \leq U_{j}^{max}, \qquad P_{j}^{min} \leq P_{j}^{t} \leq P_{j}^{max}, \qquad Q_{j}^{min} \leq Q_{j}^{t} \leq Q_{j}^{max},$$
we calculate the defining parameters of the linear cuts so that the barycenter of the cutting plane is the lowest possible. This is obtained by solving the following linear problem in the variables $a_{j}$, $b_{j}$, $c_{j}$ and $d_{j}$:
$$\min \; a_{j} \, \frac{P_{j}^{min} + P_{j}^{max}}{2} + b_{j} \, \frac{Q_{j}^{min} + Q_{j}^{max}}{2} + c_{j} \, \frac{U_{j}^{min} + U_{j}^{max}}{2} + d_{j} \qquad (14)$$
To this problem we add constraints of the same form as Eq. (12) for each extremity (vertex) of the validity domain.
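A sketch of this small linear program is given below, again with cvxpy. Since the function (P² + Q²)/U is jointly convex for U > 0, imposing Eq. (12) at the eight vertices of the box is enough to make the plane dominate it over the whole domain; the box bounds in the example are illustrative assumptions.

```python
import itertools
import cvxpy as cp

def cut_coefficients(P_bounds, Q_bounds, U_bounds):
    """Coefficients (a, b, c, d) of a cut I <= a*P + b*Q + c*U + d valid on the box
    P_bounds x Q_bounds x U_bounds: the plane must dominate (P^2 + Q^2)/U at every
    vertex (Eq. (12)) while its value at the barycenter of the box is minimised (Eq. (14))."""
    a, b, c, d = (cp.Variable() for _ in range(4))
    vertex_constraints = [
        a * P + b * Q + c * U + d >= (P**2 + Q**2) / U
        for P, Q, U in itertools.product(P_bounds, Q_bounds, U_bounds)
    ]
    P_mid, Q_mid, U_mid = (sum(bnds) / 2 for bnds in (P_bounds, Q_bounds, U_bounds))
    objective = cp.Minimize(a * P_mid + b * Q_mid + c * U_mid + d)
    cp.Problem(objective, vertex_constraints).solve()
    return a.value, b.value, c.value, d.value

# Illustrative validity domain around an operating point (per unit).
print(cut_coefficients((-0.1, 0.1), (-0.05, 0.05), (0.95**2, 1.05**2)))
```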
C. Tightening of the validity domain
To identify the solution obtained at the k-th iteration, we add a second subscript k to each variable. The bounds of the validity domain used at the next iteration are then re-centred on this k-th iterate and tightened following an exponential decay in k (Eqs. (17)-(18)), governed by a parameter β that controls the speed of the tightening of the validity domain.
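The overall procedure can be summarised by the following Python-style loop. Here solve_ropf, max_gap, recenter_and_shrink and make_cut stand, respectively, for the R-OPF solve, the largest relative violation of Eq. (9), the box update and the cut generation of Section III.B; they are placeholders passed as arguments, and the exponential shrinking rule is only a plausible reading of Eqs. (17)-(18), not a verbatim transcription.

```python
import math

def iterative_linear_cuts(solve_ropf, max_gap, recenter_and_shrink, make_cut,
                          initial_boxes, beta=10.0, tol=1e-5, k_max=50):
    """Solve R-OPF repeatedly, adding linear cuts built on validity boxes that are
    re-centred on the previous iterate and shrunk at a rate governed by beta,
    until the equality of Eq. (9) holds to within tol at every node."""
    boxes, cuts = dict(initial_boxes), []
    for k in range(k_max):
        solution = solve_ropf(cuts)                # SOCP solve with the cuts added so far
        if max_gap(solution) <= tol:               # Eq. (9) satisfied: physical solution
            return solution, k
        shrink = math.exp(-k / beta)               # assumed form of the tightening rule
        for node, box in boxes.items():
            boxes[node] = recenter_and_shrink(box, solution, node, shrink)
            cuts.append((node, make_cut(boxes[node])))   # valid plane on the tightened box
    raise RuntimeError("no physically meaningful solution within k_max iterations")
```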
IV. CASE STUDY
A. Definition of the parameters
We apply this methodology to a case study consisting of the 69-bus network described in [17], with residential-type loads connected at each node and scaled so that the annual maximal load is equal to the loads used in [17]. Reactive loads are defined so that the power factor remains constant and equal to the one used in [17]. Storage units are deployed at 13 nodes in increasing order of driving-point impedance magnitude. The maximal apparent power of the storage units is chosen to be equal to the maximal annual active load of the node where it is connected. We select the most critical time steps in the sense defined in [16] to ensure that our algorithm is suitable even in extreme loading or storage power injection conditions. The results are obtained through the use of the commercial solver IBM ILOG CPLEX.
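The storage-deployment rule just described can be sketched as follows; the node data are placeholders, and only the selection logic (13 nodes with the smallest driving-point impedance magnitude, each storage unit rated at the node's maximal annual active load) is illustrated.

```python
def deploy_storage(driving_point_impedance, annual_max_load, n_units=13):
    """Return {node: storage apparent-power rating} for the n_units nodes with the
    smallest driving-point impedance magnitude, rated at the node's annual peak load."""
    by_impedance = sorted(driving_point_impedance,
                          key=lambda n: abs(driving_point_impedance[n]))
    return {node: annual_max_load[node] for node in by_impedance[:n_units]}

# Placeholder data for three nodes of the feeder (per unit).
z = {1: 0.02 + 0.04j, 2: 0.05 + 0.09j, 3: 0.11 + 0.20j}
load = {1: 0.30, 2: 0.25, 3: 0.10}
print(deploy_storage(z, load, n_units=2))   # {1: 0.3, 2: 0.25}
```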
B. Convergence
For this example, we set the convergence tolerance to 10e-5 to enforce a high precision of the result, and we present hereafter the rate of convergence for the parameter β varying between 6 and 12.
Lower values of this parameter led to infeasible problems in this example.
Figure 2 : Rate of convergence as a function of Beta
We then compare the final value of the objective function with the value obtained from the nonlinear solver IPOPT [START_REF] Wächter | On the Implementation of a Primal-Dual Interior Point Filter Line Search Algorithm for Large-Scale Nonlinear Programming[END_REF].
We can thus observe that the higher the value of β, the better the result will be. Moreover, we show that this methodology is suitable to obtain comparable or even better results than off-the-shelf nonlinear solvers, in this particular case study.
V. CONCLUSION
We have presented a novel iterative algorithm to solve the radial distribution system optimal power flow that relies heavily on recent breakthroughs in this field, especially those concerning the use of a second-order cone algorithm. We have then detailed the methodology employed and the parameters that influence it. We finished by applying it to a specific case study, which showed that this approach is promising, especially in terms of rate of convergence and optimality. Further work has to be done that will concern testing the methodology on broader sets of networks, understanding in a more systematic way the relationships between the parameters governing the rate of convergence and the optimality of the resulting solution, and linking it to the previous work done on the establishment of a criticality criterion by time steps.
Figure 1: Graphic concepts of the linear cuts, with arbitrary units |
04107901 | en | [
"spi.fluid",
"spi.mat"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04107901/file/Manuscript.pdf | Christopher Betrancourt
email: [email protected]
Nasser Darabiha
email: [email protected]
Benedetta Franzelli
email: [email protected]
Junghwa Yi
email: [email protected]
Keywords: TiO 2, Laser-induced incandescence (LII), Laser-induced fluorescence (LIF), Phase-selective Laser-induced breakdown spectroscopy (PS-LIBS)
Characterization of Laser-Induced Emission of high-purity TiO 2 nanoparticles: Feasibility of Laser-Induced Incandescence
Introduction
Titanium dioxide (TiO 2 , titania) nanomaterials are the second most produced powders through flame synthesis technology [START_REF] Meierhofer | Synthesis of metal oxide nanoparticles in flame sprays: review on process technology, modeling, and diagnostics[END_REF] following carbon black. They are widely used for pigment, cosmetics, and semiconductors [START_REF] Völz | Pigments, inorganic. Ullmann's Encyclopedia of Industrial Chemistry[END_REF]. With its unique photocatalytic properties, TiO 2 also gets more attention in wastewater treatment [START_REF] Zahmatkesh | Wastewater treatment with nanomaterials for the future: A state-of-the-art review[END_REF] and solar power systems [START_REF] Hou | TiO 2 nanotubes for dye-sensitized solar cells-a review[END_REF]. Flame synthesis technology can control the final product characteristics with a wide versatility through the choice of reactants and precursors, mixture composition, residence time, and temperature experienced by the particles [START_REF] Roth | Particle synthesis in flames[END_REF]. These operating conditions are closely related to the history of the particles, i.e. the local gaseous conditions experienced by the particles along their trajectory, that play an important role on the physicochemical properties of the final product: shape, size, surface functionality, crystallinity, and particle coating [START_REF] Pratsinis | Flame aerosol synthesis of ceramic powders[END_REF][START_REF] Manuputty | Understanding the anatase-rutile stability in flame-made TiO 2[END_REF][START_REF] Ren | Flame synthesis of carbon metal-oxide nanocomposites in a counterflow burner[END_REF]. As an example, Cignoli et al. [START_REF] Cignoli | Laser-induced incandescence of titania nanoparticles synthesized in a flame[END_REF] showed via X-ray diffraction (XRD) that TiO 2 crystal phase changes from anatase to rutile along the height of a premixed lean methane-air flame doped with TTIP (titanium tetraisopropoxide, Ti(OC 3 H 7 ) 4 ) precursor.
Therefore, there is a crucial need to understand the detailed physical and chemical processes involved in a non-carbonaceous nanoparticles formation in flames to control the final product characteristics. Specifically, experimental databases with accurate measurements of volume fraction (f v ), size distribution, and local flame conditions in steady laminar flames are demanded to understand and characterize the particle formation processes: inception, growth, agglomeration, and phase changing of TiO 2 nanoparticles along the flame. Such databases are also required to develop and to evaluate the performances of CFD models for non-carbonaceous nanoparticles production in flame synthesis.
Classically, ex-situ and on-line diagnostics are used for characterizing produced particles. For example, transmission electron microscopy (TEM) [START_REF] Memon | Multiple-diffusion flame synthesis of pure anatase and carbon-coated titanium dioxide nanoparticles[END_REF] provides primary particle size and morphology of sampling, and an on-line scanning mobility particle sizer (SMPS) can be used to measure mobility diameter. The particle size obtained with ex-situ or on-line methods may differ from those found in actual flame conditions since the sampling method including extraction, deposition and/or filtration, may lead to losses and coagulation [START_REF] Russo | Optical band gap analysis of soot and organic carbon in premixed ethylene flames: Comparison of in-situ and ex-situ absorption measurements[END_REF][START_REF] Sipkens | Laser-induced incandescence for non-soot nanoparticles: recent trends and current challenges[END_REF].
Alternatively, in-situ optical techniques can be used for describing the evolution of the particle properties along the flame because of their absent or weak intrusive nature. In this sense, laser diagnostics techniques are well adapted to this purpose. Several laser techniques have been used to characterize various nanoparticle properties based on the laser-induced emission of TiO 2 . Three techniques are mainly considered in the literature for solid phase description: laser-induced fluorescence (LIF), phase-selective laser-induced breakdown spectroscopy (PS-LIBS), and laser-induced incandescence (LII).
Laser-induced fluorescence, often referred to as Photoluminescence (PL), generally uses a laser of few mW (in case of commercial spectrophotometer) to excite electrons to an excited state and relax to the ground state through various processes with a typical lifetime from ps to tens of ns [START_REF] Michelsen | Laser-induced incandescence of flame-generated soot on a picosecond time scale[END_REF]. TiO 2 LIF spectra excited by using UV laser presents a wide-range of band emissions [START_REF] Liqiang | Review of photoluminescence performance of nano-sized semiconductor materials and its relationships with photocatalytic activity[END_REF] since it strongly depends not only on particle intrinsic properties (size [START_REF] Serpone | Size effects on the photophysical properties of colloidal anatase TiO 2 particles: size quantization versus direct transitions in this indirect semiconductor?[END_REF], morphology [START_REF] Mercado | Location of hole and electron traps on nanocrystalline anatase TiO 2[END_REF], crystal state [START_REF] Fujihara | Time-resolved photoluminescence of particulate TiO 2 photocatalysts suspended in aqueous solutions[END_REF], chemical or thermal surface treatment [START_REF] Shi | Photoluminescence characteristics of TiO 2 and their relationship to the photoassisted reaction of water/methanol mixture[END_REF]) but also on LIF operation conditions (excitation laser wavelength [START_REF] Pallotti | Photoluminescence mechanisms in anatase and rutile TiO 2[END_REF], chemical environment [START_REF] Pallotti | Photoluminescence mechanisms in anatase and rutile TiO 2[END_REF], and temperature [START_REF] Zhang | Synthesis, surface morphology, and photoluminescence properties of anatase iron-doped titanium dioxide nano-crystalline films[END_REF][START_REF] Nair | Optical parameters induced by phase transformation in rf magnetron sputtered TiO 2 nanostructured thin films[END_REF]). For example, even for an identical TiO 2 atomic structure, the LIF spectra show different shapes in the visible range due to different surface geometry and relaxation processes of photogenerated carriers at the surface [START_REF] Pallotti | Photoluminescence mechanisms in anatase and rutile TiO 2[END_REF]. To obtain in-situ measurements, the LIF emission should be quantitatively measured along the flame. Nevertheless, due to its complex nature and strong dependence on environment and particle characteristics, the interpretation of the signal is challenging without a prior and robust description of TiO 2 LIF emission mechanisms.
Phase-selective Laser-induced breakdown spectroscopy (PS-LIBS) uses laser fluence in-between the breakdown thresholds of gas and particle phases. Using PS-LIBS, information can be obtained on gas-to-particle conversion [START_REF] Zhang | Two-dimensional imaging of gas-to-particle transition in flames by laser-induced nanoplasmas[END_REF], volume fraction measurement [START_REF] Zhang | A new diagnostic for volume fraction measurement of metal-oxide nanoparticles in flames using phase-selective laser-induced breakdown spectroscopy[END_REF][START_REF] Ren | In-situ laser diagnostic of nanoparticle formation and transport behavior in flame aerosol deposition[END_REF], and band gap variation in mixed crystal structures [START_REF] Ren | Doping mechanism of vanadia/titania nanoparticles in flame synthesis by a novel optical spectroscopy technique[END_REF]. Therefore, PS-LIBS could be a promising in-situ technique to investigate TiO 2 formation in flame. However, its application requires a relatively high laser fluence level from ∼ 1 to 60 J/cm 2 [START_REF] Ren | Simultaneous single-shot two-dimensional imaging of nanoparticles and radicals in turbulent reactive flows[END_REF][START_REF] Zhang | Novel low-intensity phase-selective laser-induced breakdown spectroscopy of TiO 2 nanoparticle aerosols during flame synthesis[END_REF][START_REF] Xiong | Tuning excitation laser wavelength for secondary resonance in low-intensity phase-selective laser-induced breakdown spectroscopy for in-situ analytical measurement of nanoaerosols[END_REF] typically at 355 nm and 532 nm, which may make PS-LIBS intrusive.
Laser-induced incandescence uses a pulsed laser to heat the particles. The particles absorb the laser radiation and reach a high temperature. The heated particles emit thermal radiation, i.e. the incandescent signal, during their cooling by conduction, convection, and vaporization. The incandescent signal contains information on volume fraction and primary particle size [START_REF] Eckbreth | Effects of laser-modulated particulate incandescence on raman scattering diagnostics[END_REF][START_REF] Will | Two-dimensional soot-particle sizing by time-resolved laser-induced incandescence[END_REF]. This technique, initially developed for measuring the volume fraction and size of soot particles in flames [START_REF] Melton | Soot diagnostics based on laser heating[END_REF], is getting attention in application to non-soot nanoparticles flame synthesis like various metals (Fe [START_REF] Kock | Comparison of lii and tem sizing during synthesis of iron particle chains[END_REF], Si [START_REF] Daun | Spectroscopic models for laser-heated silicon and copper nanoparticles[END_REF], Mo [START_REF] Vander Wal | Laser-induced incandescence applied to metal nanostructures[END_REF]), oxides (SiO 2 [START_REF] Altman | Light absorption of silica nanoparticles[END_REF], Al 2 O 3 [START_REF] Weeks | Aerosol-particle sizes from light emission during excitation by tea CO 2 laser pulses[END_REF], Fe 2 O 3 [START_REF] Tribalet | Evaluation of particle sizes of iron-oxide nano-particles in a low-pressure flame-synthesis reactor by simultaneous application of tire-lii and pms[END_REF]) and complex carbon-coated materials [START_REF] Ren | Flame synthesis of carbon metal-oxide nanocomposites in a counterflow burner[END_REF][START_REF] Eremin | Binary ironcarbon nanoparticle synthesis in photolysis of Fe(CO) 5 with methane and acetylene[END_REF][START_REF] Eremin | Synthesis of binary ironcarbon nanoparticles by UV laser photolysis of Fe(CO) 5 with various hydrocarbons[END_REF].
Concerning TiO 2 nanoparticles, pioneering works on flame-synthesized particles can be found in literature [START_REF] Ren | Flame synthesis of carbon metal-oxide nanocomposites in a counterflow burner[END_REF][START_REF] Cignoli | Laser-induced incandescence of titania nanoparticles synthesized in a flame[END_REF][START_REF] Maffi | Spectral effects in laser induced incandescence application to flame-made titania nanoparticles[END_REF][START_REF] De Falco | Flame aerosol synthesis and thermophoretic deposition of superhydrophilic TiO 2 nanoparticle coatings[END_REF][START_REF] De Iuliis | Light emission of flame-generated TiO 2 nanoparticles: Effect of ir laser irradiation[END_REF]. However, when TiO 2 nanoparticles are produced via flame synthesis, the carbon content of the precursors and of the fuel used in the pilot flame may lead to the generation of soot [START_REF] Kammler | Carbon-coated titania nanostructured particles: Continuous, one-step flame-synthesis[END_REF] or carbon-coated TiO 2 nanoparticles [START_REF] Ren | Flame synthesis of carbon metal-oxide nanocomposites in a counterflow burner[END_REF][START_REF] Kammler | Carbon-coated titania nanostructured particles: Continuous, one-step flame-synthesis[END_REF]. This solid carbon content can significantly change the LII signal with distinct optical properties [START_REF] Darmenkulova | Change of optical properties of carbon-doped silicon nanostructures under the influence of a pulsed electron beam[END_REF]. Thus, to our knowledge, the feasibility of the LII technique for high-purity TiO 2 nanoparticles, i.e. without carbon presence, has never been deeply explored.
In this framework, the scope of this work is to characterize the laser-induced emission of high-purity TiO 2 nanoparticles to demonstrate the feasibility of the LII technique when considering TiO 2 without carbon content.
The main challenge for the application of LII on flame-synthesized TiO 2 (but also other flame-synthesized particles) compared to carbonaceous nanoparticles is related to the specific optical properties of TiO 2 . Performing LII on TiO 2 supposes that the particles absorb the laser radiation. The absorption function E(m λ ) translates this ability [START_REF] Liu | Review of recent literature on the light absorption properties of black carbon: Refractive index, mass absorption cross section, and absorption function[END_REF]. E(m λ ) is a function of the refractive index (m) at a wavelength (λ). Figure 1 displays the absorption function E(m λ ) of carbonaceous particles (e.g., soot) [START_REF] Williams | Measurement of the dimensionless extinction coefficient of soot within laminar diffusion flames[END_REF][START_REF] Köylü | Spectral Extinction Coefficients of Soot Aggregates From Turbulent Diffusion Flames[END_REF][START_REF] Chang | Determination of the wavelength dependence of refractive indices of flame soot[END_REF][START_REF] Snelling | Determination of the soot absorption function and thermal accommodation coefficient using low-fluence lii in a laminar coflow ethylene diffusion flame[END_REF] and of TiO 2 from various sources of the literature [START_REF] Sarkar | Hybridized guided-mode resonances via colloidal plasmonic self-assembled grating[END_REF][START_REF] Siefke | Materials pushing the application limits of wire grid polarizers further into the deep ultraviolet spectral range[END_REF][START_REF] Liu | A comparative study of amorphous, anatase, rutile, and mixed phase TiO 2 films by mist chemical vapor deposition and ultraviolet photodetectors applications[END_REF][START_REF] Jellison | Spectroscopic ellipsometry of thin film and bulk anatase (TiO 2 )[END_REF][START_REF] De Iuliis | Laser-induced emission of tio 2 nanoparticles in flame spray synthesis[END_REF]. It should be noticed that E(m λ ) values for TiO 2 are illustrated using a log-scale. The E(m λ ) of TiO 2 as a function of wavelength shows a dispersion from one study to another. All reported values are remarkably small, a few orders of magnitude smaller than those of carbonaceous particles. TiO 2 absorbs better in the UV part, and its absorption decreases significantly over the visible range, except for De Iuliis et al. [START_REF] De Iuliis | Laser-induced emission of tio 2 nanoparticles in flame spray synthesis[END_REF], who report a nearly constant E(m λ ) in the UV-visible range.
Fig. 1: Absorption function E(m λ ) of (a) carbonaceous particles [START_REF] Williams | Measurement of the dimensionless extinction coefficient of soot within laminar diffusion flames[END_REF][START_REF] Snelling | Determination of the soot absorption function and thermal accommodation coefficient using low-fluence lii in a laminar coflow ethylene diffusion flame[END_REF][START_REF] Michelsen | Understanding and predicting the temporal response of laser-induced incandescence from carbonaceous particles[END_REF] and (b) TiO 2 nanoparticles (in log scale) [START_REF] Siefke | Materials pushing the application limits of wire grid polarizers further into the deep ultraviolet spectral range[END_REF][START_REF] Liu | A comparative study of amorphous, anatase, rutile, and mixed phase TiO 2 films by mist chemical vapor deposition and ultraviolet photodetectors applications[END_REF][START_REF] Jellison | Spectroscopic ellipsometry of thin film and bulk anatase (TiO 2 )[END_REF][START_REF] De Iuliis | Laser-induced emission of tio 2 nanoparticles in flame spray synthesis[END_REF].
The E(m λ ) tendency of TiO 2 highlights two main issues. First, it is recommended to heat TiO 2 particles using a UV laser to improve heat-up efficiency [START_REF] Sipkens | Laser-induced incandescence for non-soot nanoparticles: recent trends and current challenges[END_REF], which is not recommended in flame since it may lead to interferences due to gaseous species fluorescence in LII measurement [START_REF] Goulay | Photochemical interferences for laser-induced incandescence of flamegenerated soot[END_REF]. Second, since E(m λ ) values for TiO 2 are quite small compared to those of soot, the potential laser incandescence signal emitted in the visible range for an equivalent volume fraction of TiO 2 particles will be weaker by few orders of magnitude than soot particles.
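As a rough illustration of the second point, the sketch below compares the spectral incandescence expected from soot and TiO 2 at the same particle temperature and volume fraction, using the Rayleigh-regime scaling S(λ) proportional to E(m λ)/λ times the Planck radiance. The E(m λ ) values used (0.3 for soot, 10⁻³ for TiO 2 in the visible) are indicative figures read from Fig. 1 and should be treated as assumptions, not measurements.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance B_lambda(T)."""
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def lii_spectral_signal(lam, T, E_m):
    """Rayleigh-regime LII spectral emission per unit volume fraction:
    proportional to E(m_lambda)/lambda times the Planck radiance."""
    return E_m / lam * planck(lam, T)

lam_det = 500e-9          # detection wavelength
T_p = 3000.0              # assumed common particle temperature [K]
E_soot = 0.3              # indicative value in the visible (assumption)
E_tio2 = 1e-3             # indicative value read from Fig. 1b (assumption)

ratio = lii_spectral_signal(lam_det, T_p, E_tio2) / lii_spectral_signal(lam_det, T_p, E_soot)
print(f"TiO2/soot signal ratio at {lam_det*1e9:.0f} nm: {ratio:.1e}")
# ~3e-3, i.e. the TiO2 signal is weaker by a few orders of magnitude
```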
De Iuliis et al. [START_REF] De Iuliis | Light emission of flame-generated TiO 2 nanoparticles: Effect of ir laser irradiation[END_REF] conducted LII on in-situ flame-synthesized TiO 2 using a 1064 nm laser despite the quasi-null absorption function at this wavelength [START_REF] Posch | Infrared properties of solid titanium oxides: exploring potential primary dust condensates[END_REF]. Two hypotheses might explain the detection of the LII signal on TiO 2 under IR excitation. First, the reported flame temperatures via pyrometry (2800-3000 K [START_REF] De Iuliis | Light emission of flame-generated TiO 2 nanoparticles: Effect of ir laser irradiation[END_REF][START_REF] De Iuliis | Laser-induced emission of tio 2 nanoparticles in flame spray synthesis[END_REF]) are much higher than the typical TiO 2 melting point (about 2100 K [START_REF] O'neil | The merck indexan encyclopedia of chemicals, drugs, and biologicals[END_REF]). Therefore, the nanoparticles in the flame could be in a liquid state so that they could absorb the IR laser radiation. In such a case, the physical properties used for the interpretation of the LII signal should be properly adjusted to liquid-phase materials. To illustrate this, Fig. 2 shows the ratio of liquid and solid phase E(m λ ) as a function of the wavelength for several metal particles and Al 2 O 3 .
Fig. 2: Ratio of liquid and solid phase E(m λ ). Si and Fe from [START_REF] Sipkens | Laser-induced incandescence for non-soot nanoparticles: recent trends and current challenges[END_REF]; solid Au, Ag, and Ni from [START_REF] Sipkens | Laser-induced incandescence for non-soot nanoparticles: recent trends and current challenges[END_REF] and liquid Au, Ag and Ni from [START_REF] Miller | Optical properties of liquid metals at high temperatures[END_REF]; Al, Cu from [START_REF] Miller | Optical properties of liquid metals at high temperatures[END_REF];
Al 2 O 3 from [60].
Thus, the in-situ LII under IR excitation on flame-generated TiO 2 nanoparticles in [START_REF] De Iuliis | Light emission of flame-generated TiO 2 nanoparticles: Effect of ir laser irradiation[END_REF] might be explained if they were in a liquid state, which may substantially increase their absorption function. Unfortunately, to the author's knowledge, information on the optical property of molten TiO 2 is not available in the literature to confirm this hypothesis.
The second hypothesis is that these flame-generated nanoparticles are not pure TiO 2 nanoparticles. The prompt LII spectra obtained from TiO 2 nanoparticle flame spray synthesis using a 1064 nm laser in [START_REF] De Iuliis | Light emission of flame-generated TiO 2 nanoparticles: Effect of ir laser irradiation[END_REF] at moderate (F=0.275 J/cm 2 ) and high (F=0.562 J/cm 2 ) fluences are recalled in Fig. 3. Induced bands emission of C 2 swan and C 3 swings [START_REF] Goulay | Spontaneous emission from C 2 (d 3 Π g ) and C 3 (A 1 Π u ) during laser irradiation of soot particles[END_REF] can be recognized. The swan and swings bands emission at high fluence are characteristic of solid carbon contents in the form of soot particles or carbon-coated TiO 2 . Specifically, Ren et al. [START_REF] Ren | Flame synthesis of carbon metal-oxide nanocomposites in a counterflow burner[END_REF] showed using PS-LIBS that an increase in C 2 emission indicates an increase of carbon components in the produced carbon-coated TiO 2 nanoparticles. Fig. 3: Laser-induced light emission under different laser fluences at HAB = 2 cm in flame spray synthesis of TiO 2 nanoparticles at prompt detection timing obtained in [START_REF] De Iuliis | Light emission of flame-generated TiO 2 nanoparticles: Effect of ir laser irradiation[END_REF]. C 2 and C 3 bands from [START_REF] Goulay | Spontaneous emission from C 2 (d 3 Π g ) and C 3 (A 1 Π u ) during laser irradiation of soot particles[END_REF] are indicated in blue and red, respectively.
The above short review of the LII technique application on flame-synthesized TiO 2 nanoparticles highlights the importance of working with high-purity TiO 2 nanoparticles. Additionally, when performing ex-situ LII measurements using UV radiation on flame-synthesized TiO 2 nanoparticles, De Iuliis et al. [START_REF] De Iuliis | Laser-induced emission of tio 2 nanoparticles in flame spray synthesis[END_REF] showed that the nature of the laser-induced emission (LIE) has to be carefully analyzed since parasitic signals were observed in the prompt emission spectra in both low and high fluence regimes. These additional signals could not be extensively characterized free of the matrix effect of the sampling filter [START_REF] De Iuliis | Laser-induced emission of tio 2 nanoparticles in flame spray synthesis[END_REF].
An extensive characterization of the LIE at prompt and delayed acquisition of high-purity TiO 2 nanoparticles in a well-controlled environment is the core of the present study. For this, an aerosol made of commercial high-purity TiO 2 nanoparticles produced via sol-gel process is transported by a nitrogen flow in an optical cell at room temperature. This permits avoiding the risk of 1) melting caused by the local environment, ensuring a solid state prior to laser interaction, 2) interferences from carbon-related species during the measurements, 3) parasitic signal from gaseous species characterizing flame environment. The effect of laser fluence and delay time with respect to the signal peak is investigated and compared to well-known carbon black nanoparticles LIE acquired in the same condition. For different laser excitation and signal detection schemes, various explanations are explored to interpret the LIE of TiO 2 nanoparticles by combining information from spectra and temporal evolution of the signals. The feasibility of LII on pure TiO 2 aerosol in a cold environment is then demonstrated. This study is organized as follows. First, the scientific strategy to investigate the nature of LIE is presented in Sec. 2. Then, the experimental setup is described in Sec. 3. Finally, experimental results are discussed in Sec. 4. To complete this study, the same strategy has also been applied to flame-synthesized TiO 2 in a coflow H 2 /AR flame. These results are reported in the Supplementary material. Three different types of laser-induced emissions (LIEs) from TiO 2 nanoparticles are considered here: Laser-Induced Fluorescence, Phase-selective Laserinduced breakdown spectroscopy, and Laser-Induced Incandescence.
In conventional LIBS, the plasma mainly emits a continuous spectrum from Bremsstrahlung emission and recombination radiation over the first tens to hundreds of nanoseconds. Upon cooling of the plasma, the relative peak intensity of ion/atom line emissions increases significantly. The behavior of the plasma and the temporal evolution of the emitted signal can vary substantially (up to ms) depending on the materials and experimental conditions, including the gas temperature and excitation wavelengths. On the other hand, the lifetime of PS-LIBS is reported to be shorter (a few tens of nanoseconds) than that of conventional LIBS due to the absence of gaseous plasma (see Table 1). In this study, no conventional LIBS is expected to occur since the fluence regime used in this work is always lower than the air breakdown threshold (∼ 100 J/cm 2 at 355 nm [START_REF] Gao | Investigation of laser induced air breakdown thresholds at 1064[END_REF]). As a confirmation, no visible spark was observed.
The three considered LIEs can simultaneously occur during the laser excitation as illustrated in Fig. 4, which displays the schematic of the fluence range of LIE with their corresponding emission duration and the corresponding particles phase (solid, liquid, and gas). The various LIEs of TiO 2 nanoparticles have different characteristic times and occur under different fluence regimes as summarized in Table 1. Thus, by examining the time and laser fluence dependence of the spectral emissions from TiO 2 , the nature of the detected signal can be characterized.
First, general spectral characteristics of LII will be presented in Sec. 4.1 to clarify the retained strategy. Then, LIE at prompt will be deeply investigated in Sec. 4.1.1. Specifically, TiO 2 LIF is expected to be the only emission observed at low laser fluence regimes, where neither LII nor PS-LIBS are expected, as indicated in Fig. 4. Then, by increasing the laser fluence, the TiO 2 particle temperature is expected to rise, and different spectral features of TiO 2 fluorescence could be induced [START_REF] Zhang | Synthesis, surface morphology, and photoluminescence properties of anatase iron-doped titanium dioxide nano-crystalline films[END_REF]. From moderate to high laser fluence regimes, the LIF, PS-LIBS, and LII of the TiO 2 nanoparticles could occur simultaneously with different intensities which makes analyzing the signal very challenging. Finally, a delay on the signal detection will be introduced in Sec. 4.1.2 so that LIF and PS-LIBS emission signals, classically lasting for the excitation laser duration (FWHM 5 ns in this study), are not expected to be found. Thus, under different laser fluence regimes, the time-resolved behavior of delayed TiO 2 emission spectra will be investigated using LII theory to demonstrate its LII nature. These aspects will be reviewed using a temporal description in Sec. 4.2. All TiO 2 measurements will be compared with those of carbon black particles as a representative LII behavior under laser irradiation.
LII theory
The evolution of the LII signal emitted at wavelength λ em by a heated spherical particle with diameter d p at temperature T p (t) during its cooling is expressed by:
S_LII(λ_em, d_p(t)) = [4π² d_p³ E(m_λem) / λ_em] · [2πhc² / λ_em⁵] · [exp(hc / (λ_em k_B T_p(t))) − 1]⁻¹    (1)
with E(m_λem) the absorption function of the particle at λ_em, h Planck's constant, c the speed of light and k_B Boltzmann's constant. To verify the LII nature of the LIE, the theoretical evolution of the signal will be verified following three points:
• The spectral emission as a function of temperature has to be verified. The peak location of the spectrum is expected to move towards longer wavelengths (red-shifted) with decreasing temperature or towards shorter wavelengths (blue-shifted) with increasing temperature, i.e. the spectrum has to follow Wien's displacement law. This thermal shift is valid only if E(m_λem) is invariant with particle temperature. Therefore, if the particles undergo a physical phase change from liquid to solid during their cooling, this assertion becomes questionable since the particle phase change may include a variation of the absorption function as illustrated in Fig. 2. The spectral analysis will be presented in Sec. 4.1. • The temporal decay characteristic time under different detection wavelengths will be employed to temporally distinguish incandescence signals from other emissions in Sec. 4.2. More precisely, the LII signal S_LII(λ_em, T_p(t)) is a function of the emission wavelength λ_em (Eq.(1)). If d_p(t) and E(m_λem) are assumed constant¹ during the LII decay, by taking the natural log of both sides of Eq.(1), it can be obtained that:
ln(S_LII(λ_em, T_p(t))) = ln C − ln[exp(hc / (λ_em k_B T_p(t))) − 1]    (2)
with C = 8π³ d_p³ h c² E(m_λem) λ_em⁻⁶. Considering C constant during the time range [t_i, t_j], a decay characteristic time τ can then be defined as:
τ = (t_i − t_j) / [ln S_LII(t_i) − ln S_LII(t_j)] = (t_i − t_j) / { ln[exp(hc / (λ_em k_B T_p(t_j))) − 1] − ln[exp(hc / (λ_em k_B T_p(t_i))) − 1] }    (3)
Since hc / (λ_em k_B T_p(t)) >> 1, Wien's approximation can be applied:
exp(hc / (λ_em k_B T_p(t_i))) − 1 ≈ exp(hc / (λ_em k_B T_p(t_i))), so that:  τ = (k_B λ_em / hc) · (t_i − t_j) · [1/T_p(t_j) − 1/T_p(t_i)]⁻¹.    (4)
It can be noted that the decay characteristic time τ is linearly proportional to the emission wavelength λ_em. This relationship is valid only for LII, meaning that if it is verified, interference signals like fluorescence or atomic emission are not present. It is worth mentioning that τ depends on the evolution of the particle temperature. Therefore, assuming that all particles attain the same temperature for a given fluence and gas temperature, the cooling rate will be inversely proportional to the particle diameter, so that larger particles exhibit longer decay characteristic times. • From Eq.(2), it is possible to derive that the quantity:
ΔT⁻¹(t) = 1/T_p(t_0) − 1/T_p(t) = ln[S_LII(λ_em, T_p(t)) / S_LII(λ_em, T_p(t_0))] · (hc / (λ_em k_B))⁻¹    (5)
is independent of the detection wavelength. Thus, if the measured emission is an LII signal, the following relation has to be verified:
ln[S_LII(λ_em,i, T_p(t)) / S_LII(λ_em,i, T_p(t_0))] · (hc / (λ_em,i k_B))⁻¹ = ln[S_LII(λ_em,j, T_p(t)) / S_LII(λ_em,j, T_p(t_0))] · (hc / (λ_em,j k_B))⁻¹, ∀ λ_em,i, λ_em,j    (6)
In Sec. 4.2, the temporal estimation of ∆T -1 (t) will be calculated from PMT signals at two different λ em . If identical, the LII nature of the signal at λ em will be confirmed at least for the two considered wavelengths.
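As a minimal numerical illustration of the first criterion (the physical constants are standard, but the two particle temperatures below are assumptions chosen only for illustration, not values from this work), Eq. (1) can be evaluated up to its constant prefactor to visualise the expected Wien-type shift of the spectral peak with particle temperature:

# Minimal sketch (assumed temperatures): evaluate the Planck-type term of Eq. (1), with
# E(m) and d_p taken constant, and locate the spectral peak for two particle temperatures.
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23        # SI constants
lam = np.linspace(300e-9, 1500e-9, 2000)        # wavelength grid (m), wider than the detection range

def S_LII(lam, Tp):
    # Eq. (1) without the constant prefactor: lambda^-6 / (exp(hc / (lambda kB Tp)) - 1)
    return lam**-6 / np.expm1(h * c / (lam * kB * Tp))

for Tp in (2800.0, 3600.0):                     # assumed particle temperatures, K
    S = S_LII(lam, Tp)
    print(f"T_p = {Tp:.0f} K -> spectral peak near {lam[np.argmax(S)] * 1e9:.0f} nm")
# The peak moves to shorter wavelengths as T_p increases (blue shift), as required for LII.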
In the following, carbon black will serve as a reference case for typical LII behavior and will be compared to the TiO 2 LIE induced in similar operating conditions.
Experimental set-up
The experimental set-up considered in this work is constituted of three parts, as illustrated in Fig. 5: particles dispersion, laser setting, and signal detection.
Particles dispersion
A nanoparticle-laden aerosol is prepared with commercial particles of carbon black (nanografi, NG04EO0709, d p = 20 nm, spherical) or of TiO 2 (nanografi, Rutile (NG04SO3507), d p = 28 nm, purity 99.995+%, produced by sol-gel method). Nanoparticles are transported in a non-reactive environment thanks to N 2 flow (16.67 slm, 9.8 m/s) in a first flask (1L) placed inside an ultrasonic water bath. The nanoparticle-laden gas stream passes through another buffer flask (1L) to get a homogeneous distribution. Then, nanoparticles are dispersed in an optical cell, presenting two 1-inch opposite windows (UVFS, Thorlabs WG41010) for the laser path and a 2-inch window (UVFS, Thorlabs WG42012) for the detection system. Particles are evacuated by the top hood equipped with a HEPA filter.
Since the dispersion rate of the particles is decreasing over time, the particles inside the first flask are reloaded after each series of measurements with m g = 200 mg of particles, allowing repeatable experimental conditions. This also means that only normalized signals will be considered in the following.
When changing the particle's nature (TiO 2 or carbon black), to prevent any intervention of particles from previous measurements, all the equipment components in contact with particles are entirely changed.
Laser setting
An Nd:YAG laser beam (Quantel, Q-smart 850) with a repetition rate of 10 Hz and pulse duration of 5 ns (FWHM) is used for the LII measurement. Operation wavelength was tested for three different harmonics -fundamental (1064 nm), second (532 nm), and third (355 nm)-on both carbon black and TiO 2 by adding harmonic generation modules to the laser head. The laser fluence is controlled using an attenuator consisting of a half-wave plate and two polarizers.
The beam presents a nearly top-hat energy distribution of 0.88 × 0.91 mm 2 , which is monitored with a beam profiler (Gentec Beamage). It is represented in the bottom-right in Fig. 5 and also as a histogram in Fig. 6. The nearly top-hat shape laser is then 1:1 relay-imaged at the centerline of the optical cell.
Signal detection
The signal detection consists of a telescope comprising two achromatic lenses (f 1 = 10 cm and f 2 = 20 cm). A magnification factor of 2 is used to collect the induced signal at 90°of the laser beam direction. Two notch filters (355 and 532 nm) were used to suppress the laser harmonic signal. For this reason, the emission spectra between 525 nm and 540 nm have been excluded from all the results. This collector is connected to the spectrometer entrance (Princeton Instruments, HRS-500, grating groove density of 150 groove/mm) through a multimode optical fiber (Thorlabs, FG365UEC) with a core diameter of 365 µm. The probe volume is 0.58 mm 3 . One exit of the spectrometer is connected to an intensified charge-coupled device camera (ICCD, Princeton Instruments, PI-MAX 4 1024EMB) to measure the spectral emission of LII with a gate width of 20 ns. The temporal behavior of laser-induced emission spectra is measured by considering different acquisition gate delay times (τ d = 0 -500 ns with respect to the signal peak). The detection system is calibrated using a tungsten filament lamp for signal intensity and a mercury lamp for wavelength. The acquisition parameters are adapted to ensure the best signal-to-noise (S/N) ratio and to prevent saturation of the device. The intensity level is then rescaled to be treated on the same intensity scale. The spectra displayed in this work are averaged over 5000 single shots.
The time-resolved laser-induced emission was measured using a visible photomultiplier (PMT, HAMAMATSU, R2257), which is connected to the second exit slit of the spectrometer. Signals at four different wavelengths (λ em = 450, 570, 640, and 710 nm for carbon black, and λ em = 450, 550, 650, and 710 nm for TiO 2 ) with detection bandwidth of FHWM 20 nm were measured. The choice of detection wavelengths was considered to cover the visible range of the spectrum to represent the temporal evolution of LIE signals for overall wavelengths. However, to avoid interferences at the prompt stage as much as possible and to capture the LII behavior, the choice of wavelengths between carbon black and TiO 2 was slightly different (570 nm and 640 nm for carbon black; 550 nm and 650 nm for TiO 2 ). Acquired PMT signals were recorded at 10Hz using an oscilloscope (Lecroy wave surfer 434, 350MHz bandwidth, 2GS/s sampling rate). As the particle dispersion rate decreases, the signal intensity also decreases, and so does the S/N ratio. To overcome this issue, only the PMT signals with the same order of S/N are kept, then normalized by their maximum intensity, and averaged over 6000 single shots.
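The single-shot selection, normalisation and averaging described above can be summarised by the following sketch; the array shapes, the pre-trigger noise window and the S/N threshold are assumptions for illustration and not the acquisition code actually used.

# Sketch of the shot selection / normalisation step (assumed threshold and noise window).
import numpy as np

def average_traces(traces, snr_min=5.0):
    """traces: array of shape (n_shots, n_samples) holding single-shot PMT records."""
    kept = []
    for tr in traces:
        noise = np.std(tr[:50])                  # assumed pre-trigger samples used for the noise level
        snr = tr.max() / max(noise, 1e-12)
        if snr >= snr_min:                       # keep only shots of comparable S/N
            kept.append(tr / tr.max())           # normalise each kept shot by its peak
    return np.mean(kept, axis=0) if kept else None

# usage: mean_trace = average_traces(raw_shots)   # e.g. raw_shots.shape == (6000, 2000)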
Even though the absolute intensity would provide significant additional insight, due to the nature of the particle dispersion system, the measurement of absolute intensity cannot be guaranteed in the current system since the amount of materials in the optical cell decreases with time. This means that during the acquisition performed over 2-3 minutes, the emission level decreases.
Nevertheless it has been verified that the normalized spectral and temporal behavior yielded the same stable results. Therefore, all the data are presented in a normalized form by their own maximum or values at specific wavelengths that will be provided in the following.
Results and discussion
The first step of this work consisted of testing three laser wavelengths (1064 nm, 532 nm, and 355 nm) to induce LIE of pure TiO 2 and to compare the results to those obtained for carbon black in similar conditions (Table 2). For carbon black, laser-induced emission was detected for all excitation wavelengths. On the contrary, for pure TiO 2 it was not possible to detect a signal with the considered detection system when working at 532 nm and 1064 nm. This is likely to be due to the fact that the absorption function E(m λem ) of pure TiO 2 above the UV range is much lower than that of carbonaceous particles as shown in Fig. 1. Therefore, only the 355 nm laser wavelength is used in the following to study the LIE of pure TiO 2 and carbon black particles.
First, LIE will be characterized with a spectral analysis in Sec. 4.1. Prompt and delayed spectra will be detailed in Secs. 4.1.1 and 4.1.2, respectively. In Section 4.2, the temporal evolution of the LIE signals obtained using the PMT will be analyzed to confirm and complete the conclusions drawn from the spectral analyses.
Fig. 7: Effect of laser fluence F on laser-induced emission spectra at prompt for (a) carbon black and (b) TiO 2 nanoparticles. The temperature inside the legend for carbon black particles is obtained from the spectrum fitting of the Planck function. Spectra are normalized by the value at 650 nm, except for the cases of TiO 2 at F = 0.06 J/cm 2 and F = 0.10 J/cm 2 , which are normalized by the maximum value at 470 nm for visualization purposes.
Spectral analysis
Figure 7 shows the spectra under different laser fluences in prompt measurements. The equivalent temperature T eq is obtained for carbon black by fitting the spectrum to Planck's law (Eq.(1)) while considering E(m_λem) constant. The obtained T eq value is reported in the legend of Fig. 7a. A large variation of the spectral form of the absorption function in the visible range has been discussed in Fig. 1 for TiO 2 . Based on this, it would be unreliable to apply such a fit to extract temperature information for TiO 2 from the spectra of Fig. 7b.
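For reference, the equivalent-temperature fit mentioned above can be sketched as follows; the data arrays and the starting point are placeholders, and this is not the processing code of the study.

# Sketch of the T_eq fit for carbon black: Eq. (1) with E(m) constant, so all
# wavelength-independent factors are lumped into a free scale parameter.
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck_model(lam, scale, Teq):
    return scale * lam**-6 / np.expm1(h * c / (lam * kB * Teq))

def fit_Teq(lam_m, spectrum):
    """lam_m: wavelengths in metres (notch-filter regions removed); spectrum: calibrated emission."""
    p0 = (spectrum.max() / planck_model(lam_m, 1.0, 3500.0).max(), 3500.0)   # assumed initial guess
    (scale, Teq), _ = curve_fit(planck_model, lam_m, spectrum, p0=p0)
    return Teq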
Concerning carbon black, a laser fluence of F = 0.03 J/cm 2 is the minimum fluence to obtain an LII signal with a high enough S/N to be detected with our detection system. Still, the S/N is not high enough to allow accurate signal characterization. The case of F= 0.03 J/cm 2 will then not be considered further in the following. For all the considered fluences, the LIE signals exhibit continuous spectra in the visible wavelength range with low intensity for the shortest wavelengths. When increasing the laser fluence, the curve moves towards shorter wavelengths. This blue shift indicates that particle temperatures have increased, as confirmed by the T eq value. This trend follows Wien's displacement law, confirming the expected LII nature of the LIE signals for carbon black particles. By looking at the values of T eq , it is found that the temperature increases with fluence for F < 0.12 J/cm 2 , whereas it does not considerably change between 0.12 and 0.23 J/cm 2 while approaching the particle sublimation temperature.
Additionally, for the highest fluence (F = 0.23 J/cm 2 ), C 2 LIF signals (at approximately 468 nm and 516 nm) and some C 3 bands (at approximately 437 nm) are clearly noticed. This indicates the presence of sublimation of soot particles. All these results are coherent with the literature [START_REF] Bejaoui | Measurements and modeling of laser-induced incandescence of soot at different heights in a flat premixed flame[END_REF][START_REF] Lee | Photoluminescence of C 60 and its photofragments in the gas phase[END_REF][START_REF] Rohlfing | Optical emission studies of atomic, molecular, and particulate carbon produced from a laser vaporization cluster source[END_REF]. Especially when considering LII of soot particles produced in a premixed methane/air flame with a 355 nm laser, a laser fluence level of approximately 0.07 J/cm 2 is enough to reach the "plateau" region, i.e. the maximum temperature that soot particles attain when subjected to a laser excitation [START_REF] Bejaoui | Measurements and modeling of laser-induced incandescence of soot at different heights in a flat premixed flame[END_REF]. In the present case, it might be possible that sublimation occurs for fluence values as small as F = 0.06 J/cm 2 , but its effect on particle diameter is expected to be negligible since neither C 2 nor C 3 bands are observed.
The prompt emission spectra of pure TiO 2 particles show, on the contrary, a clearly different tendency as a function of laser fluence. At low fluence (F = 0.06 J/cm 2 ), a relatively narrow continuous spectrum centered at 470 nm is observed. This signal does not seem to correspond to LII emission that is expected to cover the whole visible range. For F = 0.1 J/cm 2 , the spectrum covers a wider region, reaching the longest wavelengths (550-700 nm). As the laser fluence additionally increases (F = 0.19 and 0.24 J/cm 2 ), the emissions present a broadband continuous spectrum in the visible range with some sharp features. Compared to the broadband emission, these peaks are less significant at F = 0.24 J/cm 2 than at F = 0.19 J/cm 2 . This possibly indicates that the relative contribution of the broadband spectrum to the total emission is likely to increase with laser fluence. As the interpretation of the prompt LIE of pure TiO 2 particles is not straightforward, a deep characterization in terms of LIF, PS-LIBS, and LII emissions will be detailed in the next section.
Characterization of prompt LIE from TiO 2
In the previous section, it has been observed that the behavior of prompt spectra emitted by carbon black is in agreement with the literature trends [START_REF] Vander Wal | Application of laser-induced incandescence to the detection of carbon nanotubes and carbon nanofibers[END_REF]. On the contrary, LIE from high-purity TiO 2 still needs to be characterized. In this section, the nature of the LIE from TiO 2 at prompt is discussed as a function of the laser fluence. As seen previously, in the low laser fluence regime, TiO 2 nanoparticles present a narrow emission centered at 470 nm. At low fluence, PS-LIBS is not expected, and LII is usually a signal covering the whole visible spectrum. Therefore, the emission observed at low laser fluences is most likely to be LIF. To confirm such conclusion, the TiO 2 spectra emission for a low laser fluence (F = 0.06 J/cm 2 ) is reminded in Fig. 8. The spectrum obtained here for TiO 2 rutile nanoparticles is quite similar to LIF emission spectra found in the literature even for different crystal phases [START_REF] Paul | Role of surface plasmons and hot electrons on the multi-step photocatalytic decay by defect enriched Ag@ TiO 2 nanorods under visible light[END_REF][START_REF] Santara | Evidence of oxygen vacancy induced room temperature ferromagnetism in solvothermally synthesized undoped TiO 2 nanoribbons[END_REF]. As an example, spectra in low laser fluence regimes from Paul et al. [START_REF] Paul | Role of surface plasmons and hot electrons on the multi-step photocatalytic decay by defect enriched Ag@ TiO 2 nanorods under visible light[END_REF] for anatase nanorods and Santara et al. [START_REF] Santara | Evidence of oxygen vacancy induced room temperature ferromagnetism in solvothermally synthesized undoped TiO 2 nanoribbons[END_REF] for TiO 2 (B) nanoribbons are added to Fig. 8 in green and in red, respectively. Each curve can be interpreted as the sum of the gaussian fit sub-band emissions illustrated with thin dashed/dotted lines in Fig. 8. These observations allow us to assert that the prompt LIE of pure TiO 2 is dominated by its fluorescence in the low fluence laser regime excited at 355 nm.
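A multi-Gaussian decomposition of the kind used in the cited works can be sketched as below; the number of sub-bands and their initial centres are assumptions chosen for illustration only, not the values of the literature fits.

# Sketch of a multi-Gaussian fit of a low-fluence LIF spectrum (assumed initial parameters).
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(lam_nm, *params):
    # params = (A1, c1, s1, A2, c2, s2, ...): amplitude, centre (nm) and width (nm) of each sub-band
    y = np.zeros_like(lam_nm)
    for A, c0, s in zip(params[0::3], params[1::3], params[2::3]):
        y = y + A * np.exp(-0.5 * ((lam_nm - c0) / s) ** 2)
    return y

def decompose(lam_nm, lif_spectrum):
    p0 = [1.0, 440.0, 20.0, 1.0, 470.0, 20.0, 0.5, 530.0, 30.0]   # assumed three sub-bands
    popt, _ = curve_fit(multi_gauss, lam_nm, lif_spectrum, p0=p0)
    return np.reshape(popt, (-1, 3))        # one (amplitude, centre, width) row per sub-band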
When increasing the laser fluence (F = 0.10 J/cm 2 ), a wider spectrum (Fig. 7b) is emitted. A longer-wavelength part appears in the LIE spectrum for the higher fluence F = 0.1 J/cm 2 . This would be in contradiction with LIF behaviors such as in [START_REF] Forss | Temperature dependence of the luminescence of tio 2 powder[END_REF] if the signal originated only from LIF. This seems to indicate the presence of a second signal in addition to LIF, whose contribution to the total LIE is evident at long wavelengths where the LIF signal is less significant. At higher fluences (F ≥ 0.19 J/cm 2 ), the nature of the multiple emissions can be investigated by looking at the prompt emission spectra of TiO 2 nanoparticles for F = 0.19 J/cm 2 in Fig. 9. A broadband emission in the visible range is observed together with the presence of pronounced peak emissions, whose origins are unknown. The nature of the peak emissions is analyzed first. One might think they could correspond to atomic emission from laser breakdown of TiO 2 (LIBS). However, the laser energy level is still lower than that typical of LIBS [START_REF] Lebouf | Comparison of field portable measurements of ultrafine TiO 2 : X-ray fluorescence, laser-induced breakdown spectroscopy, and fouriertransform infrared spectroscopy[END_REF]. Therefore, two possibilities can be considered to interpret these signals: LIF and/or PS-LIBS emissions.
To consider the first option, deconvoluted gaussian band emissions from various literature works are illustrated together with the emission spectra of TiO 2 in Fig. 9a. Only the center of each gaussian band with the standard deviation ±σ range is shown for a clear view. The sub-bands from the literature cover almost all the distinct feature ranges of the spectrum. Therefore, the LIE can be the result of these different LIF contributions. As the laser fluence increases, it is possible for the positions of the sub-band peaks to remain unchanged while becoming progressively sharper, eventually achieving a line distribution [START_REF] Abazović | Photoluminescence of anatase and rutile TiO 2 particles[END_REF]. This could be the first route to explain the edged features of the emission spectra of TiO 2 for high laser fluences.
Alternatively, it is possible to consider the distinct features as atomic emissions of PS-LIBS. Even though the laser fluence is relatively lower than the minimum PS-LIBS range in the literature (0.4 J/cm 2 at 355 nm in [START_REF] Xiong | Phaseselective laser-induced breakdown spectroscopy of metal-oxide nanoparticle aerosols with secondary resonant excitation during flame synthesis[END_REF]), single-photon absorption may cause atomic emission following the mechanism proposed by [START_REF] Ren | Absorptionablation-excitation mechanism of laser-cluster interactions in a nanoaerosol system[END_REF] for PS-LIBS of flame-synthesized TiO 2 . To consider this case, Ti and O atomic emissions from NIST are overlapped in blue and in red, respectively, to the emission spectrum of TiO 2 in Fig. 9b. The peak locations match fairly well with Ti and O atomic emissions.
The current results suggest that in a high laser fluence regime, LIF and/or PS-LIBS of pure TiO 2 occur at prompt. These emissions are expected to occur over a short period of time (Fig. 4). On the contrary, the broadband emission covers the whole visible spectrum, as expected for LII signal. To confirm the LII-nature of the broadband emission, laser-induced emissions at delayed acquisition time are investigated in the following section since LII is known to be a long-lasting signal.
LII nature of delayed LIE from TiO 2
As discussed previously, LIF and/or PS-LIBS signals of pure TiO 2 nanoparticles are observed at the prompt emission. Since LIF and PS-LIBS signals have a short characteristic decay time, it is possible to investigate the nature of the broadband contribution by looking at the delayed LIE of TiO 2 . The trend as a function of gate delay and emission wavelength is here compared with the theoretical behavior for LII, which is described in Sec. 2.1 and with reference results from carbon black to prove its LII-like nature.
First, Fig. 10 shows the laser-induced emission spectra of carbon black and pure rutile TiO 2 particles at three fluence levels, at various acquisition delays (over 20 ns) from the prompt timing, i.e. the signal peak. For all laser fluences, the emission spectra of carbon black show a red-shift when increasing the time delay. This indicates that the temperature decreases in agreement with LII theory as confirmed by the calculated equivalent temperatures reported in the legend.
Regarding TiO 2 nanoparticles, for the lowest fluence (F = 0.06 J/cm 2 ), the signal-to-noise ratio is not sufficiently high to obtain an exploitable spectrum for an acquisition delay as small as 20 ns. When looking at a higher laser fluence (F = 0.1 J/cm 2 ), a continuous broadband emission, which is typical for LII signal, is measured with a low S/N ratio. This suggests that at this fluence the incandescence signal starts to contribute to the whole spectrum in addition to the fluorescent emissions detected at prompt (Fig. 7b). At 20 ns, one might assume that there are still components from non-LII contributions. The spectrometer should therefore detect a mixture of non-LII and LII components of signals. In reality, only the LII broadband emission can be recognized when looking at the spectrum in Fig. 10. Two reasons have been identified to explain this: 1) the LII-like nature of signals are more predominant than non-LII contributions for the considered conditions, and/or 2) the spectrometer used in this study is not accurate enough to successfully separate non-LII and LII components in those spectra.
To summarize, below 0.06 J/cm 2 , the dominant laser-induced emission (LIE) of TiO 2 is LIF as shown in Fig. 8 (similar literature can be found in [START_REF] De Haart | The observation of exciton emission from rutile single crystals[END_REF] and [START_REF] Zhu | Investigation on the surface state of TiO 2 ultrafine particles by luminescence[END_REF] as well). At 0.1 J/cm 2 and above, two different types of signals can be observed: one from LIF centered at 470 nm over the UV wavelengths, and another at longer wavelength that corresponds to the onset of LII. The threshold for the appearance of LII is more than half of the evaporation fluence of 0.19 J/cm 2 . This behavior is not surprising for LII, as its intensity is strongly non-linear with fluence. The gap between the appearance fluence and the evaporation fluence can vary, depending on the excitation wavelength and the nature of the particles. This feature is well known for soot, as illustrated in the case reported by Goulay et al. [START_REF] Goulay | A data set for validation of models of laser-induced incandescence from soot: temporal profiles of lii signal and particle temperature[END_REF]. For soot particles, a factor of around 4 is observed between the appearance fluence and the evaporation fluence in the case of 1064 nm, and a factor of around 2 is found in the case of 532 nm [START_REF] Goulay | A data set for validation of models of laser-induced incandescence from soot: temporal profiles of lii signal and particle temperature[END_REF].
For the highest laser fluence (F = 0.24 J/cm 2 ), the spectra exhibit incandescence trends as a function of the acquisition delay: the spectrum shifts towards longer wavelengths with time delay. This indicates that temperature decreases during the signal decay due to the cooling of the particles, confirming the black-body-like tendency for pure TiO 2 . As already discussed for F = 0.1 J/cm 2 at 20 ns and 40 ns, a certain contribution from the emission of LIF or PS-LIBS might exist. Nevertheless, the overall delayed spectra consistently exhibit a red-shift as the particle cools. A very similar trend is observed for F = 0.19 J/cm 2 (not shown) [START_REF] Pallotti | Photoluminescence mechanisms in anatase and rutile TiO 2[END_REF].
Fig. 10: Effect of gate delay (gate width = 20 ns) on emission spectra of laser-induced emissions of (a,c,e) carbon black nanoparticles and (b,d,f) rutile TiO 2 nanoparticles. The temperature given inside the legend for carbon black particles is obtained from spectrum fitting. Spectra are normalized with the value of the spectrum at 650 nm.
Discussion
The nature of LIE depends strongly on the light-particle interaction, providing indications on the state of the particles. In the low fluence regime, laser light can excite the electronic states of the particles, leading to fluorescence without any modification of the particle state, i.e. solid particles are expected. By increasing the fluence, particles can be thermally heated, resulting in the emission of incandescence signals. If the temperature of the particles is higher than the melting point but lower than the boiling point, particles may exist in a liquid state. As discussed in Fig. 2, a higher or lower E(m λ ) value can be observed when particles are in a liquid state. If the E(m λ ) value is higher in the liquid state than in the solid state, LII measurements can be more easily performed at intermediate fluences if particles reach the melting temperature. At an even higher fluence regime, particles may reach the vaporization temperature so that PS-LIBS emission can be facilitated and observed. By looking at the results in Fig. 7b, it is then possible to deduce some indications on the particle state. On one hand, carbon black nanoparticles are characterized by high E(m λ ) values in the solid state and very high melting and boiling points (T carbon black melting = 3823 K [START_REF]CAS Common Chemistry. CAS, a Division of the American Chemical Society[END_REF] and T carbon black boiling = 4473 K [START_REF]CAS Common Chemistry. CAS, a Division of the American Chemical Society[END_REF], respectively), and a solid-to-gas conversion is classically assumed. As a result, an LII signal is observed at the lowest fluence considered and a smooth transition is observed until vaporization of carbon atoms occurs at the highest fluence.
On the other hand, TiO 2 nanoparticles have low E(m λ ) values when they are in the solid state and lower melting and boiling temperatures (T TiO2 melting ∼ 2100 K [START_REF] O'neil | The merck indexan encyclopedia of chemicals, drugs, and biologicals[END_REF] and T TiO2 boiling = 2773 -3273 K [START_REF] Weast | Crc handbook of chemistry and physics 69th ed (boca raton, fl: Chemical rubber)[END_REF], respectively). It seems then that these particles can hardly be heated, so only LIF is observed at low fluences. For higher fluences, it is possible that particles are initially converted to a liquid state during the heating process. Consequently, E(m λ ) increases, which subsequently enhances the particle heating up to their vaporization. As a result, no smooth transition is observed for LIE signals on TiO 2 particles. For small fluences, particles cannot be sufficiently heated due to their low E(m λ ). For high fluence regimes, it seems that particles have changed their state, increasing their E(m λ ) value and, consequently, enhancing the particle heating process.
In the following section, the temporal behavior is investigated to confirm the conclusions on the nature of LIE. Before analyzing the temporal evolution of LIE, it is important to remind that the temporal profile is closely related to the primary particle size [START_REF] Will | Two-dimensional soot-particle sizing by time-resolved laser-induced incandescence[END_REF]. In this study, particles of the same size are expected to be distributed into the optical cell during the measurement, whereas a polydisperse population is generally found in flame synthesis. Still, the particle diameter can decrease with time in case of sublimation, i.e. for high laser fluence regimes [START_REF] Michelsen | Modeling laser-induced incandescence of soot: a summary and comparison of lii models[END_REF].
Temporal analysis
The temporal evolution of the signals emitted at 450 nm for carbon black and TiO 2 under different fluences is presented in Fig. 11. For carbon black (Fig. 11a), the decay rates are relatively linear (in log scale) without significant slope change for all fluences except F = 0.23 J/cm 2 . Since the particle temperatures for F = 0.06 and 0.12 J/cm 2 are quite similar, the decay rates are nearly the same in Fig. 11a. At F = 0.23 J/cm 2 , the high fluence induces sublimation of the particles, resulting in LIF from vaporized species, as discussed in Fig. 7a, and decreasing the particle diameter. As a consequence, the LII signal has a rapid decay close to the prompt and presents a slope change at around 50 ns.
For pure TiO 2 particles (Fig. 11b), at low laser fluence (F ≤ 0.06 J/cm 2 ), a short-lifetime signal is detected at 450 nm. Together with the previous analysis of the spectrum, this possibly indicates that the LII component, known to be characterized by a long decay time, is not observed at this fluence. This is due to the fact that the particle temperature is not high enough to emit a detectable LII signal. The lifetime t non-LII of this signal is estimated to be ≈ 100 ns (FWHM around 10 ns, with the tail of the temporal profile persisting until about 60 ns). At higher fluences (F ≥ 0.19 J/cm 2 ), a slope change is observed near t non-LII , indicating the presence of multiple signals with both short and long lifetimes. The short non-LII contributions are predominant over the first t non-LII and present a significant slope. The long ones dominate the second period of the emission. Comparing F = 0.19 and F = 0.24 J/cm 2 for t ≥ t non-LII , the decay rate is nearly identical, meaning that the processes at the origin of the delayed LIE are quite similar for these two fluences. Once again, while the interpretation of the temporal emission from soot particles is well established in the literature [START_REF] Michelsen | Laser-induced incandescence: Particulate diagnostics for combustion, atmospheric, and industrial applications[END_REF], the temporal LIE from TiO 2 requires a more extensive analysis.
For this, Figs. 12 and 13 show the normalized temporal evolution of signal for three different fluences for carbon black and two different fluences for TiO 2 for different detection wavelengths. The temporal evolutions of LIE emissions at four wavelengths are considered for various laser fluences characterized by a significant S/N ratio. Only the temporal signals after 100 ns from peak signal are considered to completely avoid non-LII contribution. The signals are normalized at 100 ns delayed timing to compare the temporal decay.
Concerning carbon black, a shorter residence time is observed when increasing the laser fluence. This is not in contradiction with LII theory. It is true that at prompt for given initial particle state (diameter and temperature), a higher particle temperature is obtained for a higher laser fluence. However, the LII decay rate is governed by the particle temperature gradient as deduced from Eq. ( 4) and not by particle temperature itself. Such temperature gradient is governed by the evaporation, radiation and conduction processes. Thus, when considering LII in cold environment, it is possible to observe longer decay times for higher laser fluences due to a strong contribution of conduction to the cooling process as proven by LII simulations provided as Supplementary materials.
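A minimal version of such a cooling calculation can be sketched as follows; the material properties, the accommodation coefficient and the initial temperature are illustrative assumptions and not the values used in the Supplementary materials.

# Sketch of a post-pulse energy balance: free-molecular conduction cooling of a single
# particle in a room-temperature N2 atmosphere (all property values are assumed).
import numpy as np
from scipy.integrate import solve_ivp

d_p, rho_p, c_p = 28e-9, 4.2e3, 700.0      # particle diameter (m), assumed density and heat capacity
T_gas, alpha = 300.0, 0.3                  # gas temperature (K), assumed accommodation coefficient
p, M, gamma, R = 1.013e5, 28e-3, 1.4, 8.314

def dTdt(t, T):
    c_mean = np.sqrt(8 * R * T_gas / (np.pi * M))                  # mean molecular speed of the gas
    q_cond = alpha * np.pi * d_p**2 * p * c_mean / 8 \
             * (gamma + 1) / (gamma - 1) * (T / T_gas - 1)         # conduction loss, W (approximate form)
    heat_capacity = rho_p * c_p * np.pi * d_p**3 / 6               # J/K
    return -q_cond / heat_capacity

sol = solve_ivp(dTdt, (0.0, 500e-9), [3300.0], t_eval=np.linspace(0.0, 500e-9, 101))
# sol.y[0] is T_p(t); inserting it into Eq. (1) gives the corresponding simulated LII decay.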
In the case of TiO 2 , the curves exhibit the same decay tendency for all fluences and wavelengths, implying that particles have reached relatively close temperatures for the two considered fluences. To validate the LII nature of the delayed LIE, the decay characteristic time τ is calculated using Eq. (3) by fitting the logarithm of the LIE signals between 100 and 400 ns after the peak signal. Results are shown in Figs. 14a and 14b for carbon black and TiO 2 , respectively. Here, the fluence values before the evident vaporization regime (at the beginning of the plateau curves) are considered, i.e. F = 0.12 J/cm 2 and F = 0.19 J/cm 2 for carbon black and high-purity TiO 2 , respectively. For the other laser fluences, the same linear tendencies have been observed (not shown).
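The decay-time extraction behind Fig. 14 can be sketched as follows; the trace containers are placeholders, and the 100-400 ns fitting window follows the procedure described above.

# Sketch of the log-linear decay-time fit and of the tau-versus-wavelength regression.
import numpy as np

def decay_time(t_ns, signal, t_min=100.0, t_max=400.0):
    sel = (t_ns >= t_min) & (t_ns <= t_max)
    slope = np.polyfit(t_ns[sel], np.log(signal[sel]), 1)[0]
    return -1.0 / slope                       # ns, from ln S(t) ~ -t / tau

def tau_vs_wavelength(traces):
    """traces: dict {wavelength_nm: (t_ns, pmt_signal)}, e.g. for 450, 550, 650 and 710 nm."""
    lams = np.array(sorted(traces))
    taus = np.array([decay_time(*traces[lam]) for lam in lams])
    slope, intercept = np.polyfit(lams, taus, 1)
    return lams, taus, slope                  # tau is expected to grow linearly with lambda (Eq. (4))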
Concerning carbon black nanoparticles, the decay characteristic time τ increases with the wavelength in accordance with Eq. ( 4), which has been derived for LII signals assuming a constant primary particle diameter d p . Similarly, for pure TiO 2 nanoparticles, the decay characteristic time τ values are linearly proportional to the emission wavelength, confirming the LII nature of the delayed LIE as derived from the LII theory of Sec. 2.1. Finally, the temporal evolution of quantity ∆T -1 defined in Eq. ( 5) is reported for carbon black nanoparticles in Fig. 15 for two wavelengths and various fluences from the temporal LIE profile starting at t 0 = 100 ns from prompt. In the case of carbon black, the ∆T -1 for λ em = 640 and 710 nm are identical for fluences of 0.06 and 0.12 J/cm 2 , confirming the incandescent nature of signal for all signal durations. Once again, discrepancies are observed for the highest fluence (F=0.23 J/cm 2 ), where sublimation is suspected. The evolution of ∆T -1 of TiO 2 of high purity in an inert environment is illustrated in Fig. 16. The evolution of ∆T -1 at λ em = 650 nm and λ em =710 nm shows the same decreasing tendency for the two fluences values. Therefore, the LII behavior of pure TiO 2 is demonstrated at least for the LIE signals at these wavelengths and time delays.
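Similarly, the ΔT⁻¹ comparison of Figs. 15 and 16 amounts to applying Eq. (5) to two PMT channels; a sketch with placeholder data arrays is given below.

# Sketch of the Delta T^-1 check of Eq. (5)-(6) from two detection wavelengths (assumed arrays).
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def delta_T_inv(t_ns, signal, lam_m, t0_ns=100.0):
    i0 = np.argmin(np.abs(t_ns - t0_ns))
    return np.log(signal / signal[i0]) * lam_m * kB / (h * c)      # Eq. (5)

# d650 = delta_T_inv(t_ns, S_650, 650e-9); d710 = delta_T_inv(t_ns, S_710, 710e-9)
# If the two curves overlap (Eq. (6)), the LII nature of both channels is supported.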
Conclusion
Laser diagnostics are increasingly employed for the characterization of nanoparticle production in flame synthesis. Among the different available techniques, LII satisfies the need to characterize the volume fraction with an in-situ approach and with possible application to turbulent flames. However, the adaptation of LII from conventional soot particles to non-soot metal oxide nanoparticles can be challenging in terms of phase change during the LII process, the selection of the excitation laser wavelength, and the existence of non-thermal laser-induced emissions as interference during the measurement. To the authors' knowledge, studies applying LII to TiO 2 have been done until now under the risk of carbon traces in the signals from the hydrocarbon fuel or carbon-containing precursors [START_REF] Cignoli | Laser-induced incandescence of titania nanoparticles synthesized in a flame[END_REF][START_REF] Maffi | Spectral effects in laser induced incandescence application to flame-made titania nanoparticles[END_REF][START_REF] De Iuliis | Light emission of flame-generated TiO 2 nanoparticles: Effect of ir laser irradiation[END_REF][START_REF] De Iuliis | Laser-induced emission of tio 2 nanoparticles in flame spray synthesis[END_REF]. These studies were mainly performed for high flame temperatures close to the melting point of TiO 2 , leading to uncertainty about the source of the incandescent signal. This study aims to definitively assess the feasibility of laser-induced incandescence measurements for TiO 2 nanoparticles by considering a high-purity TiO 2 aerosol. The spectral and temporal behavior of the laser-induced emission of TiO 2 was investigated by comparison with carbon black nanoparticles. High-purity TiO 2 produced by a sol-gel process and carbon black nanoparticles are dispersed by an N 2 gas flow into a non-reacting optical cell. A nearly top-hat-shaped laser at 355 nm successfully heats the particles. Their spectral and temporal behavior was measured via a spectrometer and a PMT, respectively. The ambient-temperature inert gas dispersion prevents melting before the laser irradiation, interference from flame-generated emission, and the possible formation of a carbonaceous layer on the particles.
For carbon black, the blue shift with increasing fluence was observed. C 2 emission was observed at high laser fluence possibly due to sublimation. TiO 2 nanoparticles show different emission spectra at prompt depending on the laser fluence. At low fluence, the emission spectra of TiO 2 resemble those reported in the literature for LIF with a minor dissimilarity due to the difference in particle nature and/or experimental parameters. At high fluence, sharp emission features superimposed with broadband emission spectra were detected. The distinct features might be interpreted as narrower LIF emissions under higher fluence or PS-LIBS atomic line emission. Focusing on the part of the signal not influenced by the interferences, the experimental system is validated with conventional carbon black particles by demonstrating that the emission spectra with a delayed gate follow the black-body radiation behavior. Then, for titania, despite the difficulties in excitation and detection of LII measurements due to a small absorption function E(m λ ), the delayed gate emission spectra show that the TiO 2 nanoparticles present a black-body-like radiation response during the cooling process. The LII on pure TiO 2 is possible when (1) particles are heated with enough laser fluence and (2) delayed detection is applied to avoid short-duration non-LII emissions. In a temporally resolved signal, those interferences disappear after 100 ns with an apparent change of the decaying slope. The LII nature of TiO 2 LIE is also proven based on the LII theory with temporal evolution in different emission wavelengths.
This study demonstrates the possibility of the in-situ LII on pure TiO 2 in a cold environment. In the Supplementary materials, its feasibility is extended to flame-synthesized TiO 2 nanoparticles. Nevertheless, the analysis of the LII signal for non-soot nanoparticles remains challenging. Specifically, the parasite laser-induced emissions, LIF and PS-LIBS atomic emissions, prevent the possibility of measuring the effective temperature of particles at the timing of the signal peak, which is necessary to interpret the LII signal in terms of volume fraction. Data analysis to obtain size information will be more difficult with unknown and/or unfavorable optical properties. The possible coexistence of different particle populations should also be taken into account during the LII signal interpretation.
Declarations. All authors declare that they have no conflicts of interest.
Fig. 4: Schematic presentation of the particle phase information related to the temperature (top) and to the LIE characteristic emission duration (bottom) as a function of the typical fluence range of different LIE signals.
Fig. 5: Schematic presentation of the experimental set-up developed to investigate laser-induced emission from TiO 2 nanoparticle-laden aerosol using a pulse YAG laser at 355 nm with a nearly top-hat spatial profile (bottom-right).
Fig. 6: Histogram of laser fluence across the laser beam detected by the beam profiler.
Fig. 9: Normalized emission spectra of TiO 2 for F = 0.19 J/cm 2 with (a) gaussian fit of band emission of LIF from literature [15, 20, 72, 73, 75, 76] and (b) Ti and O atomic emissions from NIST [77] corresponding to peak location of emission spectra. The intensities of each transition are adapted for a clear view with the curve, not expressing the relative intensities.
Fig. 11: Temporal evolution of laser-induced emission of (a) carbon black and (b) TiO 2 nanoparticles at 450 ± 10 nm detection wavelength. Each profile is normalized by its maximum value.
Fig. 13: Normalized temporal evolution of high-purity TiO 2 nanoparticles at different laser fluences at detection wavelengths of (a) 450 ± 10, (b) 550 ± 10, (c) 650 ± 10 and (d) 710 ± 10 nm. All profiles are normalized with the value of the signal at 100 ns.
Fig. 14: Decay characteristic times of LIE signal for (a) carbon black and (b) TiO 2 as a function of detection wavelength. The R 2 value inside the legend represents the coefficient of determination for the fitting procedure. The dotted lines are the results of the linear fitting.
Fig. 15: Time evolution of ∆T -1 of carbon black nanoparticles LIE for three laser fluences at (a) F=0.06, (b) F=0.12 and (c) F=0.23 J/cm 2 .
Fig. 16: Time evolution of ∆T -1 of high-purity TiO 2 nanoparticles LIE for two laser fluences.
[Figure residue: plot of the absorption functions of Si, Fe, Au, Ag, Ni, Al, Cu and Al 2 O 3 versus wavelength (300-700 nm). Caption fragment: Ni, Au, Cu, Al, and Fe have spectrally different absorption functions when they are in a liquid state, fluctuating close to 1. Ag, Si, and Al 2 O 3 have a much higher absorption function in a liquid state.]
Table 1: Characteristics of the laser-induced emissions of TiO 2 nanoparticles in terms of operating fluence range and characteristic time (at 1 atm).

Laser λex | LIF | PS-LIBS | LIBS | LII
UV | continuous wave (∼mW) | 0.4-32 J/cm 2 at 355 nm [63] | - | 0.0007-0.05 J/cm 2 (filter), 0.08-0.15 J/cm 2 (flame) at 266 nm [54]
532 nm | - | 1.6-56 J/cm 2 [25, 63, 64] | - | -
1064 nm | - | 10-45 J/cm 2 [64] | 24-75 J/cm 2 [65, 66] | 0.01-0.56 J/cm 2 [42]
Characteristic time | ps to 10 ns [START_REF] Michelsen | Laser-induced incandescence of flame-generated soot on a picosecond time scale[END_REF] | 10 ns [START_REF] Zhang | Novel low-intensity phase-selective laser-induced breakdown spectroscopy of TiO 2 nanoparticle aerosols during flame synthesis[END_REF] | 300 ns-ms [67] | 10 2 -10 3 ns
Table 2: Detectable LIE signals with our detection system for different laser wavelengths for carbon black and pure TiO 2 nanoparticles.

Wavelengths | 1064 nm | 532 nm | 355 nm
Carbon black | YES | YES | YES
TiO 2 | NO | NO | YES
[Fig. 7 legend data: normalized emission (a.u.) versus wavelength (450-700 nm) at prompt; carbon black equivalent temperatures of 3040 K, 3520 K, 3600 K and 3670 K for laser fluences of 0.03, 0.06, 0.12 and 0.23 J/cm 2 , respectively; TiO 2 curves for 0.06, 0.10, 0.19 and 0.24 J/cm 2 , with sharp features superimposed on the broadband emission.]
[Fig. 10 legend data: normalized emission (a.u.) versus wavelength (450-700 nm) for detection gate delays of +20 to +500 ns; carbon black equivalent temperatures decrease from 3320 K (+20 ns) to 2610 K (+500 ns) at F = 0.06 J/cm 2 , from 3420 K to 2760 K at F = 0.12 J/cm 2 , and from 3370 K to 2750 K at F = 0.23 J/cm 2 .]
¹ Assuming d_p(t) constant means that sublimation is negligible during the duration of the LII process.
Acknowledgments. This project has received support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 757912). The authors thank Dr. G. E. (Jay) Jellison of the Oak Ridge National Lab and Dr. Han-Yin Liu at National Sun Yat-Sen University for providing data for the refractive index and extinction coefficient.
A note about Grothendieck's constant
Jean-Louis Krivine
Université Paris-Cité, C.N.R.S. [email protected]
May 24, 2023
Grothendieck's constant, denoted K_G, is defined as the smallest real K > 0 such that we can write ⟨x, y⟩ = K E(sign(U_x) sign(V_y)) with x, y ∈ S (the unit sphere of l²) and U_x, V_y random variables, which are measurable functions of x, y respectively. Its existence was proved by A. Grothendieck in [START_REF] Grothendieck | Résumé de la théorie métrique des produits tensoriels topologiques[END_REF] where it is shown that: π/2 ≤ K_G ≤ sh(π/2) = 2.301... This constant is important in various areas such as functional analysis, algorithmic complexity and quantum mechanics: see [START_REF] Pisier | Grothendieck's theorem, past and present[END_REF]. Its exact value is unknown. In [START_REF] Krivine | Constantes de Grothendieck et fonctions de type positif sur les sphères[END_REF], it is shown that K_G ≤ π / (2 ln(1 + √2)) = 1.782... This result is improved in the prominent article [START_REF] Braverman | The Grothendieck constant is strictly smaller than Krivine's bound[END_REF] which proves:
K_G < π / (2 ln(1 + √2))    (1)
Of course, the method gives a better upper bound, but the authors did not consider it useful to give it explicitly for the moment. The important fact in (1) is the symbol <.
As explained below, the proof given in [START_REF] Braverman | The Grothendieck constant is strictly smaller than Krivine's bound[END_REF] is divided into two parts and the aim of the present note is to give another proof of the first part.
Let (X_i, Y_i) (0 ≤ i ≤ n−1) be independent pairs of centered Gaussian normal random variables, such that E(X_i Y_i) = t. Let X = (X_0, . . . , X_{n−1}), Y = (Y_0, . . . , Y_{n−1}), x = (x_0, . . . , x_{n−1}), y = (y_0, . . . , y_{n−1}). Let F, G : R^n → R be two odd (i.e. F(−x) = −F(x)) measurable functions. We set:
Φ_F,G(t) = E[sign(F(X)) sign(G(Y))]
or else :
Φ_F,G(t) = (2π√(1 − t²))⁻ⁿ ∫_{R^{2n}} sign(F(x)) sign(G(y)) exp(−(‖x‖² + ‖y‖² − 2t⟨x, y⟩) / (2(1 − t²))) dx dy    (2)
Φ F,G (t) is an odd function of t, which is analytic around 0, in fact for |Re(t)| < 1. Therefore Φ F,G (i)/i is real. We have :
Φ_F,G(i)/i = (2π√2)⁻ⁿ ∫_{R^{2n}} sign(F(x)) sign(G(y)) exp(−(‖x‖² + ‖y‖²) / 4) sin(⟨x, y⟩ / 2) dx dy    (3)
The first part of the proof of (1), which is section 4 of [START_REF] Braverman | The Grothendieck constant is strictly smaller than Krivine's bound[END_REF], consists in showing the following:
Theorem 1.
There exists an integer n ≥ 1 and two odd functions F, G : R n → R such that :
Φ_F,G(i)/i > (2/π) ln(1 + √2).
H. König has shown that n must be > 1 ; a proof of this is given in section 6 of [START_REF] Braverman | The Grothendieck constant is strictly smaller than Krivine's bound[END_REF].
In [START_REF] Braverman | The Grothendieck constant is strictly smaller than Krivine's bound[END_REF], the authors choose n = 2 ; F ( X) = G( X) = X 0 + ǫH 5 (X 1 ) where H n (x) is the Hermite polynomial of degree n ; we have H 5 (x) = x 5 -10x 3 + 15x ; ǫ is a positive real which decreases to 0.
Here we take n = 3 with :
F ( X) = X 1 cos(ǫH 2 (X 0 ))+X 2 sin(ǫH 2 (X 0 )) and G( Y ) = Y 1 cos(ǫH 2 (Y 0 ))-Y 2 sin(ǫH 2 (Y 0 )).
where ǫ is a fixed positive real and
H 2 (x) = x 2 -1.
Applying the well known formula E(sign(X) sign(Y)) = (2/π) Arcsin(E(XY)), we get:
E[sign(F(X)) sign(G(Y))] = (2/π) E[Arcsin(t cos(ε(X_0² + Y_0² − 2)))]

or else:

Φ_F,G(t) = (2/π) ∫_{R²} Arcsin(t cos(ε(x² + y² − 2))) exp(−(x² + y² − 2txy) / (2(1 − t²))) dx dy / (2π√(1 − t²))

Let t = i and η = 2ε:

Φ_F,G(i)/i = (2/π) ∫_{R²} Argsh(cos(ε(x² + y² − 2))) exp(−(x² + y²)/4) cos(xy/2) dx dy / (2π√2)
= (2/π) ∫₀^∞ ∫_{−π}^{π} Argsh(cos(ε(r² − 2))) exp(−r²/4) cos(r² sin(2θ)/4) r dr dθ / (2π√2)
= (2/π) ∫₀^∞ ∫₀^π Argsh(cos(η(2ρ − 1))) e^{−ρ} cos(ρ sin θ) (√2/π) dρ dθ
This integral is not difficult to compute with a suitable software, which also gives a good value for η; with η = 0.228 we find 0.56161447 > (2/π) ln(1 + √2) = 0.56109985...
If the inverse power series of Φ_F,G(t) had been alternating, we would have obtained in this way an upper bound for K_G, that is i/Φ_F,G(i) < 1.7806. But it is not, as is easily checked, using the same computation tools. The second part of the proof in [START_REF] Braverman | The Grothendieck constant is strictly smaller than Krivine's bound[END_REF] must therefore now be applied. It occupies section 5 of this article and uses only the above theorem 1.
Here are the details of the calculations in Mathematica and Maxima:
Computation in Mathematica
e = 0.228; 2*(Sqrt[2]/Pi^2)*NIntegrate[ArcSinh[Cos[e*(2r-1)]]
*Exp[-r]*Cos[r*Sin[t]],{r,0,Infinity},{t,0,Pi}]
0.561614475916681
Computations in Maxima
e:0.228$ float((2*sqrt(2)/(%pi)^2)*romberg(romberg((%e^(-r)*cos(r*sin(t))
*asinh(cos(e*(2*r-1)))),r,0,30),t,0,%pi));
0.5616148084478034
e:0.228$ float(2*sqrt(2)/(%pi)^2)*quad_qag(romberg((%e^(-r)*cos(r*sin(t))
*asinh(cos(e*(2*r-1)))),r,0,30),t,0,%pi,3);
[0.5616145048484699,5.20841189541884*10^-9,8.883967107886724,0] |
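For readers without Mathematica or Maxima, the same double integral can be cross-checked with an open-source stack; the following Python/SciPy sketch (not part of the original note) truncates the ρ-integration at 30, as in the Maxima calls above.

# Cross-check of 2*sqrt(2)/pi^2 * Int_0^inf Int_0^pi Argsh(cos(eta*(2*rho-1))) e^-rho cos(rho sin(theta)) drho dtheta
import numpy as np
from scipy import integrate

eta = 0.228

def integrand(rho, theta):
    return np.arcsinh(np.cos(eta * (2.0 * rho - 1.0))) * np.exp(-rho) * np.cos(rho * np.sin(theta))

val, err = integrate.dblquad(integrand, 0.0, np.pi, 0.0, 30.0)   # theta outer, rho inner
print(2.0 * np.sqrt(2.0) / np.pi**2 * val)                       # ~0.5616...
print(2.0 / np.pi * np.log(1.0 + np.sqrt(2.0)))                  # ~0.5611..., the bound to beat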
A Lefort
H Guéguen
R Bourdais
email: [email protected]
G Ansanay-Alex
A BUILDING ENERGY MANAGEMENT SYSTEM BASED ON DISTRIBUTED MODEL PREDICTIVE CONTROL
Keywords: Building Energy Management, Distributed Model Predictive Control, Load Shedding, Peak Reduction
In this paper, a BEM system based on distributed predictive control is proposed. The idea is to schedule the actions of the various controllable systems to minimize the energy cost while maintaining the occupant comfort and systems constraints. This scheduling is based on the knowledge of the future data profiles as well as the future cost of energy. The cost reduction is ensured by means of the building storage capacities and by shifting the house consumption periods if the future price is high. Each building is different from another, because of its construction, its systems and its occupants. Consequently, BEM systems have to be modular. This point is ensured by its distributed architecture: one agent is dedicated to each controllable system, and a coordinator agent ensures an optimized global behavior.
METHOD
Nowadays, the interest in developing building control is to give the building the ability to shift or shed its grid consumption in order to save money and/or reduce its consumption, all while ensuring occupant comfort and system constraints. To do so, it is necessary to integrate all the controllable building systems into a BEM system that has a global view.
System formulation
The proposed method is based on a system view of the building installation. It considers that each building system has its own objective and constraints to satisfy. For example, a heating system can be seen as a producer unit, with capacity constraints, which has to ensure the thermal comfort in the building. From a control point of view it is defined as:
Problem 1 (Heating System problem):

minimize J = ∫₀^H C(t)·P_hp(t) dt    (1)

with respect to, ∀t ∈ R:

Q_hp(t) = η P_hp(t),  Ṫ_ambiant(t) = f(T_ambiant(t), Q_hp(t), B_d D(t)),  P_hp^min ≤ P_hp(t) ≤ P_hp^max,  T^min ≤ T_ambiant(t) ≤ T^max    (2)

where C(t) depends on the control objective; here, it is supposed to be the time-varying energy price. Q_hp(t) is the thermal heating power provided through the heating system power P_hp(t), T_ambiant(t) is the internal building temperature, D(t) is the uncontrollable thermal gain variable, f is the thermal dynamics coefficient function, while T^min, T^max, P_hp^min and P_hp^max are the temperature and power limits. From this system view, we define an MPC problem in order to optimize the system consumption during the day.
MPC problem formulation
The model predictive control approach refers to a class of control algorithms that compute a sequence of control moves based on an explicit prediction of the outputs within some future horizon. At time t, the following data are assumed to be known:
• T_ambiant(t): the current state of the system;
• D_hs,T and C_T: the uncontrollable variables and the time-varying price over the horizon [t, t + H], such that D_hs,T(k) = D_hs(t + k·T) with k ∈ {0, . . . , N − 1};
• f_T: the discretized dynamics coefficient function of the system at the sampling time T.
The optimization problem is:
min over P_hp(0), . . . , P_hp(N−1) of  J = Σ_{j=0}^{N−1} c(j)·P_hp(j)    (3)
with respect to ∀k ∈ {0, . . . , N -1}:
Q_hp(k) = η P_hp(k),  T_ambiant(k+1) = f_T(T_ambiant(k), Q_hp(k), D_hs,T(k)),  P_hp^min ≤ P_hp(k) ≤ P_hp^max,  T^min ≤ T_ambiant(k) ≤ T^max    (4)

with T_ambiant(0) = T_ambiant(t).
Solving the problem provides, at each sampling time T, the command vector P*_hp(0, . . . , N−1). Only the first element P*_hp(0) is sent to the process. The MPC method makes it possible to anticipate high-price periods, production periods, or periods of better system efficiency. Thus, thanks to the building storage capacities (wall inertia, batteries, water storage tank, ...), the BEM MPC leads to a shifting strategy [START_REF] Lefort | Hierarchical control method aplied to energy management of a residential house[END_REF].
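For illustration, the discretized problem (3)-(4) can be cast as a small linear program; the sketch below is not from the paper and assumes a first-order linear thermal model together with illustrative values for the coefficients, bounds and tariff profile.

# Sketch of the heating-system MPC (3)-(4) as an LP, assuming T(k+1) = a*T(k) + b*Q(k) + d(k)
# with Q = eta*P; all numerical values are assumptions chosen only for illustration.
import numpy as np
from scipy.optimize import linprog

N, eta = 24, 3.0                                  # horizon (h), assumed heat-pump COP
a, b = 0.95, 0.15                                 # assumed discretized thermal dynamics
price = np.where(np.arange(N) < 17, 0.10, 0.18)   # assumed low/high tariff (EUR/kWh)
d = np.full(N, 0.1)                               # assumed uncontrollable gains (K per step)
T0, Tmin, Tmax, Pmax = 19.0, 19.0, 24.0, 3.0      # initial temperature, comfort band, power limit

# decision vector x = [P_hp(0..N-1), T(1..N)]
c_vec = np.concatenate([price, np.zeros(N)])
A_eq, b_eq = np.zeros((N, 2 * N)), np.zeros(N)
for k in range(N):                                # dynamics: T(k+1) - a*T(k) - b*eta*P(k) = d(k)
    A_eq[k, k] = -b * eta
    A_eq[k, N + k] = 1.0
    if k > 0:
        A_eq[k, N + k - 1] = -a
    b_eq[k] = d[k] + (a * T0 if k == 0 else 0.0)
bounds = [(0.0, Pmax)] * N + [(Tmin, Tmax)] * N
res = linprog(c_vec, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
P_opt = res.x[:N]          # receding horizon: only P_opt[0] would be applied to the process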
BEM problem
In order to optimize the whole building consumption and to take into account the several interactions between the systems, we formulate the building optimization problem by gathering together all the building installations. Using the previous system view the building optimization problem is formalized as follows:
Problem 3 BEM MPC problem :
The optimization problem is:
min_U J = Σ_{j=0}^{N−1} C(j) · U(j)   (5)
with respect to, ∀k ∈ {0, ..., N − 1}:
E_1 u_1(k) = f_1(k), ..., E_i u_i(k) = f_i(k), ..., E_{n_s} u_{n_s}(k) = f_{n_s}(k)
A_1 u_1(k) + ... + A_i u_i(k) + ... + A_{n_s} u_{n_s}(k) = M(k)   (6)
0 ≤ h_1 u_1(k) ≤ b_1^{up}(k), ..., 0 ≤ h_i u_i(k) ≤ b_i^{up}(k), ..., 0 ≤ h_{n_s} u_{n_s}(k) ≤ b_{n_s}^{up}(k)   (7)
where E_i, f_i, h_i and b_i^{up} are the dynamics coefficients and constraints, ∀i ∈ [1, ..., n_s], A_i are the coupling (global) coefficients, M the global constraint and n_s the number of installations. This (block-angular) problem is effectively solved by the Dantzig-Wolfe method (see [START_REF] Dantzig | Decomposition principle for linear programs[END_REF] for details). This distributed resolution method is exact (it gives an optimal solution) and brings modularity to the BEM control structure (see Figure 3). Each sub-system is independent: it has its own objective and constraints, and is only connected to the coordinator. The latter has to ensure the global building objective and the common constraint. The modularity aspect is linked to system integration: if we add, delete or modify a system, only the concerned system has to be treated and linked to the coordinator. The block-angular structure that this decomposition exploits is illustrated in the sketch below.
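The snippet below only assembles the block-angular structure of (5)-(7) with random placeholder matrices and solves it centrally for reference; it is not the Dantzig-Wolfe algorithm itself, but it shows the per-block rows (E_i u_i = f_i) and the single coupling row (Σ_i A_i u_i = M) that the coordinator/agent decomposition exploits.

```python
# Assembly of the block-angular problem (5)-(7) with placeholder data: each
# sub-system i has its own rows E_i u_i = f_i, and one coupling row ties the
# blocks together.  A feasible point u0 is drawn first so that the random
# instance is guaranteed to be solvable.
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
n_sys, n_u = 3, 4                                  # sub-systems, variables per block
u0 = rng.uniform(0.2, 0.8, size=(n_sys, n_u))      # point forced to be feasible
E = [rng.random((2, n_u)) for _ in range(n_sys)]   # block dynamics E_i
f = [E[i] @ u0[i] for i in range(n_sys)]           # right-hand sides f_i
A = [rng.random((1, n_u)) for _ in range(n_sys)]   # coupling coefficients A_i
M = sum(A[i] @ u0[i] for i in range(n_sys))        # coupling right-hand side
C = rng.random(n_sys * n_u)                        # global cost vector

A_eq = np.vstack([block_diag(*E),                  # block rows (per-agent constraints)
                  np.hstack(A)])                   # coupling row (coordinator constraint)
b_eq = np.concatenate(f + [M])
res = linprog(C, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 1.0)] * (n_sys * n_u), method="highs")
print(res.status, res.fun)                         # status 0 means an optimum was found
```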
SIMULATIONS AND RESULTS
In order to evaluate the adaptability and the efficiency of the proposed BEM control, it has been implemented, in simulation, on two buildings: on the one hand, house A, a very inert residential house with very low thermal losses, composed principally of hydraulic radiators, an air/water heat pump and a solar water panel combined with a sanitary hot water storage tank; on the other hand, house B, a highly insulated house with less inertia than house A, composed of electrical radiators, a battery, an electrical solar panel and an electric sanitary hot water storage tank. The simulation scenarii have been performed with disturbances on the data prediction profiles, and the simulation models of the process are not the ones implemented in the controllers but come from the SIMBAD library, a Matlab-Simulink toolbox dedicated to building behavior developed by the CSTB. The temperature set point is set to 19 °C during the occupation period and is free during the inoccupation period. In this article, we propose to evaluate the BEMS behavior of the two buildings in response to electricity tariff profiles.
The simulation results highlight two different strategies. The internal temperature regulation does not vary much in house A (see Figure 4). This is due to its strong inertia and its small heating capacities. Moreover, without electricity storage, the house A regulation appears as a smoothing strategy, even if the hot water tanks are warmed during the LP period.
The house B strategy is different. The BEM regulation decreases the temperature down to 16 °C during the inoccupation period (Figure 5). This is due to its smaller thermal inertia. Moreover, thanks to the battery, house B changes the electrical source of its heating and so decreases its impact/consumption on the electrical network. One notes, in Figure 6, that the BEM control anticipates the HP periods by storing energy in the different systems during the LP period. This leads to shifting the building grid consumption. The tariff profile acts as a load shedding strategy.
CONCLUSION
This study proposed an adaptable BEM structure to optimize the energy consumption of residential houses. The modularity is brought by a systemic view combined with a distributed resolution approach using the Dantzig-Wolfe method. In this article, this BEM system architecture, based on MPC, is implemented on two different residential houses. The study highlights that the control performance strongly depends on the building characteristics. For a highly insulated house with slow dynamic systems, the optimal control results in a smoothing of the load, whereas, for a less insulated house with faster dynamic systems, it results in a more reactive control which shifts the peak load consumption.
Figure 1: System view example.
Figure 3: Global system view.
Figure 4: Internal air temperature of the house A regulation.
Figure 5: Internal air temperature of the house B regulation.
Table 1: Price profiles. LP is the Low Price period (0.09 €/kWh), HP the High Price period (0.11 €/kWh) and CPP the Critical-Peak Pricing period (0.21 €/kWh). Daily period boundaries (hour): 0, 6, 13, 15, 17, 19, 22.
|
00932778 | en | [
"spi",
"spi.nrj"
] | 2024/03/04 16:41:22 | 2013 | https://minesparis-psl.hal.science/hal-00932778/file/Criterion%20to%20decrease%20the%20computational%20burden%20in%20multistage%20distribution%20system.pdf | Abdelouadoud Seddik Yassine
email: [email protected]
Robin Girard
François-Pascal Neirac
Guiot Thierry
email: [email protected]
A
Keywords: Distributed power generation, power system planning, smart grids I. CONTEXT
As the interest for distributed generation and storage grows, so does the need for new tools to assist in the planning of the distribution network. Indeed, as the current passive network transforms into an Active Distribution Network (ADN) [START_REF] Pilo | Active distribution network evolution in different regulatory environments[END_REF] with the introduction of partially and totally controllable generation and storage means, planning studies based solely on power flows for extreme load conditions will not be adapted anymore. Considering the similarities between the current transmission network and the future ADN, it is a safe bet to assume that the Optimal Power Flow, a tool first introduced in 1962 by Carpentier [START_REF] Carpentier | Contribution à l'étude du dispatching économique[END_REF] and now widely used for the planning and operation of the transmission network, will prove useful for this purpose. Consequently, the adaptation of the OPF concept and resolution algorithms to the distribution network has been the subject of numerous publications in the last decade such as [START_REF] Gabash | Active-Reactive Optimal Power Flow in Distribution Networks With Embedded Generation and Battery Storage[END_REF], [START_REF] Ahmadi | Optimal power flow for autonomous regional active network management system[END_REF], [START_REF] Swarnkar | Optimal power flow of large distribution system solution for combined economic emission dispatch problem using partical swarm optimization[END_REF], [START_REF] Dolan | Using optimal power flow for management of power flows in active distribution networks within thermal constraints[END_REF]. Among the structural differences between transmission and distribution network, the radial topology and higher R/X ratio of line impedances are often identified as the main obstacles to the direct use of transmission network OPF algorithms to distribution systems as explained in [START_REF] Bruno | Unbalanced three-phase optimal power flow for smart grids[END_REF]. With this in mind, an OPF for a distribution system will still be a highly dimensional nonlinear nonconvex programming problem. Thus, in a planning study for which OPFs have to be solved for each hour of the year and multiple planning options evaluated, the computational burden can become dissuasive. And so, heuristics that allow to decrease the computational burden while controlling the loss of optimality can have practical applications, as demonstrated in [START_REF] Paudyaly | Three-phase distribution OPF in smart grids: Optimality versus computational burden[END_REF]. In the work presented here, we make use of the characteristics of the problem studied to propose a methodology aiming at evaluating the voltage constraints only when they are binding, thus drastically reducing the computational cost of the resolution.
II. METHODOLOGY
A. Problem studied
First, let us define the general form of the OPF problem to which the methodology presented here applies. It consists in finding, for each time step, P, Q, V and α that verify:
min_P f(P)   (1)
DF(V, α, P, Q) = 0   (2)
h(V, α) ≤ 0   (3)
k(P, Q) = 0   (4)
m(P, Q) ≤ 0   (5)
where P and Q are the vectors of, respectively, controllable active and reactive power algebraic injections, V is the vector of node voltage magnitudes, α is the tap setting of the substation transformer on-load tap changer (OLTC), f represents the objective function, DF the set of DistFlow network equations used in, for example, [START_REF] Dukpa | An accurate voltage solution method of radial distribution system[END_REF], h the inequality voltage constraints and k and m represent the other equality and inequality constraints implemented in the problem (e.g. limits on apparent powers, reserve provision requirements or limits on storage means energy capacity). We emphasize here the fact that the methodology proposed is applicable only if the reactive power injections have no influence on the objective function.
B. Problem simplification
In many instances, the problem complexity can be greatly reduced by ignoring ( 2) and (3). Indeed, for example, if the objective function consists in the sum of generation piecewise linear costs and (5) represents bounds on active and reactive power, the problem defined by ( 1), ( 4) and ( 5) is a simple linear programming problem. Moreover, we aim at taking advantage of the fact that voltage limit violation occurrence are infrequent phenomena, as evidenced by [START_REF] Widén | Impacts of distributed photovoltaics on network voltages: Stochastic simulations of three Swedish low-voltage distribution grids[END_REF]. This finding supports the assumption that (2) and (3) will be binding constraints only during a relatively small number of time steps. To support this claim, we present the results of a simulation undertaken on a 69-bus medium-voltage distribution system (see [START_REF] Baran | Optimal capacitor placement on radial distribution system[END_REF] for the detailed characteristics of the network). Residential-type loads are simulated for each bus and scaled so that the maximal annual active load is equal to the one used in [START_REF] Baran | Optimal capacitor placement on radial distribution system[END_REF] and the power factor remains constant. Then, 17 buses are equipped with photovoltaic and energy storage systems in increasing order of driving-point impedance magnitude, with varying penetration levels (the penetration level at a given bus is defined by the ratio between the maximum power of the storage and photovoltaic systems and the annual maximal load at this bus). The storage systems have a 5 hour discharge time and their charge and discharge efficiency are both equal to 0.95. The objective function is defined as a measure of market transaction cost. In the graph below, we observe the number of time steps where the voltage constraints are active or binding, as a function of the penetration level. Consequently, we propose to separate the initial problem into a master problem composed of ( 1), ( 4) and ( 5) that is easier to solve and a slave problem consisting in finding P,Q,V, θ and α that verify:
P = argmin_P ‖P − P_master‖²   (7)
while satisfying the constraints ( 2), ( 3), ( 4) and ( 5). The slave problem is still a highly dimensional nonlinear nonconvex programming problem. We will now introduce a procedure that will allow us to solve it only when it is necessary, i.e. when its result has an impact on the objective function of the initial problem.
C. Criticality Criterion 1) Guiding principle
For the remainder of this paper, we will adopt the following convention: a time step will be deemed critical if (3) is a binding constraint, semi-critical if it is active but not binding, and noncritical if it is inactive. On a more practical level, the noncritical time steps are those for which the active powers resulting from the master problem directly verify (3), the semi-critical are those for which the reactive power and/or tap settings but not the active powers have to be adjusted and the critical time steps are those for which the active powers have to be modified. In order to solve the slave problem only when necessary, we need a reliable and computationally effective method to separate the critical time steps from the others. To achieve this, we calculate a relative criticality criterion aimed at ranking the time steps so that the critical time steps are first, followed by the semi-critical and noncritical ones. First, let us set aside the non-critical time steps as these can be identified with a mere power flow to concentrate on separating the semi-critical from the critical ones. Essentially, this means finding a measure of the ability of the reactive powers and OLTC to correct the voltage violations incurred from the behavior of the loads, distributed generation and storage systems resulting from the master problem solution.
2) Voltage violation and OLTC influence
In order to take into account the influence of the on-load tap changer, we define 𝒯 as the set of permissible tap settings and V_OLTC^{α,t} as the substation downstream voltage at time step t corresponding to the tap position α. The first step of the criticality criterion calculation is to obtain a power flow for a chosen initial tap setting α_0 and then to expand the result to all possible tap settings for each bus k by making the following approximation:
V_k^{α,t} = V_k^{α_0,t} · V_OLTC^{α,t} / V_OLTC^{α_0,t}   (8)
We then obtain an estimation of the voltage limit violation during time step t at bus k for the tap setting α in the following way:
ΔV_k^{α,t} = V_k^{α,t} − V_max  if V_k^{α,t} > V_max,   0  if V_min ≤ V_k^{α,t} ≤ V_max,   V_k^{α,t} − V_min  if V_k^{α,t} < V_min.   (9)
This gives us an estimation of the absolute voltage violation to be compensated at each bus and for each permissible tap-setting.
3) Reactive power voltage violation compensation capability
We base our reasoning on the concept of Voltage Change Potential introduced in [START_REF] Auld | Analysis and visualization method for understanding the voltage effect of distributed energy resources on the electric power system[END_REF] and aimed at evaluating the local voltage variations caused by the connection of a given distributed energy resource. It is based on a linearization of the network equations, under the assumption that the actual voltage change is small compared to the pre-existing voltage, and is defined as follows:
ΔV = (X·Q + R·P) / V   (10)
where X is the driving-point reactance, R the driving-point resistance, Q the reactive power injected at the considered bus, P the active power and V the pre-existing voltage.
For our purpose, we define the Reactive Voltage Change Potential during time step t at bus k as follows:
ΔV_Q^{k,t} = X_k · Q^{k,t} / V^{k,t}   (11)
ΔV_Q^{k,t} is effectively a measure of the ability of a given reactive power source to modify the voltage at its bus.
4) Formal definition of the criterion
We now have at our disposal a measure of the voltage violations taking into account the role of the OLTC and an estimation of the reactive power local compensation capability. We can combine them in order to define a local compensated voltage violation in the following way:
ΔṼ^{k,α,t} = ΔV_k^{α,t} − ΔV_Q^{k,t}   (12)
As we need a single measurement by time step in order to establish our criticality criterion, we define it as the minimum of the L1-norm of the compensated voltage violation, while respecting constraints on reactive power availability and permissible tap-settings. This can be formally expressed as:
C_t = min_{Q,α} Σ_{k∈K} |ΔṼ^{k,α,t}|   (13)
0 ≤ Q^{k,t} ≤ Q_max^{k,t}   (14)
α ∈ 𝒯   (15)
where K is the set of network buses and Q_max^{k,t} is the available reactive power at bus k during time step t.
5) Calculation
As α is a discrete variable, we can define, for each α ∈ 𝒯:
C_t^α = min_Q Σ_{k∈K} |ΔṼ^{k,α,t}|   (16)
while respecting (14). We can observe that, for a fixed α, the terms in the sum are independent from one another. Thus, we also have:
C_t^α = Σ_{k∈K} min_{Q^{k,t}} |ΔṼ^{k,α,t}|   (17)
where :
min_{0 ≤ Q^{k,t} ≤ Q_max^{k,t}} |ΔṼ^{k,α,t}| = |ΔV_k^{α,t}| − min(|ΔV_k^{α,t}|, ΔV_{Q,max}^{k,t})  if ΔV_k^{α,t} ≠ 0, and 0 otherwise, with ΔV_{Q,max}^{k,t} = X_k · Q_max^{k,t} / V^{k,t}.   (18)
We can thus easily calculate C_t by choosing the minimum value of C_t^α over the permissible tap settings α ∈ 𝒯.
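A compact illustration of the criterion computation for a single time step is sketched below. It takes as given the base-case power-flow voltages for the initial tap α_0, the substation voltage for every permissible tap, the driving-point reactances and the available reactive powers; all numerical values are placeholders rather than data from the case study, and the inner minimization follows the closed form (18).

```python
# Illustrative computation of the criticality criterion (8)-(18) for one time
# step.  V0 are the base-case power-flow voltages for the initial tap alpha_0,
# V_oltc the substation downstream voltage for each permissible tap, X the
# driving-point reactances and Q_max the available reactive powers.  All
# numbers are placeholders, not data from the 69-bus case study.
import numpy as np

def criticality_criterion(V0, V_oltc_0, V_oltc, X, Q_max, V_min=0.95, V_max=1.05):
    """C_t: minimum over the taps of the summed residual voltage violations."""
    costs = []
    for v_sub in V_oltc:                       # loop over permissible tap settings
        V = V0 * v_sub / V_oltc_0              # eq. (8): rescale the bus voltages
        dV = np.where(V > V_max, V - V_max,    # eq. (9): signed limit violation
             np.where(V < V_min, V - V_min, 0.0))
        dV_q_max = X * Q_max / V               # reactive compensation capability
        residual = np.maximum(np.abs(dV) - dV_q_max, 0.0)   # eq. (18)
        costs.append(residual.sum())           # eq. (16)-(17)
    return min(costs)                          # eq. (13): best tap setting

V0 = np.array([1.02, 1.04, 1.06, 1.03, 0.99])        # toy 5-bus base case (p.u.)
C_t = criticality_criterion(V0, V_oltc_0=1.00, V_oltc=[0.98, 1.00, 1.02],
                            X=np.full(5, 0.05), Q_max=np.full(5, 0.3))
print(C_t)    # 0.0 here: the OLTC and reactive power can clear the violations,
              # so the time step is at most semi-critical
```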
III. APPLICATION
A. Case Study 1) Criticality criterion calculation
We apply this methodology to the case study introduced in paragraph B. The master problem is solved using IBM ILOG CPLEX and the slave problem is solved using IPOPT [START_REF] Wächter | On the Implementation of a Primal-Dual Interior Point Filter Line Search Algorithm for Large-Scale Nonlinear Programming[END_REF], both through their Matlab interface. Below are two graphs presenting the ranking obtained with the criticality criterion for the various types of time steps. In the first simulation considered, 17 buses have been equipped with photovoltaic generators and storage systems, and the level of penetration has been set at a 100%, while in the second, 22 buses have been equipped with a level of penetration of 110%. We can observe that the criticality criterion fulfills its goal, as we obtained a strict separation between the critical time steps and the semi-critical ones.
2
) Computational time spared
As the stated goal of this methodology is to decrease the computational burden associated with such calculations, we now present an evaluation of the computational time spared for various levels of penetration and in the case where 17 buses have been equipped. These results have been on obtained on a desktop computer with a 2.4 GHz processor and 6 GB of RAM. We observe that the computational gain obtained increase substantially starting from a level of penetration of 60%, which also corresponds to the occurrence of a significant and growing number of semi-critical time steps (see fig. 1). This is to be expected, as the slave problem takes typically 0.02 seconds to be solved for non-critical time steps, while it takes between 2.5 and 3 seconds for a semi-critical time step.
3) Voltage control impact
A possible application of the methodology introduced is to evaluate the deoptimization introduced to the master problem scheduling by the voltage control implemented. First, let us observe the solution from the slave problem for a critical time step. We can see that the active power injections from the storage systems are effectively reduced to cope with overvoltage occurrences. This has two effects: the first one is the decrease in voltage due to lower active power injections and the second one is the freeing up of reactive power capabilities that allows for reactive power consumption, further reducing the voltage.
To evaluate the impact of the voltage control, we first compute the gains obtained in terms of scheduling costs through the deployment of photovoltaic generators associated with storage systems, for various scenarios. These are expressed in percentage of the cost of scheduling when only the loads are present. Then we compute the cost associated with voltage control in terms of deoptimization of the scheduling, and in this case, the results are also expressed in percentage of the cost of scheduling when only the loads are present.
B. Limitation of the criticality criterion
We have presented results derived from the use of the criticality criterion in cases where the separation obtained was satisfactory. However, this is not always the case, especially when the level of penetration is increased substantially. For example, the ranking obtained when 22 nodes are equipped and the level of penetration is 140 percent is shown in Figure 11. This ranking is not satisfactory, as a significant number of critical time steps have lower criticality criterion values than some semi-critical time steps (hereafter named missed critical time steps). This is to be expected, in the sense that the criticality criterion, in essence, aims at describing the behaviors of heterogeneous buses with only one value. Thus, when the behaviors of the various buses start to diverge significantly for a given time step (for example, a high load is present on a bus without storage while a high level of power is injected at another bus), we cannot expect the criticality criterion to be efficient.
In order to investigate the limits of the efficiency of the criticality criterion, we define the maximal difference of active voltage change potential as a proxy measurement of the divergence in behaviors between the various buses. It is calculated in the following way:
ΔV_P^{k,t} = R_k · P^{k,t} / V^{k,t}   (19)
ΔV_P^{max_diff} = max_{t∈T} ( max_{k∈K} ΔV_P^{k,t} − min_{k∈K} ΔV_P^{k,t} )   (20)
We observe that the criticality criterion remains efficient until a level of penetration of 130 percent is attained (Figure 12). At this level, the maximum voltage change potential difference is close to the span of the range of permissible voltages (chosen here to be between 0.95 p.u. and 1.05 p.u.). This observation gives us a hint that we may be able to define a domain of applicability for the criticality criterion through the mere calculation of the maximal difference of active voltage change potential.
To investigate this claim, we have completed various sets of simulations while modifying several relevant parameters: the R/X ratio of the lines, the storage discharge time and the number of nodes equipped. For each set of simulations, the level of penetration varies between 10 and 150 percent. We observe that the number of critical time steps missed remains relatively low (under 60) compared to the total number of time steps (8760) when the maximal difference of active voltage change potential is below the span of the permissible voltage range (see Figure 13).
As the original goal is to find the minimal total cost associated with the operation of such a system, not taking into account certain critical time steps leads to underestimating this cost. Thus, we compute the difference between the total cost taking into account all the critical time steps and the total cost obtained through the use of the criticality criterion, and define this as the loss of precision incurred by the implementation of the criticality criterion. It is expressed in percentage of the total cost and is presented, in Figure 14, as a function of the maximal difference of active voltage change potential, with varying levels of penetration and numbers of nodes equipped. We can observe that the loss of precision that can be attributed to the use of the criticality criterion remains fairly low, even when the maximum difference of active voltage change potential surpasses the span of the permissible voltage range. This can also be used to determine a domain of applicability for the criticality criterion, depending on the level of precision required by the considered application.
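The indicator (19)-(20) is cheap to evaluate ahead of time; the sketch below computes it from placeholder resistances, injections and voltages and compares it with the span of the permissible voltage range, as suggested above.

```python
# Minimal sketch of the applicability indicator (19)-(20): the maximal spread of
# the active-power voltage change potential across buses, compared with the span
# of the permissible voltage range.  R, P and V are illustrative placeholders
# (driving-point resistances, bus active injections and voltages per time step).
import numpy as np

rng = np.random.default_rng(1)
n_bus, n_steps = 69, 8760
R = rng.uniform(0.01, 0.10, n_bus)                    # driving-point resistances (p.u.)
P = rng.uniform(-1.0, 1.0, (n_steps, n_bus)) * 0.05   # active injections (p.u.)
V = np.full((n_steps, n_bus), 1.0)                    # pre-existing voltages (p.u.)

dV_P = R * P / V                                      # eq. (19), per bus and time step
max_diff = (dV_P.max(axis=1) - dV_P.min(axis=1)).max()  # eq. (20)
span = 1.05 - 0.95
print(max_diff, "criterion expected reliable" if max_diff < span
      else "outside the observed domain of validity")
```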
IV. CONCLUSION
We have presented a methodology aimed at significantly reducing the computational burden associated with the resolution of optimal power flows for distribution systems, in order to render it more practical for planning purposes. We have verified its validity and the computational gains obtained for a practical case study. Afterwards, we have defined an indirect measurement of the domain of validity with a negligible computational cost and have tested it against several extensions of the aforementioned case study. Finally, we have examined the relationship between the loss of precision incurred from the use of the criticality criterion and the position in the domain of validity. Further research has to be done to try and characterize this relationship in order to be able to determine a priori the loss of precision -or an upper bound for it. This will be done by applying this methodology to representative practical distribution networks, with realistic loading conditions and testing several options for the objective function. Another area of improvement lies in the resolution of the slave problem, which could be sped up by drawing on its specificities, especially the fact that it is close to being a quadratically-constrained quadractic programming problem. Finally, this methodology will be integrated in a comprehensive distribution system planning framework adapted to the development of distributed generation and storage
Figure 1: Total active power load for a winter and a summer week
Figure 2: Evolution of the influence of voltage constraints as a function of penetration level
Figure 3: Criticality criterion ranking, with 17 nodes equipped and a level of penetration of 100%
Figure 5: Computational time spared as a function of the level of penetration
Figure 6: Bus voltages before and after solving the slave problem
Figure 7: Active powers resulting from the slave problem solution
Figure 8: Reactive powers resulting from the slave problem solution
Figure 9: Gains obtained by the deployment of storage and PV systems, for various levels of penetration and numbers of nodes equipped
Figure 11: Criticality criterion ranking, with 22 nodes equipped and a level of penetration of 140%
Figure 12: Evolution of criticality criterion efficiency and maximum difference of active voltage change potential, as a function of the level of penetration
Figure 13: Number of critical time steps missed, as a function of maximal difference of voltage change potential, for several simulation setups
Figure 14: Loss of precision as a function of the maximal active voltage change potential
|
00410809 | en | [
"info.info-ts",
"spi.signal"
] | 2024/03/04 16:41:22 | 2009 | https://hal.science/hal-00410809/file/fusion.pdf | Evangéline Pollard
email: [email protected]
Benjamin Pannetier
email: [email protected]
Michèle Rombaut
email: [email protected]
Evangeline Pollard
Convoy detection processing by using the hybrid algorithm (GMCPHD/VS-IMMC-MHT) and dynamic bayesian networks
Keywords: Multitarget Tracking, GMTI, Convoy detection, GMCPHD, VS-IMMC-MHT, dynamic bayesian network
Convoy detection processing by using the hybrid algorithm (GMCPHD/VS-IMMC-MHT) and dynamic bayesian networks
Introduction
In the battlefield surveillance domain, ground target tracking is a first challenging task to assess the situation [START_REF] Hall | An introduction to multisensor data fusion[END_REF]. Data used for tracking comes from Ground Moving Target Indicator (GMTI) sensors which detect moving targets only by measuring their Doppler frequency. The goal is to have a real ground picture: the number of targets, their dynamics, their relationship... in order to discover military events of interests. In this article, we focus on convoy detection. Some studies on convoy detection based on GMTI signatures already exist [START_REF] Corbeil | Data mining of GMTI radar databases[END_REF][START_REF] Klemm | Tracking of convoys by airborne STAP radar[END_REF][START_REF] Koch | Information fusion apects related to GMTI convoy tracking[END_REF], but our purpose is convoy detection by using target tracks. In this context, two steps are proposed : (1) process a hardy multitarget tracking algorithm in order to detect vehicle aggregates with precision in term of cardinality and state estimation, (2) check if the detected aggregates are convoys or not, by introducing other data types (Synthetic Aperture Radar (SAR), video,. . . ) and by using a data fusion method. This purpose is summarized in Figure 1.
Very efficient tracking algorithms exist today and they have to be adaptated to the very complex ground environment. First, the traffic density is very high and generates a large number of measurements. This characteristic eliminates, in our application, Monte-Carlo techniques [START_REF] Kreucher | Multitarget tracking using the joint multitarget probability density[END_REF][START_REF] Angelova | Extended Object Tracking Using Monte Carlo Methods[END_REF][START_REF] Gordon | Bayesian target tracking after group pattern distortion[END_REF]. Moreover measurements are noisy and can contain many false alarms. Also vehicles on the ground are usually quite manoeuvrable over short periods of time according to the sensor scanning time T . Finally, vehicles are detected by the sensor with probability P D and according to the sensor resolution. In other words, when vehicles are very close together, one measurement can be missing generating the spawned targets. This phenomena added to the problem of data association make the classical algorithms, like SDassignment [START_REF] Deb | A generalized S-D assignment algorithm for multisensor-multitarget state estimation[END_REF] or MHT [START_REF] Bar-Shalom | Multitarget-Multisensor Tracking: Principle and Techniques[END_REF] less efficient to track convoys. Mahler by using his work on FInite Set STatistics [START_REF] Mahler | Detecting, tracking and classifying group targets: a unified approach[END_REF] (FISST) and Random Sets. This filter leads to a new class of algorithms [START_REF] Vo | Performance of PHD Based Multi-Target Filters[END_REF] based on the study of joint density probability of the Random Finite Sets (RFS) describing target dynamics and measurements. The first order moment of this RFS, called the intensity function, is the function whose the integral in any region on state space is the expected number of targets in that region. Points with highest density are then expected targets. To improve the number of targets estimation, Mahler proposes a generalization of the PHD called the CPHD [START_REF] Mahler | Multitarget Bayes filtering via firstorder multitarget moments[END_REF], which jointly propagates the intensity function and the entire probability distribution of the number of targets. Under Gaussian assumptions on target dynamics and birth process, Vo proposes a CPHD recursion called the Gaussian Mixture Cardinalized PHD [START_REF] Vo | Analytic Implementations of the Cardinalized Probability Hypothesis Density Filter[END_REF][START_REF] Ulmke | Gaussian Mixture Cardinalized PHD Filter for Ground Moving Target Tracking[END_REF] (GM-CPHD). This approach gives very encouraging results, in particular for the estimation of the number of targets and seems adaptated to convoy tracking.
Nevertheless, as we will show in Table 7, with manoeuvring targets, the GMCPHD has problems with velocity estimation. From this point of view, the GM-CPHD and the IMM-MHT can be seen as complementary algorithms: the first for the estimation of the number of targets and for an approximate position estimation, and the latter can be used to specify state estimation. The proposed hybridization is described in Figure 1. In this approach, we use a special version of the MHT: the VS-IMMC-MHT [START_REF] Pannetier | Multiple Ground Target Tracking with a GMTI Sensor[END_REF] (Variable Structure -Interacting Multiple Model with Constraints -Multiple Hypothesis Tracking) which uses road segment position from Geographical Information System (GIS) to improve the state estimation. Other authors proposed to combine a PHD filter with other filters [START_REF] Lin | Data association combined with the probability hypothesis density filter for multitarget tracking[END_REF]. By using outputs of our algorithms, we are able to detect aggregates with precision. The second step is to define if they are convoys or not.
Before discuss our approach, we define a convoy as a group of vehicles, evolving on the road, having the same dynamics and generally composed of more than two military vehicles. The distance between two vehicles depends on the environment, but most of the time it is over 100m. Giving these restrictions, we want to produce a general convoy model, able to discriminate convoys from a group of vehicles. We use the Dynamic Bayesian Network (DBN) formalism which seems adaptated to this problematic [START_REF] Johansson | Implementation and integration of a Bayesian Network for prediction of tactical intention into a ground target simulator[END_REF].
The paper is organized as follows: Section 2 is a description of the existing GMCPHD filter, Section 3 details how we use this algorithm in a hybrid version, Section 4 explains how DBNs are used for convoy detection. Finally Section 5 describes our simulation and compares results before we conclude in Section 6.
2 Background on the GMCPHD filter
The PHD filter
A Random Finite Set (RFS) is a finite-set valued random variable which can be generally characterized by a discrete probability distribution and a family of joint probability densities representing the existence probabilities of the target set. Considering the RFS of survival targets S k|k-1 between iterations k -1 and k, the RFS of spawned targets B k|k-1 and the RFS of spontaneous birth targets σ k , the global RFS characterizing the multitarget set can be written as:
X_k = ( ⋃_{ζ∈X_{k−1}} S_{k|k−1}(ζ) ) ∪ ( ⋃_{ζ∈X_{k−1}} B_{k|k−1}(ζ) ) ∪ σ_k   (1)
In the same manner, the multitarget set observation Z k can be seen as a global RFS composed by the RFS of measurements originally from the targets X k and by the RFS of false alarms K k :
Z_k = ( ⋃_{x∈X_k} Θ_k(x) ) ∪ K_k   (2)
The PHD traditionally evolves in two steps: prediction and estimation that propagate the multitarget posterior density of the target RFS also called the intensity function v. The prediction state is based on the a posteriori intensity function v k-1 at the previous time k -1, the probability P S for a target to survive between times k -1 and k, the transition function f k|k-1 (.|ζ) given the previous state ζ and the intensity of target birth γ k .
v_{k|k−1}(x) = ∫ P_s(ζ) · f_{k|k−1}(x|ζ) · v_{k−1}(ζ) dζ + γ_k(x)   (3)
Knowing the measurement random set Z_k, it is possible to update the intensity function as follows:
v_k(x) = (1 − P_D) v_{k|k−1}(x) + Σ_{z∈Z_k} [ P_D · g(z|x) · v_{k|k−1}(x) ] / [ κ_k(z) + ∫ P_D · g(z|ζ) · v_{k|k−1}(ζ) dζ ]   (4)
where g(z|x) is the likelihood of a measurement z knowing the state of a target x, κ k is the clutter intensity which is modeled by a Poisson process.
The GMCPHD filter
The GMCPHD, proposed by Vo [START_REF] Vo | Analytic Implementations of the Cardinalized Probability Hypothesis Density Filter[END_REF], combines a Gaussian mixture model for the intensity function with the Cardinalized generalization of the PHD filter. That means that the posterior target intensity can be written as a Gaussian mixture:
v_k(x) = Σ_{i=1}^{J_k} w_{k,i} · N(x; m_{k,i}, P_{k,i})   (5)
where w k,i , m k,i and P k,i are the weight, mean and covariance of the current Gaussians and J k is their number. Moreover, added to the operations ( 3) and ( 4), the probability to have n targets is predicted and estimated in the same way, as, ∀n ∈ N ⋆ ,
p_{k|k−1}(n) = Σ_{j=0}^{n} p_Γ(n − j) × Σ_{l=j}^{∞} C_j^l · ⟨P_s, v_{k−1}⟩^j · ⟨1 − P_s, v_{k−1}⟩^{l−j} / ⟨1, v_{k−1}⟩^l · p_{k−1}(l)   (6)
with p_Γ(n − j) the birth probability of (n − j) targets and C_j^l the binomial coefficient. Following the Bayes theorem, the estimated cardinality distribution p_{k|k} can be written as a likelihood ratio:
p_{k|k}(n) = [ L(Z_k|n) / L(Z_k) ] · p_{k|k−1}(n)   (7)
where L(Z k |n) is the likelihood of the measurements set Z k knowing that there are n targets and L(Z k ) is a normalizing constant.
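As a small illustration of how a Gaussian-mixture intensity such as (5) is used in practice, the sketch below evaluates v_k, takes the sum of the weights as the expected number of targets and keeps the strongest components as state estimates; all weights, means and covariances are made-up values, not outputs of the filter described here.

```python
# Evaluating a Gaussian-mixture intensity (5): the integral of v_k over the
# state space (the sum of the weights) gives the expected number of targets,
# and the highest-weighted components are taken as state estimates.
import numpy as np
from scipy.stats import multivariate_normal

weights = np.array([0.95, 0.90, 0.40, 0.05])                 # w_{k,i}
means = np.array([[0.0, 0.0], [10.0, 2.0], [10.5, 2.1], [50.0, -3.0]])
covs = [np.eye(2) * 4.0 for _ in weights]                    # P_{k,i}

def intensity(x):
    """Evaluate v_k(x) = sum_i w_i N(x; m_i, P_i)."""
    return sum(w * multivariate_normal.pdf(x, m, P)
               for w, m, P in zip(weights, means, covs))

n_hat = int(round(weights.sum()))        # expected number of targets (here 2)
top = np.argsort(weights)[::-1][:n_hat]  # indices of the n_hat strongest peaks
print(n_hat, means[top])
print(intensity([10.0, 2.0]))            # the intensity is high near expected targets
```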
3 The VS-IMMC-MHT / GM-CPHD hybridization
The labeled GMCPHD
In the classical version of the GMCPHD, the problem of track labeling is not considered. Yet, this step is quite important for complex multitarget scenarios. Clark and Panta [START_REF] Clark | The GM-PHD Filter Multiple Target Tracker[END_REF][START_REF] Panta | Data Association and Track Management for the Gaussian Mixture Probability Hypothesis Density[END_REF] propose methods, but they are not adapted to a Gaussian mixture and to a large number of targets. As an alternative, we propose to use the track score for track initialization and, in addition to the statistical distance between a peak and a predicted track, to take into account the global weight for the peak-to-track association.
Let G be the Gaussian set given by the GMCPHD written:
G_k = {w_{k,i}, m_{k,i}, P_{k,i}}_{i∈{1,...,N_k^G}} = { G_{k,1}, ..., G_{k,N_k^G} }   (8)
where N_k^G is the number of Gaussians (N_k^G > N̂_k) at time k.
A track can be defined as a sequence of estimated states describing the dynamics of one target. The goal of tracking is to offer a list of tracks corresponding to all of the targets. That is why this labeling step is necessary in order to provide a track set chosen amongst the Gaussian set G k . A track T k,i is defined at time k by a state xk,i , a covariance P k,i and a score s k,i :
T_{k,i} = { x̂_{k,i}, P_{k,i}, s_{k,i} },  i ∈ {1, ..., N̂_k}   (9)
The track set is finally written:
T_k = { T_{k,1}, ..., T_{k,N̂_k} }   (10)
with Nk the estimation of the number of targets given by the GMCPHD. We define a set of association matrices A k of size Nk × N G k to associate the Gaussian set to the tracks. ∀(m, n) ≤ ( Nk , N G k ), an association matrix A k,i is written as:
A_{k,i}(m, n) = 1 if G_{k,n} can be associated to T_{k|k−1,m}, and 0 otherwise   (11)
with T_{k|k−1,m} the predicted track m and knowing that a track is associated to at most one Gaussian. A Gaussian peak n is said to be associable to a track m if it satisfies a gating test around the predicted position of the track.
We define a weight matrix W_k of size N̂_k × N_k^G as follows, ∀(m, n) ≤ (N̂_k, N_k^G):
W_k(m, n) = w_{k,n} if G_{k,n} can be associated to T_{k|k−1,m}, and 0 otherwise   (12)
If N̂_k > N̂_{k−1}, one or more new tracks must be initialized and each Gaussian is a potential new track. In matrix W_k, ∀m ∈ {1, ..., N_k^G}, ∀l ∈ {N̂_{k−1} + 1, ..., N̂_k},
W_k(m, l) = w_{k,l}   (13)
In the same way, if Nk < Nk-1 , some tracks must be deleted. Weakly weighted tracks cannot be deleted because of the detection probability, which is why tracks with the lowest score are deleted.
Finally, we compute the set of global weight of an association:
W_k^g = Σ_{m=1}^{N̂_k} Σ_{n=1}^{N_k^G} A_k(m, n) · W_k(m, n)   (14)
And the association matrices which maximize the weight are written as:
A_k^⋆ = argmax_{A_k} W_k^g   (15)
Similarly, the cost matrix C_k of size N̂_k × N_k^G is written as, ∀(m, n) ≤ (N̂_k, N_k^G):
C_k(m, n) = c(m, n) if G_{k,n} can be associated to T_{k|k−1,m}, and 0 otherwise   (16)
with c(m, n) the cost of the association of the predicted track m with the Gaussian n, written as the negative Napierian logarithm of the likelihood ratio,
∀(m, n) ≤ (N̂_k, N_k^G),  c(m, n) = − ln [ P_D · Λ(G_{k,n}|T_{k|k−1,m}) / β_FA ]   (17)
with β F A the spatial false alarm density and Λ(G k,n ) the likelihood of the Gaussian n knowing the predicted position of the track m, calculated as a Gaussian density.
Finally, the global association cost is computed as:
C_k^g = Σ_{m=1}^{N̂_k} Σ_{n=1}^{N_k^G} A_k^⋆(m, n) · C_k(m, n)   (18)
And the best association A ⋆⋆ is computed like the minimal cost matrix:
A_k^{⋆⋆} = argmin_{A_k^⋆} C_k^g   (19)
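The sketch below illustrates the labelling step (11)-(19) in a simplified form: gating, construction of the weight and cost matrices, then a single assignment that maximises the total weight (ties between equal-weight assignments would be broken with the cost (17), a refinement omitted here). The gate size, noise levels and the Gaussian set are illustrative, not taken from the scenario.

```python
# Simplified peak-to-track association: gating, weight matrix (12), cost (17),
# and an assignment maximising the total weight (14)-(15).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import multivariate_normal

def associate(pred_tracks, peaks, weights, covs, gate=9.21, P_D=0.9, beta_FA=1e-8):
    n_trk, n_g = len(pred_tracks), len(peaks)
    W = np.zeros((n_trk, n_g))                       # weight matrix (12)
    C = np.full((n_trk, n_g), 1e6)                   # cost matrix (16)-(17)
    for m, x_pred in enumerate(pred_tracks):
        for n, (g, w, S) in enumerate(zip(peaks, weights, covs)):
            d2 = (g - x_pred) @ np.linalg.solve(S, g - x_pred)
            if d2 <= gate:                           # gating test
                W[m, n] = w
                lik = multivariate_normal.pdf(g, x_pred, S)
                C[m, n] = -np.log(P_D * lik / beta_FA)   # kept to illustrate (17)
    rows, cols = linear_sum_assignment(-W)           # maximise the total weight
    return [(m, n) for m, n in zip(rows, cols) if W[m, n] > 0]

tracks = [np.array([0.0, 0.0]), np.array([100.0, 10.0])]
peaks = [np.array([1.0, 0.5]), np.array([99.0, 9.0]), np.array([500.0, 0.0])]
pairs = associate(tracks, peaks, weights=[0.9, 0.8, 0.3], covs=[np.eye(2) * 4] * 3)
print(pairs)   # [(0, 0), (1, 1)]: the far-away peak is left for track initialisation
```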
The hybridization
The GMCPHD produces a reliable estimation of the number of targets, whereas the VS-IMMC-MHT gives a good estimation of the target state by introducing road coordinates when targets are not close together, because of the difficulty for the MHT algorithm to evaluate the number of targets. We therefore propose to use these two algorithms as complementary filters: the first estimates the number of targets and the approximate target positions, and the second increases the accuracy of the target state estimation. The two algorithms run simultaneously. Then, a gating process is applied around the target positions given by the GMCPHD to select MHT tracks. Finally, the MHT tracks which have the highest score are selected. If a PHD track is not associated to any MHT track, the GMCPHD track is kept.
This approach combines the advantages of the different algorithms without increasing the processing time:
• Robust to target maneuvers by using IMM
• Good precision for state estimation by using road coordinates
• Good estimation of the number of targets
• No performance decrease when targets are close together
The performances of the different algorithms are compared in Section 5 on a complex scenario. But before, let us define the proposed convoy detection method.
Description of a convoy 4.1 Some definitions
A convoy is defined as a vehicle set evolving approximatively with the same dynamics during a long time. These vehicles are moving on the road under a limited velocity (<20m/s). They must stay at sight with almost constant distances between them (mostly 100m). Criteria describing a convoy are manifold and of different natures, moreover variables are discrete. That is why, bayesian networks represent an interesting formalism in our application as in similar thematics [START_REF] Okello | Threat assessment using Bayesian networks[END_REF][START_REF] Denis | Spatio-temporal pattern detection using dynamic Bayesian networks[END_REF][START_REF] Johansson | Implementation and integration of a Bayesian Network for prediction of tactical intention into a ground target simulator[END_REF][START_REF] Singh | Modeling threats[END_REF].
A Bayesian Network (BN) is a graphical model for representing dependency relation between a set of random variables. Graphically, each variable is represented by a node and an arc, from a node X i to a node X j , means that X i "causes" X j , ∀(i, j) ∈ {1, . . . , N } 2 . Finally, the joint probability is computed as:
P(X_1, ..., X_N) = Π_{i=1}^{N} P(X_i | Pa(X_i))   (20)
where P a(X i ) are parent nodes of X i . The Dynamic Bayesian Networks (DBN) are an extension of BN, which take into account the time evolution of random variables. The convoy detection approach is bounded to the time evolution as shown in Figure 2. For example, variable X 5 is time depending, because the type information can come from heterogeneous sources (SAR, video, . . . ) with different scanning times, and variable X 9 is confirmed with time.
Nodes of the DBN (cf. Figure 2): X_1, X_2, X_3, X_4, X_5^k, X_6, X_7, X_8, X_9^k; X_1: velocity < 80 km/h.
Conditional Probability Distribution evaluation
If independency relation between variables can be very intuitively established, one difficulty with DBN is to evaluate the Conditional Probability Distribution (CPD) of each node given its parents. If data sets are available, these prior probabilities can be learned, but in our case, they are evaluated by experts, according to a certain weight to each parameters. For example, if a convoy is detected at time k -1, the probability to detect one at time k is high and the prior probability given to this parameter must be "relatively" high. As described in [START_REF] Benavoli | An approach to threat assessment based on evidential networks[END_REF], we propose heuristic rules to represent relationships between variables :
5 × X_9^{k−1} + 1.5 × X_3 + X_4 + X_5 + 1.5 × X_8 = X_9^k   (21)
As said, this rule means that the probability to have a convoy at time k depends for one half on the fact that a convoy was present at time k − 1, and that we give more weight to the distance and velocity criteria than to the "on road" or "vehicle type" criteria.
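One possible way to turn the weighted rule (21) into a conditional probability table is to normalise the weighted vote of the parents by the total weight. The reading below is an assumption made for illustration (binary parents, simple normalisation), not the exact tables used by the authors.

```python
# A possible reading of the heuristic rule (21) as a conditional probability
# table: the weighted vote of the parents, normalised by the total weight,
# gives P(X9^k = 1 | parents).  The weight values follow (21); the rest is an
# illustrative assumption.
from itertools import product

weights = {"X9_prev": 5.0, "X3": 1.5, "X4": 1.0, "X5": 1.0, "X8": 1.5}
total = sum(weights.values())   # 10: the previous state carries half of the vote

def p_convoy(**parents):
    """P(X9^k = 1 | parent values in {0, 1} or probabilities in [0, 1])."""
    return sum(weights[name] * value for name, value in parents.items()) / total

# Full CPT over binary parent configurations, as a Bayes-net node would store it.
cpt = {cfg: p_convoy(**dict(zip(weights, cfg))) for cfg in product([0, 1], repeat=5)}
print(p_convoy(X9_prev=1, X3=1, X4=0, X5=1, X8=1))   # 0.9
```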
Probability transformation
Another difficulty is the transformation of numerical data (number of targets, target position and velocity, road position) into a probability. This step is done by using probability distributions or fuzzy transformations such as a linear transformation (cf. Fig. 3(a)) or a Gaussian transformation (cf. Fig. 3(b)); a sketch of such transformations is given after the list below.
• p(X_1) is computed according to a Rayleigh distribution.
• p(X 2 ) is following a fuzzy linear transform using the difference between velocity mean at time k and at previous times.
• p(X 4 ) is computed according to a χ 2 distribution.
• p(X_6) is computed using a fuzzy Gaussian transformation by studying the distribution of distances between vehicles of the aggregate.
• p(X 7 ) is computed using a fuzzy linear transformation by examining the variation of the convoy length over time
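As announced above, here is a sketch of such feature-to-probability transformations. The distribution parameters (Rayleigh scale, chi-square degrees of freedom, fuzzy break points) are illustrative assumptions, not the values used by the authors.

```python
# Sketch of feature-to-probability transformations of the kind listed above.
import numpy as np
from scipy.stats import rayleigh, chi2

def p_velocity_admissible(speed_mps, limit_mps=22.2, sigma=2.0):
    # X1 "velocity < 80 km/h": Rayleigh-shaped roll-off above the speed limit
    excess = max(speed_mps - limit_mps, 0.0)
    return float(rayleigh.sf(excess, scale=sigma))

def p_same_dynamics(delta_speed, soft=1.0, hard=4.0):
    # X2: fuzzy linear transform of the change of the aggregate mean velocity
    return float(np.clip((hard - abs(delta_speed)) / (hard - soft), 0.0, 1.0))

def p_on_road(mahalanobis_d2, dof=2):
    # X4: chi-square tail of the track-to-road-segment statistical distance
    return float(chi2.sf(mahalanobis_d2, dof))

def p_regular_spacing(distances, tol=50.0):
    # X6: fuzzy Gaussian transform of the spread of inter-vehicle distances
    return float(np.exp(-0.5 * (np.std(distances) / tol) ** 2))

print(p_velocity_admissible(10.0), p_on_road(1.2), p_regular_spacing([95, 110, 102, 98]))
```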
Inference
The next step consists to propagate the information through the network. It is called the inference. Many algorithms exist like JLO [START_REF] Jensen | Bayesian updating in recursive graphical models by local computation[END_REF] from the names of its authors or Expectation-Maximization (EM) algorithm. We choose arbitrarily the JLO algorithm adaptated to discrete nodes and available in the Murphy's Bayes net toolbox [START_REF] Murphy | Bayes net toolbox[END_REF].
Targets number estimation
Computing the probability p(X_9) for an aggregate to be a convoy is a first step (cf. Figure 9), but it is possible to take into account the average number of targets belonging to the convoy. First, we know the number of targets in the aggregate that move in the same direction. If, for instance, at time k we detect N(k) = 5 targets while there were 4 until then, we have to propagate the information and compute simultaneously the probability to have a convoy with 5 vehicles and a convoy with the 4 best located target tracks. Mathematically, it means that we compute p(X_9, N_C), with N_C the set of different values taken by N_k, where N_k = {N(1), ..., N(k)} is the sequence of mean numbers of targets in the aggregate moving in the same direction.
However, as shown in Figure 11, it is not easy to discriminate certain cases, here the cases N C = 5 and N C = 6 (the reality is N C = 6). If, at the beginning of the simulation, we detect N C = 5, we must continue to compute the probability to have a 5 target convoy, because we are possibly in the case of an overtaking, but it is not realistic, to support this assumption against the 6 target convoy if the sequence of measurement never gives again N (k) = 5. That is why we introduce the local estimated cardinality of the Gaussian mixture on the aggregate surface, computed as
N̂_k^C = Σ_{i=1}^{N_max} i · p_{k|k}^C(i), knowing the sequence of average numbers of targets.
Finally, the probability becomes:
p(X_9^k, N_C, N̂_k^C | N_k) = p(X_9, N_C) · p(N̂_k^C | N_k)   (22)
By considering a Markovian assumption and Bayes theorem, the probability is computed as:
p(N̂_k^C | N_k) = p(N(k) | N̂_k^C, N(k−1)) · p(N̂_k^C | N(k−1)) · (1/c)   (23)
with c a normalization constant; p(N̂_k^C | N_{k−1}) = N(N̂_k^C ; N_{k−1}, σ_N²) is computed as the normal density with mean N_{k−1} and variance σ_N², and p(N(k) | N̂_k^C, N_{k−1}) is computed by using a linear transformation.
Simulation and results
In the following, we present some simulation results that illustrate the performances of the proposed hybridization. These are compared to the performances of a classical IMM-MHT, a labeled GMCPHD, a VS-IMMC-MHT and a hybridization GMCPHD/IMM-MHT. Then we present some results on the convoy detection.
Scenario
The GMTI sensor has a linear trajectory, its velocity is 30 m/s and its altitude is 4000 m. The typical measurement error is 20 m in range and 0.008 rad in azimuth. The sensor scan time is T = 10 s and the scenario duration is limited to 500 s. The false alarm density is β_FA = 8.92·10⁻⁹ and the detection probability is P_D = 0.9. Target trajectories are illustrated in Figure 4, while cumulated MTI reports are shown in Figure 5. In the scenario, one 6-target convoy (Targets 1-6) is moving on the main road with a constant velocity of 10 m/s from South to North. An independent target (Target 7) is moving on the same road in the same direction but with a constant velocity of 15 m/s and overtakes the convoy between t = 150 s and t = 350 s approximately.
Results
The performances of tracking algorithms have been compared for 100 independent Monte Carlo runs. Fig- Concerning the convoy probability, p(X 9 ) is evolving progressively from 0.5 to 0.6 (cf. Figure 9) with some picks which indicate a change of cardinality in the aggregate. By introducing N C , we begin to estimate the number of targets in the aggregate (cf. Figure 11). We discriminate the case N C = 7, but we cannot decide between N C = 5 or 6. Finally, by introducing N C k knowing N k (cf. Figure 10), the case N C = 6 appears as the
Conclusion
The new approach for convoy detection has shown its efficiency on a complex multitarget scenario. Several theoretical contributions have been proposed. The first one concerns the labeled version of the GMCPHD, which allows the tracks to be differentiated. The second contribution concerns the hybridization of the GMCPHD algorithm with the VS-IMMC-MHT algorithm in order to improve the performances, especially for groups of closely spaced objects. Finally, the third contribution concerns the convoy model using DBN, which proposes an original answer to the convoy detection problem. This has been tested on several scenarios not presented in the paper. The next step is now the decision-making problem, which remains open.
Figure 1: Convoy detection process
Figure 2: Dynamic bayesian network for convoy detection. The gray nodes represent states depending on their previous state.
Figure 3: Examples of transformation
Figure 4: Scenario
Figure 5: Cumulated MTI reports
The simulation parameters are presented in Tables 1 to 5.
Figure 6 shows the average RMSE (Root Mean Square Error) in position for each target, Figure 7 the average RMSE in velocity and Figure 8 the track length ratio of each target. The IMM-MHT offers acceptable performances in state estimation, while the VS-IMMC-MHT greatly improves the position estimation. The GMCPHD produces lower performances in terms of state estimation, but its track length ratio is close to 1. The hybrid versions (Hybrid 1 is the hybridization of the IMM-MHT and the GMCPHD, Hybrid 2 the hybridization of the VS-IMMC-MHT and the GMCPHD) are a good compromise between the two kinds of algorithms: the track length ratios have values similar to those of the GMCPHD, whereas the state estimation is similar to the IMM-MHT for Hybrid 1 and to the VS-IMMC-MHT for Hybrid 2.
Figure 6: Average RMSE in position of each target
Figure 7: Average RMSE in velocity of each target
Figure 8: Track length ratio
Figure 9: p(X_9)
Table 1: The IMM parameters — CV model noise 1: 0.05 m.s⁻²; CV model noise 2: 0.8 m.s⁻²; Model noise 3 (STOP): 0 m.s⁻².
Table 2: The MHT parameters.
Table 3: The GMCPHD parameters — Survival probability: 0.98; Initial Gaussian weight: 10⁻³; Pruning threshold: 10⁻²; Merging threshold: 20; Maximum number of targets: 50; Maximum number of Gaussians: 50; Average number of birth; Model noise 2; Maximum velocity: 20.
Table 4: The VS-IMMC parameters — CV model noise 1 in normal direction: 0.1; CV model noise 1 in orthogonal direction: 0.1; CV model noise 2 in normal direction: 0.6; CV model noise 2 in orthogonal direction: 0.4; Maximum value for off-road velocity: 9 m.s⁻¹.
Table 5: The hybridization parameters — Number of iterations for score calculation: 3; Weight threshold for new track: 0.8.
|
04108185 | en | [
"math"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04108185/file/preprint_herman_kluk_propagator.pdf | Keywords: Matrix-valued Hamiltonian, smooth crossings, codimension 1 crossings, wave-packet, Bargmann transform, Herman-Kluk propagators, thawed and frozen Gaussian approximations
Chapter 1. Introduction 1.1. First overview 1.2. The setting 1.3. Classical quantities 1.4. Thawed and frozen Gaussian approximations 1.5. Wave packets propagation at any order through generic smooth crossings 1.6. Detailed overview Part 1. Initial value representations Chapter 2. Frequency localized families 2.1. The Bargmann transform 2.2. Frequency localized families 2.3. Operators built on Bargmann transform Chapter 3. Convergence of the thawed and the frozen Gaussian approximations 3.1. Strategy of the proofs 3.2. Thawed Gaussian approximations with transfers terms 3.3. Frozen Gaussian approximations with transfers terms Part 2. Wave-packet propagation through smooth crossings Chapter 4. Symbolic calculus and diagonalization of Hamiltonians with smooth crossings 4.1. Formal asymptotic series 4.2. 'Rough' reduction 4.3. Superadiabatic projectors and diagonalization Chapter 5. Propagation of wave packets through smooth crossings 5.1. Propagation faraway from the crossing area 5.2. Propagation close to the crossing area 5.3. Propagation through the crossing set 5.4. Propagation of wave packets -Proof of Theorem 1.21 Appendix A. Matrix-valued Hamiltonians Appendix B. Elements of symbolic calculus : the Moyal product B.1. Formal expansion B.2. Symbols with derivative bounds v vi CONTENTS Appendix C. Elements of semi-classical calculus: perturbation of scalar systems C.1. Egorov Theorem C.2. Asymptotic behavior of the propagator C.3. Propagation of wave packets
Introduction
Since the early days of semi-classical analysis, operators that approximate the dynamics of a semi-classical propagator have been the object of major attention. The theory of Fourier integral operators answers to this question by proposing methods for constructing approximative propagators of a scalar semi-classical Schrödinger equation (see [START_REF] Zworski | Semiclassical analysis[END_REF]Chapter 12] or [START_REF] Dimassi | Spectral Asymptotics in the Semi-Classical Limit[END_REF]). Pioneering work about a semi-classical theory of FIOs is [START_REF] Chazarain | Spectre d'un hamiltonien quantique et mécanique classique[END_REF], extended in [START_REF] Helffer | Comportement semi-classique du spectre des hamiltoniens quantiques elliptiques[END_REF] and [START_REF] Robert | Autour de l'approximation semi-classique volume 68 of Progress in Mathematics Birhauser[END_REF].
Few results exist for systems except for those that are called adiabatic, because the eigenvalues of the underlying Hamiltonian matrix are of constant multiplicity. The analysis of such systems can be reduced to those of scalar equations through a diagonalization process using the so-called super-adiabatic projectors. The super-adiabatic approach has been carried out by Martinez and Sordoni [START_REF] Martinez | Twisted pseudodifferential calculus and application to the quantum evolution of molecules[END_REF] as well as Spohn and Teufel [START_REF] Spohn | Adiabatic decoupling and time-dependent Born-Oppenheimer theory[END_REF], see also [START_REF] Emmrich | Geometry of the transport equation in multicomponent WKB approximation[END_REF][START_REF] Nenciu | On the adiabatic theorem of quantum mechanics[END_REF][START_REF] Nenciu | Linear adiabatic theory. Exponential estimates[END_REF][START_REF] Bily | Propagation d'états cohérents et applications[END_REF] for earlier results or [START_REF] Volker | Superadiabatic transition histories in quantum molecular dynamics[END_REF][START_REF] Panati | Space-adiabatic perturbation theory[END_REF] for more recent results in a similar direction. The present study gives the first complete construction of an integral representation of the propagator associated to a Hamiltonian generating non-adiabatic dynamics in a very general situation. It focuses on those Hamiltonian matrices that have smooth eigenprojectors, with smooth eigenvalues, though of non constant multiplicity. The framework applies to generic situations where two eigenvalues cross along a hypersurface on points where the Hamiltonian vector fields associated with these eigenvalues are transverse to the crossing hypersurface. This set-up has already been the one of the work of Hagedorn [START_REF] Hagedorn | Molecular Propagation through Electron Energy Level Crossings[END_REF]Section 5] and Jecko [START_REF] Jecko | Semiclassical resolvent estimates for Schrödinger matrix operators with eigenvalues crossings[END_REF]. The Fourier integral operators approximating the propagator associated with these non-adiabatic Hamiltonians are based on Gaussian wave-packets and the Bargmann transform, in the spirit of the Herman-Kluk propagator.
The Herman-Kluk propagator has been introduced in theoretical chemistry (see [START_REF] Heller | Time-dependent approach to semiclassical dynamics[END_REF][START_REF] Kay | Integral expressions for the semi-classical time-dependent propagator[END_REF][START_REF] Herman | A semiclassical justification for the use of non-spreading wavepackets in dynamics calculations[END_REF][START_REF] Kay | The Herman-Kluk approximation: derivation and semiclassical corrections[END_REF]) for the analysis of molecular dynamics for scalar equations. The mathematical analysis has been performed later by Rousse and Swart [START_REF] Swart | A mathematical justification for the Herman-Kluk Propagator[END_REF] and Robert [START_REF] Robert | On the Herman-Kluk Semiclassical Approximation[END_REF], independently. The action of the Herman-Kluk propagator consists in the continuous decomposition of the initial data into semiclassical Gaussian wave-packets and the implementation of the propagation of the wave-packets as studied in the 70s and 80s by Heller [START_REF] Heller | Time-dependent approach to semiclassical dynamics[END_REF], Combescure and Robert [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF], and Hagedorn [24]. It involves time-dependent quantities that are called classical quantities because they can be interpreted in terms of Newtonian mechanics. Such an approximative description of the propagator in terms of several Gaussian wave packets motivates numerical methods that naturally combine with probabilistic sampling techniques, see [START_REF] Kluk | Comparison of the propagation of semiclassical frozen Gaussian wave functions with quantum propagation for a highly excited anharmonic oscillator[END_REF] or more recently [START_REF] Lasser | Discretising the Herman-Kluk Propagator[END_REF][START_REF] Kröninger | Sampling strategies for the Herman-Kluk propagator of the wavefunction[END_REF].
We prove the convergence of two types of approximations, respectively called thawed and frozen Gaussian approximations, both built from continuous superpositions of Gaussian wave-packets, the frozen one in the spirit of the original Herman-Kluk propagator. Their difference mainly lies in the way the width matrices resulting from the propagation of the individual semi-classical Gaussian wave packets are treated. The presence of crossings requires adding to the semi-classical Gaussian wave-packet propagation some transitions between the modes along the crossing hypersurface. Therefore, these
Fourier integral operators incorporate classical transport along the Hamiltonian trajectories associated with the eigenvalues of the Hamiltonian and a branching process along the crossing hypersurface. Some of these ideas have been introduced in [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF][START_REF] Fermanian Kammerer | Adiabatic and non-adiabatic evolution of wave packets and applications to initial value representations[END_REF], in particular in [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF] where the propagation of wave-packets through smooth crossings has been studied. Here, we revisit and extend these results, by proving that a wave-packet propagated through a smooth generic crossing remains asymptotically a wave-packet to any order in the semi-classical parameter. We then prove uniform estimates for the associated semi-classical approximations of propagators when acting on families of initial data that are frequency localized in the sense that their L 2 -mass does not escape in phase space to ∞ when the semi-classical parameter goes to 0, neither in position, nor in momentum. This class of initial data is typically met for the numerical simulation of molecular quantum systems.
Previous results. The analysis of the propagation through smooth eigenvalue crossings was pioneered by Hagedorn in [26, Chapter 5]. He considered Schrödinger operators with matrix-valued potentials and propagated initial data that are known as semi-classical wave packets or generalized coherent states [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF], Chapter 4. The core of the wave-packet had to be chosen such that it classically propagates to the crossing. In the same framework, adjusted to the context of solid state physics, Watson and Weinstein [START_REF] Watson | Wavepackets in inhomogeneous periodic media: propagation through a one-dimensional band crossing[END_REF] analyzed the propagation of wave-packets through a smooth crossing of Bloch bands. The results developed here extend [START_REF] Hagedorn | Molecular Propagation through Electron Energy Level Crossings[END_REF], Chapter 5, and [START_REF] Watson | Wavepackets in inhomogeneous periodic media: propagation through a one-dimensional band crossing[END_REF] in two ways. The single wave-packet is turned into an initial value representation with uniform control for frequency localized initial data. The Schrödinger and Bloch operators are generalized to Weyl quantized operators with smooth time-dependent symbol.
First overview
The remainder of the introduction specifies the mathematical setting (assumptions on the Hamiltonian operator and the initial data), discusses the classical quantities involved in the approximation, reviews the known results on the thawed and frozen initial value representations in the adiabatic setting, and then presents the main results of this paper: Theorem 1.18 on the thawed approximation with hopping trajectories, Theorems 1.19 and 1.20 on the frozen approximation with hopping trajectories, pointwise and averaged in time respectively, and Theorem 1.21 on wave-packet propagation through smooth crossings to arbitrary order.
We prove Theorems 1.18, 1.19 and 1.20 in Chapters 2 and 3. These proofs rely on Theorem 1.21, that is proved in Chapters 4 and 5.
Chapter 2 recalls elementary facts about the Bargmann transform. Then, it introduces the new notion of frequency localization, which will be crucial for controlling the remainder estimates for both the frozen and the thawed initial value representations in Chapter 3.
The refined wave-packet analysis of Chapters 4 and 5 does not depend on the theory of initial value representations and can be read independently of Chapters 2 and 3. It propagates wave-packets through smooth crossings in two steps: using a rough diagonalisation of the Hamiltonian operator in the crossing region, and super-adiabatic projectors outside of it. Both constructions rely on pseudo-differential calculus for matrix-valued symbols that is developed in Chapter 4 and complemented by additional technical points in the appendices.
Notations and conventions. All the functional sets that we shall consider in this article can have values in C (scalar-valued), C m (vector-valued) or in C m,m (matrix-valued). We denote by g, f = R d f (x)g(x)dx the inner product of L 2 (R d , C). If π is a projector, then π ⊥ denotes the projector π ⊥ = I -π. We set D x = 1 i ∂ x . In the context of Assumption 1.3, we shall say that a matrix A is diagonal if A = π 1 Aπ 1 + π 2 Aπ 2 and off-diagonal if A = π 1 Aπ 2 + π 2 Aπ 1 .
1.2. The setting

1.2.1. The Schrödinger equation. We consider the Schrödinger equation
(1.1)   iε ∂_t ψ^ε(t) = \widehat{H^ε}(t) ψ^ε(t),   ψ^ε_{|t=t_0} = ψ^ε_0,
in L^2(R^d, C^m), m ≥ 2, where \widehat{H^ε}(t) is the semi-classical quantization of a Hermitian matrix symbol H^ε(t, z) ∈ C^{m,m}. Here, t ∈ R, z = (x, ξ) ∈ R^d × R^d and ε is the semi-classical parameter, ε ≪ 1. Moreover, for a ∈ C^∞(R^{2d}) a smooth scalar-, vector- or matrix-valued function with adequate control on the growth of its derivatives, the Weyl operator \widehat{a} = op^w_ε(a) is defined by
op^w_ε(a) f(x) := \widehat{a} f(x) := (2πε)^{-d} ∫_{R^{2d}} a((x+y)/2, ξ) e^{i ξ·(x-y)/ε} f(y) dy dξ,   for all f ∈ S(R^d).
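For orientation, here is a quick illustration (ours, not part of the original text) of the Weyl quantization on the simplest symbols:
op^w_ε(x_j) f = x_j f,   op^w_ε(ξ_j) f = εD_{x_j} f = \frac{ε}{i} ∂_{x_j} f,   op^w_ε(x_j ξ_j) f = \frac{1}{2}( x_j εD_{x_j} + εD_{x_j} x_j ) f,
and, for a matrix-valued potential V, op^w_ε( \frac{|ξ|^2}{2} I_{C^m} + V(x) ) = -\frac{ε^2}{2} Δ_x I_{C^m} + V(x), which is the matrix Schrödinger operator of Example 1.6 (1) below.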
In full generality, we could assume that the map (t, z) → H ε (t, z) is a semi-classical observable in the sense that the function H ε (t, z) is an asymptotic sum of the form j≥0 ε j H j (t, z). However, in this asymptotic sum, the important terms are the principal symbol H 0 (t, z) and the sub-principal one H 1 (t, z); the terms H j (t, z) for j ≥ 2 only affect the solution at order ε, which is the order of the approximation we are looking for. Therefore, we assume that the self-adjoint matrix H ε writes H ε (t, z) := H 0 (t, z) + εH 1 (t, z).
1.2.2. Assumptions on the Hamiltonian. We work on a time interval of the form I := [t 0 , t 0 + T ], t 0 ∈ R and T > 0 and consider subquadratic matrix-valued Hamiltonians. Definition 1.1 (Subquadratic ). The ε-dependent Hamiltonian
H ε = H 0 + εH 1 ∈ C ∞ (I × R 2d , C m,m )
is subquadratic on the time interval I if and only if one has the property:
(1.2)   ∀j ∈ {0,1}, ∀γ ∈ N^{2d}, ∃C_{j,γ} > 0,   sup_{(t,z) ∈ I×R^{2d}} |∂^γ_z H_j(t,z)| ≤ C_{j,γ} ⟨z⟩^{(2-j-|γ|)_+}.
Assuming that H ε is subquadratic on the time interval I ensures that the system (1.1) is well-posed in L 2 (R d , C m ) for t ∈ I, and, more generally (see [START_REF] Maspero | On time dependent Schrödinger equations: global well-posedness and growth of Sobolev norms[END_REF]), in the functional spaces
Σ k ε (R d ) = f ∈ L 2 (R d ), ∀α, β ∈ N d , |α| + |β| ≤ k, x α (ε∂ x ) β f ∈ L 2 (R d ) , k ∈ N endowed with the norm f Σ k ε = sup |α|+|β|≤k x α (ε∂ x ) β f L 2 .
We denote by U ε H (t, t 0 ) the unitary propagator defined by iε∂ t U ε H (t, t 0 ) = H ε (t)U ε (t, t 0 ), U ε (t 0 , t 0 ) = I C m . It is a bounded operator of the Σ k ε (R d ) spaces, uniformly in ε (see [START_REF] Maspero | On time dependent Schrödinger equations: global well-posedness and growth of Sobolev norms[END_REF]): there exists C T > 0 such that sup t∈I U ε H (t, t 0 ) L(Σ k ε ) ≤ C T . We assume that the principal symbol H 0 (t, z) of H ε (t, z) has two distinct eigenvalues that present a smooth crossing in the sense of the definitions of [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF]. Namely, we consider different properties of the crossings. Definition 1.2.
(1) (Smooth crossing). The matrix
H 0 ∈ C ∞ (I × R 2d , C m,m
) has a smooth crossing on the set Υ ⊆ I × R 2d if there exists h 1 , h 2 ∈ C ∞ (I × R 2d ) and two orthogonal projectors
π 1 , π 2 ∈ C ∞ (I × R 2d , C m,m ) such that H 0 = h 1 π 1 + h 2 π 2 and h 1 (t, z) = h 2 (t, z) ⇐⇒ (t, z) ∈ Υ.
(2) Set f (t, z) = 1 2 (h 1 (t, z) -h 2 (t, z)) and v(t, z) = 1 2 (h 1 (t, z) + h 2 (t, z)).
(a) (Non-degenerate crossing). The crossing is non-degenerate at (t , ζ ) ∈ Υ if
d_{t,z}(H_0 - v I_{C^m})(t , ζ ) ≠ 0
where d t,z is the one differential form in the variables (t, z). (b) (Generic crossing points). The crossing is generic at (t , ζ ) ∈ Υ if one has
(1.3)   (∂_t f + {v, f})(t , ζ ) ≠ 0.
Note that there then exists an open set Ω ⊂ I × R 2d containing (t , ζ ) such that the set Υ ∩ Ω is a manifold.
Above, we denote by {f, g} the Poisson bracket of the functions f and g defined on R^{2d}_{x,ξ}:
{f, g} = ∇_ξ f · ∇_x g − ∇_x f · ∇_ξ g.
With these definitions in hand, we introduce one of the main assumptions on the crossing points of the Hamiltonian H^ε. Assumption 1.3 (Crossing set). The Hamiltonian H^ε = H_0 + εH_1 has a smooth crossing set Υ and all the points of Υ are non-degenerate and generic crossing points.
In order to consider the unitary propagators U^{t,t_0}_{h_1} and U^{t,t_0}_{h_2} and to be endowed with convenient bounds on the growth of the projectors, we shall make additional assumptions on the growth of the eigenvalues and of their gap function. Our setting will be the following: Assumption 1.4 (Growth conditions for smooth crossings). Let H^ε = H_0 + εH_1 ∈ C^∞(R × R^{2d}) be subquadratic on the time interval I and have a smooth crossing on the set Υ. We consider the following assumptions:
(i) The growth of H_0(t,z), h_1(t,z) and h_2(t,z) is driven by the function v(t,z), i.e. for j ∈ {1, 2},
(1.4)   ∀γ ∈ N^{2d}, |γ| = 1, ∃C_γ > 0, ∀(t,z) ∈ I × R^{2d},   |∂^γ_z (H_0 − v I_{C^m})(t,z)| + |f(t,z)| ≤ C_γ.
(ii) The eigenvalues h_1 and h_2 are subquadratic, i.e.
(1.5)   ∀γ ∈ N^{2d}, |γ| ≥ 2, ∃C_γ > 0, ∀(t,z) ∈ I × R^{2d},   |∂^γ_z h_j(t,z)| ≤ C_γ.
(iii) The gap is controlled at infinity, i.e. there exist R > 0 and n_0 ∈ N such that
(1.6)   ∀t ∈ I, ∀|z| > R,   |f(t,z)| ≥ C ⟨z⟩^{-n_0},
and in the case n_0 = 0, the functions z ↦ π_1, π_2 are assumed to have bounded derivatives at infinity.
Remark 1.5.
(1) The fact that the eigenvalues h 1 (t, z) and h 2 (t, z) are of subquadratic growth guarantees the existence of the unitary propagators U ε hj (t, t 0 ) for j ∈ {1, 2} and of the classical quantities associated with the Hamiltonians h 1 and h 2 that we will introduce below.
(2) The growth conditions of Assumption 1.4 imply that the eigenprojectors π j (t), j = 1, 2, and their derivatives have at most polynomial growth. However, when n 0 = 0, they may actually grow. This is proved in Lemma A. [START_REF] Volker | Superadiabatic transition histories in quantum molecular dynamics[END_REF]. It is for this reason that we assume that the projectors have bounded derivatives when n 0 = 0 in Point (iii).
(3) If one has (1.4) and (1.6) with n 0 = 0, then (1.5) holds. However, the examples below contain interesting physical situations for which n 0 = 0.
Example 1.6.
(1) Examples of matrix-valued Hamiltonians are given in molecular dynamics (see [START_REF] Hagedorn | Molecular Propagation through Electron Energy Level Crossings[END_REF], Chapter 5) by Schrödinger operators with matrix-valued potential,
H_S = −(ε²/2) Δ_x I_{C^2} + V(x),   V ∈ C^∞(R^d, C^{2×2}).
When V presents a codimension 1 crossing (as defined in [START_REF] Hagedorn | Molecular Propagation through Electron Energy Level Crossings[END_REF]), the crossing points (x, ξ) are non-degenerate and generic when ξ ≠ 0.
H A = A(-iε∇ x ) + W (x)I C 2 , A ∈ C ∞ (R d , C 2×2 ), W ∈ C ∞ (R d , C).
(3) Finally, in [START_REF] Fermanian Kammerer | Adiabatic and non-adiabatic evolution of wave packets and applications to initial value representations[END_REF], the authors have considered the operator
H k,θ = ε i d dx I C 2 + kx 0 e iθx e -iθx 0 , with d = 1, N = 2, θ ∈ R + , k ∈ R * .
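As a minimal worked example of Definition 1.2 (this computation is ours and only illustrates the statement of item (1) above): take V(x) = x_1 σ_3 with σ_3 = diag(1, −1) in the Schrödinger operator H_S. Then
H_0(x, ξ) = \frac{|ξ|^2}{2} I_{C^2} + x_1 σ_3,   h_{1,2}(x, ξ) = \frac{|ξ|^2}{2} ± x_1,   f(x, ξ) = x_1,   v(x, ξ) = \frac{|ξ|^2}{2},
so that Υ = I × {x_1 = 0}. One has d_{t,z}(H_0 − v I_{C^2}) = σ_3 dx_1 ≠ 0 everywhere, and ∂_t f + {v, f} = ξ·∇_x x_1 = ξ_1, so the crossing points are non-degenerate everywhere and generic exactly where ξ_1 ≠ 0.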
1.2.3. Assumptions on the data. We consider vector-valued initial data
ψ ε 0 ∈ L 2 (R d , C m ) of the form ψ ε 0 = V φ ε 0
where z → V (z) is a smooth function, bounded together with its derivatives and φ ε 0 ∈ L 2 (R d , C) is frequency localized in the sense of the next definition. For stating it, we denote the Gaussian of expectation q, variance √ ε that oscillates along p according to (1.7)
g ε z (x) = (πε) -d/4 e -(x-q) 2 ε + i ε p•(x-q) , ∀x ∈ R d .
Definition 1.7 (Frequency localized functions). Let (φ^ε)_{ε>0} be a family of functions of L²(R^d). The family (φ^ε)_{ε>0} is frequency localized if the family is bounded in L²(R^d) and if there exist R_0, C_0, ε_0 > 0 and N_0 > d + 1/2 such that for all ε ∈ (0, ε_0],
(2πε)^{-d/2} |⟨g^ε_z, φ^ε⟩| ≤ C_0 ⟨z⟩^{-N_0}   for all z ∈ R^{2d} with |z| > R_0.
One then says that (φ ε ) ε>0 is frequency localized.
We will introduce a more precise definition in Chapter 2. For families that are frequency localized, the set of z ∈ R 2d in the identity (2.2) can be restricted to a compact set (see Lemma 2.6). The analysis of the examples given below is performed in Lemma 2.9.
Example 1.8.
(1) The Gaussian wave packets (g ε z0 ) ε>0 are frequency localized families. (2) Define (WP ε z0 (u)) ε>0 by
(1.8) WP ε z0 (u)(x) = ε -d/4 e i ε p0•(x-q0) u x -q 0 √ ε , x ∈ R d ,
for u ∈ S(R^d) and z_0 = (q_0, p_0) ∈ R^{2d}. They are frequency localized families.
(3) Lagrangian (or WKB) states ϕ^ε(x) = a(x) e^{(i/ε) S(x)}, with a ∈ C^∞_0(R^d, C) and S ∈ C^∞(R^d, R), also are frequency localized families.
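To illustrate item (1) (this short computation is ours): for φ^ε = g^ε_{z_0}, a direct Gaussian computation gives |⟨g^ε_z, g^ε_{z_0}⟩| = e^{-|z−z_0|^2/(4ε)}. Hence, for |z| > |z_0| + 1 one has |z − z_0| ≥ 1 and, using e^{-s^2/(8ε)} ≤ C_N ε^N ⟨s⟩^{-N} for s ≥ 1 together with the Peetre inequality ⟨z − z_0⟩^{-N} ≤ 2^{N/2} ⟨z_0⟩^N ⟨z⟩^{-N},
(2πε)^{-d/2} |⟨g^ε_z, g^ε_{z_0}⟩| ≤ (2πε)^{-d/2} e^{-1/(8ε)} e^{-|z−z_0|^2/(8ε)} ≤ C_0 ⟨z⟩^{-N_0}
for any N_0 and ε small enough, which is the bound required in Definition 1.7.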
Our vector-valued initial data will have a scalar part consisting in a frequency localized family.
Assumption 1.9. The initial data ψ^ε_0 in (1.1) satisfies
(1.9)   ψ^ε_0(x) = V φ^ε_0(x),   x ∈ R^d,
where
(i) the family (φ^ε_0)_{ε>0} is frequency localized with constants R_0, N_0, C_0, ε_0 as in Definition 1.7;
(ii) the function z ↦ V(z) is a function of C^∞(R^{2d}, C^m), bounded together with its derivatives, and valued in the set of normalized vectors.
We point out that any vector-valued bounded family in L²(R^d) writes as a sum of data of the form V φ^ε_0(x) for (φ^ε_0)_{ε>0} bounded. As a consequence, assuming that the initial data ψ^ε_0 satisfies Assumption 1.9 is not really restrictive. Of course, the vector-valued function V can be turned into −V by changing φ^ε_0 into −φ^ε_0.
Classical quantities
In this section, we introduce classical quantities associated with the Hamiltonian H ε . These quantities will be used to construct the approximations of the propagator U ε H (t, t 0 ) that are the subject of this text. They are called classical because they do not depend on the semi-classical parameter ε and are obtained by solving ε-independent equations that mainly are ODEs instead of PDEs. Thus, the numerical realization of the resulting propagator's approximations avoids the difficulties induced by the 1 ε -oscillations and is applicable in a high-dimensional setting, see [START_REF] Lasser | Computing quantum dynamics in the semiclassical regime[END_REF] for a recent review on this topic. Besides their definition, we shall also recall well-known results about their role in the description of Schrödinger propagators.
In this section, we assume that H ε = H 0 + εH 1 is subquadratic on the time interval I (as defined in Definition 1.1), with smooth eigenprojectors π 1 and π 2 , and eigenvalues h 1 and h 2 , the latter being subquadratic (as in (ii) of Assumption 1.4).
1.3.1. The flow map. Let ℓ ∈ {1, 2}. We associate with h_ℓ : I × R^{2d} → R, (t, z) ↦ h_ℓ(t, z), the functions z_ℓ(t) = (q_ℓ(t), p_ℓ(t)) which denote the classical Hamiltonian trajectory issued from a phase space point z_0 at time t_0, defined by the ordinary differential equation
ż_ℓ(t) = J ∂_z h_ℓ(t, z_ℓ(t)),   z_ℓ(t_0) = z_0,
with
(1.10)   J = \begin{pmatrix} 0 & I_{R^d} \\ -I_{R^d} & 0 \end{pmatrix}.
We note that J is the matrix associated with the symplectic form
σ(z, z') = ⟨Jz, z'⟩ = p·q' − p'·q,   z = (q, p), z' = (q', p') ∈ R^{2d}.
The trajectory z_ℓ(t) = z_ℓ(t, t_0, z_0) depends on the initial datum and defines the associated flow map Φ^{t,t_0}_{h_ℓ} of the Hamiltonian function h_ℓ via
z_0 ↦ Φ^{t,t_0}_{h_ℓ}(z_0) := z_ℓ(t, t_0, z_0),   z_0 ∈ R^{2d}.
We will also use the trajectory's action integral
(1.11)   S_ℓ(t, t_0, z_0) = ∫_{t_0}^{t} ( p_ℓ(s)·q̇_ℓ(s) − h_ℓ(s, z_ℓ(s)) ) ds,
and the Jacobian matrix of the flow map, also called stability matrix,
(1.12)   F_ℓ(t, t_0, z_0) = ∂_z Φ^{t,t_0}_{h_ℓ}(z_0).
Note that F_ℓ(t, t_0, z_0) is a symplectic 2d × 2d matrix that satisfies the linearized flow equation
(1.13)   ∂_t F_ℓ(t, t_0, z_0) = J Hess_z h_ℓ(t, z_ℓ(t)) F_ℓ(t, t_0, z_0),   F_ℓ(t_0, t_0, z_0) = I_{R^{2d}}.
We denote its blocks by
(1.14)   F_ℓ(t, t_0, z_0) = \begin{pmatrix} A_ℓ(t, t_0, z_0) & B_ℓ(t, t_0, z_0) \\ C_ℓ(t, t_0, z_0) & D_ℓ(t, t_0, z_0) \end{pmatrix}.
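As a standard illustration (ours, not from the text): for the harmonic oscillator h_ℓ(t, z) = \frac{1}{2}(|p|^2 + |q|^2), the flow is a rotation of phase space,
Φ^{t,t_0}_{h_ℓ}(q_0, p_0) = ( q_0 \cos(t−t_0) + p_0 \sin(t−t_0),\; p_0 \cos(t−t_0) − q_0 \sin(t−t_0) ),
so that
F_ℓ(t, t_0, z_0) = \begin{pmatrix} \cos(t−t_0)\, I & \sin(t−t_0)\, I \\ −\sin(t−t_0)\, I & \cos(t−t_0)\, I \end{pmatrix},
which indeed solves (1.13) since Hess_z h_ℓ = I_{R^{2d}}, and (1.11) gives S_ℓ(t, t_0, z_0) = \frac{1}{4}(|p_0|^2 − |q_0|^2) \sin(2(t−t_0)) − p_0·q_0 \sin^2(t−t_0).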
1.3.2. The metaplectic transform and Gaussian states. It is standard to associate with the time-dependent symplectic map F_ℓ(t, t_0, ·) a unitary evolution operator, the metaplectic transformation, which acts on square integrable functions in L²(R^d) as a unitary transformation,
M[F_ℓ(t, t_0, z_0)] : u_0 ↦ u(t),
and associates with an initial datum u_0 the solution at time t of the Cauchy problem
i∂_t u(t) = op^w_1( \tfrac{1}{2} Hess_z h_ℓ(t, Φ^{t,t_0}_{h_ℓ}(z_0)) z·z ) u,   u(t_0) = u_0.
This map is called the metaplectic transformation associated with the matrix F_ℓ(t, t_0, z_0) (see [START_REF] Maspero | On time dependent Schrödinger equations: global well-posedness and growth of Sobolev norms[END_REF]). It satisfies, for all ε > 0 and for all symbols a that are compactly supported or polynomial,
(1.15)   M[F_ℓ(t, t_0, z_0)]^{-1} op_ε(a) M[F_ℓ(t, t_0, z_0)] = op_ε( a(F_ℓ(t, t_0, z_0) z) ).
All these classical quantities are involved in the description of the propagation of Gaussian states by U^ε_{h_ℓ}(t, t_0); these states are a generalization of the Gaussian families (g^ε_z)_{ε>0} that we have already seen. Gaussian states are wave packets WP^ε_z(g^Γ) with complex-valued Gaussian profiles g^Γ, whose covariance matrix Γ is taken in the Siegel half-space S_+(d) of d × d complex-valued symmetric matrices with positive imaginary part,
S_+(d) = { Γ ∈ C^{d×d}, Γ = Γ^τ, Im Γ > 0 }.
More precisely, g^Γ depends on Γ ∈ S_+(d) according to
(1.16)   g^Γ(x) := c_Γ e^{(i/2) Γx·x},   x ∈ R^d,
where c_Γ = π^{-d/4} det^{1/4}(Im Γ) is a normalization constant in L²(R^d).
It is a non-zero complex number whose argument is determined by continuity according to the working environment. The propagation of Gaussian states by a metaplectic transform is well-known: for Γ 0 ∈ S + (d), we have
(1.17) M[F (t, t 0 , z 0 )]g Γ0 = g Γ (t,t0,z0) ,
where the width Γ (t, t 0 , z 0 ) ∈ S + (d) and the corresponding normalization c Γ (t,t0,z0) are determined by the initial width Γ 0 and the Jacobian F (t, t 0 , z 0 ) according to
Γ (t, t 0 , z 0 ) = (C (t, t 0 , z 0 ) + D (t, t 0 , z 0 )Γ 0 )(A (t, t 0 , z 0 ) + B (t, t 0 , z 0 )Γ 0 ) -1 (1.18) c Γ (t,t0,z0) = c Γ0 det -1/2 (A (t, t 0 , z 0 ) + B (t, t 0 , z 0 )Γ 0 ).
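As a quick sanity check of (1.17)-(1.18) (our computation): for the harmonic oscillator flow recalled above, A_ℓ = D_ℓ = \cos(t−t_0) I and C_ℓ = −B_ℓ = −\sin(t−t_0) I, so the initial width Γ_0 = iI yields
Γ_ℓ(t, t_0, z_0) = (−\sin(t−t_0) + i\cos(t−t_0)) (\cos(t−t_0) + i\sin(t−t_0))^{-1} I = iI   for all t,
that is, standard coherent states keep their width, while c_{Γ_ℓ(t,t_0,z_0)} = c_{Γ_0} \det^{-1/2}( (\cos(t−t_0) + i\sin(t−t_0)) I ) = π^{-d/4} e^{-i d (t−t_0)/2} records the accumulated phase.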
The branch of the square root in det^{-1/2} is determined by continuity in time. Besides, the action of U^ε_{h_ℓ}(t, t_0) on Gaussian wave packets WP^ε_z(g^Γ) obeys
U^ε_{h_ℓ}(t, t_0) WP^ε_{z_0}(g^{Γ_0}) = e^{(i/ε) S_ℓ(t,t_0,z_0)} WP^ε_{Φ^{t,t_0}_{h_ℓ}(z_0)}( g^{Γ_ℓ(t,t_0,z_0)} ) + O(√ε)
in any space Σ^k_ε(R^d) (see [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF]).

1.3.3. Parallel transport. For systems, the wave function is valued in L²(R^d, C^m) and thus vector-valued. The propagation then involves a transformation of the vector part of the eigenfunctions that is called parallel transport.
Denoting by π_ℓ^⊥ the projector π_ℓ^⊥ = I − π_ℓ, we define self-adjoint matrices H^{adia}_{ℓ,1} by
(1.19)   π_ℓ^⊥ H^{adia}_{ℓ,1} π_ℓ^⊥ = 0,   π_ℓ H^{adia}_{ℓ,1} π_ℓ = π_ℓ ( H_1 + \tfrac{1}{2i}{H_0, π_ℓ} ) π_ℓ,   π_ℓ^⊥ H^{adia}_{ℓ,1} π_ℓ = π_ℓ^⊥ ( i∂_t π_ℓ + i{h_ℓ, π_ℓ} ) π_ℓ.
One then introduces the map R_ℓ(t, t_0, z) defined for ℓ ∈ {1, 2} by
(1.20)   i∂_t R_ℓ(t, t_0, z) = H^{adia}_{ℓ,1}(t, Φ^{t,t_0}_{h_ℓ}(z)) R_ℓ(t, t_0, z),   R_ℓ(t_0, t_0, z) = I_{C^m}.
The map t ↦ H^{adia}_{ℓ,1}(t, Φ^{t,t_0}_{h_ℓ}(z)) is locally Lipschitz and valued in the set of self-adjoint matrices. Therefore, the existence of R_ℓ(t, t_0, z) follows from solving a linear time-dependent ODE by the Cauchy-Lipschitz theorem.
Lemma 1.10. For all (t, z) ∈ I × R 2d and ∈ {1, 2}, the matrices R (t, t 0 , z) are unitary matrices. Besides, they satisfy
(1.21) R (t, t 0 , z)π (t 0 , z) = π t, Φ t,t0 (z) R (t, t 0 , z).
This Lemma is proved in Appendix A. The relation (1.21) implies that whenever a vector V 0 is in the eigenspace of H 0 (t 0 , z 0 ) for the eigenvalue h (t 0 , z 0 ), then the vector R (t, t 0 , z) V 0 is in the range of π (t, Φ t,t0 (z)). In other words, we have constructed a map that preserves the eigenspaces along the flow: R (t, t 0 , z) : Ran (π (t 0 , z)) → Ran π (t, Φ t,t0 (z)) . The matrices R (t, t 0 , z) are sometimes referred to as Larmor precession (see [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF]).
The map V 0 → R (t, t 0 , z)π (t 0 , z) V 0 is a parallel transport in the Hermitian vector fiber bundle (t, z) → Ran(π (t, z)) over the phase space I × R 2d ⊂ R 1+2d , associated with the curve s → γ(s) = s, Φ s,t0 h (z 0 ) s∈I and the matrix H adia ,1 . Indeed, the covariant derivative along the curve (γ(s)) s∈I is given by
∇ γ(s) = ∂ t + Jdh • ∇ z
and the relation X s, Φ s,t0 h (z 0 ) = R (t, t 0 , z)π (t 0 , z) V 0 defines a smooth section along the path γ that satisfies ∇ γ(s) X(t, x) = -iH adia ,1 (t, x) X(t, x). The map R , ∈ {1, 2} plays a role on the quantum side in the adiabatic setting for the propagation of wave packets. The proof of the next statement can be found in [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF]Chapter 14] and [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF], and Chapter 5 of this memoir.
Proposition 1.11 (Vector-valued wave packets). Let k ∈ N and assume that H ε = H 0 + εH 1 satisfies Assumptions 1.3 and 1.4. Let ∈ {1, 2} and (t 0 , z 0 ) ∈ I × R d . Then, for any ϕ 0 ∈ S(R d , C) and V 0 ∈ Ran π (t 0 , z 0 ), there exists a constant C > 0 such that
sup t∈J U ε H (t, t 0 ) V 0 WP ε z0 ϕ 0 -e i ε S (t,t0,z0) V (t, t 0 ) WP ε Φ t,t 0 h (z0) ϕ ε (t) Σ k ε ≤ C √ ε,
where the profile function ϕ ε (t) is given by
ϕ ε (t) = M[F (t, t 0 , z 0 )]ϕ 0 , and V (t, t 0 , z) = R t, t 0 , Φ t,t0 h (z 0 ) V 0 .
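A degenerate but instructive special case (ours): if the eigenprojectors π_ℓ do not depend on (t, z) and H_1 = 0, then all three relations in (1.19) give H^{adia}_{ℓ,1} = 0, hence R_ℓ(t, t_0, z) = I_{C^m} by (1.20), and Proposition 1.11 reduces to the scalar picture: the polarization vector V_0 ∈ Ran π_ℓ is carried along unchanged while the profile evolves through M[F_ℓ(t, t_0, z_0)].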
Thawed and frozen Gaussian approximations
Thawed and frozen Gaussian approximations have been introduced in the 80's in theoretical chemistry [START_REF] Herman | A semiclassical justification for the use of non-spreading wavepackets in dynamics calculations[END_REF][START_REF] Kay | Integral expressions for the semi-classical time-dependent propagator[END_REF][START_REF] Kay | The Herman-Kluk approximation: derivation and semiclassical corrections[END_REF]. The frozen one has become popular as the so-called Herman-Kluk approximation. They rely on the fact that the family of wave packets (g ε z ) z∈R 2d forms a continuous frame and provides for all square integrable functions f ∈ L 2 (R d ) the reconstruction formula
f (x) = (2πε) -d z∈R 2d g ε z , f g ε z (x)dz.
The leading idea is then to write the unitary propagation of general, square integrable initial data
ψ ε 0 ∈ L 2 (R d ) as U ε H (t, t 0 )ψ ε 0 = (2πε) -d z∈R 2d g ε z , ψ ε 0 U ε H (t, t 0 )g ε z dz,
and to take advantage of the specific properties of the propagation of Gaussian states to obtain an integral representation that allows in particular for an efficient numerical realization of the propagator. Such a program has been completely accomplished in the scalar case. However, the mathematical proof of the convergence of this approximation is more recent [START_REF] Swart | A mathematical justification for the Herman-Kluk Propagator[END_REF][START_REF] Robert | On the Herman-Kluk Semiclassical Approximation[END_REF] and can be easily extended to the adiabatic setting (see [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF]). We recall in the first subsection these adiabatic results and then we explain how we extend this approach to systems presenting smooth crossings via a hopping process. Surface hopping has been popularized in theoretical chemistry by the algorithm of the fewest switches (see [START_REF] Tully | Trajectory surface hopping approach to nonadiabatic molecular collisions: the reaction of H + with D 2[END_REF]) and been combined with frozen Gaussian propagation in various instances, see for example [START_REF] Wu | A justification for a nonadiabatic surface hopping Herman-Kluk semiclassical initial value representation of the time evolution operator[END_REF][START_REF] Lu | Frozen Gaussian approximation with surface hopping for mixed quantumclassical dynamics: A mathematical justification of fewest switches surface hopping algorithms[END_REF]. Here, it is the first time that the combination is achieved in a fully rigorous manner.
1.4.1. The adiabatic situation. Whenever the eigenvalues are of constant multiplicity, the classical quantities that we have introduced above are enough to construct an approximation of the propagator. For ℓ ∈ {1, 2}, we define the first order thawed Gaussian approximation for the ℓ-th mode as the operator J^{t,t_0}_{ℓ,th} defined on functions of the form ψ = V f, f ∈ L²(R^d), by
(1.22)   J^{t,t_0}_{ℓ,th}(V f) = (2πε)^{-d} ∫_{R^{2d}} e^{(i/ε) S_ℓ(t,t_0,z)} ⟨g^ε_z, f⟩ V_ℓ(t, t_0, z) g^{Γ_ℓ(t,t_0,z), ε}_{Φ^{t,t_0}_{h_ℓ}(z)} dz,
with
(1.23)   V_ℓ(t, t_0, z) = R_ℓ(t, t_0, z) π_ℓ(t_0, z) V(z).
This family of operators is bounded in L(L²(R^d), Σ^k_ε(R^d)) (see Corollary 2.17). Notice that the operator f ↦ J^{t,t_0}_{ℓ,th}(V f) has a Schwartz distribution kernel and defines a Fourier integral operator with an explicit complex phase.
Theorem 1.12 (Thawed Gaussian approximation [START_REF] Kay | Integral expressions for the semi-classical time-dependent propagator[END_REF][START_REF] Robert | On the Herman-Kluk Semiclassical Approximation[END_REF][START_REF] Swart | A mathematical justification for the Herman-Kluk Propagator[END_REF][START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF]). Assume h_ℓ is an eigenvalue of constant multiplicity of a matrix H^ε = H_0 + εH_1 of subquadratic growth on the time interval I. Let t_0, T ∈ R with [t_0, T] ⊂ I. Then, there exists C_T > 0 such that for all φ^ε_0 ∈ L²(R^d), all V ∈ C^∞(R^{2d}, C^m) bounded with bounded derivatives, and all t ∈ [t_0, t_0 + T],
‖ U^ε_H(t, t_0)( π_ℓ(t_0) V φ^ε_0 ) − J^{t,t_0}_{ℓ,th}(V φ^ε_0) ‖_{L²} ≤ C_T ε ‖φ^ε_0‖_{L²}.
Remark 1.13.
(1) Of course, there is no uniqueness of the writing ψ = V f. However, changing (V, f) into (kV, k^{-1} f) for some constant k ∈ C does not affect the result. One can also think of modifying V by multiplying it by a non-vanishing function a ∈ C^∞(R^{2d}) such that a and 1/a have bounded derivatives. Then, it is enough to turn f into a^{-1} f (see Remark 2.23).
(2) The approach of thawed and frozen approximations that we develop in this text allows to extend the convergence to the spaces Σ^k_ε provided the initial data (φ^ε_0)_{ε>0} is frequency localized and k satisfies N_0 > k + d + 1/2 (N_0 being associated with (φ^ε_0)_{ε>0} by Definition 1.7).
As first proposed in [START_REF] Herman | A semiclassical justification for the use of non-spreading wavepackets in dynamics calculations[END_REF], it is also possible to get rid of the time-dependent variance matrices Γ_ℓ by introducing the Herman-Kluk prefactors for the ℓ-th modes, a_ℓ, defined by
(1.24)   a_ℓ(t, t_0, z) = 2^{-d/2} det^{1/2}( A_ℓ(t, t_0, z) + D_ℓ(t, t_0, z) + i(C_ℓ(t, t_0, z) − B_ℓ(t, t_0, z)) ).
One then defines the first order frozen Gaussian approximation for the ℓ-th mode as the operator J^{t,t_0}_{ℓ,fr} defined by
(1.25)   J^{t,t_0}_{ℓ,fr}(V f) = (2πε)^{-d} ∫_{R^{2d}} e^{(i/ε) S_ℓ(t,t_0,z)} ⟨g^ε_z, f⟩ a_ℓ(t, t_0, z) V_ℓ(t, t_0, z) g^ε_{Φ^{t,t_0}_{h_ℓ}(z)} dz.
Here again, this family of operators is bounded in L(L²(R^d), Σ^k_ε(R^d)) (see Corollary 2.17). The next result then is a consequence of Theorem 1.12.
Theorem 1.14 (Frozen Gaussians approximation [START_REF] Kay | Integral expressions for the semi-classical time-dependent propagator[END_REF][START_REF] Robert | On the Herman-Kluk Semiclassical Approximation[END_REF][START_REF] Swart | A mathematical justification for the Herman-Kluk Propagator[END_REF][START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF]). Assume h_ℓ is an eigenvalue of constant multiplicity of a matrix H^ε = H_0 + εH_1 of subquadratic growth on the time interval I. Let t_0, T ∈ R with [t_0, T] ⊂ I. Then, there exists C_T > 0 such that for all φ^ε_0 ∈ L²(R^d), all V ∈ C^∞(R^{2d}, C^m) bounded with bounded derivatives, and all t ∈ [t_0, t_0 + T],
‖ U^ε_H(t, t_0)( π_ℓ(t_0) V φ^ε_0 ) − J^{t,t_0}_{ℓ,fr}(V φ^ε_0) ‖_{L²} ≤ C_T ε ‖φ^ε_0‖_{L²}.
The terminology thawed/frozen for these Gaussian approximations was introduced by Heller [START_REF] Heller | Time-dependent approach to semiclassical dynamics[END_REF] to put emphasis on the fact that, in the first case, the covariance matrix evolves "naturally" by following the classical motion while, in the other one, the covariance is "frozen" (constant). The possibility of freezing the covariance matrix was realized by Herman and Kluk (see [START_REF] Herman | A semiclassical justification for the use of non-spreading wavepackets in dynamics calculations[END_REF]) by computing the kernel of the time dependent propagator.
As for Theorem 1.12, one can extend the result to an approximation in Σ^k_ε provided the data (φ^ε_0)_{ε>0} is a frequency localized family and k ∈ N is chosen so that N_0 > k + d + 1/2 (N_0 being associated with (φ^ε_0)_{ε>0} by Definition 1.7).
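For readers interested in the numerical side, the following minimal Python sketch (ours; the model, variable names and discretization choices are illustrative assumptions, not taken from the text) implements the scalar frozen Gaussian approximation (1.25) for the one-dimensional harmonic oscillator h(q, p) = (p^2 + q^2)/2 and compares it with the exact propagation of a coherent state; since this Hamiltonian is quadratic, the printed error essentially measures the quadrature errors.

import numpy as np

# Minimal sketch of the scalar frozen (Herman-Kluk) approximation (1.25)
# for the 1-d harmonic oscillator, with a coherent state as initial datum.
eps = 0.05                          # semi-classical parameter
t = 1.0                             # final time (t0 = 0)
x = np.linspace(-6.0, 6.0, 2048)    # spatial grid
dx = x[1] - x[0]

def gauss(q, p):
    """Gaussian wave packet g^eps_z of (1.7), z = (q, p)."""
    return (np.pi * eps) ** (-0.25) * np.exp(
        -(x - q) ** 2 / (2 * eps) + 1j * p * (x - q) / eps)

q0, p0 = 1.0, 0.0
psi0 = gauss(q0, p0)

def flow(q, p):
    """Classical flow Phi^{t,0}(z): a rotation of phase space."""
    return q * np.cos(t) + p * np.sin(t), p * np.cos(t) - q * np.sin(t)

def action(q, p):
    """Action integral (1.11) for the harmonic oscillator."""
    return (p ** 2 - q ** 2) * np.sin(2 * t) / 4 - p * q * np.sin(t) ** 2

# Herman-Kluk prefactor (1.24): here A = D = cos t and C = -B = -sin t,
# so a(t,0,z) = e^{-it/2}, independent of z.
hk_prefactor = np.exp(-0.5j * t)

# phase-space quadrature of (1.25) on a grid around z0: the Bargmann transform
# of psi0 is concentrated in a ball of radius O(sqrt(eps)) around z0.
qs = np.linspace(q0 - 1.0, q0 + 1.0, 121)
ps = np.linspace(p0 - 1.0, p0 + 1.0, 121)
dq, dp = qs[1] - qs[0], ps[1] - ps[0]

psi_hk = np.zeros_like(x, dtype=complex)
for q in qs:
    for p in ps:
        overlap = np.sum(np.conj(gauss(q, p)) * psi0) * dx   # <g^eps_z, psi0>
        qt, pt = flow(q, p)
        psi_hk += (overlap * hk_prefactor
                   * np.exp(1j * action(q, p) / eps) * gauss(qt, pt))
psi_hk *= dq * dp / (2 * np.pi * eps)

# reference: exact propagation of the coherent state (wave-packet propagation
# is exact for quadratic Hamiltonians, and M[F] g^{iI} = e^{-it/2} g^{iI})
qt0, pt0 = flow(q0, p0)
psi_ref = np.exp(-0.5j * t) * np.exp(1j * action(q0, p0) / eps) * gauss(qt0, pt0)

err = np.sqrt(np.sum(np.abs(psi_hk - psi_ref) ** 2) * dx)
print("L2 error of the frozen Gaussian approximation:", err)

Truncating the phase-space integral to a neighbourhood of z_0 is legitimate here because the initial coherent state is frequency localized, in the spirit of Lemma 2.6 below.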
In the next sections, we present our results and an extension of these statements to systems with crossings. The method we develop also allows us to prove the approximations of Theorems 1.14 and 1.12 in the spaces Σ^k_ε(R^d), with additional assumptions on the initial data. 1.4.2. Initial value representations for codimension 1 crossings at order √ε. Our first result consists in an extension of the range of validity of Theorems 1.12 and 1.14 to Hamiltonians presenting a smooth crossing and satisfying (1.3), at the price of a loss in the accuracy of the approximation.
Theorem 1.15 (Leading order thawed/frozen Gaussian approximation). Let k ∈ N. Assume H ε = H 0 + εH 1 satisfies Assumptions 1.3 and 1.4 on the interval I. Then, there exist constants C T,k > 0, such that for all initial data ψ ε 0 = V φ ε 0 that satisfies Assumptions 1.9 with frequency localization index N 0 > k + d + 1 2 , there exists ε 0 > 0 such that for all t ∈ I and ε ∈ (0, ε 0 ], we have
U ε H (t, t 0 ) π (t 0 ) V φ ε 0 -J t,t0 ,th/fr φ ε 0 Σ k ε ≤ C T √ ε ( φ ε 0 L 2 + C 0 ) .
The remarks below also hold for Theorems 1.18, 1.19 and 1.20.
Remark 1.16.
(1) Of course the result also holds for initial data
ψ ε 0 (x) = V φ ε 0 (x) + r ε 0 (x), x ∈ R d when the family (r ε 0 ) ε>0 satisfies r ε 0 L 2 (R d ) = O(ε) in Σ k ε for the index k considered in the statement.
(2) The fact of being frequency localized with N_0 > k + d + 1/2 implies that (φ^ε_0)_{ε>0} is bounded in Σ^k_ε (see Section 2.2.6). Thus, (U^ε_H(t, t_0)ψ^ε_0)_{ε>0} also is bounded in Σ^k_ε, and this space is the natural one in which to study the approximation.
(3) The control of the approximation in terms of the initial data by φ ε 0 L 2 + C 0 instead of φ ε 0 Σ ε k is due to the method of the proof, which has to account for the presence of the crossing. The constant C 0 (and the L 2 -norm) control the Σ ε k -norm. The loss of accuracy of the approximation, in √ ε instead of ε, is also due to the presence of the crossing set Υ. It induces transitions between the modes that are exactly of order √ ε and cannot be neglected. If the initial data is frequency localized in a domain such that all the classical trajectories issued from its microlocal support at time t 0 do not reach the crossing set before the time t 0 + T , then an estimate in ε will hold. However, if these trajectories pass through the crossing, some additional terms of order √ ε have to be added to obtain an approximation at order ε. Let us now introduce the hopping trajectories that we will consider and the branching of classical quantities that we will use above the crossing set.
1.4.3. Hopping trajectories and branching process. Assume H ε = H 0 + εH 1 satisfies Assumptions 1.3 and 1.4 on the interval I. For considering initial data ψ ε 0 = V φ ε 0 that are frequency localized in a compact set K ⊂ B(0, R 0 ), we are going to make assumptions on the set K.
We consider sets K that are connected compact subsets of R^{2d} and that do not intersect the crossing set Υ. If one additionally assumes that the trajectories Φ^{t,t_0}_ℓ(z) issued from points z ∈ K intersect Υ at generic crossing points, then, because of their transversality to Υ, a given trajectory Φ^{t,t_0}_ℓ(z) issued from z ∈ K meets Υ only a finite number of times. We then denote by (t_ℓ(t_0, z), ζ_ℓ(t_0, z)) the first crossing point in Υ: (1.26) ζ_ℓ(t_0, z) = Φ^{t_ℓ(t_0,z), t_0}_ℓ(z).
For the ℓ-th mode and the compact K, we define t_{ℓ,max}(t_0, K) = max{t_ℓ(t_0, z), z ∈ K} and t_{ℓ,min}(t_0, K) = min{t_ℓ(t_0, z), z ∈ K}.
We shall assume that K is well-prepared in the sense that all trajectories issued from K for one of the mode have passed through Υ (if they do) before the ones for the other mode start to reach Υ.
Assumption 1.17 (Well-prepared frequency domain). The set K is a connected compact subset of R 2d that does not intersect the crossing set Υ. The trajectories Φ t,t0 (z) issued from points z ∈ K intersect Υ on generic crossing points and one has t 1,max (t 0 , K) < t 2,min (t 0 , K).
A space-time crossing point (t_ℓ(t_0, z), ζ_ℓ(t_0, z)) is characterized by three parameters µ_ℓ ∈ R, (α_ℓ, β_ℓ) ∈ R^{2d} given by
(1.27)   µ_ℓ(t_0, z) = \tfrac{1}{2} (∂_t f + {v, f})( t_ℓ(t_0, z), ζ_ℓ(t_0, z) ),
(1.28)   ( α_ℓ(t_0, z), β_ℓ(t_0, z) ) = J ∇_z f( t_ℓ(t_0, z), ζ_ℓ(t_0, z) ).
The hopping process is affected with a transition coefficient τ 1,2 (t, t 0 , z) that restrict the space time variables (t, z) to trajectories that have met the crossing set Υ (1.29) τ 1,2 (t, t 0 , z) = I t≥t 1 (t0,z) 2iπ µ (t 0 , z) .
Note that when K satisfies Assumption 1.17, then if t < t 1,min (K) and z ∈ K, one has τ 1,2 (t, t 0 , z) = 0. Moreover, if t ∈ t 1,max (K), t 2,min (K) , z → τ 1,2 (t, t 0 , z) is smooth.
One then introduces hopping trajectories by setting
(1.30) Φ t,t0 1,2 (z) = Φ t,t 1 (t0,z) 2 ζ 1 (t 0 , z) , t > t 1 (t 0 , z).
This trajectory Φ t,t0 1,2 (z) t>t (t0,z) is the branch of a generalized trajectory that has hopped from the mode = 1 to the mode = 2 at the crossing point (t 1 (t 0 , z), ζ 1 (t 0 , z)). One could define similarly trajectories hopping from the mode = 2 to = 1 by exchanging the role of the indices 1 and 2.
Along these trajectories, one defines classical quantities as follows:
(a) The function S_{1,2}(t, t_0, z) is the action accumulated along the hopping trajectories, i.e. between times t_0 and t_1(t_0, z) on the mode ℓ = 1 and then on the mode ℓ = 2,
(1.31)   S_{1,2}(t, t_0, z) = S_1(t_1(t_0, z), t_0, z) + S_2(t, t_1(t_0, z), ζ_1(t_0, z)).
(b) The matrix Γ_{1,2}(t, t_0, z) is generated according to (1.18) for the mode ℓ = 2 along the trajectory (Φ_{1,2}(t, t_0, z))_{t > t_1(t_0,z)}, starting at time t_1 = t_1(t_0, z) from the matrix
(1.32)   Γ(t_0, z) = Γ_1(t_1, t_0, z) − \frac{ (β − Γ_1(t_1, t_0, z)α) ⊗ (β − Γ_1(t_1, t_0, z)α) }{ 2µ − α·β + α·Γ_1(t_1, t_0, z)α }.
(c) The vector V_{1,2}(t, t_0, z) is obtained by propagating the vector V_1(t_1, t_0, z) for the mode ℓ = 2 along the trajectory (Φ_{1,2}(t, t_0, z))_{t > t_1(t_0,z)}, starting at time t_1 from the vector π_2(t_1, ζ_1) V_1(t_1, t_0, z) with ζ_1 = ζ_1(t_0, z). One has
(1.33)   V_{1,2}(t, t_0, z) = R_2(t, t_1, ζ_1) π_2(t_1, ζ_1) V_1(t_1, t_0, z).
(d) The matrices F_{1,2}(t, t_0, z) are associated with the flow maps
(1.34)   F_{1,2}(t, t_0, z) = ∂_z Φ^{t,t_0}_{1,2}(z) = \begin{pmatrix} A_{1,2}(t, t_0, z) & B_{1,2}(t, t_0, z) \\ C_{1,2}(t, t_0, z) & D_{1,2}(t, t_0, z) \end{pmatrix}.
(e) The transitional Herman-Kluk prefactors depend on Γ 1,2 (t, t 0 , z) and τ 1,2 (t, t 0 , z) according to
a 1,2 = τ 1,2 det 1/2 (C 1,2 -iD 1,2 -i(A 1,2 -iB 1,2 )) det 1/2 (C 1,2 -iD 1,2 -Γ 1,2 (A 1,2 -iB 1,2 )) (1.35) = τ 1,2 det 1/2 (A 1,2 + D 1,2 + i(C 1,2 -B 1,2 )) det 1/2 (D 1,2 + iC 1,2 -iΓ 1,2 (A 1,2 -iB 1,2 ))
where we have omitted to mark the dependence on (t, t 0 , z) for readability.
With these quantities in hand, we can define the correction terms of order √ε of the thawed and frozen approximations and state our main results.
1.4.4. Thawed Gaussian approximation at order ε. With the notations of the preceding section, one defines the thawed Gaussian correction term for the mode ℓ = 1 as
(1.36)   J^{t,t_0}_{1,2,th}(V f) = (2πε)^{-d} ∫_{z ∈ K} τ_{1,2}(t, t_0, z) e^{(i/ε) S_{1,2}(t,t_0,z)} ⟨g^ε_z, f⟩ V_{1,2}(t, t_0, z) g^{Γ_{1,2}(t,t_0,z), ε}_{Φ^{t,t_0}_{1,2}(z)} dz.
The formula (1.36) defines a family of operators that is bounded in L(L 2 (R d ), Σ k ε (R d )) (see Corollary 2.17). The restriction t > t 1 (t 0 , z) introduces a localization of the domain of integration on one side of the hypersurface {t = t 1 (t 0 , z)}.
The thawed Gaussian correction term for the mode ℓ = 2, denoted by J^{t,t_0}_{2,1,th}, would be defined by exchanging the roles of the indices 1 and 2. These correction terms allow us to improve the accuracy of the thawed Gaussian approximation and to obtain an approximation at order ε.
Theorem 1.18 (Thawed Gaussian approximation with hopping trajectories). Let k ∈ N. Assume H^ε = H_0 + εH_1 satisfies Assumptions 1.3 and 1.4 on the interval I. Then, there exists a constant C_{T,k} > 0 such that for all initial data ψ^ε_0 = V φ^ε_0 that satisfy Assumption 1.9 in a compact K satisfying Assumption 1.17, there exists ε_0 > 0 such that for all t ∈ I and all ε ∈ (0, ε_0],
‖ U^ε_H(t, t_0)ψ^ε_0 − J^{t,t_0}_{1,th}( π_1(t_0) V φ^ε_0 ) − J^{t,t_0}_{2,th}( π_2(t_0) V φ^ε_0 ) − √ε J^{t,t_0}_{1,2,th}( π_1(t_0) V φ^ε_0 ) ‖_{Σ^k_ε} ≤ C_{T,k} ε ( C_0 + ‖φ^ε_0‖_{L²} ).
This result emphasizes that for systems with smooth crossings, a term of order √ ε is generated by the crossing.
1.4.5. Frozen Gaussian approximation at order ε. In order to freeze the covariance Γ_{1,2}(t, t_0, z) of the Gaussians that appear in the formula of the thawed Gaussian correction term (1.36), we use the correction prefactors a_{1,2} and a_{2,1} introduced in (1.35) and define the frozen Gaussian correction term for the mode ℓ = 1 as
(1.37)   J^{t,t_0}_{1,2,fr}(V f) = (2πε)^{-d} ∫_{z ∈ K} a_{1,2}(t, t_0, z) e^{(i/ε) S_{1,2}(t,t_0,z)} ⟨g^ε_z, f⟩ V_{1,2}(t, t_0, z) g^ε_{Φ^{t,t_0}_{1,2}(z)} dz.
Notice that the map f ↦ J^{t,t_0}_{1,2,fr}(V f) defines a Fourier integral operator with a complex phase associated with the canonical transformations Φ^{t,t_0}_{1,2} that define the hopping flow. We first state a point-wise approximation.
Theorem 1.19 (Point-wise time frozen Gaussian approximation with hopping trajectories). Let k ∈ N. Assume H ε = H 0 + εH 1 is of subquadratic growth and satisfies Assumptions 1.3 and 1.4 on the interval I. Then, there exists constants C T,k > 0, such that for all initial data ψ ε 0 = V φ ε 0 that satisfies Assumptions 1.9 in a compact K satisfying Assumption 1.17, there exists ε 0 > 0 such that for all t ∈ I satisfying t < t 1,min (t 0 , K) or t 1,max (t 0 , K) ≤ t < t 2,min (t 0 , K),
we have for ε ∈ (0, ε 0 ],
U ε H (t, t 0 )ψ ε 0 -J t,t0 1,fr π 1 (t 0 ) V φ ε 0 -J t,t0 2,fr π 2 (t 0 ) V φ ε 0 - √ εJ t,t0 1,2,fr π 1 (t 0 ) V φ ε 0 Σ k ε ≤ C T,K ε ( φ ε 0 L 2 + C 0 ) .
The proof of Theorem 1.19 is based on integration by parts and requires differentiability. When t < t_{1,min}(K), the transfer coefficient τ_{1,2}(t, t_0, z) vanishes for all z ∈ K. When t ∈ [t_{1,max}(K), t_{2,min}(K)), the map z ↦ τ_{1,2}(t, t_0, z) is smooth. It is for that reason that we have to restrict the time validity of the approximation.
Averaging in time allows us to overcome this difficulty and to obtain an approximation result that holds almost everywhere on intervals of time such that the classical trajectories issued from K and associated with the level ℓ = 2 have not yet reached Υ.
Theorem 1.20 (Time averaged frozen Gaussians approximation with hopping trajectories). Let k ∈ N. Assume H^ε = H_0 + εH_1 is of subquadratic growth and satisfies Assumptions 1.3 and 1.4 on the interval I. Then, there exists a constant C_{T,k} > 0 such that for all initial data ψ^ε_0 = V φ^ε_0 that satisfy Assumption 1.9 in a compact K satisfying Assumption 1.17, there exists ε_0 > 0 such that for all χ ∈ C^∞_0( (t_0, t_{2,min}(t_0, K)), R ),
‖ ∫_R χ(t) ( U^ε_H(t, t_0)ψ^ε_0 − J^{t,t_0}_{1,fr}( π_1(t_0) V φ^ε_0 ) − J^{t,t_0}_{2,fr}( π_2(t_0) V φ^ε_0 ) − √ε J^{t,t_0}_{1,2,fr}( π_1(t_0) V φ^ε_0 ) ) dt ‖_{Σ^k_ε} ≤ C_{T,k} ε ‖χ‖_{L^∞} ( ‖φ^ε_0‖_{L²} + C_0 ).
To go beyond the time t 2,min (t 0 , K), one has to consider new transitions that would now go from the level = 2 to the level = 1, each time a trajectory for the level = 2 hits Υ. The process can be understood as a random walk: each time a trajectory passes through Υ a new trajectory arises on the other mode with a transition rate of order √ ε.
The averaging in time can be understood as the result of a non-pointwise observation, that takes place over some time interval, that might even be a short one.
For proving Theorems 1.19 and 1.20, we use an accurate analysis of the propagation of individual wave-packets. We prove that a Gaussian wave-packet stays a generalized Gaussian wave packet modulo an error term of order ε^µ, for any µ ∈ N and in any space Σ^k_ε, k ∈ N, both before and after hitting the crossing hypersurface Υ.
The proof consists first in proving the thawed approximations and then in deriving the frozen approximation from the thawed one. The arguments developed in Section 3.3 will show that one can "freeze" the Gaussian on any state g^{Γ_0,ε}_z. The choice of some Γ_0 instead of iI will imply a slight modification of the definition of the Herman-Kluk prefactors a_ℓ and of the transitional ones a_{ℓ,ℓ'}, ℓ, ℓ' ∈ {1, 2}.
Wave packets propagation at any order through generic smooth crossings
Our results crucially rely on the analysis of the propagation of wave-packets (including the ones with Gaussian amplitude functions) through smooth crossings. We consider a Hamiltonian H ε = H 0 + εH 1 that satisfies Assumptions 1.4 on the time interval I and presents a smooth crossing on a set Υ. We fix a point z 0 = (q 0 , p 0 ) / ∈ Υ and times t 0 , T such that (1.38) t 0 < t 1 (t 0 , z 0 ) < t 0 + T < t 2 (t 0 , z 0 ).
We assume that the point (t_1(t_0, z_0), Φ^{t_1(t_0,z_0), t_0}_1(z_0)) is a non-degenerate and generic crossing point of Υ in the sense of Definition 1.2.
Theorem 1.21. Consider an initial datum of the form ψ^ε_0 = V_0 WP^ε_{z_0}(f_0) with f_0 ∈ S(R^d) and V_0 ∈ C^m, and let ψ^ε(t) be the solution of (1.1) with initial data ψ^ε_0. There exist κ_0 ∈ N and three families of differential operators (B_{ℓ,j}(t))_{j∈N}, ℓ ∈ {1, 2}, and (B_{1→2,j}(t))_{j∈N} such that, setting for δ > 0 and
t ∈ I_δ = [t_0, t_1(t_0, z_0) − δ] ∪ [t_1(t_0, z_0) + δ, t_0 + T],
ψ^{ε,N}_1(t) = e^{(i/ε) S_1(t,t_0,z_0)} WP^ε_{z_1(t)}( f^ε_1(t) ),
ψ^{ε,N}_2(t) = e^{(i/ε) S_2(t,t_0,z_0)} WP^ε_{z_2(t)}( f^ε_2(t) ) + 1_{t > t_1(t_0,z_0)} e^{(i/ε) S_{1,2}(t,t_0,z_0)} WP^ε_{Φ_{1,2}(t,t_0,z_0)}( f^ε_{1→2}(t) ),
with
f^ε_ℓ(t) = R_ℓ(t, t_0) M[F_ℓ(t, t_0)] ∑_{0 ≤ j ≤ N} ε^{j/2} B_{ℓ,j}(t) f_0,   ℓ ∈ {1, 2},
f^ε_{1→2}(t) = R_2(t, t_1) M[F_2(t, t_1)] ∑_{1 ≤ j ≤ N} ε^{j/2} B_{1→2,j}(t) f_0,
one has the following property: for all k, N, M ∈ N, there exists C_{M,N,k} > 0 such that for all t ∈ I_δ,
‖ ψ^ε(t) − ( ψ^{ε,N}_1(t) + ψ^{ε,N}_2(t) ) ‖_{Σ^k_ε} ≤ C_{M,N,k} ( (√ε/δ)^{N+1} δ^{-κ_0} + δ^M ).
Moreover, the operators B ,j (t) are differential operators of degree ≤ 3j with time dependent smooth vector-valued coefficients and satisfy for ∈ {1, 2},
B ,0 (t) = π (t 0 , z 0 ) V 0 and B ,j (t 0 ) = 0 ∀j ≥ 1, (1.40) B ,1 (t) = |α|=3 1 α! 1 i t t0 ∂ α z h (s, z (s)) op w 1 [(F (s, t 0 , z 0 )z) α ] ds (1.41) + 1 i t t0 ∇ z H adia, 1 (s, z s ) • op 1 (F (t 0 , s)z)ds π (t 0 , z 0 ) V 0 , B 1→2 (t) = W 1 (t 1 , ζ 1 ) * T 1→2 M[F 1 (t 1 , t 0 )]π 1 (t 0 , z 0 ) V 0 (1.42)
where the scalar transfer operator T 1→2 is defined by
(1.43) T 1→2 ϕ(y) = +∞ -∞ e i(µ -α •β /2)s 2 e isβ •y ϕ(y -sα )ds, ∀ϕ ∈ S(R d )
and the transfer matrix W 1 (t , ζ ) is given by
(1.44) W 1 = π 1 H 1 π 2 + iπ 1 ∂ t π 1 + 1 2 {h 1 + h 2 , π 1 } π 2 .
In other words, Theorem 1.21 says that if ψ^ε_0 is a polarized wave packet, then, for t ∈ I with t ≠ t_1(t_0, z_0), the solution ψ^ε(t) of (1.1) is asymptotic at any order to an asymptotic sum of wave packets. Indeed, if n ∈ N is fixed, choosing δ = ε^α and M, N large enough will give an approximation in O(ε^n).
The polarization of the wave packets ψ ε,N (t) is first described by the vectors B ,j (t 0 ) that evolves through R (t, t 0 ) M[F (t, t 0 )]. Such evolution preserves the eigenmode. Secondly, in ψ ε,N 2 (t), one sees a √ ε contribution that comes from a transfer from the mode 1 to the mode 2. The change of polarization is performed by the matrix W * 1 which maps Ran(π 1 ) to Ran(π 2 ). Indeed, one has
W * 1 = π 2 H 1 π 1 -iπ 2 ∂ t π 1 + 1 2 {h 1 + h 2 , π 1 } π 1 = π 2 H 1 π 1 + iπ 2 ∂ t π 2 + 1 2 {h 1 + h 2 , π 2 } π 1 .
The latter equation shows that the result is symmetric with respect to the modes and one can exchange their roles.
Theorem 1.21 was proved in [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF] up to order o(ε). The notations are compatible. However, in [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF], a coefficient γ appears in the definition of the transfer operator. It corresponds to a normalization process that we avoid here by using the projector π 2 (t 1 , ζ 1 ) instead of taking the scalar product with a normalized eigenvector.
For proving the initial value representations of U ε H of Theorems 1.15 to 1.20, we shall use two consequences of Theorem 1.21:
(i) the wave packet structure up to any order in ε of U ε H V 0 WP ε z0 (g iI ), (ii) the exact value of the action of B 1,0 , B 2,0 , and of B 1→2 (t) when f 0 is the Gaussian g iI . We recall that the action of the operators R (t, s) M[F (t, s)] on focalized Gaussians preserves the Gaussian structure and the focalization: in view of (1.17) and (1.23),
R (t, s) M[F (t, s, z)]π (s, z) V 0 g iI = V (t, s, z)g Γ (t,s,z)
where V ∈ Ran(π (t, Φ t,s (z))) and the matrix Γ (t, s, z) is given by (1.18) with Γ 0 = iI. Besides, regarding the transfer term, with the notations of Corollary 3.9 of [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF] and those of (1.27), (1.28), (1.32) and (1.33)
B 1→2 (t 1 )g iI = 2iπ µ (t 0 , z 0 ) V 1,2 (t 1 , t 0 , z 0 )g Γ (t 1 ,t0,z0) .
These elements shed light on the construction of the operators J^{t,t_0}_{ℓ,fr/th} and J^{t,t_0}_{ℓ,ℓ',fr/th} for indices ℓ, ℓ' ∈ {1, 2}, ℓ ≠ ℓ'.
Detailed overview
The main results of this paper are Theorems 1.18, 1.19, 1.20, and 1.21. We prove Theorems 1.18, 1.19 and 1.20 in Chapters 2 and 3. These proofs rely on Theorem 1.21 that is proved later. Chapter 2 starts with Section 2.1 that recalls elementary facts about the Bargmann transform. Then, in Section 2.2, we analyze the notion of frequency localization that we have first introduced in this text and that is crucial in the setting of frozen and thawed initial value representations. Indeed, these approximations rely on a class of operators that is studied in Section 2.3. Endowed with these results, we are able to prove the approximations of Theorems 1.18, 1.19, 1.20 in Chapter 3. We describe the general proof strategy in Section 3.1 and develop the proof of Theorem 1.18 in Section 3.2. We explain in Section 3.3 how to pass from a thawed to a frozen approximation, and thus obtain Theorems 1.19 and 1.20.
Chapters 4 and 5 are devoted to the proof of Theorem 1.21. These two chapters are independent of the preceding ones, and one can start reading them, skipping Chapters 2 and 3. In Chapter 4, we construct the different diagonalisations of the Hamiltonian H ε that we are going to use. In the crossing region, we use a rough diagonalisation (see Section 4.2), and outside this region we use super-adiabatic projectors as proposed in [START_REF] Bily | Propagation d'états cohérents et applications[END_REF][START_REF] Martinez | Twisted pseudodifferential calculus and application to the quantum evolution of molecules[END_REF][START_REF] Teufel | Adiabatic perturbation theory in quantum dynamics Lecture Notes in Mathematics 1821[END_REF] (see Section 4.3). These constructions rely on a symbolic calculus that we present in Section 4.1. The analysis of the propagation of wave-packets is then performed in Chapter 5.
The Appendices are devoted to the proof of some technical points used in the proofs of Chapters 4 and 5.
Part 1
Initial value representations
Frequency localized families
In this chapter, we study frequency localized families as introduced in the Introduction. We shall use the more precise following definition.
Definition 2.1 (Frequency localized functions of order β). Let β ≥ 0 and let (φ^ε)_{ε>0} be a family of functions of L²(R^d). The family (φ^ε)_{ε>0} is frequency localized of order β if the family is bounded in L²(R^d) and if there exist R_β, C_β, ε_β > 0 and N_β > d + 1/2 such that for all ε ∈ (0, ε_β],
(2πε)^{-d/2} |⟨g^ε_z, φ^ε⟩| ≤ C_β ε^β ⟨z⟩^{-N_β}   for all z ∈ R^{2d} with |z| > R_β.
One then says that (φ^ε)_{ε>0} is frequency localized of order β on the ball B(0, R_β). Above, ⟨g, f⟩ = ∫_{R^d} f(x) \overline{g(x)} dx denotes the inner product of L²(R^d).
In this chapter, we first recall some facts about the Bargmann transform, then we study frequency localized families and, finally, the class of operators built by use of Bargmann transform and to which the thawed/frozen Gaussian approximations belong.
The Bargmann transform
The thawed/frozen approximations that we aim at studying are constructed thanks to the Bargmann transform. They belong to a class of operators obtained by integrating the Bargmann transform against adapted families.
Recall that the Bargmann transform is the map B : L²(R^d) → L²(R^{2d}), f ↦ B[f], defined by
(2.1)   B[f](z) = (2πε)^{-d/2} ⟨g^ε_z, f⟩,   z ∈ R^{2d}.
The Bargmann transform is an isometry and one has
R 2d |B[f ](z)| 2 dz = f 2 L 2 .
Indeed, the Gaussian frame identity writes
(2.2) f (x) = (2πε) -d R 2d g ε z , f g ε z (x)dz = (2πε) -d 2 R 2d B[f ]g ε z (x)dz,
where the function
g ε z is introduced in (1.7), g ε z = WP ε z (g iI ) with the notation (1.16). Equation (2.2) is equivalent to f (x) = (2πε) -d 2 R 2d B[f ](z)g ε z (x)dz, ∀f ∈ L 2 (R d ).
More generally, the Bargmann transform characterizes the Σ k ε spaces according to the next result that we prove in Section 2.2.6 below.
Lemma 2.2. Let k ∈ N. There exists a constant c_k such that for all f ∈ S(R^d),
‖f‖_{Σ^k_ε} ≤ c_k ‖ ⟨z⟩^k B[f] ‖_{L²(R^{2d})}.
The condition of spectral localization introduced in Definition 2.1 expresses in terms of the Bargmann transform: the family (φ^ε)_{ε>0} is frequency localized at the scale β ≥ 0 if there exist constants R_β, C_β, ε_β > 0 and N_β ∈ N, N_β > 2d, such that for |z| > R_β and ε ∈ (0, ε_β],
|B[φ^ε](z)| ≤ C_β ε^β ⟨z⟩^{-N_β}.
In other words, the Bargmann transform of (φ ε ) ε>0 has polynomial decay at infinity and is controlled by
ε β outside a ball B(0, R β ).
The operators in which we are interested are built on the Bargmann transform. Consider a smooth family of the form
(z → θ ε z ) ∈ C ∞ (R 2d z , L 2 (R d )). We then denote by J [θ ε z ] the operator acting on φ ∈ L 2 (R d ) according to (2.3) J [θ ε z ](φ)(x) = (2πε) -d 2 R 2d B[φ](z)θ ε z (x)dz = (2πε) -d R 2d g ε z , φ θ ε z (x)dz, x ∈ R d .
The thawed/frozen operators of equations (1.22), (1.25), (1.36) and (1.37) are of that form. The Gaussian frame identity (2.2) also writes with these notations
J [g ε z ] = I L 2 (R d ) .
Note that the formal adjoint of
J [θ ε z ] is (2.4) J [θ ε z ] * : φ → (2πε) -d R 2d θ ε z , φ g ε z dz.
In the first Section 2.2, we study the properties of frequency localized families, which is the type of data we consider in our main results. Then, in Section 2.3, we analyze some properties of the operators of the form (2.3). Finally, we prove Theorems 1.15 and 1.18 in Section 3.2, and Theorems 1.19 and 1.20 in Section 3.3.
Along the next sections of this chapter, we shall use properties of wave packets that we sum-up here.
Lemma 2.3. if f, g ∈ S(R d ) and z, z ∈ R 2d , then (2.5) WP ε z (f ), WP ε z (g) = e i ε p •(q-q ) W [f, g] z -z √ ε where the function W [f, g] is the Schwartz function on R 2d defined by W [f, g](ζ) = R d f (x)g(x -q)e ip•x dx, ζ = (q, p).
Moreover, for all n ∈ N, there exists a constant C n > 0 such that
(2.6) ∀ζ ∈ R 2d , ζ n |W [f, g](ζ)| ≤ C n 0≤n ≤n f Σ n g Σ n-n .
Proof. The formula for WP ε z (f ), WP ε z (g) comes from a simple computation. Then, for α, γ ∈ N d and z = (q, p) ∈ R 2d , we observe
|q γ p α W [f, g](z)| = q γ R d D α x (f (x)g(x -q))e ix•p dx ≤ q |γ| R d D α x (f (x)g(x -q)) dx ≤ 2 |γ| 2 R d x |γ| x -q |γ| D α x (f (x)g(x -q)) dx
where we have used the Peetre inequality
(2.7)   ∀t, t' ∈ R^d, ∀ℓ ∈ Z,   ⟨t⟩^ℓ ⟨t'⟩^{-ℓ} ≤ 2^{|ℓ|/2} ⟨t − t'⟩^{|ℓ|}.
The conclusion then follows.
Frequency localized families
We investigate here the properties of families that are frequency localized in the sense of Definition 2.1 and we use the notation (2.1).
We point out that Definition 2.1 is enough to treat vector-valued families by saying that a vector-valued family is frequency localized at the scale β ≥ 0 if and only if all its coordinates are frequency localized at the scale β. For this reason, we focus below on scalar-valued frequency localized families.
First properties of frequency localized functions.
It is interesting to investigate the properties of this notion. The first properties are straightforward.
Proposition 2.4. The set of frequency localized function is a subspace of L 2 (R d ). Moreover, we have the following properties:
(1) If (φ ε 1 ) ε>0 and (φ ε 2 ) ε>0 are two frequency localized families at the scales β 1 and β 2 respectively, then for all a, b ∈ C, the family (aφ ε 1 + bφ ε 2 ) ε>0 is frequency localized at the scale min(β 1 , β 2 ).
(2) If (φ ε ) ε>0 is frequency localized at the scale β ≥ 0, then it is also frequency localized at the scale β for all β ∈ [0, β].
This notion is microlocal. Indeed, defining the ε-Fourier transform by
F^ε f(ξ) = (2πε)^{-d/2} ∫_{R^d} e^{-(i/ε) ξ·x} f(x) dx = (2πε)^{-d/2} \hat f(ξ/ε),   f ∈ S(R^d),
we have the following.
Proposition 2.5. Let (φ ε ) ε>0 be a bounded family in L 2 (R d ). Then, (φ ε ) ε>0 is frequency localized family at the scale β ≥ 0 if and only if (F ε φ ε ) ε>0 is frequency localized at the scale β ≥ 0.
Proof. This comes from the observation that for all z ∈ R 2d ,
| g ε z , φ ε | = | g ε Jz , F ε φ ε |
where J is the matrix defined in (1.10). Thus it is equivalent to state the fact of being frequency localized for a family or for the family of its ε-Fourier transform.
Frequency localized families and Bargmann transform.
The Gaussian frame identity (2.2) allows to decompose a function of L²(R^d) into a (continuous) sum of Gaussians. After discretization of the integral, this sum may be turned into a finite one, which opens the way to approximation strategies (see [START_REF] Lasser | Discretising the Herman-Kluk Propagator[END_REF] where this observation is used for numerical purposes). It is thus important to identify assumptions that allow to compactify the set of integration in z. The notion of frequency localized families plays this role according to the next result.
Lemma 2.6. Let (φ ε ) ε>0 be a frequency localized family at the scale β ≥ 0. Let R β , C β and N β be the constants associated to
(φ^ε)_{ε>0} according to Definition 2.1. Let k ∈ N with N_β > d + k. Then, for all χ ∈ L^∞(R) supported in [0, 2] and equal to 1 on [0, 1], there exists C > 0 such that for R > R_β,
‖ φ^ε − J[ g^ε_z χ(|z|/R) ](φ^ε) ‖_{Σ^k_ε(R^d)} ≤ C C_β ε^β ( ∫_{|z|>R} ⟨z⟩^{-2(N_β − k)} dz )^{1/2}.
In the following, we will use the notation
(2.8) φ ε R,< := J g ε z χ |z| R (φ ε ) = B -1 I |z|<R B[φ ε ](z) .
Remark 2.7. Lemma 2.6 can be used in different manners.
(1) If β > 0, then J[g^ε_z χ(|z|/R)](φ^ε) approximates φ^ε as ε goes to 0 in any space Σ^k_ε(R^d) with k ∈ N such that N_β > k + d + 1/2, uniformly with respect to R > R_β.
(2) If β ≥ 0 (which includes β = 0), then the same approximation holds by letting R go to +∞, and it is uniform with respect to ε. In particular, when β = 0 we have
lim sup_{ε→0} ‖ φ^ε − J[ g^ε_z χ(|z|/R) ](φ^ε) ‖_{Σ^k_ε(R^d)} ≤ C R^{-(N_β − k − d − 1/2)}.
Proof. We set
r ε (x) = (2πε) -d |z|≥R g ε z , ϕ ε g ε z (x)dz and consider k ∈ N. For R > R β , α, γ ∈ N d with |α| + |γ| = k, we have x α (εD x ) γ r ε 2 L 2 (R d ) ≤ (2πε) -2d R d |z|>R |z |>R g ε z , ϕ ε g ε z , ϕ ε g α,γ ε,z (x) g α,γ ε,z (x) dx dz dz .
where (2.9)
g α,γ ε,z = x α (εD x ) γ g ε z = WP ε z (q + √ εy) α (p + √ εD y ) γ g iI ) , z = (q, p).
We will use that for all n ∈ N, there exists c n,k > 0 such that for all z ∈ R 2d
(2.10)
g α,γ ε,z Σ n ≤ c n,k z k . By (2.5), we obtain x α (εD x ) γ r ε 2 L 2 (R d ) ≤ C 2 β ε 2β (2πε) -d |z|>R |z |>R z -N β z -N β W [g α,γ ε,z , g α,γ ε,z ] z -z √ ε dz dz .
Besides, by (2.6), there exists a constant C n,k such that
W [g α,γ ε,z , g α,γ ε,z ](ζ) ≤ C n,k ζ -n z k z k .
We deduce the existence of c > 0 such that
x α (εD x ) γ r ε 2 L 2 (R d ) ≤ c C 2 β ε 2β ε -d |z|>R |z |>R z -N β +k z -N β +k z -z √ ε -n dzdz ≤ c C 2 β ε 2β |z|>R z -N β +k z + √ εζ -N β +k ζ -n dzdζ. Since -N β + k ≤ 0, Peetre inequality gives z + √ εζ -N β +k ≤ 2 N β -k 2 √ εζ N β -k z -N β +k ≤ 2 N β -k 2 ζ N β -k z -N β +k ,
by restricting ourselves to ε ≤ 1. Therefore, there exists a constant c > 0 such that
x α (εD x ) γ r ε 2 L 2 (R d ) ≤ c C 2 β ε 2β R 2d ζ -n+N β -k dζ |z|>R z -2(N β -k) dz
and we conclude the proof by choosing n = N β + k + 2d + 1.
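The truncated reconstruction (2.8) lends itself to a simple numerical illustration. The sketch below (Python, one space dimension) uses the standard coherent-state convention g^ε_z(x) = (πε)^{-1/4} exp(i p(x-q)/ε - (x-q)²/(2ε)) for z = (q, p); this convention, the WKB-type test function and the grid and truncation parameters are illustrative assumptions and are not taken from the text.

    import numpy as np

    # semiclassical parameter and spatial grid
    eps = 0.05
    x = np.linspace(-8.0, 8.0, 1200)
    dx = x[1] - x[0]

    # a WKB-type test state phi(x) = a(x) exp(i S(x)/eps), cf. Lemma 2.9 (2)
    phi = np.exp(-x**2) * np.exp(1j * 0.5 * x**2 / eps)

    def g(q, p):
        # coherent state g_z^eps centred at z = (q, p)
        return (np.pi * eps) ** (-0.25) * np.exp(1j * p * (x - q) / eps - (x - q) ** 2 / (2 * eps))

    # phase-space grid and truncation radius R, cf. (2.8)
    R = 4.0
    qs = np.linspace(-R, R, 80)
    ps = np.linspace(-R, R, 80)
    dq, dp = qs[1] - qs[0], ps[1] - ps[0]

    recon = np.zeros_like(phi)
    for q in qs:
        for p in ps:
            if q * q + p * p > R * R:
                continue                      # keep only |z| < R
            gz = g(q, p)
            coeff = np.vdot(gz, phi) * dx     # <g_z^eps, phi>
            recon += coeff * gz * dq * dp / (2 * np.pi * eps)

    err = np.sqrt(np.sum(np.abs(phi - recon) ** 2) * dx)
    print("relative L2 error of phi_{R,<}:", err / np.sqrt(np.sum(np.abs(phi) ** 2) * dx))

Letting R grow, or refining the phase-space grid, makes the reported error decrease, in agreement with Lemma 2.6.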
Examples. A first fundamental example consists in bounded families in
L 2 (R d ) that are compactly supported. Lemma 2.8. Let (φ ε 0 ) ε>0 be a bounded family in L 2 (R d ) such that φ ε 0 = I |x|≤M φ ε 0 for some M > 0. Then, (φ ε 0 ) ε>0 is frequency localised at any scale β ≥ 0.
Proof. There exists a constant C > 0 such that for all z = (q, p) ∈ R 2d
|B[φ ε 0 ](z)| ≤ C g ε z I |x|≤M L 2 .
Besides, one can find a smooth real-valued function χ compactly supported in {|x| ≤ 2} such that
g ε z I |x|≤M 2 L 2 = (πε) -d 2 R 2d χ( x M )χ( y M )g ε z (x)g ε z (y)dxdy.
We set
L = (|p| 2 + |q -x| 2 ) -1 (-ip + q -x) • ∇ x and we observe that εLg ε z (x) = g ε z (x) for all x ∈ R d . Besides, if |z| > 8M , either |q| > 4M and if |x| ≤ 2M ≤ |q| 2 , then |x -q| > |q| 2 > 4M , or |q| ≤ 4M and |p| > 2M . In any case, (|p| 2 + |q -x| 2 ) -1 > 2|z| -1 and for all N ∈ N, there exists a constant c N,M such that (L * ) N χ( x M ) ≤ c N,M |z| -N ,
Using the vector field L, we perform N integration by parts, and we obtain the existence of a constant C > 0 such that
g ε z I |x|≤M 2 L 2 ≤ C ε N -d 2 |z| -N R 2d I |x|,|y|≤M dxdy, whence the boundedness of ε -β z N β g ε z I |x|≤M L 2 for all β ≥ 0 and N β ∈ N.
An important consequence of this result is related to the notions of compactness and of ε-oscillation that are often considered in semi-classical analysis. We recall that the uniformly bounded family (φ^ε)_{ε>0} is said to be compact if lim sup
ε→0 |x|>R |φ ε (x)| 2 dx -→ R→+∞ 0.
It is said ε-oscillating when lim sup
ε→0 |ξ|> R ε | φ ε (ξ)| 2 dξ -→ R→+∞ 0,
or, equivalently, when its ε-Fourier transform (F_ε φ^ε)_{ε>0} is compact. Therefore, a compact family or an ε-oscillating family can be approximated by frequency localized families. Note, however, that the notion of compactness or ε-oscillation is weaker than being frequency localized. For example, the family
(2.11) u ε (x) = | ln ε| d 2 a(x| ln ε|), x ∈ R d ,
is a compact and ε-oscillating family which has no scale of frequency localization.
Let us now analyze the examples given in the Introduction.
Lemma 2.9.
(1) Let u ∈ S(R d ) and z 0 = (q 0 , p 0 ) ∈ R 2d . Then, the family (WP ε z0 (u)) ε>0 is frequency localized at the scale β for any
β ≥ 0. (2) Let a ∈ C ∞ 0 (R d ) and S ∈ C ∞ (R d ).
Then, the family (e i ε S(x) a) ε>0 is frequency localized at the scale β for any β ≥ 0.
Proof. 1-By (2.5), we have for z ∈ R 2d B[WP ε z0 (u)](z) = (2πε) -d 2 WP ε z (g iI ), WP ε z0 (u) = (2πε) -d 2 e i ε p0•(q-q0) W [g iI , u] z 0 -z √ ε .
Let N ∈ N, the estimate (2.6) implies the existence of a constant C N such that
|B[WP ε z0 (u)](z)| ≤ C N ε -d 2 z 0 -z √ ε -N . Choosing |z| > max(2|z 0 |, 1), we have 2|z 0 -z| ≥ 2(|z| -|z 0 |) ≥ |z| and we deduce z 0 -z √ ε -N = ε ε + |z -z 0 | 2 N 2 ≤ 4ε 4ε + |z| 2 N 2 ≤ (4ε) N 2 |z| -N ,
whence the existence of a constant c N > 0 such that for all z ∈ R 2d and N ∈ N,
|B[WP ε z0 (u)](z)| ≤ c N ε N -d-1 2 z -N .
2-One has
B[e i ε S(x) a](z) = (2π) -d/2 π -d/4 ε -d/4 e i ε S(q) R d a(q + √ εy) × Exp - i √ ε p • y + i ε S(q + y √ ε) e -|y| 2 2 dy.
This term has a very specific structure involving the symbol y → a(y), a rapidly decaying function
y → e -|y| 2 2
and an oscillating phase
y → Λ ε (y) := - 1 √ ε p • y + 1 ε S(q + y √ ε).
We are going to show that the terms defined for j ∈ {1, • • • , d} by
A ε j := q j R d a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy and B ε j := p j R d a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy,
have the same structure. It will thus be enough to consider only one of these terms and to prove that it is controlled by a power of ε; this will imply the adequate control on |B[e^{i S(x)/ε} a](z)|. Let us first transform A^ε_j and B^ε_j. Indeed, we have
A ε j = R d (q j + √ εy j )a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy - √ ε R d a(q + √ εy) y j e -|y| 2 2
e iΛ ε (y) dy.
The first integral of the right hand side has the same structure with the symbol y → a(y) and the second one with the rapidly decaying function y → y j e -|y| 2 2 . Besides, observing
p j e iΛ ε (y) = -i √ ε∂ yj (e iΛ ε (y) ) -∂ yj S(q + √ εy)e iΛ ε (y) ,
we obtain with an integration by parts
B ε j = - R d ∂ yj S(q + √ εy)a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy + i √ ε R d ∂ yj a(q + √ εy)e -|y| 2 2
e iΛ ε (y) dy.
Here again the right hand side has the same structure with different symbols and rapidly decaying term.
We now focus in proving that one typical term (2.12)
L ε := R d a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy is of order ε N for all N ∈ N. The decay of y → e -|y| 2 2
allows to reduce the set of integration. Indeed, we have
|y|>ε -1 4 a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy ≤ e - √ ε 4 a L ∞ R d e -|y| 2 4 dy.
Therefore, there exists a constant c > 0 such that
|L ε | ≤ c |y|≤ε -1 4 a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy + e -1 4 √ ε
.
We now use the oscillations of the phase for treating the integral in |y| ≤ ε -1 4 . We observe that there exists
R 0 > 0 such that if |z| > R 0 , then z / ∈ {|p -∇S(q)| ≤ 1, dist(q, supp(a)) ≤ 1}.
We choose |z| > R 0 and we have the following alternative:
either dist(q, supp(a)) > 1, or (dist(q, supp(a)) ≤ 1 and |p -∇S(q)| > 1) .
If dist(q, supp(a)) > 1, there exists ε 0 > 0 such that if ε ∈ (0, ε 0 ] and |y| ≤ ε 1/4 , then q + √ εy / ∈ supp(a). The integral thus is zero and we are reduced to the case where dist(q, supp(a)) ≤ 1 and |p -∇S(q)| > 1. One can find ε 1 > 0 such that for ε ∈ (0, ε 1 ] and |y| ≤ ε 1 4 ,
∇S(q + y √ ε) -p > 1 2 .
We then consider the differential operator
L ε = √ ε ∇S(q + y √ ε) -p |∇S(q + y √ ε) -p| 2 • ∇ y
and we write
|y|≤ε -1 4 a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy = |y|≤ε -1 4 a(q + √ εy)e -|y| 2 2 (L ε ) N e iΛ ε (y) dy = |y|≤ε -1 4 (L ε ) N * a(q + √ εy)|∇S(q + y √ ε) -p| -2N e -|y| 2 2
e iΛ ε (y) dy.
There exists a constant C > 0 independent of z such that for all ε ∈ (0, ε 1 ] and |y| ≤ ε
1 4 , (L ε ) N * a(q + √ εy)|∇S(q + y √ ε) -p| -2N e -|y| 2 2 ≤ Cε N 2 e -|y| 2 2 .
We deduce
|y|≤ε -1 4 a(q + √ εy)e -|y| 2 2 e iΛ ε (y) dy ≤ Cε N 2
and (2.12) writes
|L ε | ≤ c ε N 2 + e -1 4 √ ε
for some constant c > 0. This terminates the proof.
2.2.4. Characterization of frequency localized families. The characterization of frequency localized families can be done by using other families of wave packets than Gaussian ones and the cores z can be distributed in different manners.
Proposition 2.10. The family (φ ε ) ε>0 is frequency localized at the scale β ≥ 0 if and only if for all
C 1 -diffeomorphism Φ satisfying ∃a, b > 0, ∀z ∈ R 2d , a|z| ≤ Φ(z) ≤ b|z|, for all θ ∈ S(R d ), there exists C β , N β , R β and ε β such that for all ε ∈ (0, ε β ] and for |z| > R β (2πε) -d 2 | WP ε Φ(z) (θ), φ ε | ≤ C β ε β z -N β max 1, 1 a N β θ Σ 2d+1+N β .
Moreover, for all family
(λ ε ) ε>0 bounded in L ∞ (R 2d ), ε ∈ (0, ε β ] and R > R β , J e i ε λ ε (z) WP ε Φ(z) (θ)I |z|>R (φ ε ) Σ k ε ≤ C C β ε β |z|>R z -N β dz .
Proof. We only have to prove that if (φ^ε)_{ε>0} is frequency localized at the scale β ≥ 0, then the property holds for some given profile θ and diffeomorphism Φ. Then, the equivalence will follow. We consider the constants C_β, N_β, R_β and ε_β given by Definition 2.1 and we take ε ∈ (0, ε_β]. We observe
(2πε) -d 2 WP ε Φ(z) (θ), φ ε = (2πε) -3d 2 R d WP ε Φ(z) (θ), g ε z g ε z , φ ε dz = I 1 + I 2 with I 1 = (2πε) -3d 2 |z |>R β WP ε Φ(z) (θ), g ε z g ε z , φ ε dz .
Let us study I 1 . Using (2.5), (2.6) and that (φ ε ) ε>0 is frequency localized at the scale β ≥ 0, we deduce the existence of c β , N β > 0 such that we have
|I 1 | ≤ c β ε β (2πε) -d |z |>R β W [θ, g iI ] z -Φ(z) √ ε z -N β dz ≤ c β θ Σ n ε β (2πε) -d R d z -Φ(z) √ ε -n z -N β dz ≤ c β θ Σ n ε β R d ζ -n Φ(z) + √ εζ -N β dζ,
where the constant c β may have changed between two successive lines. We observe that Peetre's inequality (2.7) yields
Φ(z) + √ εζ -N β ≤ 2 N β 2 Φ(z) -N β √ εζ N β ≤ 2 N β 2 Φ(z) -N β ζ N β , whence by choosing n > 2d + 1 + N β , |I 1 | ≤ c β θ Σ 2d+1+N β ε β Φ(z) -N β R d ζ -(2d+1) dζ,
for some new constant c β > 0. We conclude by observing that
Φ(z) -N β ≤ max 1, 1 a N β z -N β ,
whence, by modifying c β ,
|I 1 | ≤ c β θ Σ 2d+1+N β ε β max 1, 1 a N β z -N β .
We now study I 2 . Using (2.5), (2.6), we write for n ∈ N
|I 2 | ≤ φ ε L 2 (2πε) -3d 2 |z |≤R β z -Φ(z) √ ε -n dz . We observe that if |z| > 2aR β , then for |z | ≤ R β ≤ 1 2a |z|, we have |z -Φ(z)| ≥ |Φ(z)| -|z | ≥ 1 2a |z|. Therefore z -Φ(z) √ ε -n = ε ε + |z -Φ(z)| 2 n 2 ≤ (2a) n ε n 2 |z| -n .
Using that (φ ε ) ε>0 is a bounded family in L 2 , we obtain that there exists a constant c such that for |z| > 2aR β and any n ∈ N,
|I 2 | ≤ c ε n-3d 2 |z| -n .
The proof of the last property follows the line of the proof of Lemma 2.6 combined with adapted change of variables. This terminates the proof.
2.2.5. Frequency localized families and semi-classical pseudodifferential calculus. With these elements in hands, we can prove some properties that frequency localized families enjoy with respect to pseudodifferential calculus. Proposition 2.11. Let (φ ε ) ε>0 be a frequency localized family at the scale β ≥ 0.
(1) For all semi-classical symbol a ∈ C ∞ c (R 2d ), the family a φ ε ε>0 is frequency localized at the scale β ≥ 0.
(2) For all subquadratic Hamiltonian h ∈ C ∞ (R × R 2d ), for all t, t 0 ∈ R, the vector-valued family U ε h (t, t 0 )φ ε ε>0 is frequency localized at the scale β ≥ 0.
Proof. (1) We can assume without loss of generality that a is real-valued. We write
B[ aφ ε ] = (2πε) -d/2 ag ε z , φ ε . Since g ε
z is a wave packet, we have
ag ε z = a WP ε z (g iI ) = WP ε z (g ε a ), g ε a = a(z + √ ε•)g iI .
The function g ε a is of Schwartz class on R d and its Schwartz semi-norms are uniformly bounded in ε because a is compactly supported. We deduce from Proposition 2.10,
|B[ aφ ε ]| ≤ C β ε β z -N β g ε a 2d+1+N β , which concludes the proof. (2) We write B [U ε h (t, t 0 )φ ε ] = (2πε) -d/2 U ε h (t, t 0 )g ε z , φ ε . Since g ε z is a wave packet, we have (2.13) U ε h (t, t 0 )g ε z = e i ε S(-t,z) WP ε Φ -t,0 h (z) (g Γ(-t,z) + √ ε r ε z (t))
with the notations of the introduction. Besides, for all n ∈ N, there exists a constant C = C(n, t) such that r ε z (t) Σ n ≤ C(n, t). We deduce from Proposition 2.10,
|B [U ε h (t, t 0 )φ ε ]| ≤ C β ε β z -N β g Γ(t,z) + √ εr ε z (t) 2d+1+N β , which concludes the proof.
Remark 2.12.
(1) The proof of Proposition 2.11 (1) extends to smooth functions a with polynomial growth
∃N 0 ∈ N, ∀γ ∈ N d , ∀z ∈ R 2d , |∂ γ a(z)| ≤ z N0-|γ|
provided the integer N β associated with the frequency localisation at the scale β ≥ 0 of the family (φ ε 0 ) ε>0 verifies N β > 2d + 1 + N 0 .
(2) The proof of Proposition 2.11 (2) also extends to adiabatic smooth matrix-valued Hamiltonian H that are subquadratic according to Definition 1.1. However, it is not clear whether the same result holds for Hamiltonians with crossings, either they are smooth as in this article or conical as in the Appendix of [START_REF] Fermanian Kammerer | Propagation of Coherent States through Conical Intersections[END_REF]. Indeed, even though one knows that
U ε H (t, t 0 )(g ε z V
) is asymptotic to a wave packet, it is not clear that the remainder of the approximation has a wave-packet structure as in (2.13).
2.2.6. Frequency localized families and Σ^k_ε-regularity. The size of N_β in Definition 2.1 gives information about the regularity of the family.
Lemma 2.13. Let (φ^ε)_{ε>0} be a frequency localized family at the scale β ≥ 0 and let C_β, N_β, ε_β be the constants associated with it by Definition 2.1. Assume that k ∈ N is such that N_β > d + k + 1/2. Then (φ^ε)_{0<ε<ε_β} is uniformly bounded in Σ^k_ε and there exists c > 0, independent of ε, such that
‖φ^ε‖_{Σ^k_ε} ≤ c(C_β + ‖φ^ε‖_{L²})
. This Lemma is a simple consequence of Lemma 2.2 that we are going to prove now.
Proof of Lemma 2.2. Let k ∈ N and α, γ ∈ N^d be such that |α| + |γ| = k. We consider the operator T_{α,γ} = B ∘ (x^α (εD_x)^γ) ∘ B^{-1} ∘ ⟨z⟩^{-k}.
The kernel of this operator is the function
R 4d (X, Y ) → k ε (X, Y ) = (2πε) -d g ε X , x α (εD x ) γ g ε Y Y -k . Therefore, by (2.5), there exists a constant c k such that Y ∈R 2d sup X∈R 2d |k ε (X, Y )|dY = (2πε) -d Y ∈R 2d sup X∈R 2d W [g iI , g α,γ ε,Y ] X -Y √ ε Y -k dY.
We deduce from (2.6) and (2.5) the existence of a constant c k > 0 such that
Y ∈R 2d sup X∈R 2d |k ε (X, Y )|dY ≤ c k .
Similarly, we have
X∈R 2d sup Y ∈R 2d |k ε (X, Y )|dX = (2πε) -d X∈R 2d sup Y ∈R 2d W [g iI , g α,γ ε,Y ] X -Y √ ε Y -k dX ≤ c k .
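Recall the form of the Schur test used here: if a kernel k^ε satisfies sup_X ∫ |k^ε(X, Y)| dY ≤ c and sup_Y ∫ |k^ε(X, Y)| dX ≤ c, then the associated integral operator is bounded on L² with norm at most c.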
Therefore, the Schur test yields the boundedness of T α,γ . One then deduces that for f ∈ S(R d ), one has
x α (εD x ) γ f L 2 (R d ) = B[x α (εD x ) γ f L 2 (R 2d ) = T α,γ z k B[f ] L 2 (R 2d ) ≤ c k z k B[f ] L 2 (R 2d ) ,
which concludes the proof.
Let us now prove Lemma 2.13
Proof of Lemma 2.13.
Since N β > d + k, we have for |z| > R β , z k |B[φ ε ](z)| ≤ ε β C β z -N β +k ∈ L 2 (R 2d ). Moreover, z → z 2k |B[φ ε ](z)| 2
is locally integrable and we can write
z k B[φ ε ](z) 2 L 2 (R 2d ) ≤ R β 2k |z|≤R β |B[φ ε ](z)| 2 dz + ε 2β C 2 β R 2d z -2(N β -k) dz ≤ R β 2k B[φ ε ](z) 2 L 2 (R 2d + ε 2β C 2 β R 2d z -2(N β -k) dz ≤ R β 2k φ ε 2 L 2 (R d ) + ε 2β C 2 β R 2d z -2(N β -k) dz,
whence the conclusion since the right hand side is bounded for 2(N β -k) -2d > 1.
Operators built on Bargmann transform
We investigate here the properties of the operators defined in (2.3). We shall investigate two cases :
(a) The case where the family (θ ε z ) ε>0 is only uniformly bounded in L 2 (R d ), which is a light assumption, but with uniform bounds in z on adequate semi-norms or norms. (b) The case where the family (θ ε z ) ε>0 is a wave packet (up to a phase), which is a stronger assumption on the family. The thawed/frozen approximation operators belong to the type (b). We will consider operators of type (a) in the proofs of Theorems 1.15 and 1.18, when taking for the family (θ ε z ) ε>0 a term of rest appearing in the expansion of the action of the propagator on a Gaussian wave packet. The Theorems 1.19 and 1.20 are consequences of Theorem 1.18.
In the Subsection 2.3.1, we analyze the action of these operators on Σ k ε spaces. In Subsection 2.3.2, we prove special properties of the operators corresponding to families of the type (b) involving classical quantities linked with the propagation of Gaussian wave packets by Schrödinger evolution.
Action in Σ k
ε of operators built on Bargmann transform. This section is devoted to the proof of the following result.
Theorem 2.14. Let ε 0 > 0.
(1) Let R > 0. There exists c 0 > 0 such that for all measurable z-dependent family
(θ ε z ) ε>0 , for all k ∈ N, ε ∈ (0, ε 0 ], for all φ ∈ L 2 (R d ) J [θ ε z I |z|<R ](φ) Σ k ε ≤ (2πε) -d c 0 φ L 2 R 2d sup |z|≤R θ ε z Σ k ε .
(2) Assume θ^ε_z = λ^ε(z) WP^ε_{Φ(z)}(θ) with θ ∈ S(R^d), (λ^ε)_{ε>0} a bounded family in L^∞(R^{2d}, C) and Φ a smooth diffeomorphism of R^{2d} such that
∃c > 0, ∃ℓ ∈ N, ∀z ∈ R^{2d}, |J_Φ(z)| + |J_Φ(z)^{-1}| ≤ c ⟨z⟩^ℓ.
Then, there exists c_0 > 0 such that for all φ ∈ L²(R^d), k ∈ N, ε ∈ (0, ε_0],
‖J[θ^ε_z](φ)‖_{Σ^k_ε} ≤ c_0 ‖λ^ε‖_{L^∞} ‖φ‖_{L²} ‖θ‖_{Σ^{k+ℓ+2d+1}}.
The properties of the operators J [θ ε z ] extend to its adjoint (see (2.4)).
Corollary 2.15. Under the assumptions of Theorem 2.14, the family of operators J [θ ε z ] * (see (2.4)) satisfies the same kind of estimates than the family J [θ ε z ]. A straightforward consequence of Theorem 2.14 and of Lemme 2.6 is given in the next statement.
Corollary 2.16. Assume (θ ε z ) ε>0 satisfies the assumptions of Theorem 2.14 [START_REF] Arnold | Ordinary differential equations[END_REF]. Let (Φ ε ) ε>0 be a frequency localized family at the scale β ≥ 0 and C β > 0, N β ∈ N be the constants associated with Definition 2.1. Then, for all k > 0 such that N β > k + d, there exists a constant c k such that for all R > 0,
J [θ ε z ](φ ε -φ ε R,< ) Σ ε k ≤ c k C β ε β R -(N β -k-d-1 2 )
. where the family (φ ε R,< ) ε>0 is introduced in (2.8).
Proof of Theorem 2.14.
(1) The proof is similar to the first part of the proof of [START_REF] Alinhac | Pseudo-differential operators and the Nash-Moser theorem[END_REF]. By Cauchy-Schwartz inequality, for x ∈ R d , we have
J [θ ε z I |z|≤R ](φ ε ) 2 L 2 ≤ (2πε) -2d φ ε 2 L 2 |z|,|z |≤R x∈R d θ ε z (x)θ ε z (x)dx dz dz ≤ (2πε) -2d φ ε 2 L 2 |z|,|z |≤R θ ε z L 2 θ ε z L 2 dz dz ≤ c 1 R 4d (2πε) -2d φ ε 2 L 2 sup |z|≤2R θ ε z 2 L 2
where c 1 > 0 is a universal constant.
(2) Let us first prove the L 2 -estimate (k = 0). Let (x, y) → k ε (x, y) be the integral kernel of the operator J [θ ε z ]. Since the Bargmann transform is an isometry, it is equivalent to consider the operator
B • J [θ ε z ] • B -1 , the kernel of which is the function (R 2d ) 2 (X, Y ) → k ε B (X, Y ) defined by k ε B (X, Y ) = (2πε) -d R 2d g ε X (x)g ε Y (y)k ε (x, y)dxdy = (2πε) -2d z∈R 2d g ε z , g ε Y g ε X , θ ε z dz. Therefore, by (2.5), k ε B (X, Y ) satisfies |k ε B (X, Y )| ≤ (2πε) -2d z∈R 2d λ ε (z)W [g iI , g iI ] Y -z √ ε W [g iI , θ] Φ(z) -X √ ε dz.
We deduce
R 2d |k ε B (X, Y )|dX ≤ M λ ε L ∞ R 2d |W [g iI , g iI ](z)|dz R 2d |W [g iI , θ](X)|dX , R 2d |k ε B (X, Y )|dY ≤ M λ ε L ∞ R 2d |W [g iI , g iI ](Y )|dY R 2d |W [g iI , θ](z)J -1 Φ (z)|dz ,
with M = sup ε∈(0,1] λ ε L ∞ , and, by equations (2.6) and (2.10), we deduce the existence of C > 0 such that
R 2d |k ε B (X, Y )|dX + R 2d |k ε B (X, Y )|dY ≤ CM θ Σ 2d+ +1 .
We then conclude by Schur Lemma and obtain
B • J [θ ε z ] • B -1 L(L 2 (R 2d )) ≤ CM λ ε L ∞ θ Σ 2d+ +1
, and so it is for J [θ ε z ]. For concluding the proof when k = 0, we again use that for α, γ ∈ N d and φ ∈ S(R d ),
x α (ε∂ x ) γ J [θ ε z ] = J [x α (ε∂ x ) γ θ ε z ]
, and the additional observation
x α (ε∂ x ) γ WP ε z (θ) = WP ε (q + √ εx) α (p + √ εD x ) γ θ .
We then conclude by observing that, as in the estimate (2.10), we have for all n ∈ N,
(q + √ εx) α (p + √ εD x ) γ θ Σ n ≤ z k θ Σ n+k .
This finishes the proof.
Theorem 2.14 has consequences for the thawed/frozen approximation operators introduced in Chapter 1.
Corollary 2.17. Assume the Hamiltonian H ε = H 0 + εH 1 satisfies Assumptions 1.3 and 1.4. Let k ∈ N and t ∈ I.
(1) The families of operators J t,t0 ,th/fr ε>0 defined in (1.22) and (1.25)
are bounded families in L(L 2 (R d , C m ), Σ k ε (R d , C m )).
(2) Assume moreover that the compact K satisfies Assumptions 1.17. Then, the family of operators J t,t0 1,2,th/fr ε>0 defined in (1.36) and (1.37) are bounded families in the space
L(L 2 (R d , C m ), Σ k ε (R d , C m )).
Remark 2.18. If one assumes that (t, z) → ∂ t f + {v, f } is bounded from below and ∂ t f is bounded, then one can replace the compact K by R 2d in the definition of J t,t0
1,2,th and one obtains a bounded family in
L(L 2 (R d , C m ), Σ k ε (R d , C m )).
Proof. Let ∈ {1, 2}. Let us first discuss J t,t0 ,th . We write J t,t0 ,th = J [θ ε z ] with
θ ε z = λ ε (z)WP ε Φ t,t 0 (z) (g Γ (t, t 0 , z)) and λ ε (z) = e i ε S (t,t0,z) V (t, t 0 , z).
We observe that for all t ∈ I and z ∈ R 2d ,
V (t, t 0 , z) C m = V (t 0 , t 0 , z) C m = π (t 0 ) V C m ≤ V C m,m
which is independent of z. Therefore, the family (λ ε ) ε>0 is bounded in L ∞ (R 2d ). Besides, by Proposition A.4, the flow map (t, z) → Φ t,t0 (z) satisfies the assumptions of (2) of Theorem 2.14. Similarly, the map (t, z) → Γ (t, t 0 , z) is bounded on I × R 2d . Therefore, for any N ∈ N, there exists c N,t0,T > 0 such that
∀t ∈ I, x α ∂ β x g Γ (t, t 0 , •) Σ N ≤ c N,t0,T .
We then conclude by (2) of Theorem 2.14. The proof for J t,t0
,fr follows exactly the same lines. The proof for J t,t0 1,2,th/fr requires additional observations. We need to consider the transition coefficient map (t, z) → τ 1,2 (t, t 0 , z) (see (1.29)) and the matrix-valued maps z → Γ (t 0 , z) (see (1.32)), which requires the analysis of the function parametrizing the crossing (see (1.27) and (1.28)), (2.14) z → α (t 0 , z), β (t 0 , z), µ (t 0 , z) .
By the condition (1.4) of Assumption 1.3, with n 0 = 0, the derivatives of (t, z) → f (t, z) are uniformly bounded in z. Moreover, if one takes z in a compact K that satisfies Assumptions 1.17, one has the additional properties that ∂ t f and µ are bounded below. As a consequence z → α (t 0 , z), z → β (t 0 , z) and z → µ (t 0 , z) are bounded functions on R 2d for all t ∈ I, the map defined in (2.14) is smooth. One then argues as before by including the coefficient τ 1,2 (t, t 0 , z) in the definition of λ ε and the result follows from Theorem 2.14 (2).
2.3.2. Some properties of operators built on Bargmann transform via families with wave packet structure. In this section we analyze the properties of the operators J
[θ ε z ] when (θ ε z ) ε>0 is of the form (2.15) θ ε z = e i ε S(z) u(z)WP ε Φ(z) (θ(z, •)), where θ ∈ C ∞ (R 2d , S(R d )), S ∈ C ∞ (R 2d , R), u ∈ C ∞ (R 2d , C
) and Φ a smooth diffeomorphism sarisfying the assumptions of Theorem 2.14. We are interested in the case where S and Φ are linked in the same manner as when they are the flow map and the action associated with classical trajectories. Therefore, we consider the following set of Assumptions.
Assumption 2.19. Let S ∈ C ∞ (R 2d z , R), u ∈ C ∞ (R 2d z , C
) and Φ a smooth diffeomorphism. We assume the following properties:
(i) There exists c > 0 and ∈ N such that
∀z ∈ R 2d , |J Φ (z)| + |J Φ (z) -1 | ≤ c z .
(ii) Setting Φ(z) = (Φ q (z), Φ p (z)) and
∂ z Φ = A(z) B(z) C(z) D(z) ,
we have
∇ q S(z) = -p + A(z)Φ p (z) and ∇ p S(z) = B(z)Φ p (z), z = (q, p). (iii) For all k ∈ N, the z-dependent seminorms u(z) Σ k and sup |α|≤k ∂ α z S L ∞ are uniformly bounded in z.
The next technical lemma will be useful for proving our main results. It contains all the information needed to pass from the thawed approximation to the frozen one.
Lemma 2.20. C) and Φ be a smooth diffeomorphism satisfying Assumptions 2.19. Then, the following equality between operators in L(L 2 (R d ), Σ k ε ) holds for k ∈ N:
Let d = ∂ q -i∂ p . Let θ ∈ C ∞ (R 2d , S(R d )), S ∈ C ∞ (R 2d , R), u ∈ C ∞ (R 2d ,
J u e i ε S WP ε Φ ((dΦ p x -dΦ q D x )θ) = -i √ ε J du e i ε S WP ε Φ (θ) -i √ ε J u e i ε S WP ε Φ (dθ) .
Note that with the notation of Lemma 2.20, we have
(2.16) dΦ p (z) = C(z) -iD(z) and dΦ q (z) = A(z) -iB(z).
Besides, if condition (ii) of Assumption 2.19 is satisfied, then the equality of Remark 2.20 holds formally. The condition (i) ensures the boundedness of the operators involved in the estimates.
Proof. The integral kernel of the operator J u e
i ε S WP ε Φ (θ) is the function (x, y) → z∈R 2d k(z, x, y)dz defined by k(z, x, y) = u(z, x)e i ε S(z) g ε z (y)WP ε Φ(z) (θ(z, •))(x), (x, y) ∈ R d , z ∈ R 2d .
We aim at calculating dk. We observe for z = (q, p) ∈ R 2d , y ∈ R d and
dS(z) = -p + (A(z) -iB(z))dΦ p (z), d g ε z (y) = i ε [(dp + idq)(y -q) + pdq]g ε z (y) = i ε p g ε z (y), d WP ε Φ(z) (θ(z, •)) = WP ε Φ(z) (dθ(z, •)) + i √ ε WP ε Φ(z) ((dΦ p (z)x -dΦ q (z)D x )θ(z, •)) - i ε WP ε Φ(z) ((A(z) -iB(z))dΦ p (z)θ(z, •)) .
We obtain
dk(z, x, y) = e i ε S(z) du(z, x) g ε z (y) WP ε Φ(z) (θ(z, •))(x) + u(z, x) g ε z (y) WP ε Φ(z) (dθ(z, •))(x) + i √ ε u(z, x) g ε z (y) WP ε Φ(z) ((dΦ p (z)x -dΦ q (z)D x )θ(z, •)) (x)
The result then follows from the integration in z ∈ R 2d .
The case of Gaussian functions θ is of particular interest. Indeed, if θ(z,
•) = g Θ(z) with Θ ∈ C ∞ (R 2d , S + (d)), we have for x ∈ R d and z ∈ R 2d , (2.17) (dΦ p (z)x -dΦ q (z)D x )g Θ(z) (x) = (dΦ p (z) -dΦ q (z)Θ(z))x g Θ(z) (x).
We set M Θ (z) := dΦ p (z) -dΦ q (z)Θ(z).
By (2.16), we have the equality between matrix-valued functions
(2.18) M Θ = (C -iD) -(A -iB)Θ = (A -iB) (A -iB) -1 (C -iD) -Θ .
Note that this matrix M Θ is invertible because (A + iB) -1 (C + iD) -Θ ∈ S + (d) (as the sum of two elements of S + (d)). These observations are in the core of the proof of the next result which is a corollary of Lemma 2.20, when applied to Gaussian profiles.
Corollary 2.21. Let k ∈ N. Let Θ ∈ C ∞ (R 2d , S + (d)) such that M Θ is bounded together with its inverse, let S ∈ C ∞ (R 2d , R), u ∈ C ∞ (R 2d , C) and Φ a smooth diffeomorphism satisfying Assumptions 2.19. Then, in L(L 2 (R d ), Σ k ε (R d )), we have (2.19) J u e i ε S WP ε Φ xg Θ = O( √ ε). Besides, for all L ∈ C ∞ (R 2d , C d,d ), in L(L 2 (R d ), Σ k ε (R d )), we have (2.20) J u e i ε S WP ε Φ Lx • xg Θ = 1 i J u Tr L M -1 Θ dΦ q e i ε S WP ε Φ g Θ + O(ε). with LM -1 Θ dΦ q = L (A -iB) -1 (C -iD) -Θ -1 .
Remark 2.22.
(1) This result has an interesting consequence concerning the pseudodifferential calculus. Indeed, for a real-valued and with bounded derivatives, in view of (2.4)
a = J ( ag ε z ) = J [a(z)g ε z ] + √ ε J WP ε z ∇a(z) • x D x g iI + O(ε),
where we have used the properties of wave packets. Using ∇g iI = xg iI allows to conclude by Corollary 2.21 a = J [a(z)g ε z ] + O(ε) in any space Σ k ε . This was already proved in [START_REF] Swart | A mathematical justification for the Herman-Kluk Propagator[END_REF]. (2) More can be said about the √ ε-order term on the right-hand side of (2.19). By revisiting the proof below, one sees that there exists a real-valued smooth function z → c(z) such that
J u e i ε S WP ε Φ xg Θ = -i √ εJ u e i ε S WP ε Φ c(z)g Θ + O(ε).
Remark 2.23. The latter remark allows to prove Remark 1.13 [START_REF] Alinhac | Pseudo-differential operators and the Nash-Moser theorem[END_REF]. We observe that in L 2 ,
ψ = a V ( (a -1 )f ) + O(ε) and U ε H (t, t 0 )ψ = U ε H (t, t 0 ) a V ( (a -1 )f ) + O(ε)
Turning the pair ( V , f ) into ( a V , (a -1 )f ) consists in replacing V (t, t 0 , z) by
V ,a (t, t 0 , z) := R (t, t 0 , z)π (t 0 , z)(a(z) V (z)) = a(z) V (t, t 0 , z).
The two thawed Gaussian approximation constructed in that two different manner then differs one from the other by O(ε): the analysis developed in Section 2.3 (in particular, the arguments of Remark 2.22) shows that in Σ k ε (R d ),
J t,t0 ,th ( V f ) = (2πε) -d R 2d e i ε S (t,t0,z) g ε z , (a -1 )f V ,a (t, t 0 , z)g Γ (t,t0,z),ε Φ t,t 0 (t,z) dz + O(ε).
Proof of Corollary 2.21. One uses (2.17) and the first relation of Lemma 2.20 that we apply to θ = g Θ . It gives that in
L(L²(R^d), Σ^k_ε(R^d)),
J[u e^{iS/ε} WP^ε_Φ(x g_Θ)] = J[u e^{iS/ε} WP^ε_Φ(M_Θ^{-1}(dΦ_p x - dΦ_q D_x) g_Θ)] = O(√ε),
whence (2.19). Secondly, if L ∈ C^∞(R^{2d}, C^{d,d}), we consider the matrix L̃ such that L = ᵗL̃ M_Θ. We observe that
(dΦ_p x - dΦ_q D_x) • (L̃ x g_Θ) = ((ᵗL̃ (dΦ_p - dΦ_q Θ)) x • x - Tr(ᵗL̃ dΦ_q)) g_Θ.
It remains to prove that, in L(L²(R^d), Σ^k_ε(R^d)), we have
(2.21) J[u e^{iS/ε} WP^ε_Φ((dΦ_p x - dΦ_q D_x) • (L̃ x g_Θ))] = O(ε).
We first apply Lemma 2.20 to the function θ = L̃ x g_Θ and we write
J[u e^{iS/ε} WP^ε_Φ((dΦ_p x - dΦ_q D_x) • (L̃ x g_Θ))] = -i√ε J[du e^{iS/ε} WP^ε_Φ(L̃ x g_Θ) + u e^{iS/ε} WP^ε_Φ(L̃ x d(g_Θ))].
We use the relation (2.19) and obtain in
L(L 2 (R d ), Σ k ε (R d )), (2.22) J u e i ε S WP ε Φ (dΦ p x -dΦ q D x ) • (L xg Θ ) = -i √ εJ ue i ε S WP ε Φ L x d(g Θ ) + O(ε). We calculate d(g Θ ) = dc Θ c Θ g Θ + (dΘx • x)g Θ , with (dΘx • x) x g Θ = (dΘx • x)M -1 Θ (dΦ p x -dΦ q D x )g Θ = M -1 Θ (dΦ p x -dΦ q D x ) (dΘx • x)g Θ -2M -1
Θ dΦ q dΘ x g Θ . Therefore, there exists matrices L 1 and L 2 such that, setting θ = (dΘx • x)g Θ , we have
L x d(g Θ ) = L 1 xg Θ + L 2 (dΦ p x -dΦ q D x ) θ. We deduce J u e i ε S WP ε Φ (dΦ p x -dΦ q D x ) • (L xg Θ ) = -i √ εJ [L 1 xg Θ ] -i √ εJ [L 2 (dΦ p x -dΦ q D x ) θ]
and we obtain (2.21) by Lemma 2.20 applied to the function θ, and by the relation (2.19), which concludes the proof.
2.3.3. Operators built on Bargmann transform via classical quantities. We now apply the results of the preceding section to the diffeomorphism Φ given by a flow map associated with a Hamiltonian h. We derive the consequences of Lemma 2.20 and Corollary 2.21 for time-dependent quantities after integration in time. We will use the resulting formulas for the Hamiltonians h_1 and h_2 associated with the matrix-valued Hamiltonian H^ε.
Lemma 2.24. Let k ∈ N. Let h be a subquadratic Hamiltonian on I × R^{2d}, I = [t_0, t_0 + T]. We consider
(1) the classical quantities associated to h as in Section 1.3 on the interval I:
z → S(t, z), Φ t,t0 (z), F (t, t 0 , z), (2)
a smooth function defined on I × R 2d , bounded and with bounded derivatives, (t, z) → u(t, z), (3) a smooth map from I × R 2d into S(R d ), (t, z) → θ(t, z), (4) a smooth function from R 2d into I, z → t (z). Then, for all χ ∈ C ∞ 0 (I), we have the following equality between operators in
L(L 2 (R d ), Σ k ε (R d )), R χ(t)J I t>t (z) u(t) e i ε S(t) WP ε Φ t,t 0 (dΦ t p x -dΦ t,t0 q D x )θ(t) dt = i √ ε R J I t>t (z) du(t) e i ε S(t) WP ε Φ t,t 0 (θ(t)) dt + i √ ε R χ(t)J I t>t (z) u(t) e i ε S(t) WP ε Φ t,t 0 (dθ(t)) dt -i √ εJ χ(t ) dt u(t ) e i ε S(t ) WP ε Φ t ,t 0 (θ(t )
). Note that the result of this lemma is an equality. Thus, we have not emphasized assumptions that make these operators bounded. One could for example assume global boundedness of all the quantities involved and of their derivatives, or, what would be enough, that θ is compactly supported in z.
We also emphasize that the functions denoted by (χ
• t ) u(t ) e i ε S(t ) WP ε Φ t (θ(t )) is the map z → χ t (z) u t (z), z e i ε S(t (z),z) WP ε Φ t (z)
,t 0 (z) θ t (t), z . Note also that, by construction, the flow map Φ t,t0 and the action S satisfy Assumptions 2.19 (see [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF][START_REF] Robert | On the Herman-Kluk Semiclassical Approximation[END_REF][START_REF] Lasser | Computing quantum dynamics in the semiclassical regime[END_REF]).
Proof. The proof follows the lines of the one of Lemma 2.20, using the relation (2.23)
d I t>t (z) = dt (z)δ(t -t (z))
that produces an additional term.
As a Corollary, for Gaussian profiles, we have the following Corollary.
Corollary 2.25. With the same assumptions as in Lemma 2.24, we additionally assume
θ(t) = g Θ(t) , with Θ ∈ C ∞ (I × R 2d , S + (d)). Then, for all L ∈ C ∞ (R 2d , C d,d ), we have the following equality in L(L 2 (R d ), Σ k ε (R d )), χ(t)J I t>t (z) u e i ε S(t,t0) WP ε Φ t,t 0 Lx • xg Θ(t,t0) dt = 1 i χ(t)J 1 t>t (z) ũ(t, t 0 ) e i ε S(t,t0) WP ε Φ t,t 0 g Θ(t,t0) dt + O(ε) with u(t, t 0 ) = Tr L (A(t, t 0 ) -iB(t, t 0 )) -1 (C(t, t 0 ) -iD(t, t 0 )) -Θ(t, t 0 ) -1 .
Proof. The proof follows the lines of the one of Corollary 2.21, using the relation (2.23).
CHAPTER 3
Convergence of the thawed and the frozen Gaussian approximations
Strategy of the proofs
Our aim in this section is to prove the initial value representations of Theorems 1.15. We also explain the overall strategy that is also used for proving Theorems 1.18, 1.19 and 1.20.
Let k ∈ N. Let ψ^ε_0 = V φ^ε_0 be as in Assumption 1.9 with φ^ε_0 ∈ L² frequency localized at the scale β ≥ 0 with N_β > d + k + 1/2 (which implies φ^ε_0 ∈ Σ^k_ε). Without loss of generality, we assume V = π_ℓ(t_0) V for some ℓ ∈ {1, 2} that is now fixed.
We start with the Gaussian frame equality (2.2)
ψ ε 0 = (2πε) -d z∈R 2d g ε z , V φ ε 0 g ε z dz.
Writing g ε z , V φ ε 0 = V g ε z , φ ε 0 and using Remark 2.22, we have in Σ k ε ,
ψ ε 0 = J [ V (z)g ε z ] * (φ ε 0 ) + O(ε) = J [ V (z)g ε z ](φ ε 0 ) + O(ε φ ε 0 L 2
). Corollary 2.16 yields that, in Σ k ε , we have
ψ ε 0 = J [I |z|<R V (z)g ε z ](φ ε 0 ) + O(ε φ ε 0 L 2 ) + O(ε β C β R -n β ) = J [ V (z)g ε z ]((φ ε 0 ) R,< ) + O(ε φ ε 0 L 2 ) + O(ε β C β R -n β
) with the notations of Corollary 2.16 and setting n β = N β -k -d -1 2 > 0. Now that the data has been written in a convenient form, we apply the propagator U ε H (t, t 0 ) and we take advantage of its boundedness in L(Σ k ε ) to write
U ε H (t, t 0 )ψ ε 0 = (2πε) -d |z|≤R g ε z , φ ε 0 U ε H (t, t 0 ) V (z)g ε z dz + O(ε φ ε 0 L 2 ) + O(ε β C β R -n β ) = J I |z|≤R U ε H (t, t 0 ) V (z)g ε z (φ ε 0 ) + O(ε φ ε 0 L 2 ) + O(ε β C β R -n β ).
We then use the description of the propagation of wave packets by U ε H (t, t 0 ), as stated in Theorem 1.21: for
N ≥ d + 1, in Σ k ε (R d ), we have U ε H (t, t 0 ) V (z)g ε z = ψ ε,N (t) + O(ε N ). Therefore, by (1) of Theorem 2.14 in L(L 2 (R d ), Σ k ε (R d , C m )), (3.1) U ε H (t, t 0 )ψ ε 0 = J I |z|≤R ψ ε,N (t) + O(ε N -d R d φ ε 0 L 2 ) + O(ε β C β R -n β )
Besides, using that ψ ε,N (t) is a linear combination of wave packets and considering the explicit formula of Theorem 1.21, (2) of Theorem 2.14 implies that in
L(Σ k ε (R d , C m )), U ε H (t, t 0 )ψ ε 0 = J t,t0 ,th V (φ ε 0 ) R,< + O( √ ε φ ε 0 L 2 ) + O(ε N -d R d φ ε 0 L 2 ) + O(ε β C β R -n β ) = J t,t0 ,th V φ ε 0 + O( √ ε φ ε 0 L 2 ) + O(ε N -d R d φ ε 0 L 2 ) + O(ε β C β R -n β )
where we have used again Corollary 2.16. If β < 1 2 , we perform an appropriate choice of R and N :
we choose R = ε -γ with γ ≥ 1 n β ( 1 2 -β) and N ≥ 1 2 + d(1 + γ).
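Let us check that this choice makes all the error terms above of order √ε: with R = ε^{-γ}, one has ε^β C_β R^{-n_β} = C_β ε^{β + γ n_β} ≤ C_β ε^{1/2} since β + γ n_β ≥ 1/2, and ε^{N-d} R^d = ε^{N - d(1+γ)} ≤ ε^{1/2} since N ≥ 1/2 + d(1 + γ).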
At this stage of the description, the thawed Gaussian approximation of Theorem 1.15 is proved. To obtain the frozen one, we shall argue as in the scalar case considered in [45] (Lemma 3.2 and Lemma 3.4). We detail this argument in Section 3.3 below.
The proofs of the order ε approximations of Theorems 1.18, 1.19 and 1.20 start with the same lines. However, one includes in the approximation the two first terms of the asymptotic expansion of ψ ε,N (t): the one of order ε 0 and the one of order ε 1 2 . The terms of order √ ε are twofold:
(i) The one along the same mode as the initial data, here denoted by . This term will be proved to be of lower order because its structure allows to use the first part of Corollary 2.21. (ii) The one generated by the crossing along the other mode. This one is not negligible.
At that stage of the proofs, one will be left with the thawed approximation. The derivation of the frozen approximation from the thawed one involves the second part of Corollary 2.21. However, complications are induced in the treatment of term described in (ii) above because of the singularity in time that it contains. This difficulty is overcome by averaging in time and using Corollary 2.25. We implement this strategy in the next sections.
Thawed Gaussian approximations with transfers terms
We prove here the higher order approximation of Theorem 1.18 for initial data ψ ε 0 = V φ ε 0 with (φ ε 0 ) ε>0 frequency localized at the scale β ≥ 0 in a compact set K. As in the preceding section, we assume V = π (t 0 ) V and, without loss of generality, we suppose = 1.
We start as in the preceding section and transform equation (3.1) by taking the terms of order ε 0 and ε 1 2 in the expansion of ψ ε,N . We obtain
U ε H (t, t 0 )ψ ε 0 (x) = J I |z|<R (ψ ε,1 1 (t) + ψ ε,1 2 (t)) (φ ε 0 ) + O ε β C β R -n β + O(ε N -d R d ψ ε 0 L 2 ). The rest in O(ε N -d R d ψ ε 0 L 2 )
comes from the remainder of the approximation of U ε H (t, t 0 )g ε z while the term O(ε ψ ε 0 L 2 ) comes from the terms of order ε j for j ≥ 1 of the approximation, these terms having a wave packet structure while the rest is just known as bounded in Σ k ε . We write for ∈ {1, 2}
ψ ε,1 (t) = 1 j=0 ε j 2 ψ ε,1 ,j (t).
Because the assumptions on K induce that there is only one passage through the crossing, Theorem 1.21 implies that ψ ε,1 2,0 (t) = 0 and ψ ε,1 2,1 (t) only depends on the transfer profile f ε 1→2 (indeed, we have assumed V = π 1 (t 0 ) V ). Moreover, for the mode 1, we have for ∈ {1, 2} and j ∈ {0, 1}
ψ ε,1 ,j (t) = e i ε S (t,t0,z0) WP ε z (t) R (t, t 0 ) M[F (t, t 0 )] B ,j (t)g iI .
We recall that B ,1 is given by (1.41). We use the structure of the term
R (t, t 0 ) M[F (t, t 0 )] B ,1 (t)g iI
(see [START_REF] Robert | Propagation of coherent states in quantum mechanics and applications[END_REF] section 3 or the book [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF]): it writes
R (t, t 0 ) M[F (t, t 0 )] B ,1 (t)g iI (x) = a(t)xg Γ (t,t0,z0) (x)
for some smooth and bounded vector-valued map (t, z) → a(t, z). Therefore, Corollary 2.21 yields
J I |z|<R ψ ε,1 1,1 (t) (φ ε 0 ) = O( √ ε φ ε 0 L 2 )
and we are left with
U ε H (t, t 0 )ψ ε 0 (x) = J I |z|<R (ψ ε,1 1,0 (t) + √ ε ψ ε,1 2,1 (t)) (φ ε 0 ) + O ε β C β R -n β + O(ε N -d R d ψ ε 0 L 2 ) = J ψ ε,1 1,0 (t) + √ ε ψ ε,1 2,1 (t) ((φ ε 0 ) R,< ) + O ε β C β R -n β + O(ε N -d R d ψ ε 0 L 2 ) = J ψ ε,1 1,0 (t) + √ ε ψ ε,1 2,1 (t) (φ ε 0 ) + O ε β C β R -n β + O(ε N -d R d ψ ε 0 L 2 )
by Corollary 2.16. Identifying the terms, we deduce
U ε H (t, t 0 )ψ ε 0 (x) = J t,t0 1,th V φ ε 0 + √ ε J t,t0 1,2,th V φ ε 0 + O ε β C β R -n β + O(ε N -d R d ψ ε 0 L 2 ). If β < 1, we choose R = ε -γ , N = 1 + d(γ + 1) with γ ≥ 1 n β (1 -β).
This gives Theorem 1.18. More precisely, for a general V = π 1 (t 0 ) V + π 2 (t 0 ) V , we obtain
U ε H (t, t 0 )ψ ε 0 (x) = J t,t0 1,th π 1 (t 0 ) V φ ε 0 + J t,t0 2,th π 2 (t 0 ) V φ ε 0 + √ ε J t,t0 1,2,th π 1 (t 0 ) V φ ε 0 (3.2) + O (ε(C β + ψ ε 0 L 2 )) .
Frozen Gaussian approximations with transfers terms
It remains to pass from the thawed to the frozen approximation. As we have already mentioned, we use the argument developed in Lemma 3.2 and 3.4 of [START_REF] Robert | On the Herman-Kluk Semiclassical Approximation[END_REF]. It is based on an evolution argument which crucially uses Corollary 2.21. We now explain that step.
End of the proof of Theorem 1.15. We start from the approximation given by the first part of Theorem 1.15: in Σ k ε (R d ), we have
U ε H (t, t 0 )ψ ε 0 (x) = J t,t0 1,th π 1 (t 0 ) V φ ε 0 + O( √ ε(C β + φ ε 0 L 2 ))
and our aim is to prove that in Σ k ε (R d )
J t,t0 1,th π 1 (t 0 ) V φ ε 0 = J t,t0 1,fr π 1 (t 0 ) V φ ε 0 + O(ε).
Of course, a remainder of size O(√ε) would be enough for proving Theorem 1.15; however, the O(ε) bound will be useful for proving Theorems 1.19 and 1.20.
The strategy dates back to [START_REF] Robert | On the Herman-Kluk Semiclassical Approximation[END_REF]. We follow the presentation of [START_REF] Fermanian Kammerer | Adiabatic and non-adiabatic evolution of wave packets and applications to initial value representations[END_REF]. We set for s ∈ [0, 1] Θ(s, z) = (1 -s)Γ (t, t 0 , z) + isI where Γ is given by (1.18). We consider the partially normalised Gaussian function
g(t, s) = (π) -d/4 e i 2 Θ(s,z)x•x , x ∈ R d
and we set g
Θ(s),ε Φ t,t 0 (z) (x) = WP ε Φ t,t 0 (z) ( g(t, s)) ,
The aim is to construct a map s → a(s, z) such that for all s
∈ [0, 1] in L(L 2 (R d ), Σ k ε ), d ds J a(s, z) V (t, t 0 , z) g Θ(s),ε Φ t,t 0 (z) = O(ε).
Choosing a(0, z) = 1, we have
J t,t0
,th = J a(0, z) V (t, t 0 , z) g
Θ(0),ε Φ t,t 0 (z)
, and we will obtain that for any
f ∈ L 2 (R d ), we have in Σ k ε (R d ) J t,t0 ,th ( V f ) = J a(1, z) V (t, t 0 , z) g Θ(1),ε Φ t,t 0 (z) (f ) + O(ε) = J t,t0 ,fr ( V f ) + O(ε) provided a(1, z) = a (t, t 0 , z) as defined in (1.24).
For constructing the map s → a(s, z), we compute d ds J a(s, z) V (t, t 0 , z) g
Θ(s),ε Φ t,t 0 (z) = J ∂ s a(s, z) V (t, t 0 , z) g Θ(s),ε Φ t,t 0 (z) + i 2 J a(s, z) V (t, t 0 , z)WP ε Φ t,t 0 (z) ∂ s Θ(s)x • x g Θ(s),ε .
We use equation (2.20) of Corollary 2.21 to transform the second term of the right-hand side and obtain
J a(s, z) V (t, t 0 , z)WP ε Φ t,t 0 (z) (∂ s Θ(s)x • xg Θ(s) ) = 1 i J a(s, z) V (t, t 0 , z)WP ε Φ t,t 0 (z) (Tr(Θ 1 (s))g Θ(s) ) + O(ε) in L(L 2 (R d ), Σ k ε ) and with Θ 1 (s) = ∂ s Θ(s) (A -iB ) -1 (C -iD ) -Θ -1
where M (s, z) is associated to Θ(s, z) according to (2.18). In particular, we have
∂ s M (s) = -(A -iB )∂ s Θ(s). We deduce Θ 1 (s) = -(A -iB ) -1 ∂ s M (s)M (s) -1 (A -iB ) and Tr(Θ 1 (s)) = -Tr(∂ s M (s)M (s) -1 ) = -detM (s) -1 ∂ s (detM (s)) .
Therefore, the condition
∂ s a(s, z) - 1 2 Tr(∂ s M (s, z)M (s) -1 )a(s, z) = 0
that has to be fulfilled, is realized by a(s, z) = (det M(s, z)/det M(0, z))^{1/2} a(0, z); in particular a(1, z) = a_ℓ(t, t_0, z).
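This solution can be checked directly: by Jacobi's formula, ∂_s log det M(s) = Tr(M(s)^{-1} ∂_s M(s)), so the function a(s, z) = (det M(s, z)/det M(0, z))^{1/2} a(0, z) indeed satisfies the equation ∂_s a = (1/2) Tr(∂_s M(s) M(s)^{-1}) a.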
Proof of Theorem 1.19. We now start from the result of Theorem 1.18, that is equation (3.2). In view of what has been done in the end proof of the proof of Theorem 1.15, we only have to prove
J t,t0 1,2,th π 1 (t 0 ) V φ ε 0 = J t,t0 1,2,fr π 1 (t 0 ) V φ ε 0 + O √ ε(C β + φ ε 0 L 2 .
As noticed in the introduction, when t < t 1,min (K), then τ 1,2 (t, t 0 , z) = 0 for all z ∈ K and when t ∈ [t 1,max (K), t 2,min (K)), z → τ 1,2 (t, t 0 , z) is smooth. Therefore, one can use the perturbative argument allowing to froze the covariances of the Gaussian terms as in the proof of Theorem 1.15 and one obtains the formula (1.35).
Proof of Theorem 1.20. One now has to cope with the discontinuity of the transfer coefficient τ 1,2 (t, t 0 , z). We use Lemma 2.24.
Part 2
Wave-packet propagation through smooth crossings CHAPTER 4
Symbolic calculus and diagonalization of Hamiltonians with smooth crossings
In this section, we revisit the diagonalization of Hamiltonians in the case of the smooth crossings in which we are interested. We settle the algebraic setting that we will use in Section 5 for the propagation of wave packets.
We will use the Moyal product, about which we recall some facts: if A^ε, B^ε are semi-classical series, their Moyal product is the formal series C^ε := A^ε ♯ B^ε, where C^ε = Σ_{j≥0} ε^j C_j with
(4.1) C_j(x, ξ) = (1/2^j) Σ_{|α+β|=j} ((-1)^{|β|}/(α! β!)) (D^β_x ∂^α_ξ A) · (D^α_x ∂^β_ξ B)(x, ξ), j ∈ N.
We also introduce the Moyal bracket
{A^ε, B^ε}_♯ := A^ε ♯ B^ε - B^ε ♯ A^ε.
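For orientation, and with the usual convention D_x = -i∂_x, the first terms of (4.1) for two (matrix-valued) symbols A and B read C_0 = AB and
C_1 = (1/(2i)) Σ_k (∂_{ξ_k}A ∂_{x_k}B - ∂_{x_k}A ∂_{ξ_k}B),
which is (1/(2i)){A, B} with the convention {A, B} = ∇_ξ A · ∇_x B - ∇_x A · ∇_ξ B. Hence
A ♯ B = AB + (ε/(2i)){A, B} + O(ε²), {A, B}_♯ = [A, B] + (ε/(2i))({A, B} - {B, A}) + O(ε²);
for scalar symbols the commutator term vanishes and the Moyal bracket reduces to (ε/i){A, B} + O(ε³).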
Let us now consider a smooth matrix-valued symbol H^ε = H_0 + εH_1, where the principal symbol H_0 = h_1 π_1 + h_2 π_2 has two smooth eigenvalues h_1 and h_2 with smooth eigenprojectors π_1 and π_2. We allow for a non-empty crossing set Υ as in Definition 1.2. By standard symbolic calculus with smooth symbols, we have for ℓ ∈ {1, 2} the relations
(4.2) π_ℓ ♯ (iε∂_t - H^ε) = (iε∂_t - h_ℓ) ♯ π_ℓ + O(ε).
We are going to present two ways of replacing the projectors π_ℓ and the Hamiltonians h_ℓ by asymptotic series so that the relation above holds at a better order.
We call "rough" the first diagonalization process that we propose. It holds everywhere, including on Υ, and is comparable to the reduction performed in [9] for avoided crossings. It is the subject of Section 4.2.
The second one, more sophisticated, requires working in a domain that does not meet Υ. It is based on the use of superadiabatic projectors, as developed in [11, 39, 46, 47]. This strategy is implemented in Section 4.3. The new element, compared to the references just mentioned, is that we keep careful track of how the constructed objects depend on the distance of their support from Υ. For this reason, we use a pseudodifferential setting that we make precise in Section 4.1 below.
Formal asymptotic series
We consider formal semi-classical series
A ε = j≥0 ε j A j
where all the functions A j are smooth (matrix-valued) in an open set
D ⊂ R × R 2d , that is, A j ∈ C ∞ (D, C m,m ).
Notation. If A ε = j≥0 ε j A j is a formal series and N ∈ N, we denote by A ε,N the function
(4.3) A ε,N = 0≤j≤N ε j A j .
The formal series that we will consider in Section 4.3 involve two small parameters: the semi-classical parameter ε > 0 and another parameter δ > 0 that controls the growth of the symbol and of its derivatives. For our intended application, δ is related to the size of the gap between the eigenvalues of the Hamiltonian's symbol.
Definition 4.1. A formal series A^ε = Σ_{j≥0} ε^j A_j is in S^µ_{ε,δ}(D) if A_j ∈ S^{µ-2j}_δ(D) for all j ∈ N. We set S_{ε,δ}(D) := S^0_{ε,δ}(D).
Remark 4.2. (1) If A ∈ S^µ_δ and B ∈ S^{µ'}_δ, then AB ∈ S^{µ+µ'}_δ while {A, B} ∈ S^{µ+µ'-2}_δ. Besides, if A ∈ S^µ_δ(D), then ∂^γ_z A ∈ S^{µ-|γ|}_δ(D).
(2) When δ = 1, as in the next Section 4.2, then for all µ ∈ R, S µ 1 = S 0 1 coincides with the standard class of Calderón-Vaillancourt symbols, those smooth functions that are bounded together with their derivatives. Similarly, S µ ε,1 = S µ ε coincides with asymptotic series of symbols.
(3) The parameter δ can be understood as a loss that appears at each differentiation. However, in the asymptotic series, one loses δ² when passing from the j-th term to the (j+1)-th one. In Section 4.3, δ will monitor the size of the gap function f in the domain D.
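As a guiding example (reading the preceding items with the convention that S^µ_δ(D) consists of the smooth functions A satisfying |∂^γ_z A| ≤ C_γ δ^{µ-|γ|} on D for every γ): on a region where the gap function satisfies |f| ≥ δ and has bounded derivatives, the inverse 1/f is a typical element of S^{-1}_δ(D), since each differentiation produces at most one additional factor 1/f, so that |∂^γ_z(1/f)| ≤ C_γ δ^{-1-|γ|}.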
The Moyal bracket satisfies the property stated in the next lemma.
Lemma 4.3. Let δ_A, δ_B ∈ ]0, 1]. If A^ε and B^ε are formal series of S_{ε,δ_A}(D) and S_{ε,δ_B}(D) respectively, then A^ε ♯ B^ε is a formal series of S_{ε,min(δ_A,δ_B)}(D). Besides, for N ∈ N,
A^{ε,N} ♯ B^{ε,N} = Σ_{0≤j≤N} ε^j C_j + ε^{N+1} R^{ε,N}_{A,B}
where for all γ ∈ N^{2d} there exists C_{N,γ}, independent of δ_A, δ_B and ε, such that
|∂^γ_z R^{ε,N}_{A,B}(t, z)| ≤ C_{N,γ} [min(δ_A, δ_B)]^{-N-κ_0}, ∀ε ∈ ]0, 1], ∀(t, z) ∈ D
, where κ 0 is a universal constant depending only on the dimension d.
Proof. The estimate is a direct consequence of [START_REF] Bouzouina | Uniform semi-classical estimates for the propagation of quantum observables[END_REF]Theorem A1]. In Appendix B, Theorem B.1 we give a detailed proof.
When δ A = 1 and δ B = δ ∈ (0, 1], min(δ A , δ B ) = δ. This shows that
S_{ε,1}(D) ♯ S_{ε,δ}(D) ⊂ S_{ε,δ}(D).
Let us conclude this Section by comments on the quantization of symbols of the classes S µ δ . The Calderón-Vaillancourt estimate for pseudodifferential operators (see [START_REF] Dimassi | Spectral Asymptotics in the Semi-Classical Limit[END_REF][START_REF] Zworski | Semiclassical analysis[END_REF]) states that there exists a constant C > 0 such that for all a ∈ C ∞ (R 2d ),
op ε (a) L(L 2 (R d )) ≤ C sup 0≤|γ|≤d+1 ε |γ| 2 sup z∈R d |∂ γ z a(z)|.
Actually, the article [START_REF] Calderón | On the boundedness of pseudo-differential operators[END_REF] treats the case ε = 1 and the estimate in the semi-classical case comes from the observation that
op ε (a) = Λ * ε op 1 (a( √ ε•))Λ ε
where Λ ε is the L 2 -unitary scaling operator defined on function f ∈ S(R d ) by
Λ ε f (x) = ε -d 4 f x √ ε , x ∈ R d .
One can derive an estimate in the sets Σ k ε by observing
x α (ε∂ x ) β • op ε (a) = |γ1|+|γ2|+|γ3|≤k ε |γ 1 | 2 c γ1,γ2,γ3 (ε) op ε (∂ γ1 z a) • (x γ2 (ε∂ x ) γ3 )
for some coefficients c γ1,γ2,γ3 (ε), uniformly bounded with respect to ε ∈ [0, 1]. This implies the boundedness of op ε (a) in weighted Sobolev spaces: for all k ∈ N, there exists a constant C k > 0 such that for all a ∈ S(R 2d ),
(4.4) ‖op_ε(a)‖_{L(Σ^k_ε)} ≤ C_k Σ_{0≤|γ|≤d+k+1} ε^{|γ|/2} sup_z |∂^γ_z a(z)|.
Proposition 4.4. Let A ∈ S^{µ-2j}_δ for µ ∈ R, j ∈ N. Then, for k ∈ N,
‖op_ε(A)‖_{L(Σ^k_ε)} ≤ C_k sup_{0≤|γ|≤d+1} ε^{|γ|/2} δ^{µ-2j-k-|γ|}.
Therefore, if δ ≥ √ε,
(4.5) ‖op_ε(A)‖_{L(Σ^k_ε)} ≤ C_k δ^{µ-k-2j}
. We will use such estimates. Questions related with symbolic calculus in the classes S µ δ,ε are discussed in Appendix B.2.
'Rough' reduction
The next result gives a reduction of the Hamiltonian in block-diagonal form. We will use this reduction on small time intervals.
Theorem 4.5. Assume H^ε = H_0 + εH_1 with H_0 having smooth eigenprojectors and eigenvalues. There exist matrix-valued asymptotic series
π^ε_1 = π_1 + Σ_{j≥1} ε^j π_{1,j}, h^ε_ℓ = h_ℓ I + Σ_{j≥1} ε^j h_{ℓ,j}, W^ε = Σ_{j≥1} ε^j W_j, ℓ ∈ {1, 2},
such that for all N ∈ N, π^{ε,N}_1 and π^{ε,N}_2 = 1 - π^{ε,N}_1 are approximate projectors,
(4.6) π^{ε,N}_ℓ ♯ π^{ε,N}_ℓ = π^{ε,N}_ℓ + O(ε^{N+1}), ℓ ∈ {1, 2},
and H^ε = H_0 + εH_1 reduces according to
(4.7) π^{ε,N}_1 ♯ (iε∂_t - H^ε) = (iε∂_t - h^{ε,N}_1) ♯ π^{ε,N}_1 + W^{ε,N} ♯ π^{ε,N}_2 + O(ε^{N+1}),
(4.8) π^{ε,N}_2 ♯ (iε∂_t - H^ε) = (iε∂_t - h^{ε,N}_2) ♯ π^{ε,N}_2 + (W^{ε,N})* ♯ π^{ε,N}_1 + O(ε^{N+1}).
Moreover, for all ℓ ∈ {1, 2} and j ≥ 1, the symbols π_{ℓ,j} and h_{ℓ,j} are self-adjoint, the matrices W_j are off-diagonal (see equation (1.44) for the value of W_1) and
(4.9) h_{ℓ,1} = π_ℓ H_1 π_ℓ + (-1)^ℓ (i/2) (h_1 - h_2) π_ℓ {π_1, π_1} π_ℓ.
If H^ε also satisfies Assumption 1.4 on the time interval I, then the 4×4 matrix-valued Hamiltonian
H^ε := ( h^ε_1 W^ε ; (W^ε)* h^ε_2 )
is subquadratic according to Definition 1.1.
Note that in H ε , the off-diagonal blocks are of lower order than the diagonal ones since the asymptotic series W ε has no term of order 0.
Theorem 4.5 allows to put the equation (1.1) in a reduced form by setting
ψ ε = t (ψ ε 1 , ψ ε 2 ) with ψ ε = π ε ψ ε . Indeed, we then have (4.10) iε∂ t ψ ε = H ε ψ ε + O(ε ∞ ), ψ ε |t=0 = t π ε 1 ψ ε 0 , π ε 2 ψ ε 0 .
We deduce the corollary.
Corollary 4.6. Formally, we have for t ∈ I,
U ε H (t, t 0 )ψ ε 0 = ψ ε 1 + ψ ε 2
, where ψ ε solves (4.10).
Proof. The proof relies on a recursive argument. The case N = 0 is equivalent to (4.2) The case N = 1 has been proved in Lemma B.2 in [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF]. However, we revisit the proof in order to compute W 1 . We first compute π , by requiring π ε,( ) π ε,( ) = π ε,( ) + O(ε 2 ), which admits the solution
π 1,1 = -π 2,1 = - 1 2i π 1 {π 1 , π 1 }π 1 + 1 2i π 2 {π 1 , π 1 }π 2 .
We recall that {π 1 , π 1 } is diagonal and skew-symmetric (see Lemma B.1 in [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF]). Then, we observe
π (iε∂ t -H ε ) = (iε∂ t -h ) π + εΘ + O(ε 2 ) with Θ = - 1 2i {π , H 0 } -π H 1 -i∂ t π + 1 2i {h , π } or, equivalently Θ 1 = 1 2i (h 2 -h 1 ){π 1 , π 1 } -i∂ t π 1 -π 1 H 1 + 1 i {h 1 , π 1 }π 1 + 1 2i {h 1 + h 2 , π 1 }π 2 , Θ 2 = 1 2i (h 1 -h 2 ){π 2 , π 2 } -i∂ t π 2 -π 2 H 1 + 1 i {h 2 , π 2 }π 2 + 1 2i {h 1 + h 2 , π 2 }π 1 = - 1 2i (h 2 -h 1 ){π 1 , π 1 } + i∂ t π 1 -π 2 H 1 - 1 i {h 2 , π 1 }π 2 - 1 2i {h 1 + h 2 , π 2 }π 1
We observe
Θ * 2 = - 1 2i (h 2 -h 1 ){π 1 , π 1 } -i∂ t π 1 -H 1 π 2 + 1 i π 2 {h 2 , π 1 } + 1 2i π 1 {h 1 + h 2 , π 2 } and (4.11) π 1 Θ * 2 π 2 = π 1 Θ 1 π 2 . Thus, we have to solve -π 1,1 H 0 = -h 1,1 π 1 -h 1 π 1,1 + i∂ t π 1 + Θ 1 + W 1 π 2 , -π 2,1 H 0 = -h 2,1 π 2 -h 2 π 2,1 + i∂ t π 2 + Θ 2 + W * 1 π 1 .
Multiplying on the right the first equation by π 1 and the second by π 2 , we obtain that h 1,1 and h 2,1 have to solve
h 1,1 π 1 = i∂ t π 1 π 1 + Θ 1 π 1 and h 2,1 π 2 = i∂ t π 2 π 2 + Θ 2 π 2
which is solved by taking the self-adjoint matrices
h 1,1 = i∂ t π 1 π 1 + Θ 1 π 1 -iπ 1 ∂ t π 1 π 2 + π 1 Θ * 1 π 2 , h 2,1 = i∂ t π 2 π 2 + Θ 2 π 2 -iπ 2 ∂ t π 2 π 1 + π 2 Θ * 2 π 1 .
Multiplying on the right the first equation by π 2 and the second by π 1 , we obtain that W 1 has to solve
W 1 π 2 = -(h 2 -h 1 )π 1,1 π 2 -Θ 1 π 2 and W * 1 π 1 = (h 2 -h 1 )π 2,1 π 1 -Θ 2 π 1 . Using the relations π * 1,1 = π 1,1 = -π 2,1 , π 1 ∂ t,z π 1 = ∂ t,
W 1 π 2 = π 1 W 1 = π 1 H 1 + i∂ t π 1 + 1 2 {h 1 + h 2 , π 1 } π 2 ,
whence (1.44).
One can now perform the recursive argument. Assume that we have obtained (4.6), (4.7) and (4.8) for some N ≥ 1 and let us look for π 1,N +1 , h 1,N +1 , h 2,N +1 and W N +1 such that the relations for N + 1 too.
We start with π 1,N +1 . We write
π ε,N 1 π ε,N 1 = π ε,N 1 + ε N +1 R ε ,
where R ε is an asymptotic series with first term R N . We first observe that R N is diagonal. Indeed, we have
(1 -π ε,N 1 ) π ε,N 1 π ε,N 1 = π ε,N 1 π ε,N 1 (1 -π ε,N 1 ) and (1 -π ε,N 1 ) π ε,N 1 π ε,N 1 = -ε N +1 R ε π ε,N 1 , π ε,N 1 π ε,N 1 (1 -π ε,N 1 ) = -ε N +1 π ε,N 1 R ε . This yields π ε,N 1 R ε = R ε π ε,N 1 and imply π 1 R N = R N π 1 . We now look to π 1,N +1 that must satisfy π 1,N +1 = R N + π 1,N +1 π 1 + π 1 π 1,N +1
. This relation fixes the diagonal part of π 1,N +1 according to
π 1 π 1,N +1 π 1 = -π 1 R N π 1 and π 2 π 1,N +1 π 2 = π 2 R N π 2 ,
We will see later that we do not need to prescribe off-diagonal components to π 1,N +1 .
Let us now focus on h 1,N +1 , h 2,N +1 and W N +1 . We write
π ε,N 1 (iε∂ t -H ε ) = (iε∂ t -h ε,N 1 ) π ε,N 1 + W ε,N π ε,N 2 + ε N +1 Θ ε 1
where Θ ε 1 is an asymptotic series of first term Θ 1,N . For obtaining information about Θ 1,N , we compute π ε,N (iε∂ t -H ε ) π ε,N for different choices of , ∈ {1, 2}.
• Taking = gives two relations
π ε,N 1 (iε∂ t -H ε ) π ε,N 2 = π ε,N 1 W ε,N π ε,N 2 + ε N +1 π ε,N 1 Θ ε 1 π ε,N 2 + O(ε N +2 ) π ε,N 2 (iε∂ t -H ε ) π ε,N 1 = π ε,N 2 (W ε,N ) * π ε,N 1 + ε N +1 π ε,N 2 Θ ε 2 π ε,N 1 + O(ε N +2 ), from which we deduce π 2 Θ 2,N π 1 = (π 1 Θ 1,N π 2 ) * . • Taking = gives the relations π ε,N (iε∂ t -H ε ) π ε,N = π ε,N (iε∂ t -h ε,N ) π ε,N + ε N +1 π ε,N Θ ε π ε,N + O(ε N +2 ),
whence the self-adjointness of the diagonal part of Θ ε . We now enter into the construction of h 1,N +1 , h 2,N +1 and W N +1 . We write the asymptotic series
π ε,N +1 1 (iε∂ t -H ε ) = π ε,N 1 (iε∂ t -H ε ) -ε N +1 π 1,N +1 H 0 + O(ε N +2 ), (iε∂ t -h ε,N +1 1 ) π ε,N +1 1 + W ε,N +1 π ε,N +1 2 = (iε∂ t -h ε,N 1 ) π ε,N 1 + W ε,N π ε,N 2 + ε N +1 (i∂ t π 1,N -h 1,N +1 π 1 -h 1 π 1,N +1 + W N +1 π 2 ) + O(ε N +2 ).
Therefore, we look for h 1,N +1 and W N +1 such that
-π 1,N +1 H 0 = i∂ t π 1,N -h 1,N +1 π 1 -h 1 π 1,N +1 + W N +1 π 2 + Θ 1,N or equivalently 0 = i∂ t π 1,N -h 1,N +1 π 1 + (h 2 -h 1 )π 1,N +1 π 2 + W N +1 π 2 + Θ 1,N .
By multiplying on the right by π_1 and by π_2, we are left with the two equations
(4.12) h 1,N +1 π 1 = Θ 1,N π 1 + i∂ t π 1,N π 1 and W N +1 π 2 = Θ 1,N π 2 + (h 2 -h 1 )π 1,N +1 π 2 + i∂ t π 1,N π 2 .
Considering similarly the conditions for the mode h 2 , we obtain that h 2,N +1 and W * N +1 have to satisfy
h 2,N +1 π 2 = Θ 2,N π 2 + i∂ t π 2,N π 2 and W * N +1 π 1 = Θ 2,N π 1 -(h 2 -h 1 )π 2,N +1 π 1 + i∂ t π 2,N π 1 .
Since π 2,N = -π 1,N for N ≥ 1, we are left with the relation
(4.13) h 2,N +1 π 2 = Θ 2,N π 2 -i∂ t π 1,N π 2 and W * N +1 π 1 = Θ 2,N π 1 + (h 2 -h 1 )π 1,N +1 π 1 -i∂ t π 1,N π 1 .
We set
h 1,N +1 = Θ 1,N π 1 + i∂ t π 1,N π 1 + π 1 Θ * 1,N π 2 -iπ 1 ∂ t π 1,N π 2 , h 2,N +1 = Θ 2,N π 2 + i∂ t π 2,N π 2 + π 2 Θ * 2,N π 1 -iπ 2 ∂ t π 2,N π 1 .
Then, h 1,N +1 and h 2,N +1 are self-adjoint and satisfy the first part of (4.12) and (4.13) respectively.
The construction of W N +1 requires to be more careful because there is a compatibility condition between (4.12) and (4.13). We look for W N +1 of the form
W N +1 = Θ 1,N π 2 + (h 2 -h 1 )π 1,N +1 π 2 + i∂ t π 1,N π 2 + U N +1 π 1 ,
which guarantees (4.12). Then, one has
W * N +1 = π 2 Θ * 1,N + (h 2 -h 1 )π 2 π 1,N +1 -iπ 2 ∂ t π 1,N + π 1 U * N +1
and
W * N +1 π 1 = π 2 Θ * 1,N π 1 -iπ 2 ∂ t π 1,N π 1 + π 1 U * N +1 π 1 = π 2 Θ 2,N π 1 -iπ 2 ∂ t π 1,N π 1 + π 1 U * N +1 π 1
where we have used the first property of the matrices Θ 1,N and Θ 2,N that we have exhibited, together with the fact that π 1,N +1 is diagonal. It is then enough to choose
U N +1 = π 1 Θ * 2,N + (h 2 -h 1 )π 1,N +1 + i∂ t π 1,N π 1 since it implies W * N +1 π 1 = π 2 Θ 2,N π 1 -iπ 2 ∂ t π 1,N π 1 + π 1 (Θ 2,N + i∂ t π 1,N + (h 2 -h 1 )π 1,N +1 )π 1 = Θ 2,N π 1 + (h 2 -h 1 )π 1,N +1 π 1 -iπ 2 ∂ t π 1,N π 1 ,
where we have used π 1,N +1 π 1 = π 1 π 1,N +1 π 1 . As a consequence, the second part of (4.13) is satisfied. This concludes the recursive argument and the proof of the Theorem 4.5 since the growth properties of the matrices that we have constructed come with the recursive equations.
Superadiabatic projectors and diagonalization
One now wants to get rid of the off-diagonal elements W ε , which is possible outside Υ. We are going to take into account how far from the crossing set we are by introducing a gap assumption.
Assumption 4.7 (Gap assumption). Let $t_0 < t_1$, let $I$ be an open interval of $\mathbb{R}$ containing $[t_0,t_1]$,
and let $\Omega$ be an open subset of $\mathbb{R}^{2d}$. We say that the eigenvalue $h$ has a gap larger than $\delta\in(0,1]$ in
$D := I\times\Omega$ if one has
$$(\mathrm{NC}_\delta)\qquad \mathrm{dist}\big(h(t,z),\ \mathrm{Sp}(H_0(t,z))\setminus\{h(t,z)\}\big)\ \ge\ \delta, \qquad \forall (t,z)\in D.$$
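For orientation, in the two-mode situation considered in this paper the gap assumption takes a very concrete form. Writing, as in Appendix A, $H_0 = v\,\mathrm I + f(\pi_2-\pi_1)$ with scalar functions $v$ and $f$, a direct computation (recorded here only as an illustration) gives
$$\mathrm{Sp}(H_0) = \{v-f,\ v+f\}, \qquad \mathrm{dist}\big(h_\ell,\ \mathrm{Sp}(H_0)\setminus\{h_\ell\}\big) = 2|f|,$$
so that $(\mathrm{NC}_\delta)$ holds on $D$ for either mode exactly when $|f|\ge\delta/2$ on $D$; this is the type of lower bound on $|f|$ that is produced near the crossing in Lemma 5.5 below.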
The construction of superadiabatic projectors dates back to [Bily, Propagation d'états cohérents et applications], which was inspired by the paper [Emmrich, Geometry of the transport equation in multicomponent WKB approximation]. It has then been carefully developed in [Martinez, Twisted pseudodifferential calculus and application to the quantum evolution of molecules] and [Spohn, Adiabatic decoupling and time-dependent Born-Oppenheimer theory] (see also the book [Teufel, Adiabatic perturbation theory in quantum dynamics, Lecture Notes in Mathematics 1821]). We revisit here the construction of superadiabatic projectors in order to control their norms with respect to the parameter $\delta$.
We follow the construction of Section 14.4 of the latest edition of [Combescure, Coherent states and applications in mathematical physics] (2021), which we adapt to our context. One proceeds in two steps: first by defining the formal series for the projectors and then for the Hamiltonians. In order to simplify the notation in the construction, we just consider one eigenvalue $h$ and we will then apply the result to the eigenvalues $h_1$ and $h_2$.
Formal superadiabatic projectors.
Theorem 4.8 (semiclassical projector evolution). Assume the eigenvalue $h$ of the Hamiltonian $H_0$ satisfies Assumption 4.7 in $D$. Then, there exists a unique formal series $\sum_{j\ge1}\varepsilon^{j-1}\Pi_j$ in $S^{-1}_{\varepsilon,\delta}(D)$ such that, setting $\Pi_0(t,z)=\pi(t,z)$, the formal series
$$\Pi^\varepsilon(t,z) = \sum_{j\ge0}\varepsilon^j\,\Pi_j(t,z)$$
is a formal projection and
(4.14) iε∂ t Π ε (t) = [H ε (t), Π ε (t)] .
Moreover the sub-principal term Π 1 (t) is an Hermitian matrix given by the following formulas:
(4.15)
$$\pi(t)\,\Pi_1(t)\,\pi(t) = -\frac{1}{2i}\,\pi(t)\{\pi(t),\pi(t)\}\pi(t),$$
$$\pi(t)^\perp\,\Pi_1(t)\,\pi(t)^\perp = \frac{1}{2i}\,\pi(t)^\perp\{\pi(t),\pi(t)\}\pi(t)^\perp,$$
$$\pi(t)^\perp\,\Pi_1(t)\,\pi(t) = \pi(t)^\perp\big(H_0(t)-h(t)\big)^{-1}\pi(t)^\perp R_1(t)\pi(t),$$
where
$$R_1(t) = i\partial_t\pi(t) - \frac{1}{2i}\big(\{H_0(t),\pi(t)\} - \{\pi(t),H_0(t)\}\big) - [H_1(t),\pi(t)].$$
Proof. With Notations 4.3, we write
(4.16) $\Pi^{\varepsilon,N}\,\Pi^{\varepsilon,N} - \Pi^{\varepsilon,N} = \varepsilon^{N+1} S_{N+1} + O(\varepsilon^{N+2}),$
(4.17) $i\varepsilon\partial_t\Pi^{\varepsilon,N} - \big[H_0 + \varepsilon H_1,\ \Pi^{\varepsilon,N}\big] = \varepsilon^{N+1} R_{N+1} + O(\varepsilon^{N+2}).$
Step N = 1. We start with N = 0. We have Π (0) = π ∈ S 0 δ (D). Since π 2 = π and [H 0 , π] = 0, we obtain
S 1 = 1 2i {π, π} and R 1 = i∂ t π -1 2i ({H 0 , π} -{π, H 0 }) -[H 1 , π],
and we have R 1 , S 1 ∈ S 0 δ (D). Two structural observations are in order:
(1) The matrix S 1 is symmetric and satisfies πS 1 π ⊥ = π ⊥ S 1 π = 0.
(2) The matrix R 1 is skew-symmetric. It satisfies
πR 1 π = 0 and π ⊥ R 1 π ⊥ = [H 0 , π ⊥ S 1 π ⊥ ].
If $H_0$ has only two eigenvalues, $H_0$ is expressed in terms of $\pi$ alone and the expression of $R_1$ given above shows that $R_1$ is off-diagonal. The situation is more complicated if $H_0$ has strictly more than two distinct eigenvalues. For verifying (2) in that case, one uses the Poisson bracket rule $\{A,BC\} - \{AB,C\} = \{A,B\}C - A\{B,C\}$ twice. We obtain
{H 0 , π} -{π, H 0 } = {H 0 , π 2 } -{π 2 , H 0 } = {hπ, π} + {H 0 , π}π -H 0 {π, π} -{π, hπ} + {π, π}H 0 -π{π, H 0 } = π{h, π} -{π, h}π + [{π, π}, H 0 ] + {H 0 , π}π -π{π, H 0 }, which implies π ⊥ ({H 0 , π} -{π, H 0 })π ⊥ = π ⊥ [{π, π}, H 0 ]π ⊥ .
For determining the π-diagonal component, we choose A = π, B = H 0 π ⊥ , and C = π to obtain
0 = {π, H 0 π ⊥ }π -π{H 0 π ⊥ , π} = {π, H 0 }π -{π, hπ}π -π{H 0 , π} + π{hπ, π} = {π, H 0 }π -{π, h}π -π{H 0 , π} + π{h, π}.
This relation implies π({H 0 , π} -{π, H 0 })π = 0.
For constructing the matrix Π 1 that defines Π (1) = π + εΠ 1 , we need to satisfy
πΠ 1 + Π 1 π -Π 1 = -S 1 and -[H 0 , Π 1 ] = -R 1 .
The first of these two equations uniquely determines the diagonal blocks of Π 1 , while the second equation uniquely determines the off-diagonal blocks. We obtain
πΠ 1 π = -πS 1 π and π ⊥ Π 1 π ⊥ = π ⊥ S 1 π ⊥ , πΠ 1 π ⊥ = -πR 1 π ⊥ (H 0 -h) -1 and π ⊥ Π 1 π = (H 0 -h) -1 π ⊥ R 1 π.
For concluding this first step, we deduce from R 1 , S 1 ∈ S 0 δ (D) that Π 1 ∈ S -1 δ (D).
Step N ≥ 1. Next we proceed by induction and assume that we have constructed the matrices Π j (t) ∈ S 1-2j δ (D) for 1 ≤ j ≤ N such that (4.16) and (4.17) hold. Note that by Lemma 4.3, this implies
R N +1 (t) ∈ S -2N δ (D) and S N +1 (t) ∈ S -2N δ (D). Indeed, iε∂ t Π ε,N -H 0 + εH 1 , Π ε,N is a formal series of ε S -2 ε,δ (D) while Π N Π ε,N -Π ε,N is a formal series of εS -1 ε,δ (D).
In order to go one step further, we see that Π N +1 has to satisfy
πΠ N +1 + Π N +1 π -Π N +1 = -S N +2 and -[H 0 , Π N +1 ] = -R N +2 .
For solving these equations and achieving the recursive process, we need to verify that at each step:
(1) The matrix $S_N$ is symmetric and satisfies $\pi S_N\pi^\perp = \pi^\perp S_N\pi = 0$.
(2) The matrix R N is skew-symmetric and off-diagonal. It satisfies
πR N π = 0 and π ⊥ R N π ⊥ = [H 0 , π ⊥ S N π ⊥ ].
Then, we will be able to construct $\Pi_{N+1}(t)\in S^{-(2N+1)}_\delta(D)$ and we will have as a by-product
$$R_{N+2}(t)\in S^{-2N-2}_\delta(D), \qquad S_{N+2}(t)\in S^{-2N-2}_\delta(D),$$
because of equations (4.16), (4.17) and Lemma 4.3. For proving (1), we take advantage of the fact that
Z := Π ε,N (Π ε,N ) 2 -Π ε,N (I -Π ε,N ) = ε N +1 πS N +1 π ⊥ + O(ε N +2 ),
while one also has by construction
Z = (Π ε,N ) 2 -Π ε,N Π ε,N -(Π ε,N ) 2 = O(ε 2N +2 ).
This implies that πS N +1 π ⊥ = 0 and, using that S N +1 is hermitian, we deduce that it is diagonal. For proving (2), we argue similarly with
Z := Π ε,N iε∂ t Π ε,N -H 0 + εH 1 , Π ε,N Π ε,N = ε N +1 πR N +1 π + O(ε N +2 ),
which also satisfies
Z = (Π ε,N ) 2 -Π ε,N (H 0 + εH 1 )Π ε,N -Π ε,N (H 0 + εH 1 ) (Π ε,N ) 2 -Π ε,N = O(ε 2N +2 ).
This implies πR N +1 π = 0 and one can argue similarly with 1-Π ε,N for obtaining the other relation
π ⊥ R N +1 π ⊥ = 0.
4.3.2. Formal adiabatic decoupling. The second (and decisive) part of the analysis is a formal adiabatic decoupling using the superadiabatic projectors introduced before.
Theorem 4.9 (formal adiabatic decoupling). Assume the eigenvalue $h$ of the Hamiltonian $H_0$ satisfies Assumption 4.7 in $D$. There exists a formal time-dependent Hermitian Hamiltonian in $D$,
$$H^{adia,\varepsilon} = \sum_{j\ge0}\varepsilon^j H^{adia}_j,$$
such that
(4.18) $\Pi^\varepsilon\,(i\varepsilon\partial_t - H^\varepsilon) = (i\varepsilon\partial_t - H^{adia,\varepsilon})\,\Pi^\varepsilon$
with the following properties:
(1) The principal symbol is $H^{adia}_0 = h\,\mathrm I_{\mathbb C^m}$.
(2) The subprincipal term $H^{adia}_1$ is a Hermitian matrix satisfying
$$\pi^\perp H^{adia}_1\pi = \pi^\perp\big(i\partial_t\pi + i\{h,\pi\}\big)\pi \qquad\text{and}\qquad \pi H^{adia}_1\pi = \pi H_1\pi + \frac{1}{2i}\,\pi\{H_0,\pi\}\pi$$
(see (1.19)), and we can choose $\pi^\perp H^{adia}_1\pi^\perp = 0$.
(3) We have $\varepsilon^{-2}\big(H^{adia,\varepsilon} - h\,\mathrm I_{m,m} - \varepsilon H^{adia}_1\big)\in S^{-1}_{\varepsilon,\delta}(D)$.
(4) Finally, $\pi(t)$ satisfies a transport equation along the classical flow for $h(t)$:
(4.19) $\partial_t\pi + \{h,\pi\} = \dfrac1i\,[H^{adia}_1,\pi].$
Remark 4.10. Note that equation (1.19) implies that $H^{adia}_1(t,z)$ is smooth everywhere, including on the crossing set, if any.
The above construction, applied to the Hamiltonian $H^\varepsilon$ with two smooth eigenvalues $(h_1,h_2)$ and two smooth eigenprojectors $(\pi_1,\pi_2)$, implies the construction of two pairs of formal series
(4.20) $\Pi^\varepsilon_\ell = \sum_{j\ge0}\varepsilon^j\,\Pi_{\ell,j}$ and $H^{adia,\varepsilon}_\ell = \sum_{j\ge0}\varepsilon^j\,H^{adia}_{\ell,j}$, for $\ell\in\{1,2\}$.
Corollary 4.11. At the level of the evolution operator, the result implies
$$\mathcal U^\varepsilon_H(t,t_0) = \widehat{\Pi^\varepsilon_1}(t)\,\mathcal U^{adia,\varepsilon}_1(t,t_0)\,\widehat{\Pi^\varepsilon_1}(t_0) + \widehat{\Pi^\varepsilon_2}(t)\,\mathcal U^{adia,\varepsilon}_2(t,t_0)\,\widehat{\Pi^\varepsilon_2}(t_0),$$
where, for $\ell\in\{1,2\}$, $\mathcal U^{adia,\varepsilon}_\ell(t,t_0)$ are the evolution operators associated with the Hamiltonians $H^{adia,\varepsilon}_\ell$.
Proof. This result is Theorem 80 of Chapter 14 in [Combescure, Coherent states and applications in mathematical physics] combined with Lemma 4.3. We first observe that equation (4.18) reduces to proving
(4.21) $\Pi^\varepsilon\,(i\varepsilon\partial_t - H^\varepsilon) = (i\varepsilon\partial_t - H^{adia,\varepsilon})\,\Pi^\varepsilon.$
For proving the latter relation, one first observes that if H adia 0 = h, then we have
(H adia,ε -H 0 -εH 1 ) Π ε = ε (h -H 0 )Π 1 + (H adia 1 -H 1 )π + 1 2i {h -H 0 , π} + i∂ t π + O(ε 2 ).
Therefore, H 1 has to be chosen so that
(H adia 1 -H 1 )π = (H 0 -h)Π 1 + 1 2i {H 0 -h, π} + i∂ t π.
In view of (4.15), this requires
$$\pi(H^{adia}_1 - H_1)\pi = \frac{1}{2i}\,\pi\{H_0,\pi\}\pi,$$
which is given by the second relation of (1.19), and, using again (4.15),
$$\pi^\perp(H^{adia}_1 - H_1)\pi = \pi^\perp\Big(R_1 + \frac{1}{2i}\{H_0-h,\pi\} + i\partial_t\pi\Big)\pi = \pi^\perp\Big(i\partial_t\pi + \frac{1}{2i}\{\pi,H_0\} - \frac{1}{2i}\{h,\pi\} - H_1\Big)\pi,$$
which is also given by the first relation of (1.19) in view of the observation that $\pi^\perp\{\pi,H_0\}\pi = -\pi^\perp\{h,\pi\}\pi$.
For proving this relation, one uses the Poisson bracket rule
$\{A,BC\} - \{AB,C\} = \{A,B\}C - A\{B,C\}$ several times. First, one gets $\pi\{\pi,\pi\}\pi^\perp = 0 = \pi^\perp\{\pi,\pi\}\pi$. Then, taking $A=\pi^\perp$, $B=\pi$, $C=H_0$, one gets $\{\pi^\perp,h\pi\} - 0 = \{\pi^\perp,\pi\}H_0 - \pi^\perp\{\pi,H_0\}$, whence $-\pi^\perp\{\pi,h\}\pi = -\pi^\perp\{\pi,H_0\}\pi$. Finally, for concluding the construction of $H^{adia}_1$, it remains to check that
$$\Big((H_0-h)\Pi_1 + \frac{1}{2i}\{H_0-h,\pi\} + i\partial_t\pi\Big)\,\pi^\perp = 0,$$
which comes from the latter observation about $\{H_0,\pi\}$. Now that $H^{adia}_0$ and $H^{adia}_1$ are constructed, the higher-order terms of the series are obtained by a recursive argument. The open sets $\Omega_0$ and $\Omega_2$ are constructed so that for any initial data $z\in\Omega_0$ the flows stay in $\Omega_2$:
$$\Phi^{t,t_0}_{h_\ell}(z)\in\Omega_2, \qquad \forall t\in[t_0,t_1],\ \forall z\in\Omega_0,\ \forall\ell\in\{1,2\}.$$
We associate cut-offs to these subsets. We take $\chi_0\in C^\infty_0(\Omega_0)$ with $\chi_0=1$ near $z_0$. Then, we choose $K_0$ a compact neighborhood of $z_0$ in $\Omega_0$, which implies that $\Omega_2$ is a neighborhood of
$$K_{\ell,0} := \{\Phi^{t,t_0}_{h_\ell}(K_0),\ t_0\le t\le t_1\}, \qquad \ell\in\{1,2\}.$$
So we can choose $\chi_2\in C^\infty_0(\Omega)$ with $\chi_2 = 1$ on $K_{1,0}\cup K_{2,0}$.
. Finally, we take χ 1 , χ 3 ∈ C ∞ 0 (Ω) with χ 1 = 1 on supp(χ 2 ) and χ 3 = 1 on supp(χ 1 ).
For $\ell\in\{1,2\}$, we set
$$\widetilde H^{adia,\varepsilon,N}_\ell(t) = \chi_3\,H^{adia,\varepsilon,N}_\ell(t),$$
which is a smooth subquadratic Hamiltonian, and we consider $\mathcal U^{adia,\varepsilon,N}_\ell(t,s)$ the propagator associated with the Hamiltonian $\chi_1 H^{adia,\varepsilon,N}_\ell(t)$.
The next result is the usual adiabatic decoupling that results from the preceding analysis.
Proposition 5.2 (adiabatic decoupling - I). Let $k\in\mathbb{N}$.
(i) For any $\ell\in\{1,2\}$, we have in $\mathcal L(\Sigma^k_\varepsilon)$,
(5.1) $\big(i\varepsilon\partial_t - \widehat{H^\varepsilon}(t)\big)\,\mathrm{op}_\varepsilon\big(\chi_1\Pi^{N,\varepsilon}_\ell(t)\big)\,\mathrm{op}_\varepsilon(\chi_2) = \mathrm{op}_\varepsilon\big(\chi_1\Pi^{N,\varepsilon}_\ell(t)\big)\big(i\varepsilon\partial_t - \mathrm{op}_\varepsilon\big(\chi_3 H^{adia,N,\varepsilon}_\ell(t)\big)\big)\,\mathrm{op}_\varepsilon(\chi_2) + O(\varepsilon^{N+1}).$
(ii) Let $\psi^\varepsilon_0\in\Sigma^k_\varepsilon$ be such that $\mathrm{op}_\varepsilon(\chi_0)\psi^\varepsilon_0 = \psi^\varepsilon_0 + O(\varepsilon^\infty)$. Set
$$\psi^{\varepsilon,N}_\ell(t) = \mathrm{op}_\varepsilon\big(\chi_1\Pi^{N,\varepsilon}_\ell(t)\big)\,\mathcal U^{adia,N,\varepsilon}_\ell(t,t_0)\,\mathrm{op}_\varepsilon\big(\chi_0\Pi^{N,\varepsilon}_\ell(t_0)\big)\,\psi^\varepsilon_0, \qquad \ell\in\{1,2\}.$$
Then we have in $\Sigma^k_\varepsilon$,
(5.2) $\mathcal U^\varepsilon_H(t,t_0)\psi^\varepsilon_0 = \psi^{\varepsilon,N}_1(t) + \psi^{\varepsilon,N}_2(t) + O(\varepsilon^{N+1}), \qquad \forall t\in[t_0,t_1].$
Remark 5.3. (1)
The assumption satisfied by (ψ ε 0 ) ε>0 in (ii) of Proposition 5.2 is sometimes referred in the literature as having a compact semi-classical wave front set.
(2) In the proof below, the reader will notice that we do not need to assume that H ε (t) is sub-quadratic, we only need to know that H ε (t) defines a unitary Schrödinger propagator in L 2 (R d , C m ). However, we use the boundedness of the derivatives of the projectors.
Proof. (i) We fix ∈ {1, 2}. Using Theorem 4.9, we obtain
(iε∂ t -H ε (t)) (χ 1 Π ε,N (t) = χ 1 (iε∂ t -H ε (t)) χ 1 Π ε,N (t) + ε N +1 R ε N = χ 1 Π ε,N (t) iε∂ t -H adia,ε,N (t) + ε N +1 R ε N = χ 1 Π ε,N (t) iε∂ t -H adia,ε,N (t) + ε N +1 R ε N
where the $R^\varepsilon_N$'s are remainder terms that may change from one line to another and all satisfy $\chi_2 R^\varepsilon_N = 0$, and where we have used that $\chi_3\chi_1 = \chi_1$. The relation on operators follows from Theorem B.1 and Corollary B.2 (with $\delta=1$).
(ii) Let K 0 = supp(χ 0 ). We apply (5.1) and we use that χ 2 is identically equal to 1 on the compact set K 0 . Hence, using Egorov Theorem of Appendix C.1, we have for ∈ {1, 2}
χ 1 U adia,N,ε (t, t 0 ) χ 0 = χ 1 U adia,N,ε (t, t 0 ) χ 0 U adia,N,ε (t, t 0 ) -1 U adia,N,ε (t, t 0 ) = U adia,N,ε (t, t 0 ) χ 0 + O(ε N +1 ).
Hence we deduce (5.2) from (5.1) using again the Egorov Theorem and that
op ε χ 1 Π N,ε 1 + χ 1 Π N,ε 2 χ 0 = χ 0 + O(ε N +1 ).
The adiabatic decoupling of Proposition 5.2 and the Egorov theorem (see Proposition C.1) allow us to give an explicit description, at any order, of the solution of equation (1.1) for initial data that are focalized wave packets.
Indeed, by the techniques of Appendix C, which are classical when $\delta=1$ (see for example the recent edition of [Combescure, Coherent states and applications in mathematical physics]), one constructs two maps $\mathcal R_1(t,t_0,z)$ and $\mathcal R_2(t,t_0,z)$ introduced in (1.20), and one has the following result (see Proposition C.5).
Theorem 5.4. Assume that $\psi^\varepsilon_0$ is a polarized wave packet:
$$\psi^\varepsilon_0 = \vec V_0\,\mathrm{WP}^\varepsilon_{z_0}(f_0), \qquad\text{with } f_0\in\mathcal S(\mathbb{R}^d) \text{ and } \vec V_0\in\mathbb{C}^m.$$
Let $N\ge1$ and $k\ge0$. Then, there exists a constant $C_{N,k}>0$ such that the solution $\psi^\varepsilon(t)$ of (1.1) satisfies, for all $t\in[t_0,t_1]$,
$$\big\|\psi^\varepsilon(t) - \big(\psi^{\varepsilon,N}_1(t) + \psi^{\varepsilon,N}_2(t)\big)\big\|_{\Sigma^k_\varepsilon} \ \le\ C_{N,k}\,\varepsilon^N,$$
with, for $\ell\in\{1,2\}$ and for all $M\ge0$,
$$\psi^{\varepsilon,N}_\ell(t) = e^{\frac i\varepsilon S_\ell(t,t_0,z_0)}\,\mathrm{WP}^\varepsilon_{z_\ell(t)}\Big(\mathcal R_\ell(t,t_0,z_0)\,\mathcal M[F_\ell(t,t_0)]\sum_{0\le j\le M}\varepsilon^{j/2}\,B_{\ell,j}(t)f_0\Big) + O(\varepsilon^{M/2}),$$
where $B_{\ell,j}(t)$ are differential operators of degree $\le 3j$ with vector-valued time-dependent coefficients satisfying (1.40) and (1.41).
Propagation close to the crossing area
Our goal in this section is to extend the result of Theorem 5.4 up to the time $t^\star - c\delta$ for some $c>0$ and $\delta\ll1$. We follow the same strategy as in the preceding section and carefully track the dependence on $\delta$ of the estimates.
More precisely, the situation is the following: we consider a wave packet at initial time $t_0$ that is focalized along the mode $h_1$ at some point $z_0$. We let it evolve along that mode according to Theorem 5.4, up to $(t_1,z_1)$ conveniently chosen, and we consider $\psi^\varepsilon(t_1)$ as a new initial data, knowing that it is a wave packet modulo $O(\varepsilon^\infty)$. The point $(t_1,z_1)$ is chosen close enough to $(t^\star,\zeta^\star)$, namely $|t_1-t^\star| + |z_1-\zeta^\star|\le\eta_0$, where $\eta_0$ is defined in the next lemma.
Lemma 5.5. Assume $(t^\star,\zeta^\star)$ is a generic smooth crossing point as in Definition 1.2 and consider $\delta\in(0,1]$.
(1) There exist $\eta_0>0$ and $c_0>0$ such that
$$\big|f\big(t,\Phi^{t,t_1}_{h_1}(z)\big)\big| \ \ge\ c_0\,|t-t^\star| \qquad\text{if } |t-t^\star| + |z-\zeta^\star|\le\eta_0.$$
(2) There exist $c, M>0$ such that for all $(t,z)$ satisfying $\big|z-\Phi^{t,t_1}_{h_1}(z_1)\big|\le c\delta$, we have
$$|f(t,z)| \ \ge\ c_0\delta - Mc\delta \ \ge\ \frac{c_0}{2}\,\delta.$$
Proof. The result comes readily from the transversality of the curve t → Φ t,t1 h1 (z) to the set Υ = {f = 0}. Recall that this transversality is due to Point (b) of Assumption 1.2.
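For the reader's convenience, here is a minimal sketch of the estimate behind Point (2); it assumes a Lipschitz bound $|f(t,z)-f(t,z')|\le M|z-z'|$ on the relevant compact set (this is where the constant $M$ comes from) and considers times with $|t-t^\star|\ge\delta$:
$$|f(t,z)| \ \ge\ \big|f\big(t,\Phi^{t,t_1}_{h_1}(z_1)\big)\big| - M\,\big|z-\Phi^{t,t_1}_{h_1}(z_1)\big| \ \ge\ c_0|t-t^\star| - Mc\delta \ \ge\ c_0\delta - Mc\delta \ \ge\ \frac{c_0}{2}\,\delta,$$
the last inequality holding as soon as $c\le c_0/(2M)$.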
Our goal is to prove accurate estimates for the evolution of the solution ψ ε (t) of the Schrödinger equation with the initial data ψ ε (t 1 ), for t ∈ [t 1 , t -cδ]. We thus have to improve in this precise setting the accuracy of the estimates obtained before for fixed δ = δ 0 .
We use the control in the small parameter δ of the Moyal product rule for ε-Weyl quantization as stated in Lemma 4.3 and the estimates in the Egorov Theorem for symbols in the classes S ε,δ . Finally, the construction of the cut-off functions relies on the fact that due to Point (b) of Assumption 1.2, we can apply a straightening theorem for vector fields.
In several places we need to replace $\delta$ by $c\delta$, for a finite number of constants $0 < c\in\{c_0,c_1,\dots,c_L\}$ ($L\in\mathbb{N}$).
We will not mention that point each time.
5.2.1.
Localization up to the crossing region. We construct the cut-off functions by using thin tubes along the classical trajectories. We use a straightening theorem for non singular vector fields. We set D(z 1 , ρ 1 ) = {|z -z 1 | ≤ ρ 1 } and consider a branch of trajectory
T 1 := {Φ t,t1 h (z 1 ), t ∈ [t 1 , t + 1 ]}, t + 1 > t .
Lemma 5.6 (see [Arnold, Ordinary differential equations]). Let $P_1$ be a hyperplane transverse to the curve $\mathcal T_1$ at $z_1$. There exist $\rho_1>0$ and $t_1^- < t_1 < t^\star < t_1^+$ such that the map $(t,z)\mapsto\Phi^{t,t_1}_h(z)$ is a diffeomorphism from $]t_1^-,t_1^+[\times D(z_1,\rho_1)$ onto a neighborhood $W_1$ of $\mathcal T_1$, the disc $D(z_1,\rho_1)$ being taken in $P_1$.
Hence for any z in the tube W 1 , we have
z = Φ τ (z),t1 h (Y (z))
where $\tau$ and $Y$ are smooth functions of $z$, $\tau(z)\in[t_1,t_1^+]$, $Y(z)\in D(z_1,\rho_1)$. We then define the cut-off functions as follows: consider
• $\zeta\in C^\infty_0(]-2,2[)$ equal to 1 in $[-1,1]$,
• $\theta\in C^\infty(\mathbb{R})$ with $\theta(t)=0$ if $t\le-1$ and $\theta(t)=1$ if $t\ge1$.
For $\delta>0$, we set
$$\chi^\delta(z) = \theta\Big(\frac{\tau(z)-t_1^-}{\eta}\Big)\,(1-\zeta)\Big(\frac{\tau(z)-t^\star}{c\delta}\Big)\,\zeta\Big(\frac{|z-\Phi^{\tau(z),t_1}_h(z_1)|^2}{(C\delta)^2}\Big),$$
where $c>0$, $C>0$ and $\eta>0$ is a small enough constant. By adapting the constants $c$ and $C$ conveniently, we construct some functions
$\chi^\delta_j\in C^\infty_0(\mathbb{R}^{2d},[0,1])$, $j\in\{1,2,3\}$, such that
(1) $\chi^\delta_j = 1$ on $\bigcup_{t_0\le t\le t_1} B\big(\Phi^{t_0,t}_h(z_0),\,c_j\delta\big)$ and $\chi^\delta_j$ is supported in $\bigcup_{t_0\le t\le t_1} B\big(\Phi^{t_0,t}_h(z_0),\,2c_j\delta\big)$,
(2) for all $\gamma\in\mathbb{N}^{2d}$, there exists $C_\gamma$ such that for all $z\in\mathbb{R}^{2d}$,
$$|\partial^\gamma_z\chi^\delta_j(z)| \ \le\ C_\gamma\,\delta^{-|\gamma|},$$
(3) $\chi^\delta_3 = 1$ on $\mathrm{supp}\,\chi^\delta_1$ and $\chi^\delta_1 = 1$ on $\mathrm{supp}\,\chi^\delta_2$.
Finally, with $\chi_0\in C^\infty_0(\mathbb{R}^{2d},[0,1])$ satisfying $\chi_0=1$ on $B(0,1)$ and $\chi_0(z)=0$ for $|z|\ge2$, we associate
$$\chi^\delta_0(z) = \chi_0\Big(\frac{z_1-z}{\delta}\Big).$$
And we consider χ 4 a smooth fixed cut-off (δ-independent).
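As a purely illustrative aside (not part of the original argument), the $\delta$-scaling of these cut-offs and the derivative bound $|\partial^\gamma_z\chi^\delta_j|\le C_\gamma\delta^{-|\gamma|}$ can be visualized numerically. The following minimal Python sketch (the bump profile and function names are our own choices, not taken from the paper) builds a one-dimensional analogue $\chi^\delta(x)=\chi_0(x/\delta)$ and checks that its $k$-th derivative grows like $\delta^{-k}$:

```python
import numpy as np

def bump(x):
    """Smooth compactly supported bump: exp(-1/(1-x^2)) on (-1,1), 0 outside."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def sup_kth_derivative(delta, k):
    """Crude finite-difference estimate of sup |d^k/dx^k bump(x/delta)|."""
    x = np.linspace(-2.0 * delta, 2.0 * delta, 20001)
    y = bump(x / delta)
    for _ in range(k):
        y = np.gradient(y, x)  # successive numerical differentiation
    return np.max(np.abs(y))

# The product sup|d^k chi^delta| * delta^k stays O(1) as delta shrinks,
# mirroring the bound |d^k chi^delta| <= C_k delta^{-k}.
for delta in (0.4, 0.2, 0.1, 0.05):
    for k in (1, 2):
        print(delta, k, sup_kth_derivative(delta, k) * delta ** k)
```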
5.2.2.
Adiabatic decoupling close to the gap. Omitting the mode index, we set
(5.3) $\widetilde H^{adia,N,\varepsilon}(t) = \chi_4\big(h(t) + \varepsilon H^{adia}_1(t)\big) + \chi^\delta_3\sum_{2\le j\le N}\varepsilon^j H^{adia}_j(t).$
Notice that, because the crossing is smooth, the eigenvalues $h_\ell$, the eigenprojectors $\pi_\ell$ and the first adiabatic correctors $H^{adia}_{\ell,1}$ are smooth, even in a neighborhood of $(t^\star,\zeta^\star)$. Let $\mathcal U^{adia,N,\varepsilon}(t,s)$ be the quantum propagator associated with the Hamiltonian $\widetilde H^{adia,N,\varepsilon}(t)$ (omitting once again the index $\ell=1$). The following result is a consequence of the sharp estimates given in [Bouzouina, Uniform semi-classical estimates for the propagation of quantum observables] concerning propagation of quantum observables (see (ii) of the Egorov Theorem C.1).
Proposition 5.7. Consider the cut-off functions $\chi^\delta_0$ and $\chi^\delta_2$ defined above and set, for $t\in[t_1,t^\star-c\delta]$,
$$\mathrm{op}_\varepsilon\big(\chi^\delta_0(t,t_1)\big) := \mathcal U^{adia,\varepsilon,N}(t,t_1)\,\mathrm{op}_\varepsilon(\chi^\delta_0)\,\mathcal U^{adia,\varepsilon,N}(t_1,t).$$
Then, for any $M\ge1$, $z\in\mathbb{R}^{2d}$ and $t\in[t_1,t^\star-c\delta]$, we have
$$(1-\chi^\delta_2)\,\chi^\delta_0(t,t_1,z) = \Big(\frac{\varepsilon}{\delta^2}\Big)^M\zeta_M(t) \qquad\text{with } \zeta_M(t,z)\in S_{\delta^2}(D).$$
Revisiting the proof of Proposition 5.2, using Lemma 4.3 for the formal series Π ε ∈ S 0 δ 2 (D) and using (3) of Theorem 4.9 about H ε,adia , we obtain the following result.
Proposition 5.8 (adiabatic decoupling -II). With the previous notations, we have the following properties.
(i) For $t_1\le t\le t^\star-\delta$, we have
(5.4) $\big(i\varepsilon\partial_t - \widehat{H^\varepsilon}(t)\big)\,\mathrm{op}_\varepsilon\big(\chi^\delta_1\Pi^{N,\varepsilon}(t)\big)\,\mathrm{op}_\varepsilon(\chi^\delta_2) = \mathrm{op}_\varepsilon\big(\chi^\delta_1\Pi^{N,\varepsilon}(t)\big)\big(i\varepsilon\partial_t - \mathrm{op}_\varepsilon(\widetilde H^{adia,N,\varepsilon}(t))\big)\,\mathrm{op}_\varepsilon(\chi^\delta_2) + O\Big(\big(\tfrac{\varepsilon}{\delta^2}\big)^{N+1}\delta^{-\kappa_0}\Big),$
where $\kappa_0\in\mathbb{N}$ is $N$-independent.
(ii) Set, for $\ell=1,2$,
$$\psi^{\varepsilon,N}_\ell(t) = \mathrm{op}_\varepsilon\big(\chi^\delta_1\Pi^{N,\varepsilon}_\ell(t)\big)\,\mathcal U^{adia,N,\varepsilon}_\ell(t,t_1)\,\mathrm{op}_\varepsilon\big(\chi^\delta_0\Pi^{N,\varepsilon}_\ell(t_1)\big)\,\psi^\varepsilon(t_1),$$
where $\psi^\varepsilon(t_1) = \mathcal U^\varepsilon_H(t_1,t_0)\psi^\varepsilon(t_0)$. Then we have, for $N\ge2$ and for all $t\in[t_1,t^\star-\delta]$,
$$\mathcal U^\varepsilon_H(t,t_0)\psi^\varepsilon_0 = \psi^{\varepsilon,N}_1(t) + \psi^{\varepsilon,N}_2(t) + O\Big(\big(\tfrac{\varepsilon}{\delta^2}\big)^{N+1}\delta^{-\kappa_0}\Big),$$
where $\mathcal U^{adia,N,\varepsilon}_\ell$ is the propagator associated with the Hamiltonian $\widetilde H^{adia,N,\varepsilon}_\ell(t)$.
Remark 5.3 also applies to this proposition; furthermore, here the coefficients of the expansion of $\psi^{\varepsilon,N}_j(t)$ in $\sqrt\varepsilon$ are $\delta$-dependent at order $\varepsilon^{k/2}$ for $k\ge2$. Note that the integer $\kappa_0$ stems from the symbolic calculus estimates of Theorem B.1.
5.2.3. Application to wave packets. As in Theorem 5.4, the previous results have consequences for wave packet propagation and give an asymptotic expansion mod $O(\varepsilon^\infty)$ for any $\alpha<1/2$ if $\delta\approx\varepsilon^\alpha$. In other words, the super-adiabatic approximation is valid for times $t$ such that $|t-t^\star|\ge\varepsilon^{1/2-\eta}$, for any $\eta>0$. The results of Appendix C give the following result.
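To make the choice $\delta\approx\varepsilon^\alpha$ explicit (a small arithmetic aside, not in the original text): with $\delta=\varepsilon^\alpha$ the error term of Proposition 5.8 reads
$$\Big(\frac{\varepsilon}{\delta^2}\Big)^{N+1}\delta^{-\kappa_0} \ =\ \varepsilon^{(1-2\alpha)(N+1)-\alpha\kappa_0},$$
so for any fixed $\alpha<\tfrac12$ the exponent tends to $+\infty$ as $N\to\infty$; choosing $N$ large enough therefore yields a remainder $O(\varepsilon^K)$ for any prescribed $K$, which is the precise meaning of the $O(\varepsilon^\infty)$ statement above.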
Theorem 5.9. Consider
ψ ε 1 := ψ ε (t 1 ) = WP ε z1 (ϕ ε 1 ), ϕ ε 1 ∈ S(R d ) modulo O(ε ∞ ).
There exist $N_0\in\mathbb{N}$ and two families of differential operators $\big(B_{\ell,j}(t)\big)_{j\in\mathbb{N}}$, $\ell\in\{1,2\}$, such that, setting for $t\in[t_1,t^\star-\delta]$
(5.5) $\psi^{\varepsilon,N}_\ell(t) = e^{\frac i\varepsilon S_\ell(t,t_0,z_0)}\,\mathrm{WP}^\varepsilon_{z_\ell(t)}\Big(\mathcal R_\ell(t,t_0)\,\mathcal M[F_\ell(t,t_0)]\sum_{0\le j\le 2N}\varepsilon^{j/2}\,B_{\ell,j}(t)\varphi_0\Big),$
one has the following property: for all $k\in\mathbb{N}$, $N\in\mathbb{N}$, there exists $C_{N,k}>0$ such that the solution $\psi^\varepsilon(t)$ of (1.1) satisfies, for all $t\in[t_1,t^\star-\delta]$,
$$\big\|\psi^\varepsilon(t) - \big(\psi^{\varepsilon,N}_1(t)+\psi^{\varepsilon,N}_2(t)\big)\big\|_{\Sigma^k_\varepsilon} \ \le\ C_{N,k}\,\Big(\frac{\varepsilon}{\delta^2}\Big)^{N+1}\delta^{-\kappa_0}.$$
Moreover the operators $B_{\ell,j}(t)$ are differential operators of degree $\le 3j$ with time-dependent smooth vector-valued coefficients and satisfy (1.40) and (1.41).
Propagation through the crossing set
We now use the rough reduction of Section 4.2 to treat the zone around the crossing. We fix the point $(t^\star,\zeta^\star)\in\Upsilon$ and consider trajectories $z_1(t)$ and $z_2(t)$ arriving simultaneously at time $t^\star$ at the point $\zeta^\star$. We consider $N\in\mathbb{N}$ and we set $\psi^{\varepsilon,N}_\ell(t) = \pi^{\varepsilon,N}_\ell(t)\,\psi^\varepsilon(t)$, $\ell\in\{1,2\}$. By Theorem 4.5, if $k\in\mathbb{N}$, the solution $\psi^\varepsilon(t)$ of the Schrödinger equation (1.1) satisfies in $\Sigma^k_\varepsilon$
(5.6) $\psi^\varepsilon(t) = \psi^{\varepsilon,N}_1(t) + \psi^{\varepsilon,N}_2(t) + O(\varepsilon^{N+1}).$
Our aim in this section is to determine $\psi^\varepsilon(t^\star+\delta)$ in terms of $\psi^\varepsilon(t^\star-\delta)$ by using the description (5.6) of $\psi^\varepsilon(t)$.
The family $\psi^{\varepsilon,N} = {}^t(\psi^{\varepsilon,N}_1,\psi^{\varepsilon,N}_2)$ satisfies
(5.7) $i\varepsilon\partial_t\psi^{\varepsilon,N} = \mathcal H^{\varepsilon,N}(t)\,\psi^{\varepsilon,N}$ with $\mathcal H^{\varepsilon,N}(t,z) := \begin{pmatrix} h^{\varepsilon,N}_1(t,z) & 0\\ 0 & h^{\varepsilon,N}_2(t,z)\end{pmatrix} + \begin{pmatrix} 0 & W^{\varepsilon,N}(t)\\ W^{\varepsilon,N}(t)^* & 0\end{pmatrix}.$
According to Theorem 4.5, the Hamiltonian H ε is subquadratic (see Definition 1.1), thus for k, N ∈ N there exists C k,N > 0 such that for all ε > 0 and t ∈ I,
(5.8) W ε,N (t) L(Σ k+1 ε ,Σ k ε ) ≤ C k,N ε.
We have used here the fact that the asymptotic series W ε, starts with the term of order ε and we recall that the value of W 1 is given in (1.44).
Let us summarize the information about the data that comes from the preceding section. Let $\delta>0$; for all $s\in(t^\star-\delta,\,t^\star-\delta^2)$,
(5.9) $\psi^{\varepsilon,N}(s) = {}^t\big(\mathrm{WP}^\varepsilon_{z_1(s)}(\varphi^{\varepsilon,N}_1(s)),\ \mathrm{WP}^\varepsilon_{z_2(s)}(\varphi^{\varepsilon,N}_2(s))\big),$
with, for $\ell=1,2$,
$$\varphi^{\varepsilon,N}_\ell = \sum_{j=0}^{N}\varepsilon^{\frac j2}\,\varphi_{j,\ell}, \qquad \varphi_{j,\ell}\in\mathcal S(\mathbb{R}^d).$$
Our aim is to prove that the description of $\psi^{\varepsilon,N}(s)$ given in Equation (5.9) extends to $s = t^\star+\delta$ and to derive precise formulas for $\varphi_{j,\ell}(t^\star+\delta)$ when $j\in\{0,1\}$ and $\ell\in\{1,2\}$. We consider the Hamiltonians
$$\mathcal H^{\varepsilon,N}_{diag}(t,z) = \begin{pmatrix} h^{\varepsilon,N}_1(t,z) & 0\\ 0 & h^{\varepsilon,N}_2(t,z)\end{pmatrix} \qquad\text{and}\qquad \varepsilon\,\mathcal H^{\varepsilon,N}_{adiag}(t,z) = \begin{pmatrix} 0 & W^{\varepsilon,N}(t)\\ W^{\varepsilon,N}(t)^* & 0\end{pmatrix},$$
so that H ε,N = H ε,N diag + εH ε,N adiag . We fix N large enough and, for simplifying the notations, we drop the mentions of N in the following. We use the notations
U ε H (t, s) and U ε H diag (t, s)
for the propagators associated to the truncated Hamiltonians H ε,δ,N (t) and H ε,N diag (t) respectively, omitting the mention of δ in U ε H (t, s). The action of U ε H (t, s) on wave packets is described by the next Theorem on which we focus now. It gives a precise description of the action of U ε H (t + δ, t -δ) on a wave packet and describes the propagation of a wave packet through the crossing set, in particular the exchange of modes at the crossing points.
Theorem 5.10. Let $k,N,M\in\mathbb{N}$ with $M\le N$. Let $\delta>0$ be such that $\delta^2\ge\sqrt\varepsilon$. Then, there exist $C>0$ and an operator $\Theta^{\varepsilon,\delta}_M$ such that for all $\varepsilon\in(0,1)$,
(5.10) $\mathcal U^\varepsilon_{\mathcal H}(t^\star+\delta,t^\star-\delta) = \mathcal U^\varepsilon_{diag}(t^\star+\delta,t^\star)\,\Theta^{\varepsilon,\delta}_M\,\mathcal U^\varepsilon_{diag}(t^\star,t^\star-\delta) + R^{\varepsilon,\delta}_M$
with $\big\|R^{\varepsilon,\delta}_M\big\|_{\mathcal L(\Sigma^{k+M+1}_\varepsilon,\Sigma^k_\varepsilon)} \le C\delta^{M+1}$ and $\Theta^{\varepsilon,\delta}_M(t^\star) = \mathrm I + \sum_{1\le m\le M}\Theta^{\varepsilon,\delta}_{m,M}(t^\star)$.
Moreover, there exist $\varepsilon_0>0$ and a family of operators $(T^{\varepsilon,\delta}_{m,M})_{m\ge1}$ such that for all $\vec\varphi\in\mathcal S(\mathbb{R}^d,\mathbb{C}^2)$, $m\ge1$, $\varepsilon\in(0,\varepsilon_0)$,
(5.11) $\Theta^{\varepsilon,\delta}_{m,M}\,\mathrm{WP}^\varepsilon_{\zeta^\star}(\vec\varphi) = \mathrm{WP}^\varepsilon_{\zeta^\star}\big(T^{\varepsilon,\delta}_{m,M}\vec\varphi\big) + O\big(\sqrt\varepsilon\,\|\vec\varphi\|_{\Sigma^{k+2m+2}}\big)$
with
(5.12) $\big\|T^{\varepsilon,\delta}_{m,M}\vec\varphi\big\|_{\Sigma^k} \ \le\ c_{k,m,M}\,\varepsilon^{\frac m2}\,|\log\varepsilon|^{\max(0,m-1)}\,\|\vec\varphi\|_{\Sigma^{k+2m+1}}$
for some constants $c_{k,m,M}>0$. Besides, with the notation (1.43),
(5.13) $T^{\varepsilon,\delta}_{1,M} = \begin{pmatrix} 0 & W_1(t^\star,\zeta^\star)\,\mathcal T^{2\to1}\\ W_1(t^\star,\zeta^\star)^*\,\mathcal T^{1\to2} & 0\end{pmatrix}.$
We point out that some additional action effects will appear when applying $\mathcal U^\varepsilon_{\mathcal H}(t^\star+\delta,t^\star-\delta)$ to a wave packet via the operators $\mathcal U^\varepsilon_{diag}(t^\star+\delta,t^\star)$ and $\mathcal U^\varepsilon_{diag}(t^\star,t^\star-\delta)$. When applied to Gaussian wave packets, i.e. when the leading profiles in (5.9) are Gaussian, $\varphi_{0,\ell} = g^{\Gamma_\ell}$, the leading order correction term at time $t^\star+\delta$ due to the crossing is
$$\sqrt\varepsilon\,\begin{pmatrix} e^{\frac i\varepsilon S_1(t^\star+\delta,t^\star,\zeta^\star)+\frac i\varepsilon S_2(t^\star,t_0,z_0)}\,\mathrm{WP}^\varepsilon_{z_1(t^\star+\delta)}(\varphi_1)\\[4pt] e^{\frac i\varepsilon S_2(t^\star+\delta,t^\star,\zeta^\star)+\frac i\varepsilon S_1(t^\star,t_0,z_0)}\,\mathrm{WP}^\varepsilon_{z_2(t^\star+\delta)}(\varphi_2)\end{pmatrix}$$
with
$$\varphi_1 = \mathcal M[F_1(t^\star+\delta,t^\star,\zeta^\star)]\,W_1(t^\star,\zeta^\star)\,\mathcal T^{2\to1}\,\mathcal M[F_2(t^\star,t_0,z_0)]\,g^{\Gamma_2},$$
$$\varphi_2 = \mathcal M[F_2(t^\star+\delta,t^\star,\zeta^\star)]\,W_1(t^\star,\zeta^\star)^*\,\mathcal T^{1\to2}\,\mathcal M[F_1(t^\star,t_0,z_0)]\,g^{\Gamma_1}.$$
Recall that W 1 is the off-diagonal matrix described in (1.44).
The remainder of this section is devoted to the proof of Theorem 5.10. The use of Dyson series allows us to obtain the decomposition (5.10) (see Section 5.3.1). Then, the analysis of each term of the series is carried out in Sections 5.3.2 and 5.3.3. Finally, we recall in Section 5.3.5 how to compute explicitly the quantities $T^{\varepsilon,\delta}_{m,M}$ and $S_{1,m}$, which was already done in [Fermanian Kammerer, Propagation of wave packets for systems presenting codimension one crossings].
Before starting the proof, we introduce a cut-off
χ δ (t) = χ t-t δ , χ 0 ∈ C ∞ 0 ] -1, 1[, χ 0 (t) = 1 if |t| ≤ 1/2. We set H ε,δ,N (t) = H ε,N diag (t) + εχ δ (t -t )H ε,N adiag (t)
and we consider the propagator U ε H δ (t, s) associated with H ε,δ,N . We claim that if Theorem 5.10 holds for U ε H δ (t, s), then it also holds for U ε H (t, s). Indeed, we have for t ∈ [t + δ, t
+ δ 2 ], U ε H (t, t -δ) = U ε H δ (t +δ, t -δ)ψ ε,N (t -δ)+i t t -δ U ε H δ (t, s)(1-χ δ )(s-t ) H ε,N adiag (s)U ε H (s, t -δ)ds.
This formula allows to obtain (5.10) for U ε H (t, t -δ). It remains to consider the action of U ε H (t, t -δ) on asymptotic sum of wave packets We observe that in the support of the integral, |t -s| > δ 2 and -δ ≤ s -t ≤ δ 2 . Therefore, s ∈ [t -δ, t -δ 2 ] on the support of the integral. where we know that U ε H (s, t -δ) propagates wave packets, whence the expansion in wave packets To conclude, we observe that since |t -s| > δ 2 on the support of the integral term, we have
U ε H (t + δ, t -δ) = U ε H δ (t + δ, t -δ) + O(δ 2 ) = U ε H δ (t + δ, t -δ) + o( √ ε)
so the formula for the first two terms of the asymptotic assumptions are the same.
In view of these considerations, we focus in the next sections in proving Theorem 5.10 for the Hamiltonian H ε,δ,N (t).
Dyson expansion.
We perform a Dyson expansion via the Duhamel formula. A first use of Duhamel formula gives for t 1 , t 2 ∈ R,
(5.14) U ε H (t 2 , t 1 ) = U ε H diag (t 2 , t 1 ) + i -1 t2 t1 U ε H (t 2 , s 1 ) H ε adiag (s 1 ) U ε H diag (s 1 , t 1 )ds 1 .
With one iteration of the Duhamel formula, we obtain
U ε H (t 2 , t 1 ) = U ε H diag (t 2 , t 1 ) + i -1 t2 t1 U ε H diag (t 2 , s 1 ) H ε adiag (s 1 )U ε H diag (s 1 , t 1 )ds 1 - t2 t1 t2 s1 U ε H (t 2 , s 2 ) H ε adiag (s 2 ) U ε H diag (s 2 , s 1 ) H ε adiag (s 1 ) U ε H diag (s 1 , t 1 )ds 1 ds 2 .
With two iterations, we have
U ε H (t 2 , t 1 ) = U ε H diag (t 2 , t 1 ) + i -1 t2 t1 U ε H diag (t 2 , s 1 ) H ε adiag (s 1 )U ε H diag (s 1 , t 1 )ds 1 - t2 t1 t2 s1 U ε diag (t 2 , s 2 ) H ε adiag (s 2 ) U ε H diag (s 2 , s 1 ) H ε adiag (s 1 ) U ε H diag (s 1 , t 1 )ds 1 ds 2 - 1 i t2 t1 t2 s2 t2 s1 U ε H diag (t 2 , s 3 ) H ε adiag (s 3 )U ε diag (s 3 , s 2 ) H ε adiag (s 2 ) × U ε H diag (s 2 , s 1 ) H ε adiag (s 1 ) U ε H diag (s 1 , t 1 )ds 1 ds 2
After M iterations, M ∈ N, we have the Dyson formula
U ε H (t 2 , t 1 ) = U ε H diag (t 2 , t 1 ) I + 1≤m≤M (i) -m P(t2,t1) F ε (s 1 , • • • , s m , t 1 )ds m • • • ds 1 + R ε M (t 2 , t 1 ) with (5.15) F ε (s 1 , • • • , s m , t 1 ) = E(s m , t 1 )E(s m-1 , t 1 ) • • • E(s 2 , t 1 )E(s 1 , t 1 ),
where the operators E(s, t 1 ) are given by (5.16)
E(s, t 1 ) = U ε H diag (t 1 , s) H ε adiag (s)U ε H diag (s, t 1 ),
and the set of integration P(t 2 , t 1 ) ⊂ R M satisfies
P(t 2 , t 1 ) = {t 1 ≤ s M ≤ • • • ≤ s 2 ≤ s 1 ≤ t 2 }.
Besides, by (5.8) there exists a constant C > 0 such that
R ε M (t 2 , t 1 ) L(Σ k+M +1 ε ,Σ k ε ) ≤ C|t 2 -t 1 | M +1 .
We apply this formula to t 1 = t , t 2 = t + δ,
U ε H (t + δ, t -δ) = U ε H diag (t + δ, t ) I + 1≤m≤M (i) -m s M ∈R s M -∞ • • • s2 -∞ F ε (s 1 , • • • , s M , t )ds M • • • ds 1 U ε H diag (t , t -δ) + R ε M (t + δ, t )U ε H diag (t , t -δ) which gives equation (5.10) with Θ ε,δ m,M = (i) -m s M ∈R s M -∞ • • • s2 -∞ F ε (s 1 , • • • , s M , t )ds M • • • ds 1 and R ε,δ M = R ε M (t + δ, t -δ)U ε H diag (t , t -δ) satisfies R ε,δ M L(Σ k+M +1 ε ,Σ k ) ≤ Cδ M +1 .
The operators Θ ε,δ m,M contain all the information about the interactions between the modes h 1 and h 2 modulo O(δ ∞ ) when M goes to +∞.
In the next sections, we focus on understanding the action of $\Theta^{\varepsilon,\delta}_{m,M}$ on wave packets of the form
$$\mathrm{WP}^\varepsilon_{\zeta^\star}(\vec\varphi) = \begin{pmatrix}\mathrm{WP}^\varepsilon_{\zeta^\star}(\varphi_1)\\[2pt] \mathrm{WP}^\varepsilon_{\zeta^\star}(\varphi_2)\end{pmatrix}$$
and on proving equations (5.11), (5.12) and (5.13).
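Before entering the proof, it may help to see the mechanism behind the $\sqrt\varepsilon$ size of $T^{\varepsilon,\delta}_{1,M}$ on a toy model. The following Python sketch is purely illustrative: the $2\times2$ model Hamiltonian, the constants and the function names are ours and not taken from the paper. It integrates $i\varepsilon\partial_t\psi = H(t)\psi$ for a diagonal part whose eigenvalues cross linearly at $t=0$ and an off-diagonal coupling of size $\varepsilon$, and compares the transferred amplitude with the stationary-phase prediction of order $\sqrt\varepsilon$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-level analogue of the reduced system (5.7): eigenvalues +/- lam*t
# cross at t = 0, coupled by an off-diagonal term of size eps (h_diag + eps*W).
lam, w = 1.0, 0.7

def rhs(t, psi, eps):
    H = np.array([[lam * t, eps * w],
                  [eps * w, -lam * t]], dtype=complex)
    return (-1j / eps) * (H @ psi)

def transferred_amplitude(eps, T=3.0):
    # start fully polarized on the second mode, well before the crossing
    psi0 = np.array([0.0, 1.0], dtype=complex)
    sol = solve_ivp(rhs, (-T, T), psi0, args=(eps,),
                    rtol=1e-10, atol=1e-12, max_step=1e-3)
    return abs(sol.y[0, -1])  # amplitude transferred onto the first mode

for eps in (1e-2, 5e-3, 2.5e-3):
    # stationary-phase prediction: |a| ~ |w| * sqrt(pi * eps / lam) = O(sqrt(eps))
    print(eps, transferred_amplitude(eps), w * np.sqrt(np.pi * eps / lam))
```

In this toy setting the transferred amplitude indeed scales like $\sqrt\varepsilon$, which is the analogue of the $\varepsilon^{1/2}$ factor carried by the leading transfer term (5.13).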
Analysis of the matrices E(s, t ).
We have
E(s, t ) = 0 I(s, t ) I * (s, t ) 0 , s ∈ [t -δ, t + δ]
with (5.17)
I(s, t ) = U ε h ε,N 1 (t , s)χ δ (s -t )W ε,N (s) U ε h ε,N 2 (s, t ).
This operator combines conjugation of the pseudodifferential operator χ δ (s -t )W
I(s, t ) = U ε h ε,N 1 (t , s)W ε,N (s) U ε h ε,N 1 (s, t ) • U ε h ε,N 1 (t , s)U ε h ε,N 2 (s, t ) .
The conjugation of a pseudo by a propagator is perfectly understood and is described in our setting by the Egorov Theorem of Appendix C (with δ = 1). The operator
U ε h ε,N 2 (t , s)W ε,N (s)U ε h ε,N 2 (s, t ) has an asymptotic expansion. U ε h ε,N 2 (t , s)W ε,N (s) U ε h ε,N 2 (s, t ) = j≥1 ε j W j (s).
Similarly, for I * (s, t ), one writes
I * (s, t ) = U ε h ε,N 2 (t , s)W ε,N (s) * U ε h ε,N 2 (s, t ) • U ε h ε,N 2 (t , s)U ε h ε,N 1 (s, t ) .
Note that the actions of these two operators are perfectly adapted to the geometric context: $I(s,t^\star)$ picks the contribution of the second component (the lower one), which lives on the mode 2, transforms it into something related with the mode 1 (the upper one) via the operator $\mathcal U^\varepsilon_{h^{\varepsilon,N}_1}(t^\star,s)\,\mathcal U^\varepsilon_{h^{\varepsilon,N}_2}(s,t^\star)$, and then an operator related to the first mode acts on what is now a component living on this precise mode. And conversely for $I^*(s,t^\star)$.
The action on wave packets of two different propagators, one acting backwards and the other one forwards, has been studied in [Fermanian Kammerer, Propagation of wave packets for systems presenting codimension one crossings] (see Section 5.2 therein). Using the Egorov theorem, the action of scalar propagators on wave packets, and the precise computation of the operator
$$\mathcal U^\varepsilon_{h^{\varepsilon,N}_1}(t^\star,s)\,\mathcal U^\varepsilon_{h^{\varepsilon,N}_2}(s,t^\star)$$
performed therein (which involves the canonical transformation of the phase space $z\mapsto\Phi^{t^\star,s}_1\circ\Phi^{s,t^\star}_2(z)$), one obtains the analogue of Lemma 5.3 of that reference, which writes in our context as follows.
Lemma 5.11. Let $k\in\mathbb{N}$. There exist
- a smooth real-valued map $s\mapsto\Lambda(s)$ with $\Lambda(0)=0$, $\dot\Lambda(0)=0$, $\ddot\Lambda(0) = 2\mu^\star + \alpha^\star\cdot\beta^\star$,
- a smooth vector-valued map $s\mapsto z(s)=(q(s),p(s))$ with $z(0)=0$, $\dot z(0)=(\alpha^\star,\beta^\star)$,
- a smooth map $s\mapsto Q^\varepsilon(s)$ of pseudodifferential operators, mapping Schwartz functions to Schwartz functions, with
$$Q^\varepsilon(s) = \sum_{j=0}^{M}\varepsilon^j Q_j(s) + Q^\varepsilon_{M+1}, \qquad Q_0(0) = W_1(t^\star,\zeta^\star)^*,$$
such that for all $\varphi\in\mathcal S(\mathbb{R}^d)$,
$$I(s,t^\star)^*\,\mathrm{WP}^\varepsilon_{\zeta^\star}(\varphi)(y) = \mathrm{WP}^\varepsilon_{\zeta^\star}\Big(e^{\frac i\varepsilon\Lambda(s-t^\star)}\,Q^\varepsilon(s-t^\star)\,e^{ip_\varepsilon(s-t^\star)\cdot(y-q_\varepsilon(s-t^\star))}\,\varphi\big(y-q_\varepsilon(s-t^\star)\big)\Big) + R^\varepsilon\varphi$$
with, for some $c_M>0$,
$$\|Q_j(s)\varphi\|_{\Sigma^k}\le c_M\|\varphi\|_{\Sigma^{k+1}},\ \forall j\in\{1,\dots,M\},\qquad \|Q^\varepsilon_{M+1}(s)\varphi\|_{\Sigma^k}\le c_M\|\varphi\|_{\Sigma^{k+1+\kappa_0}},\qquad \|R^\varepsilon\varphi\|_{\Sigma^k}\le c_M\|\varphi\|_{\Sigma^{k+1+\kappa_0}},$$
where we have used the scaling notation $z_\varepsilon(s) = z(s)/\sqrt\varepsilon$ and where $\kappa_0$ is the universal constant of Theorem B.1.
A similar result holds for $I(s,t^\star)$ by replacing $W_1(t^\star,\zeta^\star)^*$ by $W_1(t^\star,\zeta^\star)$ and exchanging the roles of the modes $h_1$ and $h_2$.
5.3.3. Uniform estimates for the elements of the Dyson series. We now focus on the operators $\Theta^{\varepsilon,\delta}_{m,M}$. For $s\in[t^\star-\delta,t^\star+\delta]$, we define recursively the quantities $J_m(s)$ for $m\in\{1,\dots,M\}$ by
$$J_1(s) = -i\int_{-\infty}^{s} E(s_1,t^\star)\,ds_1 \qquad\text{and, for } m\ge2,\qquad J_m(s) = -i\int_{-\infty}^{s} E(s_m,t^\star)\,J_{m-1}(s_m)\,ds_m.$$
We recall that E 1 (s, t ) is supported on |t -s| ≤ δ due to the cut-off function χ δ (s -t ) that appears in (5.17). With these notations, Θ ε,δ m,M = J m (t + δ).
We are reduced to proving the existence of operators s → T ε,δ m (s) such that for all s ∈ [t -δ, t + δ] and ϕ ∈ S(R d , C 2 ), (5.18)
J m (s)WP ε ζ ( ϕ) = WP ε ζ (T ε,δ m (s) ϕ)
, with, for all $k\in\mathbb{N}$, the estimate (5.12). Note that we are omitting the index $M$ from the notation of Theorem 5.10.
If the functions T ε,δ m exist, they satisfy for s ∈ [t -δ, t + δ] the recursive equations
WP ε ζ T ε,δ m+1 (s) ϕ 2 = -i s -∞ I(s, t ) * WP ε ζ T ε,δ m (s) ϕ 1 ds and WP ε ζ T ε,δ m+1 (s) ϕ 1 = -i s -∞ I(s, t )WP ε ζ T ε,δ m (s) ϕ 2 ds.
Therefore, by Lemma 5.11, if the functions T ε,δ m exist, they satisfy for s ∈ [t -δ, t +δ] the recursive equations
T ε,δ m+1 (s) ϕ 2 = s -∞ e i ε Λ(s-t ) Q ε (s -t )e ipε(s-t )•(y-qε(s-t )) (T ε,δ m (s) ϕ) 1 (y -q ε (s -t ))ds
and some analogue equation for the other components.
At this stage of the proof, the operators $T^{\varepsilon,\delta}_m$ are defined by a recursive process that we are going to study in order to prove (5.12). Therefore, we focus on the analysis, for scalar-valued functions $\varphi$, of
I m ϕ := s -∞ e i ε Λ(s-t ) Q ε (s -t )e ipε(s-t )•(y-qε(s-t )) (T ε,δ m (s)ϕ)(y -q ε (s -t ))ds.
In the following sections, we prove (5.18) recursively:
(1) In Section 5.3.4, we prove that if $\bar m>1$ and if (5.18) holds for $m=1,2,\dots,\bar m-1$, then it also holds for $m=\bar m$.
(2) In Section 5.3.5, we prove (5.18) when $m=1$.
5.3.4. Proof of Lemma 5.12: the recursive argument. We use Lemma 5.11 to perform a recursive argument on the structure of $J_m(s)$.
Lemma 5.12. Assume there exists $m\in\{1,2,\dots,M-1\}$ such that for all $k\in\mathbb{N}$ we have (5.18) with the inequality (5.12) for $m$ and for the integers between 1 and $m$. Then, for all $k\in\mathbb{N}$, there exists a constant $C_{k,m+1}$ such that for all $\varphi\in\mathcal S(\mathbb{R}^d)$ we have
$$\|I^m_\varphi\|_{\Sigma^k} \ \le\ C_{k,m+1}\,\varepsilon^{\frac{m+1}{2}}\,|\log\varepsilon|^{\max(0,m)}\,\|\varphi\|_{\Sigma^{k+2(m+1)+1}}.$$
This lemma, together with the precise computation of J 1 (t + δ) (see [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF] and Section 5.3.5 below) concludes the proof of Theorem 5.10.
Note that at each step of the recursion one loses 2 degrees of regularity. One degree is lost because the operator $Q^\varepsilon$ may have linear growth, and another loss is due to an integration by parts that involves the term $y\cdot p_\varepsilon$ in the phase and the argument $q_\varepsilon$ inside the function $\varphi$. The initial loss of regularity when $m=1$ arises for similar reasons: one degree is due to the presence of the operator $Q^\varepsilon$ and the two other ones to integrations by parts.
Let us start by exhibiting basic properties of I ϕ . After the change of variables s = t + σ √ ε and letting appear a cut-off χ such that χ χ = χ, we write
I ϕ = √ ε (s -t )/ √ ε -∞ e i ε Λ(σ √ ε) χ(σ/λ)Q ε (σ √ ε)e ipε(σ √ ε)•(y-qε(σ √ ε)) × (T ε,δ m (t + σ √ ε)ϕ)(y -q ε (σ √ ε))dσ with λ = δ √ ε .
One sees that the change of variables has exhibited a power $\sqrt\varepsilon$, which is exactly what one wants to gain for the recursive process. However, even though the integrand is bounded, the size of the support of the integral is large: it is of size $\lambda = \delta/\sqrt\varepsilon$, which spoils that gain of $\sqrt\varepsilon$. This integral will turn out to be smaller than what this rough estimate gives, because of the oscillations of the phase. The proof then consists in integrations by parts. For this reason, we are interested in derivatives, and we observe that the recursive assumption yields
(5.19) $\|\partial_s T^{\varepsilon,\delta}_m(s)\varphi\|_{\Sigma^k} \ \le\ C_{k,m}\,\varepsilon^{\frac{m-1}{2}}\,|\log\varepsilon|^{\max(0,m-2)}\,\|\varphi\|_{\Sigma^{k+2(m-1)+1}}$
and similarly for the integers between 2 and $m$.
Analysis of the phase. We now analyze the phase of the integral $I^m_\varphi$. We set
φ ε (σ) = Λ(σ √ ε) - 1 2 p ε (σ √ ε) • (y -q ε (σ √ ε)) and L = β • y -α • D y .
We will use that $e^{isL}$ maps $\Sigma^k$ into itself continuously for all $s\in\mathbb{R}$ and $k\in\mathbb{N}$. Besides, for $\delta\le\delta_0$, $\delta_0>0$ small enough, we have
$$\Big|\frac{d}{ds}\phi^\varepsilon(s)\Big| \ \ge\ c_0|s|,\quad \forall|s|\le\lambda, \qquad\text{and}\qquad e^{i\phi^\varepsilon(s)} = \frac1s\,b^\varepsilon(s)\,\partial_s e^{i\phi^\varepsilon(s)} \quad\text{with } b^\varepsilon(s) := \frac{s}{i\partial_s\phi^\varepsilon(s)}.$$
The function $b^\varepsilon(s)$ is uniformly bounded, as well as its derivatives, for $|s|\ge1/2$. Then, following [Fermanian Kammerer, Propagation of wave packets for systems presenting codimension one crossings], we have
I m ϕ = √ ε (s -t )/ √ ε -∞ e iφ ε (σ) Q ε (σ √ ε)χ(σ/λ)e iLσ T ε,δ m (t + σ √ ε)ϕdσ
and there exists a smooth function f such that
$$\phi^\varepsilon(\sigma) = \mu^\star\,\sigma^2 + \sqrt\varepsilon\,\sigma^3 f(\sigma\sqrt\varepsilon).$$
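The quadratic part $\mu^\star\sigma^2$ of this phase is what ultimately produces the $\sqrt\varepsilon$ transfer coefficient in Section 5.3.5; we recall the classical Fresnel computation behind that step (a standard fact, recorded here only for convenience):
$$\int_{\mathbb{R}} e^{i\mu\sigma^2}\,d\sigma \ =\ \sqrt{\frac{\pi}{|\mu|}}\;e^{i\frac\pi4\,\mathrm{sgn}\,\mu}, \qquad \mu\in\mathbb{R}\setminus\{0\},$$
so that, after the change of variables $s = t^\star+\sigma\sqrt\varepsilon$, the oscillation $e^{\frac i\varepsilon\mu^\star(s-t^\star)^2}$ integrated over a time window of length $2\delta$ contributes a factor of order $\sqrt\varepsilon$, uniformly in $\delta$.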
At this stage of the proof, all the elements have been collected to perform the recursive argument.
Because of the considerations we have made on the support of the integral and because the phase φ ε (s) is oscillating for s far away from 0, we use the cut-off function χ to write
I m ϕ = I m,1 ϕ + I m,2 ϕ with I m,1 ϕ = √ ε (s -t )/ √ ε -∞ e iφ ε (σ) χ(σ)χ(σ/λ)Q ε (σ √ ε)e iLσ T ε,δ m (t + σ √ ε)ϕdσ
The compactly supported term. The term I m,1 ϕ is the easiest to deal with since it has compact support, independently of ε and δ. Therefore,
$$\|I^{m,1}_\varphi\|_{\Sigma^k} \ \le\ C\sqrt\varepsilon\sup_{s\in[t^\star-\delta,\,t^\star+\delta]}\|T^{\varepsilon,\delta}_m(s)\varphi\|_{\Sigma^{k+1}} \ \le\ C\,\varepsilon^{\frac{m+1}{2}}\,|\log\varepsilon|^{\max(0,m-1)}\,\|\varphi\|_{\Sigma^{k+2m+2}}.$$
Note that the presence of Q ε induces a loss of regularity of 1.
The oscillating term. For dealing with the term I m,2 ϕ , we take advantage of the oscillating phase for compensating the fact that the support is large and we perform integration by parts. We write
I m,2 ϕ = √ ε (s -t )/ √ ε -∞ ∂ σ e iφ ε (σ) 1 σ b ε (σ)(1 -χ)(σ)χ(σ/λ)Q ε (σ √ ε) × e iLσ (T ε,δ m (t + σ √ ε)ϕ)dσ = - √ ε (s -t )/ √ ε -∞ e iφ ε (σ) ∂ σ 1 σ b ε 1 (σ)e iLσ T ε,δ m (t + σ √ ε)ϕ dσ + √ ε e iφ ε (σ) 1 σ b ε 1 (σ)e iLσ σ= s -t √ ε T ε,δ m (s )ϕ with b ε 1 (σ) = (1 -χ)(σ)χ(σ/λ)Q ε (σ √ ε). Note that the operator-valued functions σ → b ε 1 (σ) and σ → 1 σ b ε 1 (σ) are bounded form Σ k+1 ε to Σ k ε
(with a loss due to the presence of the operator Q ε ), and similarly for its derivatives. We write
I m,2 ϕ = - √ ε (s -t )/ √ ε -∞ e iφ ε (σ) ∂ σ 1 σ b ε 1 (σ) e iLσ (T ε,δ m (t + σ √ ε)ϕ)dσ - √ ε (s -t )/ √ ε -∞ e iφ ε (σ) 1 σ b ε 1 (σ)e iLσ (iLT ε,δ m (t + σ √ ε)ϕ + √ ε∂ s T ε,δ m (t + σ √ ε)ϕ)dσ + √ ε e iφ ε (σ) 1 σ b ε 1 (σ)e iLσ σ= s -t √ ε T ε,δ m (s )ϕ
and we check
I m,2 ϕ Σ k ≤ C √ ε × sup s∈[t -δ,t +δ] T ε,δ m (s)ϕ Σ k+1 + ( LT ε,δ m (s)ϕ Σ k+1 + √ ε ∂ s T ε,δ m (s)ϕ Σ k+1 ) 1 2 ≤|s|≤ δ √ ε dσ σ ≤ C √ ε ε m 2 | log ε| max(0,m-1) ϕ Σ k+2m+3 + ε| log ε|ε m-1 2 | log ε| max(0,m-1) ϕ Σ k+2m ≤ Cε m+1 2 | log ε| max(0,m) ϕ Σ k+2(m+1)+1 .
We point out that it is at this very last stage that we lose a $|\log\varepsilon|$ factor. Note also that the loss of regularity in the recursive process is covered by the $m\to m+1$ process. It will appear in the initialization step ($m=1$), in which we will get rid of the logarithmic loss $|\log\varepsilon|$ by trading it for a loss of regularity.
5.3.5. Proof of Lemma 5.12: the initialization of the recursion. For initializing the recursive process, we have to study J 1 (s), i.e. the integral I m ϕ replacing the transfer operator of the integrand by I, the identity operator. We obtain
I m=1 ϕ = √ ε (s -t )/ √ ε -∞ e iφ ε (σ) Q ε (σ √ ε)χ(σ/λ)e iLσ ϕ dσ.
Besides the estimate of the norm of I m=1 ϕ , we also want to calculate the leading order term when s = t + δ. The main difference with the preceding analysis is that we can push the integration by parts at any order because the integrand is simpler. Lemma 5.13. Let k ∈ N. Then, there exists a constant C k,1 such that for all ϕ ∈ S(R d )
I m=1 ϕ Σ k ≤ C k,1 √ ε ϕ k+3 .
Moreover, there exists an operator
Θ ε,δ = Θ ε,δ 1 + √ εΘ ε,δ 2
and a constant c k,M such that for all M ∈ N,
I m=1 ϕ s =t +δ - √ εW 1 (t , ζ ) * T 1→2 ϕ -ε Θ ε,δ ϕ Σ k ≤ C k,M √ ε √ ε δ M +1 ϕ Σ k+M +3 with for = 1, 2, Θ ε,δ ϕ Σ k ≤ c k,M ϕ Σ k+3 .
We recall that $\mathcal T^{1\to2}$ is the transfer operator defined in (1.43). In [Fermanian Kammerer, Propagation of wave packets for systems presenting codimension one crossings], this lemma has already been proved with $M=1$; we improve this result here. This terminates the proof of Theorem 5.10, with $\big(T^{\varepsilon,\delta}_{1,M}\big)_2 = W_1(t^\star,\zeta^\star)^*\,\mathcal T^{1\to2}$.
Proof. Following the same lines of proofs than above, we write
I m=1 ϕ = I 1,1 ϕ + I 1,2 ϕ with I 1,1 ϕ = √ ε (s -t )/ √ ε -∞ e iφ ε (σ) χ(σ)χ(σ/λ)Q ε (σ √ ε)e iLσ ϕdσ.
We point out that we use here the same notation than in the preceding section for a simpler integrand. Like before, this term satisfies the estimate
I 1 ϕ Σ k ≤ C √ ε ϕ Σ k+1 .
We proceed to integration by parts in I 2 ϕ . We obtain
I 1,2 ϕ = - √ ε (s -t )/ √ ε -∞ e iφ ε (σ) ∂ σ 1 σ b ε 1 (σ) e iLσ ϕdσ -i √ ε (s -t )/ √ ε -∞ e iφ ε (σ) 1 σ b ε 1 (σ)e iLσ L ϕdσ + √ ε e iφ ε (σ) 1 σ b ε 1 (σ)e iLσ σ= s -t √ ε ϕ.
We write
∂ σ 1 σ b ε 1 (σ) = b ε 2 (σ) + 1 σ b ε 3 (σ) with b ε 3 (σ) = ∂ σ b ε 1 (σ) + √ ε(1 -χ)(σ)χ(σ/λ)∂ s Q ε ( √ εσ).
where the maps σ
→ b ε 2 (σ), σ → b ε 3 (σ) and σ → 1 σ b ε 3 (σ) are bounded from Σ k+1 ε to Σ k ε (with a loss of regularity because of the presence of Q ε (s)) with the additional property R b ε 2 (σ) L(Σ k+1 ε , Σ k ε ) dσ < c 0 < +∞
for some constant c 0 independent of ε and δ. Therefore, in Σ k ,
I 1,2 ϕ = - √ ε (s -t )/ √ ε -∞ e iφ ε (σ) 1 σ (b ε 3 (σ) + ib ε 1 (σ)L)e iLσ ϕdσ + O( √ ε ϕ Σ k+1 ).
Another integration by parts gives
I 1,2 ϕ = √ ε (s -t )/ √ ε -∞ e iφ ε (σ) ∂ σ 1 σ 2 b ε (σ)(b ε 3 (σ) + ib ε 1 (σ)L) e iLσ ϕdσ - √ ε e iφ ε (σ) 1 σ 2 b ε (σ)(b ε 3 (σ) + ib ε 1 (σ)L)e iLσ ϕ σ= s -t √ ε + O( √ ε ϕ Σ k+1 ) = O( √ ε ϕ Σ k+3 ),
since the integrand has gained integrability. Note that it is at that very place that we have a loss of 3 momenta and derivatives in the estimate. We have obtained the first inequality that allows to initiate the recursive process of the preceding section. It remains to focus on the case s = t + δ.
We now consider the operator
I ϕ = 1 √ ε I m=1 ϕ s =t +δ -W 1 (t , ζ )T 1→2 .
Note first that by the construction of the function χ,
I m=1 ϕ s =t +δ = √ ε R e iφ ε (σ) χ(σ/λ)Q ε (σ √ ε)e iLσ ϕdσ
Following [START_REF] Fermanian Kammerer | Propagation of wave packets for systems presenting codimension one crossings[END_REF] Section 5.3, we first transform the expression I m=1 ϕ s =t +δ by performing the change of variable
z = σ(1 + √ εσf (σ √ ε)/µ ) 1/2
and observe that σ = z(1
+ √ εzg 1 (z √ ε)) and ∂ σ z = 1 + √ εzg 2 (z √ ε)
for some smooth bounded functions g 1 and g 2 with bounded derivatives. Note that we have used that σ √ ε is of order δ, thus small, in the domain of the integral. Besides, there exists a family of operator Q
ε (z) such that Q ε (σ √ ε) = Q ε (z √ ε) with Q ε (0) = Q ε (0) and Q 0 (0) = W 1 (t , ζ
) and a compactly supported function χ, such that
I m=1 ϕ s =t +δ = √ ε R e iµ z 2 χ(z/λ) Q ε (z √ ε)e iz(1+ √ εzg1(z √ ε)) L ϕ dz 1 + √ εzg 2 (z √ ε) .
A Taylor expansion allows to write
Q ε (z √ ε)e i √ εz 2 g1(z √ ε)) L 1 1 + √ εzg 2 (z √ ε) = Q 0 (0) + √ εz( Q ε 1 (z √ ε) + z Q ε 2 (z √ ε))
for some smooth operator-valued maps z → Q
ε j (z √ ε) mapping S(R d ) into itself, such that for all ϕ ∈ S(R d ) the family Q ε j (z √ ε)ϕ Σ k ≤ c j ϕ Σ k+2
(because of the loss of regularity involved by L and Q ε ) We obtain
I ϕ = I 1 ϕ + I 2 ϕ with I 1 ϕ = √ ε R z e iµ z 2 χ(z/λ)( Q ε 1 (z √ ε) + z Q ε 2 (z √ ε))e izL ϕdz, I 2 ϕ = Q 0 (0) R e iµ z 2 (1 -χ)(z/λ) e izL ϕdz.
Let us study I 1 ϕ . Arguing by integration by parts as previously, we obtain
I 1 ϕ = - √ ε 2iµ R e iµ z 2 d dz χ(z/λ)( Q ε 1 (z √ ε) + z Q ε 2 (z √ ε)e izL dz = - ε 2iµ R e iµ z 2 χ(z/λ)∂ z Q ε 1 (z √ ε)e izL dz - √ ε 2iµ R χ(z/λ)( e iµ z 2 ∂ z Q ε 2 (z √ ε) + √ εz∂ z Q ε 2 (z √ ε)e izL dz - √ ε 2iµ R e iµ z 2 χ (z/λ) λ -1 Q ε 1 (z √ ε) + z λ Q ε 2 (z √ ε) e izL dz.
One then performs M + 2 integration by parts in the last term of the right-hand side that is supported in |z| > λ 2 and we obtain
I 1 ϕ = - ε 2iµ R e iµ z 2 χ(z/λ)(∂ z Q ε 1 (z √ ε) + 2∂ z Q ε 2 (z √ ε))e izL dz - ε 2iµ R χ(z/λ) e iµ z 2 z∂ 2 z Q ε 2 (z √ ε)e izL dz + O √ ε ε δ M +1 ϕ Σ k+M +3 = √ ε Θ ε,δ 1 ϕ + O ε δ M ϕ Σ k+M +3 + ε (2iµ ) 2 R e iµ z 2 ∂ z χ(z/λ)∂ 2 z Q ε 2 (z √ ε)e izL dz = √ ε Θ ε,δ 1 ϕ + ε Θ ε,δ 2 ϕ + O ε δ M ϕ Σ k+M +3 with Θ ε,δ 1 Σ k ≤ c δ ϕ Σ k+3 .
It remains to observe that the term I 2 ϕ satisfies after M + 1 integration by parts
I 2 ϕ Σ k ≤ C √ ε δ M +1 ϕ Σ k+M +3 .
Propagation of wave packets -Proof of Theorem 1.21
Let k ∈ N and let be ψ ε 0 a polarized wave packet as in (1.39):
ψ ε 0 = V 0 WP ε z0 (f 0 ) with f 0 ∈ S(R d ) and V 0 ∈ C m .
Let δ > 0. By Theorem 5.9, for t ∈ [t 0 , t -δ], and in Σ k ε , ψ ε (t) is an asymptotic sum of wave packets and writes
ψ ε (t) = ψ ε,N 1 (t) + ψ ε,N 2 (t) + O ε δ 2 N δ -κ0 .
with ψ ε,N (t) given by (5.5).
We now take the vector
ψ ε,N (t -δ) = t ψ ε,N 1 (t -δ), ψ ε,N 2 (t -δ) , ψ ε,N (t -δ) = ψ ε,N (t -δ), = 1
, 2, as initial data in the system (5.7). It is a sum of N wave packets. By construction, in particular because of the linearity of the equation, we have for all t ∈ [t -δ, t + δ],
ψ ε (t) = ψ ε,N 1 (t) + ψ ε,N 2 (t) + O ε δ 2 N δ -κ0
in Σ k ε . When t = t + δ, we deduce from Theorem 5.10 that in Σ k ε ,
ψ ε,N (t + δ) = U ε diag (t + δ, t ) I + 1≤m≤M Θ ε,δ m,M U ε diag (t , t -δ)ψ ε,N (t -δ) + O(δ M ).
By Proposition C.5 of the Appendix, ψ ε,N (t + δ) is a sum of wave packets
ψ ε,N (t + δ) = 0≤m≤M ε m 2 ψ ε,m,M,N (t + δ)
where each term ψ ε,m,M,N (t + δ) involves a term of action. When m = 0 and m = 1, these terms have been computed precisely:
• If m = 0, for = 1, 2, ψ ε,0,M,N (t + δ) = e i ε S (t +δ,t -δ,ζ ) WP ε Φ t +δ,t 0 (z0) M[F (t + δ, t 0 , z 0 )]π (t 0 , z 0 ) V 0 f 0 where we have used the property of the scalar propagation of wave packets. where W 1 is the off-diagonal matrix computed in (1.44). At that stage of the proof, we have obtained that ψ ε (t + δ) is a sum of wave packet in Σ k ε up to O ε δ 2 N δ -κ0 + δ M +1 and we know precisely the terms of order ε 0 and ε 1 2 . For concluding, we take the vector ψ ε app (t + δ) := ψ ε,N 1 (t + δ) + ψ ε,N 2 (t + δ) as initial data at time t = t + δ in the equation (1.1). The function ψ ε app (t + δ) is an approximation of ψ ε (t + δ) at order O ε δ 2 N δ -κ0 + δ M +1 in Σ k ε . By construction and because of the linearity of the equation, for all times t ∈ [t + δ, t 0 + T ],
ψ ε (t) = U H (t, t + δ)ψ ε app (t + δ) + O ε δ 2 N δ -κ0 + δ M +1 .
We then apply Theorem 5.9 between times $t^\star+\delta$ and $t$. Indeed, the classical trajectories involved in the construction do not meet $\Upsilon$ again and we are in an adiabatic regime, as in Theorem 5.9. This concludes the proof of Theorem 1.21.
The eigenvalues of $H$ are $\pm\rho$ and the eigenprojector associated with the eigenvalue $\rho$ is
$$\pi(x) = \frac12\begin{pmatrix} 1+\cos\theta(x) & \sin\theta(x)\\ \sin\theta(x) & 1-\cos\theta(x)\end{pmatrix}.$$
Its derivative
$$\pi'(x) = \frac12\,\theta'(x)\begin{pmatrix} -\sin\theta(x) & \cos\theta(x)\\ \cos\theta(x) & \sin\theta(x)\end{pmatrix}$$
is not bounded. On the other side, $\rho$ and $H$ are sub-quadratic. Indeed, for $|x|>1$, the derivatives of the coefficients of $H$ are of the form
$$\frac{1}{x^3}\Big(p_1\big(\tfrac1x,\ln x\big)\cos\theta(x) + p_2\big(\tfrac1x,\ln x\big)\sin\theta(x)\Big)$$
for p 1 and p 2 two polynomial functions of two variables and thus bounded.
A manner of controlling the growth of the potential consists in requiring a lower bound on the gap function $f$ at infinity. We obtain a control of the form $\langle z\rangle^{n_0+2(\ell-1)}$. Besides, it allows us to perform a recursive argument by writing, for $\gamma\in\mathbb{N}^{2d}$,
∂ γ ∂ 2 ij h Tr C m,m (π) = ∂ γ (Tr C m,m (∂ i π∂ j H) + Tr C m,m (π∂ ij H)) - 2≤|α|,α≤γ c α ∂ γ-α ∂ 2 ij h Tr C m,m (∂ α π) = α≤γ c α Tr C m,m (∂ γ-α ∂ i π ∂ α ∂ j H) + Tr C m,m (∂ γ-α π∂ α ∂ ij H) - 2≤|α|,α≤γ c α ∂ γ-α ∂ 2 ij h Tr C m,m (∂ α π)
for some coefficients c α . One can then conclude recursively to |∂ γ ∂ 2 ij h| ≤ c γ z (|γ|+1)n0+2( -1) .
These two Lemmata allow to derive the consequences of Assumptions 1.4 for a Hamiltonian H ε = H 0 + εH 1 . We now fix m = 2.
Proposition A.4. Assume that H ε = H 0 + εH 1 satisfies the Assumptions 1.4. Then, for j ∈ {1, 2} we have the following properties:
(1) For all γ ∈ N 2d with |γ| ≥ 2, there exists a constant C γ > 0 such that ∀(t, z) ∈ I × R 2d , |∂ γ z π j (t, z)| ≤ C γ z |γ|n0 and |∂ γ z h j (t, z)| ≤ C γ z (|γ|-1)n0 . (2) If moreover n 0 = 0 in (1.6), then the maps z → ∂ γ π j (t, z) and z → ∂ γ z h j (t, z) for |γ| ≥ 2 are bounded. As a consequence, the Hamiltonian trajectories Φ t0,t h (z) are globally defined for all z ∈ R. Besides, there exists C > 0 such that |Φ t0,t
hj (z)| ≤ C|z|e C|t-t0| and the Jacobian matrices F j (t, z) = ∂ z Φ t,t0 hj (z) (see (1.12)) satisfy F j (t, z) C 2d,2d ≤ Ce C|t-t0| .
Remark A.5. Note that under the assumptions of Proposition A.4, for all j ∈ {1, • • • , 2d}, the matrices
f ∂ zj π 1 = -f ∂ zj π 2 = 1 2 ∂ zj (H -v) -∂ zj f (π 2 -π 1 )
are bounded. This comes from the differentiation of the relation H = vI + f (π 2 -π 1 ).
Proof. Note first that Point 2 is a consequence of Point 1. We thus focus on Point 1. We use that π 2 is the projector of the matrix H 0 -vI for the eigenvalue f . And the matrix H 0 -vI satisfies the assumptions of Lemma A.3 with = 1. Therefore, we have for (t, z) ∈ I × R d with |z| ≥ 1 and for γ ∈ N d .
|∂ γ z π j (t, z)| ≤ C γ z |γ|n0 and |∂ γ z f (t, z)| ≤ C γ z (|γ|-1)n0 . One concludes by observing that the function v = 2Tr(H) is subquadratic, whence the property of h 2 = v + f . One argues similarly for h 1 .
We close this Section with the proof of Lemma 1.10
Proof of Lemma 1.10. The map t → R (t, t 0 , z) is valued in the set of unitary maps because the matrix H adia ,1 is self adjoint. Besides, the map (t, z) → Z (t, z) = π (Φ t,t0 (z)R (t, t 0 , z)π ⊥ (z) satisfies the ODE Proof of Lemma B.3. The lemma is proved in a standard way, using integration by parts and stationary phase argument. For the sake of completeness, we give here a proof. We introduce a cut-off χ 0 ∈ C ∞ 0 (R) such that χ 0 (x) = 1 for |x| ≤ 1/2 and χ 0 (x) = 0 for |x| ≥ 1.
i∂ t Z = -i (π (∂ t π + {h, π })) • Φ t,
We split $I(\lambda)$ into two pieces and write $I(\lambda) = I_0(\lambda) + I_1(\lambda)$ with
I 0 (λ) = λ 2d R 2d ×R 2d
exp[-iλσ(u, v)]χ 0 (u 2 + v 2 )F (u, v)dudv,
I 1 (λ) = λ 2d R 2d ×R 2d
exp[-iλσ(u, v)](1 -χ 0 )(u 2 + v 2 ))F (u, v)dudv.
We notice that $(u,v)\mapsto\sigma(u,v)$ is a non-degenerate real quadratic form on $\mathbb{R}^{4d}$. Let us estimate $I_1(\lambda)$. We can integrate by parts with the differential operator
L = i |u| 2 + |v| 2 Ju • ∂ ∂v -Jv • ∂ ∂u ,
using that Le -iλσ(u,v) = Le -iλJu•v = λe -iλσ(u,v) . For I 1 (λ), the integrand is supported outside the ball of radius 1/ √ 2 in R 4d . Performing 4d + 1 integrations by parts for gaining enough decay to ensure integrability in (u, v) ∈ R 4d , we get a constant c d such that
|I 1 (λ)| ≤ c d sup u,v∈R 2d |µ|+|ν|≤4d+1 |∂ µ u ∂ ν v F (u, v)|.
To estimate I 0 (λ) we apply the stationary phase. The symmetric matrix of the quadratic form σ(u, v) is
A σ = 0 -J J 0 .
So, by the stationary phase theorem ([Hörmander, The Analysis of Linear Partial Differential Operators], Vol. I, Section 7.7), we obtain the existence of two constants
C.2. Asymptotic behavior of the propagator
In this section, we analyze the propagator U ε K (t, t in ) and compare it with U ε KS (t, t in ) the propagator for KS(t) = k(t)I m + εK 1 (t).
Lemma C.3. For all J ∈ N, there exists
W ε (t, t in ) = 0≤j≤J ε j W j (t, t in ) + ε J+1 R ε J (t, t in )
with W j (t, t in ) ∈ S -2j δ such that for all t ∈ R, U ε K (t, t in ) = W ε (t, t in )U ε KS (t, t in ). Besides, W 0 (t, t in ) = I m and for all γ ∈ N 2d , there exists C J,γ > 0 such that
sup z∈R d |∂ γ z R ε J (t, t in , z)| ≤ C J,γ δ -2(J+1+|γ|+κ0)
where κ 0 is the universal constant of Theorem B.1.
Remark C.4. Using the estimate (4.4), we deduce that for all k ∈ N, there exists
C k > 0 such that op ε (R ε J (t, t in )) L(Σ k ε ) ≤ C k ε δ 2 |γ| 2 δ -2(J+1+k+κ0) .
As a consequence, for δ = ε α with α ∈ (0,
C.3. Propagation of wave packets
When $\psi^\varepsilon_0$ is a wave packet, the action of $\mathcal U^\varepsilon_{KS}(t,t_1)$ on $\psi^\varepsilon_0$ can be described precisely. Following Section 14.2 of [Combescure, Coherent states and applications in mathematical physics], Theorem 77, we have the following result.
Proposition C.5. Assume we have
ψ ε = WP ε z0 (f ε ) V with V ∈ C m , f ε = 0≤j≤J ε j/2 f j , f j ∈ S(R d ).
There exists a family ( U j (t)) j≥0 defined on the interval I δ such that (i) For all j ∈ N and t ∈ I δ , U j (t) ∈ S(R d ), (ii) For all k, j ∈ N, there exists a constant C k,j such that sup
t∈I δ sup |α|+|β|=k x α ∂ β x U j (t) L ∞ ≤ C k,j δ -j .
(iii) For all k ∈ N and N ∈ N, there exists C k,N and N k such that for all t ∈ R, we have ∂ z H 1 (s, z s )op 1 (F (t in , s)z)ds, (C.9) F (t, t in ) is the stability matrix for the flow z t := Φ t,tin k (z 0 ) (see (1.12)) and R(t, t in ) satisfies the equation (C.3).
U ε K (t,
Proof. Let k ∈ N. Using Lemma C.3 and the estimate (C.5), we obtain
U ε K (t, t in )ψ ε = W ε (t, t in ) U ε KS (t, t in )ψ ε = 0≤j≤J ε j W j (t, t in ) U ε KS (t, t in )ψ ε + ε J+1 R ε J (t, t in )U ε KS (t, t in )ψ ε = 0≤j≤J ε j W j (t, t in ) U ε KS (t, t in )ψ ε + O ε J+1 δ -2(J+1+κ0)
in Σ k ε (R d ). We then use the standard result of propagation of wave packets for KS(t) (see [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF])
U ε KS (t,
Definition 4 . 1 (
41 Symbol spaces). Let µ ∈ R and δ ∈ (0, 1]. (i) We denote by S µ δ (D) the set of smooth (matrix-valued) functions in D such |∂ γ z A(t, z)| ≤ C γ δ µ-|γ| , ∀(t, z) ∈ D. Notice that the set S δ (D) := S 0 δ (D) has the algebraic structure of a ring. (ii) We shall say that a formal series
0 and H adia 1 are 3 H
13 constructed, one uses a recursive argument: assume that one has constructed H adia j for 0 ≤ j ≤ N with H adia j ∈ S -2j δ for j ∈ {2, • • • , N } and such that has (4.21) holds up to O(ε N +1 ). Let us construct H adia N +1 . Setting as in Notation 4.adia,ε,N = N j=1 ε j H adia j
2 If m = 1, only the term with = 2 contributes andψ ε,1,M,N 2 (t + δ) = e i ε S2(t +δ,t ,ζ )+ i ε S1(t ,t0,z0) WP ε Φ t +δ,t 2 M[F 2 (t + δ, t , ζ )]W 1 (t , ζ ) * T 1→2 M[F 1 (t , t 0 , z 0 )]π 1 (t 0 , z 0 ) V 0 f 0 ,
Lemma A. 3 .
3 Let ∈ {1, 2}. Assume that the matrix-valued function H ∈ C ∞ (I × R d , C m,m ) satisfies (A.1). Assume that for all (t, z) ∈ I × R 2d , H(t, z) has a smooth eigenvalue h with smooth associated eigenprojectors π(t, z) of constant rank for |z| > m. Assume there exists C, n 0 > 0 such that for (t, z)∈ I × R d with |z| > m, dist (h(t, z), Sp(H(t, z)) \ {h(t, z)}) ≥ C z -n0 .Then, for all γ ∈ N 2d with |γ| ≥ 2, there exists a constant C γ > 0 such that∀(t, z) ∈ I × R 2d , |∂ γ z π(t, z)| ≤ C γ z |γ|n0+ -1 and |∂ γ z h(t, z)| ≤ C γ z (|γ|-1)n0+2( -1) . Proof. We work for |z| > m and fix j ∈ {1, 2}. The relation (A.2) also implies∂ j π(H -h) = (H -h)∂ j π = ∂ j (H -h)where we keep the notation ∂ j := ∂ zj for j ∈ {1, • • • , 2d}. Using that ∂ j π is off diagonal and (H -h) invertible on Range(1 -π), we deduce∂ j π = (1 -π)∂ j π π + π ∂ j π(1 -π) with (1 -π)∂ j π π = (H -h) -1 (1 -π)∂ j (H -h) and π ∂ j π(1 -π) = π ∂ j (H -h)(1 -π)(H -h) -1 = π∂ j (H -h)(H -h) -1 (1 -π).On the range of π, the resolvent (H -h) -1 is invertible, more precisely, there exists c > 0 such that(H -h) -1 (1 -π) L(Ran(1-π)) ≤ c dist (h, Sp(H) \ {h}) -1 ≤ c C z n0+ -1 , whence |∂ j π(t, z)| ≤ C z n0 .For analyzing the derivatives of π, one observes that the relation π = π 2 implies∂ γ π = π∂ γ π + ∂ γ π π + 1<|α|,|β|<|γ| c α,β ∂ α π∂ β πfor some coefficients c α,β . A recursive argument then gives the estimate on the growth of the eigenprojectors.Let us now consider the eigenvalue h. The relation Hπ = πH = hπ gives∂hπ = ∂Hπ -∂π(H -h) = π∂H -(H -h)∂π.Multiplying on both side by π, using (H -h)π = 0 and taking the matricial trace, we obtain∂h Tr C m,m (π) = Tr C m,m (π∂H).This implies |∂h| ≤ C z -1 . To get the relation on higher derivatives, we differentiate this relation, which gives for i, j ∈ N 2d ,∂ 2 ij h Tr C m,m(π) = ∂ i (Tr C m,m (π∂ j H)) -∂ j hTr C m,m (∂ i π). Since Tr C m,m (∂ i π) = 0, we are left with ∂ 2 ij h Tr C m,m (π) = Tr C m,m (∂ i π∂ j H) + Tr C m,m (π∂ ij H).
( 1 -
1 t0 Z , Z (t 0 , z) = 0, and thus coincides with the solution Z (t) = 0.with κ 0 = 4d + 2.The estimate 4.4 allows to evaluate the norms of the operators involved in Theorem B.1.Corollary B.2. If A ∈ S µ ε,δ and B ∈ S µ ε,δ , then A B ∈ S µ+µ ε,δ(see Remark 4.2) and for all N, k ∈ N, there exists a constant C N,k > 0 such thatop ε (R N (A, B; z; ε)) Σ k ε ≤ C N,k ε N +1 δ µ+µ -2(N +1+k+κ0) . Proof. By Fourier transform computations and application of the Taylor formula, we get the following formula for the remainder,(B.5) R N (A, B; z; ε) t) N R N,t (z; ε)dt,whereR N,t (z; ε) = (2πεt) -2d R 2d ×R 2d exp -i 2tε σ(u, v) σ N +1 (D u , D v )A(u + z)B(v + z)dudv.Notice that the integral is an oscillating integral as we shall see below. We now use Lemma B.3 for A, B ∈ S(R 2d ) with the integrandF N,γ (z; u, v) = π -2d ∂ γ z σ N +1 (D u , D v )A(u + z)B(v + z)and the parameter λ = 1/(2tε). We then have|∂ γ z R N,t (z; ε)| ≤ C d sup u,v∈R 2d |α|+|β|≤4d+1 |∂ α u ∂ β v F N,γ (z; u, v)|.Moreover, there holds the elementary estimate|σ N +1 (D u , D v )A(u)B(v)| ≤ (2d) N +1 sup |α|+|β|=N +1 |∂ α x ∂ β ξ A(x, ξ)∂ β y ∂ α η B(y, η)|.Together with the Leibniz formula, we then get the claimed results with universal constants. For symbols A ∈ P(µ A ) and B ∈ P(µ B ) we argue by localisation. We use A η (u) = e -ηu 2 A(u) and B η (v) = e -ηv 2 B(v) for η > 0 and pass to the limit η → 0.Lemma B.3. There exists a constant C d > 0 such that for anyF ∈ S(R 2d × R 2d , C m,m ) the integral (B.6) I(λ) = λ 2d R 2d ×R 2dexp[-iλσ(u, v)]F (u, v)dudv. satisfies (B.7) |I(λ)| ≤ C d sup u,v∈R 2d |α|+|β|≤4d+1 |∂ α u ∂ β v F (u, v)|.
c 1 , c 2 >
12 0 such that (B.8) |I 0 (λ) -λ -2d c 1 | ≤ c 2 sup u,v∈R 2d |α|≤2 |∂ α (χ 0 (u 2 + v 2 )F (u, v)|.
1 2 ], we have(C.5) op ε (R ε J (t, t in )) L(Σ k ε ) ≤ C k δ -2(J+1+k+κ0) . Similarly, by (4.5), for such δ,(C.6) op ε (W j (t, t in )) L(Σ k ε ) ≤ C k δ -2j-k , j ∈ N.Proof. If such a W ε (t, t in ) exists, it must satisfy the following equationiε∂ t W ε (t, t in ) W ε (t, t in )U ε KS (t in , t) K(t) -KS(t) U ε KS (t, t in ) * , W ε (t in , t in ) = I mApplying Egorov Theorem of Propostion C.1, we know thatU ε KS (t in , t) K(t) -KS(t) U ε KS (t in , t) * = ε 2 L ε (t, t in ) where L ε (t, t in ) = j≥0 ε j L j (t, t in ) ∈ S -1ε,δ with estimates on L j (t, t in ) and on the remainder term in the asymptotic expansion. Using the estimates proved in Appendices A and B, it is enough to solve as formal series in ε the equation
(C.7)  i ∂_t Σ_{j≥0} ε^j W_j(t, t_in) = ε (Σ_{j≥0} ε^j W_j(t, t_in)) (Σ_{k≥0} ε^k L_k(t, t_in)).
‖U^ε(t, t_in) ψ^ε − e^{(i/ε) S(t, t_in, z_0)} WP^ε_{Φ^{t,t_in}_k(z_0)} ( Σ_{j=0}^{2N} ε^{j/2} U_j(t) )‖_{Σ^k_ε} ≤ C_{k,N} (ε/δ^2)^N δ^{−N−Nk}.
Besides,
(C.8)  U_0(t) = R(t, t_in) M[F(t, t_in)] f_0 V  and  U_1(t) = R(t, t_in) M[F(t, t_in)] b_1(t, t_in) f_0 V,
where
b_1(t, t_in) = (1/i) ∫_{t_in}^{t} Σ_{|α|=3} (1/α!) ∂^α_z h(s, z_s) op^w_1((F(t_in, s)z)^α) ds + (1/i) ∫_{t_in}^{t}
…(t, t_in)ψ^ε = e^{(i/ε) S(t, t_in, z_0)} WP^ε_{z_t} ( Σ_j ε^{j/2} B_j(t) ) + O(ε^{J+1}), with B_j(t) = U_j(t) as in (C.8) for j = 0, 1, and B_j(t) determined by a recursive equation in terms of B_0(t), · · · , B_{j−1}(t). This description relies on the observation that, setting ϕ^ε_j(t) = e^{(i/ε) S(t, t_in, z_0)} WP^ε_{z_t} B_j(t), we have the two relations
iε ∂_t ϕ^ε_j(t) = e^{(i/ε)S} WP^ε_{z_t} [ −∂_t(S + ξ_t · x_t) B_j + √ε ż_t · z B_j + iε ∂_t B_j ],
K_S(t) ϕ^ε_j(t) = e^{(i/ε)S} WP^ε_{z_t} op_1( K_S(t, z_t + √ε z) ) B_j(t)
             = e^{(i/ε)S} WP^ε_{z_t} [ ( k(t, z_t) + √ε ∇k(t, z_t) · z + (ε/2) Hess k(t, z_t) z · z + ε K_1(t, z_t) ) B_j(t) + O(ε^{3/2}) ],
where we have set z_t = (x_t, ξ_t). These relations also prove (C.8), with the additional remark that W_0(t, t_in) = I and, for j ∈ N,
W_j(t, t_in) WP^ε_{z_t} B_j(t) = WP^ε_{z_t} op_1( W_j(t, t_in, z_t + √ε z) ) B_j(t),
and using the estimate (C.6) with ε = 1.
FREQUENCY LOCALIZED FAMILIES
Clotilde Fermanian Kammerer acknowledges the support of the Region Pays de la Loire, Connect Talent Project High Frequency Analysis of Schrödinger equations (HiFrAn). Caroline Lasser was supported by the Centre for Advanced Study in Oslo, Norway, research project Attosecond Quantum Dynamics Beyond the Born-Oppenheimer Approximation.
we write (H^{adia,ε,N} − H_0 − εH_1) Π^{ε,N} = ε^{N+1} T_N + O(ε^{N+2}) with T_N ∈ S^{−2N−3}_δ and we look for H^{adia}_{N+1} such that π H^{adia}_{N+1} = T_N. This is doable as long as π^⊥ T_N = 0, which comes from the observation that
by the properties of superadiabatic projectors. Besides, H adia N +1 ∈ S -2N -3 δ , which fits with (ii) of Definition 4.1
(3) comes from Lemma 4.3.
(4) comes from (1.19).

CHAPTER 5
Propagation of wave packets through smooth crossings
In this section, we prove Theorem 1.21. We consider a subquadratic Hamiltonian H ε = H 0 +εH 1 satisfying Assumptions 1.2 in I ×Ω, Ω ⊂ R 2d , and we are interested in the description of the solution to equation (1.1) for initial data that is a wave packet as in (1.8).
The proof consists in three steps: one first propagates the wave packet from time t 0 to some time t − δ, δ > 0, in a zone that is at a distance larger than cδ from Υ, for some constant c > 0. In this zone, we use the superadiabatic projectors. Then, we propagate the wave packet from time t − δ to t + δ, using the rough diagonalization in the crossing region. Finally, between times t + δ and t 1 , we are again at distance larger than cδ from Υ and the analysis with superadiabatic projectors applies. The parameter δ will be taken afterwards as δ ≈ ε α ; the analysis of Section 5.2 will ask for α < 1/2 (see Theorem 5.9). In order to explain carefully each step of the proof, we start by proving the propagation faraway from the crossing area in Section 5.1. That allows us to settle the arguments, before doing it precisely close to Υ in Section 5.2. Then, Section 5.3 is devoted to the calculus of the transitions in the crossing region.
All along Section 5, we will use Assumption 4.7 and the following dynamical Assumption 5.1.
Assumption 5.1 (Dynamical assumption). We say that Ω 1 and t 1 satisfy the dynamical assumption (DA) for the mode h if we have (DA) Φ t,t0 (Ω 1 ) ⊂ Ω for all t ∈ [t 0 , t 1 ].
Propagation faraway from the crossing area
In this section, we analyze the propagation of wave packets in a region where the gap is bounded from below. It gives the opportunity to introduce the method that we shall use in the next section for a small gap region. So, we fix δ = δ 0 , δ 0 > 0 small but independent of ε, and we work in the open set
where I is an open interval of R containing [t 0 , t 1 ] and where the gap condition is also satisfied. We associate with H ε the formal series of Theorems 4.9 and 4.8 for each of the modes:
and we will use the notation introduced in (4.3). With z 0 ∈ Ω, we associate the open sets Ω 0 , Ω 1 , Ω 2 and Ω 3 such that
Matrix-valued Hamiltonians
We explain here the set-up and the technical assumptions that we make on the Hamiltonian H ε . It is the occasion of motivating the set of Assumptions 1.4 and deriving their consequences. The objectives of these assumptions are first to ensure the existence of the propagators associated with the full matrix-valued Hamiltonian and with its eigenvalues, and secondly to guarantee adequate properties of growth at infinity which are used in our analysis.
In this section, we work with m × m (m ∈ N) matrix-valued Hamiltonians H that are subquadratic:
and differentiating these two relations, we obtain for all j ∈ {1,
Multiplying from the left and the right with π and using that ∂π is off-diagonal, we obtain the relation π∂ j Hπ = ∂ j hπ, whence with c = Rank(π) = cte, ∂ j h = c^{−1} Tr(π∂ j Hπ).
This implies |∂
This proof shows that the study of higher derivatives of the eigenvalues requires a control on the derivatives of the eigenprojectors. The following example shows that the situation may become very intricate and one can have smooth subquadratic eigenvalues while the derivatives of the projectors are unbounded.
Elements of symbolic calculus : the Moyal product
In this section, we revisit results about the remainder estimate for the Moyal product, aiming at their extension to the setting of the sets S µ δ (D) that we have introduced in Definition 4.1.
B.1. Formal expansion
We first recall the formal product rule for quantum observables with Weyl quantization. Let A, B ∈ S(R^{2d}, C^{m,m}). The Moyal product C := A B is the semi-classical observable C such that op_ε(A) op_ε(B) = op_ε(C). Some computations with the Fourier transform give the following well known formula [34, Theorems 18.1.8].
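In the normalization used here for the ε-Weyl quantization (up to the sign convention chosen for σ), it reads
\[
C(z)=\Bigl[\exp\Bigl(\tfrac{i\varepsilon}{2}\,\sigma(D_u,D_v)\Bigr)\,A(u)B(v)\Bigr]_{u=v=z},
\]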
where σ is the symplectic bilinear form σ((q, p), (q', p')) = p · q' − p' · q and D = i^{−1}∇. By expanding the exponential term, we obtain
So that C = j≥0 ε j C j is a formal power series in ε with coefficients given by (4.1).
B.2. Symbols with derivative bounds
For µ ≥ 0 denote by P(µ) the linear space of matrix-valued C^∞ symbols A : R^{2d} → C^{m,m} such that for any γ ∈ N^{2d} with |γ| ≥ µ, there exists C_γ > 0 such that |∂^γ_z A(z)| ≤ C_γ for all z ∈ R^{2d}.

Theorem B.1. For every N ∈ N and γ ∈ N^{2d}, there exists a constant K_{N,γ} such that for any A ∈ P(µ_A), B ∈ P(µ_B) the Moyal remainder
satisfies for every z ∈ R 2d and ε ∈ (0, 1],
Elements of semi-classical calculus: perturbation of scalar systems
In this Appendix, we revisit several well-known results concerning a Hamiltonian K(t) valued in the set C^{m,m} of m × m matrices (m ∈ N), and which is a perturbation of a scalar function k(t). We consider an interval I_δ ⊂ R that may depend on δ > 0 and assume that K(t) is defined on I_δ and of the form
K(t) = k(t) I_m + ε K_1(t),
with k scalar-valued and k(t) I_m + ε K_1(t) subquadratic on the time interval I_δ according to Definition 1.1.
The difference with the classical setting is that we assume
Therefore, we have to revisit the results to take care of the loss in δ and control all the classical estimates with respect to this parameter. We denote by U ε K (t 0 , t) the unitary propagator associated with K(t).
These assumptions are those satisfied by the Hamiltonian that we consider in the adiabatic region (see Section 5.2): by (3) of Theorem 4.9, the Hamiltonians H^{adia,N,ε}_ℓ(t), defined for ℓ ∈ {1, 2} in (5.3), satisfy the assumptions made on the Hamiltonian K(t) on the interval [t 0 , t − δ] and on the interval [t + δ, t 0 + T], for adequate domains D given by the cut-offs. Therefore, the analysis below allows to deduce Theorem 5.9 from Proposition 5.8. In the gap region, we also use these results in the simpler case δ = 1 (see Sections 5.3 and 5.4).
C.1. Egorov Theorem
The Egorov Theorem describes the evolution of an observable when it is conjugated by the propagator U ε K (t, s) associated with the operator K(t). It is important to notice that this propagator maps Σ k ε in itself for all k ∈ N. Indeed, for 1
Therefore, one deduces the L 2 -boundedness of the families (x j ψ ε (t)) and (εD xj ψ ε (t)) for all 1 ≤ j ≤ d, whence the boundedness of ψ ε (t) in Σ 1 ε for all t ∈ R. The reader will have understood that a recursive process will give the boundedness of ψ ε (t) in any Σ k ε , for t ∈ R and k ∈ N. In this setting, our aim is to revisit the evolution of U ε K (t in , t) A U ε K (t, t in ) for matrix-valued observables A ∈ S δ and in spaces Σ k ε , with a precise estimate of the remainders.
Proposition C.1. With the above notations, for any matrix-valued symbol A ∈ S_δ, there exists a formal series (t, t_in) ↦ Σ_{j≥0} ε^j A_j(t, t_in) defined on I_δ × I_δ and such that for any J ≥ 1, we have for all t, t_in ∈ I_δ a remainder estimate of order ε^{J+1} δ^{−2(J+1+k+κ_0)}, and the matrix A_0(t, t_in) is given by
where the unitary matrices R(t, t in , z) solve the transport equation
Remark C.2. In particular we have the propagation law of the supports:
supp(A j (t, t in )) = Φ tin,t k (supp(A)) for any j ≥ 0.
In the scalar time-independent case (k = k(z) and K j = 0 for j ≥ 1), the Egorov theorem
is well-known (see [START_REF] Combescure | Coherent states and applications in mathematical physics[END_REF][START_REF] Dimassi | Spectral Asymptotics in the Semi-Classical Limit[END_REF][START_REF] Zworski | Semiclassical analysis[END_REF] for example). In the time-dependent matrix-valued case considered here, the dynamics on the observable is driven by the classical flow twisted by the precession R (see also Section 1.3.3 where such terms appear). The proof also requires a careful treatment of the time.
Proof. We perform a recursive argument. The starting point comes from the analysis of the auxiliary map defined for τ, t ∈ I δ and valued in S δ by A → A(t, τ ) := (R(τ, t)A R(τ, t) * ) • Φ t,τ k . Because we are going to differentiate in τ , we use the relation i∂ τ Φ t,τ k = -J∇k(τ, Φ t,τ k ), (where J is the matrix defined in (1.10)), which implies for all z ∈ R 2d , using also that the flow map is symplectic and preserves the Poisson bracket,
Let us now start with the proof of the result for J = 0. We choose s, τ, t ∈ I δ and consider the corresponding quantity. The times s, τ, t can be understood as s ≤ τ ≤ t with s an initial time (that will be taken as s = t in ) and t the time at which we want to prove the property. We then have the boundary properties
Differentiating in τ , we have
where the matrix B ε 1 ∈ S -2 ε,δ stems from the Moyal product (see Corollary B.2). We deduce by integration between the times s and t
which gives the first step of the recursive argument.
We now assume that we have obtained for J ≥ 0
. We write
. Then, the preceding equation writes
We focus on the term involving Q ε B J+1 (t, s, τ ) that we treat as in the preceding step. We obtain |
04108239 | en | [
"chim.mate"
] | 2024/03/04 16:41:22 | 1990 | https://hal.science/hal-04108239/file/1990-115.pdf | P Hagenmuller
M Pouchard
J C Grenier
NONSTOICHIOMETRY IN OXIDES: EXTENDED DEFECTS IN PEROVSKITE-RELATED PHASES*
Formation of point defects in oxides corresponds to a balance between the enthalpy provided for creating vacancies or inserting interstitial atoms into the lattice and the resulting entropy increase. In fact, pairing or clustering of the defects minimizes free energy and may lead, if they are numerous enough, to micro- and even macrophases. An excess of defects will give rise to a second phase. For perovskite or perovskite-related oxides extended defects may lead to cation ordering, introduction of intermediate NaCl-type slabs, or, if they involve oxygen vacancies, to new coordination polyhedra of the B-cations. Their various configurations depend on the electronic structure of B. Specific physical properties may result, e.g. superconducting behavior in some copper oxides.
A notion which is quite important for understanding the behavior of oxides is the concept of defect pairing [2]. If, for instance, wvv is formally the interaction energy of two anion vacancies (wvv is negative when the defects attract each other), one may show that there is a critical temperature Tc = -wvv/2k above which the vacancies are randomly distributed, but below which they tend to order. If the defects are sufficiently numerous and |wvv| large enough, for T < Tc two possibilities may occur:
(a) formation of a unique ordered phase; (b) separation into two distinct phases. The structural mechanism is actually formation of local clusters (wvv involving several neighboring cations) which, if sufficiently large, will nucleate a novel "microphase" with fewer anions. A second phase will appear for a critical composition representing the maximum number of vacancies the structure is able to accommodate. Both phases may be related in fact by an intergrowth phenomenon if they have structural analogies.
When the vacancy clusters develop enough to give rise to a microphase with rearrangement of the remaining anions, formation of "extended defects" is observed. Appearance of shear planes (Wadsley defects) as in the Magnéli WnO3n-1 or WnO3n-2 oxides, or alignment of the point defects in fluorite-type vacancy oxides, are classical examples.
As far as ΔH decreases due to ordering of the defects into regular arrays and may largely compensate for decreasing configuration entropy, particularly at low temperature, a sequence of ordered phases will often be more stable than a single phase with wide compositional variations. To understand the defect association mechanism, choice of appropriate formulations and annealing process will be useful, as well as electron diffraction and HREM characterization. Nonstoichiometry of oxides of cubic perovskite-type and related structures is of particular interest due to the diversity of the crystallographic features, but also to their impact on many different properties (e.g. magnetic, electronic, dielectric, optical, catalytic) [3].
It is well known that cubic symmetry for an ABO3 perovskite corresponds to a so-called Goldschmidt tolerance factor equal to 1 or slightly smaller. If t decreases enough (smaller A or bigger B cation) cation displacements and tilting of corner-sharing BO6
Similarities of the framework structures of LaCa2Fe3O8, YBa2Cu3O7 and Ba2Ca2(TlCu3)O9. In the latter phase distorted "TlO" layers appear between the tops of the square-based Cu-pyramids, analogous to the tetrahedra layers between the octahedra sheets in LaCa2Fe3O8. Besides, corner-sharing square-planar CuO4 groups form intermediate copper-oxygen layers between the pyramid bases.
La2-xSrxCuO4 and La2-xBaxCuO4 are superconducting for small values of x close to the metal-insulator transition. Tc is in fact small (i.e. in the 20-40 K range), but these materials have a historical importance as they were the first discovered high-Tc oxides, giving rise afterwards to many investigations [56-60]. Possible superconducting behavior has been recently suggested for the homologous La2-xSrxNiO4 series (x ≈ 0.2) [91].
• La2CuO4 is a non-superconducting insulator, but a small excess of oxygen introduced either by anodic oxidation in a basic aqueous solution or by application of high oxygen or fluorine pressures leads to metallic behavior at room temperature and to a superconducting transition close to 40 K. The extra oxygen atoms occupy new tetrahedral sites in the NaCl layers of the K2NiF4 structure, inducing an anionic rearrangement. The presence in the perovskite layers of elongated copper octahedra is consistent with formation of some Cu3+, but the short O-O distances resulting from the anion excess evidenced by neutron diffraction (1.64 Å) do not exclude temporary occurrence of peroxide species [93, 94]. Ba2YCu3O7-δ has a superconducting behavior which depends on δ; as far as it is orthorhombic, Tc often exceeds 90 K. The decreasing values of Tc (Fig. 15) at rising δ may be related to cutting of the square chains resulting from oxygen departure. The plateau observed in the 0.2 < δ < 0.4 range which is also reflected by magnetic and electric layer structures, as far as they will be determined in detail, the present disagreement being lifted, should allow to formulate the hypothesis that superconductivity results simultaneously from an equilibrium involving Cu+ and Cu3+ as well as Bi3+ and Bi5+ (or Tl+ and Tl3+). Presence of Pb2+ seems to enhance slightly the superconductivity of the thallium oxides [72]. A rising number of copper-oxygen planes apparently also increases Tc, but multiplication of the Cu-O layers makes more difficult growing single crystals with homogeneous stacking and leads quickly to saturation of such possibilities [69].
Layer structures proposed for some recent bismuth or thallium superconducting oxides. For most of these phases the oxygen positions are still controversial.
The occurrence of two disproportionation mechanisms in the bismuth and thallium cuprates, so far as it would be confirmed, could account for the larger resistivity-dropping domain observed at decreasing temperature for the concerned oxides.
The crystal chemical developments in this area, where phonon-electron interactions are essential, are delayed by the difficulty to obtain pure phases, stable in temperature and with a crystal quality sufficient to allow precise structural determinations, at least by TEM.
Despite the complexity of the conduction mechanism the present copper oxides are narrowly related to the defect perovskite-like oxides previously investigated. The type of defects differs indeed with the formulation, the prevailing factor being the electronic configuration of the B cation. The fact that the only 3d-element leading unambiguously to superconductivity is copper and that in the superconducting oxides substitution of copper by homologous cations leads quickly to vanishing of the phenomenon may be attributed to a relatively unique feature, most likely the mentioned activation energy-free disproportionation mechanism favored by the crystal structure. Such a coincidence probably explains why so few superconducting oxide series have been found and why most of the layer-type materials concerned have strong structural similarities. [START_REF] Pouchard | Proc. MRS Meeting, High Tc Oxide Symp[END_REF] The recent breakthrough in the field of superconducting copper oxides emphasizes in any case two general rules in solid state chemistry: the necessity never to renounce to prepare new materials and the importance not to separate new phenomena from their scientific background.
FIG. 6 Idealized representation of the manganese-oxygen framework in CaMnO2.5. Other members of the series are Ca2LaFe3O8, Ca3Fe2TiO8 (n=3) and Ca4Fe2Ti2O11 (n=4) with layer sequences ootoot and oootooot, respectively (Fig. 7). Intergrowth phases have been reported. If they have a relevant composition and if the corresponding slabs are thin enough, a regular succession of the previous motifs may be detected by HREM. Ca5Fe4TiO13 and Ca4YFe5O13 (n=2.5) contain for instance alternately n=2 (otot) and n=3 (ootoot) sheets [30]. Ca7Fe6TiO18 (n=2.33) should be characterized by an ototoot sequence, but many stacking faults actually appear due to the complexity of the expected packing (Fig. 8). Isolated point defects occur only in iron oxides for a very small number of oxygen defects. As their number increases they result in the formation of tetrahedra rather than bipyramids, as already expected 10 years ago from conductivity vs. oxygen pressure measurements [86]. The tetrahedra align easily; for a rising proportion of oxygen vacancies these files may form planes, which in turn tend to order. But when Fe4+ is present, for example in CaxLa1-xFeO3-y samples (0.25 ≤ y < 0.5) prepared in air at 1400 °C, whereas X-ray diffraction seems to characterize a cubic perovskite-type structure (a ≈ 3.85 Å), electron diffraction and HREM show actual formation of three-dimensionally oriented intergrown microdomains of brownmillerite or Ca2LaFe3O8 (grenierite) bulk compositions. The dimension of the
04108375 | en | [
"math"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04108375/file/Sobczyk-fr.pdf | Garret Sobczyk
email: [email protected]
Calculus of Compatible Nilpotents
Keywords: AMS Subject Classification: 05C20, 15A66, 15A75 Clifford geometric algebra, Grassmann algebra, Lorentzian spacetime
Introduction
The concept of compatible null vectors reflects in many ways the standard definition of dual vector spaces in linear algebra, with one important difference. Whereas the usual concept of duality between vector spaces ties together the structure of these distinct vector spaces in a unique way, the concept of compatible null vectors ties together properties between two geometric algebras defined on the same vector space. The purpose of this work is to further explore the strange properties that reveal themselves when a canonical basis of null vectors is chosen instead of the standard basis of orthonormal vectors in a geometric algebra of a Minkowski space defined by a Lorentz metric [START_REF] Sobczyk | Talk: Geometric Algebras of Compatible Null Vectors[END_REF][START_REF] Sobczyk | Geometric Algebras of Compatible Null Vectors[END_REF].
Section 1, defines and characterizes the properties of the algebra of positive and negatively correlated nilpotents of the geometric algebras G 1,n and G n,1 . Whereas linear algebra has deep roots in the works of Grassmann, Hamilton, Clifford and Cayley, the importance of the early works of Grassmann and Clifford has not been fully recognized until fairly recently [START_REF] Hestenes | The Design of Linear Algebra and Geometry[END_REF], [START_REF] Sobczyk | Conformal Mappings in Geometric Algebra[END_REF]. Whereas the invention of multiplication of matrices by Cayley permeates elementary linear algebra today, it is shown here that linear algebra could have developed from Grassmann algebra together with the quite different Multiplication Tables of compatible null vectors.
Section 2, develops the basic properties of the vector derivative or gradient operator, the basic tool of calculus that has been developed in the early works [START_REF] Hestenes | Clifford Algebra to Geometric Calculus: A Unified Language for Mathematics and Physics[END_REF], [START_REF] Sobczyk | New Foundations in Mathematics: The Geometric Concept of Number[END_REF], and other authors [START_REF] Lounesto | Clifford Algebras and Spinors[END_REF], and by the Clifford analysis community [START_REF] Delanghe | Clifford Algebra and Spinor-Valued Functions: Function Theory for the Dirac Operator[END_REF]. A more detailed history can be found in [START_REF] Lounesto | Clifford Algebras and Spinors[END_REF]. Whereas usually the vector derivative is employed in the geometric algebra of a Euclidean space, here its properties are developed in the geometric algebra of a Lorentz-Minkowski space. It is shown how a slight perturbation of the metric on Minkowski space leads to a Euclidean structure on the corresponding geometric algebra, perhaps offering a new mathematical formalism for quantum mechanics.
Section 3, sets down basic differential formulas for elementary functions of the position vector x, expressed in the basis of compatible null vectors. All of the formulas developed previously in the geometric algebra of Euclidean space, appropriately modified, can be used in the setting of the geometric algebra of the Lorentz-Minkowski space.
1 The geometric algebras G n,1 and G 1,n
• Nilpotents are algebraic quantities x = 0 with the property that x 2 = 0. They are added together using the same rules for the addition and multiplication by scalars F, as the real or complex numbers. The trivial nilpotent is denoted by 0.
• A set of nilpotents A n := {a 1 , . . . , a n } F is said to be multiplicatively uncorrelated over F, if for all a i , a j ∈ A n ,
a i a j + a j a i = 0. (1)
A set of uncorrelated nilpotents over a field F are called null vectors, and generate a Grassmann algebra G n (F), provided they are linearly independent over F, satisfying
a 1 ∧ • • • ∧ a n = 0, (2)
[9]. More general fields F can be considered as long as characteristic F = 2.
• A set A ± n+1 := {a 1 , . . . , a n+1 } of n + 1 null vectors is said to be positively or negatively correlated if
a i a j + a j a i = 2a i • a j = ±(1 -δ ij ), (3)
respectively. They generate the 2 n+1 -dimensional Clifford geometric algebras G 1,n and G n,1 , [START_REF] Clifford | Applications of Grassmann's extensive algebra[END_REF].
Given below are the multiplication tables for sets of positively (PC), or negatively (NC), correlated null vectors a_i, a_j, for 1 ≤ i < j ≤ n + 1, [11].

Table 1: PC Multiplication table.
          |  a_i      |  a_j      |  a_i a_j   |  a_j a_i
 a_i      |  0        |  a_i a_j  |  0         |  a_i
 a_j      |  a_j a_i  |  0        |  a_j       |  0
 a_i a_j  |  a_i      |  0        |  a_i a_j   |  0
 a_j a_i  |  0        |  a_j      |  0         |  a_j a_i

Table 2: NC Multiplication table.
          |  a_i      |  a_j      |  a_i a_j   |  a_j a_i
 a_i      |  0        |  a_i a_j  |  0         |  -a_i
 a_j      |  a_j a_i  |  0        |  -a_j      |  0
 a_i a_j  |  -a_i     |  0        |  -a_i a_j  |  0
 a_j a_i  |  0        |  -a_j     |  0         |  -a_j a_i

For a set of positively or negatively correlated null vectors {a_1, . . . , a_{n+1}}, define
A_k := a_1 + a_2 + · · · + a_k.
The geometric algebra
G 1,n := R(e 1 , f 1 , . . . , f n ),
where {e 1 , f 1 , . . . , f n } is the standard basis of anticommuting orthonormal vectors, with e_1^2 = 1 and f_1^2 = · · · = f_n^2 = −1, [12, p.71]. Alternatively, the geometric algebra G 1,n can be defined by
G 1,n := R(a 1 , . . . , a n+1 ) = A + n+1 ,
where {a 1 , . . . , a n+1 } is a set of positively correlated null vectors satisfying the multiplication Table 1. In this case, the standard basis vectors of G 1,n can be defined by
e_1 = a_1 + a_2 = A_2,  f_1 = a_1 − a_2 = A_1 − a_2,  and for 2 ≤ k ≤ n,  f_k = α_k (A_k − (k − 1) a_{k+1}), (4)
where α_k := −√2 / √(k(k−1)).
The geometric algebra
G n,1 := R(f 1 , e 1 , . . . , e n ),
where {f 1 , e 1 , . . . e n } is the standard basis of anticommuting orthonormal vectors, with f 2 1 = -1 and e 2 1 = • • • = e 2 n = 1. Alternatively, the geometric algebra G n,1 can be defined by
G n,1 := R(a 1 , . . . , a n+1 ) = A - n+1 ,
where {a 1 , . . . , a n+1 } is a set of negatively correlated null vectors satisfying the multiplication Table 2. In this case, the standard basis vectors of G n,1 can be defined by
f_1 = a_1 + a_2 = A_2,  e_1 = a_1 − a_2 = A_1 − a_2,  and for 2 ≤ k ≤ n,  e_k = α_k (A_k − (k − 1) a_{k+1}), (5)
where α_k := −√2 / √(k(k−1)).
It is well-known that Clifford's geometric algebras G p,q are algebraically isomorphic to matrix algebras over the real or complex numbers, [START_REF] Sobczyk | Matrix Gateway to Geometric Algebra, Spacetime and Spinors[END_REF]Ch.4]. For this reason, the notion of matrices over geometric algebra modules is well defined and fully compatible with the usual rules of matrix multiplication and addition.
Change of basis formulas for G_{1,n}

(a_1, a_2, . . . , a_8)^T = T_8 (e_1, f_1, . . . , f_7)^T, where the successive rows of T_8 are

 (1/2,  1/2, 0, 0, 0, 0, 0, 0)
 (1/2, -1/2, 0, 0, 0, 0, 0, 0)
 (1, 0, 1, 0, 0, 0, 0, 0)
 (1, 0, 1/2, √3/2, 0, 0, 0, 0)
 (1, 0, 1/2, 1/(2√3), √2/√3, 0, 0, 0)
 (1, 0, 1/2, 1/(2√3), 1/(2√6), √5/(2√2), 0, 0)
 (1, 0, 1/2, 1/(2√3), 1/(2√6), 1/(2√10), √(3/5), 0)
 (1, 0, 1/2, 1/(2√3), 1/(2√6), 1/(2√10), 1/(2√15), √7/(2√3)),

and (e_1, f_1, . . . , f_7)^T = T_8^{-1} (a_1, a_2, . . . , a_8)^T, where the successive rows of T_8^{-1} are

 (1, 1, 0, 0, 0, 0, 0, 0)
 (1, -1, 0, 0, 0, 0, 0, 0)
 (-1, -1, 1, 0, 0, 0, 0, 0)
 (-1/√3, -1/√3, -1/√3, 2/√3, 0, 0, 0, 0)
 (-1/√6, -1/√6, -1/√6, -1/√6, √(3/2), 0, 0, 0)
 (-1/√10, -1/√10, -1/√10, -1/√10, -1/√10, 2√(2/5), 0, 0)
 (-1/√15, -1/√15, -1/√15, -1/√15, -1/√15, -1/√15, √(5/3), 0)
 (-1/√21, -1/√21, -1/√21, -1/√21, -1/√21, -1/√21, -1/√21, 2√(3/7)),

[11].
We now turn our attention to the study of geometric calculus in the algebra of nilpotents A + n+1 ≡ G 1,n over the real numbers R.
Gradient operators in
A + n+1 = {a 1 , . . . , a n+1 }
Note, from here-on null vectors are not denoted in bold face. Let A + n+1 be a set of n + 1 linearly independent, positively correlated null vectors, satisfying (2) and ( 3). The positive compatibility property
a i a j + a j a i = 2a i • a j = (1 -δ ij ),
guarantees that a i are null vectors, with the inner product of distinct pairs equal to 1 2 , and in addition, they satisfy the multiplication Table 1. For simplicity, we only consider F := R the real number system.
Given any two distinct a i , a j ∈ A + n , let e := a i + a j and f := a i -a j . It easily follows that
e 2 = (a i + a j )(a i + a j ) = a 2 i + (a i a j + a j a i ) + a 2 j = 1, (6)
and
f 2 = (a i -a j )(a i -a j ) = a 2 i -(a i a j + a j a i ) + a 2 j = -1. (7)
Furthermore, the orthonormal vectors e and f are anticommutative,
ef = (a i + a j )(a i -a j ) = a j a i -a i a j = -(a i a j -a j a i ) = -f e. (8)
Once the seed null vectors a_i and a_j are chosen, determining e and f, the other null vectors a_k used to define the successive vectors f_k can be chosen at random using the recursive definition (4).
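As a worked illustration of (4), take k = 2, so that α_2 = −1 and
\[
f_2=\alpha_2\bigl(A_2-a_3\bigr)=-(a_1+a_2-a_3),\qquad
f_2^{\,2}=2\,a_1\!\cdot\!a_2-2\,a_1\!\cdot\!a_3-2\,a_2\!\cdot\!a_3=1-1-1=-1,
\]
while e_1 · f_2 = f_1 · f_2 = 0, so that f_2 extends {e_1, f_1} to an orthonormal set, as required.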
The fundamental gradient operator ∇ in G 1,n is defined by
∇ := e 1 ∂ s1 - n i=1 f i ∂ si+1 , (9)
for the position vector
x = s 1 e 1 + n i=1 s i+1 f i ∈ G 1 1,n , (10)
from which it follows that
∇x = n + 1 ⇐⇒ ∇ • x = n + 1, and ∇ ∧ x = 0.
Regardless of whether the gradient, or vector derivative, is defined in terms of the Euclidean metric in R n+1 of the geometric algebra G n+1 , or in terms of the Lorentz metric in R1,n of the geometric algebra G 1,n it is characterized by two fundamental properties:
ONE: The gradient ∇ has the properties of a vector
x ∈ G 1 p,q and ∇x = p + q (11)
the dimension of R p+q , independent of the metric in which it is calculated. TWO: The v-directional derivative is the scalar differential operator v • ∇, and applied to the vector
x ∈ G 1 p,q gives v • ∇x = v ∈ G 1 p,q , (12)
independent of the metric in which it is calculated. 1 Applied to the position vector x ∈ R n+1 of G n+1 or a position vector x ∈ R 1,n of G 1,n , the gradient of x counts the dimension of the space,
∇x = ∇ • x + ∇ ∧ x = n + 1 =⇒ ∇ ∧ x = 0, (13)
independent of the metric employed. Similarly, given a vector
v ∈ G 1 n+1 or v ∈ G 1 1,n , respectively, v • ∇x = v, (14)
independent of the metric employed. It follows that all differentiation formulas worked out in any one metric apply equally to any other metric, [6, p.63-66]. The gradient ∇ is closely related to two other vector derivatives which make their appearance in A + n+1 . The null gradient
∇ := n+1 i=1 a i ∂ i , (15)
where ∂ i := ∂ ∂xi , for the position vector
x = n+1 i=1 x i a i ∈ A + n+1 . (16)
It follows that ∇x = 0, or equivalently, ∇ • x = 0 = ∇ ∧ x. The additive dual sum gradient
∇ ∨ := n+1 i=1 ∨ a i ∂ i , (17)
where ∨ a i := A n+1 -a i for 1 ≤ i ≤ n + 1. For example,
∨ a 3 = A 3+1 -a 3 = a 1 + a 2 + a 4 .
In working in the calculus of the algebra A + n+1 , the partial differential sum operator,
∂ (n+1) := n+1 i=1 ∂ i = ∂ 1 + • • • + ∂ n+1 (18)
is of fundamental importance. Noting that for
v = v_1 a_1 + · · · + v_{n+1} a_{n+1},  v · A_{n+1} = Σ_{i=1}^{n+1} v_i a_i · A_{n+1} = (n/2) Σ_{i=1}^{n+1} v_i = (n/2) v^∨,
where we have introduced the dual-like notation
v ∨ := n+1 i=1 v i and v i ∨ := v 1 + • • • + v i-1 + v i+1 + • • • + v n+1 , (19)
for the vector
v = v 1 a 1 + • • • + v n+1 a n+1 ∈ A + n+1 . Whereas v is a vector, its dual v
∨ is a scalar, but both v i and its dual v i ∨ are scalars. Noting that
∂ (n+1) x = A n+1 , and A n+1 • ∇ = n 2 ∂ (n+1) , (20)
it follows that
A n+1 • ∇x = n 2 A n+1 and ∂ (n+1) x • A n+1 = (n + 1)n 2 .
Another calculation gives the basic result
∇^∨ x = \binom{n+1}{2} := (n+1)n/2.
Basic decomposition formulas
The dual sum and null gradients satisfy the basic property
∇ ∨ + ∇ = A n+1 ∂ (n+1) , (21)
which easily follow from the definitions. In view of (21), we introduce the vector partial dual sum gradient
∇ (n+1) := A n+1 ∂ (n+1) , (22)
giving the fundamental relationship
∇ ∨ + ∇ = ∇ (n+1)
between the three vector derivative operators. Squaring this last equation gives
∇ ∨ + ∇ 2 = ∇ ∨ 2 + 2∇ ∨ • ∇ + ∇2 = ∇ 2 (n+1) = (n + 1)n 2 ∂ 2 (n+1) , (23)
where ∇2 = 1≤i<j≤n+1 ∂ i ∂ j , (24)
∇ ∨ 2 = n+1 i=1 a ∨ i ∂ i 2 = n(n -1) 2 n+1 i=1 ∂ 2 i + (n 2 -n + 1) n+1 i<j ∂ i ∂ j , (25)
and
∇ ∨ • ∇ = n 2 n+1 i=1 ∂ 2 i + (n -1) n+1 i<j ∂ i ∂ j . ( 26
)
In doing the above calculations for ( 23)-( 26), we have used the fundamental relationships
∂ 2 (n+1) = ∂ 1 + • • • + ∂ n+1 2 = n+1 i=1 ∂ 2 i + 2 n+1 i<j ∂ i ∂ j (27)
and
a ∨ i 2 = n(n -1) 2 , n+1 i<j a ∨ i • a ∨ j = n 2 -n + 1 2 . ( 28
)
The directional dual sum and directional null gradients are given by
v•∇ ∨ = n+1 i,j=1 v i a i •a j ∨ ∂ j = n+1 i,j=1 v i a i •A n+1 -v i a i •a j ∂ j = n 2 v ∨ ∂ (n+1) -v• ∇, ( 29
)
and
v• ∇ = n+1 i=1 1 2 v 1 ∨ ∂ 1 +• • •+ v n+1 ∨ ∂ n+1 = 1 2 v ∨ ∂ (n+1) -v 1 ∂ 1 +• • •+v n+1 ∂ n+1 , (30
) respectively. Applying ( 29) and (30) to the position vector x, gives
v • ∇x = 1 2 v ∨ A n+1 -v = ∇x • v, (31)
and v • ∇ ∨ x = n -1 2 v ∨ A n+1 + 1 2 v = ∇ ∨ x • v, ( 32
) since ∇ ∧ x = ∇ ∨ ∧ x = 0.
Taking the sum of these last two expressions, gives
v • ∇ (n+1) x = n 2 A n+1 - 1 2 v = ∇ (n+1) x • v, (33)
for the directional derivative of the dual sum gradient, since ∇ (n+1) ∧ x = 0. We are now in a position to state the fundamental relationship between the dual sum and null gradients to the gradient (9) in G 1,n ,
∇ = 2∇ ∨ - 2(n -1) n ∇ (n+1) = -2 ∇ + 2 n ∇ (n+1) = ∇ ∨ -∇ - n -2 n ∇ (n+1) . ( 34
)
Other relationships between the three gradients that easily follow are
∇ = - 1 2 ∇ + 1 n ∇ (n+1) , and
∇ ∨ = 1 2 ∇ + n -1 n ∇ (n+1) .
The Laplacian ∇ 2 can now be expressed in terms the scalar differential operator ∂ (n+1) and the dual-null Laplacian ∇2 , [12, p.121]. Using (34),
∇ 2 = 4 n 2 ∇ 2 (n+1) - 8 n A n+1 • ∇∂ (n+1) + 4 ∇2 = 2 n (n + 1)∂ 2 (n+1) -4 ∂ 2 (n+1) + 4 ∇2 = 4 ∇2 - 2(n -1) n ∂ 2 (n+1) . (35)
It follows that the Laplacian reduces to 4 times the null Laplacian for any function f (x) with the property that ∂ 2 (n+1) f (x) = 0. In addition, the Laplacian factors into the product of two first order differential operators:
∇ 2 = 2 ∇ + 2(n -1) n ∂ (n+1) 2 ∇ - 2(n -1) n ∂ (n+1) , (36)
opening the door to a new class of solutions to the Laplace equation in G 1,n , [START_REF] Sobczyk | Spheroidal Domains and Geometric Analysis in euclidean Space[END_REF][START_REF] Sobczyk | Spheroidal Quaternions and Symmetries[END_REF][START_REF] Boyer | Symmetry and Separation of Variables for the Helmholtz and Laplace Equations[END_REF].
For n = 2, ∇ 2 = 2 ∇ + ∂ (3) 2 ∇ -∂ (3) , (37)
and for n = 3, the case of Dirac algebra of spacetime, (36) reduces to
∇ 2 = 4 ∇ + 1 √ 3 ∂ (4) ∇ - 1 √ 3 ∂ (4) , (38)
giving a new factorization of the Laplacian in A + 4 .
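The factorizations (36)–(38) can be checked directly against (35): writing \widehat{\nabla} for the null gradient and using that \widehat{\nabla} and \partial_{(n+1)} commute as differential operators,
\[
\Bigl(2\widehat{\nabla}+\sqrt{\tfrac{2(n-1)}{n}}\,\partial_{(n+1)}\Bigr)\Bigl(2\widehat{\nabla}-\sqrt{\tfrac{2(n-1)}{n}}\,\partial_{(n+1)}\Bigr)
=4\widehat{\nabla}^{\,2}-\frac{2(n-1)}{n}\,\partial_{(n+1)}^{\,2},
\]
which is exactly (35). For n = 2 the radical equals 1, recovering (37); for n = 3 it equals 2/√3, so that factoring out 4 gives (38).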
The Euclidean geometric algebra G n+1
Using the dual-like notation introduced in ( 19), the geometric product of two vectors v, w ∈ A + n+1 takes the interesting form
vw = 1 2 n+1 i,j=1 v i w j (1 -δ ij ) + n+1 i<j (v i w j -v j w i )a i ∧ a j = 1 2 v ∨ w ∨ - n+1 i=1 v i w i + n+1 i<j (v i w j -v j w i )a i ∧ a j . (39)
It follows that
v • w = 1 2 (vw + wv) = 1 2 v ∨ w ∨ - n+1 i=1 v i w i = 1 2 n+1 i<j (v i w j + v j w i ), (40)
and
v ∧ w = 1 2 (vw -wv) = n+1 i<j (v i w j -v j w i )a i ∧ a j . (41)
Using (40) of G 1,n , a new Euclidean inner product v, w can be defined,
v, w := n+1 i=1 v i w i = v ∨ w ∨ -2v • w = v ∨ w ∨ - n+1 i<j (v i w j + v j w i ). ( 42
)
Using this -inner product the -geometric algebra G n+1,0 makes its appearance by defining the -geometric product of v, w ∈ G n+1 by
v w := v, w + v ∧ w = v ∨ w ∨ - n+1 i<j (v i w j + v j w i ) + n+1 (v i w j -v j w i )a i ∧ a j ,
or more simply,
v w := (v ∨ w ∨ -2v • w) + v ∧ w = (v ∨ w ∨ -3v • w) + vw, (43)
for which case the basis null vectors {a 1 , . . . a n+1 } ∈ G 1,n take on the role of the orthonormal basis vectors of G n+1 .
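Indeed, applying (42) to the basis null vectors gives
\[
\langle a_i,a_j\rangle = a_i^{\vee}a_j^{\vee}-2\,a_i\cdot a_j = 1-2\cdot\tfrac12 = 0 \quad(i\neq j),\qquad
\langle a_i,a_i\rangle = 1-2\cdot 0 = 1,
\]
so that under the star product a_i ⋆ a_i = 1 and a_i ⋆ a_j = a_i ∧ a_j = −a_j ⋆ a_i for i ≠ j.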
It is interesting to note that the grading of the elements of these respective geometric algebras remains the same, i.e., a k-vector in G 1,n remains the same k-vector in G n+1 , but the algebras have completely different structures. Other interesting issues arise as well. Recalling the definition (1) of a null vector, as well as the random selection of the seed null vectors a i and a j in ( 6) and [START_REF] Lounesto | Clifford Algebras and Spinors[END_REF], new possibilities arise. In Hestenes' STA algebra G 1,3 , algebraically isomorphic to the Dirac algebra, each time-like unit vector e = γ 0 determines a unique rest frame or inertial system in Minkowski spacetime [START_REF] Hestenes | Space-Time Algebra[END_REF].
At the level of the even Pauli sub-algebra, choosing a γ 0 determines a unique splitting of the algebra into the relative components of a vector and a bivector of an observer [12, p.123], [START_REF] Baylis | Relativity in Clifford's Geometic Algebras of Space and Spacetime[END_REF][START_REF] Doran | Geometric algebra for Physicists[END_REF], [START_REF] Sobczyk | What's in a Pauli Matrix[END_REF][START_REF] Sobczyk | Spacetime Vector Analysis[END_REF]. On the other hand, the scalar inner product structure in (42) of such a splitting determines the star Euclidean geometric algebra G 4 , [START_REF] Sobczyk | Special relativity in complex vector algebra[END_REF][START_REF] Sobczyk | Spinors in Spacetime Algebra and Euclidean 4-Space[END_REF]. The intriguing question arises whether this might be the missing link to the interpretation of the wave-particle dual nature of matter in quantum mechanics, and in addition provide the key to unification of quantum mechanics with Einstein's general relativity? See Figure 1. Further speculation suggests interpreting the geometric algebra G 3,1 ≡ A - 3,1 as being related to antimatter, [START_REF] Santilli | Isodual Theory of Antimatter with Applications to Antigravity, Grand Unification and Cosmology[END_REF][START_REF] Roldao Da Rocha | Isotopic liftings of Clifford algebras and applications in elementary particle mass matrices[END_REF].
Figure 1: The bottom left side is the even sub-geometric algebra of the rest frame of an observer (particle side). The right side is the quantum mechanical Euclidean geometric algebra of that rest frame (wave side).
Differentiation of basic functions
In the previous section, basic relationships between the four fundamental differential operators, the gradient or vector derivative ∇, the null or hat gradient ∇, the dual sum gradient ∇ ∨ , and the scalar sum partial derivative ∂ (n+1) have been established. In order to efficiently proceed, differential formulas for elementary functions need to be developed.
We begin with a sample of formulas which have been developed elsewhere for the vector derivative ∇ in Euclidean space, [6, p.66]. In adapting the formulas in this reference, the dimension of the space R 1+n is taken to be n + 1, for the position vector
x = x_1 a_1 + · · · + x_{n+1} a_{n+1} ∈ A^+_{n+1}, and the unit vector
x̂ := ∇|x| = x/√|x²|  ⟺  x = |x| x̂, (44)
where |x| := √|x²|. The absolute value |x²| is necessary because only in the case of the positive definite Euclidean space is x² ≥ 0 for all x ∈ R^{n+1}. For simplicity, the formulas given here are only valid for more general spaces R^{p+q} when x² ≥ 0, when x is a point in the positive light cone of the metric space.
∇|x|^k = k|x|^{k−1} x̂,    ∇(|x|^k x) = (n + k + 1)|x|^k,    (45)

v · ∇|x|^k = k|x|^{k−1} v · x̂,    v · ∇(|x|^k x) = k|x|^{k−1}(v · x̂) x + |x|^k v,    (46)

∇ ln |x| = x/|x|²,    ∇ e^x = e^x + n sinh |x| / |x|,    (47)

∇ v x = −(n − 1)v,    ∇ (v ∧ w) x = (n − 3) v ∧ w.    (48)

The same sample of differential formulas is now given for the null and dual sum gradients, as well as for the scalar differential sum operator. For the null gradient ∇̂, with the help of (32) and (34), we first calculate ∇̂ x² = x^∨ A_{n+1} − x. It follows that

∇̂ |x²| = ∇̂ x² = x^∨ A_{n+1} − x = 2|x| ∇̂|x|  ⟺  ∇̂|x| = (x^∨ A_{n+1} − x) / (2|x|)    (49)

whenever x is on the positive light cone, for which case x² > 0. If we evaluate the right side of (49) at x = A_{n+1} + y_1 a_1, for a constant y_1 ∈ R, we find that

∇̂|x| = [(n + 1 + y_1) A_{n+1} − A_{n+1} − y_1 a_1] / (2|A_{n+1} + y_1 a_1|) = [(n + y_1) A_{n+1} − y_1 a_1] / (2|A_{n+1} + y_1 a_1|),    (50)

which is quite different from the definition of the unit vector x̂ found in (44). Using that

(A_{n+1} + y_1 a_1)² = A_{n+1}² + 2 y_1 a_1 · A_{n+1} = (1/2) n (n + 2 y_1 + 1),

it follows that (50) simplifies to

∇̂|x| = [(n + y_1) A_{n+1} − y_1 a_1] / √(2 n (n + 2 y_1 + 1)).    (51)

It is also interesting to calculate

(∇̂|x|)² = (n + y_1)[n² + (1 + y_1) n − y_1] / [4 (n + 2 y_1 + 1)],    (52)

which reduces to n²/4 when y_1 = 0 and x = A_{n+1}. In this case

∇̂|x| |_{x = A_{n+1}} = √n A_{n+1} / (√2 √(n + 1)).

With (49) and (51) in hand, similar formulas like (45)–(48) easily follow for the other vector derivative operators.
The idea of the gradient ∇, or vector derivative, being a fundamental tool in the study of linear algebra was first developed in my doctoral thesis at Arizona State University(1971).I have published it on my website at https://www.garretstar.com/secciones/publications/publications.html
Acknowledgment
The Zbigniew Oziewicz Seminar on Fundamental Problemes in Physics, organized by Professors Jesus Cruz and William Page has played an important role in the development of this work in relation to Category Theory and the Theory of Graphs, [25,26]. |
04108380 | en | [
"info.info-cr",
"spi.tron"
] | 2024/03/04 16:41:22 | 2023 | https://hal.science/hal-04108380/file/BL23.pdf | Lilian Bossuet
Andres Carlos
email: [email protected]
Lara-Nino
Carlos Andres
Advanced Covert-Channels in Modern SoCs
Keywords: Covert channels, Frequency modulation, Multi-cluster, SoC-FPGAs, Zynq ultrascale+
Introduction
A covert channel is, as the name suggest, an information pathway which is not evident to those unaware of the nature of the system. We might be familiar with a typical example found in classical media: morse code. In that case, the parties employ blinking patterns, quiet sounds, or discrete movements to transfer information stealthily. The channel being either the air or the light. In the context of circuits, the use of blinkers [START_REF] Guri | LED-it-GO: Leaking (A Lot of) Data from Air-Gapped Computers via the (Small) Hard Drive LED[END_REF] or noise [START_REF] Guri | Acoustic Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard-Drive Noise ('DiskFiltration')[END_REF] might be difficult to exploit. However, within a circuit there are other channels which can be used to the same end.
A typical system-on-a-chip (SoC) is an heterogeneous array of processing and memory elements. These components might be implemented within the same die or in different chips, but they generally share two kinds of wires: supply power and clocks. These lines are supposed to carry only DC components and square waves, respectively. But that does not mean that such signals cannot include additional information. As a practical example we might refer to smart grid applications [START_REF] Ma | Smart Grid Communication: Its Challenges and Opportunities[END_REF]. In the case of clock signals, a modulation of the frequency of the wave or the duty cycle of the period can be used to encode a message covertly. For DC components, small power fluctuations can be induced in the signal to create a pattern which would encode the message. These strategies can be leveraged by different elements of the SoC to create a covert channel and transfer information, even if they are not supposed to communicate with each other.
The sender of the message is potentially a spy process which somehow manages to retrieve critical information from its surroundings (i.e. details from applications running in the same processor, activation signals, and even memory contents under certain scenarios). The receiver is then another application or circuit which can take advantage of this information to perform some processing or a different kind of attack. The entity which the channel aims to be hidden from is usually the owner of the platform; who would restrict the interaction between suspicious applications and circuits. As to where the sender and receiver might come from, some prime suspects include: kernel modules, drivers, third-party accelerators, hardware trojans, libraries, and binaries. The overseer would be the designer of the architecture, in charge of putting everything together. As the complexity of circuits increases, these scenarios result much more feasible since the use of foreign components becomes a necessity for speeding-up the design process.
Previous works on covert channels on SoCs [BB18; GER19; Gna+21; Mie+18] have focused on earlier technologies for these platforms. However, the design of newer devices has shifted towards an heterogeneous approach. Given the slowdown in Moore's Law [START_REF] Eeckhout | Is Moore's Law Slowing Down? What's Next?[END_REF] it became evident that monolithic computing nuclei could no longer bear the burden of processing. Since then, we have seen the emergence of multi-core architectures. At the same time, as proposed by Amdahl's Law [START_REF] Hill | Amdahl's Law in the Multicore Era[END_REF], optimizing the performance of the platform becomes more complex as the number of processors in the cluster grows. For this reason, the latest generations of SoCs now feature multiple processing clusters of different capabilities. Modern SoCs feature low-power processors, application processors, hardware accelerators, and even reconfigurable nuclei, which make it possible to integrate a diverse group of operating systems and applications in the same chip.
The goal of this work is to explore different approaches for creating frequencybased covert channels in heterogeneous SoCs. We study the interaction between different processor clusters of these platforms leveraging the reconfigurable fabric as the shared resource for the implementation of covert channels. These interactions are achieved by modifying some of the divider values with the intent of producing a change in the clock network. This clock tree is used as the channel for the covert transmission of data. Our main contributions can be summarized as follows:
1. We propose, for the first time, the creation of covert channels between different processor clusters of the SoC 2. We evaluate the performance of these attacks using a clear and well understood methodology 3. We demonstrate, also for the first time, the feasibility of implementing frequency-based covert-channels in the Zynq Ultrascale+ family of devices
The rest of the paper is structured as follows. In Section 2 we describe our experiments and methodology to evaluate the different covert channels in the Zynq Ultrascale+ SoCs. In Section 3 we provide our conclusions and final remarks.
Threat model
In [START_REF] El | DVFS as a Security Failure of TrustZone-enabled Heterogeneous SoC[END_REF], the authors used frequency modulation to create covert channels in the Zynq-7000 SoC. That work demonstrated that these architectures are vulnerable to these attacks even when the ARM TrustZone protections are enabled. In their case, a total of four covert channels were implemented, showing that the DVFS mechanisms available in SoCs could be used to bypass some ARM TrustZone protections. Other works, like [START_REF] Giechaskiel | Leakier Wires: Exploiting FPGA Long Wires for Covert-and Side-Channel Attacks[END_REF] have demonstrated that it is possible to employ the electromagnetic emanation within the chip to implement covert channels. The authors exploited the cross-talk between long-wires to implement covert channels with transmission rates up to 6 Kbps in different FPGAs. More formally, voltage-based covert channel attacks were reported by [START_REF] Dennis | Voltage-Based Covert Channels Using FP-GAs[END_REF], also for the Zynq-7000 SoCs. In that work, the authors managed to employ a power-waster circuit to generate fluctuations in the power supply of the circuit. Then, a sensor implemented in a different part of the reconfigurable fabric was used to retrieve the message. That work demonstrated that it was possible to implement power-based covert channels with transmission rates up to 8 Mbps.
The Zynq Ultrascale+ SoCs allow to use the RPU and the APU independently. The real time cores generally would run a real-time operating system like RTOS [START_REF] Iturbe | Microkernel Architecture and Hardware Abstraction Layer of a Reliable Reconfigurable Real-Time Operating System (R3TOS)[END_REF] or simply run standalone applications. The application cores, on the other hand, are complex and their full potential can usually only be achieved through the use of a kernel like Linux. Our work employs this model, which is based on reasonable assumptions about the use cases of the Ultrascale+ technology.
These systems also include a power management unit which is in charge of performing the monitoring and configuration of the Power Distribution Network. It includes anti-tampering characteristics which increase the difficulty of modulating the power supply of the chip directly. That is, we might be able to induce a power fluctuation, but there is a chance that the SoC will go into lock-down. On the other hand, modifying the frequency of different clocks used in the architecture can be achieved through simpler mechanisms. By design a modification of the system frequency will be accompanied with a corresponding voltage scaling. Thus, the proposed covert channels target the frequency of the SoC but also affect its voltage.
In Fig. 1 we illustrate part of the clock tree in the Zynq Ultrascale+ SoCs. Four main reference clocks are available to source the five main PLLs of the architecture. To generate the output of these oscillators, the reference sources are multiplied by a small constant. Subsequently, the output of the PLLs is divided by up to two six-bit constants to produce multiple clocks for different parts of the architecture. From the same figure, it can be seen how there are three main power domains in these chips. The Low Power Domain will source the RPU, the peripherals, the on-chip memory, and one of the interconnect switches. The Full Power Domain will supply the APU, the memory management unit, the memory controller, and the central interconnect switch. And the PL Power Domain will supply the reconfigurable fabric. For each the low and full power domains, the five main PLLs can be used to generate clocks after applying one or two divider values. And for the FPGA, only three of the PLLs can be used to generate the four clocks available to this component (which come from the processing system, as it is also possible to use external clocks). Hence, modifying most of the clocks in the platform is just a matter of editing the value of the main multipliers or any of the dividers.
Materials and Methods
Our work assumes that the spy process can gain access to phase locked loops (PLL) which can modify the oscillators in the SoC. We propose this assumption since the target scope of our work includes the Zynq Ultrascale+ SoCs which feature these components. Next, the receiver must be able to sample the channel to retrieve the message. This is achieved with the use of a sensor module which can be implemented in the reconfigurable fabric. We must note that our work does not investigate a concrete scenario for these attacks to take place, but rather we intend to demonstrate the vulnerabilities present in this technology in order to mitigate such threats.
This work uses the TE0802 SoC (xczu2cg-sbva484-1-e) as target system. Compared to other Ultrascale+ SoCs, this board offers a reduced set of features. However, it can be used to demonstrate a wide range of vulnerabilities in this family of devices. We employ the AMD-Xilinx 2021.1 toolchain to design and implement the hardware architecture, as well as to develop and deploy the applications. The library xtime l.h was used to obtain our time measurements.
Target platform
The Zynq Ultrascale+ SoC is an interesting case study for heterogeneous SoCs. These chips feature a main processing unit (APU), powered by an array of ARM Cortex-A53 cores. The secondary processing system (RPU) includes an array of ARM Cortex-R5F cores. Each one of these processors has independent instruction and data caches, and up to L2 cache in the case of the APU. The main memory of the SoC is an external DDR unit, driven by a silicon-based on-chip memory controller. There is also a smaller on-chip memory which can be shared by the different cores, and a memory management unit which performs the necessary assignments. What sets these architectures apart from other SoCs is the availability of an FPGA: an array of reconfigurable elements and silicon accelerators. The interconnection between processors and accelerators follows the AMBA-AXI specification through two main switches. The reconfigurable fabric of the SoC offers the possibility of implementing a wide range of customized accelerators.
In Fig. 1, in blue, we illustrate part of the clock tree in the Zynq Ultrascale+ SoC-FPGAs. A main reference clock (PSS REF CLK) is used to source the five main PLLs of the architecture (RPLL, IOPLL, APLL, VPLL, DPLL). To generate the PLL output, the reference clocks are multiplied by a constant. The resulting oscillators are then divided by one or two six-bit constants to produce specific clock domains for the different parts of the architecture.
From Fig. 1 it can also be seen how there are three main power domains in these Ultrascale+ SoCs. The Low Power Domain will source the RPU, the peripherals, the on-chip memory, and one of the interconnect switches. The Full Power Domain will supply the APU, the memory management unit, the memory controller, and the central interconnect switch. The PL Power Domain will supply the reconfigurable fabric. The goal for this separation of power domains is to improve the energy footprint of the system by allowing to shut down complete areas of the SoC when these are not required.
For the low and full power domains, the five main PLLs can be used to generate clocks. For the FPGA, only three of the PLLs (RPLL, IOPLL, DPLL) can be used to generate the four clocks available to the fabric (from the processing system, since it is also possible to use external clocks.)
Ultrascale+ SoCs allow the RPU and the APU to be used independently. The cores in the RPU would normally run a real-time operating system like RTOS [START_REF] Iturbe | Microkernel Architecture and Hardware Abstraction Layer of a Reliable Reconfigurable Real-Time Operating System (R3TOS)[END_REF] or simply run standalone applications. The cores in the APU, on the other hand, are more complex and their full potential can best be drawn through the use of a kernel like Linux. In this work, we presume that both clusters can be operated independently. We implement bare-metal applications in the RPU and Linux-based applications in the APU. These chips also feature a power management unit which is in charge of performing the monitoring and configuration of the power distribution network. It features anti-tampering characteristics which increase the difficulty of modulating the power supply of the chip.
Covert transmission of data
The proposed covert channels exploit the potential of the RPU and the APU for modifying the oscillators that source the FPGA. The frequency of the different clocks in Ultrascale+ SoCs can be modified by editing its multiplier or divider values. The multiplier register will affect the PLL output, and in turn modify the frequency of all the SoC components which rely on that given oscillator. In contrast, the divider registers are specific for a given clock and modifying them will only modify the frequency of a particular clock signal. There are clocks which use one divider and there are clocks which use two. All the dividers are stored as a six-bit section of a 32-bit register. To modify the frequency of an oscillators it is then necessary to edit the contents of these control registers.
At low level, like in bare-metal applications, the control registers of the SoC can be edited through direct access operations, for example using the xil_io library. However, to edit one of these control registers it is necessary to edit multiple security and configuration registers so that the frequency change is enacted. Furthermore, the application performing the operation must have the appropriate exception level.
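A minimal bare-metal sketch of such a direct register access is given below; this is an illustration rather than the exact code used in our experiments, and the address and bit fields (taken from the public Zynq Ultrascale+ register map for CRL_APB.PL0_REF_CTRL) should be checked against the TRM of the target device:

#include "xil_io.h"

#define PL0_REF_CTRL   0xFF5E00C0u              /* CRL_APB.PL0_REF_CTRL (assumed address) */
#define DIV0_SHIFT     8u                       /* DIVISOR0 field, bits [13:8] (assumed)  */
#define DIV0_MASK      (0x3Fu << DIV0_SHIFT)

/* Overwrite the six-bit DIVISOR0 value of the PL0 reference clock. */
void set_pl0_divisor0(u32 div)
{
    u32 reg = Xil_In32(PL0_REF_CTRL);           /* read the current control word     */
    reg = (reg & ~DIV0_MASK) | ((div & 0x3Fu) << DIV0_SHIFT);
    Xil_Out32(PL0_REF_CTRL, reg);               /* write it back with the new divider */
}

The application performing this write must run at an exception level that is allowed to access the CRL_APB region; otherwise the access faults.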
In the presence of a kernel this task can be simplified with the help of drivers which allow to request the modification of specific clocks, for example the processor clocks (by using the cpufreq driver of Linux) or the FPGA clocks (by using the fclk drivers of Xilinx). This scenario is more favorable for attackers since the complexity of the kernel allows to hide malicious applications more easily.
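For the kernel-assisted path, the corresponding userspace code reduces to a sysfs write; the sketch below assumes the attribute typically exposed by the Xilinx fclk driver, but the exact path depends on the kernel configuration and device tree, and root privileges are required:

#include <stdio.h>

/* Request a new rate for PL clock 0 through the fclk driver (assumed sysfs path). */
static int set_fclk0_rate(unsigned long hz)
{
    FILE *f = fopen("/sys/devices/soc0/fclk0/set_rate", "w");
    if (!f)
        return -1;
    fprintf(f, "%lu", hz);
    return fclose(f);
}

int main(void)
{
    /* Example: set the PL0 clock to 100 MHz; the receiver sees the frequency step. */
    return set_fclk0_rate(100000000UL);
}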
The SoC platforms utilized are by default protected with the ARM Trust-Zone firmware. The fabric is protected by an extension of this technology which allows to declare an IP as trusted. However, neither of these protections prevent the use of the clock drivers (cpufreq, fclk) so long as the application has root access.
Internal sensors
A delay sensor is a circuit created with digital components which can measure the variation in the propagation delay of a digital signal. These fluctuations are generated from variations in the power dissipation, electromagnetic coupling, and thermal fluctuations of the circuit [START_REF] Zick | Low-Cost Sensing with Ring Oscillator Arrays for Healthier Reconfigurable Systems[END_REF]. For this reason, such sensors have been employed to perform internal monitoring of the chip [ZS18; Gra+20]. The main types of such sensors are based on TDCs and ROs. The former are generally more accurate and provide greater resolution in the sampling, but must be calibrated precisely and placed directly on the platform. In contrast, the RO-based sensors (RO-S) do not require any fine-grained implementation directives and provide sufficient information when enough samples are available. Our work focuses on the latter.
The main components of a RO-S are shown in Fig. 2. In this case, the ring of inverters provides a consistent oscillatory wave whose period fluctuates according to the nominal operation of the circuit. This signal is then used to source a binary counter which is subsequently sampled by an external clock to produce a measurement. The number of counts retrieved in a sampling period is thus correlated to the frequency of the ring, and in turn to the operation of the circuit. In a conventional binary counter we must consider the problem of carry propagation, which can drastically affect the critical path of the design. This problem, together with the need to reset the counter after each measurement, means that conventional counter designs lead to sensors with very low sampling rates. Both of these limitations were addressed by [START_REF] Gravellier | High-Speed Ring Oscillator based Sensors for Remote Side-Channel Attacks on FPGAs[END_REF], who proposed to employ a carry-less ring counter which produces a Johnson encoding. This updated design is used in our work. However, we are more interested in the sampling clock of the sensor: by modifying this signal we can obtain an offset in the measurements due to the periodicity of the ring counter.
As the main sensing module we use the RO-S from [START_REF] Gravellier | High-Speed Ring Oscillator based Sensors for Remote Side-Channel Attacks on FPGAs[END_REF]. This design features an acquisition rate above 350 Msps thanks to its highly optimized counter. The output of the sensors can be quantified with just ten bits, using the average of multiple sensors to mitigate the quantization error. The output of this module can be read directly from the FPGA, or retrieved from either the RPU or the APU through an AXI channel.
Physical characteristics of the channel
To understand the limits of the proposed covert channels we first characterized the behavior of a PLL in the target platform. Using a digital oscilloscope we sampled the time-response of these components when requesting a change in the output frequency. As a reference, we generated a digital trigger through the processor's GPIOs. We then measured the width of these pulses. We also captured the activation of the MSB in the output of the RO-S (see Fig. 2). In Fig. 3 we illustrate our observations for this experiment.
Our findings suggest that the minimum response time for a frequency change is approximately 600 ns. That is the time elapsed from the moment one of the RPU cores modifies the register until the output of the sensor is updated (t_{R5FtoFPGA}). Therefore, assuming that we could transfer one bit per transition, the maximum bandwidth for the proposed channels would be 1.6 Mbps. Note that this is a theoretical limit, which does not consider the delay necessary to achieve a consistent (low-error) transmission.
We observed that the response time to transition from a lower to a higher frequency differs from the time required to perform the opposite change. However, explaining this behavior falls outside the scope of our work; we simply take the maximum of both measurements to determine the minimal maximum bandwidth. As we will demonstrate later in the paper, this asymmetry does not weigh on the feasibility of the proposed attacks.
Next, we intended to characterize the RO-S which would be used in our experiments. For this, we implemented a matrix of 64 RO-S and sampled it using different frequencies. Results for this experiment are provided in Fig. 4. At first glance it was possible to clearly differentiate between the multiple sample windows. However, in some cases this separation would not be so apparent to a computer, the problem being that several sample values overlap for different sampling frequencies. The outliers can be appreciated in the box diagrams included in Fig. 4. This kind of analysis was useful to identify the most adequate frequencies for implementing the proposed attacks. We selected frequencies that would simplify the implementation of the covert channels without requiring any filtering (100 MHz, 150 MHz, 300 MHz).

Figure 2: The architecture of the RO-S used in our work. This module is composed of three main groups of components: a group of sensors that produce a digital output as a function of the operation of the circuit; an encoder for each sensor, which quantifies the output of the counters; and an adder, which merges all the encoders' outputs to mitigate the quantization error. There are also three main elements within each sensor: a ring oscillator (in yellow), a Johnson ring-counter (in red), and a register (in cyan). The acquisition rate is defined with an external sampling clock.
Covert channels between the RPU and the FPGA
The first class of covert channels we studied were those where an application running in the RPU (Cortex-R5F @ 533 MHz) acted as the transmitter, and a circuit implemented in the FPGA was the receiver. This scenario is illustrated in Fig. 5. Such attacks might be found in systems which use third-party applications or accelerators. To perform the frequency modulation, the RPU-based application simply needed to access the CRL_APB module and overwrite the corresponding clock control registers. In our experimentation, these tasks were performed using the xil_io.h library. Under this attack model, the bandwidth is limited by the delay for the application to modify the target oscillator (t'_{R5FtoFPGA}). This delay is composed of the time required for a Cortex-R5F processor to modify a CRL_APB register (t_{R5F}), plus the transition delay of the PLL (t_{PLL}), and the time for the RO-S to update its output (t_{RO-S}). This model is given in Equation 1.
t'_{R5FtoFPGA} = t_{R5F} + t_{PLL} + t_{RO-S}    (1)
We went even further and decided to retrieve the output of the sensor from the processing system. This experiment sought to minimize t'_{R5FtoFPGA} while maintaining an error rate of zero over the transmission of 12 KB of data. For this, we used the modulation strategy in Alg. 1. This approach relies on three different frequencies: two that represent ones and zeros, and a third one that acts as a separator. By using three symbols it is possible to minimize the number of samples per window, as long as the different windows remain clearly differentiable. The results for this experiment are illustrated in Figure 6.
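For illustration, a sender loop implementing this three-symbol scheme could look like the following sketch. It builds on the register-write routine sketched earlier; the divider values DIV_F1, DIV_F2 and DIV_F3 (one per frequency) and the MSB-first bit order are placeholders chosen for illustration, not values taken from our implementation.

    #include <stddef.h>

    /* Illustrative sender loop for the three-symbol modulation of Alg. 1.
     * DIV_F1/DIV_F2 encode bits, DIV_F3 is the separator frequency.      */
    static void send_message(const unsigned char *msg, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            for (int b = 7; b >= 0; b--) {
                if ((msg[i] >> b) & 1U)
                    set_clock_divider(DIV_F1);   /* symbol for a '1' bit  */
                else
                    set_clock_divider(DIV_F2);   /* symbol for a '0' bit  */
                set_clock_divider(DIV_F3);       /* separator window      */
            }
        }
    }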
The samples shown in Fig. 6 were retrieved from the RPU by reading the output of the RO-S through an AXI link. Experimentally, we determined that t'_{R5FtoFPGA} ≈ 1.83 µs. Whereas the FPGA could sample the output of the sensors at a rate of 333 MSps (t_{FPGAtoFPGA} = 3 ns), the RPU could only read the same output with an additional delay (t_{FPGAtoR5F}). Our experiments found that t_{FPGAtoR5F} = 498 ns. It follows that t_{R5FtoR5F} = t'_{R5FtoFPGA} + t_{FPGAtoR5F} = 2.33 µs. This is the latency for a Cortex-R5F core to communicate with the other Cortex-R5F through a covert channel which uses the FPGA as the shared resource. The corresponding bandwidths are calculated in Equation 2.
B_{R5FtoFPGA} = 1 / (2 · t'_{R5FtoFPGA}) = 1 / 3.66 µs ≈ 273 Kbps
B_{FPGAtoR5F} = 1 / (2 · t_{FPGAtoR5F}) = 1 / 995 ns ≈ 1 Mbps
B_{R5FtoR5F} = 1 / (2 · (t'_{R5FtoFPGA} + t_{FPGAtoR5F})) ≈ 215 Kbps    (2)
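On the receiving side, a minimal sketch of how an RPU-resident reader could turn memory-mapped sensor samples into symbols is given below. The AXI base address and the two thresholds are assumptions for illustration only and would have to be calibrated as discussed for Fig. 4; real code must also detect window boundaries.

    #include "xil_io.h"

    #define ROS_OUTPUT_REG  0xA0000000U   /* assumed AXI address of the RO-S sum */
    #define TH_LOW          300U          /* illustrative thresholds separating  */
    #define TH_HIGH         600U          /* the three frequency symbols         */

    /* Poll the sensor and map one sample to a symbol: 0, 1, or separator (-1). */
    static int read_symbol(void)
    {
        u32 count = Xil_In32(ROS_OUTPUT_REG);
        if (count < TH_LOW)  return 0;
        if (count > TH_HIGH) return 1;
        return -1;            /* separator frequency */
    }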
Covert channels between the APU and the FPGA
The second type of covert channels under evaluation were those that originated from an application executed in the APU (Xilinx' Linux running on the Cortex-A53 cluster). As in the previous attack, our intended receiver was the RO-S in the FPGA. This scenario is depicted in Fig. 7.
Algorithm 1 A frequency-modulation strategy for low-width windows
Require: f1, f2, f3: A given set of frequencies.
for byte in message do
    for bit in byte do
        if bit then
            fclk ← f1
        else
            fclk ← f2
        end if
        fclk ← f3
    end for
end for

A regular Linux kernel, if configured properly, will feature the cpufreq driver which allows the frequency of the underlying system to be modified. This might be leveraged to implement a covert channel between different cores controlled by the same operating system. In the case of the Xilinx distribution of Linux, the kernel also features a set of APIs (/sys/devices) which allow the frequency of the FPGA clocks to be modified. These two mechanisms use configuration files which can be managed from the application space. Thus, performing the modification of some oscillator is a matter of locating the adequate file, opening it, modifying its contents, and closing it again (the file must be closed for the change to be detected). Evidently, this is much more time intensive than writing a register. Therefore, for these attacks the bandwidth of the covert channel will be limited by the delay for the application (operating system) to modify the oscillator (t_{OStoFPGA}). This model is given in Equation 3.
t_{OStoFPGA} = t_{OS} + t_{PLL} + t_{RO-S}    (3)
In this case, since we expected t_{OStoFPGA} to be much greater than t_{R5FtoFPGA}, we used the frequency-modulation strategy in Alg. 2. Unlike the previous model, when there is a large number of samples it is possible to differentiate a transmitted zero from a one by adding a small delay (δ) in the transmission. If a divider value with minimum delay (t_{bit}) is used to separate the windows, then only two frequency values are required. The advantage of the new modulation strategy is that only three windows are used to encode two bits, rather than four. This is critical when the windows contain many samples. The results for an experiment with this type of channel are presented in Fig. 8.
Algorithm 2 A frequency-modulation strategy for large sample windows
Require: f1, f2: A given set of frequencies.
for byte in message do
    for bit in byte do
        if bit then
            fclk ← f1
            wait(δ)
        else
            fclk ← f1
        end if
        fclk ← f2
    end for
end for

For this experiment we saw it necessary to add a delay in the acquisition (ζ = 10 µs) to reduce the number of samples collected. The data was retrieved using the RPU with a sampling period t'_{FPGAtoR5F} = t_{FPGAtoR5F} + ζ (it was experimentally corroborated that t'_{FPGAtoR5F} = 11.05 µs). We observed that t_{bit} = 33 × t'_{FPGAtoR5F} = 365 µs and t_{bit} + δ = 39 × t'_{FPGAtoR5F} = 431 µs, so δ ≈ 66 µs (in practice δ was implemented as usleep(1)) and t_{OStoR5F} = t_{bit}. It follows that t_{OStoFPGA} = t_{bit} - t_{FPGAtoR5F} ≈ 354 µs. As suspected, the transmission delay for this covert channel was about 200 times greater than the delay between the RPU and the PL (t_{R5FtoFPGA}). The corresponding bandwidths are given in Equation 4.
B_{OStoR5F} = 1 / (2 · t_{bit} + 0.5 · δ) = 1 / 795.94 µs ≈ 1.31 Kbps
B_{OStoFPGA} = 1 / (2 · t_{bit} + 0.5 · δ - t_{FPGAtoR5F}) ≈ 1.33 Kbps    (4)
As can be observed, the bandwidth for these covert channels is much lower than in the previous cases. However, modifying the clock frequency from the kernel space has two evident advantages: arguably, it is easier to introduce malicious applications in a piece of software as large as Linux, and the attacker does not need any specific knowledge of the architecture under attack, which increases the range of potential targets.
Covert channels between the APU and the RPU
The experiments in the previous subsection assumed that the transmitter in the Linux-enabled Cortex-A53 would use the /sys/devices APIs to perform the frequency modulation of the FPGA clocks. However, this is not necessary if the application has access to the physical registers. To achieve this goal we can map the register space through /dev/mem (as done by the devmem utility); a register value can then be read and written like any pointer from a C-language application.
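A minimal sketch of this approach is given below: the application maps the page containing the clock registers through /dev/mem and then writes the divider field like a regular pointer. The base address, register offset and field position are placeholders, and root privileges are required.

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define CRL_APB_BASE 0xFF5E0000UL   /* assumed base of the clock control block   */
    #define DIV_REG_OFF  0x000000C0UL   /* placeholder offset of the target register */

    /* Map the register page and overwrite the divider field from user space. */
    static int write_divider(uint32_t divider)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0)
            return -1;

        volatile uint32_t *regs = (volatile uint32_t *)mmap(NULL, 0x1000,
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, CRL_APB_BASE);
        if (regs == MAP_FAILED) {
            close(fd);
            return -1;
        }

        uint32_t val = regs[DIV_REG_OFF / 4];
        val = (val & ~(0x3FU << 8)) | ((divider & 0x3FU) << 8);  /* assumed field */
        regs[DIV_REG_OFF / 4] = val;

        munmap((void *)regs, 0x1000);
        close(fd);
        return 0;
    }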
Revisiting the case where a transmitter in the Cortex-A53 intends to send data covertly to an application in the Cortex-R5F (illustrated in Fig. 9), we now expected the number of samples per window to be much lower. Thus we continued to use the encoding from Alg. 1. Again, the APU transmitter would perform the frequency modulation but now through direct access to the PLL dividers. Then the RO-S in the FPGA would detect the frequency change, and its output would be read from the RPU. The total delay for the revisited covert channel from the APU to the RPU is modeled in Equation 5.
t_{A53toR5F} = t_{A53} + t_{PLL} + t_{RO-S} + t_{FPGAtoR5F}    (5)
Experimentally, we determined that it was possible to achieve a zero-error transmission over 30 KB of data with a small delay (η). In this case, the sender had to wait η after editing the register value, while the receiver could continuously read the RO-S output.
It is difficult to accurately measure t_{A53toFPGA} from the kernel space; however, using the data in Fig. 10a we can estimate that t_{A53toR5F} ≈ 3 × t_{FPGAtoR5F}. With a conservative guess, we can then calculate the bandwidth for this covert channel as provided in Equation 6.
B_{A53toR5F} = 1 / (6 · t_{FPGAtoR5F}) = 1 / 2.99 µs ≈ 335 Kbps    (6)
This experiment contributed to demonstrating that both the Cortex-R5F and the Cortex-A53 cores can read and edit the registers in the CRL_APB module. This means that we could simply create a register-to-register channel without the need to wait for the activation of the PLL. The results for this experiment are shown in Fig. 10b. With this approach the accuracy of the transmission increases considerably, to the point that it is possible to use more complex modulation strategies, for example to increase the difficulty of detection, like the one in Alg. 3.
Formally, the difference in the register-to-register covert channel is that the Cortex-R5F application did not have to retrieve the data from the FPGA (a difference of ∼90 ns). This is reflected in the delay model of Equation 7. Nonetheless, it remained difficult to estimate t_{A53} accurately.
t'_{A53toR5F} = t_{A53} + t_{PLL} + t_{RO-S} + t_{R5F}    (7)
Then, from Fig. 10b, since the sampling period can be considered equivalent to t_{R5F} and there are approximately five samples per transmitted bit, t'_{A53toR5F} ≈ 5 × t_{R5F}. The updated bandwidth is given in Equation 8.
B'_{A53toR5F} = 1 / (5 · t_{R5F}) = 1 / 2.04 µs ≈ 490 Kbps    (8)
Finally, we revisited the case where the transmitter would be the application in the Cortex-R5F and the receiver would be an application in a Linux-enabled Cortex-A53. In this case both applications could operate without any additional delay. We could measure the transmission delay for the channel and, by using the average number of samples per window, estimate the delay for the Cortex-A53 to access a register in the CRL_APB module. Figure 11 illustrates some results for this experiment. From this evaluation we estimated that t_{A53} = 243 ns, and consequently B'_{R5FtoA53} ≈ 750 Kbps; the limiting factor is t_{R5F}.
Algorithm 3 A more complex frequency-modulation strategy
Require: f1, f2, f3: A given set of frequencies.
for byte in message do
    for bit in byte do
        if bit then
            fclk ← f1
            fclk ← f2
            wait(δ)
        else
            fclk ← f2
            fclk ← f1
        end if
        fclk ← f3
    end for
end for
Summary
In this Section we have described multiple covert channels which can be implemented between different clusters of the SoC and the reconfigurable fabric.
For each experiment we have sought to maintain zero transmission errors and to estimate the minimum delay. While the performance results vary considerably between some of the attacks, it is necessary to remember that they have different use cases and advantages. In Table 1 we summarize our findings. We have also included results for selected works in the literature which have implemented covert channels in SoCs and FPGAs. A direct comparison should be avoided since the implementation technologies differ. It is important to note that the qualitative characteristics of the channel should outweigh quantitative criteria such as the bandwidth.
In [START_REF] El | DVFS as a Security Failure of TrustZone-enabled Heterogeneous SoC[END_REF], the authors used frequency modulation to create covert channels in the Zynq-7000 SoC. That work demonstrated that these architectures are vulnerable to such attacks even when the ARM TrustZone protections are enabled. In their case, a total of four covert channels were implemented, showing that the DVFS mechanisms available in SoCs could be used to bypass some ARM TrustZone protections. Other works, like [START_REF] Giechaskiel | Leakier Wires: Exploiting FPGA Long Wires for Covert-and Side-Channel Attacks[END_REF], have demonstrated that it is possible to employ the electromagnetic emanation within the chip to implement covert channels: the authors exploited the cross-talk between long wires to implement covert channels with transmission rates up to 6 Kbps in different FPGAs. More formally, voltage-based covert channel attacks were reported by [START_REF] Dennis | Voltage-Based Covert Channels Using FP-GAs[END_REF], also for the Zynq-7000 SoCs. In that work, the authors managed to employ a power-waster circuit to generate fluctuations in the power supply of the circuit. Then, a sensor implemented in a different part of the reconfigurable fabric was used to retrieve the message. That work demonstrated that it was possible to implement power-based covert channels with transmission rates up to 8 Mbps.
Final remarks
In this work, we have explored different alternatives for implementing covert channels in heterogeneous SoCs. Through experimentation we have demonstrated that it is possible to create covert channels between different components of these platforms. Using the Zynq Ultrascale+ SoCs as a case study, we managed to create covert channels which achieved different transmission rates, going from a few Kbps to 750 Kbps. At the same time, we modeled the transmission delays and transmission bandwidths of the different covert channels, fitting the models to our empirical observations. These findings can be used to design more effective countermeasures for potential attacks based on covert channels.
As an additional contribution we characterized the response times of the PLLs in the Zynq Ultrascale+ SoCs. We demonstrated how these circuits have an asymmetric behavior as a function of the requested transition. For an ascending change from 100 MHz to 150 MHz we observed a transition delay of 600ns; the equivalent descending change showed a slightly shorter transition delay.
We can conclude that, from an efficiency point of view, writing and reading the registers directly is the best option to implement covert channel communications. However, this assumes that the attacker has (a) the necessary access level to read/write the control registers of the SoC and (b) precise knowledge of the architecture under attack. These assumptions limit the applicability of this kind of covert channel. On the other hand, performing frequency modulation from the kernel space is not as efficient, but it mitigates these limitations: for point (a), it is much easier to sneak a malicious application into a large component like an operating system, and for point (b), the kernel will make sure that the drivers and APIs point to the appropriate control registers regardless of the platform.
When comparing our results against the state of the art it is possible to reach mixed conclusions. However, we note that it is difficult to come up with a fair metric for comparison: first, because the implementation technologies and underlying phenomena are fundamentally different, and second, because the main goal of a covert channel is not to transfer a lot of data, but to do it stealthily. Finally, our work is not incremental to the related research; they are complementary. The vulnerabilities identified in this paper are architectural in nature, thus it would be possible to combine our work with other circuit-level principles for covert transmissions.
As future work, we intend to explore the impact of the modulation of the sampling frequency in the fine-output of the RO-S, which was not used in this work. We also intend to explore the possibility of creating the same covert channels when more restrictive control policies are implemented in the SoC, for example under trusted execution environments and power management software. Lastly, it might also be interesting to evaluate the application of the proposed covert channels under different SoCs which might not necessarily include an FPGA. This could be achieved through the use of hardware components which can be used to implement delay sensors [START_REF] Gravellier | SideLine: How Delay-Lines (May) Leak Secrets from Your SoC[END_REF].
Figure 1: The clock tree and power domains of a Zynq Ultrascale+ SoC
Figure 3: The time-response of a PLL in the Zynq Ultrascale+ (transition from 100 to 150 MHz)
Figure 4: Characterizing the output of the RO-S as a function of the sampling frequency. The selected frequencies range from 375 MHz (which is the result of dividing the output of the IOPLL at 1.5 GHz by four) to 75 MHz (the result of dividing the same oscillator by 20). Each observation consists of 40,000 samples.
Figure 5: The information flow through a covert channel between the RPU (Sender) and the FPGA (Receiver)
Figure 7: The information flow through a covert channel between the APU (Sender) and the FPGA (Receiver)
Figure 8: The transmission of a stream of bits over a covert channel from the Linux kernel to the FPGA. In this channel the modulation is performed according to Alg. 2, with f1 = 150 MHz, f2 = 300 MHz and δ ≈ 66 µs.
Figure 9: The information flow through a covert channel between the APU (Sender) and the RPU (Receiver) using the FPGA in the middle
Figure 10: The transmission of a stream of bits over a covert channel from the APU (Cortex-A53) to the RPU (Cortex-R5F). In this case the modulation is performed according to Alg. 1, with f1 = 100 MHz, f2 = 300 MHz and f3 = 150 MHz.
Figure 11: The transmission of a stream of bits over a covert channel from the Cortex-R5F to the Cortex-A53 using the registers of the CRL_APB module as the shared resource. As in the previous experiment, we do not sample the output of the RO-S but the value of a register. For this channel the modulation is also performed according to Alg. 1, with f1 = 100 MHz, f2 = 300 MHz and f3 = 150 MHz.
Table 1: Summary of the proposed covert channels
Ref. Sender Receiver Shared resource Bandwidth (Kbps)
[BB18] Cortex-A9 Spectrum analyzer PDN 333
[BB18] Cortex-A9 Cortex-A9 PLL 60
[BB18] Cortex-A9 FPGA PLL 125,000
[BB18] FPGA Cortex-A9 PLL -
[GER19] FPGA FPGA Long wires 6
[Gna+21] FPGA FPGA PDN 8,000
This work Cortex-R5F FPGA PLL 273
This work Cortex-R5F Cortex-R5F FPGA 215
This work Cortex-A53 (API) FPGA PLL 1.33
This work Cortex-A53 (API) Cortex-R5F FPGA 1.31
This work Cortex-A53 Cortex-R5F FPGA 335
This work Cortex-A53 Cortex-R5F Register 490
This work Cortex-R5F Cortex-A53 Register 750
Acknowledgments
The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grant ANR-19-CE39-0008 (project ARCHI-SEC). |
Pr Jamal Kharroubi
Keywords: Deep Learning, Medical Image Analysis, Convolutional Neural Networks, Medical Image Segmentation, Medical Image Classification
TALIBI ALAOUI, Pr. El Mehdi ISMAILI ALAOUI, and Pr. Ilham CHAKER for their insightful comments and encouragement, and also for all the questions that encouraged me to further expand my research in the future. Finally, my deep
Declaration
I hereby declare that except where specific reference is made to the work of others, the contents of this dissertation are original and have not been submitted in whole or in part for consideration for any other degree or qualification in this, or any other university. This dissertation is my own work and contains nothing which is the outcome of work done in collaboration with others, except as specified in the text.
April 2023

Résumé

Deep learning is a sub-field of machine learning derived from artificial neural networks, which has had a considerable impact on various domains, such as image processing, speech processing, natural language processing, and more. In image processing, convolutional neural networks (CNNs) are the most popular deep learning architectures, given the nature of their feature extraction phase, which is particularly well suited to image data. In this thesis, we perform medical image analysis using deep learning, with the objective of studying and discovering new deep learning architectures to solve medical imaging problems, such as lung nodule classification or brain tumor segmentation. There are different angles from which medical image analysis with deep learning can be explored: the detection, segmentation, or classification of body organs, diseases, tumors, and more.
On the one hand, in medical image segmentation, deep learning through CNNs has shown strong performance. For this reason, we studied the use of a well-known deep learning architecture, called U-Net, to perform the segmentation of lung computed tomography images. The objective of this experiment is to show how U-Net can perform image segmentation from a small amount of data. In addition, we proposed a new multi-scale architecture based on some of the best feature extractors, while using an attention mechanism to emphasize the targeted object. We adopted this new architecture for the semantic segmentation of brain tumors, and showed satisfactory semantic segmentation compared to some deep learning architectures suited to image segmentation, such as U-Net, Attention U-Net and the Fully Connected Network (FCN). On the other hand, medical image classification adopting CNNs is a trend in computer vision, given the way they provide exceptional performance. The feature extraction part of a CNN is the key to its popularity. For this reason, we proposed a comparative study that includes the feature extraction part of a CNN and other common feature extraction methods such as Principal Component Analysis (PCA), the 2D Discrete Fourier Transform (2D-DFT) and the Restricted Boltzmann Machine (RBM). With this study, we showed that CNNs perform at their best with convolutions and pooling layers. Subsequently, we explored the feature extraction part of the CNN more deeply, in order to study the possibility of making a CNN more accurate. Consequently, we proposed two different pooling methods in two separate works. In the first method, we fully mixed max and average pooling in one layer of a CNN and showed its superiority over conventional pooling methods in terms of accuracy, and over other mixed pooling strategies in terms of execution time. The second proposed pooling method is an improvement of the first. We showed that adding a dropout function to our mixed pooling strategy increased its performance and outperformed all the mixed pooling methods it was compared with.
The results of this thesis represent important findings for a better understanding of how deep learning models can be adapted to medical image processing.
List of tables
General Introduction

Context
For the last few decades, Artificial Intelligence (AI) has become one of the main research fields in the world. Given its importance, AI is related to every aspect of our lives (health, education, food, sports, and so on). Humans have long dreamed of creating intelligent machines. Human intelligence manifests itself through the power of acquiring all kinds of information using the natural human senses and processing it through the brain. In the seventies, simulating this kind of mechanism was considered science fiction, but nowadays AI can process huge amounts of different kinds of data in a few seconds.
Today, AI is a thriving field with many practical applications and exciting current research topics. We are turning to intelligent solutions to automate routine processing, that is, to process information from different sources, mainly images and speech. In the early days, artificial intelligence quickly tackled and solved problems that are intellectually difficult for humans but relatively simple for computers, mainly problems that can be described by a list of formal and mathematical rules. As a result, the real challenge for artificial intelligence has been to solve problems that are difficult to describe formally, such as recognizing faces, recognizing spoken words or people from their voices. These are the types of problems that humans can solve intuitively.
There is a solution to these types of problems that allows computers to learn from experience and understand the world according to a hierarchy of concepts. This hierarchy represents a collection of knowledge points that are connected in a deep way to form a graph with many layers. This concept is called Deep Learning (DL).
Today, AI is mainly manifested through its sub-fields, machine learning and deep learning. Deep learning was inspired by the deep neurobiological structures of human speech and vision perception. Deep learning was introduced by Hinton and his team in 2006 [START_REF] Hinton | A fast learning algorithm for deep belief nets[END_REF]; it is based on improvements to Artificial Neural Networks (ANNs) and the way they are trained. Deep learning made a revolution in many research fields, such as the medical domain. Medical image diagnosis, in particular, is considered a pattern recognition problem that could benefit from deep learning techniques. Nowadays, it has attracted great interest due to the high demand for Computer Aided Diagnosis (CAD) applications.
Because of the complex nature of human body organs when captured in images, medical image analysis is a complicated task. Even for human experts (radiologists) it is a challenging mission to process medical images with the naked eye. Thus, relying on advanced technologies to help obtain an accurate diagnosis is becoming strongly recommended. That is why we have CAD systems. A CAD system is an efficient tool designed to provide assistance to radiologists in interpreting medical data (mostly images) in order to deliver an accurate diagnosis of a disease. CAD systems are known for their reduced time to diagnosis, in addition to the most important aspect, which is accuracy; i.e., a CAD system accurately diagnoses a patient in a much shorter time than a human expert.
Let's take an example: for brain tumor diagnosis, a CAD system must go through the following steps:
-Preprocessing: Preparing data and making it standard for the machine, including separating the region of study from the useless area.
-Detection: Detection of the tumors that are present in the region of study.
-Segmentation: After tumors detection, they need to be segmented in order to separate them from other parts of the brain that might interfere in the decision making.
-Classification: Now that we have our patches containing brain tumors, we can perform a classification to decide if the tumors at hand are benign or malignant.
In order to get a precise diagnosis, the algorithms adopted to build this kind of CAD system must be very robust and very fast. As we mentioned above, there are different types of algorithms that can serve the purpose of a CAD system, but how fast and how accurate?
It is known that medical images are very different from other types of images and need to be treated in a special way. In addition, in the literature, the application of deep learning to computer vision problems usually involves images coming from natural scenes. Thus, in this thesis, we study and investigate different deep learning techniques and their behaviour towards medical images. Moreover, we introduce new deep learning techniques that may improve the way medical image data is processed.
Research questions
Our main goal is to examine and enhance the performance of deep neural networks in processing medical image data sets. To reach this goal, we divide "medical image processing using deep learning" into medical data segmentation and medical data classification, relying on deep learning for both of them.
There are multiple steps to achieve better classification accuracy or an efficient segmentation result: data preprocessing and preparation, feature extraction, and then segmentation or classification. In this section, we identify the questions related to our study, the answers to which form the core of the work we present in this thesis. Data preparation is one of the mandatory steps to achieve satisfactory results; can a conventional deep learning method perform an efficient segmentation by relying on good data preparation? Feature extraction is one of the main steps to prepare data for any computer vision system; it consists of extracting only the relevant characteristics from the data for the next phases. What are the feature extraction methods that can help our deep learning network achieve the best segmentation result?
CNNs are one of the state-of-the-art algorithms that made a huge impact on the rise of deep learning; they consist of a variety of feature extraction functions followed by a classifier. Medical image classification also relies on good feature extraction methods to reach higher accuracy, thus CNNs can be a good fit for this matter. Are convolution and pooling methods the best choice for optimal feature extraction in CNNs? What enhancements could we make to CNN feature extraction methods to improve their performance?
In this thesis, we focus more on the deep learning architectural side rather than trying to solve medical problems. In other words, our main goal is to find new contributions in deep learning architectures that may be beneficial for computer vision problems in general. The choice of the medical domain is motivated by the fact that if our findings show great results when processing medical images (which are somewhat more complex than other types of images), we can certainly achieve promising results when dealing with other types of images.
Research goals
We provide in this thesis our findings under two computer vision problems:
Medical image segmentation:
-Conventional deep learning methods:
In this chapter, we will conduct an experimentation involving a deep learning segmentation of the lung area from input CT scans of lungs. For that matter, we are going to investigate the performance of a conventional deep learning architecture called U-Net [START_REF] Ronneberger | U-net: Convolutional networks for biomedical image segmentation[END_REF] using a publicly available data set LIDC-IDRI [4] containing lung CT scans.
-Deep learning methods with advanced feature extraction methods:
For this work, we are going to perform a semantic segmentation of brain tumors using an advanced deep learning architecture. This architecture incorporates some state-of-the-art feature extraction methods that were behind the rise of CNNs, beginning from 2012 [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF]. Furthermore, these methods are architecturally organized in a multi-scale way, in addition to using Attention units to emphasize and put more focus on the tumor region.
Medical image classification:
-Conventional deep learning methods:
Deep learning is feature extraction and classification combined in an end-to-end network. Therefore, their success proves that extracting good features leads to better accuracy. Given that CNNs have convolutions and pooling functions as feature extractors, we are comparing other feature extraction functions to those of CNN to show that convolution and pooling layers are the best fit for extracting relevant features in a CNN architecture.
-Deep leaning methods for classification with advanced feature extraction methods:
In [START_REF] Szegedy | Going deeper with convolutions[END_REF], a paper called "Going deeper with convolutions", the authors proposed a new convolutional block which showed great results in many computer-vision tasks. However, in this chapter of our thesis, we are going deeper with poolings, meaning that we are proposing new pooling strategies involving two of the state-of-the-art pooling functions, max and average, in order to boost the performance of CNNs in medical image processing in particular and in image processing in general.
Contributions
In this thesis we are presenting our contributions under two parts: -Medical image segmentation:
Thesis structure
In this thesis, we are covering many deep learning aspects applied to medical images in detail. In fact, this thesis is presented under five parts with eight chapters organized as follows:
Introduction: This section presents the general introduction (current section).
Part I: In this part, we are going to provide an overview of deep learning, and since our thesis is mainly about medical image processing, we will focus the presentation on the CNN architecture through its layers. Then we will briefly present the nature of medical images and the way deep learning can deal with them. Then, we will show through some state-of-the-art examples that with deep learning, processing medical images surely leads to great results in different types of image processing such as detection, segmentation and classification.
Part II: In this part, we are presenting two of our papers on the medical image segmentation filed. In the first chapter, we present our work where we perform lung CT image segmentation using a sophisticated deep learning architecture called U-Net. In the second chapter, we propose a novel architecture we called "Multi-Scale ConvLSTM Attention Neural Network" to perform brain tumor semantic segmentation. In this chapter we use some of the strongest feature extraction methods in the literature of deep learning in addition to Attention units in a multi-scale architecture.
Part III: this part is divided into three chapters. In the first chapter, we are providing a deep learning comparative study of some feature extraction methods such as 2D-DFT, PCA and RBM compared to the feature extraction layers in a standard CNN.
In the first section of the second chapter, we are presenting a pooling function comparative study and proposing a new one. Our proposed pooling method relies on mixing both max and average pooling methods in a specific way. Then, we compare its performance with conventional pooling functions such as max pooling and average pooling, then we compare its performance with other pooling strategies that involve mixing max and average pooling.
The second section of the second chapter is an extension of the latter section, where we are introducing a novel pooling method that mixes max and average pooling in one layer with the inclusion of dropout function in two different ways in order to enhance the performance of the network.
Introduction
Deep learning is one of the most active fields, not only in the research domain but also in industry. It basically consists of different types of deep neural networks such as Convolutional Neural Networks (CNNs), Deep Belief Networks (DBNs), Recurrent Neural Networks (RNNs), and so on. Their reputation came as a consequence of their ability to learn high levels of abstraction from significantly large amounts of data, and yet accomplish great results compared to conventional machine learning algorithms. Deep learning can handle different types of data like images, audio, text and so on. In our case we deal with image data sets, thus we will only cover deep learning for image analysis in general and for medical image data sets specifically.
CNN is one of the deep networks that has shown very promising performance in the deep learning domain. This architecture is known for its ability to process image data sets in particular very efficiently. In this part, we introduce an overview of the deep learning domain, go through the CNN architecture and its components, and present the impact of such algorithms in the medical image analysis domain.
Deep Learning Overview
Deep learning is a sub-field of machine learning that has raised rapidly over the last few years, it is represented by many architectures that have shown very promising results and have proven to be very effective in different application domains, due to their ability to deal with large amounts of data.
Deep neural networks are basically based on artificial neural networks, which try to simulate human brain behavior by imitating the behavior of biological neurons. It started in 1943 when McCulloch and Pitts proposed a linear threshold unit as a computational model with binary inputs and outputs [START_REF] Palm | Warren mcculloch and walter pitts: A logical calculus of the ideas immanent in nervous activity[END_REF]. Another type of ANN, called the perceptron, was introduced in 1958 by Rosenblatt [START_REF] Rosenblatt | The perceptron: A probabilistic model for information storage and retrieval in the brain[END_REF], and was considered a pioneering artificial neural network at that time. In 1969, Minsky et al. wrote a book called "Perceptrons" [START_REF] Minsky | Perceptron: an introduction to computational geometry[END_REF] that mainly revealed the limitations of the perceptron; Minsky showed in his book that it is impossible for the perceptron to learn a non-linearly separable function like XOR. Unfortunately, this book caused a significant disinterest and lack of funding for artificial neural network research. Nevertheless, research on neural networks was revitalized with the introduction of the backpropagation algorithm [START_REF] Rumelhart | Williams, learning representations by backpropagating errors[END_REF] in the late 1980s, which made them regain their strength.
A Multi-Layer-Perceptron, Fig. 1.1, is an ANN with an input and output layer plus one or more hidden layers with multiple hidden units in each one of them.
To train an ANN, the backpropagation algorithm is performed with Gradient Descent (GD), a first-order algorithm used for minimizing the error function and updating the parameters; it has been successfully used for a long time to train ANNs, despite the long training time issue. The problem with GD is that it runs through all the samples in the training set to perform a single parameter update, which is time consuming. Thus, Stochastic Gradient Descent (SGD) was proposed to overcome this drawback; it consists of using a small sample of the training set to update the parameters instead of using the whole set. Nowadays there are plenty of optimization algorithms for ANNs that have proven even more effective than SGD, such as ADAM [START_REF] Kingma | Adam: A method for stochastic optimization[END_REF] and RMSprop [START_REF] Hinton | Neural networks for machine learning lecture 6a overview of mini-batch gradient descent[END_REF]. Furthermore, in the optimization algorithms we specify a hyperparameter called the learning rate η, which controls the pace at which the parameters (weights) of the ANN are updated. It is an important component due to its role in speeding up the training process. Nonetheless, the value of the learning rate must be chosen wisely, because a larger value may lead to divergence instead of convergence. On the other hand, a smaller value can get the training procedure stuck in a local minimum. Thus, ADAM and RMSprop have the ability to reduce the learning rate in an adaptive way. On the other hand, there are some commonly used solutions for reducing the learning rate during training according to a pre-defined schedule, such as step-based decay, time-based decay and exponential decay, which consist of periodically adjusting the learning rate.
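To make the role of the learning rate explicit, the gradient descent update and the three decay schedules mentioned above can be written in their standard textbook form (the symbols η_0, k, d and s denote the initial rate, decay constant, drop factor and step size, and are generic, not tied to a particular implementation):

    θ_{t+1} = θ_t − η_t ∇_θ L(θ_t)
    η_t = η_0 · e^{−k·t}              (exponential decay)
    η_t = η_0 / (1 + k·t)             (time-based decay)
    η_t = η_0 · d^{⌊t/s⌋}             (step-based decay: drop by factor d every s epochs)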
ANNs have known many improvements since their beginnings, but the most important one came in 2006 with the start of deep learning, when Hinton and his team proposed a new learning algorithm called greedy layer-wise training [START_REF] Hinton | A fast learning algorithm for deep belief nets[END_REF] to train deep belief networks. Besides this breakthrough, there are many reasons that helped deep learning algorithms to outperform other machine learning algorithms, one of which is that the depth of the architectures is no longer an issue given the invention of advanced technologies such as GPUs and TPUs. On top of that, data has become more available than ever before. Machine learning, and in particular deep learning, approaches can be categorized into three subcategories: supervised, semi-supervised and unsupervised.
Supervised learning
It is a learning technique that uses labeled data to match a certain input with its corresponding output. The algorithm tries to minimize the error function by updating the network parameters until it gets to approximately generate the desired output. After the training, the algorithm should be able to get correct outputs for new data. In other words, the aim of supervised learning models is to accurately predict the correct output class for the newly unseen inputs.
Supervised learning algorithms usually try to solve two main problems: classification or regression. During the training phase, classification algorithms are given data points with an assigned label or class. The goal of classification algorithms is to take the input data and link it to its correct class, based on the training process learned beforehand.
Classification algorithms can be binary, i.e., two classes, or multi-class. For example, a classification of lung nodules as benign or malignant is considered binary, on the other hand a classification of different species of dogs is a multi-class classification.
There are many algorithms to solve classification problems. The choice of a specific algorithm for a specific problem depends on the data. Here are some popular classification algorithms: ANNs, SVM, CNN, Random Forest, and so on. On the other hand, regression is considered a predictive statistical process that attempts to find the correlation between dependent and independent variables. In opposition to classification, regression aims to predict continuous values such as time, sales, scores and so on.
There are several types of regression algorithms in the literature, from which we can mention: linear regression, logistic regression and polynomial regression.
Unsupervised learning
It is a learning strategy that tries to learn from data on its own without any supervision. It deals with unlabeled data to detect patterns in it and group them into clusters that share similar features. Unsupervised learning algorithms can be sub-categorized into two subcategories: Parametric and non-parametric unsupervised learning algorithms. The first subcategory assumes that sample data comes from a probabilistic distribution based on a set of fixed parameters. For example, the probability of any future observation can be easily generated when the mean and the standard deviation are known and the distribution is normal. The second subcategory does not require the modeler to make any assumptions regarding sample data which is assembled into clusters where each cluster is supposed to hold features of different categories that are present in the data.
Unsupervised learning algorithms are presented in the literature under clustering or association problems. Clustering is considered as assembling or organizing data entries into groups that are supposed to be similar in some way. The challenge in clustering is to find the best criteria by which data can be grouped. To do so, there are many proximity measures that help in finding a particular clustering solution, such as the Cosine distance, the Jaccard distance and the Euclidean distance. Those proximity measures are adopted in various clustering algorithms, among which we can cite K-means, Fuzzy K-means, Hierarchical clustering and Mixture of Gaussians.
On the other hand, association problems are different from clustering problems. By definition, clustering groups data objects in such a way that objects in the same group are more similar to each other than to objects in other clusters, whereas association rules aim to find associations among objects within large commercial data sets. One popular example where association rules are applied is market basket analysis. The aim here is to associate grocery items that are frequently purchased together.
Semi-supervised learning
As expected from its name, this type of learning falls between supervised and unsupervised learning strategies. Usually there is more unlabeled data than labeled data. In many cases, more unlabeled data can have a beneficial effect on the learning process, as in the case of Amazon Alexa: as claimed by Jeff Bezos, the increased amount of untagged data helped in achieving more accuracy than before. One of the most popular learning techniques that adopts a semi-supervised learning strategy is reinforcement learning. This learning technique has gained large popularity and has proven very robust and effective in the last few years. It is widely used in game-based algorithms and self-driving cars. Reinforcement learning is based on rewards and penalties given the action performed; its aim is to maximize the overall reward. Reinforcement learning took off in 2013 with the Google DeepMind group [START_REF] Logothetis | The ins and outs of fmri signals[END_REF], when they proposed an algorithm called deep Q-network that not only excels in playing the Breakout video game but outperforms all previous machine learning methods. Then the same group introduced AlphaGo [START_REF] Silver | Mastering the game of go with deep neural networks and tree search[END_REF], which outperformed the human level in the game of Go by defeating Lee Sedol in four out of five games. Thereafter, the same research group released a new version called AlphaGo Zero [START_REF] Silver | Mastering chess and shogi by self-play with a general reinforcement learning algorithm[END_REF], which dramatically outperformed all the previous versions. Adopting reinforcement learning allowed AlphaGo Zero, with just a little knowledge about the game and fewer computational resources compared to previous versions, to become the state of the art.
Convolutional Neural Networks
Introduction
Machine learning algorithms are able to understand the underlying correlations in hidden data features, and hence have the ability to make decisions without the need for any supervised instructions. Most ML algorithms have been introduced to the literature since the beginning of the 1980s for the simulation of human behavior in processing different types of data, such as speech and vision [START_REF] Ravindran | Classification of cites-listed and other neotropical meliaceae wood images using convolutional neural networks[END_REF]. Nevertheless, ML algorithms have generally failed to achieve such a level of abstraction due to the way they handle data; therefore, in the late 1990s the challenging nature of computer vision tasks gave rise to a new type of neural network called Convolutional Neural Networks, which are suitable for understanding and processing image content. CNNs have very important attributes that made them overcome the state of the art in computer vision; those attributes are hierarchical learning, automatic feature extraction, multi-tasking, and weight sharing [START_REF] Wang | On the origin of deep learning[END_REF].
CNNs are one of the best algorithms for processing image data and have shown satisfactory performance in several image recognition tasks such as image segmentation, classification and detection [101], [START_REF] Jmour | Convolutional neural networks for image classification[END_REF], [START_REF] Skourt | Lung ct image segmentation using deep neural networks[END_REF]. Beyond the academic research field, CNNs have also captured attention in industry: companies such as Google, Facebook, Amazon and Microsoft have developed their own research groups and invested in exploring new CNN architectures.
The strength of CNN resides in its ability to exploit spatial correlations in data. The architecture of CNN is composed of different learning stages: convolutional layers, non-linear activation units, subsampling layers and discriminative layers. CNNs are multilayered hierarchical networks, where each layer performs multiple transformations; for example, the convolution operation helps in extracting useful features such as shapes and patterns from data points with different types of correlation. Then, its output is assigned to non-linear processing units to embed non-linearity in the feature maps for the purpose of learning abstractions. The activation function produces different patterns of activations and thus facilitates the learning of semantic differences in input images. In some cases, the non-linear activation function is followed by a normalization layer [START_REF] Ioffe | Batch normalization: Accelerating deep network training by reducing internal covariate shift[END_REF], to normalize the activations generated by the previous layer, i.e., maintain the mean activation close to 0 and the standard deviation close to 1. Then subsampling is performed to summarize the results and make the representation robust to geometrical distortions. The integrated automatic feature extraction means that CNNs have no need for a separate feature extractor. Therefore, a CNN is able to learn high-level representations from newly presented images without thorough preprocessing.
The very first idea of a CNN-like architecture was introduced under the name of Neocognitron [START_REF] Fukushima | Neocognitron-a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position[END_REF]. This architectural design was inspired by Hubel and Wiesel's work in neuroscience, and thus generally follows the basic structure of the human visual cortex in pattern recognition. For example, the way convolutional and subsampling layers extract features shows quite a resemblance to the V1 and V2 portions of the visual cortex [START_REF] El-Shamayleh | Visual response properties of v1 neurons projecting to v2 in macaque[END_REF], as shown in Fig. 1.2. However, in the 1990s Yann LeCun introduced an architecture inspired by the Neocognitron for processing matrix-like topological data through his work entitled "Gradient-based Learning Applied to Document Recognition" [START_REF] Lecun | The mnist database of handwritten digits[END_REF]; CNNs first came into the spotlight through this work. CNNs gained their popularity in image recognition due to their hierarchical feature extraction ability, which gives them the ability to capture different levels of features, from low to high. Deep architectures mostly outperform shallow architectures when handling complex data, as a consequence of stacking multiple linear and non-linear processing units in a layer-wise form. On the other hand, the availability of big data and advanced hardware have contributed significantly to the recent success of deep CNNs in achieving and even outperforming human-level performance [START_REF] Jie | Squeeze-and-excitation networks[END_REF], [142].
Basic components of CNN
CNNs are widely considered the most commonly used machine learning technique in several application domains, especially vision-related ones. A CNN combines two major phases for processing grid-like data representations: a feature extraction phase, in which significant features are learned, and a feature classification phase. A typical CNN architecture alternates convolutional layers (with activation functions) and pooling layers, followed by one or more fully connected layers, as shown in Fig. 1.3. Besides these mapping functions, normalization functions and dropout are also included to enhance CNN performance [START_REF] Ioffe | Batch normalization: Accelerating deep network training by reducing internal covariate shift[END_REF], [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF]. The next section discusses in detail each component of the CNN architecture and its role in achieving optimal performance.
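To make this layered design concrete, the following minimal PyTorch sketch (our own illustrative toy example, not an architecture used elsewhere in this thesis; the layer sizes are arbitrary) stacks a convolutional layer, a ReLU activation, a pooling layer and a fully connected classifier:

import torch
import torch.nn as nn

# Toy CNN for single-channel 28x28 inputs and 10 classes (illustrative only).
toy_cnn = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),  # feature extraction
    nn.ReLU(),                        # non-linear activation
    nn.MaxPool2d(kernel_size=2),      # subsampling: 28x28 -> 14x14
    nn.Flatten(),                     # 8 * 14 * 14 = 1568 features
    nn.Linear(8 * 14 * 14, 10),       # discriminative (fully connected) layer
)

x = torch.randn(4, 1, 28, 28)         # a mini-batch of 4 images
logits = toy_cnn(x)                   # output shape: (4, 10)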
Convolutional layer
A convolutional layer generates a map of convolved features by performing the convolution operation with relatively small parameterized filters of shape N × N. The nature of these filters determines the type of the resulting features, such as edges, horizontal lines, shapes and patterns. More formally, a convolution operation can be represented by the following formula:
y_k = f(W_k \ast x) \qquad (1.1)
where x denotes the input image, W_k is the convolution filter associated with the k-th feature map, and the asterisk denotes the convolution operator. The function f(\cdot) applied to the feature map is the activation function.
In general, the shape of the input tensor remains unchanged after a convolution with "same" padding, leaving the reduction of the feature-map size to the pooling layer; it shrinks, however, when "valid" padding is applied. Together with the non-linear transformation performed by the activation functions (covered later), this operation effectively mimics the task of the visual cortex: the non-linear transformation produces a clearer contrast between meaningful features, allowing the convolution operation to extract features at a more abstract level.
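As an illustration of Eq. (1.1), the following NumPy sketch (a naive "valid" convolution written for clarity rather than speed; the image and filter sizes are arbitrary) slides one filter over a single-channel image and applies ReLU as the activation f:

import numpy as np

def conv2d_valid(x, w):
    # Naive 2D convolution (cross-correlation, as in deep learning libraries)
    # of image x with filter w, using valid padding.
    H, W = x.shape
    n = w.shape[0]
    out = np.zeros((H - n + 1, W - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + n, j:j + n] * w)
    return out

x = np.random.rand(6, 6)                      # input image
w = np.random.rand(3, 3)                      # 3x3 filter W_k
y = np.maximum(0, conv2d_valid(x, w))         # feature map y_k = f(W_k * x) with f = ReLU
print(y.shape)                                # (4, 4): smaller than the input (valid padding)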
There are different types of convolutional layers, depending on how they handle the input volume; we can cite the following:
- Simple convolution: the type commonly used in a standard CNN; the same filter, with a given width/height, is slid over the input and a dot product is computed at each position.
- 1 × 1 convolution: this specific type of convolution was first introduced in the network-in-network architecture [START_REF] Lin | Network in network[END_REF] and later adopted in the Inception architecture [142] to reduce the dimensionality of the filter space, thereby reducing the computational cost of the network (a small numerical sketch of this dimensionality reduction is given right after this list).
- Flattened convolution: used for the same reason as the previous type, but here not only is the feature dimension set to 1, so is one of the other dimensions (width or height). It consists of consecutive sequences of one-dimensional filters along all directions of the 3D space and obtains results comparable to conventional CNNs. Flattened convolution [START_REF] Jin | Flattened convolutional neural networks for feedforward acceleration[END_REF] has shown faster performance than standard convolution, owing to the reduced number of learnable parameters.
- Spatial and cross-channel convolution: widely adopted in the Inception architecture; the main idea is to split cross-channel correlations into a series of independent operations. For example, a 3 × 3 filter is applied separately as a 3 × 1 filter followed by a 1 × 3 filter.
- Depth-wise convolution: unlike spatially separable convolutions, this type works with filters that cannot be separated into two smaller ones. It performs the spatial convolution independently over each channel, as shown in Fig. 1.8.
- Grouped convolution: first mentioned in AlexNet [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF]; the motivation is to optimize network performance by reducing the computational complexity, dividing the features into groups and spreading the computation over multiple GPUs.
- Shuffled grouped convolution: coming from ShuffleNet [START_REF] Zhang | Shufflenet: An extremely efficient convolutional neural network for mobile devices[END_REF], the main idea is to avoid the side effect of grouped convolutions, namely that the features of a given channel are derived from only a small subset of the input channels.
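To give a feel for the 1 × 1 convolution mentioned above, the following NumPy sketch (an illustration with arbitrary channel counts, not tied to any particular architecture) mixes 64 input channels down to 16 output channels while leaving the spatial dimensions untouched:

import numpy as np

# 1x1 convolution viewed as channel mixing, with feature maps in (C, H, W) layout.
C_in, C_out, H, W = 64, 16, 32, 32
feature_maps = np.random.rand(C_in, H, W)
weights = np.random.rand(C_out, C_in)      # one scalar weight per (output, input) channel pair

# Each output channel is a weighted sum of the input channels at every spatial location.
reduced = np.tensordot(weights, feature_maps, axes=([1], [0]))
print(reduced.shape)                        # (16, 32, 32): same spatial size, fewer channels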
Activation functions
Activation functions are a crucial component of deep convolutional neural networks: they determine the network's output, its accuracy and the computational efficiency of training, and can therefore make or break a deep neural network. They also have a huge impact on the CNN's ability to converge, and choosing an activation function is a critical task, given that a poor choice might prevent the CNN from converging in the first place.
Activation functions are mathematical equations that define the nature of the model's output; they control which units should be activated according to their relevance for making the model converge quickly. Moreover, activation functions help normalize each unit's output, typically to a range between 0 and 1 or between -1 and 1. With the increasing depth of recent CNNs, the computational strain on activation functions has grown, creating a need for speed and hence the development of new functions such as Swish [START_REF] Ramachandran | Swish: a self-gated activation function[END_REF] and ReLU [START_REF] Nair | Rectified linear units improve restricted boltzmann machines[END_REF] and its variants.
Non-linear activation functions in particular have become the most widely used in CNNs, as they help the network learn complex data features and provide accurate predictions.
There are three types of activation functions: binary step functions, linear functions and non-linear functions. We only cover the non-linear type here, since it is the most sophisticated and the most widely used in CNNs.
Sigmoid The sigmoid function [START_REF] Narayan | The generalized sigmoid activation function: Competitive supervised learning[END_REF] has been widely used in neural networks in general; it non-linearly transforms the input values into the range between 0 and 1. The sigmoid function is characterized by its smooth gradient, which prevents jumps in the output values, and it produces clear-cut predictions, since a moderately large input value already brings the output close to the edge of the curve. It is mathematically represented by the following formula:
sig(x) = \frac{1}{1 + e^{-x}} \qquad (1.2)
However, the sigmoid function has several drawbacks: vanishing gradients, an output that is not zero-centered, and a high computational cost.
Softmax The Softmax function is a generalization of the sigmoid. The sigmoid function is usually used for binary classification, whereas the Softmax function is used for multi-class classification. The Softmax function can be represented as follows:
softmax(x_i) = \frac{e^{x_i}}{\sum_{k=1}^{K} e^{x_k}} \qquad (1.3)
where i = 1, \ldots, K.
Hyperbolic tangent Usually referred to as tanh, it is another non-linear function that has proven more effective than the sigmoid. Its output range lies between -1 and 1. Apart from that, tanh is very similar to the sigmoid: it is continuous and differentiable at all points. The tanh formula is represented as follows:
\tanh(x) = \frac{2}{1 + e^{-2x}} - 1 \qquad (1.4)
ReLU It is another non-linear activation function that has proven very effective with the rise of deep neural networks over the last decade. ReLU [START_REF] Nair | Rectified linear units improve restricted boltzmann machines[END_REF] gained popularity thanks to its speed and to the fact that it does not activate all the neurons at the same time, which makes it computationally efficient compared to the sigmoid and tanh. ReLU is mathematically formulated as follows:
ReLU(x) = \max(0, x) \qquad (1.5)
Nevertheless, ReLU has a drawback known as the dying ReLU problem: when the inputs are negative or close to zero, the gradient of the function becomes zero, which prevents the network from backpropagating and learning from the data.
Leaky ReLU The leaky ReLU [START_REF] Xu | Empirical evaluation of rectified activations in convolutional network[END_REF] improves on the ReLU function by offering a solution to the dying ReLU problem. Instead of returning 0 for negative values of x, the function is defined as an extremely small linear component of x, as expressed by the following formula:
LReLU(x) = \begin{cases} 0.01x, & x < 0 \\ x, & x \geq 0 \end{cases} \qquad (1.6)
Parametric ReLU It is another variant of the ReLU function [START_REF] Xu | Empirical evaluation of rectified activations in convolutional network[END_REF] that addresses the problem of the gradient becoming zero when the input value is negative. It is the same as the leaky ReLU, but instead of the constant 0.01 it uses a learnable parameter a, as shown in the following formula:
PReLU(x) = \begin{cases} ax, & x < 0 \\ x, & x \geq 0 \end{cases} \qquad (1.7)
Swish Swish is an activation function recently proposed by the Google research group. According to [START_REF] Ramachandran | Swish: a self-gated activation function[END_REF], it performs better than ReLU on deeper models at the same computational cost. Its output ranges from negative infinity to positive infinity. The function is defined as:
swish(x) = \frac{x}{1 + e^{-x}} \qquad (1.8)
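For reference, the activation functions above can be written in a few lines of NumPy; this is a plain sketch of Eqs. (1.2) to (1.8) for experimentation, not code taken from any library:

import numpy as np

def sigmoid(x):          # Eq. (1.2)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):             # Eq. (1.4)
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def relu(x):             # Eq. (1.5)
    return np.maximum(0.0, x)

def leaky_relu(x):       # Eq. (1.6)
    return np.where(x < 0, 0.01 * x, x)

def prelu(x, a):         # Eq. (1.7), with a learnable slope a
    return np.where(x < 0, a * x, x)

def swish(x):            # Eq. (1.8): x multiplied by sigmoid(x)
    return x * sigmoid(x)

x = np.linspace(-3.0, 3.0, 7)
print(relu(x))
print(swish(x))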
Pooling layer
Pooling layers have been one of the building blocks of CNNs from the very beginning and appear to play an important role in making CNNs so successful.
Integrating pooling layers into a CNN architecture is essential for several reasons. First of all, pooling layers reduce the dimensionality of the input data, which dramatically decreases the computational cost of the model. Secondly, they help deal with the sensitivity to feature location in the input data. Beyond that, pooling layers are involved in preventing overfitting: they generate a lower-resolution version of the input feature maps that preserves most of the important information while eliminating irrelevant features. Pooling layers are usually applied after the activation non-linearity has been applied to the convolutional feature maps. There are many variants of pooling layers, such as max-pooling, average pooling and several mixed versions such as mixed pooling, mixed max-average pooling and gated pooling. Given the number of existing pooling operations, the choice of a specific function for a specific problem has mostly been guided by empirical studies, as shown in [START_REF] Yu | Mixed pooling for convolutional neural networks[END_REF]. Nevertheless, in [START_REF] Boureau | A theoretical analysis of feature pooling in visual recognition[END_REF] Boureau et al. presented theoretical work that provides guidance on which type of pooling operation should be adopted under specific circumstances.
Below, we present some of the popular pooling functions.
Max-pooling It is the most widely used pooling function in the literature, owing to its better performance compared to standard functions. Max-pooling selects the maximum value among the values in the pooled region, according to the following formula:
y_{ij}^{k} = \max_{(p,q) \in R_{ij}} x_{pq}^{k} \qquad (1.9)
where x_{pq}^{k} denotes the element at location (p, q) covered by the pooling region R_{ij} in the k-th feature map. Max-pooling is usually performed after the first or second convolutional layer to guarantee the reduction of dimensionality.
Average-pooling It is the second most widely used pooling function. The operation takes the arithmetic mean of the values in the pooled region. It can be mathematically represented by the following formula:
y_{ij}^{k} = \frac{1}{|R_{ij}|} \sum_{(p,q) \in R_{ij}} x_{pq}^{k} \qquad (1.10)
where x_{pq}^{k} denotes the element at location (p, q) covered by the pooling region R_{ij} in the k-th feature map.
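The two operations of Eqs. (1.9) and (1.10) can be sketched as follows in NumPy (a simplified, non-overlapping pooling over square regions; the function name pool2d and the 4 × 4 example are ours):

import numpy as np

def pool2d(x, size=2, mode="max"):
    # Non-overlapping pooling of a 2D feature map over size x size regions.
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]                     # drop borders that do not fit
    blocks = x.reshape(H // size, size, W // size, size)    # group pixels into pooling regions
    if mode == "max":
        return blocks.max(axis=(1, 3))                      # Eq. (1.9)
    return blocks.mean(axis=(1, 3))                         # Eq. (1.10)

fmap = np.random.rand(4, 4)
print(pool2d(fmap, 2, "max"))    # 2x2 output: maximum of each region
print(pool2d(fmap, 2, "avg"))    # 2x2 output: mean of each region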
Mixed-pooling Choosing between max and average pooling has always been difficult because their performance is often close; nevertheless, in many cases one is clearly better than the other. For that matter, Dingjun Yu et al. introduced a new pooling operation, mixed pooling [START_REF] Yu | Mixed pooling for convolutional neural networks[END_REF], which randomly picks either max or average pooling to be performed in the model. Mixed pooling has shown satisfactory results and, according to their experiments, outperforms both max and average pooling. The operation can be formally represented by the following formula:
y_{ij}^{k} = \lambda \max_{(p,q) \in R_{ij}} x_{pq}^{k} + (1 - \lambda) \frac{1}{|R_{ij}|} \sum_{(p,q) \in R_{ij}} x_{pq}^{k} \qquad (1.11)
where x_{pq}^{k} denotes the element at location (p, q) covered by the pooling region R_{ij} and λ takes the value 0 or 1: λ = 0 means that average pooling is performed and λ = 1 that max pooling is performed. In this way, the pooling scheme becomes probabilistic, which helps achieve better performance.
Mixed max-average-pooling This method was proposed by Chen-Yu Lee et al. in [START_REF] Wan | Regularization of neural networks using dropconnect[END_REF].
It consists of proportionally mixing the two pooling operations, max and average, instead of choosing just one of them. The mixed max-average pooling operation can be represented by the following formula:
y_{ij}^{k} = \alpha \max_{(p,q) \in R_{ij}} x_{pq}^{k} + (1 - \alpha) \frac{1}{|R_{ij}|} \sum_{(p,q) \in R_{ij}} x_{pq}^{k} \qquad (1.12)
where x_{pq}^{k} denotes the element at location (p, q) covered by the pooling region R_{ij} and α ∈ [0, 1] is a scalar mixing proportion specifying the exact combination of max and average pooling. Mixed pooling can be seen as the special case of mixed max-average pooling in which α is restricted to 0 or 1.
Gated-pooling This method was proposed in the same paper as the previous one, as a solution to its main drawback, namely its non-responsiveness: the mixing proportion is fixed regardless of the characteristics of the data. Gated pooling was introduced to address this non-responsive behaviour. Rather than fixing the mixing proportion, a gating mask with the same spatial dimensions as the pooled region is learned; the inner product of this gating mask and the region being pooled produces a scalar, which is fed through a sigmoid function to generate the mixing proportion. This operation is expressed by the following formula:
y_{ij}^{k} = \sigma(\omega^{T} R_{ij}) \max_{(p,q) \in R_{ij}} x_{pq}^{k} + \left(1 - \sigma(\omega^{T} R_{ij})\right) \frac{1}{|R_{ij}|} \sum_{(p,q) \in R_{ij}} x_{pq}^{k} \qquad (1.13)
where x_{pq}^{k} denotes the element at location (p, q) covered by the pooling region R_{ij}, ω denotes the values of the gating mask and σ is the sigmoid function given in Eq. (1.2).
Batch normalization
Deep neural networks are among the most sophisticated learning algorithms in the machine learning field, yet training them remains a very challenging task, as they are sensitive to the initial parameter configuration. To overcome this issue and lead the model towards accurate results, many optimization algorithms have been adopted. Stochastic Gradient Descent (SGD) is one of them and has proven to be an excellent way of training DNNs, all the more so when used with mini-batches. However, as robust as SGD is, it still requires careful initialization of the network parameters. This is an issue for the training process because of the correlation between the inputs to each layer and the parameters of all preceding layers: as the network grows deeper, small changes to the network parameters make the layers' inputs change dramatically.
To address the aforementioned problem, one may immediately think of fixing the distributions of the layers' inputs. However, changing input distributions raises a problem called "covariate shift" [START_REF] Shimodaira | Improving predictive inference under covariate shift by weighting the log-likelihood function[END_REF], which can be handled via domain adaptation. The covariate shift problem also affects sub-parts of the model, such as sub-networks or individual layers, and when the same whole-model solution is applied to them, many dimensions of the layers' inputs are likely to grow significantly, which slows down convergence. In [START_REF] Ioffe | Batch normalization: Accelerating deep network training by reducing internal covariate shift[END_REF], the problem of changing distributions of the internal units of a DNN is referred to as "internal covariate shift". Performing batch normalization during training offers a promise of eliminating it and guarantees faster training. Batch normalization reduces internal covariate shift by normalizing the network's inputs, i.e., linearly transforming them to have zero mean and unit variance and decorrelating them. For optimization purposes, batch normalization is performed over mini-batches, since normalizing each layer's inputs over the whole training set is not practical. Two simplifications are therefore proposed to make the normalization process tractable. Firstly, instead of jointly normalizing the features of a layer's inputs and outputs, each scalar feature is normalized independently, as shown in Eq. (1.14). For a layer of dimension d and an input x = (x_1, \ldots, x_d), each dimension is normalized as follows:
z_i = \frac{x_i - \mathrm{Mean}(x_i)}{\sqrt{\mathrm{Var}(x_i)}} \qquad (1.14)
where Mean and Var are computed over the training set. In practice, constraining the activations of each layer to a strict mean of 0 and variance of 1 can limit the expressiveness of the network. Hence, in practice, batch normalization lets the network learn parameters γ and β, which can shift and scale the normalized value to whatever mean and variance the network needs:
y_i = \gamma_i z_i + \beta_i \qquad (1.15)
The parameters used in Eq. (1.15) are learned along with the original model parameters. Setting \gamma_i = \sqrt{\mathrm{Var}(x_i)} and \beta_i = \mathrm{Mean}(x_i) restores the original activations if needed.
The second simplification is to estimate the mean and variance of each activation from each mini-batch, since mini-batches are used in stochastic gradient training. The parameters used for normalization are therefore fully involved in the gradient backpropagation.
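A minimal NumPy sketch of Eqs. (1.14) and (1.15) over a mini-batch is given below; the function name batch_norm, the small epsilon added for numerical stability and the example sizes are our own choices, and γ and β are kept fixed here although they would be learned in practice:

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x has shape (batch, features); statistics are computed per feature over the mini-batch.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    z = (x - mean) / np.sqrt(var + eps)   # Eq. (1.14): zero mean, unit variance
    return gamma * z + beta               # Eq. (1.15): learned scale and shift

x = np.random.randn(8, 4) * 5.0 + 3.0     # mini-batch of 8 samples, 4 features
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0), y.std(axis=0))      # approximately 0 and 1 per feature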
Batch normalization can be applied to any set of activations in the network. In the case of CNNs, the focus is on transformations that consist of an affine transformation followed by an element-wise non-linearity:
z = f(Wu + b) \qquad (1.16)
where W and b are parameters learned by the model and f(\cdot) is the non-linearity. This transformation covers both fully connected and convolutional layers. The BN transformation is added just before the non-linearity, i.e., x = Wu + b is normalized. The layer's inputs u could also have been normalized, but since u is probably the output of another non-linearity, the shape of its distribution is likely to change during training, and limiting its first and second moments would not eliminate the covariate shift. On the contrary, Wu + b is more likely to have a symmetric, non-sparse, "Gaussian-like" distribution, and normalizing it should lead to activations with a stable distribution. Note that the bias b can be ignored, since its effect is cancelled by the subsequent mean subtraction. Therefore z = f(Wu + b) is simply replaced by z = f(BN(Wu)), where BN is applied independently to every dimension of x = Wu, with a distinct pair of learned parameters \gamma_k, \beta_k for each dimension.
In the case of convolutional layers, the normalization must follow the convolution property, so that different elements of the same feature map are normalized in the same way at different locations. To achieve this, all activations of a feature map are normalized jointly over the mini-batch and over all spatial locations: the set B is taken to contain all the values of a feature map across both the elements of the mini-batch and the spatial locations. Thus, for a mini-batch of size s and feature maps of size i × j, the effective mini-batch size s' = s·i·j is used. This leads to learning a pair of parameters \gamma_k and \beta_k per feature map, instead of per activation.
Batch normalization has been applied to various state-of-the-art models from the ImageNet classification challenge, such as Inception [START_REF] Szegedy | Going deeper with convolutions[END_REF], and has proven very robust and effective, surpassing the previous state-of-the-art accuracy, as shown in [START_REF] Ioffe | Batch normalization: Accelerating deep network training by reducing internal covariate shift[END_REF].
Dropout
From the beginning, neural networks have faced several problems that make training difficult; one of them is overfitting. There have been many attempts to address that issue: on the one hand, the use of pooling layers and hyperparameter fine-tuning has shown promising results in reducing the overfitting of such networks; on the other hand, regularization methods such as L1 and L2 [START_REF] Palm | Warren mcculloch and walter pitts: A logical calculus of the ideas immanent in nervous activity[END_REF] help reduce overfitting by keeping the network's weights as small as possible. Furthermore, N. Srivastava et al. proposed a method called dropout [START_REF] Rosenblatt | The perceptron: A probabilistic model for information storage and retrieval in the brain[END_REF], a simple way of preventing neural networks from overfitting by introducing a new form of model combination.
For many machine learning methods, model combination improves performance. However, the computational cost grows with the number of combined models, given the number of hyperparameters to fine-tune for each of them, and model combination also requires large amounts of data, which may not be available to train several different networks. Besides the computational complexity and the possible lack of data, even if it were practical to train several large networks, combining them would not be suitable for inference, given the importance of speed at test time. Dropout was proposed to resolve these issues. It works by randomly dropping out units during the training of the neural network; dropping out a unit means temporarily disabling its incoming and outgoing connections. Dropout introduces a new hyperparameter called the "retaining probability", usually referred to as p, which is the probability that each unit is retained. As stated in the original paper, the value of p can be chosen using a validation set or simply set to 0.5, although choosing an optimal value for p is an important task. In [START_REF] Rosenblatt | The perceptron: A probabilistic model for information storage and retrieval in the brain[END_REF], it is highly recommended that p be closer to 1 than to 0.5 for the input layers, while for hidden layers the optimal value usually lies between 0.5 and 0.8, with p = 0.5 most likely to be optimal. Nevertheless, the nature of the data also plays a significant role in deciding the retaining probability: for example, the optimal value of p for image data in the input layers is 0.8, as shown in [START_REF] Rosenblatt | The perceptron: A probabilistic model for information storage and retrieval in the brain[END_REF].
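The mechanism can be sketched in a few lines of NumPy; this is the common "inverted" variant, in which activations are rescaled by 1/p during training so that nothing changes at test time, and it is an illustration of the idea rather than the exact formulation of the original paper:

import numpy as np

def dropout(x, p=0.5, training=True):
    # Each unit is retained with probability p during training; dropped units are zeroed.
    if not training:
        return x                                   # at test time the layer is a no-op
    mask = (np.random.rand(*x.shape) < p).astype(x.dtype)
    return x * mask / p                            # rescale to keep the expected activation

activations = np.random.rand(2, 6)
print(dropout(activations, p=0.8))                 # roughly 20% of the units are zeroed out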
Fully connected layers
After all this processing, a flattening operation is applied: the image data is transformed into a feature-vector representation for the feature-learning process, which is handled by fully connected (FC) layers. The extracted features are fed to FC layers, which classify the images as the last part of the CNN. "FC layers" is just another name for regular neural networks such as MLPs, ANNs or, sometimes, dense layers, as described in the introduction of this chapter.
It is only natural to ask the following question: why not use an MLP directly for image classification? First of all, to feed an image directly to an MLP, it needs to be flattened, which breaks any spatial relations present in the image. Moreover, MLPs perform poorly compared to CNNs for image classification.
The basic components of FC layers are as follows:
-Input layer: it contains the flattened feature vector extracted from previous layers.
-Weights: they represent the relative importance of each node in a layer, and hence its contribution to the final output prediction.
-Hidden layers: layers whose inputs and outputs are not directly exposed; they contain a number of nodes, called neurons, stacked on top of each other.
-Output layer: after the data is fed to the input layer and passed through the hidden layers, the output layer produces a real value in the case of regression or a set of probabilities in the case of classification.
We discussed MLPs in depth in the introduction; for further information see Section 1.3 of this chapter. Fig. 1.1 in that section shows an example of a fully connected architecture.
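As a small illustration of these components, the following NumPy sketch runs a forward pass through the fully connected part of a CNN; the feature-map shape, layer widths and random weights are arbitrary choices made for the example:

import numpy as np

feature_maps = np.random.rand(8, 14, 14)       # output of the convolutional part
x = feature_maps.reshape(-1)                   # flattening: 8 * 14 * 14 = 1568 features

W1, b1 = np.random.randn(128, 1568) * 0.01, np.zeros(128)   # hidden layer weights and biases
W2, b2 = np.random.randn(10, 128) * 0.01, np.zeros(10)      # output layer (10 classes)

h = np.maximum(0, W1 @ x + b1)                 # hidden activations (ReLU)
logits = W2 @ h + b2
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the 10 classes
print(probs.sum())                             # 1.0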
Deep Learning Applications in Medical Image Analysis
Medical images differ from other types of images in several ways [START_REF] Chung | Information processing in medical imaging[END_REF]. Moreover, when it comes to image processing, medical images usually carry sensitive information on which clinical interpretation and diagnosis largely rely. The issue with this information is that it is easily damaged or can be lost during processing [START_REF] Lévêque | Comparative study of the methodologies used for subjective medical image quality assessment[END_REF], or even during acquisition.
For acquiring images of human body organs, there are various options, depending on the type of the acquired organ and the required acquisition quality. The most commonly used modalities in medical practice are Computed Tomography (CT) [START_REF] Webb | Fundamentals of Body CT E-Book[END_REF], Positron Emission Tomography (PET) [START_REF] Khalil | Basic science of PET imaging[END_REF] and Magnetic Resonance Imaging (MRI) [START_REF] Pai | Magnetic resonance imaging physics[END_REF]. Each of them has its own way of obtaining organ information: for instance, MRI scans use magnetic fields while CT scans use X-rays.
In the early days of medical image analysis, researchers used sequential applications of low-level pixel processing (basic shape-detection filters, region growing) and mathematical modeling (fitting lines, circles and ellipses) to construct rule-based systems that handled particular tasks. Expert systems, often called good old-fashioned artificial intelligence, were in some ways very similar to these rule-based image processing systems. Thereafter, supervised techniques became increasingly popular in the medical image analysis domain: image segmentation through active shape models and atlas models, feature extraction methods, and statistical classifiers for diagnosis or object detection. They helped shift the field from human-designed systems to systems trained by computers, with only the limited human intervention needed to obtain handcrafted features.
The added value brought by deep learning to the medical image analysis domain, as to other domains, is the automation of the feature extraction phase. This concept of combining feature extraction layers (which extract high levels of abstraction) with feature-learning layers (for which the previous layers make inference easy) is integrated in many deep learning models and is essentially the key to the strength of such architectures.
There are different applications of deep learning in the medical imaging field, from detection through segmentation to classification; in the following we present each application separately. Note that these applications are part of what is called a Computer-Aided Diagnosis (CAD) system, a tool designed to help medical doctors/radiologists provide an accurate diagnostic decision. The efficiency of a CAD system is measured by its accuracy, speed and level of automation.
For example, the lifecycle of a CAD system for lung cancer diagnosis relies on these four major steps: segmentation of the lung fields to eliminate any useless area and focus on the area of study (lung parenchyma), detection of lung nodules inside the lung parenchyma, segmentation of the detected nodules, and then classification of segmented nodules into two classes: malignant and benign.
Detection
In image processing, object detection is one of the major steps. It consists of detecting instances of semantic objects of a given class. In general, object detection can take the broader meaning of classifying images that may or may not contain an object. It can sometimes be confusing to distinguish between object classification, object localization and object detection: in object classification, we assign a class label to an image; object localization consists of surrounding an object inside the image with a bounding box; object detection combines both classification and localization, drawing a bounding box around each object and assigning it a class label. The performance of an object detection model is evaluated using the precision and recall for the known objects inside the image, computed across the best-matching bounding boxes. With this made clear, we present in the following some recent state-of-the-art deep learning models for object detection and their applications in the medical field, where object detection refers to the detection of lesions and organs in medical images.
There are numerous deep learning architectures for object detection that are very popular, such as R-CNN, Fast R-CNN, Faster R-CNN and YOLO in all its versions. R-CNN is a region-based CNN that was proposed by Ross Girshick et al. [START_REF] Girshick | Rich feature hierarchies for accurate object detection and semantic segmentation[END_REF] in 2014. It was one of the first large and successful applications of CNNs to object detection and achieved state-of-the-art performance on both the VOC-2012 and the 200-class ILSVRC-2013 data sets [START_REF] Li | Path r-cnn for prostate cancer diagnosis and gleason grading of histological images[END_REF]. R-CNN consists of three basic building blocks: a region-proposal block, which generates and extracts candidate bounding boxes, i.e., category-independent region proposals; a feature extraction block, which extracts features from each candidate region using a deep CNN based on the AlexNet architecture [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF]; and a classifier block, which classifies the extracted features into the classes at hand.
In [START_REF] Li | Path r-cnn for prostate cancer diagnosis and gleason grading of histological images[END_REF], R-CNN was applied to prostate cancer detection based on Gleason grading of histological images. Wenyuan Li et al. demonstrated that by using R-CNN for multi-task prediction they could provide complementary contextual information, which led to better performance compared to a single-task model. The authors claim state-of-the-art performance in epithelial cell detection, with 99.07% accuracy. Fast R-CNN is a variant of R-CNN improved in terms of speed: in 2015, Girshick et al. [START_REF] Girshick | Fast r-cnn[END_REF] addressed the issue of involving three separate models, which is computationally expensive, by proposing a single model that performs object detection very fast and deals with the issues present in [START_REF] Girshick | Rich feature hierarchies for accurate object detection and semantic segmentation[END_REF]. In 2016, Shaoqing Ren et al. [START_REF] Ren | Faster r-cnn: Towards real-time object detection with region proposal networks[END_REF] proposed further improvements of the Fast R-CNN architecture in terms of training speed and detection accuracy: the proposed architecture is designed to propose and refine region proposals, which are then used as input to Fast R-CNN within a single model. In both R-CNN and Fast R-CNN, selective search is used to find the region proposals, which is time-consuming and computationally expensive; in [START_REF] Ren | Faster r-cnn: Towards real-time object detection with region proposal networks[END_REF] the authors got rid of selective search and instead let the network learn the region proposals itself, calling the result Faster R-CNN.
In [START_REF] Yang | Faster r-cnn based microscopic cell detection[END_REF], Su Yang et al. performed microscopic cell detection based on the Faster R-CNN architecture. The authors showed that with the use of Faster R-CNN they managed to detect almost every microscopic cell, with much faster and more accurate performance.
The R-CNN family is generally accurate, yet the You Only Look Once (YOLO) models are much faster than R-CNN, achieving accurate real-time object detection. As opposed to prior work on object detection, YOLO [START_REF] Redmon | You only look once: Unified, real-time object detection[END_REF] does not repurpose classifiers to perform object detection but instead frames object detection as a regression problem. In a single pass, the model performs object detection and classification, via bounding boxes and class probabilities respectively, directly from the input image. The YOLO architecture can process images in real time at 45 frames per second and still achieve promising results very fast. The base YOLO model suffers from localization errors compared to other object detection methods, but it learns general representations of the objects and outperforms R-CNN when dealing with different kinds of images, such as artwork.
There are various variants of YOLO that provide even more accurate and faster object detection. For instance, YOLOv2 [START_REF] Redmon | Yolo9000: Better, faster, stronger[END_REF] improves on the first version by adding batch normalization and by handling higher-resolution input images; in addition, the bounding-box priors are precomputed using k-means during training. Further enhancements were added to YOLOv2 and presented by Redmon et al. in [START_REF] Redmon | Yolov3: An incremental improvement[END_REF]: YOLOv3 brings incremental improvements in architectural design and depth without compromising accuracy or time performance. Researchers keep improving YOLO (in terms of architectural design, time performance and accuracy); at the time of writing, YOLOv5 is available.
YOLOv3 was applied to kidney detection in CT scans [START_REF] Lemay | Kidney recognition in ct using yolov3[END_REF] and showed very promising results, achieving a Dice score of 0.851 on 2D CT slices and 0.742 in 3D.
There are many applications of the YOLO family in the medical domain, such as breast mass detection and classification in full-field digital mammograms [START_REF] Aly | Yolo based breast masses detection and classification in full-field digital mammograms[END_REF], lung nodule detection in CT scans [START_REF] Biserinska | YOLO Models for Automatic Lung Nodules Detection from CT Scans[END_REF], and detection and classification of cholelithiasis and gallstones in CT images [START_REF] Pang | A novel yolov3-arch model for identifying cholelithiasis and classifying gallstones on ct images[END_REF].
Segmentation
The next step in image processing is image segmentation, a challenging task that requires computationally friendly and accurate algorithms. In the general introduction of each chapter of Part II, we describe image segmentation in detail and mention a few advanced deep learning architectures for that purpose. Medical image segmentation [START_REF] Sharma | Automated medical image segmentation techniques[END_REF] helps to highlight and analyse a certain region of interest such as lung tissue, the spleen or the brain. There are many applications of medical image segmentation, among which brain tumor segmentation with boundary extraction in MRI slices, cancer detection in lung CT scans, and segmentation of the affected area in chest X-rays. As there is a shortage of experts in such domains [START_REF] Ng | National survey to identify subspecialties at risk for physician shortages in canadian academic radiology departments[END_REF], a number of algorithms that are well designed in terms of speed and accuracy have been proposed in the literature to support diagnosis.
Here we mention some applications of these architectures in the medical domain. For instance, in [START_REF] Havaei | Brain tumor segmentation with deep neural networks[END_REF] the authors adopted a novel two-pathway CNN-based architecture to perform brain tumor semantic segmentation; this architecture captures local features to learn the local details of the brain as well as larger contextual features. In addition to the architectural contribution, the authors proposed a two-phase training strategy that deals efficiently with imbalanced label distributions. In another work related to the medical domain, Baumgartner et al. [START_REF] Baumgartner | An exploration of 2d and 3d deep learning techniques for cardiac mr image segmentation[END_REF] proposed a 3D CNN-based architecture for segmenting cardiac MRI images into the left and right ventricular cavities and the myocardium. In [START_REF] Wang | Automated segmentation and diagnosis of pneumothorax on chest x-rays with fully convolutional multi-scale scse-densenet: a retrospective study[END_REF], Wang et al. performed pneumothorax segmentation in X-ray images using a spatial and channel Squeeze-and-Excitation CNN-based architecture. In [START_REF] Havaei | Brain tumor segmentation with deep neural networks[END_REF], the authors designed a CNN architecture for brain tissue segmentation in MRI images. Based on the FCN architecture, Zhang et al. [START_REF] Zhang | Deep convolutional neural networks for multi-modality isointense infant brain image segmentation[END_REF] proposed a liver segmentation method for CT scans. Christ et al. [START_REF] Dou | 3d deeply supervised network for automatic liver segmentation from ct volumes[END_REF] designed a two-stage cascaded FCN architecture for liver segmentation, in which the final segmentation result is obtained using a dense 3D conditional random field.
Classification
The final step in medical image processing is classification, a mandatory task for delivering a precise diagnosis. The classification accuracy depends on the quality of all the previous steps of the image processing pipeline, as well as on the classification models used.
Image classification is a supervised learning problem: a set of target labels is defined and a model is trained to recognize them in new input images.
In each general introduction of each chapter from part III, we described in detail some advanced deep learning architectures for image classification. Hence, in this section we are only going to present some of their applications in the medical field.
There are two transfer learning strategies: the first uses a pre-trained network as a feature extractor, while the second fine-tunes a pre-trained model on new data. Antony et al. [START_REF] Christ | Automatic liver and tumor segmentation of ct and mri volumes using cascaded fully convolutional neural networks[END_REF] used transfer learning to fine-tune a CNN on medical data and achieved 57.6% accuracy in grade assessment of knee osteoarthritis, compared with 53.4% for the feature-extractor strategy. Kim et al. [3], on the other hand, showed the opposite: using a pre-trained CNN as a feature extractor outperformed the fine-tuning strategy in cytopathology image classification, achieving 70.5% accuracy versus 69.1%. Choosing between the two transfer learning strategies can thus be confusing; however, in [START_REF] Kim | A deep semantic mobile application for thyroid cytopathology[END_REF][START_REF] Estava | Dermatologist level classification of skin cancer with deep neural networks [j[END_REF] the authors showed that using a pre-trained model as a feature extractor cannot achieve better results than fine-tuning pre-trained models: they fine-tuned a pre-trained version of the well-known Inception v3 deep architecture on medical data and nearly reached human expert performance.
In [START_REF] Gulshan | Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs[END_REF], Shen et al. performed lung nodule classification relying on a multi-scale CNN: three different CNNs are used, each dealing with a lung nodule patch at a different scale, and their outputs are combined to form the final feature vector. A similar approach was used by Kawahara and Hamarneh [START_REF] Shen | Multi-scale convolutional neural networks for lung nodule classification[END_REF] for skin lesion classification.
Natural images are usually processed in 2D, and computer vision networks mostly leverage this kind of images. In medical image processing, however, images often come in a 3D format. Many works have tried to integrate the 3D structure in an efficient way; for instance, Nie et al. [START_REF] Kawahara | Multi-resolution-tract cnn with hybrid pretrained and skin-lesion trained layers[END_REF] trained a 3D CNN on 3D MRI images to classify high-grade gliomas.
Furthermore, Setio et al. [128] performed lung nodule classification (labelling chest regions of interest as nodule or non-nodule) using a multi-stream 3D CNN on chest CT scans.
Conclusion
In this chapter, we presented an overview of deep learning and a detailed description of the CNN architecture, along with some advanced DL architectures for medical image analysis. In the rest of this thesis, we concentrate on comparative studies of enhancements to DL architectures and introduce new DL methods for medical image analysis.
Introduction
Image segmentation is a technique used in image processing to partition an image into multiple parts or regions based on the characteristics of their pixels. Image segmentation reduces the complexity of the image and makes further image processing easier: all the elements or pixels that belong to the same category are assigned the same label. In other words, instead of processing the whole image, segmenting the region of interest and then analysing it is much simpler, following the principle of "divide and conquer". The resulting region is then processed separately instead of the whole image, which reduces the inference time. In one of our works [START_REF] Skourt | Lung ct image segmentation using deep neural networks[END_REF] (Chapter 2 of Part II), we used image segmentation in lung CT scans to separate the lung area from the parts we are not interested in, and then used the region of interest for lung nodule detection. Image segmentation can be categorized into two categories: semantic segmentation and instance segmentation.
Semantic segmentation identifies all objects of the same type as one class, while instance segmentation identifies similar objects each with its own label. To elaborate, consider an image that contains five people: semantic segmentation classifies all five people as one class, with the background identified as another class, while instance segmentation segments each of these people individually.
There are different techniques in the literature to perform image segmentation, among which we can mention a few: threshold-based, edge-based and artificial-neural-network-based image segmentation. In the segmentation part of this thesis, we perform ANN-based image segmentation.
Since the birth of deep learning, several deep-learning-based architectures for image segmentation have been proposed, most of them based on CNNs. Performing image segmentation using CNNs has lately become widespread across several businesses and industries due to their effectiveness in dealing with image data sets in general. Thanks to their ability to extract relevant features, CNNs are known to achieve very promising results in healthcare: in medical image segmentation, they have significantly outperformed many conventional machine learning methods and achieved state-of-the-art results [START_REF] Minaee | Image segmentation using deep learning: A survey[END_REF]. They have become the most popular choice for several medical image processing fields, among which brain MRI [START_REF] Myronenko | 3d mri brain tumor segmentation using autoencoder regularization[END_REF], lung nodules [START_REF] El Hassani | Efficient lung nodule classification method using convolutional neural network and discrete cosine transform[END_REF], spleen [START_REF] Roth | Hierarchical 3d fully convolutional networks for multi-organ segmentation[END_REF] and cardiac medical imaging issues [START_REF] Galea | Region-of-interest-based cardiac image segmentation with deep learning[END_REF].
Since our contributions in this part rely on FCN-like, auto-encoder-based, multi-scale and attention-based architectures, we present in this general introduction some of the advanced CNN architectures that are very popular in image segmentation, grouped by model architecture.
Fully Convolutional Networks (FCN)
Long et al. [START_REF] Long | Fully convolutional networks for semantic segmentation[END_REF] introduced one of the first deep learning architectures suitable for semantic segmentation, based on an architecture consisting entirely of convolutional operations and called Fully Convolutional Networks. The way an FCN is built enables it to deal with arbitrarily sized images and to generate a corresponding segmentation map of the same size. The authors modified the well-known VGG16 and GoogLeNet to deal with the problem of non-fixed-size images by replacing fully connected layers with fully convolutional layers: fully connected layers output classification scores, while fully convolutional layers generate spatial segmentation maps. Moreover, the authors used skip connections, in which feature maps from the final layers of the model are up-sampled and combined with those of earlier layers, in order to keep as many features from the input image as possible and help obtain an accurate segmentation result. FCN showed great performance on several data sets, demonstrating that deep neural networks can be trained for semantic segmentation end-to-end on data sets containing variable-sized images; this work is considered a milestone in image segmentation. Nevertheless, FCN has some limitations that leave it somewhat short of satisfactory: it does not transfer directly to 3D image data sets, it lacks speed for real-time inference, and it does not take global context information into account in a robust way. Many works have attempted to address those limitations. For instance, ParseNet, proposed by Liu et al. [START_REF] Liu | Parsenet: Looking wider to see better[END_REF], overcomes the issue with global context information simply by adding average features to capture more general features at each location: in a layer, the feature map is pooled over the whole input, resulting in a context vector, which is normalized and unpooled to generate new feature maps having the same size as the initial ones; these feature maps are then concatenated. FCNs have been applied in many works such as skin lesion segmentation [START_REF] Wang | Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks[END_REF], iris segmentation [START_REF] Li | Fully convolutional instance-aware semantic segmentation[END_REF], instance-aware segmentation [START_REF] Yuan | Automatic skin lesion segmentation using deep fully convolutional networks with jaccard distance[END_REF] and brain tumor segmentation [START_REF] Liu | Accurate iris segmentation in non-cooperative environments using fully convolutional networks[END_REF].
Convolutional models with graphical design
One of the limitations of FCNs discussed above is the lack of useful global context features. To handle this kind of issue, many approaches integrated graphical models into deep learning architectures, among which Conditional Random Fields (CRFs) and Markov Random Fields (MRFs). In [START_REF] Chen | Semantic image segmentation with deep convolutional nets and fully connected crfs[END_REF], Chen et al. introduced a novel DL architecture combining CNNs with fully connected CRFs; the authors showed that this model is capable of localizing segment boundaries at a higher accuracy rate. In another work combining CNNs and CRFs, Schwing and Urtasun [START_REF] Schwing | Fully connected deep structured networks[END_REF] used both methods jointly to perform semantic image segmentation in an accurate way, which showed encouraging performance on several image data sets. In another work [START_REF] Zheng | Conditional random fields as recurrent neural networks[END_REF], Zheng et al. integrated CRFs into a CNN architecture to perform semantic segmentation. In [START_REF] Lin | Efficient piecewise training of deep structured models for semantic segmentation[END_REF], Lin et al. introduced a semantic segmentation algorithm based on contextual deep CRFs: they make use of contextual information by exploring patch-patch and patch-background context in order to improve semantic segmentation. Last but not least, Liu et al. [START_REF] Liu | Semantic image segmentation via deep parsing network[END_REF] proposed a novel semantic segmentation algorithm that integrates rich features into MRFs, including high-order relations and label contexts.
Encoder-Decoder based models
In this thesis, we perform image segmentation in the medical field. However, most state-of-the-art DL architectures introduced in the literature were first applied to image data sets other than medical ones. Thus, in this chapter we group the works mentioned below into two categories: encoder-decoder models for segmentation in general, and then encoder-decoder models for medical image segmentation.
General segmentation with encoder-decoder models
In [START_REF] Noh | Learning deconvolution network for semantic segmentation[END_REF], the authors performed semantic segmentation using a two-part architecture: the first part is an encoder based on layers adopted from the well-known VGG-16 model, and the second part consecutively integrates deconvolution layers with up-sampling (unpooling) layers to generate a map of pixel-wise class probabilities called the segmentation mask. In another work, Badrinarayanan et al. [5] proposed a novel encoder-decoder architecture named SegNet: similar to the aforementioned deconvolution network, it consists of the first 13 layers of the VGG-16 network followed by a pixel-wise classification layer to predict each pixel's class. Another popular encoder-decoder-based work, the high-resolution network HRNet [START_REF] Yuan | Segmentation transformer: Object-contextual representations for semantic segmentation[END_REF], maintains high-resolution representations throughout the encoding process and repeatedly exchanges information across resolutions. Many works adopted the encoder-decoder strategy to perform image segmentation, such as W-Net [START_REF] Chaurasia | Linknet: Exploiting encoder representations for efficient semantic segmentation[END_REF], the Stacked Deconvolutional Network (SDN) [START_REF] Xia | W-net: A deep model for fully unsupervised image segmentation[END_REF] and LinkNet [START_REF] Cheng | Locality-sensitive deconvolution networks with gated fusion for rgb-d indoor semantic segmentation[END_REF]. However, encoder-decoder architectures suffer from the loss of fine-grained image information, due to the loss of high-resolution representations during the encoding process; HRNet is one of the works that addresses this weakness.
Medical and biomedical image segmentation using encoder-decoder models
There are many medical image segmentation works inspired by encoder-decoder and FCN models, such as U-Net [START_REF] Ronneberger | U-net: Convolutional networks for biomedical image segmentation[END_REF] and V-Net [START_REF] Milletari | V-net: Fully convolutional neural networks for volumetric medical image segmentation[END_REF]. The U-Net architecture was applied to biological microscopy images; it uses data augmentation to learn very efficiently from the few annotated images available. The network consists of two parts: the contracting path, which performs high-level feature extraction, and the expanding path, which relies on the extracted feature maps together with features copied and cropped from the corresponding layer of the contracting path, so as to avoid losing pattern information and to recover the features required for the segmentation mask. On the other hand, V-Net is a 3D medical image segmentation algorithm proposed by Milletari et al.; the authors introduced a new objective function based on the Dice coefficient, enabling the model to deal with situations in which there is a strong imbalance between the number of voxels in the foreground and background. Many works adopted the U-Net and V-Net architectures, from which we mention the following: Zhou et al. [START_REF] Zhou | Unet++: A nested u-net architecture for medical image segmentation[END_REF] developed a nested U-Net architecture; Çiçek et al. [START_REF] Çiçek | 3d u-net: learning dense volumetric segmentation from sparse annotation[END_REF] proposed a U-Net architecture for 3D images; Zhang et al. [START_REF] Zhang | Road extraction by deep residual u-net[END_REF] developed a road segmentation/extraction U-Net-based algorithm. The same goes for the V-Net architecture, with the Progressive Dense V-Net (PDV-Net) for fast and automatic segmentation of pulmonary lobes from chest CT images, and the 3D-CNN encoder for lesion segmentation [START_REF] Brosch | Deep 3d convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation[END_REF].
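To illustrate the contracting/expanding structure with a skip connection, the following PyTorch sketch shows a deliberately tiny U-Net-style model of our own (one encoder level, one bottleneck and one decoder level with arbitrary channel sizes); the real U-Net stacks several such levels and uses cropping rather than same padding:

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())   # contracting path
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)         # expanding path
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 1, 1)                                       # pixel-wise prediction

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        u = self.up(b)
        u = torch.cat([u, e], dim=1)      # skip connection: copy encoder features across
        return self.head(self.dec(u))

mask = TinyUNet()(torch.randn(1, 1, 64, 64))   # segmentation map of shape (1, 1, 64, 64)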
Multi-scale and pyramid network based models
Multi-scale analysis is an old idea in image processing that has been adopted in many neural network architectures. One of the most popular models of this family is the Feature Pyramid Network (FPN) [START_REF] Lin | Feature pyramid networks for object detection[END_REF] proposed by Lin et al. It was mainly built for object detection and was later extended to image segmentation. It combines the multi-scale and pyramidal hierarchy of CNNs to build feature pyramids that merge low- and high-resolution features.
FPN is composed of a bottom-up pathway, a top-down pathway and lateral connections. The merged feature maps are then fed to a 3 × 3 convolution to generate the output of each stage, and each stage of the top-down pathway produces a prediction used to detect objects separately. For image segmentation, the authors use two multilayer perceptrons.
To better learn the global context features of a scene, Zhao et al. [START_REF] Ding | Context contrasted feature and gated multi-scale aggregation for scene segmentation[END_REF] proposed the pyramid scene parsing network, a multi-scale network. Using a ResNet as feature extractor within a dilated network, multiple patterns are extracted from the input image. These features are then fed to a pyramid pooling module in order to identify patterns at different scales. The feature maps go through four pooling scales, each belonging to a pyramid level, followed by a 1 × 1 convolutional layer for dimensionality reduction. The outputs of the pyramid levels are then up-sampled and concatenated with the initial feature maps to capture both global and local characteristics, and a final convolutional layer generates the pixel-wise prediction. Various other models in the literature adopt a multi-scale analysis strategy for image segmentation, such as the Adaptive Pyramid Context Network [START_REF] He | Adaptive pyramid context network for semantic segmentation[END_REF], Context Contrasted Network [START_REF] Lin | Multi-scale context intertwining for semantic segmentation[END_REF], Salient Object Segmentation [START_REF] Chen | Attention to scale: Scale-aware semantic image segmentation[END_REF] and so on.
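To make the pyramid pooling idea concrete, the following is a minimal Keras sketch of such a module (not the exact PSPNet implementation): it assumes a feature map with a known square spatial size, pools it at several bin sizes, reduces each pooled map with a 1 × 1 convolution, up-samples it back and concatenates everything with the input. The function name, the bin sizes and the fixed spatial size are illustrative choices.

```python
from tensorflow.keras import layers

def pyramid_pooling(x, spatial_size=60, bin_sizes=(1, 2, 3, 6)):
    """Sketch of a pyramid pooling module over a spatial_size x spatial_size feature map."""
    outputs = [x]
    for bins in bin_sizes:
        pool = spatial_size // bins
        # pool the feature map into a bins x bins grid
        p = layers.AveragePooling2D(pool_size=pool, strides=pool)(x)
        # 1x1 convolution for dimensionality reduction
        p = layers.Conv2D(x.shape[-1] // len(bin_sizes), 1, activation="relu")(p)
        # up-sample back to the original spatial size
        p = layers.UpSampling2D(size=pool, interpolation="bilinear")(p)
        outputs.append(p)
    # concatenate pooled context with the initial feature maps
    return layers.Concatenate()(outputs)
```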
Semantic segmentation using attention-based models
Attention mechanisms have proven very effective in many works over the last few years, and they suit the semantic segmentation task well given their ability to focus on each object separately. Chen et al. [START_REF] Huang | Semantic segmentation with reverse attention[END_REF] introduced an attention mechanism that merges multi-scale features at each pixel location; the authors jointly combine a powerful segmentation model with the attention mechanism in a multi-scale architecture. In another work, Huang et al. [START_REF] Li | Pyramid attention network for semantic segmentation[END_REF] proposed a semantic segmentation approach based on a reverse attention mechanism. In contrast to the standard attention mechanism, reverse attention captures the features that are not in the scope of the model's aim, i.e. features that are not associated with the targeted class. The RAN model captures these features in addition to the standard attention features. It is a three-branch network that performs feature extraction and the two attention mechanisms (direct and reverse) simultaneously in order to obtain high-level features. In [START_REF] Fu | Dual attention network for scene segmentation[END_REF], Li et al. proposed a Pyramid Attention Network dedicated to semantic segmentation that combines the spatial pyramid and attention mechanism concepts to exploit global contextual information for a better extraction of dense features and a more efficient labelling of pixels. In a recent work [START_REF] Yuan | Ocnet: Object context network for scene parsing[END_REF], Fu et al. developed a dual attention network for scene segmentation that generates contextual dependencies based on the attention mechanism. Furthermore, they adopted a dilated FCN to capture inter-dependencies in both the spatial and channel dimensions in addition to the extracted contextual dependencies.
Several other attention-based works have been adopted for semantic segmentation, such as OCNet [START_REF] Huang | Ccnet: Crisscross attention for semantic segmentation[END_REF], Criss-Cross Attention Network [START_REF] Ren | End-to-end instance segmentation with recurrent attention[END_REF], Discriminative Feature Network DFN [START_REF] Yu | Learning a discriminative feature network for semantic segmentation[END_REF] and end-to-end instance segmentation with recurrent attention [START_REF] Ridge | Epidemiology of lung cancer[END_REF].
Overview
Lung CT image segmentation is a prerequisite for any kind of lung image analysis and a necessary step to obtain satisfactory results in CT image analysis tasks such as lung cancer detection. An accurate image analysis therefore requires a proper segmentation of the lung area.
Lung cancer is one of the most common cancers in the world and a dangerous lung disease that causes more than one million deaths each year [START_REF] Ridge | Epidemiology of lung cancer[END_REF]. Lung cancer takes the form of a malignant lung nodule characterized by uncontrollable growth. Early detection remains the best way to increase the patient's survival rate.
For lung cancer diagnosis and detection, Computed Tomography (CT) is an efficient medical screening test. Nevertheless, physicians in many cases find it difficult to reach an accurate diagnosis without the assistance of Computer-Aided Diagnosis (CAD) systems, which are additional tools known for helping physicians obtain an accurate diagnosis. In today's medical imaging, assistance from CAD systems has become essential to provide an efficient medical diagnosis.
The success of a CAD system resides in providing an accurate segmentation of the targeted organ; it is a mandatory initial step for an efficient lung CT image analysis.
Nevertheless, for a better lung segmentation, the lung parenchyma needs to be separated from the bronchus region, which is often confused with lung tissue. Furthermore, in the case of abnormal lung parenchyma, including lung nodules and blood vessels in the segmentation along with the parenchyma is challenging, which makes providing a thorough lung segmentation a complicated problem.
In this chapter, we present lung segmentation using a commonly used deep learning architecture for image segmentation called U-Net [START_REF] Ronneberger | U-net: Convolutional networks for biomedical image segmentation[END_REF]. This work is a first step towards lung cancer detection, as it removes irrelevant information acquired in lung CT scans. Our network achieves an accurate segmentation, with a Dice coefficient score of 0.9502, using a data set that contains a few hundred manually prepared training lung images.
The rest of this chapter is organized as follows: in the next section we provide an overview of the U-Net architecture; the following section presents the segmentation accuracy obtained on the LIDC data set; concluding remarks close the chapter.
Methods
In very few years, deep learning has dramatically improved over the state of the art in many domains. The era of deep learning began with the proposition of a new algorithm and a new learning strategy by Hinton et al. [START_REF] Hinton | A fast learning algorithm for deep belief nets[END_REF]. Deep learning in general, and CNNs in particular, have experienced an exceptional rise to become the most widely adopted algorithms for a variety of pattern recognition tasks. Until today, deep learning algorithms keep winning international competitions in speech and image recognition [START_REF] Schmidhuber | Deep learning in neural networks: An overview[END_REF].
As for image classification, CNNs have had a tremendous success in image segmentation problems. For example, in 2015 Long et al. [START_REF] Long | Fully convolutional networks for semantic segmentation[END_REF] proposed a new architecture named Fully Convolutional Networks (FCN) that made CNNs very popular for dense predictions without any fully connected layers. It allowed segmentation masks to be generated for any type of image much faster than its classical competitors for image segmentation.
In the FCN architecture, performing image segmentation without fully connected layers was not the only challenge; the pooling layers were another, since they reduce the size of their inputs. To deal with this issue, up-sampling layers were adopted, which perform the reverse of the pooling operation. Hence, in this encoder-decoder architecture, the pooling layers are responsible for reducing the spatial dimensions of objects in the encoder part.
In the other part, the decoder, the up-sampling operation is responsible for recovering the relevant details of the objects.
The encoder-decoder architecture type has many variants, such as FCN and U-Net. In this work we adopt the U-Net architecture, one of the most popular architectures for medical image segmentation.
In 2015, Ronneberger et al. [START_REF] Ronneberger | U-net: Convolutional networks for biomedical image segmentation[END_REF] proposed the U-Net architecture (see Fig. 2.1), which is based on FCN. U-Net's basic components can be viewed as an association of convolution layers in the encoder part, named the contracting path, and deconvolution layers in the decoder part, called the expansive path. The contracting path is formed as a standard CNN: it consists of convolution layers with Rectified Linear Units (2.1) (ReLU, see Fig. 2.2) as activation function, each followed by a max-pooling layer. On the expansive path, the resulting feature maps are fed to up-sampling layers followed by up-convolution and convolution layers with ReLU activation. As a consequence of the loss of border pixels at every convolution operation, the missing parts must be cropped from the corresponding feature maps in the contracting path and concatenated with their counterparts in the expansive path.
f (x) = max(0, x) (2.1)
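As an illustration of the contracting/expansive structure described above, the following is a minimal Keras sketch of a shallow U-Net-like model. It is not the exact configuration used in this chapter: 'same' padding is used instead of the cropping of the original paper, the number of levels is reduced, and the input size and filter counts are illustrative.

```python
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # two 3x3 convolutions with ReLU; 'same' padding avoids the cropping step
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def small_unet(input_shape=(512, 512, 1)):
    inputs = layers.Input(input_shape)
    # contracting path
    c1 = conv_block(inputs, 64)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 128)
    p2 = layers.MaxPooling2D(2)(c2)
    # bottleneck
    b = conv_block(p2, 256)
    # expansive path with skip connections from the contracting path
    u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 128)
    u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 64)
    # one output channel: the binary lung parenchyma mask
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return models.Model(inputs, outputs)
```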
In the training phase, the U-Net architecture uses input images and their corresponding masks as outputs. In the test phase, input images are fed to the network to generate the corresponding mask as output; Fig. 2.3 shows input and output examples. The output mask is then applied to its corresponding input image to segment the area of interest in the image, i.e. the lung parenchyma in our data set.
To train our network, we use the Lung Image Database Consortium image collection (LIDC) of lung CT scans [4]. It consists of diagnostic and lung cancer screening thoracic CT scans with marked-up annotations of lesions. It is a web-accessible resource made available for the development of CAD systems for lung cancer diagnosis.
The data set only contains CT scans of the whole lung area; thus, data preparation was needed to obtain a data set suitable for our network, composed of images and their corresponding output masks. The ground truth of the lung area was provided manually. We performed a necessary pre-processing step that consists of cropping the images to remove any irrelevant information that does not belong to the area of study.
Results and discussion
We performed our experiments in an environment composed of Keras and an NVIDIA GTX 1050. Keras is a high-level API made especially for neural networks; it is written in Python and runs on top of either TensorFlow or Theano, with the possibility to run on CPU or GPU. Our experiments were therefore run with a TensorFlow backend in a GPU environment with an NVIDIA GTX 1050 graphics processor equipped with 640 CUDA cores and 4 GB of memory, in order to take full advantage of its computational speed. We set the network parameters to:
• Batch size: 32.
• Number of epochs: 50.
For evaluation purposes, we use the Dice coefficient score (DSC) as a similarity metric, considering that it is one of the most used similarity measurements in image segmentation; it is calculated as DSC = 2|S ∩ T| / (|S| + |T|), where S stands for the lung parenchyma area obtained using our network and T represents the ground truth provided by manual segmentation. Using our network, we reached an average DSC of 0.9502. Fig. 2.4 presents segmentation results obtained with our network. The results are presented in five columns: the first column from the left shows the original input image; the second column shows the ground-truth manual segmentation masks of the lung parenchyma; the next column presents the segmentation maps generated by our network; the following column shows the desired segmentation obtained with the manually segmented masks; and the last column shows the segmentation obtained by applying the maps generated by our network to the input lung image.
Fig. 2.4. Experimentation Results
We can see in the 5th column that the lung segmentation obtained using the U-Net architecture does not contain any parts of the trachea or bronchus regions, and that it does not eliminate lesions such as lung nodules and blood vessels that are important for the subsequent diagnosis tasks; this result shows the accuracy of our network.
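For reference, the DSC reported above can be computed directly from a pair of binary masks; a minimal NumPy sketch is given below (the function name is ours).

```python
import numpy as np

def dice_coefficient(segmentation, ground_truth):
    """DSC = 2|S ∩ T| / (|S| + |T|) for binary masks S (prediction) and T (ground truth)."""
    s = segmentation.astype(bool)
    t = ground_truth.astype(bool)
    intersection = np.logical_and(s, t).sum()
    return 2.0 * intersection / (s.sum() + t.sum())
```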
Conclusion
In this chapter, we provided a study of lung parenchyma segmentation using one of the most efficient deep learning segmentation architectures, called U-Net. The efficiency of this network resides in the good quality of the segmentation result obtained with only a few hundred images in the data set. Using the U-Net architecture we obtained a Dice coefficient score of 0.9502, which is a high score for this kind of task. The advantage of this approach is that it is uniform and can be applied to a wide range of different medical image segmentation tasks. This work is an introduction to how deep learning architectures can contribute to the medical image segmentation domain and a first step towards proposing new deep learning architectures tailored for medical image segmentation. In the next chapter, we propose a novel architecture based on deep learning to perform brain tumor segmentation.
Chapter 3
Multi-Scale ConvLSTM Attention-based Brain Tumor Segmentation
Overview
In the previous chapter, we performed lung CT image segmentation using a deep learning architecture known to work efficiently with small amounts of data. In this chapter, we introduce a novel deep learning architecture for medical image segmentation and compare its performance with the architecture used in the previous chapter as well as with other related methods. Our goal is to show that our novel method, which adopts robust feature extraction modules, surpasses the methods used in the previous chapter, which rely on conventional feature extraction.
Nowadays, the use of CNNs is widespread across industries and businesses. In healthcare, CNNs achieve very promising results due to their robust feature extraction capabilities. For example, in medical image segmentation, they have achieved state-of-the-art performance [START_REF] Minaee | Image segmentation using deep learning: A survey[END_REF] with a significant margin compared to conventional machine learning models, which makes them the most popular choice in different medical imaging fields. They also dominate the health informatics literature on brain [START_REF] Myronenko | 3d mri brain tumor segmentation using autoencoder regularization[END_REF], lung nodule [START_REF] Hassani | Efficient lung nodule classification method using convolutional neural network and discrete cosine transform[END_REF], spleen [121], and cardiac [START_REF] Galea | Region-of-interest-based cardiac image segmentation with deep learning[END_REF] medical imaging issues, to mention a few.
In this chapter, we perform brain tumor semantic segmentation using a novel deep learning architecture. Brain tumors are considered among the deadliest cancers in the world. There are various brain tumor types, but gliomas are the most common among adults. Gliomas can present different degrees of aggressiveness, and the average survival time of patients diagnosed with glioma is less than 14 months [START_REF] Walid | Prognostic factors for long-term survival after glioblastoma[END_REF]. Time is therefore a critical factor for doctors dealing with gliomas. To diagnose a brain tumor, different types of medical image acquisition are involved, such as MRI, CT scans and X-ray, each having its pros and cons. For example, CT scans have the advantage of fast tissue acquisition at the cost of lower tissue contrast and higher radiation risk. MRIs, on the other hand, are slow compared to CT scans but are best suited for capturing abnormal tissues in more detail thanks to their accuracy in acquiring different types of contrast. After acquisition of the brain region, radiologists perform a manual segmentation of brain tumors from the MRI images, which is time-consuming; designing an automatic brain tumor segmentation is therefore highly desirable. Our multi-scale architecture, which is composed of multiple stages, generates multiple versions of the same image at different resolutions, each containing diverse semantics. The first, low-level stage models the spatially sequential relationship between different parts of each MRI modality (FLAIR, T1w, T1gd, T2w) 1 , while the next stage extracts local features and decreases the size of the images for computational optimization. Finally, the third, high-level stage captures the global representations. Thereafter, at each level, we introduce a stack of attention modules to gradually emphasize the regions that contain a large number of semantic features.
The integration of attention mechanisms in the segmentation of natural scene images has been widely adopted [START_REF] Li | Pyramid attention network for semantic segmentation[END_REF][START_REF] Fu | Dual attention network for scene segmentation[END_REF][START_REF] Chen | Attention to scale: Scale-aware semantic image segmentation[END_REF][START_REF] Zhao | Psanet: Point-wise spatial attention network for scene parsing[END_REF]. In medical imaging, however, the inclusion of attention mechanisms is rare [START_REF] Wang | Deep attentional features for prostate segmentation in ultrasound[END_REF][START_REF] Li | Attention based hierarchical aggregation network for 3d left atrial segmentation[END_REF][START_REF] Schlemper | Attention gated networks: Learning to leverage salient regions in medical images[END_REF][START_REF] Nie | Asdnet: attention based semisupervised deep networks for medical image segmentation[END_REF]. For this reason, we investigate the impact of a simple attention module in boosting the performance of standard deep networks for brain tumor semantic segmentation. Experimental results show that our proposed method improves the segmentation performance by modeling a combination of rich contextual features with local features.
The results presented later in this work show that our model performs semantic brain tumor segmentation effectively compared to the standard U-Net, Attention U-Net and Fully Convolutional Network (FCN), reaching a Dice score of 79.78 with our model compared to 78.61, 73.65 and 72.89 with Att-UNet, UNet and FCN respectively.
The remainder of this chapter is organized as follows. The next section presents related work. In the section that follows, we introduce our proposed method in detail. Thereafter, we present and discuss the obtained results. Finally, we conclude our work in the last section.
Related work
Most of the state-of-the-art deep learning architectures used for automatic medical image segmentation are inspired by Fully Convolutional Networks (FCN) [START_REF] Long | Fully convolutional networks for semantic segmentation[END_REF] or U-Net [START_REF] Ronneberger | U-net: Convolutional networks for biomedical image segmentation[END_REF]. Many variants of these architectures have been proposed to perform semantic segmentation in different application domains [START_REF] Dolz | Multiregion segmentation of bladder cancer structures in mri with progressive dilated convolutional networks[END_REF][START_REF] Li | H-denseunet: hybrid densely connected unet for liver and tumor segmentation from ct volumes[END_REF][START_REF] Heinrich | Obelisk-net: Fewer layers to solve 3d multi-organ segmentation with sparse deformable convolutions[END_REF].
FCN is an architecture in which fully connected layers are replaced by deconvolution layers to generate segmentation masks [START_REF] Long | Fully convolutional networks for semantic segmentation[END_REF]. Jesson et al. [START_REF] Jesson | Brain tumor segmentation using a 3d fcn with multiscale loss[END_REF] proposed a variant of the standard FCN with a multi-scale loss function. With this approach it is possible to model the context in both the input and output domains. A limitation of this approach is that FCN is not able to explicitly model the context in the label domain. Compared to U-Net, FCN does not use skip connections between the contracting path (i.e. the feature extraction path) and the expanding path (i.e. the data reconstruction path).
The U-Net architecture was introduced by Ronneberger et al. in 2015 [START_REF] Ronneberger | U-net: Convolutional networks for biomedical image segmentation[END_REF]. It overcomes the limitations of FCN by including features from the contracting path. In order to obtain the missing feature contexts, multi-scale features are concatenated in a mirroring way. Many works have adopted this architecture to perform medical image segmentation on different parts of the human body. In a previous work of ours [START_REF] Skourt | Lung ct image segmentation using deep neural networks[END_REF], we also adopted the U-Net architecture to perform lung CT image segmentation.
A limitation of both FCN and U-Net is that they do not perform very well in multi-class segmentation tasks [START_REF] Jesson | Brain tumor segmentation using a 3d fcn with multiscale loss[END_REF]. To overcome this issue, cascaded architectures can be used. They have the beneficial effect of decomposing a multi-class segmentation problem into multiple binary segmentation problems. This approach is used in various medical image segmentation works. For example, Chen et al. [START_REF] Chen | Focus, segment and erase: an efficient network for multi-label brain tumor segmentation[END_REF] adopted a cascaded classifier to perform multi-class segmentation, and in [START_REF] Liu | A cascaded deep convolutional neural network for joint segmentation and genotype prediction of brainstem gliomas[END_REF] the authors proposed a cascaded architecture to merge different feature extraction methods. Nonetheless, these models still focus on pixel-level classification while ignoring the connections between adjacent pixels. To overcome this issue, generative models were adopted. A widely used variant of generative models is the Generative Adversarial Network (GAN) [START_REF] Goodfellow | Generative adversarial networks[END_REF]. GANs are employed for semantic segmentation in the following way: a convolutional semantic segmentation network is trained along with an adversarial network that discriminates segmentation maps [START_REF] Souly | Semi supervised semantic segmentation using generative adversarial network[END_REF]. That is, two models are trained: the first captures the data distribution, while the second is used for discrimination.
To capture sequence patterns in medical imaging, Recurrent Neural Networks (RNNs) are typically used, as they are well suited for handling sequential data. In medical image segmentation specifically, RNNs are used to keep track of features in previous image slices in order to better generate the corresponding segmentation maps. Various RNN architectures are mentioned in the literature, among which Gated Recurrent Units (GRU) [START_REF] Cho | On the properties of neural machine translation: Encoder-decoder approaches[END_REF] and Long Short-Term Memory (LSTM) [START_REF] Hochreiter | Long short-term memory[END_REF] are likely the most robust and widely used. GRU is memory efficient but not very suitable for keeping track of long-term features; LSTM is better adapted to such tasks thanks to the forget gate that preserves features from previous sequences for use in upcoming sequences. [START_REF] Andermatt | Automated segmentation of multiple sclerosis lesions using multi-dimensional gated recurrent units[END_REF][START_REF] Le | Deep recurrent level set for segmenting brain tumors[END_REF][START_REF] Zhao | A deep learning model integrating fcnns and crfs for brain tumor segmentation[END_REF] are some examples of employing RNNs to perform image segmentation for sclerosis lesions and brain tumors respectively.
In the last few years, a new concept called the attention mechanism was introduced into computer vision tasks. The attention mechanism was first introduced in neural machine translation [START_REF] Bahdanau | Neural machine translation by jointly learning to align and translate[END_REF] to help remember long-range context from long source sentences. The added value brought by attention modules is the creation of shortcuts between the input sentence and the context vector. Attention in deep learning can be interpreted as a vector of weights that represents the importance of an element within a context. The attention vector is used to estimate how strongly an element is related to other elements (elements in this context are image pixels), and the sum of the elements' values weighted by the attention vector is taken as the approximation of the target context.
The success of the attention mechanism for neural machine translation quickly encouraged its application to computer vision [START_REF] Xu | Show, attend and tell: Neural image caption generation with visual attention[END_REF]. In medical image segmentation, the attention mechanism was adopted in many works and various variants of attention modules have been introduced.
In [122], the authors propose a combination of FCN with a Squeeze-and-Excitation (SE) attention-based module to perform whole-brain and whole-body segmentation. They integrate the SE block in three ways: channel SE (cSE), spatial SE (sSE) and concurrent spatial-channel SE (csSE). In [START_REF] Wang | Deep attentional features for prostate segmentation in ultrasound[END_REF], Wang et al. perform prostate segmentation in ultrasound images using deep attentional features. They use an attention module to extract refined features at each layer, eliminate non-prostate noise and focus on more prostate details in deep layers. Furthermore, Li et al. propose an auto-encoder CNN-based architecture called hierarchical aggregation network (HAANet) [START_REF] Li | Attention based hierarchical aggregation network for 3d left atrial segmentation[END_REF], which combines the attention mechanism and hierarchical aggregation to perform 3D left atrial segmentation. In another work, Oktay et al. propose an attention U-Net [START_REF] Oktay | Attention u-net: Learning where to look for the pancreas[END_REF] that extends the U-Net architecture by incorporating an attention gate in the expanding path in order to accurately segment the pancreas area.
Method
In this section, we describe our proposed architecture for brain tumor segmentation. Our method combines different techniques in order to extract relevant features and keep track of them during the entire process of segmentation.
We combine Inception, ResNet and Squeeze-Excitation blocks in one part of our architecture for relevant feature extraction, and attention modules in another part, to perform brain tumor segmentation. The combination of Inception, ResNet and Squeeze-Excitation is known as one of the most successful architectures in the ImageNet challenge: with this combination, the team Trimps-Soushen achieved a 2.99% error rate in object classification in the ImageNet challenge 2 .
We first feed our network the different modalities of brain MRI images (FLAIR, T1w, T1gd, T2w) to include various intensities and better perform the semantic segmentation. Each modality is split into four patches, and for each modality three scales of feature extraction are performed. The motivation behind this multi-scale mixture is to best separate each tumor label (enhancing tumor, tumor core, whole tumor and background).
At the first scale, a ConvLSTM is used over each of the four patches to preserve the correlation among features; ConvLSTMs are well suited to capturing spatiotemporal information without much redundancy [157]. At the second scale, an SE-Inception module [START_REF] He | Deep residual learning for image recognition[END_REF] is used over the output of the first scale to extract low-level features and decrease the computation cost. Fig. 3.1 shows the Inception module [START_REF] Szegedy | Going deeper with convolutions[END_REF] and Fig. 3.2 shows the SE-Inception block.
At the third scale, we extract high-level features by integrating an SE-ResNet module [START_REF] Jie | Squeeze-and-excitation networks[END_REF]. The use of such a block increases the computational complexity by a thin margin but in exchange increases the accuracy [START_REF] Jie | Squeeze-and-excitation networks[END_REF]. The ResNet block [START_REF] He | Deep residual learning for image recognition[END_REF] and SE-ResNet block are described in Fig. 3.3 and Fig. 3.4 respectively.
At each scale (the different scales are highlighted in green in Fig. 3.6), we combine the four outputs to form what we call single-scale features, as shown in Fig. 3.6. These three single-scale features are then concatenated and convolved to form multi-scale features, as illustrated in the same figure. We then take the multi-scale features and combine them with each single-scale feature. At this stage, our model holds general context feature maps that contain different levels of features, from low- to high-level. Thereafter, we add a convolution layer to refine these features. Furthermore, in order to explore more global contextual characteristics by building connections among features, we include an attention mechanism in the form of a location-based attention module, which we call the Spatial Attention Module (SAM). The attention mechanism is presented in Fig. 3.5.
In Fig. 3.5, we assume the input to the SAM module is V, with a 3D shape (W, H, C), where W, H and C represent the width, height and depth respectively. In the red branch, we perform a convolution operation, resulting in a feature map B0 with the same width and height but with depth equal to C/8; B0 is then reshaped so that its spatial dimensions are flattened. The same operations are applied in the blue branch to obtain B1. Thereafter, we perform a matrix multiplication and apply a softmax operation to calculate the spatial attention map following the formula in (3.1), where S_{i,j} represents the impact of the pixel in the i-th position on the pixel in the j-th position.
S_{i,j} = \frac{\exp(B_0 \cdot B_1)}{\sum_{i=1}^{W \times H} \exp(B_0 \cdot B_1)}    (3.1)
The yellow branch performs a convolution and produces B2 with the same shape as V. B2 is then reshaped to (C, W × H) and multiplied by the transpose of the spatial attention map S. The result R is reshaped back to the shape of V, multiplied by a parameter λ, and summed element-wise with the input V to obtain the output O, as expressed in (3.2): O = λ R + V.
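A minimal TensorFlow/Keras sketch of such a location-based attention module is given below. It follows the description above (three 1 × 1 convolution branches, a softmax spatial attention map, and a learned scaling parameter playing the role of λ); the layer name, the zero initialisation of λ and the channels-last layout are our own assumptions, not the exact implementation used in our experiments.

```python
import tensorflow as tf

class SpatialAttentionModule(tf.keras.layers.Layer):
    """Sketch of the location-based attention (SAM) block described above."""
    def __init__(self, channels, **kwargs):
        super().__init__(**kwargs)
        self.channels = channels
        self.conv_b0 = tf.keras.layers.Conv2D(channels // 8, 1)   # red branch
        self.conv_b1 = tf.keras.layers.Conv2D(channels // 8, 1)   # blue branch
        self.conv_b2 = tf.keras.layers.Conv2D(channels, 1)        # yellow branch

    def build(self, input_shape):
        # lambda in Eq. (3.2); learned scalar, initialised to zero here (assumption)
        self.gamma = self.add_weight(name="gamma", shape=(),
                                     initializer="zeros", trainable=True)
        super().build(input_shape)

    def call(self, v):
        shape = tf.shape(v)
        h, w = shape[1], shape[2]
        # flatten the spatial dimensions of each branch
        b0 = tf.reshape(self.conv_b0(v), (-1, h * w, self.channels // 8))
        b1 = tf.reshape(self.conv_b1(v), (-1, h * w, self.channels // 8))
        # spatial attention map S (Eq. 3.1): pairwise position similarities
        s = tf.nn.softmax(tf.matmul(b0, b1, transpose_b=True), axis=-1)
        b2 = tf.reshape(self.conv_b2(v), (-1, h * w, self.channels))
        r = tf.matmul(s, b2)                  # attention-weighted features
        r = tf.reshape(r, tf.shape(v))
        return self.gamma * r + v             # Eq. (3.2): O = lambda*R + V

# usage: refined = SpatialAttentionModule(channels=128)(feature_maps)
```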
Experiments and results
To evaluate our architecture, we use the BRATS'18 data set for brain tumor segmentation, provided in the Medical Segmentation Decathlon Challenge 3 . This data set contains multi-modal MRI data (FLAIR, T1w, T1gd, T2w) 4 , with 210 High Grade Glioma (HGG) scans and 75 Low Grade Glioma (LGG) scans. In this data set, the focus is mainly on the segmentation of the different sub-regions of the glioma: the enhancing tumor (ET), the tumor core (TC) and the whole tumor (WT), as can be seen in Fig. 3.7. Each of these sub-regions has specific intensity characteristics, and different modalities are responsible for capturing different characteristics. For example, the ET is described by areas that are hyper-intense in T1gd. The non-enhancing tumor (NET) (solid parts) and the necrotic core (NCR) (fluid-filled) appear as areas that are hypo-intense in T1gd when compared to T1. The WT describes the whole disease: it contains the TC and the peritumoral edema (ED), which is characterized by hyper-intensity in the FLAIR modality. The labels provided in this data set are as follows: 1 for NCR and NET, 2 for ED, 3 for ET and 0 for the other parts of the brain. The annotations were created by domain experts and approved by other domain experts as described in [START_REF] Bakas | Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge[END_REF]. Given the presence of different glioma-related features in different modalities, we feed the four modalities as input to our architecture and obtain the corresponding semantic segmentation. The loss function we use is the Dice loss, optimized with the Adam optimizer [START_REF] Kingma | Adam: A method for stochastic optimization[END_REF]. The learning rate is initially set to 0.001 and multiplied by 0.5 every 30 epochs, and we train the network for 500 epochs. Due to limitations in computational resources, we reduced the input size to 190 × 190 by cropping part of the background area, and we only kept the 30th to the 120th slices, given that most of the brain information is present in that interval. Furthermore, we normalize the inputs to have zero mean and unit standard deviation. In addition, given that each session of the notebook used for training has a 12-hour lifetime, we use the following strategy to train our network: we save the model and its weights every 50 epochs, then reload it and continue training with new data. For development, we shuffled and randomly split the images into training (225 patients), validation (30 patients), and test (30 patients) sets. Experiments were performed on a server equipped with a single 12 GB NVIDIA Tesla K80 GPU.
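As an illustration of this training setup, the following Keras sketch shows a soft Dice loss together with a learning-rate schedule that starts at 0.001 and halves the rate every 30 epochs; the function names and the smoothing constant are illustrative, not the exact code used in our experiments.

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    """Soft Dice loss: 1 - 2|A.B| / (|A| + |B|), with a small eps for stability."""
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0 * intersection + eps) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

def halve_every_30(epoch, lr):
    # start from the compiled learning rate (1e-3) and multiply by 0.5 every 30 epochs
    return lr * 0.5 if epoch > 0 and epoch % 30 == 0 else lr

# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss=dice_loss)
# model.fit(x_train, y_train, epochs=500,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(halve_every_30)])
```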
We compare our method with the standard U-Net [START_REF] Ronneberger | U-net: Convolutional networks for biomedical image segmentation[END_REF], standard FCN [START_REF] Long | Fully convolutional networks for semantic segmentation[END_REF] and Attention U-Net [START_REF] Oktay | Attention u-net: Learning where to look for the pancreas[END_REF] architectures, and we evaluate their performance using the Dice coefficient (DSC) as a comparison metric. Table 3.1 contains the experimental results obtained using the different segmentation methods described above, compared by their DSC score. Our proposed architecture achieved the best scores, with 0.649, 0.881 and 0.865 for ET, WT and TC respectively, and a mean score of 0.798. It can be observed that both our method and AttUNet, which also includes attention modules, perform better than the methods without attention modules. This indicates that adding attention modules enhances the segmentation procedure by putting more attention on the tumor location. Oktay et al. reported the same observation in their work [START_REF] Oktay | Attention u-net: Learning where to look for the pancreas[END_REF].
Our architecture outperforms AttUNet by a significant margin. This is mainly due to the focus on location attention modules, along with the use of powerful feature extraction modules (ConvLSTM, SE-Inception and SE-ResNet) in the first part of the architecture, which helps eliminate irrelevant features. Our proposed architecture can implicitly be considered a cascaded architecture, even though we do not explicitly use multiple cascaded architectures.
Fig. 3.7 displays a sample of the input MRI images, the ground truth and the segmentation results obtained with our proposed architecture. As seen in Table 3.1, the ET segmentation has the smallest DSC value. This can also be seen in Fig. 3.7, where the ET region is not well detected, especially in the first and third rows.
It has to be mentioned that our method is slightly slower than the other methods, which is expected given that more complex building blocks are used in order to ensure a better segmentation result.
Conclusion
In this chapter, we proposed a novel deep learning architecture for brain tumor segmentation, which we call the multi-scale ConvLSTM Attention Neural Network, and we compared its performance to various deep learning architectures tailored to this kind of task. Our proposed method is built as a multi-scale architecture composed of different state-of-the-art feature extraction blocks such as Inception, Squeeze-Excitation, Residual Network and ConvLSTM blocks, and finally attention units. We compared the performance of our architecture to the standard U-Net, AttU-Net and FCN, which have shown effective results in semantic segmentation. Experimental results show that our proposed model outperforms the standard U-Net, AttU-Net and FCN in terms of Dice score: our model reached a mean Dice score of 0.797 for the three parts of the brain tumor, while Attention U-Net, standard U-Net and FCN reached 0.786, 0.736 and 0.728 respectively. We observe that both our method and AttU-Net perform better than the others, which can be explained by the integration of attention modules that enhance the segmentation procedure. Besides, our method outperforms AttU-Net, and this is due to the use of ConvLSTM, SE-Inception and SE-ResNet blocks.
In the previous part, we investigated the role of feature extraction methods in reaching high accuracy in medical image segmentation. In this part, we explore the same role but for medical image classification.
In computer vision, image classification is one of the key components for a better understanding of image content. There are numerous image processing methods in the literature that serve this purpose very well.
CNNs have proven to be very powerful when it comes to image classification. This comes down to the way CNNs are built, as mentioned in the state-of-the-art chapter 1.3. The power of CNNs resides in the feature extraction phase, which is why most of the state-of-the-art architectures achieved their best performances by contributing to the feature extraction part of a CNN. Indeed, starting from 2012, CNNs saw the rise of their era with the introduction of a novel architecture named AlexNet [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF]. This architecture, proposed by Krizhevsky et al., has been successfully applied to various application domains. It consists of five convolution layers, each followed by a max-pooling layer, and then three fully connected layers. The success keys presented by AlexNet's authors are the use of ReLU, multiple GPUs and overlapping pooling. At the time, it was common to use the hyperbolic tangent (tanh) as activation function in ANNs; in AlexNet, however, the authors replaced tanh with ReLU and obtained very good results. Besides, the use of overlapping pooling layers reduced the error rate by 0.5% and made the network less prone to overfitting, as claimed by the authors. AlexNet had to be trained on multiple parallel GPUs due to limitations in computational power.
Another CNN architecture called ZFNet [167] was proposed in 2013 by Zeiler and Fergus; it is a variation of AlexNet. With their architecture, the authors won first place in the ImageNet challenge 2013 by reaching a 14.8% error rate. Their secret ingredient was an advanced visualization technique proposed by Zeiler in 2011 [START_REF] Zeiler | Deconvolutional networks[END_REF], which consists of visualizing the intermediate layers of AlexNet and enables a better understanding of its mechanism with regard to the choice of different parameters. With this technique, Zeiler and Fergus managed to build ZFNet with the same number of layers as AlexNet but with much better parameters. For example, they used 7 × 7 convolution filters with a stride of 2 in the convolutional layers instead of 11 × 11 filters, which proved to preserve much more pixel information. Secondly, they replaced the sparse connections used in some layers of AlexNet, due to the split of the training across two GPUs, by dense connections in ZFNet. Finally, they adopted transfer learning as a supervised pre-training strategy to boost the training of the network.
After ZFNet, Google's research team proposed a new CNN architecture named GoogLeNet [START_REF] Szegedy | Going deeper with convolutions[END_REF], popularly known as Inception. The Inception network outperformed all the state-of-the-art models in image classification and detection in the ImageNet challenge 2014 by reaching a significantly low error rate of 6.67%. Compared to previous networks, Inception showed remarkable accuracy in image classification, mainly due to the way the architecture is built: the authors introduced a new convolutional block, called the inception module, whose parallel branches expand the architecture depth-wise and width-wise without compromising the computational cost and with a clear step up in the network's accuracy. The inception module comes in two versions. The first is a naïve inception module consisting of a combination of 1 × 1, 3 × 3 and 5 × 5 convolution layers alongside pooling layers; however, this version encounters computational cost issues and generates a large number of outputs, which inevitably increases the computational complexity. Hence the use of 1 × 1 convolution layers within the naïve inception module to decrease the dimensionality depth-wise. This strategy helped build a very deep architecture without worrying about the computational cost, and showed very good memory efficiency during training [START_REF] Szegedy | Going deeper with convolutions[END_REF]. Many variants of the GoogLeNet architecture were released later, such as Inception V2 and V3 [START_REF] Szegedy | Rethinking the inception architecture for computer vision[END_REF], Inception V4 [START_REF] Szegedy | Rethinking the inception architecture for computer vision[END_REF] and Inception-ResNet [START_REF] Szegedy | Rethinking the inception architecture for computer vision[END_REF]. Each of them enhanced the first version by including convolutional or pooling blocks in an efficient way to decrease the computational cost and increase the accuracy, and they showed better accuracy results than the original architecture.
The same year as the Inception network was released, another CNN architecture drew attention: VGG [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF]. It is a very popular CNN architecture with a remarkable reputation within the deep learning family. The VGG architecture is a variant of AlexNet and ZFNet: instead of using 7 × 7 or 11 × 11 convolutional filters, the authors decided to use consecutive 3 × 3 convolutional filters. As stated in [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF], two consecutive 3 × 3 convolutional filters yield an effective receptive field equivalent to a 5 × 5 filter, and three consecutive 3 × 3 filters give the receptive field of a 7 × 7 filter. The idea behind using consecutive 3 × 3 convolutional filters is to keep the same receptive field while using far fewer hyperparameters. Furthermore, experiments showed that the decision function becomes more discriminative, due to the larger number of non-linear rectification layers. The VGG architecture contains two different kinds of building blocks: blocks with two consecutive 3 × 3 convolutional filters and blocks with three consecutive 3 × 3 convolutional layers, each followed by a max-pooling layer. The overall architecture contains two blocks of the first kind followed by three blocks of the second kind, and finally three fully connected layers added for discriminative purposes. The VGG architecture has been adopted in many image processing works, for image classification and image segmentation, as mentioned in the general introduction of each chapter of part II.
The VGG network was not the last contribution to the ImageNet challenge. ResNet [START_REF] He | Deep residual learning for image recognition[END_REF] is another CNN-based architecture known for its robust accuracy in image classification, object detection and image segmentation. It was introduced in the ImageNet challenge in 2015 and lowered GoogLeNet's error rate by 3.07%. Its contribution resides in a new building block for the CNN family called the residual learning block, which consists mostly of 3 × 3 convolutional layers and pooling layers with a stride of two. At the end of the network, a global average pooling is adopted, followed by a fully connected layer with Softmax as discriminative function. The residual learning module can be considered a feed-forward neural network with shortcut connections that allow skipping one or more layers in the network. Despite the high number of layers used in the network, the ResNet architecture adds no extra parameters and no computational complexity, because the skip connections between layers perform an element-wise addition of the outputs of the identity mapping and the outputs of the stacked layers. Thanks to this, ResNet can achieve low error rates and high accuracy.
Nonetheless, the ResNet team was not the one that reached the lowest error rate in the ImageNet challenge. In 2017, Jie et al. proposed an architecture called Squeeze-and-Excitation [START_REF] Jie | Squeeze-and-excitation networks[END_REF] and achieved a 2.25% error rate. This architecture is composed of multiple stacked squeeze-excitation (SE) blocks. The SE block contains a global average pooling operation instead of max pooling, due to its superiority in keeping global contextual features. SE blocks are the building blocks of this architecture; they are used to recalibrate features by aggregating feature maps across their spatial dimensions. This step, called squeeze, generates an embedding of the global distribution of the channel features. It is followed by an excitation block that produces per-channel modulation weights. The contribution of this architecture resides in allowing global information to flow through the network layers and in selectively generating informative features while suppressing useless ones.
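A minimal Keras sketch of the squeeze-and-excitation operation described above is given below: global average pooling performs the squeeze, two dense layers perform the excitation, and the resulting per-channel weights recalibrate the input feature maps. The reduction ratio and the function name are illustrative choices.

```python
from tensorflow.keras import layers

def squeeze_excitation(x, ratio=16):
    """Sketch of an SE block applied to a channels-last feature map x."""
    channels = x.shape[-1]
    # squeeze: aggregate each feature map into one channel descriptor
    s = layers.GlobalAveragePooling2D()(x)
    # excitation: two fully connected layers producing per-channel weights in (0, 1)
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, 1, channels))(s)
    # recalibrate the input feature maps channel by channel
    return layers.Multiply()([x, s])
```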
Overview
Feature extraction methods were one of the success keys that made a breakthrough in the deep learning era, by providing relevant features for training. To achieve significant accuracy in visual recognition systems, it is mandatory to adopt an effective feature extraction method. Feature extraction is the first prerequisite step of an efficient image recognition system, due to its ability to represent data in a simplified and summarized form, which positively affects all the following steps. In this work, we focus on adopting CNNs in the medical field, specifically in lung nodule detection. Therefore, in this chapter we present an in-depth comparative study of four different feature extraction techniques adopted to analyze a lung nodule image data set, and we report their performance within a deep learning scenario.
Experimental results show that feature extraction using convolutional neural networks reaches the best results among the other methods, including restricted Boltzmann machines.
The rest of this chapter is organized as follows: the next section introduces the feature extraction methods used in this research; the section that follows presents and discusses the experimental results of our study; finally, we give concluding remarks.
Methods
Feature extraction methods for image processing are usually divided into two categories. Global feature extraction methods consist of extracting features that characterize the image at hand as a whole. These kinds of features are usually used for object detection or classification. Several feature extraction methods are considered global, among which we mention Principal Component Analysis (PCA) [START_REF] Hotelling | Analysis of a complex of statistical variables into principal components[END_REF], the Fourier transform [START_REF] Helson | Prediction theory and fourier series in several variables[END_REF] and RBMs [START_REF] Rumelhart | Parallel distributed processing: Exploration in the microstructure of cognition[END_REF].
Local feature extraction methods consist of selecting features that characterize image patches computed at multiple points in an image. Local feature methods are commonly adopted in object recognition. Some examples of local feature methods are the scale-invariant feature transform SIFT [START_REF] Lowe | Object recognition from local scale-invariant features[END_REF], adopted in [START_REF] Fei | Feature extraction methods for palmprint recognition: A survey and evaluation[END_REF] for palm print recognition, and Gabor wavelets [START_REF] Lee | Image representation using 2d gabor wavelets[END_REF], used in [START_REF] Shen | A review on gabor wavelets for face recognition[END_REF] to perform face recognition.
Our work relies on global feature extraction methods due to the nature of the images used, which serve lung nodule detection. Thus, in the upcoming sections, we give a full description of the global feature extraction methods used in our work.
Restricted Boltzmann Machines
RBMs are one of the key components that started the deep learning era; they are the building block of Deep Belief Networks (DBN), which adopted the then-new greedy layer-wise training strategy, as stated in earlier sections. RBMs are based on Boltzmann Machines (BMs), which were introduced by Hinton and Sejnowski in 1986 [START_REF] Hinton | Learning and relearning in boltzmann machines[END_REF]. BMs can be seen as neural networks with stochastic, bidirectionally connected units. A BM is an energy-based model trained in an unsupervised way: the joint probability distribution in (4.1) is defined by the model variables through an energy function.
P(x) = \frac{e^{-E(x)}}{Z}    (4.1)
E(x) is the energy function:
E(x) = -x^T W x - b^T x    (4.2)
where W denotes the matrix of weights of the model, b represents the offsets for each x, and Z is the partition function that ensures that the probabilities P(x) sum to 1.
BMs take as input a set of unlabeled binary vectors. Their training consists of repeatedly updating the weights between the units until a state of equilibrium is reached.
RBMs were first proposed as a concept by Smolensky [START_REF] Smolensky | Information processing in dynamical systems: Foundations of harmony theory[END_REF] in 1986. RBMs are simply BMs constituted of only two layers (hidden and visible), with the restriction that no connections are allowed between units of the same layer. This restriction is the key that allows RBMs to quickly reach an equilibrium state in the hidden layer given a set of visible units (for example, when using RBMs for image reconstruction, the state of equilibrium is reached when the image is optimally reconstructed).
To train an RBM, Gibbs sampling is adopted [START_REF] Smolensky | Information processing in dynamical systems: Foundations of harmony theory[END_REF]. Gibbs sampling is a simple Markov Chain Monte Carlo (MCMC) algorithm that produces samples from the joint probability distribution of multiple random variables. In other words, by performing Gibbs sampling starting from a random state in one layer, we can generate relevant features from an RBM. Gibbs sampling relies on two steps:
• Sample h(t) ∼ P(h|v(t)), sample all the elements of h(t) given v(t) .
• Sample v(t+1) ∼ P(v|h(t)), sample all the elements of v(t+1) given h(t).
where h(t) denotes the hidden units and v(t) the visible units. The steps above are performed repeatedly until the equilibrium state is reached, in other words until convergence.
Note that two approximation strategies are commonly used to make the training process more effective: contrastive divergence (CD) [START_REF] Hinton | Training products of experts by minimizing contrastive divergence[END_REF] and stochastic maximum likelihood (SML) [START_REF] Goodfellow | Deep learning[END_REF].
The CD algorithm is powerful for modeling high-dimensional data and is therefore widely used in shallow models. The CD algorithm takes as input the visible units, the learning rate and the number of Gibbs steps; its outputs are the RBM weight matrix and the gradient approximations, noted ∆w_ij, ∆a_i and ∆b_j. The algorithm consists of randomly initializing the weights with positive values between 0 and 1, then performing Gibbs sampling steps between the hidden and visible units, and finally computing the gradient approximations.
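A minimal NumPy sketch of one CD-k update for a binary RBM is given below, following the steps described above (positive phase, k Gibbs steps, gradient approximations ∆w, ∆a and ∆b). The function names, the sampling details and the default learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contrastive_divergence_step(v0, W, a, b, lr=0.01, k=1, rng=np.random):
    """One CD-k update for a binary RBM.
    v0: (batch, n_visible) binary data, W: (n_visible, n_hidden),
    a: visible biases, b: hidden biases."""
    # positive phase: hidden probabilities and samples given the data
    ph0 = sigmoid(v0 @ W + b)
    h = (rng.random_sample(ph0.shape) < ph0).astype(v0.dtype)
    # negative phase: k steps of Gibbs sampling between visible and hidden units
    for _ in range(k):
        pv = sigmoid(h @ W.T + a)
        v = (rng.random_sample(pv.shape) < pv).astype(v0.dtype)
        ph = sigmoid(v @ W + b)
        h = (rng.random_sample(ph.shape) < ph).astype(v0.dtype)
    n = v0.shape[0]
    # gradient approximations (delta w, delta a, delta b)
    dW = (v0.T @ ph0 - v.T @ ph) / n
    da = (v0 - v).mean(axis=0)
    db = (ph0 - ph).mean(axis=0)
    return W + lr * dW, a + lr * da, b + lr * db
```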
2D Discrete Fourier Transform
The Fourier transform has shown its importance and effectiveness in different application domains, such as mathematics, engineering and physics. Its discrete equivalent is the DFT, which is computed using the Fast Fourier Transform (FFT) and is usually used in signal processing to obtain, from a spatial representation, its equivalent spectral representation. In this work, we mainly deal with image data, which can be considered a form of 2D signal. These signals are discretely sampled at constant intervals and have finite durations; in this situation, the DFT is convenient since only a finite number of sinusoids is needed.
The 2D-DFT F and its inverse f are mathematically represented by the following formulas:
F(u, v) = \frac{1}{NM} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} f(x, y)\, e^{-j2\pi\left(\frac{ux}{N} + \frac{vy}{M}\right)}    (4.3)

f(x, y) = \sum_{u=0}^{N-1} \sum_{v=0}^{M-1} F(u, v)\, e^{j2\pi\left(\frac{ux}{N} + \frac{vy}{M}\right)}    (4.4)

where 0 ≤ u ≤ M-1, 0 ≤ v ≤ N-1, 0 ≤ x ≤ M-1 and 0 ≤ y ≤ N-1.
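In practice, these coefficients can be obtained with the FFT; a minimal NumPy sketch of extracting magnitude features from an image is given below (the fftshift and the flattening are our own presentation choices).

```python
import numpy as np

def dft_magnitude_features(image):
    """2D-DFT magnitude spectrum of a 2D image used as a global feature vector."""
    spectrum = np.fft.fft2(image)
    spectrum = np.fft.fftshift(spectrum)   # move the zero-frequency component to the centre
    return np.abs(spectrum).ravel()        # magnitude coefficients, flattened
```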
Principal Component Analysis
PCA is a mathematical procedure that relies on an orthogonal transformation to convert data with possibly correlated variables into data with uncorrelated variables named principal components. The effectiveness of PCA relies on reducing the dimensionality of the high-dimensional data at hand while preserving as much of the variance of its distribution as possible.
If we assume that each data point is represented by a vector x with N dimensions, PCA starts by finding a linear function α1(x) such that the variance of the projection of the data points onto it is maximal. Then, PCA looks for another linear function α2(x) with maximal variance of the projection of the data points onto it, among all directions orthogonal to α1(x).
This process is repeated until an αk with maximal variance is found for some k > 1. To summarize, the main goal of PCA is to reduce as much as possible the dimensionality of a given high-dimensional data set by making most of the variation in the data set accounted for by the first k principal components, thereby reducing the complexity of the transformed variables.
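As a concrete example, the following sketch uses scikit-learn's PCA to project flattened image patches onto their first k principal components; the patch shape and the number of components are illustrative choices.

```python
from sklearn.decomposition import PCA

def pca_features(patches, n_components=64):
    """patches: array of shape (n_samples, 32, 32); returns the first k principal components."""
    x = patches.reshape(len(patches), -1)   # flatten each patch into a vector
    pca = PCA(n_components=n_components)
    return pca.fit_transform(x)
```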
Convolutional Neural Networks
CNNs are among the most used architectures in a variety of machine learning tasks; during the last few years, they have proven to be very effective in face recognition, object detection, text mining, action recognition, and many more. A standard CNN architecture consists of two different types of layers: convolution layers and pooling layers. Note that in this work, the fully connected layers are not included, since the CNN is only used for feature extraction.
Convolution layers: a convolution layer simply consists of an N × N filter applied to the input data to extract feature maps containing high-level features used as input for the next layers. In the literature, this kind of layer is followed by a nonlinear activation function in order to increase the expressiveness of the generated feature maps. Mathematically, a convolution operation can be represented by the following formula:
y_k = f(W_k * x)    (4.5)
where x represents the input image, W_k denotes the convolution filter associated with the k-th feature map, and the convolution operation is represented by the * sign. The nonlinear activation function mentioned above is represented by f(); in our case it is the rectified linear activation (ReLU) function [START_REF] Nair | Rectified linear units improve restricted boltzmann machines[END_REF].
Pooling layers: these layers help control overfitting in a network by progressively reducing the spatial size of the feature maps using maximum or average operations on the data. Pooling layers are usually used after convolution layers to reduce the number of parameters and decrease the computational load of the network. There are various types of pooling layers; max-pooling and average-pooling are the most common in the literature. Max-pooling is represented by the following formula:
y^k_ij = max_{(p,q) ∈ R_ij} x^k_ij   (4.6)

where x^k_ij denotes the element at location (p, q) contained in the pooling region R_ij.
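As a small illustration (our own sketch, independent of any particular framework), the following numpy code implements the 2 × 2 max-pooling of Eq. (4.6) on a single feature map:

import numpy as np

def max_pool2x2(feature_map):
    h, w = feature_map.shape
    cropped = feature_map[:h - h % 2, :w - w % 2]      # crop to an even size
    regions = cropped.reshape(h // 2, 2, w // 2, 2)     # group the 2x2 pooling regions
    return regions.max(axis=(1, 3))                     # maximum of each region

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2x2(fmap))   # each output entry is the max of one 2x2 region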
Experimentation and results
This section describes the experiments conducted to compare the performance of the feature extraction techniques adopted in our comparative study. We use the Lung Image Database Consortium image collection (LIDC-IDRI) [4], which contains diagnostic and lung cancer screening CT scans with annotated lesions. It is a web-accessible resource for the development of CAD methods for lung-cancer segmentation and diagnosis. An advantage of this data set is that it comes with descriptive files giving the location and size of every lung nodule in the CT scans. In our study, we are interested in the lung nodule area more than in other parts of the lung. Hence, we developed an algorithm that extracts lung nodule patches of size 32 × 32 from the CT scans with the help of the annotations provided with the data set. Thus, our data set contains image patches with lung nodules (Fig. 4.1(a)) and image patches without lung nodules (Fig. 4.1(b)). To improve accuracy and increase the volume of our data set, we applied data augmentation methods such as translation, rotation at different angles, flipping, and scaling. In total, our data set contains 8000 image patches, of which 70% are used for training, 20% for validation, and 10% for testing.
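The patch-extraction step can be sketched as follows; the annotation format and the variable names (ct_slice, nodules) are illustrative assumptions, not the exact structure of the LIDC-IDRI files.

import numpy as np

PATCH_SIZE = 32

def extract_patch(ct_slice, center_row, center_col, size=PATCH_SIZE):
    # Crop a size x size window centered on the annotated nodule, clipped to the slice.
    half = size // 2
    r0 = int(np.clip(center_row - half, 0, ct_slice.shape[0] - size))
    c0 = int(np.clip(center_col - half, 0, ct_slice.shape[1] - size))
    return ct_slice[r0:r0 + size, c0:c0 + size]

def build_positive_set(nodules):
    # nodules is assumed to be a list of (slice_2d, row, col) tuples taken from the
    # descriptive files; negative patches are sampled away from the annotated nodules.
    return np.stack([extract_patch(s, r, c) for (s, r, c) in nodules])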
CNNs are known for their ability to retrieve complex feature characteristics, which makes them very appealing for object detection tasks. In this work, we compare a standard CNN built from the components described above to the other feature extraction methods mentioned in the same section. Fig. 4.2 presents the data processing flow used in our experiments. Once the image patches are extracted, data augmentation is applied and the feature extraction phase follows. Each feature extraction method is then followed by a deep neural network (DNN) in order to measure its effectiveness.
The first feature extraction method is a standard CNN. The architecture starts with a convolutional layer right after the input, which produces a set of feature maps using a bank of 32 convolution filters. These feature maps are fed to a ReLU activation function followed by a batch normalization layer [START_REF] Ioffe | Batch normalization: Accelerating deep network training by reducing internal covariate shift[END_REF]. Next, a max-pooling layer with a 2 × 2 window and a stride of 2 pixels is applied to the output. This block is repeated two more times with 64 and 128 convolution filters in the second and third blocks, respectively.
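A hedged Keras sketch of this convolutional feature extractor is given below; the 3 × 3 filter size is an assumption, since only the filter counts are stated above.

from tensorflow.keras import layers, models

def cnn_feature_extractor(input_shape=(32, 32, 1)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for n_filters in (32, 64, 128):
        model.add(layers.Conv2D(n_filters, (3, 3), padding='same'))   # convolution block
        model.add(layers.ReLU())
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2))
    model.add(layers.Flatten())   # feature vector handed to the DNN classifier
    return model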
The second feature extraction method we experimented with is an RBM. RBMs have a substantial beneficial effect when applied to feature extraction, as proposed by Hinton [START_REF] Hinton | Reducing the dimensionality of data with neural networks[END_REF], because they use hidden units to model the correlations among the raw features of the processed data. Our RBM is composed of 1024 (32 × 32) visible units and 512 hidden units, and is trained with the CD learning algorithm using 30 Gibbs steps.
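For illustration, the sketch below uses scikit-learn's BernoulliRBM as a stand-in; note that it is trained with persistent contrastive divergence (one Gibbs step per update) rather than the 30-step CD schedule used in our experiments, so it only approximates the setup described above.

import numpy as np
from sklearn.neural_network import BernoulliRBM

patches = np.random.rand(1000, 1024)          # flattened 32x32 patches scaled to [0, 1]

rbm = BernoulliRBM(n_components=512,          # 512 hidden units
                   learning_rate=0.01,        # illustrative value
                   n_iter=30,
                   verbose=True)
rbm_features = rbm.fit_transform(patches)     # hidden-unit activations used as features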
In the third method, we adopted the 2D-DFT for feature extraction, as used by Tao et al. [START_REF] Tao | A texture extraction technique using 2d-dft and hamming distance[END_REF]. All images are transformed with the 2D-DFT of Eq. 4.3, and the magnitude coefficient matrices are kept for the classification step.
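A minimal numpy sketch of this step is shown below; the fftshift centering is an optional choice of ours, not a requirement of the method.

import numpy as np

def dft_magnitude_features(patch):
    spectrum = np.fft.fft2(patch)             # 2D discrete Fourier transform (Eq. 4.3)
    spectrum = np.fft.fftshift(spectrum)      # optional: center the low frequencies
    return np.abs(spectrum).ravel()           # magnitude coefficients, flattened for the DNN

patch = np.random.rand(32, 32)
features = dft_magnitude_features(patch)      # 1024-dimensional feature vector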
In the last method, we adopted PCA for feature extraction, reducing the dimension of the input data from 1024 to 512 components. After each feature extraction method, a DNN with the same parameters is used for classification in order to measure the performance of the extracted features. The DNN is composed of three hidden layers with 512, 1024, and 128 neurons in the first, second, and third layer, respectively. The model outputs two options: the first for a patch containing a lung nodule and the second for a patch without a lung nodule. All layers use the ReLU activation function, except for the output, where the sigmoid function is used to compute the output probabilities.
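A hedged Keras sketch of this classification DNN is given below. For simplicity it uses a single sigmoid output unit, an equivalent binary formulation of the two-option output described above; the He initialization, L2 penalty, and dropout anticipate the training choices discussed next, and the penalty weight 1e-4 is an illustrative value.

from tensorflow.keras import layers, models, regularizers

def build_classifier(input_dim):
    hidden = []
    for units in (512, 1024, 128):
        hidden.append(layers.Dense(units, activation='relu',
                                   kernel_initializer='he_normal',
                                   kernel_regularizer=regularizers.l2(1e-4)))
        hidden.append(layers.Dropout(0.5))
    return models.Sequential(
        [layers.Input(shape=(input_dim,))] + hidden +
        [layers.Dense(1, activation='sigmoid')]   # probability of "nodule" vs "no nodule"
    )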
It is known in the literature that a model initialized with the right parameters has a better chance of converging rapidly. Therefore, in our experiments we adopt the He initialization [START_REF] He | Delving deep into rectifiers: Surpassing human-level performance on imagenet classification[END_REF]. It consists of performing a random initialization and then multiplying the random parameters by √(2 / S(l-1)) for the links between layers l-1 and l, where S(l-1) stands for the size of layer l-1. As reported in [START_REF] Hanin | How to start training: The effect of initialization and architecture[END_REF], the main reason for using the He initialization in our work is that it reportedly leads to faster training.

Overfitting is one of the main drawbacks of neural networks. To tackle this issue we adopted two different regularization methods. First, we performed L2-regularization [START_REF] Ng | Feature selection, l 1 vs. l 2 regularization, and rotational invariance[END_REF], which consists of adding a regularization term to the error function so that it keeps decreasing, in the following form:

L_T(W) = L_D(W) + λ · L_W(W)   (4.7)

where W denotes the neural network weights, L_T(W) is the total loss, and λ is the regularization parameter. L_D(W) is a sum-of-squares error between the desired output and the network output, and L_W(W) is the sum of squares of the weight parameters. The second regularization method adopted is dropout [START_REF] Hinton | Improving neural networks by preventing co-adaptation of feature detectors[END_REF], which relies on dropping out units that are mostly irrelevant during training.

In Table 4.1, we present the accuracy of all the methods we experimented with. As expected, CNN feature extraction leads to the best accuracy: 0.996 in training, 0.973 in validation, and 0.962 at test time. The 2D-DFT comes second, reaching 0.990 in training, 0.965 in validation, and 0.943 at test time. The third place belongs to the RBM, which achieved 0.78 in training, 0.730 in validation, and 0.655 at test time. The reason behind this low accuracy may be the shallow nature of RBMs, which makes them need large amounts of data to reach a better accuracy; their power manifests itself when several RBMs are stacked to form a deep belief network, which is beyond the scope of this work. Lastly, PCA was the worst-performing method in terms of accuracy, reaching 0.608 in training, 0.580 in validation, and 0.509 at test time. PCA is the only linear feature transformation we experimented with, so we can argue that, in these experiments, non-linear feature transformations tend to lead to higher accuracy.

Fig. 4.3 provides the accuracy plots for all the feature-extraction methods. It gives a more detailed illustration of the performance of the four methods and shows that the CNN outperforms the other three. We observe that the 2D-DFT is not significantly worse than the CNN in terms of accuracy, but when we compare their variance and bias, the CNN shows low variance and bias compared to the 2D-DFT, which means that the Fourier transform misses relevant correlations between the features and the target outputs, i.e. it underfits. Table 4.2 shows the loss during training and test time for the CNN and the 2D-DFT, from which their bias and variance can be assessed.
Given the results presented in Table 4.2, we observe that the 2D-DFT has high loss and high bias in both training and testing compared to the CNN.
Given the nature of the data we experimented with (lung nodule images) and the sensitivity of the result for the life of a patient, detecting a nodule before it develops into a cancer is a very important task. Therefore, adopting a model with high accuracy and low bias and variance matters greatly and can help doctors take critical decisions and save lives.
Conclusion
In this chapter, we performed a comparative study of four different feature extraction methods based on deep neural networks for lung nodule detection. Our pipeline begins with data preparation; features are then extracted with each method separately and, for evaluation purposes, fed to a deep neural network composed of three hidden layers. Experimental results show that the CNN outperforms the RBM, PCA, and the 2D-DFT, with the 2D-DFT being very close to the CNN in terms of accuracy but suffering from both high bias and high variance, which degrades generalization and affects the inference phase. The high classification accuracy reached after feature extraction with the CNN is evidence that CNNs are very successful at extracting high-level features from lung CT scans for the purpose of lung-nodule detection.
Introduction
Pooling layers play an interesting role in reaching higher accuracies using CNNs. They are responsible for the down-sampling operation that aims to prevent overfitting in CNNs.
There are various conventional pooling operations that work well for that purpose, such as max and average pooling, which are the most popular ones in the literature. However, some variants have proven more efficient at preventing overfitting; these variants mix the features generated by max and average pooling and combine them in an optimal way. In this regard, we propose a new mixing strategy between max and average pooling, called fully mixed-max-average pooling (FMMAP), which outperforms max and average pooling in terms of accuracy and significantly outperforms other mixed-pooling methods time-wise. Furthermore, dropout has shown remarkable results: it has the benefit of combining multiple networks in one architecture while preventing the network's units from co-adapting excessively. However, the dropout function is usually applied only to fully connected layers. To further address the problem of overfitting, we present a novel method called mixed-pooling-dropout, which benefits from using the dropout strategy in the early stages of a CNN, especially in the pooling layers. We represent the dropout function by a binary mask whose elements are drawn independently from a Bernoulli distribution.
We employ these architectures in the medical field. The first one is used for lung nodule classification, where we outperform max and average pooling in terms of accuracy and, while remaining close to other mixed-pooling methods accuracy-wise, significantly outperform them time-wise. The second architecture is applied to brain tumor classification; it outperforms conventional pooling methods as well as the max-pooling-dropout method. With our method, we reach 0.926 versus 0.868, regardless of the retaining probability.
In the upcoming sections, we present each of the architectures in detail, along with the experimental results.
Related work
Pooling layers have played a game-changing role in the rise of CNNs in computer vision, thanks to a structure that helps reduce the number of features without, surprisingly, affecting those that are relevant for model training. Decreasing the number of features in this way has a direct beneficial effect on the performance of a CNN; for example, it helps prevent overfitting and achieve better results in less time. In the deep learning literature, the well-known pooling functions are max, average, and stochastic pooling. However, many variants of these functions have been proposed as well, such as mixed pooling [START_REF] Yu | Mixed pooling for convolutional neural networks[END_REF], mixed max-average pooling [START_REF] Lee | Generalizing pooling functions in cnns: Mixed, gated, and tree[END_REF], the gated pooling (GP) function [START_REF] Lee | Generalizing pooling functions in cnns: Mixed, gated, and tree[END_REF] and max-pooling-dropout [155]. Herein, we present each of these functions. In general, any pooling function can be represented by the formula in Eq. (5.1).
a^(l+1)_j = Pool(a^(l)_1, ..., a^(l)_i, ..., a^(l)_n),   a_i ∈ R^(l)_j   (5.1)
In Eq.(5.1), R (l) j is the j th pooled region at layer l. The pooled region slides over a layer l by a stride s, and n is the number of elements in the pooled region. Pool() denotes the pooling function performed over the pooled region. The pooling operation can be either max or average. Both pooling operations have been used in many works, and have proven very effective in boosting the accuracy [START_REF] Nagi | Max-pooling convolutional neural networks for vision-based hand gesture recognition[END_REF][START_REF] Lin | Network in network[END_REF]. The max pooling operation takes the maximum value in the pooled region as shown in Eq.5.2, while the average pooling operation takes the average value over the pooled region according to Eq.5.3.
Max(x^k_ij) = max_{(p,q) ∈ R_ij} x^k_ij   (5.2)

Avg(x^k_ij) = (1 / |R_ij|) ∑_{(p,q) ∈ R_ij} x^k_ij   (5.3)
In both Eq. 5.2 and Eq. 5.3, x^k_ij represents the element at location (p, q) covered by the pooling region R_ij. Zeiler et al. [START_REF] Zeiler | Stochastic pooling for regularization of deep convolutional neural networks[END_REF] proposed a new probabilistic pooling function, called stochastic pooling, in which the selected output is drawn from a multinomial distribution formed by the units within the local pooled patch: the probabilities p are first computed for each region (see Eq. 5.4), and the activation a_l is then sampled from the multinomial distribution based on p (see Eq. 5.5). Even though these functions can work very well on many data sets, they still encounter some problems, and it is unknown which one is best for solving new problems. It has been shown theoretically [START_REF] Boureau | A theoretical analysis of feature pooling in visual recognition[END_REF] and empirically [START_REF] Yu | Mixed pooling for convolutional neural networks[END_REF] that the choice between the pooling functions depends on the characteristics present in the data. As a solution, Yu et al. proposed a method called mixed pooling (MP) [START_REF] Yu | Mixed pooling for convolutional neural networks[END_REF] that combines the two popular pooling functions, max and average. The idea behind it is to randomly choose either of the two pooling methods (average or max) according to Eq. 5.6.
y^k_ij = λ · max_{(p,q) ∈ R_ij} x^k_ij + (1 - λ) · (1 / |R_ij|) ∑_{(p,q) ∈ R_ij} x^k_ij   (5.6)
Where x k i j represents the element at location (p, q) covered by the pooling region R i j and λ can be either 0, i.e. average pooling operation is performed, or 1, i.e. max pooling is performed. Thus, the pooling regulation scheme becomes a probabilistic matter which helps in overcoming the aforementioned issues. Furthermore, Lee et al. proposed three new methods called mixed max-average, gated and tree pooling [START_REF] Lee | Generalizing pooling functions in cnns: Mixed, gated, and tree[END_REF]. The first method, mixed max-average pooling (MMAP), combines max and average pooling proportionally, in other words the output is a combination of fixed proportions extracted from max and average pooling using Eq.5.6. In this case λ ∈ [0, 1] is a scalar representing the mixing proportion which specifies the exact amount of combination of max and average pooling. The difference between MP and MMAP is that the output of the former is the result of max or average pooling given that λ is either 1 or 0, while the output of the latter is a combination of the max and average operations given that λ can take any value between 0 and 1. In other words, MMAP is a generalization of MP. MMAP has the drawback of being "non-responsive", i.e. the mixing proportion remains fixed regardless of what characteristics are present in the pooled region. The second method proposed by Lee et al., i.e. gated pooling (GP), addresses this nonresponsive behavior. Rather than fixing the mixing proportion, a gating-mask is learned that has the same spatial dimensions as the pooled region. Then the inner product of the gating-mask and the region being pooled produces a scalar which is fed through a sigmoid function to generate the mixing proportion (see Eq.(5.7)).
y^k_ij = σ(ω^T R_ij) · max_{(p,q) ∈ R_ij} x^k_ij + (1 - σ(ω^T R_ij)) · (1 / |R_ij|) ∑_{(p,q) ∈ R_ij} x^k_ij   (5.7)
In Eq. (5.7), x^k_ij represents the element at location (p, q) covered by the pooling region R_ij, ω denotes the values of the gating mask, and σ is the sigmoid function in Eq. (5.8).
σ(z) = 1 / (1 + exp(-z))   (5.8)
The third method proposed by Lee et al. [START_REF] Lee | Generalizing pooling functions in cnns: Mixed, gated, and tree[END_REF] has three main points: first, it learns the pooling filters directly from the data. Second, it learns how to mix the pooling filters in a differentiable way, so that the whole function is differentiable with respect to its parameters and inputs. Finally, tree pooling brings these characteristics together in the structure of a hierarchical tree. Each leaf node is associated with a pooling filter learned during training, denoted v_m, with m from 1 to n (the number of leaves). The internal nodes are processed in the same way as in the gated max-average pooling method, and the root node corresponds to the overall output produced by the tree (see Eq. 5.9). For the specific example of a two-level tree pooling, the resulting function is as follows:
f_tree(x) = σ(ω_3^T x) · v_1^T x + (1 - σ(ω_3^T x)) · v_2^T x   (5.9)
where x represents the input and ω_3 is the gating mask, similar to the one adopted in gated pooling.
Moreover, max-pooling-dropout is another pooling-related method, combining max pooling with a dropout function. Dropout is a regularization approach that is very efficient at overcoming overfitting in artificial and deep neural networks. The authors of [START_REF] Hinton | Improving neural networks by preventing co-adaptation of feature detectors[END_REF] describe in detail how the dropout function works: it acts on fully connected layers by randomly dropping hidden units, each unit being kept with a probability p called the retaining probability. This approach has been adopted in several works, most of them winners of the ImageNet competition, such as AlexNet [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF], SENet [START_REF] Jie | Squeeze-and-excitation networks[END_REF], Inception [START_REF] Szegedy | Going deeper with convolutions[END_REF] and many more. As shown in [155], the dropout function was extended to cover feature extraction layers through a method called max-pooling-dropout. It consists of selecting activations based on a multinomial distribution and then performing the max-pooling operation; in this way the output is picked stochastically instead of simply selecting the strongest activation.
Another dropout-inspired work, called DropConnect, was introduced in [START_REF] Wan | Regularization of neural networks using dropconnect[END_REF]. It randomly selects a set of weights to set to zero instead of setting randomly picked units to zero; the motivation behind this method is to address a shortcoming of the dropout function, which prevents network weights from collaborating with one another to memorize training samples. In [START_REF] Goodfellow | Maxout networks[END_REF], the authors proposed another approach related to dropout, called maxout networks, which improves both the optimization and the model-averaging characteristics of dropout. Adopting a function like dropout in pooling layers is a promising way to enhance the quality of the selected features and reach a better learning scheme.
Fully Mixed Max-Average Pooling for Convolutional Neural Networks
Fully mixed max-average pooling
There is one noticeable issue with the aforementioned mixed pooling operations: they either randomly choose one of the standard pooling operations or combine them proportionally. This raises an interesting question: what if we fully combined max and average pooling in one single layer? Before answering this question, we describe our architecture in detail. The standard pooling layer is replaced by a layer that combines the max and average pooling operations, in order to provide different types of features: global contextual information from average pooling and dense features from max pooling. The proposed mixed pooling block is illustrated in Fig. 5.2. Mathematically, our proposed method can be represented by the following formula:
y^k_ij = f(max_{(p,q) ∈ R_ij} x^k_ij) ⊕ f((1 / |R_ij|) ∑_{(p,q) ∈ R_ij} x^k_ij)   (5.10)
In Eq. (5.10), x^k_ij represents the element at location (p, q) covered by the pooling region R_ij, f represents the 1 × 1 convolution filters, and ⊕ denotes the depth-wise concatenation of the two operations' outputs.
Using both operations in one layer can slow the network down; that is why we included a 1 × 1 convolutional layer after each pooling operation before combining them. The 1 × 1 convolutional layer has the beneficial effect of reducing the output dimension and decreasing the computational cost [START_REF] Szegedy | Going deeper with convolutions[END_REF]. The 1 × 1 convolution was first introduced by Lin [START_REF] Lin | Network in network[END_REF] for the purpose of cross-channel down-sampling. In other words, the 1 × 1 convolution has the advantage of squeezing the information in a given volume by reducing the number of channels.
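A hedged Keras sketch of the proposed FMMAP block (Eq. 5.10) is shown below: max and average pooling are applied in parallel, each followed by a 1 × 1 convolution that halves the number of channels, and the two branches are concatenated depth-wise. The ReLU on the 1 × 1 convolutions and the example input shape are assumptions of the sketch.

from tensorflow.keras import layers

def fmmap_block(x, channels):
    max_branch = layers.MaxPooling2D((2, 2), strides=2)(x)
    max_branch = layers.Conv2D(channels // 2, (1, 1), activation='relu')(max_branch)

    avg_branch = layers.AveragePooling2D((2, 2), strides=2)(x)
    avg_branch = layers.Conv2D(channels // 2, (1, 1), activation='relu')(avg_branch)

    # Concatenating along the channel axis keeps the total depth equal to `channels`.
    return layers.Concatenate(axis=-1)([max_branch, avg_branch])

# Example use on a 32x32 feature map with 64 channels.
inputs = layers.Input(shape=(32, 32, 64))
outputs = fmmap_block(inputs, channels=64)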
Results and discussion
This section presents the results of our comparative study of different pooling-based architectures. The CNN architectures we experimented with are: using max-pooling, using average-pooling, using mixed-pooling, using mixed-max-average-pooling, using gatedpooling and finally using our proposed pooling method. Fig. 5.3. The convolutional neural network architecture adopted for all the used models. Note that the pooling layer is replaced by the convenient pooling operation regarding each one of the six models.
In order to be as neutral as possible in our comparative experiment, we used the same architecture with the same setup for all pooling methods. Fig. 5.3 shows the architecture used in our experiment. It contains a convolution layer with 32 3 × 3 filters, followed by a ReLU activation layer and then a pooling layer. This block is repeated two more times, with 64 and 128 3 × 3 filters in the respective convolution layers. Then come three fully connected (FC) layers for the discriminative part of the CNN, containing 1024, 512, and 128 hidden units, respectively.

We performed our experiments on the Lung Image Database Consortium image collection (LIDC-IDRI) [4]. It contains lung-cancer screening CT scans with described and annotated lesions. It is an open-source data set, mainly dedicated to the development of CAD systems for lung-cancer segmentation and diagnosis. An advantage of this data set is that it comes with descriptive files giving the location and size of every lung nodule in the CT scans. In this work, we are interested only in the lung nodule regions, so we developed an algorithm that extracts lung-nodule patches (32 × 32 pixels in size) from the CT scans using the annotations provided in the descriptive files. Hence, our data set has image patches that contain lung nodules (see Fig. 5.4(a)) and image patches that do not (see Fig. 5.4(b)). We applied data augmentation methods such as translation, rotation at different angles, flipping, and scaling to increase the volume of our data set. In total, it consists of 8,000 examples, of which 70% are used for training, 20% for validation, and 10% for testing.
Our proposed method was subjected to two types of experiments: first, it is compared with the standard pooling operations (max and average); then, it is compared to the mixed-pooling-based methods.

Table 5.1 shows the accuracy of our proposed method together with max and average pooling. Our method outperformed both in terms of accuracy: we achieved 0.954 in training and 0.906 at test time, while max pooling achieved 0.947 in training and 0.901 in testing, and average pooling 0.941 in training and 0.904 at test time. As we predicted, max and average pooling perform roughly the same. Table 5.1 also shows the running time in seconds of the three pooling methods.

While GP has the highest accuracy on the test data set, the results are very close to each other for all four models, which makes it hard to pick a winner and gives evidence that a mixture of features coming from the max and average operations can lead to good results. Table 5.2 also shows the running times (in seconds) for the four models. FMMAP is significantly faster than the other three methods, mainly thanks to the 1 × 1 filters added after each pooling operation. In our proposed pooling block, performing both max and average pooling increases the number of channels, and hence the number of parameters in the following convolution layer. Nevertheless, by using 1 × 1 convolution filters after each pooling operation and before the next convolution layer, we halve the number of channels and therefore reduce the model's number of parameters. The gated pooling model takes more time than the other models, essentially because of the sigmoid function used in its pooling block.

Mixed-pooling-dropout is a fusion of a mixed-pooling layer with a dropout layer, where the mixed-pooling layer consists of a particular mixture of the max and average pooling operations.
Translation invariance is one of the problems faced in image processing, and pooling operations deal with this kind of problem efficiently. Pooling operations also have the beneficial effect of reducing the computational cost of a CNN model by discarding irrelevant features present in a feature map. Nevertheless, adding the dropout function to fully-mixed-pooling layers leads to significant further improvements result-wise, thanks to the stochasticity of dropout and the diversity of the resulting features. In this work, the adopted pooling operation is fully-mixed pooling, which takes the same form as Eq. 5.10.
In that formula, x_k represents the elements of the pooled region, f is the 1 × 1 convolution applied to each pooling output in order to ease the computation, and ⊕ denotes the depth-wise concatenation of the two operations' outputs. Furthermore, since the dropout decisions are binary with respect to a retaining probability, the adopted dropout function is represented by a binary mask whose elements are drawn independently from a Bernoulli distribution. In [155], the authors note that applying a pooling operation after performing dropout can be seen as sampling the activation units from a multinomial distribution. This multinomial distribution is described as follows: we first compute the probability p_i of each unit in the region according to this formula:
p_i = a_i / ∑_{k ∈ R_j} a_k   (5.11)
Then, based on p, the activation a_l is sampled from the multinomial distribution:

s_j = a_l,  where l ∼ P(p_1, ..., p_k)   (5.12)

A dropout function can be included in the feature extraction layers of a CNN in two ways: we can apply dropout after generating the outputs of the fully mixed-pooling layer, or we can perform the dropout function on each pooling operation and then mix the two outputs (max and average). In both scenarios, the dropout function can be represented mathematically by the following equation:
z (l) k ≈ M * y (l) k (5.13)
where M represents a binary mask generated from a Bernoulli distribution with the same dimensions as y^(l)_k, and ( * ) denotes the element-wise multiplication of the generated binary mask and the pooled area. y^(l)_k takes two different values: if the dropout function is performed after fully-mixed pooling, the result of Eq. 5.10 is taken; otherwise y^(l)_k takes the result of Eq. 5.3 in the case of average pooling or of Eq. 5.2 in the case of max pooling. This strategy guarantees that not only the maximum or the average value can be selected: other values that might be relevant to the training process also get a chance, according to a retaining probability p = 1 - q, where q stands for the probability of a value being dropped out. When a dropout function is used in fully connected layers during training, a mean network [START_REF] Srivastava | Dropout: a simple way to prevent neural networks from overfitting[END_REF] is adopted at test time to help reduce the error; a mean network contains all the hidden units but with their outgoing weights halved. In [155], the authors used what they called 'probabilistic weighted pooling' at test time, which consists of averaging all the possible trained max-pooling-dropout networks. In our case, we multiply the output of the pooling layer by the retaining probability p, as indicated by the following equations:
d^(l)_k = p · Max(x_k) ⊕ p · Avg(x_k)   (5.14)

d^(l)_k = p · y^(l)_k   (5.15)
Eq. 5.14 is the formula for performing dropout before mixing the pooling outputs (Max(x_k) from Eq. 5.2, Avg(x_k) from Eq. 5.3), and Eq. 5.15 is the formula for dropout after mixing the max and average pooling features (y^(l)_k from Eq. 5.10). This strategy is equivalent to the mean-network scheme adopted in fully connected layers, and it is an efficient approach, as we show in the upcoming sections. In the next section, we provide empirical proof that our proposed method is significantly superior to conventional pooling and to max-pooling-dropout, but first we give a theoretical argument for why it outperforms those methods. Including features from different sources (max and average) is certainly good for a CNN's accuracy [START_REF] Yu | Mixed pooling for convolutional neural networks[END_REF]. However, some irrelevant features may also be taken into account with this strategy, which has a negative effect on the model's accuracy, so excluding them is mandatory. That is why our proposed method applies a dropout function to stochastically remove irrelevant feature units. This enhances the quality of the model's generated features, which can be regarded as being drawn from a multinomial distribution.
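The second version of mixed-pooling-dropout can be sketched in Keras as follows; note that Keras' Dropout layer implements inverted dropout, so the explicit scaling by p at test time (Eq. 5.15) is handled internally rather than through a separate mean network, which makes this only an approximation of the scheme described above.

from tensorflow.keras import layers

def mixed_pooling_dropout(x, channels, retaining_prob=0.5):
    # Bernoulli mask on the incoming feature maps (Keras' rate is the drop probability 1 - p).
    x = layers.Dropout(rate=1.0 - retaining_prob)(x)

    max_branch = layers.MaxPooling2D((2, 2), strides=2)(x)
    max_branch = layers.Conv2D(channels // 2, (1, 1))(max_branch)   # 1x1 convolution

    avg_branch = layers.AveragePooling2D((2, 2), strides=2)(x)
    avg_branch = layers.Conv2D(channels // 2, (1, 1))(avg_branch)   # 1x1 convolution

    # Depth-wise concatenation of the two branches.
    return layers.Concatenate(axis=-1)([max_branch, avg_branch])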
Experimental results
Our experiments were conducted using a data set of MRI images of brain tumors. The data originally covered 120 patients; ten patients were then excluded because they did not have genomic cluster information available. Therefore, the final version of the data set contains 110 patients gathered from five different institutions [START_REF] Mazurowski | Radiogenomics of lower-grade glioma: algorithmically-assessed tumor shape is associated with tumor genomic subtypes and patient outcomes in a multi-institutional study with the cancer genome atlas data[END_REF]. There were 101 patients with all sequences available and nine patients with a missing post-contrast sequence, and the number of slices ranged from 20 to 88 among patients. To determine the original growth pattern of the tumor, only the preoperative data were analyzed. Evaluation of the tumor shape was based on the Fluid-Attenuated Inversion Recovery (FLAIR) abnormality, because the tumor rarely enhances in Low-Grade Gliomas (LGG). The FLAIR images were manually annotated by drawing the FLAIR abnormality on each slice, using a locally developed software, to form training data for the automated segmentation algorithm; a board-eligible radiologist then reviewed all annotations and modified those that were incorrect. All image data with the corresponding segmentation masks for each case are publicly available via the following link: https://www.kaggle.com/mateuszbuda/lggmrisegmentation/version/2. Since we are performing medical image classification, we only use the image data with their corresponding labels. To conduct our experiments we needed sufficient computational power, so we trained our models on NVIDIA TESLA P100 GPUs.

CNNs are known to be very successful in computer vision in general, and in classification specifically, thanks to their ability to process massive amounts of data. Building blocks such as convolution and pooling are among the keys to a significant accuracy and are mostly used for retrieving complex and relevant features, as demonstrated in [START_REF] Skourt | Feature-extraction methods for lung-nodule detection: A comparative deep learning study[END_REF]. In this work, we inspect the role of pooling layers in enhancing the accuracy of CNNs, together with the use of the dropout function in feature extraction layers. For that, we introduce a novel pooling method named fully-mixed-pooling-dropout and compare it first to conventional pooling methods and then to max-pooling-dropout [155]. Our proposed method comes in two versions. In the first version, max and average pooling are each followed by a 1 × 1 convolution for computational efficiency, the two outputs are concatenated depth-wise, and a dropout layer is applied at the end (see Fig. 5.5(a)). In the second version, a dropout layer is applied to the input feature maps, the output is fed to max pooling and average pooling separately, each followed by a 1 × 1 convolution layer, and the results are then concatenated depth-wise, as shown in Fig. 5.5(b).
The whole architecture is as follows: a convolution layer of 64 filters of shape 3 × 3 with a ReLU activation function, followed by a batch normalization layer and then our proposed mixed-pooling-dropout layer (first model), with a 2 × 2 window and stride 2 in both the max and average pooling (Fig. 5.5(a)). The second architecture has the same parameter configuration except for the pooling part, where the second version of our proposed method is used (Fig. 5.5(b)). Moreover, Fig. 5.5(c) presents the max-pooling-dropout method; it consists of the same layers used in the other architectures, except that max-pooling-dropout is used in the pooling layers.
To train our networks, we used 50 epochs with a batch size of 32, and Adam as the optimization algorithm.
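A minimal sketch of this training configuration is given below; it assumes a Keras model built from one of the architectures above and already loaded arrays x_train, y_train, x_val, y_val, all of which are placeholders here.

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=50, batch_size=32)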
Mixed pooling dropout vs Max, Average and Stochastic pooling
Our method is a pooling-based function, so it is essential to compare its performance with conventional pooling methods such as max, average, and stochastic pooling. Stochastic pooling is included in our experiments because it resembles our proposed method in stochastically selecting activation units and has shown promising results [START_REF] Wang | Alcoholism detection by data augmentation and convolutional neural network with stochastic pooling[END_REF][START_REF] Zhang | Twelve-layer deep convolutional neural network with stochastic pooling for tea category classification on gpu platform[END_REF]. This section contains a comparative experiment in which we compare the performance of the conventional pooling methods with ours. We kept the same architecture description as above (same number of layers and hyperparameters) in the conventional pooling architectures; the only difference lies in the pooling method used for down-sampling (max, average, or stochastic). As presented in Table 5.3, both versions of our proposed method outperform max, average, and stochastic pooling in accuracy, as well as in sensitivity and specificity. As expected, max and average pooling achieve nearly the same results, which is normal given that they have always shown comparable performance in the literature, so the choice between them remains of little importance. On the other hand, stochastic pooling performs slightly better than max and average pooling, thanks to its nature of stochastically drawing activation features.
Mixed pooling dropout vs Max pooling dropout
As mentioned above, one of the few methods that adopt the dropout operation in feature extraction layers is max-pooling-dropout [155]; hence, it is essential to compare its performance with that of our proposed method experimentally. Fig. 5.6 shows the accuracy and loss curves of both versions of our proposed method against max-pooling-dropout. Both versions reach a better accuracy than max-pooling-dropout, with the second version reaching the highest accuracy and the lowest loss. Furthermore, Table 5.4 presents the accuracy of all three methods. Both models of our proposed method outperform max-pooling-dropout by a significant margin: the second model of MiPD reaches 0.926 in training and 0.908 in test, the first model of MiPD 0.888 in training and 0.864 in test, and max-pooling-dropout 0.868 in training and 0.831 in test. In addition to accuracy, our proposed method reaches better sensitivity and specificity, as shown in Table 5.4.
Table 5.4 also shows the loss of the three methods; again, both models of our proposed method achieve a lower loss than max-pooling-dropout, with 0.274, 0.338, and 0.369, respectively. As shown empirically, the second model of our proposed method outperforms the first one, even though both have the same number of layers and parameters. The only difference is that in the second model the dropout is performed twice, before the max and average pooling operations, while in the first model the dropout function is performed only once, after the concatenation of the average and max pooling outputs. The difference in results between the two models is explained by the fact that, when dropout is performed twice in the second model, the randomness of selecting diverse and significant features is higher than when the dropout function is applied once to the generated pooling features in the first model. In addition to accuracy, sensitivity, and specificity, the time performance is also presented in Table 5.4. As presumed, our model takes slightly more time than max-pooling-dropout, which is explained by the fact that a mixture of both max and average pooling operations is performed in the same layer. Nevertheless, using dropout before/after the pooling functions helps greatly in reducing the running time by randomly disabling units that are useless for training, and 1 × 1 convolution filters were added right after each combination of max and average pooling to decrease the number of generated feature maps for the same purpose. During test time, we adopted the mean-network strategy, which means the test-time network contains all units but with their outgoing weights divided by two. According to [155], this strategy is far more effective than averaging the predictions of several separate networks, and our experiments confirm that this is also the case here: using the mean network at test time outperforms the averaging strategy, see Table 5.5. Our proposed method showed great performance on medical images. Its generalization to other application domains is out of the scope of this study. Nevertheless, our proposed pooling block is composed of state-of-the-art components that have shown very efficient results in many application domains [START_REF] Lin | Network in network[END_REF][START_REF] Szegedy | Inception-v4, inceptionresnet and the impact of residual connections on learning[END_REF] (including the medical domain [START_REF] Winkels | d g-cnns for pulmonary nodule detection[END_REF]), and their combination in a certain manner has already proved effective for different application fields [START_REF] El Hassani | Efficient lung nodule classification method using convolutional neural network and discrete cosine transform[END_REF]. Given that, and until we empirically verify these assumptions in future work, we can presume that our proposed method can be generalized to other applications.
Dropout rate
The dropout function works with a hyperparameter that directly controls which activation units are dropped out of training, named the retaining probability p. In [START_REF] Srivastava | Dropout: a simple way to prevent neural networks from overfitting[END_REF], it is shown that p makes various neural networks very efficient. p can take values from 0 to 1: p = 1 means that no units are excluded from training, while small values of p mean that more units are disabled during training. In the case of our proposed method, p = 1 means that plain mixed pooling is performed, while in max-pooling-dropout p = 1 means that plain max pooling is used. As shown in the previous section, mixed-pooling layers reach promising results compared to conventional pooling methods. Including the dropout function in fully connected layers has shown very good results, especially when an optimal value of the retaining probability p is chosen for each type of layer. Typical values for hidden layers lie between 0.5 and 0.8, the experimentally optimal one being 0.5. For input layers, the choice of p depends on the kind of data being processed; according to [START_REF] Srivastava | Dropout: a simple way to prevent neural networks from overfitting[END_REF], the optimal value of p for input layers when processing image data sets is 0.8.
In the above experiments, we assumed that the optimal value of p is 0.5, since it is the optimal value when using the dropout function in fully connected layers according to [START_REF] Srivastava | Dropout: a simple way to prevent neural networks from overfitting[END_REF]. However, to validate this assumption, we conducted further experiments in which we tried different values of p for all the architectures. Fig. 5.7 shows the impact of the retaining probability on the accuracy of each method. The second model of our proposed method outperforms the other methods in all scenarios, and its model with p = 0.5 reaches the highest accuracy of all. Our proposed method (both versions) performs poorly for higher values of p and relatively better for smaller values; as presumed, the optimal retaining probability remains p = 0.5. For max-pooling-dropout the trend is the opposite, although p = 0.5 remains the optimal value in that case too. In general, regardless of the value of p, our proposed method outperforms max-pooling-dropout.
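The sweep over retaining probabilities can be sketched as follows; build_model is a hypothetical helper standing for any of the compared architectures, the grid of p values is illustrative, and x_train, y_train, x_val, y_val are assumed to be already loaded (recall that Keras' Dropout rate corresponds to 1 - p).

results = {}
for p in (0.3, 0.5, 0.7, 0.9):
    model = build_model(retaining_prob=p)      # hypothetical builder, one per compared method
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    hist = model.fit(x_train, y_train,
                     validation_data=(x_val, y_val),
                     epochs=50, batch_size=32, verbose=0)
    results[p] = max(hist.history['val_accuracy'])   # best validation accuracy for this p
print(results)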
Conclusion
In this chapter, we presented two different mixed pooling strategies. On the one hand, we conducted a pooling-based comparative study and proposed a new mixed-pooling method, called fully mixed max-average pooling. With our method, we perform a full mixture of the max and average pooling operations in one single pooling layer, with a 1 × 1 convolutional layer to reduce both the number of parameters and the computational cost. We trained our models on the Lung Image Database Consortium image collection (LIDC-IDRI) for lung nodule detection. Experimental results show that our proposed method outperforms the standard pooling methods accuracy-wise, while being only 0.05% lower in accuracy than the state-of-the-art gated pooling method, which leads this experiment. Nevertheless, our pooling method is 2.68 times faster than any of the mixed pooling strategies used in this chapter. To summarize, FMMAP provides a mixture of max and average pooling features with a reasonable accuracy at a significantly lower computational cost than popular mixed pooling architectures.
On the other hand, with the second mixed pooling strategy, we proposed a novel pooling layer that mixes the max and average pooling operations and uses dropout as a probabilistic function in one single layer; we named it mixed-pooling-dropout. In our method, dropout selects activation units based on a Bernoulli distribution with a retaining probability p at training time, while at test time a mean-network strategy is adopted, given its efficiency.
Our proposed method was presented in two versions. We compared both models to conventional pooling methods, which our method outperformed by a significant margin. We then showed the superiority of our method over max-pooling-dropout, the only method in the literature that uses the same strategy as ours. Moreover, we tried different retaining probability values to find the one for which our method reaches the highest accuracy, and found that p = 0.5 is optimal. MiPD is a mixture of max and average pooling with a 1 × 1 convolution for computational optimization and a dropout function for stochasticity. Our experiments showed that our proposed method outperforms max-pooling-dropout for any retaining probability value.
Conclusion
In this part, we investigated and introduced some of the feature extraction methods for medical image classification using CNNs. In the first chapter, we performed a comparative study of four different feature extraction methods based on deep neural networks for lung nodule detection. Experimental results show that the CNN outperforms the RBM, PCA, and the 2D-DFT, with the 2D-DFT being very close to the CNN in terms of accuracy but suffering from both high bias and high variance, which degrades generalization and affects the inference phase. The high classification accuracy reached after feature extraction with the CNN is evidence that CNNs are very successful at extracting high-level features from lung CT scans for the purpose of lung-nodule detection.
In the second chapter, we presented two different mixed pooling strategies. The first one is called fully mixed max-average pooling, where we performed a full mixture of the max and average pooling operations in one single pooling layer with a 1 × 1 convolutional layer to reduce both the number of parameters and the computational cost. To examine the performance of this architecture, we adopted the Lung Image Database Consortium image collection (LIDC-IDRI) for lung nodule detection, and we compared our architecture to similar mixed pooling strategies, showing that our proposed method is 2.68 times faster than any of the mixed pooling strategies adopted in our study.
The second proposed architecture is called mixed pooling dropout, in which we include a dropout function in the mixed pooling layer. Dropout in our method selects activation units based on a Bernoulli distribution with a retaining probability p at training time, while at test time a mean-network strategy is adopted, given its efficiency. Our proposed method was presented in two versions, and with both we managed to outperform max, average, and stochastic pooling in a first experiment and to also outperform max-pooling-dropout in a second experiment, regardless of the retaining probability value.
We followed the same experimentation strategy as in the first part: we first tested feature extraction methods from the literature for medical image classification, then we proposed our own architectures and outperformed conventional and state-of-the-art pooling methods in medical image classification.
General Conclusion

Summary
The main goal of this research thesis was to thoroughly investigate some deep learning architectures that are suitable for medical image processing, and to develop new architectures that improve accuracy when dealing with medical images.
In the introduction, we raised some research questions, which we answer as follows:
1. Q: Data preparation is one of the mandatory steps to achieve satisfactory results; can a conventional deep learning method perform an efficient segmentation by relying on good data preparation?
A: In chapter 2, part II, we performed lung CT image segmentation using the U-Net architecture. However, before starting the training process, we had to prepare our own data set to fit the network. The LIDC-IDRI data set contains raw lung CT scans without any segmentation maps. Hence, our first challenge was to take input images containing lungs from the LIDC-IDRI data set and provide their corresponding segmentation masks manually, which was a time-consuming task. Nevertheless, the ability of the adopted U-Net to deal with small data sets was a major advantage in still reaching satisfactory segmentation results, and data augmentation provided additional training data. Therefore, the data preparation process and data augmentation had a huge impact on improving the performance of lung CT image segmentation with the U-Net architecture. In the end, we came to conclude that convolutions and poolings are the best match for a CNN to achieve a satisfactory classification accuracy.
A-2: In the first section of chapter 5, part III, we proposed a new pooling strategy to enhance the performance of CNNs for image classification. Our proposed pooling method relies on mixing max and average pooling in an optimal manner. The comparison of our method with conventional pooling methods showed an improvement accuracy-wise, while the comparison with other pooling methods that mix max and average pooling showed an enhancement in terms of time performance.
A-3: In the second section of chapter 5, part III, we proposed a new enhancement of the pooling layer in a CNN architecture, extending the mixed-pooling strategy proposed before by involving a dropout function in order to lower the computational cost, and improving the accuracy so as to outperform conventional pooling strategies as well as those involving a mixture strategy.

In this work, we gathered findings that, in one way or another, improved the performance of certain deep learning architectures in medical image analysis. Other methods we tried are not mentioned in this work because they did not show what we were expecting; they nevertheless helped us find other ways to proceed in our direction and improve our work.
We started this thesis in 2016; at that stage, deep learning in computer vision was not as explored as it is today, which made us face some struggles, such as the lack of prepared data sets, especially in the medical domain, and the lack of computational power. Nonetheless, we managed to reach the goal that we set by preparing our own data sets, finding new ways to train our models even with insufficient computational power, and proposing new deep learning architectures after exploring and comparing many existing ones.
Future work
Our research work has left some roads uncovered, due to the lack of prepared data or to limited computational resources. However, we have reached satisfactory results that may lead to further findings, among which we mention the following:
As mentioned before, medical image processing is mainly manifested through detection, segmentation, and classification. We only explored the last two phases, given their importance in delivering a decent diagnosis. Nevertheless, we intend to extend our research to the detection phase by proposing improvements to the YOLO [START_REF] Redmon | You only look once: Unified, real-time object detection[END_REF] algorithm and adapting it for real-time brain tumor detection.
In the work presented in chapter 5, part III, we proposed a new pooling strategy that uses a multinomial distribution to select activations. We plan to try other kinds of distributions for selecting activations and to compare their impact on the selected activations, and hence on the accuracy.
As for our proposed semantic segmentation architecture, we intend to generalize it to other types of diseases to see whether it performs as well as in segmenting brain tumors. If not, we want to explore how to make it generic enough to perform well on any type of image data set.
List of Tables
3.1 Proposed method's DSC score compared to those of U-Net, Att-UNet and FCN.
4.1 DNN accuracy for the four feature-extraction methods.
4.2 DNN loss for CNN and 2D-DFT.
5.1 Model accuracy and time performance for max, average and our proposed pooling method.
5.2 Model accuracy and time performance for all mixed-pooling architectures including our proposed method.
5.3 Proposed method's accuracy, sensitivity and specificity compared to those of max, average and stochastic pooling.
5.4 Models' accuracy, loss and time performance in seconds for our proposed method (both versions) vs max-pooling-dropout.
5.5 Mean-network vs. averaging strategy at test time for our proposed method.

List of Abbreviations
AI Artificial Intelligence; ANN Artificial Neural Network; API Application Programming Interface; BM Boltzmann Machine; CAD Computer Aided Diagnosis; CD Contrastive Divergence; CNN Convolutional Neural Network; ConvLSTM Convolutional Long Short Term Memory; CPU Central Processing Unit; CRF Conditional Random Field; cSE Channel Squeeze and Excitation; CT Computed Tomography; DBN Deep Belief Network; DFT Discrete Fourier Transform; DL Deep Learning; DNN Deep Neural Network; DSC Dice Score Coefficient; ED Edema; EM Expectation Maximization; ET Enhancing Tumor; FC Fully Connected; FCN Fully Convolutional Networks; FFT Fast Fourier Transform; FMMAP Fully Mixed-Max-Average Pooling; FPN Feature Pyramid Network; GAN Generative Adversarial Network; GP Gated Pooling; GPU Graphics Processing Unit; GRU Gated Recurrent Units; HAANet Hierarchical Aggregation Network; HGG High Grade Glioma; HRNet High Resolution Network; LGG Low Grade Glioma; LIDC Lung Image Database Consortium; LSTM Long Short Term Memory; MaDP Max Pooling Dropout; MCMC Markov Chain Monte Carlo; MiDP Mixed Pooling Dropout; MLP Multi-Layer Perceptron; MMAP Mixed Max Average Pooling; MP Mixed Pooling; MRF Markov Random Field; MRI Magnetic Resonance Imaging; MSConvLSTM Multi-Scale Convolutional Long Short Term Memory; NCR Necrotic; NET Non-Enhancing Tumor; PCA Principal Component Analysis; RBM Restricted Boltzmann Machine; ReLU Rectified Linear Unit; RNN Recurrent Neural Network; SDN Stacked Deconvolutional Network; SGD Stochastic Gradient Descent; ResNet Residual Network; scSE Spatial-Channel Squeeze and Excitation; SENet Squeeze and Excitation Network; SE-inception Squeeze and Excitation inception; SE-ResNet Squeeze and Excitation Residual Network; SIFT Scale Invariant Feature Transform; SML Stochastic Maximum Likelihood; sSE Spatial Squeeze and Excitation; TC Tumor Core; TPU Tensor Processing Unit; WT Whole Tumor.

Chapter 1 Contents: 1.2 Deep Learning Overview (1.2.1 Supervised learning; 1.2.2 Unsupervised learning; 1.2.3 Semi-supervised learning); 1.3 Convolutional Neural Networks (1.3.1 Introduction; 1.3.2 Basic components of CNN); 1.4 Deep Learning Applications in Medical Image Analysis (1.4.1 Detection; 1.4.2 Segmentation; 1.4.3 Classification); 1.5 Conclusion.
Fig. 1.1. Basic MLP architecture.
Fig. 1.2. Visual cortex resemblance with convolutional neural network architecture.
Fig. 1.3. Basic Convolutional Neural Network architecture.
Fig. 1.4. Simple convolution operation.
Fig. 1.5. 1 × 1 convolution.
Fig. 1.6. Flattened convolution; X, Y and Z are the width, height and depth respectively.
Fig. 1.7. Cross-channel convolution.
Fig. 1.8. Depth-wise convolution.
Fig. 1.9. Grouped convolution.
Fig. 1.10. Shuffled-grouped convolution.
Fig. 1.11. Sigmoid activation function representation.
Fig. 1.12. Hyperbolic tangent activation function representation.
Fig. 1.13. ReLU activation function representation.
Fig. 1.14. Leaky ReLU activation function representation.
Fig. 1.15. Swish activation function representation.
Fig. 1.16. Neural network transition from basic to dropout.
Chapter 2 Contents: 2.2 Methods; 2.3 Results and discussion; 2.4 Conclusion.
Fig. 2.1. The U-Net architecture.
Fig. 2.2. Rectified Linear Units (ReLU), presented in (2.1).
Fig. 2.3. The way that the U-Net architecture works is that it takes the input image to generate the corresponding map for segmentation.
Chapter 3 Contents: 3.1 Overview; 3.2 Related work; 3.3 Method; 3.4 Experiments and results; 3.5 Conclusion.
We propose a novel deep neural network, called Multi-Scale ConvLSTM Attention Neural Network (MSConvLSTM-Att), to automatize brain tumor semantic segmentation. Our architecture is multi-scale-attention based with each level using Convolutional Long Short Term Memory (ConvLSTM) [157], Squeeze and Excitation-inception (SE-inception) [142] and Squeeze and Excitation-Residual-Network (SE-ResNet) [142].
Fig. 3.1. Inception block.
Fig. 3.2. SE-inception block.
Fig. 3.3. ResNet block.
Fig. 3.4. SE-ResNet block.
Fig. 3.5. Spatial Attention Module.
Fig. 3.7. Segmentation results sample: (a) is the input MRI images, (b) is the ground truth and (c) is the segmentation results using our proposed architecture.
Chapter 4. Feature Extraction Methods for Lung-Nodule Detection: A Comparative Deep Learning Study
Contents: 4.1 Overview; 4.2 Methods (4.2.1 Restricted Boltzmann Machines; 4.2.2 2D Discrete Fourier Transform; 4.2.3 Principal Component Analysis; 4.2.4 Convolutional Neural Networks); 4.3 Experimentation and results; 4.4 Conclusion.
Fig. 4.1. A sample from the lung nodule data set used in our experimentation.
Fig. 4.2. Overview of the data processing flow (from data pre-processing to classification).
Fig. 4.3. Accuracy charts for the four different feature extraction methods in validation and training.
Chapter 5 Contents: 5.2 Related work; 5.3 Fully Mixed Max-Average Pooling for Convolutional Neural Networks (5.3.1 Fully mixed max-average pooling; 5.3.2 Results and discussion); 5.4 Mixed Pooling-Dropout for Convolutional Neural Networks Regularization (5.4.1 Mixed pooling dropout; 5.4.2 Experimental results); 5.5 Conclusion.
Fig. 5.1 illustrates how a pooling function works in general. The input represents layer l, the red region is the pooled region R(l)j (here j is 1), the output X is a(l+1)j in the formula, and the stride s is 2 in this example.

Fig. 5.1. Illustration of a pooling operation.
a_l  where  l ∼ P(p_1 ... p_k)     (5.5)
Fig. 5.2. Fully mixed max-average pooling block.
Fig. 5.4. A sample from the lung nodule data set used in our experiments.

5.4 Mixed Pooling-Dropout for Convolutional Neural Networks Regularization
5.4.1 Mixed pooling dropout

Fig. 5.5. The three architectures adopted for the experimentation: mixed-pooling-dropout first version (a), second version (b) and max-pooling-dropout (c).
Fig. 5.6. Model accuracy and loss charts for our proposed dropout-pooling method (v1 and v2) and max-pooling-dropout.
Fig. 5.7. Training accuracy by retaining probability for our proposed mixed-pooling-dropout method (v1 and v2) and max-pooling-dropout.
2. Q: Feature extraction is one of the main steps in preparing data for any computer vision system; it consists of extracting only the relevant characteristics from the data for the next phases. What are the feature extraction methods that can help our deep learning network achieve the best segmentation result?
A: In chapter 3, part II, we proposed a new deep learning architecture for brain tumor semantic segmentation based on some of the state-of-the-art deep learning feature extraction methods. These feature extraction methods were organized in a multi-scale way, then followed by an attention mechanism to put more focus on the brain tumor. By including these feature extraction methods in our proposed architecture, we retained different kinds of feature characteristics for the attention mechanism to work with. We therefore reached the best semantic segmentation results (relying on robust feature extraction methods) compared to U-Net, Attention U-Net and FCN.

3. Q: CNNs are one of the state-of-the-art algorithms that drove the rise of deep learning; they consist of a variety of feature extraction functions followed by a classifier. Medical image classification also relies on good feature extraction methods to reach higher accuracy, so CNNs can be a good fit for that matter. Are convolution and pooling methods the best choice for optimal feature extraction in CNNs? What enhancements could we make to CNN feature extraction methods to improve their performance?
A: In chapter 4, part III, we presented a deep learning feature extraction comparative study. In this study, we differentiate between local and general feature extraction methods and focus on the general ones, since CNNs are considered general. We then gathered some popular general feature extraction methods and compared their performance to the feature extraction part of a standard CNN (an illustrative sketch of the mixed max-average pooling idea is given below).
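As a rough illustration of the pooling enhancement mentioned above, the sketch below mixes max and average pooling over each window with a fixed weight alpha; the function name, the window and stride values and the weight are illustrative assumptions of ours and do not reproduce the exact formulation of chapter 5.

```python
import numpy as np

def mixed_max_avg_pool(feature_map, size=2, stride=2, alpha=0.5):
    """Naive mixed max-average pooling on a 2D feature map.

    alpha = 1.0 gives pure max pooling, alpha = 0.0 pure average pooling.
    This is only a sketch of the mixing idea, not the thesis implementation.
    """
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.zeros((out_h, out_w), dtype=float)
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * stride:i * stride + size,
                                 j * stride:j * stride + size]
            out[i, j] = alpha * window.max() + (1.0 - alpha) * window.mean()
    return out

# Example: pool a 4x4 map down to 2x2.
x = np.arange(16, dtype=float).reshape(4, 4)
print(mixed_max_avg_pool(x, alpha=0.5))
```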
Publications
• Brahim AIT SKOURT, Nikola S. NIKOLOV, and Aicha MAJDA. "Multi-Scale ConvLSTM Attention-Based Brain Tumor Segmentation." International Journal of Advanced Computer Science and Applications (IJACSA), in press, 2022.
Table 3.1. Proposed method's DSC score compared to those of U-Net, Att-UNet and FCN.
Labels ET WT CT Mean
U-Net 0.563 0.848 0.797 0.736
Att-UNet 0.637 0.875 0.845 0.786
FCN 0.551 0.853 0.781 0.728
ours 0.649 0.881 0.865 0.798
Table 4.1. DNN accuracy for the four feature-extraction methods.
Methods CNN 2D-DFT RBM PCA
Training accuracy 0.996 0.990 0.780 0.608
Validation accuracy 0.973 0.965 0.730 0.580
Test accuracy 0.962 0.943 0.655 0.509
Table 4.2. DNN loss for CNN and 2D-DFT.
Methods CNN 2D-DFT
Training loss 3.6% 8.4%
Test loss 4.1% 12.3%
Table 5.1. Model accuracy and time performance for max, average and our proposed pooling method.
Networks Average-pooling Max-pooling FMMAP
Training accuracy 0.931 0.937 0.954
Test accuracy 0.903 0.901 0.906
Performance Time 190.466 190.551 209.753
FMMAP took slightly more time than max and average pooling, which is expected given that both methods are performed in the same layer.
Table 5.2. Model accuracy and time performance for all mixed-pooling architectures including our proposed method.

The second part of our comparative study is presented in Table 5.2. As can be observed, all four methods achieve relatively good accuracy, with GP reaching 0.973 in training and 0.956 in test, MMAP 0.979 in training and 0.938 in test, MP 0.962 in training and 0.922 in test, and our FMMAP 0.954 and 0.906 in training and test, respectively.
Networks FMMAP MP MMAP GP
Training accuracy 0.954 0.962 0.979 0.973
Test accuracy 0.906 0.922 0.938 0.956
Performance time 209.753 353.617 430.154 561.488
Table 5.3. Proposed method's accuracy, sensitivity and specificity compared to those of max, average and stochastic pooling.
Methods Sensitivity Specificity Accuracy
Max 0.843 0.849 0.850
Avg 0.850 0.844 0.855
Stochastic 0.858 0.854 0.864
Ours(mix first) 0.884 0.874 0.888
Ours(drop first) 0.918 0.920 0.926
Table 5.4. Models' accuracy, loss and time performance in seconds for our proposed method (both versions) vs max-pooling-dropout.
Methods MaPD Ours(mix first) Ours(drop first)
Training accuracy 0.868 0.888 0.926
Sensitivity 0.858 0.884 0.918
Specificity 0.861 0.874 0.920
Training Loss 0.369 0.338 0.274
Test accuracy 0.831 0.864 0.908
Performance Time 810.521 853.940 846.687
Table 5.5. Mean-network vs. averaging strategy at test time for our proposed method.
Methods Ours(mix first) Ours(drop first)
Mean-network accuracy 0.864 0.908
Averaging accuracy 0.858 0.897
• Brahim AIT SKOURT, Nikola S. NIKOLOV, and Aicha MAJDA. "Fully Mixed Max-Average Pooling for Convolutional Neural Network." Accepted at the International Network of Research and Training (INRT), Agadir, 2022.
• Brahim AIT SKOURT, Abdelhamid EL HASSANI, and Aicha MAJDA. "Mixed-pooling-dropout for convolutional neural network regularization." Journal of King Saud University - Computer and Information Sciences 34.8 (2022): 4756-4762.
• Abdelhamid EL HASSANI, Brahim AIT SKOURT, and Aicha MAJDA. "Efficient Lung Nodule Classification Method using Convolutional Neural Network and Discrete Cosine Transform." International Journal of Advanced Computer Science and Applications 12.2 (2021).
• Brahim AIT SKOURT, Nikola S. NIKOLOV, and Aicha MAJDA. "Feature-extraction methods for lung-nodule detection: A comparative deep learning study." 2019 International Conference on Intelligent Systems and Advanced Computing Sciences (ISACS), pp. 1-6. IEEE, 2019.
• Abdelhamid EL HASSANI, Brahim AIT SKOURT, and Aicha MAJDA. "Efficient Lung CT Image Segmentation using Mathematical Morphology and the Region Growing algorithm." International Conference on Intelligent Systems and Advanced Computing Sciences (ISACS), pp. 1-6. IEEE, 2019.
• Brahim AIT SKOURT, Abdelhamid EL HASSANI, and Aicha MAJDA. "Lung CT image segmentation using deep neural networks." Procedia Computer Science 127 (2018): 109-113.
Conclusion: In this last section, we present a general conclusion of our thesis and future work.
https://case.edu/med/neurology/NR/MRI Basics.htm
https://image-net.org/challenges/beyond ilsvrc
http://medicaldecathlon.com
https://case.edu/med/neurology/NR/MRI Basics.htm
Acknowledgements
In (3.2), λ is initialized by 0 and gradually updated to give more weight to the spatial attention map, as adopted in [START_REF] Fu | Dual attention network for scene segmentation[END_REF].
At the last level, we perform a convolution operation to generate the final prediction map for each scale and then average all these maps to output the segmentation map. Fig. 3.6 presents an overview of our proposed architecture.
Experiments and results
Conclusion
In this part, we used one of the most effective deep learning models for medical image segmentation, U-Net, to segment the lung parenchyma. This model can produce high-quality segmentation results with only a few hundred images in the data set. We achieved a Dice coefficient score of 0.9502 using U-Net, which is a high score for this kind of task. This approach is general and can be used for many different medical image segmentation tasks. This work was a preliminary study to explore how deep learning models can help with medical image segmentation and a first step towards designing new deep learning models for this domain.
Thereafter, we introduced a new deep learning model for brain tumor segmentation, which we called the Multi-Scale ConvLSTM Attention Neural Network, and we evaluated its performance against various deep learning models designed for such tasks. Our model consists of a multi-scale architecture that combines different feature extraction blocks such as Inception, Squeeze-Excitation, Residual Network, ConvLSTM and attention units. We benchmarked our model against the standard U-Net, AttU-Net and FCN, which have proven effective in semantic segmentation. Experimental results demonstrated that our model surpasses the standard U-Net, AttU-Net and FCN in terms of Dice score. Our model achieved 0.797 as a mean Dice score over the three parts of the brain tumor, while Attention U-Net, standard U-Net and FCN achieved 0.786, 0.736 and 0.728 respectively. We notice that both our model and AttU-Net perform better than the others, which can be attributed to the use of attention modules that improve the segmentation process. Moreover, our model beats AttU-Net, and this is due to the use of ConvLSTM, SE-Inception and SE-ResNet.
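For completeness, the Dice score used in these comparisons can be computed from a binary prediction and a binary ground-truth mask as in the following sketch; the function name and the per-label binarization are our own illustrative choices, not the thesis evaluation code.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred and target are 0/1 (or boolean) arrays of the same shape;
    eps avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example with two small masks.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_score(a, b))  # 2*1 / (2+1) ≈ 0.67
```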
As a first step in this part, we tested existing deep learning architectures for medical image segmentation. We then proposed a new architecture to serve the same purpose. Owing to the lack of computational power, we only compared our proposed architecture to others at the same level.
Part III Medical Image Classification |
04108433 | en | [
"info.info-lo"
] | 2024/03/04 16:41:22 | 2001 | https://inria.hal.science/hal-04108433/file/nf.pdf | Gilles Dowek
email: [email protected]
The Stratified Foundations as a theory modulo
The Stratified Foundations are a restriction of naive set theory where the comprehension scheme is restricted to stratifiable propositions. It is known that this theory is consistent and that proofs strongly normalize in this theory. Deduction modulo is a formulation of first-order logic with a general notion of cut. It is known that proofs normalize in a theory modulo if it has some kind of many-valued model called a pre-model. We show in this paper that the Stratified Foundations can be presented in deduction modulo and that the method used in the original normalization proof can be adapted to construct a pre-model for this theory.
The Stratified Foundations are a restriction of naive set theory where the comprehension scheme is restricted to stratifiable propositions. This theory is consistent [START_REF] Jensen | On the consistency of a slight (?) modification of Quine's new foundations[END_REF] while naive set theory is not and the consistency of the Stratified Foundations together with the extensionality axiom -the so-called New Foundations -is open.
The Stratified Foundations extend simple type theory and, like in simple type theory, proofs strongly normalize in The Stratified Foundations [START_REF] Crabbé | Stratification and cut-elimination[END_REF]. These two normalization proofs, like many, have some parts in common, for instance they both use Girard's reducibility candidates. This motivates the investigation of general normalization theorems that have normalization theorems for specific theories as consequences. The normalization theorem for deduction modulo [START_REF] Dowek | Proof normalization modulo[END_REF] is an example of such a general theorem. It concerns theories expressed in deduction modulo [START_REF] Dowek | Theorem proving modulo[END_REF] that are first-order theories with a general notion of cut. According to this theorem, proofs normalize in a theory in deduction modulo if this theory has some kind of many-valued model called a pre-model. For instance, simple type theory can be expressed in deduction modulo [START_REF] Dowek | Theorem proving modulo[END_REF][START_REF] Dowek | HOL-λσ an intentional first-order expression of higher-order logic[END_REF] and it has a pre-model [START_REF] Dowek | Proof normalization modulo[END_REF][START_REF] Dowek | HOL-λσ an intentional first-order expression of higher-order logic[END_REF] and hence it has the normalization property. The normalization proof obtained this way is modular: all the lemmas specific to type theory are concentrated in the pre-model construction while the theorem that the existence of a premodel implies normalization is generic and can be used for any other theory in deduction modulo.
The goal of this paper is to show that the Stratified Foundations also can be presented in deduction modulo and that the method used in the original normalization proof can be adapted to construct a pre-model for this theory. The normalization proof obtained this way is simpler than the original one because it simply uses the fact that proofs normalize in the Stratified Foundations if this theory has a pre-model, while a variant of this proposition needs to be proved in the original proof.
It is worth noticing that the original normalization proof for the Stratified Foundations is already in two steps, where the first is the construction of a socalled normalization model and the second is a proof that proofs normalize in the Stratified Foundations if there is such a normalization model. Normalization models are, more or less, pre-models of the Stratified Foundations. So, we show that the notion of normalization model, that is specific to the Stratified Foundations, is an instance of a more general notion that can be defined for all theories modulo, and that the lemma that the existence of a normalization model implies normalization for the Stratified Foundations is an instance of a more general theorem that holds for all theories modulo.
The normalization proof obtained this way differs also from the original one in other respects. First, to remain in first-order logic, we do not use a presentation of the Stratified Foundations with a binder, but one with combinators. To express the Stratified Foundations with a binder in first-order logic, we could use de Bruijn indices and explicit substitutions along the lines of [START_REF] Dowek | HOL-λσ an intentional first-order expression of higher-order logic[END_REF]. The pre-model construction below should generalize easily to such a presentation. Second, our cuts are cuts modulo, while the original proof uses Prawitz' folding-unfolding cuts. It is shown in [START_REF] Dowek | About folding-unfolding cuts and cuts modulo[END_REF] that the normalization theorems are equivalent for the two notions of cuts, but that the notion of cut modulo is more general that the notion of folding-unfolding cut. Third, we use untyped reducibility candidates and not typed ones as in the original proof. This quite simplifies the technical details.
A last benefit of expressing the Stratified Foundations in deduction modulo is that we can use the method developed in [START_REF] Dowek | Theorem proving modulo[END_REF] to organize proof search. The method obtained this way, that is an analog of higher-order resolution for the Stratified Foundations, is much more efficient than usual first-order proof search methods with the comprehension axioms, although it remains complete as the Stratified Foundations have the normalization property.
1 Deduction modulo
Identifying propositions
In deduction modulo, the notions of language, term and proposition are those of first-order logic. But a theory is formed with a set of axioms Γ and a congruence ≡ defined on propositions. Such a congruence may be defined by rewrite systems on terms and on propositions (as propositions contain binders -quantifiers -, these rewrite systems are in fact combinatory reduction systems [START_REF] Klop | Combinatory reduction systems: introduction and survey[END_REF]). Then, the deduction rules take this congruence into account. For instance, the modus ponens is not stated as usual
from A ⇒ B and A, infer B

but, as the first premise need not be exactly A ⇒ B but may be only congruent to this proposition, it is stated

from C and A, infer B,   provided C ≡ A ⇒ B
All the rules of intuitionistic natural deduction may be stated in a similar way. Classical deduction modulo is obtained by adding the excluded middle rule (see figure 1).
For example, in arithmetic, we can define a congruence with the following rewrite system

0 + y → y
S(x) + y → S(x + y)
0 × y → 0
S(x) × y → x × y + y
In the theory formed with a set of axioms Γ containing the axiom ∀x x = x and this congruence, we can prove, in natural deduction modulo, that the number 4 is even
axiom:                         Γ ⊢≡ ∀x x = x
∀-elim (x, x = x, 4):          Γ ⊢≡ 2 × 2 = 4
∃-intro (x, 2 × x = 4, 2):     Γ ⊢≡ ∃x 2 × x = 4
Substituting the variable x by the term 2 in the proposition 2 × x = 4 yields the proposition 2 × 2 = 4, that is congruent to 4 = 4. The transformation of one proposition into the other, that requires several proof steps in usual natural deduction, is dropped from the proof in deduction modulo.
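To make the effect of the congruence concrete, here is a minimal sketch that normalizes closed arithmetic terms with the four rewrite rules above; the nested-tuple encoding and the helper names are our own illustrative choices, not part of the paper.

```python
# Terms: 'Z' is zero, ('S', t) is successor, ('+', a, b) and ('*', a, b).
def rewrite(t):
    """Apply the rewrite rules of the congruence until a normal form is reached."""
    if isinstance(t, tuple):
        t = (t[0],) + tuple(rewrite(a) for a in t[1:])  # normalize subterms first
        op = t[0]
        if op == '+':
            if t[1] == 'Z':                 # 0 + y -> y
                return t[2]
            if t[1][0] == 'S':              # S(x) + y -> S(x + y)
                return ('S', rewrite(('+', t[1][1], t[2])))
        if op == '*':
            if t[1] == 'Z':                 # 0 * y -> 0
                return 'Z'
            if t[1][0] == 'S':              # S(x) * y -> x * y + y
                return rewrite(('+', rewrite(('*', t[1][1], t[2])), t[2]))
    return t

def num(n):
    return 'Z' if n == 0 else ('S', num(n - 1))

# 2 * 2 normalizes to the same term as 4, which is why "2 x 2 = 4" is congruent to "4 = 4".
print(rewrite(('*', num(2), num(2))) == num(4))  # True
```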
In this example, all the rewrite rules apply to terms. Deduction modulo permits also to consider rules rewriting atomic propositions to arbitrary ones. For instance, in the theory of integral domains, we have the rule
x × y = 0 → x = 0 ∨ y = 0
that rewrites an atomic proposition to a disjunction.
Notice that, in the proof above, we do not need the axioms of addition and multiplication. Indeed, these axioms are now redundant: since the terms 0 + y and y are congruent, the axiom ∀y 0+y = y is congruent to the axiom of equality ∀y y = y. Hence, it can be dropped. Thus, rewrite rules replace axioms.
This equivalence between rewrite rules and axioms is expressed by the equivalence lemma: for every congruence ≡, we can find a theory T such that Γ ⊢≡ A is provable in deduction modulo if and only if T, Γ ⊢ A is provable in ordinary first-order logic [START_REF] Dowek | Theorem proving modulo[END_REF]. Hence, deduction modulo is not a true extension of first-order logic, but rather an alternative formulation of first-order logic. Of course, the provable propositions are the same in both cases, but the proofs are very different.
Model of a theory modulo
A model of a congruence ≡ is a model such that if A ≡ B then for all assignments, A and B have the same denotation. A model of a theory modulo Γ, ≡ is a model of the theory Γ and of the congruence ≡. Unsurprisingly, the completeness theorem extends to classical deduction modulo [START_REF] Dowek | La part du calcul[END_REF] and a proposition is provable in the theory Γ, ≡ if and only if it is valid in all the models of Γ, ≡.
Normalization in deduction modulo
Replacing axioms by rewrite rules in a theory changes the structure of proofs and in particular some theories may have the normalization property when expressed with axioms and not when expressed with rewrite rules. For instance, from the normalization theorem for first-order logic, we get that any proposition that is provable with the axiom A ⇔ (B ∧ (A ⇒ ⊥)) has a normal proof. But if we transform this axiom into the rule A → B ∧ (A ⇒ ⊥) (Crabbé's rule [START_REF] Crabbé | Non-normalisation de ZF[END_REF]) the proposition B ⇒ ⊥ has a proof, but no normal proof.
We have proved a normalization theorem: proofs normalize in a theory modulo if this theory has a pre-model [START_REF] Dowek | Proof normalization modulo[END_REF]. A pre-model is a many-valued model whose truth values are reducibility candidates, i.e. sets of proof-terms. Hence we first define proof-terms, then reducibility candidates and at last pre-models.
Definition 1 (Proof-term).
Proof-terms are inductively defined as follows.
π ::= α | λα π | (π π′) | ⟨π, π′⟩ | fst(π) | snd(π) | i(π) | j(π) | (δ π1 απ2 βπ3) | (botelim π) | λx π | (π t) | ⟨t, π⟩ | (exelim π xαπ′)
Each proof-term construction corresponds to an intuitionistic natural deduction rule: terms of the form α express proofs built with the axiom rule, terms of the form λα π and (π π ′ ) express proofs built with the introduction and elimination rules of the implication, terms of the form ⟨π, π ′ ⟩ and f st(π), snd(π) express proofs built with the introduction and elimination rules of the conjunction, terms of the form i(π), j(π) and (δ π 1 απ 2 βπ 3 ) express proofs built with the introduction and elimination rules of the disjunction, terms of the form (botelim π) express proofs built with the elimination rule of the contradiction, terms of the form λx π and (π t) express proofs built with the introduction and elimination rules of the universal quantifier and terms of the form ⟨t, π⟩ and (exelim π xαπ ′ ) express proofs built with the introduction and elimination rules of the existential quantifier.
Definition 2 (Reduction). Reduction on proof-terms is defined by the following rules that eliminate cuts step by step.
(λα π1  π2) ▷ [π2/α]π1
fst(⟨π1, π2⟩) ▷ π1
snd(⟨π1, π2⟩) ▷ π2
(δ i(π1) απ2 βπ3) ▷ [π1/α]π2
(δ j(π1) απ2 βπ3) ▷ [π1/β]π3
(λx π  t) ▷ [t/x]π
(exelim ⟨t, π1⟩ αxπ2) ▷ [t/x, π1/α]π2
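As an illustration of the first rule, the following naive sketch performs the substitution [π2/α]π1 on a toy tree encoding of proof-terms; it handles only this rule, ignores variable capture, and the encoding is ours rather than the paper's.

```python
# Proof-terms as nested tuples: ('var', a), ('lam', a, body), ('app', f, arg).
def subst(term, name, repl):
    """Naively replace the proof variable `name` by `repl` (no capture handling)."""
    kind = term[0]
    if kind == 'var':
        return repl if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:          # the binder shadows `name`
            return term
        return ('lam', term[1], subst(term[2], name, repl))
    if kind == 'app':
        return ('app', subst(term[1], name, repl), subst(term[2], name, repl))
    return term

def step(term):
    """One reduction step at the root: (lam a body) applied to arg -> [arg/a]body."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, binder, body = term[1]
        return subst(body, binder, term[2])
    return term

redex = ('app', ('lam', 'a', ('app', ('var', 'a'), ('var', 'a'))), ('var', 'b'))
print(step(redex))  # ('app', ('var', 'b'), ('var', 'b'))
```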
Definition 3 (Reducibility candidates). A proof-term is said to be neutral if it is a proof variable or an elimination (i.e. of the form (π π′), fst(π), snd(π), (δ π1 απ2 βπ3), (botelim π), (π t), (exelim π xαπ′)), but not an introduction. A set R of proof-terms is a reducibility candidate if
- if π ∈ R, then π is strongly normalizable,
- if π ∈ R and π ▷ π′ then π′ ∈ R,
- if π is neutral and if for every π′ such that π ▷1 π′, π′ ∈ R, then π ∈ R.
We write C for the set of all reducibility candidates.

Definition 4 (Pre-model). A pre-model N for a language L is given by: a set N, for each function symbol f of arity n a function f̂ from N^n to N, for each predicate symbol P a function P̂ from N^n to C.

Definition 5 (Denotation in a pre-model). Let N be a pre-model, t be a term and φ an assignment mapping all the free variables of t to elements of N. We define the object ⟦t⟧_φ by induction over the structure of t.
- ⟦x⟧_φ = φ(x),
- ⟦f(t1, . . . , tn)⟧_φ = f̂(⟦t1⟧_φ, . . . , ⟦tn⟧_φ).
Let A be a proposition and φ an assignment mapping all the free variables of A to elements of N. We define the reducibility candidate ⟦A⟧_φ by induction over the structure of A.
- If A is an atomic proposition P(t1, . . . , tn) then ⟦A⟧_φ = P̂(⟦t1⟧_φ, . . . , ⟦tn⟧_φ).
- If A = B ⇒ C then ⟦A⟧_φ is the set of proofs π such that π is strongly normalizable and whenever it reduces to λα π1 then for every π′ in ⟦B⟧_φ, [π′/α]π1 is in ⟦C⟧_φ.
- If A = B ∧ C then ⟦A⟧_φ is the set of proofs π such that π is strongly normalizable and whenever it reduces to ⟨π1, π2⟩ then π1 is in ⟦B⟧_φ and π2 is in ⟦C⟧_φ.
- If A = B ∨ C then ⟦A⟧_φ is the set of proofs π such that π is strongly normalizable and whenever it reduces to i(π1) (resp. j(π2)) then π1 (resp. π2) is in ⟦B⟧_φ (resp. ⟦C⟧_φ).
- If A = ⊥ then ⟦A⟧_φ is the set of strongly normalizable proofs.
- If A = ∀x B then ⟦A⟧_φ is the set of proofs π such that π is strongly normalizable and whenever it reduces to λx π1 then for every term t and every element a of N, [t/x]π1 is in ⟦B⟧_{φ+a/x}.
- If A = ∃x B then ⟦A⟧_φ is the set of proofs π such that π is strongly normalizable and whenever it reduces to ⟨t, π1⟩ then there exists an element a in N such that π1 is in ⟦B⟧_{φ+a/x}.

Definition 6. A pre-model is said to be a pre-model of a congruence ≡ if when A ≡ B then for every assignment φ, ⟦A⟧_φ = ⟦B⟧_φ.
Theorem 1 (Normalization). [START_REF] Dowek | Proof normalization modulo[END_REF] If a congruence ≡ has a pre-model all proofs modulo ≡ strongly normalize.
2 The Stratified Foundations
2.1 The Stratified Foundations as a first-order theory

Definition 7. (Stratifiable proposition)
A proposition A in the language ∈ is said to be stratifiable if there exists a function S mapping every variable (bound or free) of A to a natural number in such a way that every atomic proposition of A, x ∈ y is such that S(y) = S(x)+1.
For instance, the proposition
∀v (v ∈ x ⇔ v ∈ y) ⇒ ∀w (x ∈ w ⇒ y ∈ w) is stratifiable (take, for instance, S(v) = 4, S(x) = S(y) = 5, S(w) = 6) but not the proposition ∀v (v ∈ x ⇔ v ∈ y) ⇒ x ∈ y
Definition 8. (The stratified comprehension scheme) For every stratifiable proposition A whose free variables are among x 1 , . . . , x n , x n+1 we take the axiom ∀x 1 . . . ∀x n ∃z ∀x n+1 (x n+1 ∈ z ⇔ A) Definition 9. (The skolemized stratified comprehension scheme) When we skolemize this scheme, we introduce for each stratifiable proposition A in the language ∈ and sequence of variables x 1 , . . . , x n , x n+1 such that the free variables of A are among x 1 , . . . , x n , x n+1 , a function symbol f x1,...,xn,xn+1,A and the axiom
∀x 1 . . . ∀x n ∀x n+1 (x n+1 ∈ f x1,...,xn,xn+1,A (x 1 , . . . , x n ) ⇔ A)
The Stratified Foundations as a theory modulo
Now we want to replace the axiom scheme above by a rewrite rule, defining a congruence on propositions, so that the Stratified Foundations are defined as an axiom free theory modulo.
Definition 10. (The rewrite system R)
t n+1 ∈ f x1,...,xn,xn+1,A (t 1 , . . . , t n ) → [t 1 /x 1 , . . . , t n /x n , t n+1 /x n+1 ]A

Proposition 1.
The rewrite system R is confluent and terminating. Proof. The system R is an orthogonal combinatory reduction system, hence it is confluent [START_REF] Klop | Combinatory reduction systems: introduction and survey[END_REF].
For termination, if A is an atomic proposition we write ∥A∥ for the number of function symbols in A and if A is a proposition containing the atomic propositions A 1 , . . . , A p we write A • for the multiset {∥A 1 ∥, . . . , ∥A p ∥}. We show that if a proposition A reduces in one step to a proposition B then B • < A • for the multiset ordering.
If the proposition A reduces in one step to B, there is an atomic proposition of A, say A 1 , that has the form t n+1 ∈ f x1,...,xn,xn+1,C (t 1 , . . . , t n ) and reduces to
B 1 = [t 1 /x 1 , . . . , t n /x n , t n+1 /x n+1 ]C. Every atomic proposition b of B 1 has the form [t 1 /x 1 , . . . , t n /x n , t n+1 /x n+1 ]c
where c is an atomic proposition of C. The proposition c has the form x i ∈ x j for distinct i and j (since C is stratifiable) x i ∈ y, y ∈ x i or y ∈ z. Hence b has the form t i ∈ t j for distinct i and j, t i ∈ y, y ∈ t i or y ∈ z and ∥b∥ < ∥A 1 ∥. Therefore B • < A • . Proposition 2. A proposition A is provable from the skolemized comprehension scheme if and only if it is provable modulo the rewrite system R.
Consistency
We want now to construct a model for the Stratified Foundations.
If M is a model of set theory we write M for the set of elements of the model, ∈ M for the denotation of the symbol ∈ in this model, ℘ M for the powerset in this model, etc. We write also A M φ for the denotation of a proposition A for the assignment φ.
The proof of the consistency of the Stratified Foundations rests on the existence of a model of Zermelo's set theory, such that there is a bijection σ from M to M and a family v i of elements of M , i ∈ Z such that
a ∈ M b if and only if σa ∈ M σb
σv i = v i+1
v i ⊆ M v i+1
℘ M (v i ) ⊆ M v i+1
The existence of such a model is proved in [START_REF] Jensen | On the consistency of a slight (?) modification of Quine's new foundations[END_REF].
Using the fact that M is a model of the axiom of extensionality, we prove that a ⊆ M b if and only if σa ⊆ M σb, σ{a, b} M = {σa, σb} M , σ⟨a, b⟩ M = ⟨σa, σb⟩ M , σ℘(a) = ℘(σa), etc.
For the normalization proof, we will further need that M is an ω-model. We define 0 = ∅ M , n + 1 = n∪ M {n} M . An ω-model is a model such that a ∈ M N M if and only if there exists n in N such that a = n. The existence of such a model is proved in [START_REF] Jensen | On the consistency of a slight (?) modification of Quine's new foundations[END_REF] (see also [START_REF] Crabbé | Stratification and cut-elimination[END_REF]).
Using the fact that M is a model of the axiom of extensionality, we prove that σ∅ M = ∅ M and then, by induction on n that σn = n.
Notice that since ℘ M (v i ) ⊆ M v i+1 , ∅ M ∈ M v i and for all n, n ∈ M v i . Hence as the model is an ω-model N M ⊆ M v i .
In an ω-model, we can identify the set N of natural numbers with the set of objects a in M such that a ∈ M N M . To each proof-term we can associate a natural number n (its Gödel number) and then the element n of M. Proof-terms, their Gödel number and the encoding of this number in M will be identified in the following.
We are now ready to construct a model U for the Stratified Foundations. The base set is the set U of elements a of M such that a ∈ M v 0 . The relation ∈ U is defined by a ∈ U b if and only if a ∈ M σb. This permits to define the denotation of propositions built without Skolem symbols. To be able to define the denotation of Skolem symbols, we prove the following proposition. Proposition 3. For every stratifiable proposition A in the language ∈ whose free variables are among x 1 , . . . , x n , x n+1 and for all a 1 , . . . , a n in U , there exists an element b in U such that for every a n+1 in U , a n+1 ∈ M σb if and only if A U a1/x1,...,an/xn,an+1/xn+1 = 1
Proof. Let |A| be the proposition defined as follows.
- |A| = A if A is atomic,
- |A ⇒ B| = |A| ⇒ |B|, |A ∧ B| = |A| ∧ |B|, |A ∨ B| = |A| ∨ |B|, |⊥| = ⊥,
- |∀x A| = ∀x ((x ∈ E S(x)) ⇒ |A|),
- |∃x A| = ∃x ((x ∈ E S(x)) ∧ |A|).
Notice that the free variables of |A| are among E 0 , . . . , E m , x 1 , . . . , x n , x n+1 . Let φ = a 1 /x 1 , . . . , a n /x n , a n+1 /x n+1 ψ = v 0 /E 0 , . . . , v m /E m , σ k1 a 1 /x 1 , . . . , σ kn a n /x n , σ kn+1 a n+1 /x n+1
where k 1 = S(x 1 ), . . . , k n+1 = S(x n+1 ). We check, by induction over the structure of A, that if A is a stratifiable proposition in the language ∈, then
|A| M ψ = A U φ -If A is an atomic proposition x i ∈ x j , then k j = k i + 1, |A| M ψ = 1 if and only if σ ki a i ∈ M σ kj a j if and only if a i ∈ M σa j , if and only if A U φ = 1. -if A = B ⇒ C then |A| M ψ = 1 if and only if |B| M ψ = 0 or |C| M ψ = 1 if and only if B U φ = 0 or C U φ = 1 if and only if A U φ = 1. -if A = B ∧ C then |A| M ψ = 1 if and only if |B| M ψ = 1 and |C| M ψ = 1 if and only if B U φ = 1 and C U φ = 1 if and only if A U φ = 1. -if A = B ∨ C then |A| M ψ = 1 if and only if |B| M ψ = 1 or |C| M ψ = 1 if and only if B U φ = 1 and C U φ = 1 if and only if A U φ = 1. -|⊥| M ψ = 0 = ⊥ U φ . -if A = ∀x B then |A| M ψ = 1 if and only if for every c in M such that c ∈ M v k , |B| M ψ+c/x =
U φ = 1.
Then, the model M is a model of the comprehension scheme. Hence, it is a model of the proposition
∀E 0 . . . ∀E m ∀x 1 . . . ∀x n ∀y ∃z ∀x n+1 (x n+1 ∈ z ⇔ (x n+1 ∈ y ∧ |A|))
Thus, for all a 1 , ..., a n , there exists an object b 0 such that for all a n+1
(x n+1 ∈ z ⇔ (x n+1 ∈ y ∧ |A|)) M ψ+v k n+1 /y+b0/z = 1
We have
σ kn+1 a n+1 ∈ M b 0 if and only if σ kn+1 a n+1 ∈ M v kn+1 and |A| M ψ = 1 thus a n+1 ∈ M σ -kn+1 b 0 if and only if a n+1 is in U and A U φ = 1. We take b = σ -(kn+1+1) b 0 . For all a n+1 in U , we have a n+1 ∈ M σb if and only if A U φ = 1. Notice finally that b 0 ∈ M ℘ M (v kn+1 ), thus b 0 ∈ M v kn+1+1 , b ∈ M v
Normalization
We want now to construct a pre-model for the Stratified Foundations.
Let u i = v 3i and τ = σ 3 . The function τ is an automorphism of M, τ u i = u i+1 , u i ⊆ M u i+1 and ℘ M (℘ M (℘ M (u i ))) ⊆ M u i+1 .
As M is an ω-model of set theory, for each recursively enumerable relation R on natural numbers, there is an object r in M such that R(a 1 , . . . , a n ) if and only if ⟨a 1 , . . . , a n ⟩ M ∈ M r. In particular there is an object P roof such that π ∈ M P roof if and only if π is (the encoding in M of the Gödel number of) a proof, an object T erm such that t ∈ M T erm if and only if t is (the encoding of the Gödel number of) a term, an object Subst such that ⟨π, π 1 , α, π 2 ⟩ M ∈ M Subst if and only if π, π 1 and π 2 are (encodings of Gödel numbers of) proofs, α is (the encoding of the Gödel number of) a proof variable and π = [π 1 /α]π 2 , an object Subst ′ such that ⟨π, t, x, π 1 ⟩ M ∈ M Subst ′ if and only if π and π 1 are (encodings of the Gödel numbers of) proofs, x is (the encoding of the Gödel number of) a term variable and t (the encoding of the Gödel number of) a term and π = [t/x]π 1 , an object Red such that ⟨π, π 1 ⟩ M ∈ M Red if and only if π and π 1 are (encodings of Gödel numbers of) proofs and π ▷ * π 1 , an object Sn such that π ∈ M Sn if and only if π is (the encoding of the Gödel number of) a strongly normalizable proof, an object ImpI such that ⟨π, α, π 1 ⟩ M ∈ M ImpI if and only if π and π 1 are (encodings of Gödel numbers of) proofs, α is (the encoding of the Gödel number of) a proof variable and π = λα π 1 , an object AndI such that ⟨π, π 1 , π 2 ⟩ M ∈ M AndI if and only if π, π 1 and π 2 are (encodings of Gödel numbers of) proofs and
π = ⟨π 1 , π 2 ⟩, -an object OrI1 (resp. OrI2) such that ⟨π, π 1 ⟩ M ∈ M OrI1 (resp. ⟨π, π 2 ⟩ M ∈ M
OrI2) if and only if π and π 1 (resp. π and π 2 ) are (encodings of Gödel numbers of) proofs and π = i(π 1 ) (resp. π = j(π 2 )), an object F orallI such that ⟨π, α, π 1 ⟩ M ∈ M F orallI if and only π and π 1 are (encodings of Gödel numbers of) proofs, α is (the encoding of the Gödel number of) a proof variable, and π = λαπ 1 , an object ExistsI such that ⟨π, t, π 1 ⟩ M ∈ M ExistsI if and only if π and π 1 are (encodings of Gödel numbers of) proofs, t is (the encoding of the Gödel number of) a term and π = ⟨t, π 1 ⟩.
Notice also that, since M is a model of the comprehension scheme, there is an object Cr such that α ∈ M Cr if and only if α is a reducibility candidate (i.e. the set of objects β such that β ∈ M α is a reducibility candidate).
Definition 12 (Admissible). An element α of M is said to be admissible at level i if α is a set of pairs ⟨π, β⟩ M where π is a proof and β an element of u i and for each β in u i the set of π such that ⟨π, β⟩ M ∈ M α is a reducibility candidate.
Notice that if R is any reducibility candidate then the set R × M u i is admissible at level i. Hence there are admissible elements at all levels. Proposition 5. There is an element A i in M such that α ∈ M A i if and only if α is admissible at level i.
Proof. An element α of M admissible at level i if and only if
α ∈ M ℘ M (P roof × M u i ) ∧∀β (β ∈ M u i ⇒ ∃C (C ∈ M Cr ∧ (⟨π, β⟩ M ∈ M α ⇔ π ∈ M C)))
Hence, as M is a model of the comprehension scheme, there is an element A i in M such that α ∈ M A i if and only if α is admissible at level i.
Notice that α ∈ τ A i if and only if α ∈ A i+1 . Hence as M is a model of the extensionality axiom, τ A i = A i+1 .
Notice, at last, that
A i ⊆ M ℘ M (P roof × M u i ) ⊆ M ℘ M (u i × M u i ) ⊆ M ℘ M (℘ M (℘ M (u i ))) ⊆ M u i+1 . Proposition 6. If β ∈ M A i and α ∈ M A i+1 then the set of π such that ⟨π, β⟩ ∈ M α is a reducibility candidate. Proof. As α ∈ M A i+1 and β ∈ M A i ⊆ M u i+1 , the set of π such that ⟨π, β⟩ ∈ M α is a reducibility candidate.
We are now ready to construct a pre-model N of the Stratified Foundations. The base set of this pre-model is the set N of elements of M that are admissible at level 0. We take ∈ N (α, β) = {π | ⟨π, α⟩ M ∈ M τ β}. This permits to define the denotation of propositions built without Skolem symbols. To define the denotation of Skolem symbols, we prove the following proposition. Proposition 7. For every stratifiable proposition A in the language ∈ whose free variables are among x 1 , . . . , x n , x n+1 and for all a 1 , . . . , a n in N , there exists an element b in N such that for every a n+1 in N , ⟨π, a n+1 ⟩ M ∈ M τ b if and only if π is in A N a1/x1,...,an+1/xn+1 . Proof. Let |A| be the proposition (read p realizes A) defined as follows.
- Notice that the free variables of |A| are among term, subst, subst ′ , red, sn, impI, andI, orI1, orI2, f orallI, existsI, p, E 0 , . . . , E m , x 1 , . . . , x n , x n+1 . Let φ = a 1 /x 1 , . . . , a n /x n , a n+1 /x n+1 ψ = T erm/term, Subst/subst, Subst ′ /subst ′ , Red/red, Sn/sn, ImpI/impI, AndI/andI, OrI1/orI1, OrI2/orI2, F orallI/f orallI, ExistsI/existsI, A 0 /E 0 , . . . , A m /E m , τ k1 a 1 /x 1 , . . . , τ kn a n /x n , τ kn+1 a n+1 /x n+1
|x i ∈ x j | = ⟨p, x i ⟩ ∈ x j , -|A ⇒ B| = p ∈
We check, by induction over the structure of A, that if A is a stratifiable proposition in the language ∈, then the set of proofs π such that |A| M ψ+π/p = 1 is A N φ .
-If A is an atomic proposition x i ∈ x j , then k j = k i +1, we have |A| M ψ+π/p = 1 if and only if ⟨π, τ ki a i ⟩ M ∈ M τ kj a j if and only if ⟨τ ki π, τ ki a i ⟩ M ∈ M τ kj a j if and only if τ ki ⟨π, a i ⟩ M ∈ M τ kj a j if and only if ⟨π,
a i ⟩ M ∈ M τ a j if and only if π is in A N φ . -if A = B ⇒ C then we have |A| M
ψ+π/p = 1 if and only if π is strongly normalizable and whenever π reduces to λα π 1 then for all π ′ such that |B| M ψ+π ′ /p = 1 we have |C| M ψ+[π ′ /α]π1/p = 1 if and only if π is strongly normalizable and whenever π reduces to λx π 1 then for all
π ′ in B N φ , [π ′ /α]π 1 is in C N φ if and only if π is in A N φ . -If A = B ∧ C then we have A M ψ+π/p = 1 if
, π 2 ⟩ then π 1 is in B N φ and π 2 is in C N φ if and only if π is in A N φ . -If A = B ∨ C then we have A M
ψ+π/p = 1 if and only if π is strongly normalizable and whenever π reduces to i(π 1 ) (resp. j(π 2 )) then B M ψ+π1/p = 1 (resp. C M ψ+π2/p = 1) if and only if π is strongly normalizable and whenever π reduces to i(π 1 ) (resp. j(π
2 )) then π 1 is in B N φ (resp. C N φ ) if and only if π is in A N φ . -If A = ⊥ then A M ψ+π/p = 1 if and only if π is strongly normalizable if and only if π is in A N φ . -if A = ∀x B, then |A| M ψ+π/p = 1 if
0 . . . ∀E m ∀x 1 . . . ∀x n ∃z ∀p ∀x n+1 ⟨p, x n+1 ⟩ ∈ z ⇔ ⟨p, x n+1 ⟩ ∈ proof ×U ∧|A|
Thus, for all a 1 , ..., a n , there exists an object b 0 such that for all a n+1 ⟨p,
x n+1 ⟩ ∈ z ⇔ ⟨p, x n+1 ⟩ ∈ N M × U ∧ |A| M ψ+P roof /proof,b0/z,u k n+1 +1 /U,π/p = 1 We have ⟨π, τ kn+1 a n+1 ⟩ M ∈ M b 0 if and only if π is a proof, τ kn+1 a n+1 ∈ M u kn+1+1 and |A| M ψ+π/p = 1. Thus ⟨π, a n+1 ⟩ M ∈ M τ -kn+1 b 0 if and only if a n+1 ∈ M u 1 and π is in A N φ .
We take b = τ -(kn+1+1) b 0 and for all a n+1 in N we have ⟨π, a n+1 ⟩ M ∈ M τ b if and only if π is in A N φ . Finally, notice that b 0 is a set of pairs ⟨π, β⟩ M where π is a proof and β an element of u kn+1+1 and for each β in u kn+1+1 the set of π such that ⟨π, β⟩ Corollary 2. All proofs strongly normalize in the Stratified Foundations.
M ∈ M b 0 is |A| M ψ+β/x k n+1 ,π/p =
Remark 1. As already noticed in [START_REF] Crabbé | Stratification and cut-elimination[END_REF], instead of constructing the a pre-model of the Stratified Foundations within an automorphic ω-model of Zermelo's set theory, we could construct it within an ω-model of the Stratified Foundations.
In such a model U, we can define recursively enumerable relations, because the Stratified Foundations contains enough arithmetic and comprehension. Then we can take the sequence u i to be the constant sequence equal to w where w is a universal set, i.e. a set such that a ∈ U w for all element a of the model. Such an object obviously verifies ℘ U (℘ U (℘ U (w))) ⊆ U w. In other words, we say that an element of U is admissible if it is a set of pairs ⟨π, β⟩ U where π is a proof and for each β in U , the set of π such that ⟨π, β⟩ ∈ U α is a reducibility candidate. Proposition 6 becomes trivial, but we need to use the existence of a universal set to prove that there are admissible elements in the model and that there is a set A of admissible elements in the model. Hence, the difficult part in this pre-model construction (the part that would not go through for Zermelo's set theory for instance) is the construction of the base set.
Conclusion
In this paper, we have have shown that the Stratified Foundations can be expressed in deduction modulo and that the normalization proof for this theory be decomposed into two lemmas: one expressing that it has a pre-model and the other that proof normalize in this theory if it has a pre-model. This second lemma is not specific to the Stratified Foundations, but holds for all theories modulo. The idea of the first lemma is to construct a pre-model within an ω-model of the theory with the help of formal realizability. This idea does not seems to be specific to the Stratified Foundations either, but, its generality remains to be investigated. Thus, this example contributes to explore of the border between the theories modulo that have the normalization property and those that do not.
sn ∧ ∀q ∀w ∀r (⟨p, q⟩ ∈ red ∧ ⟨q, w, r⟩ ∈ impI) ⇒ ∀s [s/p]|A| ⇒ ∀t ⟨t, s, w, r⟩ ∈ subst ⇒ [t/p]|B|), -|A ∧ B| = p ∈ sn ∧ ∀q ∀r ∀s ((⟨p, q⟩ ∈ red ∧ ⟨q, r, s⟩ ∈ andI) ⇒ [r/p]|A| ∧ [s/p]|B|), -|A ∨ B| = p ∈ sn ∧ ∀q ∀r ((⟨p, q⟩ ∈ red ∧ ⟨q, r⟩ ∈ orI1) ⇒ [r/p]|A|) ∧ ∀q ∀r ((⟨p, q⟩ ∈ red ∧ ⟨q, r⟩ ∈ orI2) ⇒ [r/p]|B|), -|⊥| = p ∈ sn, -|∀x A| = p ∈ sn ∧ ∀q ∀w ∀r (⟨p, q⟩ ∈ red ∧ (⟨q, w, r⟩ ∈ f orallI) ⇒ ∀x ∀y (x ∈ E S(x) ∧ y ∈ term) ⇒ ∀s (⟨s, w, y, r⟩ ∈ subst ′ ⇒ [r/p, x/x]|A|)), -|∃x A| = p ∈ sn ∧ ∀q ∀t ∀r (⟨p, q⟩ ∈ red ∧ (⟨q, t, r⟩ ∈ existsI) ⇒ ∃x x ∈ E S(x) ⇒ [r/p, x/x]|A|)).
1, if and only if for every e in U , |B| M ψ+σ k e/x = 1 if and only if for every e in U , B U φ+e/x = 1 if and only if A U φ = 1. if A = ∃x B then |A| M ψ = 1 if and only if there exists c in M such that c ∈ M v k and |B| M ψ+c/x = 1, if and only if there exists e in U such that |B| M ψ+σ k e/x = 1 if and only if there exists e in U such that B U φ+e/x = 1 if and only if A
0 and hence b is in U . Definition 11 (Jensen's model). The model U = ⟨U, ∈ U , fx1,...,xn,y,A ⟩ is defined as follows. The base set is U . The relation ∈ U is defined above. The function fx1,...,xn,xn+1,A maps (a 1 , . . . , a n ) to an object b such that for all a n+1 in U , a n+1 ∈ M σb if and only if A U a1/x1,...,an/xn,an+1/xn+1 = 1. Proposition 4. The model U is a model of the Stratified Foundations.Proof. If A is a stratifiable proposition in the language ∈, then t n+1 ∈ f x1,...,xn,xn+1,A (t 1 , . . . , t n ) U /x 1 , . . . , t n /x n , t n+1 /x n+1 ]A U φ = 1 Hence, if A ≡ B then A and B have the same denotation.
φ = 1
if and only if
t n+1 U φ ∈ M σ fx1,...,xn,xn+1,A ( t 1 U φ , . . . , t n U φ )
if and only if
[t 1 Corollary 1. The Stratified Foundations are consistent.
and only if π is strongly normalizable and whenever π reduces to ⟨π 1 , π 2 ⟩ then B M ψ+π1/p = 1 and C M ψ+π2/p = 1 if and only if π is strongly normalizable and whenever π reduces to ⟨π 1
and only if π is strongly normalizable and whenever π reduces to λx π 1 , for all term t and for all c in M such that c∈ M A k , |B| M ψ+c/x,[t/x]π1/p = 1 if and only if π is strongly normalizable and whenever π reduces to λx π 1 , for all t and for all e in N , |B| M ψ+τ k e/x+[t/x]π1/p = 1 if and only if π is strongly normalizable and whenever π reduces to λx π 1 , for all t and for all e in N , [t/x]π 1 is in B N φ+e/x if and only if π is in A N φ . if A = ∃x B, then |A| M ψ+π/p = 1 if and only if π is strongly normalizable and whenever π reduces to ⟨t, π 1 ⟩, there exists a c in M such that c ∈ M A k and |B| M ψ+c/x,[t/x]π1/p = 1 if and only if π is strongly normalizable and whenever π reduces to ⟨t, π 1 ⟩, there exists a e in N such that |B| M ψ+τ k e/x+[t/x]π1/p = 1 if and only if π is strongly normalizable and whenever π reduces to ⟨t, π 1 ⟩, there exists a e in N such that [t/x]π 1 is in B N φ+e/x if and only if π is in A N φ . Then, the model M is a model of the comprehension scheme. Hence, it is a model of the proposition ∀E
1, hence it is a reducibility candidate. Hence b 0 ∈ M A kn+1+1 and b is in N . Definition 13 (Crabbé's pre-model). The pre-model N = ⟨N, ∈ N , fx1,...,xn,y,A ⟩ is defined as follows. The base set is N . The function ∈ N is defined above. The function fx1,...,xn,xn+1,A maps (a 1 , . . . , a n ) to the object b such that for all a n+1 in N , ⟨π, a n+1 ⟩ M ∈ M τ b if and only if π is in A N a1/x1,...,an/xn,an+1/xn+1 . Proposition 8. The pre-model N is a pre-model of the Stratified Foundations. Proof. If A is a stratifiable proposition in the language ∈, then π is in t n+1 ∈ f x1,...,xn,xn+1,A (t 1 , . . . , t n ) N
φ if and only if ⟨π, t n+1 N φ ⟩ M ∈ M τ fx1,...,xn,xn+1,A ( t 1 N φ , . . . , t n N φ ) if and only if π is in [t 1 /x 1 , . . . , t n /x n , t n+1 /x n+1 ]A N φ
Hence, if A ≡ B then A and B have the same denotation.
Fig. 1. Natural deduction modulo. The rules of the figure are:
axiom: Γ ⊢≡ B, if A ∈ Γ and A ≡ B
⇒-intro: from Γ, A ⊢≡ B infer Γ ⊢≡ C, if C ≡ (A ⇒ B)
⇒-elim: from Γ ⊢≡ C and Γ ⊢≡ A infer Γ ⊢≡ B, if C ≡ (A ⇒ B)
∧-intro: from Γ ⊢≡ A and Γ ⊢≡ B infer Γ ⊢≡ C, if C ≡ (A ∧ B)
∧-elim: from Γ ⊢≡ C infer Γ ⊢≡ A, if C ≡ (A ∧ B)
∧-elim: from Γ ⊢≡ C infer Γ ⊢≡ B, if C ≡ (A ∧ B)
∨-intro: from Γ ⊢≡ A infer Γ ⊢≡ C, if C ≡ (A ∨ B)
∨-intro: from Γ ⊢≡ B infer Γ ⊢≡ C, if C ≡ (A ∨ B)
∨-elim: from Γ ⊢≡ D, Γ, A ⊢≡ C and Γ, B ⊢≡ C infer Γ ⊢≡ C, if D ≡ (A ∨ B)
⊥-elim: from Γ ⊢≡ B infer Γ ⊢≡ A, if B ≡ ⊥
∀-intro (x, A): from Γ ⊢≡ A infer Γ ⊢≡ B, if B ≡ (∀x A) and x ∉ FV(Γ)
∀-elim (x, A, t): from Γ ⊢≡ B infer Γ ⊢≡ C, if B ≡ (∀x A) and C ≡ [t/x]A
∃-intro (x, A, t): from Γ ⊢≡ C infer Γ ⊢≡ B, if B ≡ (∃x A) and C ≡ [t/x]A
∃-elim (x, A): from Γ ⊢≡ C and Γ, A ⊢≡ B infer Γ ⊢≡ B, if C ≡ (∃x A) and x ∉ FV(Γ, B)
Excluded middle: Γ ⊢≡ A, if A ≡ B ∨ (B ⇒ ⊥)
|
00410869 | en | [
"info.info-ts",
"spi.signal"
] | 2024/03/04 16:41:24 | 2009 | https://hal.science/hal-00410869/file/OuartiPeyre-NSWP-ICIP09.pdf | Nizar Ouarti
email: [email protected]
Gabriel Peyré
email: [email protected]
BEST BASIS DENOISING WITH NON-STATIONARY WAVELET PACKETS
Keywords: wavelet, wavelet packet, best basis, denoising
This article introduces a best basis search algorithm in a nonstationary (NS) wavelet packets dictionary. It computes an optimized labeled quad-tree that indexes the filters used for the NS wavelet packets decomposition. This algorithm extends the classical best basis search by exploring in a hierarchical manner the set of NS wavelet packets coefficients. The scale-by-scale variation of the filters adapts the transform to the frequency content of complex textures. The resulting denoising method is made translation invariant by cycle spinning. Numerical results show that NS wavelet packets give better results than wavelet packets and waveatoms for the denoising of natural images, in particular in textured areas. Moreover, the cycle spinning method increases significantly the denoising abilities of our algorithm 1 .
INTRODUCTION
Fixed basis denoising. Wavelet bases capture efficiently transient parts of signals and images. In particular, orthogonal wavelet bases are optimal for the approximation and denoising of piecewise smooth signals and images with bounded variations [START_REF] Mallat | A Wavelet Tour of Signal Processing[END_REF].
Natural images contain complex structures such as regular edges and oscillating textures. In the image setting, wavelets are sub-optimal to represent both textures and edges. Elongated oriented atoms such as curvelets [START_REF] Candès | New tight frames of curvelets and optimal representations of objects with piecewise C 2 singularities[END_REF] should be used to capture efficiently cartoon edges. Locally oscillating atoms such as local DCT [START_REF] Mallat | A Wavelet Tour of Signal Processing[END_REF], brushlets [START_REF] Meyer | Brushlets: A tool for directional image analysis and image compression[END_REF] or waveatoms [START_REF] Demanet | Wave atoms and sparsity of oscillatory patterns[END_REF] should be preferred to process oscillating oriented textures.

Fig. 4. Curves showing the PSNR (in dB) as a function of σ/||f||∞ for Barbara (top) and Fingerprint (bottom). Solid curves: NS wavelet packets, dashed: WaveAtoms, dotted: wavelet packets.
Best basis denoising. Each of these fixed bases are well suited to process a specific kind of structures in images. A best basis algorithm computes an adapted orthogonal basis within a structured dictionary. This best basis selection is applied to a noisy image to perform an adaptive denoising. This adaptive processing can alleviate some of the difficulties that faces a fixed representation. For instance, a best wavelet packet denoising [START_REF] Coifman | Entropy-based algorithms for best basis selection[END_REF] is useful to denoise oscillating textures such as fingerprints by performing an adapted segmentation of the frequency domain.
Contribution. This paper extends the best wavelet packet adaptivity for image denoising by using non-stationary (NS) wavelet packets introduced by Cohen and Séré [START_REF] Cohen | Time-frequency localization by non-stationary wavelet packets[END_REF]. We show that the set of all bases is parameterized by a set of labeled quadtrees. We propose a new best basis selection method that extends classical dynamical programming method to the non-stationary setting. This algorithm shares similarities with the search in multi-tree dictionaries [START_REF] Huang | Fast search for best representations in multitree dictionaries[END_REF]. A translation invariant extension of the best NS wavelet packets thresholding further enhances the denoising quality by reducing denoising artifacts. Numerical results show that the resulting adaptive NS denoising method improves over wavelet packets and waveatoms denoising over textured areas.
NON-STATIONARY WAVELET PACKETS
Quadtree parameterization. A 2D NS wavelet packet basis B(λ) is parameterized by a quad-tree λ. The nodes (j, i) of λ are indexed by a scale 0 j J = log 2 (n)/2 representing the depth in the tree, and a position 0 i < 4 j . Each node (j, i) is assigned a label λ j,i ∈ {0, . . . , S -1} ∪ ∅. A leaf node is such that λ j,i = ∅, and it has no child node. We denote by L(λ) the set of leaves. An interior node (j, i) is such that λ j,i ∈ {0, . . . , S -1} and it has 4 children nodes indexed as (j + 1, 4i), . . . , (j + 1, 4i + 3). We denote by I(λ) the set of interior nodes. Figure 1 shows an example of such a quadtree λ.
Wavelet filters. Each label λ j,i indexes a 1D low pass filter chosen in a set {h ℓ } S-1 ℓ=0 . For each index ℓ, the corresponding high pass quadrature filter is g ℓ , and 2D orthogonal tensorial filters {h η ℓ } 3 η=0 are computed as
h 0 ℓ = h ℓ ⊗ h ℓ , h 1 ℓ = h ℓ ⊗ g ℓ , h 2 ℓ = g ℓ ⊗ h ℓ , h 3 ℓ = g ℓ ⊗ g ℓ .
NS wavelet packet transform. The forward NS wavelet packet transform computes the coefficients W λ (f) = {f j,i } (j,i)∈L(λ) of a discrete image f ∈ R n of n pixels, which are stored on the leaves L(λ) of the tree λ. The computation is performed with an iterative algorithm that follows the edges of the tree λ. It starts at scale j = 0, where f 0,0 = f. For each scale 0 < j < J = log 2 (n)/2, for each interior node (j, i) ∈ I(λ), the signal on the children nodes is defined as

∀ 0 ≤ η < 4,   f j+1,4i+η = (f j,i * h η ℓ ) ↓ 2     (1)
where ℓ = λ j,i is the index of the filter, and ↓ 2 is the operator that subsamples an image by a factor 2 along each direction. The backward NS wavelet packet transform retrieves an image f = W * λ (F) ∈ R n from a set of coefficients F = {f j,i } (j,i)∈L(λ) . For each scale J > j > 0, the signal on the interior node (j, i) ∈ I(λ) is recovered as

f j,i = ∑_{η=0}^{3} (f j+1,4i+η ↑ 2) * h̃ η ℓ     (2)

where h̃[n] = h[-n] and ↑ 2 is the upsampling operator that inserts a zero at each odd location along each direction. The image is recovered on the root node as f = f 0,0 . Both the forward and the backward transform are computed in O(n log 2 (n)) operations.
BEST NS WAVELET PACKET BASIS
Processing an image f ∈ R^n is performed by modifying the coefficients of f in an optimized NS wavelet packet basis B(λ⋆). The quadtree λ⋆ is selected by minimizing a Lagrangian
L(f, B(λ)) = Σ_{(j,i)∈L(λ)} ( ρ + Σ_{m=0}^{n/4^j - 1} Φ(f_{j,i}[m]) )   (3)
where {f_{j,i}}_{(j,i)∈L(λ)} = W_λ(f) and where Φ : R → R is a cost function that depends on the application (denoising, compression, . . . ). The parameter ρ ≥ 0 is a penalization that reduces the complexity of the best basis tree.
The tree λ ⋆ adapted to an image f is computed by exploiting the hierarchical structure of the wavelet packet coefficients.
Step 1 - Computing all the coefficients. The set of all possible NS wavelet packet coefficients is obtained by a top to bottom filtering process. The coefficients on the root node are F_{0,0} = f. For each scale 0 ≤ j < J and each index 0 ≤ k < (4S)^j, the coefficients on the 4S children nodes of F_{j,k} are obtained by computing all possible filterings
∀ 0 ≤ ℓ < S, ∀ 0 ≤ η < 4,   F_{j+1,4Sk+4ℓ+η} = (F_{j,k} * h_ℓ^η) ↓ 2.
Step 2 - Best filters selection. A bottom to top recursive selection process selects the best filters for each scale j and index 0 ≤ k < (4S)^j. The Lagrangian is evaluated for each index k at the finest scale j = J:
∀ 0 ≤ k < (4S)^J,   L_{J,k} = Σ_{m=0}^{n/4^J - 1} Φ(F_{J,k}[m]).
For each J > j ≥ 0, the cumulated Lagrangian is computed for each application of a filter indexed by 0 ≤ ℓ < S:
L^ℓ_{j,k} = ρ + Σ_{η=0}^{3} L_{j+1,4Sk+4ℓ+η}.
The Lagrangian at scale j is then defined, for each 0 ≤ k < (4S)^j, by the best choice between filtering and stopping:
L_{j,k} = min( min_{0 ≤ ℓ < S} L^ℓ_{j,k} ,  Σ_{m=0}^{n/4^j - 1} Φ(F_{j,k}[m]) ).
If L_{j,k} = L^ℓ_{j,k} for some 0 ≤ ℓ < S, the best filter choice is set to ℓ_{j,k} = ℓ; otherwise, the filtering is stopped and ℓ_{j,k} = ∅.
Step 3 - Best tree construction. Once the best filter choices ℓ_{j,k} are computed for all j and all 0 ≤ k < (4S)^j, the best tree λ⋆ with labels λ⋆_{j,i} is computed for all j and all 0 ≤ i < 4^j. The tree is initialized with a single root node L(λ⋆) = {(0, 0)}, together with a filter choice λ⋆_{0,0} = ℓ_{0,0} and with a link π_{0,0} = 0.
For each 0 ≤ j < J and each node (j, i) such that ℓ = λ⋆_{j,i} ≠ ∅, we retrieve the link k = π_{j,i} and add the children nodes to the leaves of the optimal tree λ⋆,
L(λ⋆) ← L(λ⋆) ∪ {(j+1, 4i+η)}_{η=0}^{3}.
The label of each new node is set to
∀ 0 ≤ η < 4,   λ⋆_{j+1,4i+η} = ℓ_{j+1,4Sk+4ℓ+η},
and the link is defined as π_{j+1,4i+η} = 4Sk + 4ℓ + η.
Complexity. The numerical complexity of this best basis algorithm is dominated by the computation of the set of coefficients {F_{j,k}}_{j,k} for all possible 0 ≤ j < J = log_2(n)/2 and 0 ≤ k < (4S)^j. For each scale j = 1, . . . , J, one needs to compute (4S)^j filterings of vectors of size n/4^j, where n is the size of the input vector f. The overall complexity is thus
Σ_{j=0}^{log_2(n)/2} (4S)^j n/4^j = O(n log_2(n)) if S = 1,  O(n^{1 + log_2(S)/2}) if S > 1.   (4)
BEST NS WAVELET PACKET DENOISING
We consider an additive noise model, where a noisy image is obtained as f = f 0 + ε where ε is a Gaussian white noise of variance σ 2 .
Thresholding estimator. Following Donoho and Johnstone [START_REF] Donoho | Ideal spatial adaptation via wavelet shrinkage[END_REF], an estimator f̂_λ of f_0 is obtained by hard thresholding at T > 0 the coefficients of the decomposition of f in a NS wavelet packet basis B(λ)
f̂_λ = W_λ^*( S_T( W_λ(f) ) ),   (5)
where the hard thresholding operator S_T applies to each coefficient the non-linearity S_T(x) = x if |x| > T and S_T(x) = 0 otherwise. Asymptotically minimax optimal estimators are obtained by choosing T = σ √(2 log_e(P)) where P is the number of atoms in all NS wavelet packet bases, see [START_REF] Donoho | Ideal spatial adaptation via wavelet shrinkage[END_REF]. For NS wavelet packet bases, this number is of the order of P = O(n^{1 + log_2(S)/2}). In practice, good numerical results are obtained with T ≈ 3σ, and in the numerical results, we select in an oracle manner the T value that minimizes ||f_0 - f̂_λ||.
Best basis for denoising. The best average denoising result is obtained by selecting the basis B(λ) that minimizes the risk E_ε(||f_0 - f̂_λ||²). Following Krim et al. [START_REF] Krim | On denoising and best signal representation[END_REF], an approximation of the risk is obtained by considering the Lagrangian L(f, B(λ)) defined in (3), with the cost function
Φ(x) = x² - σ² if |x| ≤ T,   σ² if |x| > T.   (6)
In the numerical experiments, we set ρ = T² to penalize the complexity of the basis. The best NS wavelet packet denoising is defined as f̂_{λ⋆} where λ⋆ is the tree that minimizes L(f, B(λ)) with the cost (6).
Translation invariant denoising. Once λ⋆ is computed, the denoising quality is greatly improved by using a cycle spinning scheme to reduce thresholding artifacts. For each translation vector τ = (τ_1, τ_2), we denote as θ_τ f[n] = f[n - τ] the translated image, with periodic boundary conditions. The cycle spinning denoising is obtained with
f̂_{λ⋆} = (1/K²) Σ_{τ_1=0}^{K-1} Σ_{τ_2=0}^{K-1} θ_{-τ}( ĝ_τ ),
where ĝ_τ denotes the estimator (5) applied to the translated image θ_τ f. For the numerical results, we use K = 4, which increases by K² = 16 the numerical complexity of the last step of our method.
Fig. 2. Comparison of denoising methods on the Barbara image: Original PSNR=19.6dB, WaveAtoms PSNR=24.4dB, WavePackets PSNR=23.7dB, NSWP PSNR=25.4dB.
NUMERICAL RESULTS
For the numerical experiments, we use S = 6 Daubechies orthogonal filters {h_ℓ}_{ℓ=0}^{S-1}, where h_ℓ is a filter of length 2ℓ + 2.
Figure 2 shows denoising results on a natural image that contains oscillating textures, degraded with a noise level σ = 0.15||f||_∞. The translation invariant NS wavelet packets improve both over a translation invariant wavelet packet denoising [START_REF] Coifman | Entropy-based algorithms for best basis selection[END_REF] and over the WaveAtoms denoising [START_REF] Demanet | Wave atoms and sparsity of oscillatory patterns[END_REF], which are both known for their efficiency to restore oscillating textures. Figure 3 shows an example of denoising of a fingerprint texture, for σ = 0.2||f||_∞. Switching from orthogonal NS wavelet denoising to translation invariant denoising leads to a PSNR improvement of 2dB on this texture. This shows the importance of the cycle spinning extension.
Fig. 3. Denoising of a fingerprint texture using the NS wavelet packet algorithm with cycle spinning (Original PSNR=18.0dB, NSWP PSNR=24.5dB).
Fig. 4. Curves showing the PSNR (in dB) as a function of σ/||f||_∞ for Barbara (top) and Fingerprint (bottom). Solid curves: NS wavelet packets, dashed: WaveAtoms, dotted: wavelet packets.
CONCLUSION
This article has presented a new algorithm to compute a best basis in a non-stationary wavelet packet dictionary. Thresholding the NS wavelet packet coefficients in a best basis computed from a noisy observation performs an adaptive denoising of the image. The translation invariant extension of this thresholding is competitive with the state of the art methods to denoise oscillating textures.
Fig. 1. Example of quadtree λ that defines a 2D NS wavelet packet basis B(λ).
This work is supported by ANR grant NatImages ANR-08-EMER-009. |
04108717 | en | [
"math.math-nt"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04108717/file/Norm.of.units.pdf | Georges Gras
email: [email protected]
NEW CHARACTERIZATION OF THE NORM OF THE FUNDAMENTAL UNIT OF Q( √ M )
Keywords: Real quadratic fields, Fundamental unit, Norm of units, Class field theory, PARI programs. Mathematics Subject Classification: Primary 11R11, 11R27, 11R29, 11R37.
We give an elementary criterion for the norm of the fundamental unit ε_K of K = Q(√M) to be -1, expressed in terms of the gcd's with M of the coefficients of ε_K + 1 and ε_K - 1 (Theorem 1.1).
Introduction -Main result
Let K =: Q( √ M ), M ∈ Z ≥2 square-free, be a real quadratic field and let Z K be its ring of integers. Recall that M is called the "Kummer radical" of K, contrary to any "radical" R = M r 2 giving the same field K. We will write the elements of Z K under the form α = 1 2 (u + v √ M ), with u, v of same parity. We denote by T K/Q =: T and N K/Q =: N, the trace and norm maps in K/Q, so that T(α) = u and N(α) = 1 4 (u 2 -M v 2 ). We denote by ε K > 1 the fundamental unit of K and by S K =: S := N(ε K ) ∈ {-1, 1} its norm. An obvious necessary condition for S = -1 is to have -1 ∈ N(K × ), equivalent to the fact that any odd prime ramified in K/Q is congruent to 1 modulo 4.
The starting point of our result is the following observation proved by means of elementary applications of class field theory (see Theorem 2.1 assuming -1 ∈ N(K × ) and Remark 2.2 (iii) in the case -1 / ∈ N(K × )):
Set M = ∏_{q|M} q for the prime divisors q of M and let q | q be the prime ideal of K = Q(√M) over q. The fundamental unit ε_K of K is of norm -1 if and only if the relation ∏_{q|M} q = (√M) is the unique non-trivial relation of principality (in the ordinary sense) between the ramified prime ideals of K (that is to say, the q's dividing (√M) and q_2 | 2 if 2 ∤ M ramifies, whence if M ≡ 3 (mod 4)).
Of course, when -1 / ∈ N(K × ) and when 2 ∤ M ramifies, one can verify the existence of a relation of principality, either of the form q eq (distinct from (1) and ( √ M )) with exponents in {0, 1}, or else q 2 principal (e.g., M = 3 × 17 with q 2 principal, the ideals q 3 and q 17 being non-principal; for more examples and comments, see Remark 2.2 (iii)).
Then we can state the main result, under the assumption -1 ∈ N(K × ) (see Theorem 2.4 for more details and informations):
Theorem 1.1. Let ε K = a + b √ M > 1, a, b ∈ Z or 1 2
Z, be the fundamental unit of K. We consider the integers A + B √ M and A ′ + B ′ √ M , defined as follows:
ε K + 1 = a + 1 + b √ M =: g (A + B √ M ), where g := gcd (a + 1, b) ∈ Z >0 , ε K -1 = a -1 + b √ M =: g ′ (A ′ + B ′ √ M )
, where g ′ := gcd (a -1, b) ∈ Z >0 .
Let m := gcd (A, M ) and m ′ := gcd (A ′ , M ). Then we have:
(i) If M is odd, ε K is of norm -1 if and only if m = m ′ = 1. (ii) If M is even, ε K is of norm -1 if and only if m = m ′ = 2.
Let D_{≤X} be the set of discriminants D ≤ X corresponding to quadratic fields K such that -1 ∈ N(K^×), let D^-_{≤X} be the subset of discriminants such that S := N(ε_K) = -1, and put ∆^-_disc := lim_{X→∞} #D^-_{≤X} / #D_{≤X}. Stevenhagen's conjecture for the density, ∆^-_disc = 0.5805..., has been proved from techniques developed by [START_REF] Koymans | On Stevenhagen's conjecture[END_REF], in relation with that of Smith [Sm], based on estimations of the 2^k-ranks of the class groups of the K's, k ≥ 2. It raises the question of whether our point of view can constitute a way to define another notion of density since it does not seem compatible with the common principle of classification of the fields via their discriminants; this will be discussed in Section 3.
2. Characterizations of N(ε_K) = -1
In this Section, we give (Theorems 2.1, 2.4) a new characterization of the norm S of the fundamental unit ε K of K = Q( √ M ) when -1 ∈ N(K × ), in particular for the case S = -1 giving the solvability of the norm equation u 2 -M v 2 = -4, u, v ∈ Z >0 .
2.1. Class field theory results involving N(ε K ). We will use the classical class field theory context given by the Chevalley-Herbrand formula and by the standard exact sequence defining the group of invariant classes (for simplicity, we refer to our book [Gra1], but any classical reference book may agree). Of course, in the case of quadratic fields, the theory is equivalent to that of quadratic forms and goes back to Gauss and, after that in a number field context, to Kummer, Hilbert, Takagi, Hasse, Chevalley-Herbrand, Frölich, Furuta, Ishida, Leopoldt, for particular formulas in the area of genus theory, but we are convinced that the general framework may be used, for instance, in relative quadratic extensions K/k, since the index (E k : NE K ) of groups of units, when
E k ⊂ N(K × ), is as mysterious as (E Q : NE K ) for K = Q( √ M ), when -1 ∈ N(K × ) (for some examples in this direction, see [Gra2, Théorème 3.2, Corollaires 3.3, 3.4]). Theorem 2.1. Consider Kummer radicals M = q 1 • • • q r or M = 2 • q 2 • • • q r , r ≥ 1, with odd primes q i ≡ 1 (mod 4) (r = 1 gives M = q 1 or M = 2) and let K = Q( √ M ).
If the prime q (odd or not) divides M we denote by q the prime ideal of K above q.
(i) The fundamental unit ε K is of norm S = -1 if and only if the relation
∏_{q|M} q = (√M)
is the unique relation of principality, distinct from (1), between the ramified prime ideals of K (this relation and the trivial one will be called the canonical relations).
(ii) In the case S = 1, the unique non-canonical relation m = ∏_{q|m} q = (α), of support m | M distinct from 1 and M, is given by ε_K + 1 =: g α, where the rational integer g is maximal and, similarly, the complementary relation n = ∏_{q|n} q = (β), of support n = M/m distinct from 1 and M, is given by ε_K - 1 =: g′ β where the rational integer g′ is maximal.
Proof. (i) Let G := Gal(K/Q) =: ⟨σ⟩. The Chevalley-Herbrand formula [START_REF] Chevalley | Sur la théorie du corps de classes dans les corps finis et les corps locaux, Thèse 155[END_REF] gives, for the ordinary 2-class group H_K,
#H_K^G = 2^{r-1} / (⟨-1⟩ : ⟨-1⟩ ∩ N(K^×)) = 2^{r-1}
since -1 ∈ N(K × ) by assumption. Moreover, we have the classical exact sequence:
(2.1)   1 → H_K^ram → H_K^G → (⟨-1⟩ ∩ N(K^×)) / ⟨S⟩ = ⟨-1⟩ / ⟨S⟩ → 1,
where H_K^ram is the subgroup generated by the classes of the ramified prime ideals q of K. Indeed, to an invariant class of an ideal a, one associates, with (α) := a^{1-σ}, α ∈ K^×, the sign N(α) = ±1 ∈ ⟨-1⟩ ∩ N(K^×) = ⟨-1⟩, defined modulo S since α is defined modulo E_K := ⟨-1, ε_K⟩.
• Image. Since there exists β ∈ K × such that -1 = N(β), the ideal (β), being of norm (1), is of the form b 1-σ , giving a pre-image in H G K (surjectivity). • Kernel. If a 1-σ =: (α) with N(α) = S, one may suppose, up to a unit, that N(α) = 1, whence α = θ 1-σ (Hilbert's Theorem 90), and a(θ) -1 is an invariant ideal, product of a rational ideal by a product of ramified prime ideals of K.
This yields
#H_K^ram = 2^{r-1} / (⟨-1⟩ : ⟨S⟩)
, where we recall that r is the number of ramified prime ideals of K; let's examine each case for S:
If S = -1, then #H ram K = 2 r-1
and the unique non-trivial relation of principality between the ramified primes is the canonical one, q = ( √ M ).
If S = 1, then #H ram K = 2 r-2
, necessarily with r ≥ 2, and another relation of principality does exist, given by a suitable product q eq = (α), e q ∈ Z; since any q 2 is principal, one may write the relation under the form m = (ii) From m = (α) when S = 1, we get
α 1-σ = ±ε k K , k ∈ Z; since ε 2 K = ε 1+σ+1-σ K = ε 1-σ K , we may assume that m = (α) with α 1-σ ∈ {±1, ±ε K }. If α 1-σ = ±1, then α ∈ {1, √ M } × Q × gives the canonical relations (absurd); thus α 1-σ = ±ε K . If for instance α 1-σ = 1, then (α √ M ) 1-σ = -1 and this gives the complementary relation n = (β) with β 1-σ = -ε K .
Since we have (ε
K + 1) 1-σ = ε K + 1 ε σ K + 1 = ε K + 1 ε -1 K + 1 = ε K and similarly (ε K -1) 1-σ = -ε K , the quotients ε K + 1 α and ε K -1 β
are invariants under σ, hence are rational numbers; this proves the second claim of the theorem writing ε K + 1 = gα and ε K -1 = g ′ β in an obvious manner (g and g ′ are the maximal rational integer factors of the quadratic integers ε K ± 1).
Remarks 2.2. (i) If r = 1, one finds again the well-known result S = -1 for the prime Kummer radicals M = q ≡ 1 (mod 4) and for M = 2. The properties given by Theorem 2.1, due to the Chevalley-Herbrand formula, come from the "product formula" of Hasse's norm residue symbols in class field theory [Gra1, Theorem II.3.4.1].
(ii) The theorem is also to be related with that of Trotter [START_REF] Trotter | On the Norms of Units in Quadratic Fields[END_REF]Theorem,p. 198] leading, in another framework, to the study of the equations mx 2 -ny 2 = ±4, equivalent to the principalities of m and n, since, for instance, m = ( 1 2 (x + y √ M )) is equivalent to x 2 -M y 2 = ±4m, whence mx 2 -ny 2 = ±4; then, some sufficient conditions of solvability of these equations are given by means of properties of suitable quadratic residues.
Remark 2.3. We intend to give some properties of the case M = q 1 • • • q r odd, when one assumes that -1 / ∈ N(K × ) (there exists q | M such that q ≡ 3 (mod 4)), then S = 1, the
Chevalley-Herbrand formula becomes #H G K = 2 r+δ-1 ( -1 : -1 ∩ N(K × ))
= 2 r+δ-2 , where δ = 1 (resp. 0) if M ≡ 3 (mod 4) (resp. if not) and the exact sequence (2.1) becomes the isomorphism
H ram K ≃ H G K .
Nevertheless, if M ≡ 1 (mod 4) (thus an even number of primes q ≡ 3 (mod 4) dividing M ), we have δ = 0, so that one obtains the pair of non-canonical relations of principality from the computation of m and n from ε K ± 1. Now, we assume that M ≡ 3 (mod 4) implying the ramification of 2 with M odd and S = 1. So we have δ = 1, #H ram K = 2 r-1 and the existence of a group of relations of principality of order 4 between the r + 1 ramified primes, including the canonical ones.
• If q 2 | 2 is principal, this gives the non-canonical relation, and the process using ε K ± 1 only gives the canonical relations (1) and ( √ M ).
• If q 2 | 2 is not principal, this implies the existence of a non-canonical relation of principality, distinct from (1) and ( √ M ). We intend to show that this relation is not of the form q 2 • m principal, for m distinct from (1) and ( √ M ). Otherwise, we have:
(2.2) q 2 m = (α) and q 2 n = (β), mn = ( √ M ), whence the existence of units ε, ε ′ such that: α 2 ε = 2m and β 2 ε ′ = 2n; modulo the squares of units one may choose α and β such that:
α 2 = ±2m or α 2 ε K = ±2m and β 2 = ±2n or β 2 ε K = ±2m.
The cases ±2m and ±2n does not hold since the Kummer radical M is unique. So:
(2.3)
α 2 ε K = 2m and β 2 ε K = 2n.
In the same way, we have, from the principality relations (2.2), α 1-σ = η and β 1-σ = η ′ that we may write (using
ε 2 K = ε 1-σ K since S = ε 1+σ K = 1): α 1-σ = ±1 or α 1-σ = ±ε K and β 1-σ = ±1 or β 1-σ = ±ε K .
As above, the cases ±1 are excluded (e.g.,
α 1-σ = ±1 gives α ∈ Q × or Q × • √ M ). Thus: (2.4) α 1-σ = ±ε K and β 1-σ = ±ε K .
From (2.3) one gets the relation α 2 β 2 ε 2 K = 2m 2n = 4M and then αβε K = ±2 √ M , giving (αβε K ) 1-σ = -1. Using relations (2.4), the previous relation leads to: 78 [2,3,13]
±ε K • ε K • ε 2 K = -1 (absurd). So, when q 2 is non-principal,
26 3 [1] q2 non principal [3,11] 11 3 [] D=M 79 [79] 1 79 [0] q2 principal [5,7] 5 7 [1] q2 non principal 83 [83] 83 1 [] q2 principal [2,19] 2 19 [] 86 [2,43] 43 2 [] [3,13] 13 3 [1] q2 non principal 87 [3,29] 3 29 [1] q2 non principal [2,3,7] 6 7 [1] q2 non principal 91 [7,13] 13 7 [1] q2 non principal [43] 1 43 [] q2 principal 93 [3,31] 31 3 [] D=M [2,23] 2 23 [] 94 [2,47] 2 47 [] [47] 1 47 [] q2 principal 95 [5,19] 5 19 [1] q2 non principal
The mention D = M means that 2 does not ramify, so it does not intervene in the relations; the mention q2 principal is then the unique non-canonical relation and the mention q2 non principal occurs when 2 ramifies but there exists a pair of non-canonical relations (m, n) given in the left column. A box [ ] means a trivial class group and [0] is equivalent to the principality of q 2 when the class group in non-trivial and [a, b, ...], with not all zero a, b, ..., gives the components of the class of q 2 on the PARI basis of the class group.
The first "non-trivial" example is M = 51, where ε 51 = 50 + 7 √ 51, giving ε 51 + 1 = 51 + 7 √ 51 (m = 51) and ε 51 -1 = 7 (7 + √ 51) (n = 1), thus the canonical relations.
The case -1 / ∈ N(K × ) being without any mystery regarding the norm S, we will assume -1 ∈ N(K × ) in the sequel.
2.2. Computation of the non-canonical relations of principality. The following program computes, when -1 ∈ N(K × ) and S = 1, the non-canonical relations of principality between the ramified primes of K to make statistics and notice that any kind of relation seems to occur with the same probability for r fixed.
It uses the arithmetic information given by the PARI instructions K = bnfinit(x^2 - M) and bnfisprincipal(K, A) testing principalities. When S = -1, the program gives only the trivial list L = List([1, 1, 1, 1]) corresponding to the canonical relation. The parameter Br forces r ≥ Br. As illustration, let's give examples, for r = 5, of the relations of principality m = (α), n = (β), by means of the exponents [e_1, ..., e_r] given by the list L, the radical M being given via the list of its prime divisors; for instance:
M = [2,5,13,17,37], L = List([0,1,0,1,1]), means q_5 • q_17 • q_37 principal (or q_2 • q_13 principal).
2.3. Main characterization of N(ε_K). Assume that -1 ∈ N(K^×) and let S := N(ε_K).
When S = 1, the non-canonical relation m = ∏_{q|m} q = (α), of support m | M distinct from 1 and M, comes from Theorem 2.1 (ii); it suffices to determine the quadratic integer α, without any rational factor, deduced from ε_K + 1, this giving m. In the same way ε_K - 1 gives n = (β) of support n = M/m such that mn = (αβ) = (√M).
Let's give some numerical examples of the process leading to the result:
(i) For M = 15170 = 2 • 5 • 37 • 41, ε K = 739 + 6 √ M and: ε K + 1 = 740 + 6 √ M = 2 × (370 + 3 √ M ),
where 2 = gcd (740, 6); then gcd (370, M ) = 370 = 2 • 5 • 37, whence the principality of m = q 2 q 5 q 37 , which immediately gives the principality of q 41 that can be obtained from:
-ε K + 1 = -738 -6 √ M = -6 × (123 + √ M ),
for which gcd (123, M ) = 41.
(ii) For M = 141245 = 5 • 13 • 41 • 53, ε K = 49609 + 132 √ M and:
ε K + 1 = 49610 + 132 √ M = 22 × (2255 + 6 √ M ),
where 22 = gcd (49610, 132) and gcd (2255, M ) = 5 • 41 giving the principality of q 5 q 41 , and the principality of q 13 q 53 , also obtained from
-ε K + 1 = -12 × (4134 + 11 √ M ) and gcd (4134, M ) = 13 • 53. (iii) For M = 999826 = 2 • 41 • 89 • 137, ε K + 1 is given by: 11109636935777158836160759499956087745610931184259730878643242570969499893 0608609351188823863817034706422630544237192750927410464023060264033743426 +111106036003421265074388547121779710827974912909282096975471217501055084 481117390502436801341332286005566466631729812289759396153149523058885768 √ M
with the gcd of the two coefficients equal to:
21082286734619551653000708969094423248037542899079230940776043942241398 =2 × 3 × 43 × 11210269457991049 × 7289235104943832975100088612482411236091543240227619, giving the integer A + B √ M ∈ Z K equal to: 5269654604181931962753271711433450204598096091653697788648087227648861999187 +5270113123970195868835491287611281655343354587474034791159597224413298316 √ M
then gcd (A, M ) = 41 × 89 × 137, giving the principality of: q 41 q 89 q 137 =(5269654604181931962753271711433450204598096091653697788648087227648861999187
+5270113123970195868835491287611281655343354587474034791159597224413298316 √ M )
or simply that of q 2 . Theorem 2.4. Let M ≥ 2 be a square-free integer and put
K = Q( √ M ); we assume that -1 ∈ N(K × ). Let ε K = a + b √ M (a, b ∈ Z or 1 2 Z) be the fundamental unit of K. We consider the following integers A + B √ M , A ′ + B ′ √ M ∈ Z K , A, B, A ′ , B ′ ∈ Z or 1 2 Z
, where the gcd function must be understood in Z K , giving for instance, gcd (1 2 U 0 , 1 2 V 0 ) = gcd (U 0 , V 0 ) for U 0 and V 0 odd; in other words one may see g and g ′ below as the maximal rational integer factors of the quadratic integers:
ε_K + 1 = a + 1 + b√M =: g (A + B√M),   g := gcd(a + 1, b),
ε_K - 1 = a - 1 + b√M =: g′ (A′ + B′√M),   g′ := gcd(a - 1, b).
Let m := gcd(A, M), n = M/m, C = A/m, m′ := gcd(A′, M), n′ = M/m′, C′ = A′/m′.
(i) If M is odd, ε_K is of norm S = 1 if and only if (m, m′) ≠ (1, 1). If M is even, ε_K is of norm S = 1 if and only if either m > 2 or m′ > 2. In other words, the characterization of S = -1 becomes: If M is odd, ε_K is of norm S = -1 if and only if m = m′ = 1. If M is even, ε_K is of norm S = -1 if and only if m = m′ = 2.
(ii) If the above conditions giving S = 1 hold, then ∏_{q|m} q = (α) and ∏_{q|n} q = (β), with α = Cm + B√M and β = Bn + C√M in Z_K.
Proof. We do not consider the particular case r = 1 giving S = -1 and H G K = H ram K = 1; we prove the characterization of S = 1 of the statement.
• Let's assume S = 1: Recall, from Theorem 2.1, that since (ε K + 1) 1-σ = ε K , the ideal (ε K + 1) gives, after elimination of its maximal rational factor, the unique non-canonical relation of principality m = (α), of support m | M , m = 1, M , between the ramified primes.
Since ( √ M • (ε K + 1)) 1-σ = -ε K and (ε K -1) 1-σ = -ε K , the ideals ( √ M • (ε K + 1)) (leading to n = (β) of support n = M m
) and (ε K -1) (leading to m ′ = (α ′ ) of support m ′ ) yield the same non-canonical relations of supports n and m ′ , so that n = m ′ , with m, n distinct from 1, M .
One obtains the suitable condition m = 1 and m ′ = n = 1 when M is odd; if M is even, the integers m, m ′ = n are of different parity, distinct from 1 and M and such that mm ′ = M > 2; so, if we assume, for instance, m ≤ 2 (hence m = 2), this implies
m ′ = M m > 2. • Reciprocal. We assume m = 1 (resp. m > 2 or m ′ > 2) if M is odd (resp. even): In the odd case, ε K ≡ -1 (mod m), m of support m, implies ε σ K ≡ -1 (mod m), whence S = ε 1+σ K ≡ 1 (mod m) thus S = 1 since m = 1 implies m > 2 in the odd case.
In the even case, since m > 2 or m ′ > 2, the same conclusion holds, using one of the congruence
ε K ≡ -1 (mod m) or ε K ≡ 1 (mod m ′ ) (note that in the reciprocal one does not know if m ′ = n).
• The characterization of S = -1 is of course obvious, but the precise statement is crucial for statistical interpretation, especially in the even case:
-In the odd case, one gets m = m ′ = 1.
-In the even case, one gets m ≤ 2 & m ′ ≤ 2, but we have m = 1 and
m ′ = 1; indeed, let ε K = a + b √ M , a, b ∈ Z,
where a 2 -M b 2 = -1 implies a and b odd (the case of a is obvious, so a 2 ≡ 1 (mod 8), then b even would imply 1 ≡ -1 (mod 8)). Since ε K ≡ 1 (mod q 2 ) with b odd, necessarily ε K ≡ 1 (mod 2) and (ε K + 1) = q 2 a, where a is an odd ideal and, except for M = 2, one gets a = 1, then (ε σ Remark 2.5. (i) In the case M even with S = -1, the ideals (ε K ± 1) do not define invariant classes (except that of
q 2 in Q( √ 2) since (ε 2 + 1) = (2 + √ 2) = √ 2 • ε 2 and (ε 2 -1) = ( √ 2
); this comes from the fact that it is the unique even case where H G K = 1). But q 2 is not principal since there is no non-canonical relation of principality in the case S = -1 (e.g., M ∈ {10, 26, 58, 74, 82, 106, 122, 130, 170}).
To get illustrations for M even, S = -1, the following program computes the class group of K (in HK = K.clgp), then (in R) the components of the (non-trivial) class of q 2 :
{forstep(M=2,10^6,2,if(core(M)!=M,next);K=bnfinit(x^2-M,1); S=norm(K.fu[1]);if(S==1,next);q2=component(idealfactor(K,2),1)[1]; R=bnfisprincipal(K,q2)[1];print("M=",M," R=",R," HK=",K.clgp))} M=10 R=[1] HK=[2,[2]] M=199810 R=[0,0,1,1] HK=[128,[16,2,2,2]] M=82 R=[2] HK=[4,[4]] M=519514 R=[32,1] HK=[128,[64,2]] M=130 R=[0,1] HK=[4,[2,2]] M=613090 R=[32,0,1] HK=[256,[64,2,2]] M=226 R=[4] HK=[8,[8]] M=690562 R=[16,2] HK=[128,[32,4]] M=442 R=[0,1] HK=[8,[4,2]] M=700570 R=[8,1,0,1] HK=[128,[16,2,2,2]] M=2210 R=[1,1,1] HK=[8,[2,2,2]] M=720802 R=[16,0] HK=[128,[32,4]] M=3026 R=[2,2]
HK= [16,[4,4]] M=776866 R=[16,0,0] HK= [128,[32,2,2]]
But we have the cases M even and S = 1 (e.g., M ∈ {34, 146, 178, 194, 386, 410, 466, 482, 514, 562}), where relations q 2 principal are more frequent when r is small (e.g., M ∈ {34, 146, 178, 194, 386, 466, 482}; for M = 410, the relation is given by m = 41, n = 10).
(ii) Consider the field
L = K( √ 2) for M even, M =: 2M ′ ; the extension L/K is unramified since Q( √ M ′ )/Q is not ramified at 2. The extension of q 2 in L becomes the principal ideal ( √ 2) whatever the decomposition of 2 in Q( √ M ′ )/Q. Meanwhile, if 2
is inert in this extension (i.e., M ′ ≡ 5 (mod 8)), q 2 can not be principal in K, otherwise, if q 2 = (α), then, in L, α = η √ 2, where η is a unit of L, and by unicity of radicals (up to K × ) this is absurd and the class of q 2 capitulates in L.
So either m = 2 & m ′ = 2, with q 2 non-principal and S = -1 (e.g., M ∈ {10, 26, 58, 74, 82, 106, 122} showing that reciprocal does not hold since 2 may split in
Q( √ M ′ )/Q, as for M = 82), or else m = 2 & m ′ > 2,
giving the principality of q 2 and S = 1 (e.g., all the previous cases M ∈ {34, 146, 178, 194, 386, 466, 482}).
2.4. Computation of S by means of the gcd criterion. The following PARI/GP program computes (if any) the non-canonical relation of principality m distinct from (1), ( √ M ) between the ramified primes, only by means of the previous result on the coefficients of the fundamental unit (Theorem 2.4), and deduces the norm S without calculating it; when there are no non-canonical relations (whence S = -1), the corresponding data is empty. One has only to give the bound BD of the discriminants D whose odd prime divisors are congruent to 1 modulo 4. The counters CD, Cm, Cp, C22, enumerate the sets D, D -, D + and the set D - 22 of cases where m = m ′ = 2 (equivalent to M even and S = -1), respectively; for short we do not write the cases where m = 1, giving S = -1 (M ∈ {5, 13, 17, 29, 37, . . .}:
MAIN PROGRAM COMPUTING S VIA THE RELATIONS OF PRINCIPALITY
{BD=10^7;CD=0;Cm=0;Cp=0;C22=0;
for(D=5,BD,v=valuation(D,2);if(v!=0 & v!=3,next);
i0=1;M=D;if(v==3,M=D/4;i0=2);if(core(M)!=M,next);if(Mod(M,4)==3,next);
r=omega(M);f=factor(M);ellM=component(f,1);
for(i=i0,r,ell=ellM[i];if(Mod(ell,4)==3,next(2)));
CD=CD+1;res=lift(Mod(M,2));e=quadunit(D);
Y=component(e,3)/(res+1);X=component(e,2)+res*Y;
g=gcd(X+1,Y);m=gcd((X+1)/g,M);
if(m==1,S=-1;print("D=",D," M=",M," relations: "," ",","," ",", S=",S);Cm=Cm+1);
if(m>2,S=1;print("D=",D," M=",M," relations: ",m,",",M/m,", S=",S);Cp=Cp+1);
if(m==2,gp=gcd(X-1,Y);mp=gcd((X-1)/gp,M);
if(mp>2,S=1;print("D=",D," M=",M," relations: ",m,",",mp,", S=",S);Cp=Cp+1);
if(mp==2,S=-1;print("D=",D," M=",M," relations: "," ",","," ",", S=",S,","," m=mp=2");
Cm=Cm+1;C22=C22+1)));
print("CD=",CD," Cm=",Cm," Cp=",Cp," C22=",C22);
print("Cm/CD=",Cm/CD+0.0," Cp/CD=",Cp/CD+0.0);
print("C22/CD=",C22/CD+0.0," C22/Cm=",C22/Cm+0.0)}
3. Remarks on density questions
A classical principle in number theory is to examine some deep invariants (as class groups, units, etc.) of families of fields (assuming, in general, that some parameters are fixed, as for instance, the Galois group, the signature, etc.), classified regarding the discriminants. The analytic reason is that the order of magnitude of
h K • R K √ D K is controlled
by suitable ζ-functions and then, if the discriminant D K increase in the family, the class number h K and/or the regulator R K increase or, at least, have a larger complexity. This case represents the reality quite well in a global context, but it can be questioned for p-adic framework or non semi-simple Galois setting. We intend to give some remarks in this direction about the norm of S = N(ε K ) linked significantly to the 2-class group.
3.1. Classical approach of the density. We will describe the case of S = -1, using the following definitions:
Definitions 3.1. (i) Denote by M (resp. D) the set of all Kummer radicals (resp. of all Discriminants), such that -1 ∈ N(K × ). We have, from a result of Rieger [Rie]:
#M ≤X ≈ 3 2π p≡1 mod 4 1 - 1 p 2 1 2 X log(X) ≈ 0.464592... X log(X) , #D ≤X ≈ 3 4 #M ≤X .
(ii) Denote by M -⊂ M (resp. D -⊂ D) the subset of all Kummer radicals (resp. of all Discriminants), such that S := N(ε K ) = -1 and put:
∆ - disc := lim X→∞ M - ≤X M ≤X = lim X→∞ D - ≤X D ≤X .
Many heuristics have given ∆ - disc around 0.5 or 0.6. Our characterization would give a density around 3 2 • 6 π 2 2 ≈ 0.554363041753..., but without a precise classification by means of ascending discriminants or of number of ramified primes, a context using structure of the 2-class group, whence quadratic symbols, Rédei's matrices, Furuta symbols [Fu], etc. These classical principles consist in using the filtration of the 2-class group following, e.g., the theoretical algorithm described in whole generality in [Gra3], with the fixed point formulas generalizing that of Chevalley-Herbrand (say for the quadratic case):
#(H i+1 K /H i K ) = 2 r-1 (Λ i K : Λ i K ∩ N(K × )) , i ≥ 0,
where
H i K = Ker(1 -σ) i , the Λ i K 's, with Λ i K ⊆ Λ i+1 K for all i, defining a sequence of suitable subgroups of Q × ; more precisely, Λ 0 K = -1 , Λ 1 K = -1, q 1 , • • • , q r , the next Λ i+1 K 's introducing "random" numbers b = N(b) from identities of the form a = (y)b 1-σ , when x ∈ Λ i K is such that (x) = N(a) = N(y), cl(a) ∈ H i K , y ∈ K × . Then the index (Λ i K : Λ i K ∩ N(K × )) being nothing else than #ρ i K (Λ i K )
, where ρ i K is the r-uple of Hasse's norm residue symbols giving rise to generalized "Rédei matrices of quadratic residue symbols", or more simply, random maps F ri 2 → F r-1 2 (product formula of the symbols), r i = dim F2 Λ i K /(Λ i K ) 2 . Similar viewpoints are linking the norm of ε K to the structure of the 2-class group in the restricted sense (see many practical examples in [START_REF] Gras | Sur la norme du groupe des unités d'extensions quadratiques relatives[END_REF]).
This proportion of D -inside D ([St, § 1, p. 122], [BoSt, (1.3), p. 1328]), was conjectured by Stevenhagen to be:
P = 1 - ∏_{k≥0} (1 - 1/2^{1+2k}) ≈ 0.5805775582 . . .
We refer to [St], then to [START_REF] Fouvry | On the negative Pell equation[END_REF][START_REF] Koymans | On Stevenhagen's conjecture[END_REF] for history and bibliographical comments about the norm of the fundamental unit of Q( √ M ) and for his heuristic based on the properties of densities P t , corresponding to discriminants having t distinct prime factors.
These results involving the 2-class groups structures allow informations about ∆ - disc and especially the determination of lower and upper bounds. For this aspect, we refer to the Chan-Koymans-Milovic-Pagano paper [CKMP] who had proven that ∆ - disc is larger than 0.538220, improving Fouvry-Klüners results [START_REF] Fouvry | The parity of the period of the continued fraction of √ d[END_REF][START_REF] Fouvry | On the negative Pell equation[END_REF] saying that ∆ - disc lies between 0.524275 and 0.6666666, which gave 0.538220 ≤ ∆ - disc ≤ 0.6666666.
For X = 10 8 , a PARI [P] calculation gives the experimental density
#D - ≤X #D ≤X ≈ 0.787255.
Moreover, it is well known that such partial densities decrease as X increases; in other words, computer approaches are misleading as explained in [St]. More precisely the arithmetic function ω(x) giving the number of prime divisors of x fulfills the optimal upper bound ω(x) ≤ (1 + o(1))
log(x) log(log(x))
[Ten, I.5.3], so that large Kummer radicals have "more prime divisors", whence more important probability to get S = -1.
In the present paper, we do not use these classical ways, so that the main question is to understand how densities may be defined; let's give the example of classification by ascending traces before giving that of the gcd criterion.
3.2. Classification by ascending traces of fundamental units. In another direction, let's apply the "First Occurrence Process" algorithm [START_REF] Gras | Unlimited lists of fundamental units of quadratic fields -Applications[END_REF]Section 4,Theorem 4.6] in the interval [1, B], simultaneously for the two cases s = -1 and s = 1 of the polynomials m s (t) = t 2 -4s, under the condition -1 ∈ N(K × ) (to be checked only for s = 1).
Recall that in the set
T ≤B of traces T(ε K ) ≤ B, T + ≤B , T - ≤B , denote the corresponding subsets of traces t ≤ B of fundamental units, of norm s = -1, s = 1, respectively.
As B → ∞, all units are represented in these lists, as shown with the following PARI program. Indeed, this is clear with t 2 + 4 = M r 2 giving T - ≤B , and from [Gra4, Theorem 4.6 and Corollaries], the set T + ≤B deals with minimal traces t for which t 2 -4 = M r 2 gives the unit ε K (if S = 1) or ε 2 K (if S = -1), but the squares of units of norm -1 were obtained with t 2 + 4 for a smaller trace, so that the process eliminates this data in the final list. Finally the densities dp, dm are that of units of norms 1, -1, respectively.
PROGRAM FOR DENSITY OF UNITS OF NORM 1 AND OF NORM -1 {B=10^6;Cm=0;Cp=0;LM=List;for(t=1,B,mtm=t^2+4;M=core(mtm);L=List([M,-1]); listput (LM,vector(2,c,L[c]));mtp=t^2-4;if(mtp<=0,next);M=core(mtp); r=omega(M);i0=1;if(Mod(M,2)==0,i0=2);f=factor(M);ellM=component(f,1); T=1;for(i=i0,r,c=ellM [i]
;if(Mod(c,4)!=1,T=0;break));if(T==1,L=List([M,1]); listput(LM,vector(2,c,L[c]))));VM=vecsort(vector(B,c,LM[c]),1,8);print(VM); for(k=1,#VM,S=VM[k][2];if(S==1,Cp=Cp+1);if(S==-1,Cm=Cm+1)); print("#VM=",#VM," Cp=",Cp," Cm=",Cm," dp=",Cp/#VM+0.," dm=",Cm/#VM+0.)} In that context, ε K = a + b √ M is written 1 2 (t + r √ M
) and only units with too large traces are missing in the above finite list. Restricting to the density ∆ - trace of the set of units of norm -1 inside the set of units classified by ascending traces, one gets:
Theorem 3.2. We have ∆ - trace := lim B→∞ #T - ≤B #T ≤B = 1. Proof. From [Gra4, Theorem 4.5], we have #T - ≤B ∼ B -O( 3 √ B).
3.3. Approximations of the density from the gcd principle. Theorem 2.4, on the characterization of the norm S of ε K , allows heuristics for densities since the statement reduces to elementary arithmetic properties. The criterion only depends on properties of m = gcd (A, M ) and m ′ = gcd (A ′ , M ) whose "probabilities" may be computed, assuming that the the pairs (A, M ) and (A ′ , M ) are random with Kummer radicals taken in the subset of square-free integers, without using the natural order of radicals or discriminants, nor that of the order of magnitude of the integers A, A ′ depending on the unpredictable trace of the unit. Before giving some heuristics, we introduce the partial densities corresponding to the six cases summarized by the following array, with numerical values obtained for M in various intervals [bM, BM]:
M ∈ [bM, BM ] m m ′ S = N(ε K ) densities δ even = 2 = 2 -1 ↓ δ even 2,2 even = 2 > 2 = 1 ↓ δ even 2,m ′ even > 2 = 2 = 1 ↓ δ even m,2 even > 2 > 2 = 1 ↑ δ even m,m ′ odd = 1 = 1 -1 ↓ δ odd 1,1 odd > 2 > 2 = 1 ↑ δ odd m,m ′
Notations of partial densities are given in the right column; the densities δ even 2,m ′ and δ even m,2 , corresponding to each sign in formulas of ε K ± 1, are indistinguishable about the cases m = 2 & m ′ > 2 or m > 2 & m ′ = 2, so we will only give the sum δ even 2,m ′ + δ even m,2 . Then δ even 2,2 represent the cases M even, S = -1 where q 2 is not principal. The densities δ odd 1,1 (resp. δ odd m,m ′ ) represent the cases S = -1 with no principality relations (resp. S = 1 with the two complementary principality relations). We put ∆ - gcd := δ odd 1,1 + δ even 2,2 . Many tests, using the following program, have been done and have shown that some densities increase while the others decrease as the Kummer radicals are taken in larger intervals (indicated with arrows ↑ and ↓): {bM=2;BM=10^6;CM=0;C22=0;C2p=0;Cm2=0;Cmp=0;CC11=0;CCmp=0; for(M=bM,BM,res8=Mod(M,8);if(res8!=1&res8!=2&res8!=5,next); if(core(M)!=M,next);res=Mod(M,2);i0=1;if(res==0,i0=2); r=omega(M);f=factor(M);ellM=component(f,1);for (i=i0,r,ell=ellM[i]; if(Mod(ell,4)==3,next(2)));D=M;if(res==0,D=4*M);CM=CM+1;e=quadunit(D); res=lift(res);Y=component(e,3)/(res+1);X=component(e,2)+res*Y; g=gcd(X+1,Y);A=(X+1)/g;m=gcd(A,M);gp=gcd(X-1,Y);Ap=(X-1)/gp;mp=gcd(Ap,M); if (res==0, if(m==2 & mp==2,C22=C22+1)
;if(m==2 & mp>2,C2p=C2p+1); if(m>2 & mp==2,Cm2=Cm2+1);if(m>2 & mp>2,Cmp=Cmp+1)); if(res==1, if(m==1 & mp==1,CC11=CC11+1);if(m>2 & mp>2,CCmp=CCmp+1
))); print("CM=",CM," C22=",C22," C2p=",C2p," Cm2=",Cm2," Cmp= ",Cmp, " CC11=",CC11," CCmp= ",CCmp); d22=C22/CM+0.0;d2p=C2p/CM+0.0;dm2=Cm2/CM+0.0; dmp=Cmp/CM+0.0;dd11=CC11/CM+0.0;ddmp=CCmp/CM+0.0; print("Sum=",C22+C2p+Cm2+Cmp+CC11+CCmp); print ("d22=",d22," d2p=",d2p," dm2=",dm2," dmp=",dmp, " dd11=",dd11," ddmp=",ddmp)
} M ∈ [1, 10 6 ] m m ′ S = N(ε K ) densities δ even = 2 = 2 -1 0.2347176480 ↓ δ even 2,2 even = 2, > 2 > 2, = 2 = 1 0.0652421881 ↓ δ even 2,m ′ + δ even m,2 even > 2 > 2 = 1 0.0389107558 ↑ δ even m,m ′ odd = 1 = 1 -1 0.5475861515 ↓ δ odd 1,1 odd > 2 > 2 = 1 0.1135432564 ↑ δ odd m,m ′ CM = 124490, C22 = 29220C2p = 4079, Cm2 = 4043, Cmp = 4844, CC11 = 68169, CCmp = 14135, δ even 2,2 + (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.3388705920 ↓, δ even 2,2 = 0.2347176480 ↓, (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.1041529440 ↑, δ odd 1,1 + δ odd m,m ′ = 0.6611294079 ↑, ∆ - gcd = δ even 2,2 + δ odd 1,1 = 0.7823037995 ↓. M ∈ [10 6 , 10 7 ] m m ′ S = N(ε K ) densities δ even = 2 = 2 -1 0.2312433670 ↓ δ even 2,2 even = 2, > 2 > 2, = 2 = 1 0.0614862378 ↓ δ even 2,m ′ + δ even m,2 even > 2 > 2 = 1 0.0452699271 ↑ δ even m,m ′ odd = 1 = 1 -1 0.5374200520 ↓ δ odd 1,1 odd > 2 > 2 = 1 0.1245804159 ↑ δ odd m,m ′ CM = 1029889, C22 = 238155, C2p = 31753, Cm2 = 31571, Cmp = 46623, CC11 = 553483, CCmp = 128304, δ even 2,2 + (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.33767663 ↓, δ even 2,2 = 0.2312433670 ↓, (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.1067561648 ↑, δ odd 1,1 + δ odd m,m ′ = 0.66232334 ↑, ∆ - gcd = δ even 2,2 + δ odd 1,1 = 0.768663419 ↓. M ∈ [10 7 , 10 8 ] m m ′ S = N(ε K ) densities δ even = 2 = 2 -1 0.2290897913 ↓ δ even 2,2 even = 2, > 2 > 2, = 2 = 1 0.058607396 ↓ δ even 2,m ′ + δ even m,2 even > 2 > 2 = 1 0.0497431369 ↑ δ even m,m ′ odd = 1 = 1 -1 0.5297912763 ↓ δ odd 1,1 odd > 2 > 2 = 1 0.1327683993 ↑ δ odd m,m ′ CM = 9652809, C22 = 2211360, C2p = 283421, Cm2 = 282305, Cmp = 480161, CC11 = 5113974, CCmp = 1281588, δ even 2,2 + (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.3374403243 ↓, δ even 2,2 = 0.2290897913 ↓, (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.1083505329 ↑, δ odd 1,1 + δ odd m,m ′ = 0.6625596756 ↑, ∆ - gcd = δ even 2,2 + δ odd 1,1 = .7588810676 ↓. M ∈ [10 8 , 10 8 + 10 6 ] m m ′ S = N(ε K ) densities δ even = 2 = 2 -1 0.2285032150 ↓ δ even 2,2 even = 2, > 2 > 2, = 2 = 1 0.0571186698 ↓ δ even 2,m ′ + δ even m,2 even > 2 > 2 = 1 0.0518491039 ↑ δ even m,m ′ odd = 1 = 1 -1 0.5276699767 ↓ δ odd 1,1 odd > 2 > 2 = 1 0.1348590343 ↑ δ odd m,m ′ CM = 105132, C22 = 24023, C2p = 2971, Cm2 = 3034, Cmp = 5451, CC11 = 55475, CCmp = 14178, δ even 2,2 + (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.3374709887 ↓, δ even 2,2 = 0.2285032150 ↓, (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.1089677737 ↑, δ odd 1,1 + δ odd m,m ′ = 0.662529011 ↑, ∆ - gcd = δ even 2,2 + δ odd 1,1 = 0.7561731917 ↓. M ∈ [10 9 , 10 9 + 10 6 ] m m ′ S = N(ε K ) densities δ even = 2 = 2 -1 0.2262061480 ↓ δ even 2,2 even = 2, > 2 > 2, = 2 = 1 0.0563555787 ↓ δ even 2,m ′ + δ even m,2 even > 2 > 2 = 1 0.0540844730 ↑ δ even m,m ′ odd = 1 = 1 -1 0.5226055410 ↓ δ odd 1,1 odd > 2 > 2 = 1 0.1407482589 ↑ δ odd m,m ′ CM = 99511, C22 = 22510, C2p = 2808, Cm2 = 2800, Cmp = 5382, CC11 = 52005, CCmp = 14006, δ even 2,2 + (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.3366461999 ↓, δ even 2,2 = 0.2262061480 ↓, (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.1104400517 ↑, δ odd 1,1 + δ odd m,m ′ = 0.6633538000 ↑, ∆ - gcd = δ even 2,2 + δ odd 1,1 = 0.748811689 ↓. 
M ∈ [10 10 , 10 10 + 10 6 ] m m ′ S = N(ε K ) densities δ even = 2 = 2 -1 0.2252429443 ↓ δ even 2,2 even = 2, > 2 > 2, = 2 = 1 0.0535799257 ↓ δ even 2,m ′ + δ even m,2 even > 2 > 2 = 1 0.0571751842 ↑ δ even m,m ′ odd = 1 = 1 -1 0.5164271590 ↓ δ odd 1,1 odd > 2 > 2 = 1 0.1475747866 ↑ δ odd m,m ′ CM = 94569, C22 = 21301, C2p = 2494, Cm2 = 2573, Cmp = 5407, CC11 = 48838, CCmp = 13956, δ even 2,2 + (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.3359980543 ↓, δ even 2,2 = 0.2252429443 ↓, (δ even 2,m ′ + δ even δ even m,2 ) + δ even m,m ′ = 0.1107551099 ↑, δ odd 1,1 + δ odd m,m ′ = 0.6640019456 ↑, ∆ - gcd = δ even 2,2 + δ odd 1,1 = 0.7416701033 ↓. M ∈ [10 11 , 10 11 + 2.5•10 5 ] m m ′ S = N(ε K ) densities δ even = 2 = 2 -1 0.2240462381 ↓ δ even 2,2 even = 2, > 2 > 2, = 2 = 1 0.0514559794 ↓ δ even 2,m ′ + δ even m,2 even > 2 > 2 = 1 0.0607678259 ↑ δ even m,m ′ odd = 1 = 1 -1 0.5127736860 ↓ δ odd 1,1 odd > 2 > 2 = 1 0.1509562704 ↑ δ odd m,m ′ CM = 49829, C22 = 11164, C2p = 1308, Cm2 = 1256, Cmp = 3028, CC11 = 25551, CCmp = 7522, δ even 2,2 + (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.3362700435 ↓↑, δ even 2,2 = 0.2240462381 ↓, (δ even 2,m ′ + δ even m,2 ) + δ even m,m ′ = 0.1122238054 ↑, δ odd 1,1 + δ odd m,m ′ = 0.6637299564 ↑↓, ∆ - gcd = δ even 2,2 + δ odd 1,1 = 0.7368199241 ↓.
3.3.1. First heuristic from the above data. It is difficult to go further because of the execution time, but some rules appear, that are not proved, but allow possible heuristics:
• δ even := δ even 2,2 + δ even 2,m ′ + δ even m,2 + δ even m,m ′ → 1 3 ; • δ odd := δ odd 1,1 + δ odd m,m ′ → 2 3
; this is almost obvious since random Kummer radicals M , such that -1 ∈ N(K × ), are in the classes 1, 2 or 5 modulo 8, whence with uniform repartition 1 3
for M even (class of 2) and 2 3
for the case M odd (classes of 1 and 5).
The indications ↑ and ↓ give some interesting phenomena:
• δ even m,m ′ must have a hight increasing, since δ even 2,m ′ and δ even m,2 are decreasing, but the sum δ even 2,m ′ + δ even m,2 + δ even m,m ′ is increasing. • δ even 2,2 must have a hight decreasing, since δ even = δ even 2,2 + δ even 2,m ′ + δ even m,2 + δ even m,m ′ is decreasing while the partial sum δ even 2,m ′ + δ even m,2 + δ even m,m ′ is increasing. • The sum ∆ - gcd = δ odd 1,1 + δ even 2,2 is much decreasing.
• The quotient
δ even 2,2 δ odd 1,1
seems to be increasing and the quotient
δ odd 1,1 δ odd m,m ′
seems to be rapidly decreasing.
• The quotient
δ even 2,2
δ even 2,m ′ + δ even m,2 + δ even m,m ′ seems to be decreasing up to a constant ρ ≈ 2.
To reinforce this last heuristic, let's consider the computation of the parity of the component b of ε K = a + b √ M , for M even and -1 ∈ N(K × ); indeed, recall that in that cases S = -1 is equivalent to b odd. The following program examines this question taking M ≡ 2 (mod 8), in the intervals [k * 10 7 , (k + 1) * 10 7 ], k ≥ 1: Many oscillations can be observed, on the successive intervals [k * 10 7 , (k + 1) * 10 7 ], which support the idea of a slow convergence of CI/CP towards ρ = 2 + .
With the same principle, but in intervals of the form [k * 10 9 , k * 10 9 + 10 7 ], k ≥ 1 (long calculation time), we obtain, as expected, a slow global decreasing of the data: To conclude, one may say that these heuristics are compatible with the next one suggesting that δ odd 1,1 → 6 π 2 2 ≈ 0.3695..., then ∆ - gcd := δ odd 1,1 + δ even 2,2 → 6 π 2
1
2 (1 + 0.5) ≈ 0.5543....
3.3.2.
Second heuristic from randomness of A, A ′ . In our viewpoint, we are reduced to use the well-known fact that the density of pairs of independent co-prime integers is 6 π 2 ≈ 0.6079...; but this implies that no condition is assumed, especially for a radical R taken at random, and we must take into account that a radical lies in the subset of square-free integers, whose relative density is also given by 6 π 2 . The integers a ± 1 giving A, A ′ may be considered as random and independent regarding M .
• When M is odd, S = -1 is equivalent to m := gcd (A, M ) = 1; which gives the partial density δ odd 1,1 = 6 π 2
2
.
• When M is even, S = -1 is equivalent to m ′ := gcd (A ′ , M ) = 2 and this may be written
m ′ 2 := gcd A ′ 2 , M 2
= 1; we know this is equivalent to b odd and that a good heuristic is that this occur with probability 1 2 . The specific case m = 2 (with the alternative m ′ > 2 or m ′ = 2) occurs only for M even, whence a coefficients ≈ 0.554363041753.
To be compared with the density 0.5805775582 [St, Conjecture 1.4], proven in [START_REF] Koymans | On the distribution of Cl(K)[ℓ ∞ ] for degree ℓ cyclic fields[END_REF][START_REF] Koymans | On Stevenhagen's conjecture[END_REF] depending on the natural order of discriminants.
Our method, assumes the independence and randomness of the parameters; moreover it does not classify the radicals (or discriminants), nor the units (for example by means of their traces), so one can only notice that the density (3.1) may be an lower bound for the classical case; fortunately it is greater than the lower bound 0.538220, which was proved in [CKMP], and also less than 0.666666..., proved in [START_REF] Fouvry | The parity of the period of the continued fraction of √ d[END_REF][START_REF] Fouvry | On the negative Pell equation[END_REF].
q|m q = (α) of support m | M . For n = M m we get the non-canonical analogous relation n = q|n q = (β), with mn = (αβ) = ( √ M ); which proves the first claim.
π 2 2 ;
2 densities; indeed, M even andWe then obtain the following discussion, on m and m ′ , from Theorem 2.4:• m = 1 (M ofany parity); thus S = -1 with density 6 • m = m ′ = 2 (M even); thus S = -1 with density the previous interpretation, we propose, for D -inside D, the conjectural
the non-canonical relations are of the form m and n, given by the usual process. Let's give few examples using Program 2.4:
M relations M relations
3 [3] 1 3 [] q2 principal 51 [3,17] 1 51 [0] q2 principal
6 [2,3] 2 3 [] 55 [5,11] 11 5 [1] q2 non principal
7 [7] 1 7 [] q2 principal 57 [3,19] 3 19 [] D=M
[11] 1 11 [] q2 principal 59 [59] 59 1 [] q2 principal
[2,7] 7 2 [] 62 [2,31] 2 31 []
[3,5] 3 5 [1] q2 non principal 66 [2,3,11] 33 2 [0]
[19] 1 19 [] q2 principal 67 [67] 67 1 [] q2 principal
[3,7] 3 7 [] D=M 69 [3,23] 23 3 [] D=M
[2,11] 2 11 [] 70 [2,5,7] 14 5 [1] q2 non principal
[23] 1 23 [] q2 principal 71 [71] 1 71 [] q2 principal
[2,3,5] 5 6 [1] q2 non principal 77 [7,11] 7 11 [] D=M
[31] 1 31 [] q2 principal
The PARI/GP gcd function gives instead gcd ( 1
U 0 , 1 2 V 0 ) = 12 gcd (U 0 , V 0 ); this gap only occurs when M is odd, in which case this does not matter for the computation of the odd integers m and m ′ . |
03705446 | en | [
"math.math-nt"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-03705446v2/file/Quad-Units.pdf | Georges Gras
UNLIMITED LISTS OF QUADRATIC INTEGERS OF GIVEN NORM APPLICATION TO SOME ARITHMETIC PROPERTIES
Keywords: Real quadratic fields, Fundamental units, Norm equations, p-rationality, p-class numbers, Class groups, PARI programs. Mathematics Subject Classification: Primary 11R11, 11R27, 11R37.
Unlimited lists of fundamental units of quadratic fields -Applications to some arithmetic properties
Georges Gras
3.3. Remarks on the use of the F.O.P. algorithm
4. Application of the F.O.P. algorithm to norm equations
4.1. Main property of the trace map for units
4.2. Unlimited lists of fundamental units of norm s, s ∈ {-1, 1}
4.3. Unlimited lists of fundamental integers of norm sν, ν ≥ 2
5. Universality of the polynomials m_sν
5.1. Mc Laughlin's polynomials
5.2. Application to finding units
6. Non p-rationality of quadratic fields
6.1. Recalls about p-rationality
6.2. Remarks about p-rationality and non-p-rationality
6.3. Families of local p-th power units - Computation of T_K
6.4. Infiniteness of non p-rational real quadratic fields
7. Application to p-class groups of some imaginary cyclic fields
7.1. Imaginary quadratic fields with non-trivial 3-class group
7.2. Imaginary cyclic fields with non-trivial p-class group, p > 3
References
Introduction and main results
1.1. Definition of the "F.O.P. " algorithm. For the convenience of the reader, we give, at once, an outline of this process which has an interest especially under the use of PARI programs [P].
Definition 1.1. We call "First Occurrence Process" (F.O.P.) the following algorithm, defined on a large interval [1, B] of integers. As t grows from t = 1 up to t = B, we compute some arithmetic invariant F(t); for instance, a pair of invariants described as a PARI list, as in the following illustration with square-free integers M(t) and units η(t) of Q(√M(t)): F(t) → L(t) = List([M(t), η(t)]), provided with a natural order on the pairs L(t); then put it in a PARI list LM: listput(LM, vector(2, c, L[c])) → List([L(1), L(2), . . . , L(t), . . . , L(B)]) = List([ [M(1), η(1)], . . . , [M(t), η(t)], . . . , [M(B), η(B)] ]);
after that, we apply the PARI instruction VM = vecsort(LM, 1, 8) which builds the list:
VM = List([L 1 , L 2 , . . . , L j , . . . , L N ]), N ≤ B,
such that L j = L(t j ) = [ [M(t j ), η(t j )] ] is the first occurrence (regarding the selected order, for instance that on the M's) of the invariant found by the algorithm and which removes the subsequent duplicate entries.
Removing the duplicate entries is the key of the principle since in general they are unbounded in number as B → ∞ and do not give the suitable information. Since the length N of the list VM is unknown by nature, one must write LM as a vector and put instead: VM = vecsort(vector(B, c, LM[c]), 1, 8); thus, N = #VM makes sense and one can (for possible testing) select elements and components as X = VM[k][2], etc. If N is not needed, then VM = vecsort(LM, 1, 8) works well.
For instance, the list LM of objects F (t) = (M (t), ε(t)), 1 ≤ t ≤ B = 10:
LM = List([ [5, ε_5], [2, ε_2], [5, ε′_5], [7, ε_7], [5, ε″_5], [3, ε_3], [2, ε′_2], [5, ε‴_5], [6, ε_6], [7, ε′_7] ]), with the natural order on the first components M, leads to the list:
VM = List([ [2, ε_2], [3, ε_3], [5, ε_5], [6, ε_6], [7, ε_7] ]).
1.2. Quadratic integers. Let K =: Q( √ M ), M ∈ Z ≥2 square-free, be a real quadratic field and let Z K be its ring of integers.
Recall that M ≥ 2, square-free, is called the "Kummer radical" of K, contrary to any "radical" m = M r 2 giving the same field K.
There are two ways of writing an element α ∈ Z_K. The first one is to use the integral basis {1, √M} (resp. {1, (1+√M)/2}) when M ≢ 1 (mod 4) (resp. M ≡ 1 (mod 4)). The second one is to write α = ½(u + v√M), in which case u, v ∈ Z are necessarily of the same parity; but u, v may be odd only when M ≡ 1 (mod 4). We denote by T_{K/Q} and N_{K/Q}, or simply T and N, the trace and norm maps in K/Q, so that T(α) = u and N(α) = ¼(u² - Mv²) in the second writing for α. Then the norm equation in u, v ∈ Z (not necessarily with co-prime numbers u, v):
u 2 -M v 2 = 4sν, s ∈ {-1, 1}, ν ∈ Z ≥1 ,
for M square-free, has the property that u, v are necessarily of same parity and may be odd only when M ≡ 1 (mod 4); then:
z := 1 2 u + v √ M ∈ Z K , T(z) = u & N(z) = sν.
Finally, we will write quadratic integers α, with positive coefficients on the basis {1, √ M }; this defines a unique representative modulo the sign and the conjugation. Put:
Z + K := α = 1 2 u + v √ M , u, v ∈ Z ≥1 , u ≡ v (mod 2) .
Note that these α's are not in Z, nor in Z• √ M ; indeed, we have the trivial solutions N(q) = q 2 (α = q ∈ Z ≥1 , s = 1, ν = q 2 , M v 2 = 0, u = q) or N(v
√ M ) = -M v 2 , v ∈ Z ≥1 , s = -1, ν = -M v 2 ,
u = 0), which are not given by the F.O.P. algorithm for simplicity. These viewpoints will be more convenient for our purpose and these conventions will be implicit in all the sequel.
Since norm equations may have several solutions, we will use the following definition: Definition 1.2. Let M ∈ Z ≥2 be a square-free integer and let s ∈ {-1, 1}, ν ∈ Z ≥1 . We call fundamental solution (if there are any) of the norm equation u 1.3. Quadratic polynomial units. It is classical that the continued fraction expansion of √ m, for a positive square-free integer m, gives, under some limitations, the fundamental solution, in integers u, v ∈ Z ≥1 , of the norm equation u 2 -mv 2 = 4s, whence the fundamental unit
2 -M v 2 = 4sν, with u, v ∈ Z ≥1 , the corresponding integer α := 1 2 (u + v √ M ) ∈ Z + K of minimal trace u.
ε m := 1 2 (u + v √ m) of Q( √ m).
A similar context of "polynomial continued fraction expansion" does exist and gives polynomial solutions (u(t), v(t)), of u(t) 2 -m(t)v(t) 2 = 4s, for suitable m(t) ∈ Z[t] (see, e.g., [McL, McLZ, Nat, Ram, SaAb]). This gives the quadratic polynomial units E(t) := 1 2 u(t) + v(t) m(t) . We will base our study on the following polynomials m(t) that have interesting universal properties (a first use of this is due to Yokoi [Yo, Theorem 1]).
Definition 1.3. Consider the square-free polynomials m sν (t) = t 2 -4sν ∈ Z[t], where s ∈ {-1, 1}, ν ∈ Z ≥1 . The continued fraction expansion of √ t 2 -4sν leads to the integers A sν (t) := 1 2 t + √ t 2 -4sν
, of norm sν and trace t, in a quadratic extension of Q(t). When ν = 1, one obtains the units E s (t) := 1 2 t + √ t 2 -4s , of norm s and trace t.
The continued fraction expansion, with polynomials, gives the fundamental solution of the norm equation (cf. details in [McL]), but must not be confused with that using evaluations of the polynomials; for instance, for
t 0 = 7, m 1 (t 0 ) = 7 2 -4 = 45 is not square-free and E 1 (7) = 1 2 (7 + √ 45) = 1 2 (7 + 3 √ 5) is indeed the fundamental solution of u 2 -45v 2 = 4, but not the fundamental unit ε 5 of Q( √ 45) = Q( √ 5
), since one gets E 1 (7) = ε 6 5 . 1.4. Main algorithmic results. We will prove that the families of polynomials m sν (t) = t 2 -4sν, s ∈ {-1, 1}, ν ∈ Z ≥1 , are universal to find all square-free integers M for which there exists a privileged solution α ∈ Z + K to N(α) = sν; moreover, the solution obtained is the fundamental one, in the meaning of Definition 1.2 saying that α is of minimal trace t ≥ 1. This is obtained by means of an extremely simple algorithmic process (described § 1.1) and allows to get unbounded lists of quadratic fields, given by means of their Kummer radical, and having specific properties.
The typical results, admitting several variations, are given by the following excerpt of statements using quadratic polynomial expressions m(t) deduced from some m sν (t):
Theorem 1.4. Let B be an arbitrary large upper bound. As the integer t grows from 1 up to B, for each first occurrence of a square-free integer M ≥ 2, in the factorizations m(t) =: M r 2 , we have the following properties for Definition 1.2 (Theorem 4.6). b) Let p be an odd prime number and consider the following polynomials.
K := Q( √ M ): a) Consider the polynomials m(t) = t 2 -4sν, s ∈ {-1, 1}, ν ∈ Z ≥2 : (i) m(t) = t 2 -4s. The unit 1 2 (t + r √ M ) is the fundamental unit of norm s of K (Theorem 4.2). (ii) m(t) = t 2 -4sν. The integer A sν (t) = 1 2 (t + r √ M ) is the fundamental integer in Z + K of norm sν in the meaning of
(i) m(t) ∈ p 4 t 2 ± 1, p 4 t 2 ± 2, p 4 t 2 ± 4, 4p 4 t 2 ± 2, 9p 4 t 2 ± 6, 9p 4 t 2 ± 12, . . . . The field K is non p-rational apart from few explicit cases (Theorem 6.3). (ii) m(t) = 3 4 t 2 -4s. The field F 3,M := Q( √ -3M
) has a class number divisible by 3, except possibly when the unit 1 2 9t + r √ M is a third power of a unit (Theorem 7.1). Up to B = 10 5 , all the 3-class groups are non-trivial, apart from few explicit cases.
(iii) m(t) = p 4 t 2 -4s, p ≥ 5.
The imaginary cyclic extension
F p,M := Q (ζ p -ζ -1 p ) √ M , of degree p -1,
has a class number divisible by p, except possibly when the unit 1 2
p 2 t + r √ M ) is a p-th power of unit (Theorem 7.2).
For p = 5, the quartic cyclic field F 5,M is defined by the polynomial P = x 4 +5M x 2 +5M 2 and up to B = 500, all the 5-class groups are non-trivial, except for M = 29. Moreover, this principle gives lists of solutions by means of Kummer radicals (or discriminants) of a regularly increasing order of magnitude, these lists being unbounded as B → ∞. See, for instance Proposition 3.1 for lists of Kummer radicals M , then Section 2 for lists of arithmetic invariants (class groups, p-ramified torsion groups, logarithmic class groups of K), and Theorems 6.4, 6.5, giving unlimited lists of units, local (but non global) pth powers, whence lists of non-p-rational quadratic fields.
All the lists have, at least, O(B) distinct elements, but most often B -o(B), and even B distinct elements in some situations.
So, we intend to analyze these results in a computational point of view by means of a new strategy to obtain arbitrary large list of fundamental units, or of other quadratic integers, even when radicals m sν (t) =: M (t)r(t) 2 , t ∈ Z ≥1 , are not square-free (i.e., r(t) > 1). By comparison, it is well known that many polynomials, in the literature, give subfamilies of integers (especially fundamental units) found by means of the m sν 's with assuming that the radical m sν (t) are square-free.
Remark 1.5. It is accepted and often proven that the integers t 2 -4sν are square-free with a non-zero density and an uniform repartition (see, e.g., [FrIw], [Rud]); so an easy heuristic is that the last M = M B of the list VM is equivalent to B 2 . This generalizes to the F.O.P. algorithm applied to polynomials of the form
a n t n + a n-1 t n-1 + • • • + a 0 , n ≥ 1, a n ∈ Z ≥1 , and gives the equivalent M B ∼ a n B n as B → ∞.
The main fact is that the F.O.P. algorithm will give fundamental solutions of norm equations u 2 -M v 2 = 4sν (see Section 4), whatever the order of magnitude of r; for small values of M , r may be large, even if r(t) tends to 1 as M (t) tends to its maximal value, equivalent to B 2 , as t → ∞. Otherwise, without the F.O.P. principle, one must assume m sν (t) square-free in the applications, as it is often done in the literature.
2.
First examples of application of the F.O.P. algorithm 2.1. Kummer radicals and discriminants given by m s (t). Recall that, for t ∈ Z ≥1 , we put m s (t) = M (t)r(t) 2 , M (t) square-free.
2.1.1. Kummer radicals. The following program gives, as t grows from 1 up to B, the Kummer radical M and the integer r obtained from the factorizations of m ′ 1 (t) = t 2 -1, under the form M r 2 ; then we put them in a list LM and the F.O.P. algorithm gives the pairs C = core(mt, 1) = [M, r], in the increasing order of the radicals M and removes the duplicate entries:
MAIN PROGRAM GIVING KUMMER RADICALS {B=10^6;LM=List;for(t=1,B,mt=t^2-1;C=core(mt,1);L=List(C); listput(LM,vector (2,c,L[c]))); M=vecsort(vector(B,c,LM[c]),1,8); print (M);print("#M = ",#M)} [M,r]= [0,1], [2,2], [3,1], [5,[START_REF]The case b) of Theorem 6.3 gives an analogous program and will be also illustrated in the Section 7 about p-class groups, especially for the case p = 3. The results are similar and give, in almost cases, non[END_REF], [6,2], [7,3], [10,6], [START_REF]Ln=List;LM=List; for(t=1[END_REF]3], [13,180], [14,[START_REF]The case b) of Theorem 6.3 gives an analogous program and will be also illustrated in the Section 7 about p-class groups, especially for the case p = 3. The results are similar and give, in almost cases, non[END_REF], [15,1], [17,8], [19,39], [START_REF]+ 5 * M * x 2 + 5 * M 2 , still giving a particular faster program than the forthcoming one, valuable for any p ≥ 3: LISTS OF 5-CLASS GROUPS OF QUARTIC FIELDS {p=5;B=100;s=-1;Lp=List;Lh=List;p2=p^2;p4=p^4;for([END_REF]12], [22,42], [23,5], [26,10], [29,1820], [30,2], [31,273], [33,[START_REF]The case b) of Theorem 6.3 gives an analogous program and will be also illustrated in the Section 7 about p-class groups, especially for the case p = 3. The results are similar and give, in almost cases, non[END_REF], [34,6], [35,1], [37,12], [38,6], [39,[START_REF]The case b) of Theorem 6.3 gives an analogous program and will be also illustrated in the Section 7 about p-class groups, especially for the case p = 3. The results are similar and give, in almost cases, non[END_REF], [41,320], [42,2], [43,531], [46,3588], [47,7], [51,7], [53,9100], [55,12], [57,20] {B=10^6;LD=List;for(t=1,[START_REF]T = prime(t) (traces are prime), s = -1 (B = 10 4[END_REF])]); listput(LD,vector (1,c,L[c])); L=List([quaddisc(core(t^2+1))]); listput(LD,vector (1,c,L[c])));D=vecsort(vector (2*B,c,LD[c]),1,8); print(D);print("#D = ",#D)} [D]= [[0], [5] This possibility is valid for all programs of the paper; we will classify the Kummer radicals, instead of discriminants, because radicals are more related to norm equations, but any kind of output can be done easily.
2.2. Application to minimal class numbers. One may use this classification of Kummer radicals and compute orders h of some invariants, then apply the F.O.P. principle, with the instruction VM = vecsort(vector(B, c, LM[c]), 2, 8) to the outputs [M, h], to get successive possible class numbers h in ascending order (we use here m 1 (t) = t 2 -4):
2.4. Application to minimal orders of logarithmic class groups. For the definition of the logarithmic class group T p governing Greenberg's conjecture [START_REF] Greenberg | On the Iwasawa invariants of totally real number fields[END_REF], see [START_REF] Jaulent | Classes logarithmiques des corps de nombres[END_REF]Jau4], and for its computation, see [BeJa] which gives the structure as abelian group. The following program, for p = 3, gives the results by ascending orders (all the structures are cyclic in this interval):
MAIN PROGRAM GIVING SUCCESSIVE CLASSLOG NUMBERS {B=10^5;LM=List;for(t=3,B,M1=core(t^2-4);M2=core(t^2+4); K1=bnfinit(x^2-M1);Clog= bnflog(K1,3) [1];C=1;for(j=1,#Clog, C=C*Clog[j]);L=List([M1,Clog,C]);listput (LM,vector(3,c,L[c])); K2=bnfinit(x^2-M2);Clog= bnflog(K2,3) [1];C=1;for(j=1,#Clog, C=C*Clog[j]);L=List([M2,Clog,C]);listput (LM,vector(3,c,L[c]))); [5,[] ,1],[257,[3],3],[2917,[9],9],[26245,[27],27],[577601,[81],81],[236197,[243],243], [19131877,[729],729], [172186885,[2187],2187], [1549681957,[6561],6561]] #VM = 9
VM=vecsort(vector(2*(B-2),c,LM[c]),3,8); print(VM);print("#VM = ",#VM)} [M,Clog,#Clog]= [
3. Units E s (t) vs fundamental units ε M(t)
3.1. Polynomials m s (t) = t 2 -4s and units E s (t). This subsection deals with the case ν = 1 about the search of quadratic units (see also [Yo,Theorem 1]). The polynomials
m s (t) ∈ Z[t] define, for t ∈ Z ≥1 , the parametrized units E s (t) = 1 2 (t + √ t 2 -4s) of norm s in K := Q( √ M )
, where M is the maximal square-free divisor of t 2 -4s. But M is unpredictable and gives rise to the following discussion depending on the norm
S := N(ε M ) of the fundamental unit ε M =: 1 2 (a + b √ M ) of K and of the integral basis of Z K : (i) If s = 1, E 1 (t) = 1 2 (t + √ t 2 -4) is of norm 1; so, if S = 1, then E 1 (t) ∈ ε M , but if S = -1, necessarily E 1 (t) ∈ ε 2 M . If s = -1, E -1 (t) = 1 2 (t + √ t 2 + 4) is of norm -1; so, necessarily the Kummer radical M is such that S = -1. (ii) If t is odd, E s (t) is written with half-integer coefficients, t 2 -4s ≡ 1 (mod 4), giving M ≡ 1 (mod 4) and Z K = Z 1+ √ M 2
; so ε M can not be with integer coefficients (a and b are necessarily odd).
If t is even, M may be arbitrary as well as ε M .
We can summarize these constraints by means of the following Table : (3.1)
t 2 -4s S = N(ε M ) Es(t) ∈ ε M = 1 2 (a + b √ M ) t 2 -4, t even 1 (resp. -1) ε M (resp. ε 2 M ) a, b odd or even t 2 -4, t odd 1 (resp. -1) ε M (resp. ε 2 M ) a, b odd t 2 + 4, t even -1 ε M a, b odd or even t 2 + 4, t odd -1 ε M a, b odd
Recall that the F.O.P. algorithm consists, after choosing the upper bound B, in establishing the list of first occurrences, as t increases from 1 up to B, of any square-free integer M ≥ 2, in the factorization m s (t) = M (t)r(t) 2 (whence M = M (t 0 ) for some t 0 and M = M (t) for all t < t 0 ), and to consider the unit:
E s (t) := 1 2 t + t 2 -4s = 1 2 t + r(t) M (t) , of norm s. The F.O.P. is necessary since, if t 1 > t 0 gives the same Kummer radical M , E s (t 0 ) = ε n0 M and E s (t 1 ) = ε n1 M with n 1 > n 0 .
We shall prove (Theorem 4.2) that, under the F.O.P. algorithm, one always obtains the minimal possible power n ∈ {1, 2} in the writing E s (t) = ε n M , whence n = 2 if and only if s = 1 and S = -1, which means that E s (t) is always the fundamental unit of norm s.
The following result shows that any square-free integer M ≥ 2 may be obtained for B large enough.
Proposition 3.1. Consider the polynomial m 1 (t) = t 2 -4. For any square-free integer M ≥ 2, there exists t ≥ 1 such that m 1 (t) = M r 2 . Proof. The corresponding equation t 2 -4 = M r 2 becomes of the form t 2 -M r 2 = 4. Depending on the writing in Z[ √ M ] (M ≡ 2, 3 (mod 4)) or Z 1+ √ M 2 (M ≡ 1 (mod 4)), of the powers ε n M = 1 2 (t + r √ M ), n ≥ 1, of the fundamental unit ε M , this selects infinitely many t ∈ Z ≥1 . Remark 3.2. One may use, instead, the polynomial m ′ 1 (t) = t 2 -1 since for any funda- mental unit of the form ε M = 1 2 (a+b √ M ), a, b odd, then ε 3 M ∈ Z[ √ M ]
, but some radicals are then obtained with larger values of t; for instance, m 1 (5) = 21 and m ′ 1 (55) = 21 • 12 2 corresponding to 55 + 12
√ 21 = 1 2 (5 + √ 21) 2 .
Since for t = 2t ′ , t 2 + 4s = 4(t ′2 -s) gives the same Kummer radical as t ′2 -s, in some cases we shall use m ′ s (t) := t 2 -s and especially m ′ 1 (t) := t 2 -1 which is "universal" for giving all Kummer radicals.
With the polynomials m -1 (t) = t 2 + 4 or m ′ -1 (t) = t 2 + 1 a solution does exist if and only if N(ε M ) = -1 and one obtains odd powers of ε M .
Checking of the exponent
n in E s (t) = ε n M(t) .
The following program determines the expression of E s (t) as power of the fundamental unit of K; it will find that there is no counterexample to the relation
E s (t) ∈ {ε M(t) , ε 2 M(t)
}, depending on S, from Table (3.1); this will be proved later (Theorem 4.2). So these programs are only for verification, once for all, because they unnecessarily need much more execution time.
Since E s (t) is written in 1 2 Z[ √ M
] and ε M on the usual Z-basis of Z K denoted {1, w} by PARI (from the instruction quadunit), we write E s (t) on the PARI basis {1, quadgen(D)}, where D = quaddisc(M) is the discriminant.
One must specify B and s, the program takes into account the first value 2 + s of t since t = 1, 2 are not suitable when s = 1; then the test n > (3 + s)/2 allows the cases n = 1 or 2 when s = 1. The output of counterexamples is given by the (empty) list Vn: by the following ones (but any information can be put in L; the sole condition being to put M as first component): (LM,vector(3,c,L[c])), listput (LN,vector(3,c,Ln[c]))
L = List([M, n, t]), listput
or simply:
L = List([M, t]), listput(LM, vector(2, c, L[c])), listput(LN, vector(2, c, Ln[c]))
giving the parameter t whence the trace, then the whole integer of Q( √ M ); for instance for m -1 (t) = t 2 + 4 and the general program with outputs [M, n, t]:
(iii) When several polynomials m i (t), 1 ≤ i ≤ N , are considered together (to get more Kummer radicals solutions of the problem), there is in general commutativity of the two sequences in for(t = 1, B, for
(i = 1, N, mt = • • • )) and for(i = 1, N, for(t = 1, B, mt = • • • )).
But we will always use the first one.
Application of the F.O.P. algorithm to norm equations
We will speak of solving a norm equation in
K = Q( √ M ), for the search of integers α ∈ Z + K such that N(α) = sν, for s ∈ {-1, 1} and ν ∈ Z ≥1 given (i.e., α = 1 2 u + v √ M , u, v ∈ Z ≥1 ).
If the set of solutions is non-empty we will define the notion of fundamental solution; we will see that this definition is common to units (ν = 1) and non-units.
We explain, in Theorem 4.6, under what conditions such a fundamental solution for ν > 1 does exist, in which case it is necessarily unique and found by means of the F.O.P. , algorithm using m -1 (t) or m 1 (t) (depending in particular on S).
Note that the resulting PARI programs only use very elementary instructions and never the arithmetic ones defining K (as bnfinit, K.fu, bnfisintnorm, ...); whence the rapidity even for large upper bounds B.
4.1.
Main property of the trace map for units. In the case ν = 1, let S = N(ε M ); we will see that α defines the generator of the group of units of norm
s of Q( √ M ) when it exists (whence ε M if s = S or ε 2 M if S = -1 and s = 1). Theorem 4.1. Let M ≥ 2 be a square-free integer. Let ε = 1 2 (a + b √ M ) > 1 be a unit of K := Q( √ M ) (non-necessarily fundamental). Then T(ε n ) defines a strictly increasing sequence of integers for n ≥ 1. 1 Proof. Set ε = 1 2 (a -b √ M )
for the conjugate of ε and let s = εε = ±1 be the norm of ε;
then the trace of ε n is T n := ε n + ε n = ε n + s n
ε n . Thus, we have:
T n+1 T n = ε n+1 + s n+1 ε n+1 ε n + s n ε n = ε 2(n+1) + s n+1 ε n+1 × ε n ε 2n + s n = ε 2(n+1) + s n+1 ε 2n+1 + s n ε .
To prove the increasing, consider ε 2n+1 + s n ε and ε 2(n+1) + s n+1 , which are positive for all n since ε > 1; then:
(4.1) ∆ n (ε) := ε 2(n+1) + s n+1 -(ε 2n+1 + s n ε) = ε 2(n+1) -ε 2n+1 + s n+1 -s n ε = ε 2n+1 (ε -1) -s n (ε -s). (i) Case s = 1. Then ∆ n (ε) = (ε -1)(ε 2n+1 -1) is positive. (ii) Case s = -1. Then ∆ n (ε) = ε 2(n+1) -ε 2n+1 -(-1) n (ε + 1
). If n is odd, the result is obvious; so, it remains to look at the expression for n = 2k, k ≥ 1:
(4.2) ∆ 2k (ε) = ε 4k+2 -ε 4k+1 -ε -1. Let f (x) := x 4k+2 -x 4k+1 -x -1; then f ′ (x) = (4k + 2)x 4k+1 -(4k + 1)x 4k -1 and f ′′ (x) = (4k + 1)x 4k-1 [(4k + 2)x -4k] ≥ 0 for all x ≥ 1. Thus f ′ (x)
is increasing for all x ≥ 1 and since f ′ (1) = 0, f (x) is an increasing map for all x ≥ 1; so, for k ≥ 1 fixed, ∆ 2k (ε) is increasing regarding ε.
Since the smallest unit ε > 1 with positive coefficients is ε 0 := 1+ √ 5 2 ≈ 1.6180... we have to look, from (4.2), at the map F (z) := ε 4z+2 0 -ε 4z+1 0 -ε 0 -1, for z ≥ 1, to check if there exists an unfavorable value of k; so:
F ′ (z) := 4 log(ε 0 )ε 4z+2 0 -4 log(ε 0 )ε 4z+1 0 = 4 log(ε 0 )ε 4z+1 0 (ε 0 -1) > 0.
Since F (1) ≈ 4.2360 > 0, one gets ∆ n (ε) > 0 in the case s = -1, n even. 4.2. Unlimited lists of fundamental units of norm s, s ∈ {-1, 1}. We have the main following result.
Theorem 4.2. Let B ≫ 0 be given. Let m s (t) = t 2 -4s, s ∈ {-1, 1} fixed. Then, as t grows from 1 up to B, for each first occurrence of a square-free integer M ≥ 2 in the factorization m s (t) = M r 2 , the unit E s (t) = 1 2 (t + r √ M ) is the fundamental unit of norm s of Q( √ M ) (according to the Table (3.1) in § 3.1, we have E s (t) = ε M if s = -1 or if s = S = 1, then E s (t) = ε 2
M if s = 1 and S = -1). Proof. Let M 0 ≥ 2 be a given square-free integer. Consider the first occurrence t = t 0 giving m s (t 0 ) = M 0 r(t 0 ) 2 if it exists (existence always fulfilled for s = 1 by Proposition 3.1); whence
M 0 = M (t 0 ). Suppose that E s (t 0 ) = 1 2 t 0 + r(t 0 ) M (t 0 ) is not the fundamental unit of norm s, ε n0 M(t0) (n 0 ∈ {1, 2}) but a non-trivial power (ε n0 M(t0) ) n , n > 1. Put ε n0 M(t0) =: 1 2 (a+b M (t 0 )); from Table (3.1), n 0 ∈ {1, 2} is such that N(ε n0 M(t0) ) = s (recall that if s = 1 and S = -1, n 0 = 2, if S = s = 1, n 0 = 1; if s = -1, necessarily S = -1 and n 0 = 1,
otherwise there were no occurrence of M 0 for s = -1 and S = 1).
Then, Theorem 4.1 on the traces implies 0 < a < t 0 . We have:
a 2 -M (t 0 )b 2 = 4s and m s (a) = a 2 -4s =: M (a)r(a) 2 ; but these relations imply M (t 0 )b 2 = M (a)r(a) 2 , whence M (a) = M (t 0 ) = M 0 .
That is to say, the pair t 0 , M 0 ) compared to a, M (a) = M 0 ) , was not the first occurrence of M 0 (absurd).
Corollary 4.3. Let t ∈ Z ≥1 and let E 1 (t) = 1 2 t + √ t 2 -4 of norm 1. Then E 1 (t) is a square of a unit of norm -1, if and only if there exists t ′ ∈ Z ≥1 such that t = t ′2 + 2; thus E 1 (t) = 1 2 (t ′ + √ t ′2 + 4) 2 = (E -1 (t ′ )) 2 .
So, the F.O.P. algorithm, with m 1 (t) =:
M (t)r(t) 2 , gives the list of [M(t), t] for which 1 2 t+ √ t 2 -4 = ε 2 M (resp. ε M ) if t-2 = t ′2 (resp. if not).
S = -1. Then t ′2 + 4 = M r ′2 for t ′ minimal gives the fundamental unit ε M = 1 2 (t ′ + r ′ √ M ) and t 2 -4 = M r 2 , for t minimal, gives ε 2 M ; whence t = t ′2 + 2 and r = r ′ t ′ . For s = -1, hence m -1 (t) = t 2 + 4, t ∈ [1, B],
we know, from Theorem 4.2, that the F.O.P. algorithm gives always the fundamental unit
ε M of Q( √ M ) whatever its writing in Z[ √ M ] or in Z 1+ √ M 2
.
For s = 1 one obtains ε 2 M if and only if S = -1. So we can skip checking and use the following simpler program with larger upper bound B = 10 7 ; the outputs are the Kummer radicals [M] in the ascending order (specify B and s):
(27) = 1 2 (5+ √ 29) 2 = ε 2 29
. Some Kummer radicals giving units ε M of norm -1 do not appear up to B = 10 7 , e.g., M ∈ {241, 313, 337, 394, . . .}; but all the Kummer radicals M , such that S = -1, ultimately appear as B increases. So, as B → ∞, any unit is obtained, which suggests the existence of natural densities in the framework of the F.O.P. algorithm. More precisely, in the list LM (i.e., before using VM = vecsort(vector(B, c, LM[c]), 1, 8)), any Kummer radical M does appear in the list as many times as the trace of ε n M (n odd) is less than B, which gives for instance (B = 10 3 ): This fact with Corollaries 4.3,4.4 may suggest some analytic computations of densities (see a forthcoming paper [START_REF] Gras | New characterization of the norm of the fundamental unit of Q( √ M )[END_REF] for more details). For this purpose, we give an estimation of the gap #LM -#VM = B -#VM.
Theorem 4.5. Consider the F.O.P. algorithm for units, in the interval [1, B] and s ∈ {-1, 1}. Let ∆ be the gap between B and the number of results. Then, as B → ∞:
(i) For the polynomial m -1 (t) = t 2 + 4, ∆ ∼ B 1 3 , (ii) For the polynomial m 1 (t) = t 2 -4, ∆ ∼ B 1 2 ,
Proof. (i) In the list LM of Kummer radicals giving units of norm -1, we know, from Theorem 4.2, that one obtain first the fundamental unit ε 0 := ε M0 from the relation t 2 0 + 4 = M 0 r 2 0 , then its odd powers ε 2n+1
M0
for n ∈ [1, n max ] corresponding to some t n such that t 2 n + 4 = M 0 r 2 n and t n ≤ t max defined by the equivalence:
1 2 t max + r max M 0 ∼ 1 2 t 0 + r 0 M 0 2nmax+1
in an obvious meaning. Thus, the "maximal unit" is equivalent to B giving Of course there will be repetitions in the sum, but a more precise estimation is not necessary and we obtain an upper bound:
n max ∼ 1 2 log B log t 0 -1 .
∆ ∼ t∈[1,b] 1 2 log B log t -1 ∼ log b t∈[1,b] 1 log t ∼ log b • b log b ∼ b = B 1 3 .
(ii) In the case of norm 1, the list LM is relative to the fundamental units of norm 1 with all its powers (some are the squares of the fundamental units of norm -1); the reasoning is the same, replacing 1 3 by 1 2 . 4.3. Unlimited lists of fundamental integers of norm sν, ν ≥ 2. The F.O.P. algorithm always give lists of results, but contrary to units, some norms sν do not exist in a given field K; in other words, the F.O.P. only give suitable Kummer radicals since sν is given. Recall the well known: Theorem 4.6. Let s ∈ {-1, 1} and ν ∈ Z ≥2 be given.
(i) A fundamental solution of the norm equation u 2 -M v 2 = 4sν (Definition 1.2) does exist if and only if there exists an integer principal ideal a of absolute norm ν with a generator α ∈ Z + K whose norm is of sign s. Under the existence of a = (α), with N(α) = s ′ ν, a representative α ∈ Z + K , modulo ε M , does exist whatever s as soon as S = -1; if S = 1, a fundamental solution α ∈ Z + K does exist if and only if s ′ = s.
(ii) When the above conditions are fulfilled, the fundamental solution corresponding to the ideal a is unique (in the meaning that two generators of a in Z + K , having same trace, are equal) and found by the F.O.P. algorithm.
Proof. (i) If a = (α), of absolute norm ν, with α = 1 2 (u + v √ M ) ∈ Z + K , one obtains u 2 -M v 2 = 4sν for a suitable s ∈ {-1, 1} giving a solution with t = u; then m sν (t) = t 2 -4sν = M (t)r 2 , whence M = M (u) and r = v.
Reciprocally, assume that the corresponding equation (in unknowns t ≥ 1, s = ±1) t 2 -4sν = M r 2 , M ≥ 2 square-free, has a solution, whence t 2 -M r 2 = 4sν. Set α := 1 2 (t + r √ M ) ∈ Z + K ; then one obtains the principal ideal a = (α)Z K of absolute norm ν.
(ii) Assume that α, β are two generators of a in Z + K with common trace t ≥ 1. Put
β = α • ε n M , n ∈ Z, n = 0. Then: T(β) = α • ε n M + α σ • ε nσ M = α 2 • ε 2n M + sS n ν α • ε n M , T(α) = α 2 + s ν α ; thus T(β) = T(α) is equivalent to α 2 • ε 2n M + sS n ν = α 2 • ε n M + s νε n M , whence to: α 2 • ε n M (ε n M -1) = (ε n M -S n )s ν.
The case
S n = -1 is not possible since N(β) = N(α) = N(ε n M ) = S n ; so, S n = 1, in which case, one gets α 2 • ε n M = s ν = s α 1+σ , thus α σ = α • ε n M and β = α σ , but in that case, β / ∈ Z + K (absurd)
. Whence the unicity. Remark 4.7. Consider the above case where α and β are two generators of a in Z + K with common trace t ≥ 1 and norm sν. Thus, we have seen that β = α σ = α • ε n M . The ideal a = (α) is then invariant by G := Gal(K/Q), so it is of the form a = (q) × p|D p ep , where q ∈ Z, D is the discriminant of K, p 2 = pZ K and e p ∈ {0, 1}. In other words, we have to determine the principal ideals, products of distinct ramified prime ideals. This is done in details in [Gra10, § § 2.1, 2.2] For instance, let M = 15 and sν = -6. One has the fundamental solution α = 3 + √ 15 of norm -6, with the trace t = 6; then
α σ = 3 - √ 15 = α • (-4 + √ 15) = α • (-ε -1 M )
; similarly, for sν = 10, one has the fundamental solution α = 5 + √ 15 of norm 10, with trace t = 10 and the relation
α σ = 5 - √ 15 = α • (4 - √ 15) = α • (ε σ M )
. These fundamental solutions are indeed given by the F.O.P. algorithm by means of the data [M, t]: [7,2], [10,[START_REF]The case b) of Theorem 6.3 gives an analogous program and will be also illustrated in the Section 7 about p-class groups, especially for the case p = 3. The results are similar and give, in almost cases, non[END_REF], [15,6] [-31,3], [-15,5], [-6,4], [-1,2], [1,7], [6,8], [10,20],[15,10],... Depending on the choice of the polynomials m -1 (t) or m 1 (t), consider for instance, the F.O.P. algorithm applied to M = 13 (for which S = -1), ν = 3, gives with m -1 (t) the solution [M = 13, t = 1] whence α = 1 2 (1 + √ 13) of norm -3; with m 1 (t) it gives [M = 13, t = 5], α = 1 2 (5 + √ 13) of norm 3; the traces 1 and 5 are minimal for each case. We then compute that 1 2 (5 + √ 13) = 1 2 (1 -√ 13)(-ε 13 ). But with M = 7 (for which S = 1), the F.O.P. algorithm with m -1 (t) and ν = 3 gives [M = 7, t = 4] but nothing with m 1 (t).
s.Nu=-6 [M,t]= [1,1],[6,24],
,... s.Nu=10 [M,t]= [-39,1],
Remark 4.8. A possible case is when there exist several principal integer ideals a of absolute norm νZ (for instance when ν = q 1 q 2 is the product of two distinct primes and if there exist two prime ideals q 1 , q 2 , of degree 1, over q 1 , q 2 , respectively, such that a := q 1 q 2 and a ′ := q 1 q σ 2 are principal). Let a =: (α) and a ′ =: (α ′ ) of absolute norm ν. We can assume that, in each set of generators, α and α ′ have minimal trace u and u ′ , and necessarily we have, for instance, u ′ > u; since the ideals a are finite in number, there exists an "absolute" minimal trace u defining the unique fundamental solution which is that found by the suitable F.O.P. algorithm.
For instance, let s = -1, ν = 15; the F.O.P. algorithm gives the solution [19,[START_REF]The case b) of Theorem 6.3 gives an analogous program and will be also illustrated in the Section 7 about p-class groups, especially for the case p = 3. The results are similar and give, in almost cases, non[END_REF], whence
α = 2 + √ 19 of norm -15. In K = Q( √ 19) we have prime ideals q 3 = (4 + √ 19) | 3, q 5 = (9 + 2 √ 19) | 5.
Then we obtain the fundamental solution with a = q σ 3 q 5 , while q 3 q 5 = (74 + 17 √ 19). The fundamental unit is ε M = 170 + 39 √ 19 of norm S = 1 and one computes some products ±αε n M giving a minimal trace with n = -1 and the non-fundamental solution 17 + 4 √ 19.
If ν = q|ν q nq , where q denotes distinct prime numbers, there exist integer ideals a of absolute norm νZ if and only if, for each inert q | ν then n q is even. In the F.O.P. algorithm this will select particular Kummer radicals M for which each q | ν, such that n q is odd, ramifies or splits in
K = Q( √ M ); this is equivalent to q | D (the discriminant of K = Q( √ M
)) or to ρ q := M q = 1 in terms of quadratic residue symbols; if so, we then have ideal solutions N(a) = νZ.
Let's write, with obvious notations a = q,q =0 q nq q, ρq=-1 q 2n ′ q q, ρq =1 q n ′ q q n ′′ q σ . Then the equation becomes N(a ′ ) = ν ′ Z for another integral ideal a ′ and another ν ′ | ν, where a ′ is an integer ideal "without any rational integer factor". Thus, N(α ′ ) = sν ′ is equivalent to a ′ = α ′ Z K . This depends on relations in the class group of K and gives obstructions for some Kummer radicals M . Once a solution a ′ principal exists (non unique) we can apply Theorem 4.6. For instance, [85,5] illustrates Theorem 4.6 with the solution α = 1 2 (5 + √ 85) of norm -15, with (α)Z K = q 3 q 5 , where 3 splits in K and 5 is ramified; one verifies that the ideals q 3 and q 5 are non-principal, but their product is of course principal. For this, one obtains the following PARI/GP verifications: [2,[2], [[3,1;0,1]]] idealfactor(k,3)=[ [3,[0,2] [3,[2,2] [3,[2,2]~,1,1,[0,-1]~], [5,[1,2]
k=bnfinit(x^2-85) k.clgp=
~,1,1,[-1,-1]~]1],[[3,[2,2]~,1,1,[0,-1]~]1] idealfactor(k,5)=[[5,[1,2]~,2,1,[1,2]~]2] bnfisprincipal(k,
~,1,1,[0,-1]~])=[[1]~,[1,0]~] bnfisprincipal(k,[5,[1,2]~,2,1,[1,2]~])=[[1]~,[1,1/3]~] A=idealmul(k,
~,2,1,[1,2]~]) bnfisprincipal(k,A)=[[0]~,[2,-1]~] nfbasis(x^2-85)=[1,1/2*x-1/2]
The data [[0], [2, -1]] gives the principality with generator [2, -1]
denoting (because of the integral basis {1, 1 2 x -1 2 } used by PARI), 2 -1 2 √ 85 -1 2 = 1 2 (5 - √ 85) = α σ . (iv) s = -1, ν = 9 × 25.
The case [37,5] may be interpreted as follows:
m -1 (5) = 5 2 + 4 • 9 • 25 = 5 2 • 37, whence A -1 (5) = 1 2 (5 + 5 √ 37) = 5 • 1 2 (1 + √ 37) =: 5B, where B := 1 2 (1 + √ 37
) is of norm -9 and 5 is indeed inert in K. Thus 1 2 (1 + √ 37)Z K is the square of a prime ideal q 3 over 3. The field K is principal and we compute that
q 3 = 1 2 (5 ± √ 37)Z K , q 2 3 = 1 2 (31 ± 5 √ 37)Z K . So, BZ K = 1 2 (1 + √ 37)Z K = 1 2 (31 + 5 √ 37)Z K or 1 2 (31 -5 √ 37)Z K .
We have ε 37 = 6 + √ 37 and we obtain that
B = 1 2 (31 -5 √ 37) • ε 37 , showing that α = 5 • 1 2 (1 + √ 37)
is the fundamental solution of the equation N(α) = 3 2 • 5 2 with minimal trace 5.
For larger integers ν, fundamental solutions are obtained easily, as shown by the following example with the prime ν = 1009: (v) s = -1, ν = 1009.
We have not dropped the negative radicals meaning, for instance with M = -839, that a solution of the norm equation does exist in Q( √ -839) with α = 1 2 (1 + √ -839), or with M = -14 giving α = 14 + √ -14.
Universality of the polynomials m sν
Let's begin with the following obvious result making a link with polynomials m sν .
Lemma 5.1. Let M ≥ 2 be a square-free integer and
K = Q( √ M ); then, any α ∈ Z + K is characterized by its trace a ∈ Z and its norm sν, s ∈ {-1, 1}, ν ∈ Z ≥1 ; from these data, α = 1 2 (a + b √ M ) where b is given by m sν (a) =: M b 2 .
Proof. From the equation α 2 -aα+sν = 0, we get α = 1 2 (a+ √ a 2 -4sν), where necessarily a 2 -4sν =: M b 2 (unicity of the Kummer radical) giving b > 0 from the knowledge of a and sν.
Mc Laughlin's polynomials.
Consider some polynomials that one finds in the literature; for instance that of Mc Laughlin [McL] obtained from "polynomial continued fraction expansion", giving formal units, and defined as follows.
Let m ≥ 2 be a given square-free integer and let 2,3,6}). For such m, u, v, each of the data below leads to the fundamental polynomial solution of the norm equation U (t) 2 -m(t)V (t) 2 = 1 (see [START_REF] Laughlin | Polynomial Solutions of Pell's Equation and Fundamental Units in Real Quadratic Fields[END_REF]), giving the parametrized units
E m = u + v √ m, u, v ∈ Z ≥1 , be the fundamental solution of the norm equation (or Pell-Fermat equation) u 2 -mv 2 = 1 (thus, E m = ε n0 m , n 0 ∈ {1,
E M(t) = U (t) + V (t)r M (t), of norm 1 of Q( M (t))
, where m(t) =: M (t)r(t) 2 , M (t) square-free.
The five polynomials m(t) are:
mcl 1 (t) = v 2 t 2 + 2ut + m, U (t) = v 2 t + u, V (t) = v; mcl 2 (t) = (u -1) 2 v 2 t 2 + 2t + m, U (t) = (u -1) v 4 t 2 + 2v 2 t + u, V (t) = v 3 t + v; mcl 3 (t) = (u + 1) 2 v 2 t 2 + 2t + m, U (t) = (u + 1) v 4 t 2 + 2v 2 t + u, V (t) = v 3 t + v; mcl 4 (t) = (u + 1) 2 v 2 t 2 + 2(u 2 -1)t + m, U (t) = (u + 1) 2 u -1 v 4 t 2 + 2(u + 1)v 2 t + u, V (t) = u + 1 u -1 v 3 t + v; mcl 5 (t) = (u -1) 2 v 6 t 4 + 4v 4 t 3 + 6v 2 t 2 + 2(u -1)(2u -1)t + m, U (t) = (u -1) v 6 t 3 + 3v 4 t 2 + 3v 2 t + u, V (t) = v 3 t + v.
Note that for mcl 1 (t) one may also use a unit
E m = u + v √ m of norm -1 since U (t) 2 -mcl 1 (t)V (t) 2 = u 2 -mv 2
, which is not possible for the other polynomials.
We may enlarge the previous list with cases where the coefficients of E m may be halfintegers defining more general units (as ε 5 , ε 13 of norm -1 in the case of mcl 1 (t), then as ε 21 of norm 1 for the other mcl(t)). This will give E m = ε m or ε 2 m . So we have the following transformation of the mcl(t), U (t), V (t), that we explain with mcl 1 (t). The polynomial mcl 1 (t) fulfills the condition
U (t) 2 -mcl 1 (t)V (t) 2 = u 2 -mv 2 , which is the norm of E m = u + v √ m; so we can use any square-free integer m ≡ 1 (mod 4) such that E m = 1 2 u + v √ m , u, v ∈ Z ≥1
odd, and we obtain the formal unit
E M(t) = 1 2 U (t) + V (t) mcl 1 (t)
under the condition t even to get U (t), V (t) ∈ Z ≥1 . This gives the polynomials mcl 6 (t) = v 2 t 2 +2ut+m and the coefficients
U (t) = 1 2 (v 2 t+u), V (t) = 1 2 v of a new unit, with mcl 6 (t) = M (t)r(t) 2
, for all t ≥ 0,. For the other mcl(t) one applies the maps t → 2t, t → 4t, depending on the degrees; so we obtain the following list, where the resulting unit is E M(t) = U (t) + V (t) m(t), of norm ±1, under the conditions m ≡ 1 (mod 4) and
ε m = 1 2 (u + v √ m), u, v odd: mcl 6 (t) = v 2 t 2 + 2ut + m, U (t) = 1 2 (v 2 t + u), V (t) = 1 2 v; mcl 7 (t) = (u -2) 2 v 2 t 2 + 2t + m, U (t) = 1 2 (u -2)(v 4 t 2 + 2v 2 t) + u , V (t) = 1 2 v 3 t + v ; mcl 8 (t) = (u + 2) 2 v 2 t 2 + 2t + m, U (t) = 1 2 (u + 2)(v 4 t 2 + 2v 2 t) + u , V (t) = 1 2 v 3 t + v ; mcl 9 (t) = (u + 2) 2 v 2 t 2 + 2(u 2 -4)t + m, U (t) = 1 2 (u + 2) 2 u -2 v 4 t 2 + 2(u + 2)v 2 t + u , V (t) = 1 2 u + 2 u -2 v 3 t + v ; mcl 10 (t) = (u -2) 2 v 6 t 4 + 4v 4 t 3 + 6v 2 t 2 + 4(u -2)(u -1)t + m, U (t) = 1 2 (u -2)(v 6 t 3 + 3v 4 t 2 + 3v 2 t) + u , V (t) = 1 2 v 3 t + v .
5.2. Application to finding units. In fact, these numerous families of parametrized units are nothing but the units E s (T ) = 1 2 (T + √ T 2 -4s) when the parameter T = U (t) is a given polynomial expression. This explain that the properties of the units E s (T ) are similar to that of the two universal units E s (t), for t ∈ Z ≥1 , but, a priori, the F.O.P. algorithm does not give fundamental units when T (t) is not a degree 1 monic polynomial; nevertheless it seems that the algorithm gives most often fundamental units, at least for all t ≫ 0. We give the following example, using for instance the Mc Laughlin polynomial mcl 10 (t) with m = 301, u = 22745, v = 1311, corresponding to, ε m = 1 2 (22745 + 1311 √ 301) of norm 1 (program of Section 3); this will give enormous units E M(t) =: ε n M(t) . The output is of the form [M(t), r(t), n]. Then there is no exception to E M(t) = ε M(t) (i.e., n = 1); moreover, one sees many cases of non-square-free integers mcl 10 (t):
Mc LAUGHLIN UNITS {B=10^3;LN=List;LM=List;u=22745;v=1311;for(t=1,B, mt=(u-2)^2*(v^6*t^4+4*v^4*t^3+6*v^2*t^2)+4*(u-2)*(u-1)*t+301; ut=1/2*((u-2)*(v^6*t^3+3*v^4*t^2+3*v^2*t)+u);vt=1/2*(v^3*t+v); C=core(mt,1);M=C [1];r=C [2];D=quaddisc (M);w=quadgen(D); Y=quadunit(D);res=Mod (M,4); if(res!=1,Z=ut+r*vt*w);if(res==1,Z=ut-r*vt+2*r*vt*w); z=1;n=0;while(Z!=z,z=z*Y;n=n+1);L=List ([M,r,n]); listput (LM,vector(3,c,L[c]))); VM=vecsort(vector(B,c,LM[c]),1,8); print(VM);print("#VM = ",#VM); for(k=1,#VM,n=VM[k] [3];if(n>1,Ln=VM[k]; listput (LN,vector(3,c,Ln[c]))));Vn=vecsort(LN,1,8); print("exceptional powers:",Vn)} [M,r,n] = [656527122296918386395032242,2,1],[1594671238615711306590405613,63245,1],[6538031892707128354912512481,1400,1],[8374054846220987469202089646,14,1] Using the Remark 1.5, with B = 10 3 and m(t) of degree 4, with leading coefficient:
a 4 B 4 = (22745 -2) 2 • 1311 6 • 10 12 = 2626102377422775499879732689000000000000,
one gets: log(2626102383534535069268098426753041168301)/ log(a 4 B 4 ) ≈ 1.000000000025... 6. Non p-rationality of quadratic fields 6.1. Recalls about p-rationality. Let p ≥ 2 be a prime number. The definition of p-rationality of a number field lies in the framework of abelian p-ramification theory. The references we give in this article are limited to cover the subject and concern essentially recent papers; so the reader may look at the historical of the abelian p-ramification theory that we have given in [START_REF] Gras | Practice of the Incomplete p-Ramification Over a Number Field -History of Abelian p-Ramification[END_REF]Appendix], for accurate attributions, from Šafarevič's pioneering results, about the numerous approaches (class field theory, Galois cohomology, pro-p-group theory, infinitesimal theory); then use its references concerning developments of this theory (from our Crelle's papers 1982-1983, Jaulent's infinitesimals [START_REF] Jaulent | S-classes infinitésimales d'un corps de nombres algébriques[END_REF] (1984), Jaulent's thesis [START_REF] Jaulent | L'arithmétique des ℓ-extensions Thèse de doctorat d'Etat[END_REF] (1986), Nguyen Quang Do's article [Ng] (1986), Movahhedi's thesis [Mov] (1988) and subsequent papers); all prerequisites and developments are available in our book [Gra1] (2005). Definition 6.1. A number field K is said to be p-rational if K fulfills the Leopoldt conjecture at p and if the torsion group T K of the Galois group of the maximal abelian p-ramified (i.e., unramified outside p and ∞) pro-p-extension of K is trivial.
We will use the fact that, for totally real fields K, we have the formula:
(6.1) #T K = #C ′ K • #R K • #W K , where C ′
K is a subgroup of the p-class group C K and where W K depends on local and global p-roots of unity; for
K = Q( √ M ) and p > 2, C ′ K = C K and W K = 1 except if p = 3 and M ≡ -3 (mod 9), in which case W K ≃ Z/3Z. For p = 2, C ′ K = C K except if K( √ 2)/K is unramified (i.e., if M = 2M 1 , M 1 ≡ 1 (mod 4))
. Then R K is the "normalized p-adic regulator" of K (general definition for any number field in [START_REF] Gras | The p-adic Kummer-Leopoldt Constant: Normalized p-adic Regulator[END_REF]Proposition 5.2]). For
K = Q( √ M ) and p = 2, #R K ∼ 1 p log p (ε M ); for p = 2, #R K ∼ 1 2 d log 2 (ε M )
, where d ∈ {1, 2} is the number of prime ideals above 2.
So #T K is divisible by the order of R K , which gives a sufficient condition for the nonp-rationality of K. Since C K = W K = 1 for p ≫ 0, the p-rationality only depends on R K in almost all cases. Proposition 6. 2. ([Gra6,Proposition 5.1]) Let K = Q( √ m) be a real quadratic field of fundamental unit ε m . Let p > 2 be a prime number with residue degree f ∈ {1, 2}.
(i) For p ≥ 3 unramified in K, v p (#R K ) = v p (ε p f -1 m -1) -1. (ii) For p > 3 ramified in K, v p (#R K ) = 1 2 (v p (ε p-1 -1) -1)
, where p 2 = (p).
(iii) For p = 3 ramified in
K, v 3 (#R K ) = 1 2 (v p (ε 6 -1) -2 -δ)
, where p 2 = (3) and δ = 1 (resp. δ = 3) if m ≡ -3 (mod 9) (resp. m ≡ -3 (mod 9)).
A sufficient condition for the non-triviality of R K that encompasses all cases (since the decomposition of p in Q( M (t)) is unpredictable in the F.O.P. algorithm) is log p (ε m ) ≡ 0 (mod p 2 ); this implies that ε m is a local pth power at p. It suffices to force the parameter t to be such that a suitable prime-to-p power of E s (t) = 1 2 t + r(t) M (t) is congruent to 1 modulo p 2 . So, exceptions may arrive only when E s (t) is a global pth power. 6.2. Remarks about p-rationality and non-p-rationality. In some sense, the prationality of K comes down to saying that the p-arithmetic of K is as simple as possible and that, on the contrary, the non p-rationality is the standard context, at least for some p for K fixed and very common when K varies in some families, for p fixed. a) In general, most papers intend to find p-rational fields, a main purpose being to prove the existence of families of p-rational quadratic fields (see, e.g., [START_REF] Assim | Half-integral weight modular forms and real quadratic p-rational fields[END_REF][START_REF] Benmerieme | Les corps multi-quadratiques p-rationnels[END_REF][START_REF] Boeckle | Wieferich Primes and a mod p Leopoldt Conjecture[END_REF]BeMo,[START_REF] Bouazzaoui | Fibonacci numbers and real quadratic p-rational fields[END_REF][START_REF] Barbulescu | Numerical verification of the Cohen-Lenstra-Martinet heuristics and of Greenberg's p-rationality conjecture[END_REF]By,[START_REF] Gras | Les θ-régulateurs locaux d'un nombre algébrique : Conjectures p-adiques[END_REF][START_REF] Gras | Heuristics and conjectures in the direction of a p-adic Brauer-Siegel theorem[END_REF]Kop,[START_REF] Maire | Composantes isotypiques de pro-p-extensions de corps de nombres et p-rationalité[END_REF][START_REF] Maire | A note on p-rational fields and the abc-conjecture[END_REF][START_REF] Chattopadhyay | On the p-rationality of consecutive quadratic fields[END_REF]); for this there are three frameworks that may exist in general, but, to simplify, we restrict ourselves to real quadratic fields:
(i) The quadratic field K is fixed and it is conjectured that there exist only finitely many primes p > 2 for which K is non p-rational, which is equivalent to the existence of finitely many p for which 1 p log p (ε K ) ≡ 0 (mod p). (ii) The prime p > 2 is fixed and it is proved/conjectured that there exist infinitely many p-rational quadratic field K, which is equivalent to the existence of infinitely many K's for which the p-class group is trivial and such that 1 p log p (ε K ) is a p-adic unit; this aspect is more difficult because of the p-class group.
(iii) One constructs some families of fields K(p) indexed by p prime. These examples of quadratic fields often make use of Lemma 5.1 to get interesting radicals and units.
For instance we have considered in [START_REF] Gras | Heuristics and conjectures in the direction of a p-adic Brauer-Siegel theorem[END_REF]§ 5.3] (as many authors), the polynomials t 2 p 2ρ + s for p-adic properties of the unit E = t 2 p 2ρ + s + tp ρ t 2 p 2ρ + 2s of norm 1.
Taking "ρ = 1 2 , t = 1", one gets the unit E = p + s + p(p + 2s) considered in [Ben] where it is proved that for p > 3, the fields Q( p(p + 2)) are p-rational since the p-class group is trivial (for analytic reasons) and the unit p+1+ p(p + 2) is not a local p-power. Note that 4p(p + 2s) = m 1 (2p + 2s), since N(E) = 1 for all s.
Similarly, in [BeMo], is considered the bi-quadratic fields Q( p(p + 2), p(p -2)), containing the quadratic field Q( p 2 -4) giving the unit 1 2 (p + p 2 -4) still associated to m 1 (p); the p-rationality comes from the control of the p-class group since the p-adic regulators are obviously p-adic units.
Finally, in [Kop], is considered the tri-quadratic fields Q( p(p + 2), p(p -2), √ -1) which are proven to be p-rational for infinitely many primes p; but these fields are imaginary, so that one has to control the p-class group by means of non-trivial analytic arguments.
The p-rational fields allow many existence theorems and conjectures (as the Greenberg's conjecture [START_REF] Greenberg | Galois representation with open image[END_REF] on Galois representations with open images, yielding to many subsequent papers as [START_REF] Assim | Half-integral weight modular forms and real quadratic p-rational fields[END_REF][START_REF] Benmerieme | Les corps multi-quadratiques p-rationnels[END_REF]BeMo,[START_REF] Bouazzaoui | Fibonacci numbers and real quadratic p-rational fields[END_REF][START_REF] Barbulescu | Numerical verification of the Cohen-Lenstra-Martinet heuristics and of Greenberg's p-rationality conjecture[END_REF]GrJa,[START_REF] Jaulent | L'arithmétique des ℓ-extensions Thèse de doctorat d'Etat[END_REF]Kop]); they give results in the pro-p-group Galois theory [START_REF] Maire | Composantes isotypiques de pro-p-extensions de corps de nombres et p-rationalité[END_REF]. Algorithmic aspects of p-rationality may be found in [START_REF] Gras | On p-rationality of number fields[END_REF][START_REF] Gras | Practice of the Incomplete p-Ramification Over a Number Field -History of Abelian p-Ramification[END_REF][START_REF] Pitoun | Computing the torsion of the p-ramified module of a number field[END_REF] and in [BeJa] for the logarithmic class group having strong connexions with T K in connection with another Greenberg conjecture [Gre1] (Iwasawa's invariants λ = µ = 0 for totally real fields); for explicit characterizations in terms of pramification theory, see [Jau4, Gra8], Greenberg's conjecture being obvious when T K = 1.
b)
We observe with the following program that the polynomials: m s (p + 1) = (p + 1) 2 -4s and m s (2p + 2) = 4(p + 1) 2 -4s always give p-rational quadratic fields, apart from very rare exceptions (only four ones up to 10 6 ) due to the fact that the units E s (p + 1) = 1 2 p + 1 + (p + 1) 2 -4s and E s (2p + 2) = p + 1 + (p + 1) 2 -s may be a local p-power as studied in [START_REF] Gras | Les θ-régulateurs locaux d'un nombre algébrique : Conjectures p-adiques[END_REF] in a probabilistic point of view (except in the case of E 1 (2p + 2) = 1 + p + p 2 + 2p ≡ 1 (mod p), with p 2 = (p), thus never local pth power):
{nu=8;L=List ([-4,-1,1,4]);for (j=1,4,d=L[j]; print("m(p)=(p+1)^2-(",d,")");forprime(p=3,10^6, M=core((p+1)^2-d);K=bnfinit(x^2-M); wh=valuation(K.no,p); Kmod=bnrinit(K,p^nu);CKmod=Kmod.cyc;val=0;d=#CKmod;for(k=1,; w=valuation (Cl,p);if(w>0,val=val+w));if(val>0, print("p=",p," M=",M," v_p(#(p-class group))=",wh, " v_p(#(p-torsion group) The case of p = 3, M = 15 does not come from the regulator, nor from the class group, but from the factor #W K = 3 since 15 ≡ -3 (mod 9); but this case must be considered as a trivial case of non-p-rationality. c) For real quadratic fields, the 2-rational fields are characterized via a specific genus theory and are exactly the subfields of the form Q( √ m) for m = 2, m = ℓ, m = 2ℓ, where ℓ is a prime number congruent to ±3 (mod 8) (see proof and history in [START_REF] Gras | On p-rationality of number fields[END_REF]Examples IV.3.5.1]). So we shall not consider the case p = 2 since the non-2-rational quadratic fields may be easily deduced, as well as fields with non-trivial 2-class group. d) Nevertheless, these torsion groups T K are "essentially" the Tate-Šafarevič groups (see their cohomological interpretations in [Ng]):
)=",val))))} m(p)=(p+1)^2+4, p=13 M=2
III 2 K := Ker H 2 (G K,Sp , F p ) → p∈Sp H 2 (G Kp , F p ) ,
where S p is the set of p-places of K, G K,Sp the Galois group of the maximal S p -ramified pro-p-extension of K and G Kp the local analogue over K p ; so their non-triviality has an important arithmetic meaning about the arithmetic complexity of the number fields (see for instance computational approach of this context in [START_REF] Gras | Tate-Shafarevich groups in the cyclotomic Z-extension and Weber's class number problem[END_REF] for the pro-cyclic extension of Q and the analysis of the Greenberg's conjecture [START_REF] Greenberg | On the Iwasawa invariants of totally real number fields[END_REF] in [START_REF] Gras | Algorithmic complexity of Greenberg's conjecture[END_REF]). When the set of places S does not contain S p , few things are known about G K,S ; see for instance Maire's survey [Mai] and its bibliography, then [Gra5, Section 3] for numerical computations.
In other words, the non-p-rationality (equivalent, for p > 2, to III 2 K = 0) is an obstruction to a local-global principle and is probably more mysterious than p-rationality. Indeed, in an unsophisticated context, it is the question of the number of primes p such that the Fermat quotient 2 p-1 -1 p is divisible by p, for which only two solutions are known; then non-p-rationality is the same problem applied to algebraic numbers, as units ε M ; this aspect is extensively developed in [START_REF] Gras | Les θ-régulateurs locaux d'un nombre algébrique : Conjectures p-adiques[END_REF] for arbitrary Galois number fields).
6.3. Families of local p-th power units -Computation of T K . We shall force the non triviality of R K to obtain the non-p-rationality of K.
(T ) = 1 2 T + √ T 2 -4 = 1 δ ap 4 t 2 -δs+p 2 t a 2 p 4 t 2 -2δas
, of norm 1 and local pth power at p. The cases (a, δ) ∈ {(1, 1), (1,2), (2,1), (3,1), (3,2), (4, 1), (5,1), (5,2)} give distinct units. (b) Consider T := t 0 + p 2 t and m s (T ) = T 2 -4s and the units of norm s:
E s (T ) = 1 2 T + T 2 -4s ;
they are, for all t, local pth power at p for suitable t 0 depending on p and s, as follows: (i) For t 0 = 0, the units E s (T ) = E s (p 2 t) are local pth powers at p.
(ii) For p ≡ 5 (mod 8), there exist s ∈ {-1, 1} and t 0 ∈ Z ≥1 solution of the congruence t 2 0 ≡ 2s (mod p 2 ) such that the units E s (T ) are local pth powers at p. (iii) We get the data (p = 3, s = -1, t 0 ∈ {4, 5}), (p = 7, s = 1, t 0 ∈ {10, 39}), (p = 11, s = -1, t 0 ∈ {19, 102}), (p = 17, s = -1, t 0 ∈ {24, 265}; s = 1, t 0 ∈ {45, 244}).
As t grows from 1 up to B, for each first occurrence of a square-free integer M ≥ 2 in the factorization m(t) = a 2 p 4 t 2 -2δas = M (t)r(t) 2 (case (a)), or the factorization m(t) = (t 0 + p 2 t) 2 -4s = M (t)r(t) 2 (case (b)), the quadratic fields Q( M (t)), are non p-rational, apart possibly when 1 δ ap 4 t 2 -δs + p 2 t r(t) M (t) ∈ ε p M(t) (case (a)), or
1 2 t 0 + p 2 t + (t 0 + p 2 t) 2 -4s ∈ ε p M(t) (case (b)).
Proof. The case (a) is obvious. Since the case (b) (i) is also obvious, assume t 0 ≡ 0 (mod p 2 ). We have:
E s (T ) 2 ≡ 1 2 T 2 -2s + T T 2 -4s (mod p 2 ), whence E s (T ) 2 ≡ t0 2 √ T 2 -4s (mod p 2 ) under the condition t 2 0 ≡ 2s (mod p 2 ). So, E s (T ) 4 ≡ 1 4 t 2 0 (t 2 0 -4s) ≡ -1 (mod p 2 )
, whence the result. One computes that t 2 0 ≡ 2s (mod p 2 ) has solutions for (p -1)(p + 1) ≡ 0 (mod 16) when s = 1 and (p -1)(p + 5) ≡ 0 (mod 16) when s = -1.
For instance, in case (a) we shall use m (b) has the advantage that the traces of the units are in O(t) instead of O(t 2 ) for case (a).
(t) = p 4 t 2 -s, m(t) = p 4 t 2 -2s, m(t) = p 4 t 2 -4s, m(t) = 9p 4 t 2 -6s, m(t) = 9p 4 t 2 -12s, m(t) = 4p 4 t 2 -2s, m(t) = 25p 4 t 2 -10s, m(t) = 25p 4 t 2 -20s. The case
Since in many computations we are testing if some unit E s (T ) is a global pth power, we state the following result which will be extremely useful in practice because it means that the exceptional cases are present only at the beginning of the F.O.P. list: Theorem 6.4. Let T be of the form T = ct h + c 0 , c ≥ 1, h ≥ 1, c 0 ∈ Z fixed and set T 2 -4s = M (t)r(t) 2 when t runs through Z ≥1 . For B ≫ 0, the maximal bound M pow B of the square-free integers M (t), obtained by the F.O.P. algorithm, for which E s (T ) := For instance T = t 0 + p 2 t, of the case (b) of Theorem 6.3, gives a bound M pow B , of possible exceptional Kummer radicals, of the order of (p 4 B 2 ) 1/p . This implies that when B → ∞, the density of Kummer radicals M such that E s (T ) is not a global pth power is equal to 1. With B = 10 6 , often used in the programs, the bound M pow B tends to 1 quickly as p increases. In practice, for almost all primes p, the F.O.P. lists are without any exception (only the case p = 3 gives larger bounds, as M pow 10 6 ≈ 43267 for the above example; but it remains around 10 6 -43267 = 956733 certified solutions M ). 6.3.2. Program of computation of T K . In case a) of Theorem 6.3, we give the program using together the 16 parametrized radicals and we print short excerpts. The parameter e must be large enough such that p e annihilates T K ; any prime number p > 2 may be illustrated (here we take p = 3, 5, 7). A part of the program is that given in [START_REF] Gras | On p-rationality of number fields[END_REF] for any number field. For convenience, we replace a data of the form [7784110,List([9])], in the outputs, by [7784110, [9]] giving a 3-group T K of Q( √ 7784110) isomorphic to Z/9Z.
6.4. Infiniteness of non p-rational real quadratic fields. All these experiments raise the question of the infiniteness, for any given prime p ≥ 22, of non p-rational real quadratic fields when the non p-rationality is due to R K ≡ 0 (mod p) (i.e., log(ε M ) ≡ 0 (mod p 2 )).
The case p = 2 being trivial because of genus theory for 2-class groups, we suppose p > 2. However, it is easy to prove this fact for p = 2 by means of the regulators.
6.4.1. Explicit families of units. We will built parametrized Kummer radicals and units, in the corresponding fields, which are not pth power of a unit; the method relies on the choice of suitable values of the parameter trace t. This will imply the infiniteness of degree p -1 imaginary cyclic fields of the Section 7 having non trivial p-class group.
Theorem 6.5. (i) Let q ≡ 1 (mod p) be prime, let c / ∈ F ×p q and t q ∈ Z ≥1 such that t q ≡ c 2 + s 2cp 2 (mod q). Then, whatever the bound B, the F.O.P. algorithm applied to the polynomial m(t q + qx) = p 4 (t q + qx) 2 -s, x ∈ Z ≥0 , gives lists of distinct Kummer radicals M , in the ascending order, such that Q( √ M ) is non-p-rational.
(ii) For any given prime p > 2 there exist infinitely many real quadratic fields K such that R K ≡ 0 (mod p), whence infinitely many non p-rational real quadratic fields.
Proof. (i) Criterion of non pth power. Consider m(t) = p 4 t 2 -s and the unit E s (2p 2 t) = p 2 t + p 4 t 2 -s of norm s and local pth power at p. Choose a prime q ≡ 1 (mod p) and let c ∈ Z >1 be non pth power modulo q (whence (q -1) 1 -1 p possibilities). Let t ≡ c 2 + s 2cp 2 (mod q); then: N(E s (2p 2 t) -c) = N(p 2 t -c + p 4 t 2 -s) = (p 2 t -c) 2 -p 4 t 2 + s = c 2 + s -2cp 2 t ≡ 0 (mod q). Such value of t defines the field Q( M (t)), via p 4 t 2 -s = M (t)r(t) 2 , and whatever its residue field at q (F q or F q 2 ), we get E s (2p 2 t) ≡ c (mod q), for some q | qZ; since in the inert case, #F × q 2 = (q -1)(q + 1), with q + 1 ≡ 0 (mod p), c is still non pth power, and E s (2p 2 t) is not a local pth power modulo q, hence not a global pth power.
Corollary 4 . 4 .
44 Let M ≥ 2 be a given square-free integer and consider the two lists given by the F.O.P. algorithm, for m -1 and m 1 , respectively. Then, assuming B large enough, M appears in the two lists if and only if
MAIN PROGRAM FOR FUNDAMENTAL UNITS OF NORM s {B=10^7;s=-1;LM=List;for(t=2+s,B,mt=t^2-4*s; M=core(mt);L=List([M]);listput(LM,vector(1,c,L[c]))); VM=vecsort(vector(B-(1+s),c,LM[c]),1,8); print(VM);print("#VM = ",#VM)} s=-1 [M]= [2],[5],[10],[13],[17],[26],[29],[37],[41],[53],[58],[61],[65],[73],[74],[82],[85], [89],[97],[101],[106],[109],[113],[122],[130],[137],[145],[149],[157],[170],[173], [181],[185],[193],[197],[202],[218],[226],[229],[233],[257],[265],[269],[274],[277], [281],[290],[293],[298],[314],[317],[346],[349],[353],[362],[365],[370],[373],[389], (...) [99999860000053],[99999900000029],[99999940000013],[99999980000005]] #VM = 9999742 s=1 [M]= [2],[3],[5],[6],[7],[10],[11],[13],[14],[15],[17],[19],[21],[22],[23],[26],[29],[30], [31],[33],[34],[35],[37],[38],[39],[41],[42],[43],[46],[47],[51],[53],[55],[57],[58], [59],[61],[62],[65],[66],[67],[69],[70],[71],[73],[74],[77],[78],[79],[82],[83],[85], [86],[87],[89],[91],[93],[94],[95],[101],[102],[103],[105],[107],[109],[110],[111], (...) [99999820000077],[99999860000045],[99999900000021],[99999979999997] #VM = 9996610The same program with outputs of the form[M, r, t] for s = 1 gives many examples of squares of fundamental units. For instance, the data[29, 5, 27] defines the unit E 1 (27since 27-2 = 5 2 , then t ′ = 5, r ′ = 1 and E 1
,[2],[13],[5],[29],[10],[53],[17],[85],[26],[5],[37],[173],[2],[229],[65],[293],[82], [365],[101],[445],[122],[533],[145],[629],[170],[733],[197],[5],[226],[965],[257], [1093],[290],[1229],[13],
So, we have to estimate the sums t∈[1,b]
v_p(#(p-class group))=0 v_p(#(p-torsion group))=1 m(p)=(p+1)^2+1, p=11 M=145 v_p(#(p-class group))=0 v_p(#(p-torsion group))=2 p=16651 M=277289105 v_p(#(p-class group))=0 v_p(#(p-torsion group))=1 m(p)=(p+1)^2-1, p=3 M=15 v_p(#(p-class group))=0 v_p(#(p-torsion group))=1 m(p)=(p+1)^2-4
2 - 1 p
21 4s may be a pth power in ε M(t) (whence the field Q( M (t)) being prational by exception), is of the order of (c 2 B 2h ) as B → ∞. Proof. Put ε M = 1 2 (a + b √ M ) as usual; then we can write ε M ∼ b √ M and E s (T ) ∼ T so that T and (b √ M ) p are equivalent as M and B tend to infinity; taking the most unfavorable case b = 1, we conclude that M pow B ≪ (c 2 B 2h ) 2/p in general.
7. 2 .
2 Imaginary cyclic fields with non-trivial p-class group, p > 3. Let χ be the even character of order 2 defining K := Q( √ M ), let p ≥ 3 and let L := K(ζ p ) be the field obtained by adjunction of a primitive pth root of unity; we may assume that K ∩ Q(ζ p ) = Q, otherwise M = p in the case p ≡ 1 (mod 4), case for which there is no known examples of p-primary fundamental unit. Let ω be the p-adic Teichmüller character (so that for all τ ∈ Gal(L/Q), any list of quadratic fields Q( √ M ) obtained by the previous F.O.P. algorithm giving p-primary units E, the ωχ -1 -component of the p-class group of L is non-trivial as soon as E / ∈ ε p M and gives an odd component of the whole p-class group of L. Theorem 7.2. As t grows from 1 up to B, each first occurrence of a square-free integer M ≥ 2 in the factorization m(t) := p 4 t 2 -4s =: M r 2 , the degree p -1 cyclic imaginary subfield of Q( √ M , ζ p ), distinct from Q(ζ p ),has a class number divisible by p, except possibly when the unit E s (p 2 t) := 1 2 [p 2 t + r √ M )] is a p-th power in ε M .
1. Introduction and main results
1.1. Definition of the "F.O.P." algorithm
1.2. Quadratic integers
1.3. Quadratic polynomial units
1.4. Main algorithmic results
2. First examples of application of the F.O.P. algorithm
2.1. Kummer radicals and discriminants given by m_s(t)
2.2. Application to minimal class numbers
2.3. Application to minimal orders of p-ramified torsion groups
2.4. Application to minimal orders of logarithmic class groups
3. Units E_s(t) vs fundamental units ε_M(t)
3.1. Polynomials m_s(t) = t^2 - 4s and units E_s(t)
3.2. Checking of the exponent n in E_s(t) = ε_M(t)^n
Let a ∈ Z≥1 and δ ∈ {1, 2}. We consider T := 2δ^{-1}(a p^4 t^2 - δ s) and m_1(T), giving rise to the unit E_1(T).
6.3.1. Definitions of local p-th power units. Taking polynomials stemming from suitable
polynomials m s we can state:
Theorem 6.3. Let p > 2 be a prime number and let s ∈ {-1, 1}. (a)
The property holds from n = 0 (T(1) = 2), except for M = 5 (T(ε_M) = 1, T(ε_M^2) = 3) and M = 2 (T(ε_M) = 2, T(ε_M^2) = 6).
(ii) Infiniteness. Now, for simplicity, to prove the infiniteness we restrict ourselves to the case m(t) = p^4 t^2 - 1 (the case m(t) = p^4 t^2 + 1 may be considered with a similar reasoning in Z[√-1] instead of Z). Let ℓ be an arbitrarily large prime number and consider the congruence p^2(t_q + qx) ≡ 1 (mod ℓ); it is equivalent to x = x_0 + yℓ, y ∈ Z≥0, where x_0 is a residue modulo ℓ of the constant (1 - t_q p^2)/(q p^2); so we have p^2(t_q + qx_0) - 1 = λℓ^n, n ≥ 1, ℓ ∤ λ. Computing these m(t)'s, with t = t_q + (x_0 + yℓ)q, gives the factorization m(t) = (p^2 t - 1)(p^2 t + 1):
the right factor is prime to ℓ; the left one is of the form λℓ^n + q y p^2 ℓ and, whatever n, it is possible to choose y such that the ℓ-valuation of λℓ^{n-1} + q y p^2 is zero. So, for such integers t, we have the factorization m(t) = ℓ M′ r^2, where M′ ≥ 1 is square-free and M′ r^2 is prime to ℓ, which defines M := ℓ M′ arbitrarily large.
This proves that, in the F.O.P. algorithm, when B → ∞, one can find arbitrarily large Kummer radicals M(t_q + (x_0 + yℓ)q) such that the corresponding unit E_1(t_q + (x_0 + yℓ)q) is a local pth power modulo p, but not a global p-th power.
The main property of the F.O.P. algorithm is that the Kummer radicals obtained are distinct and listed in ascending order; without the F.O.P. process, all the integers t = t_q + (x_0 + yℓ)q giving the same M give E_1(t_q + (x_0 + yℓ)q) = ε_M^n with n ≡ 0 (mod p).
6.4.2. Unlimited lists of non-p-rational real quadratic fields. Take p = 3, q = 7, c ∈ {2, 3, 4, 5}. With m(t) = 81t^2 - 1, then t_q ∈ {2, 5}; with m(t) = 81t^2 + 1, then t_q ∈ {3, 4} and t = t_q + 7x, x ≥ 0. The F.O.P. list is without any exception, giving non-3-rational quadratic fields Q(√M) (in the first case p = 3 is inert and in the second one p = 3 splits). We give the corresponding list using together the four possibilities:
7. Application to p-class groups of some imaginary cyclic fields
Considering now case (b) of Theorem 6.3 for p > 2, we use the polynomial m_s(T) = T^2 - 4s, with T = t_0 + p^2 t, and the unit of norm s, E_s(T) = (1/2)(T + r√M) with m_s(T) =: M r^2,
for suitable s and t_0 such that E_s(T) is a local pth power at p, which is in particular the case for all p > 2 and all s when t_0 = 0. For t_0 ≠ 0, we get the particular data when the equation t_0^2 ≡ 2s (mod p^2) has solutions (which is equivalent to p ≢ 5 (mod 8)):
For p = 2, a "mirror field" may be taken in Q( √ -1, √ M ) (see, e.g., [START_REF] Gras | Sur la norme du groupe des unités d'extensions quadratiques relatives[END_REF] for some results linking 2-class groups and norms of units).
The programs test that E_s(T) is not a pth power in ε_M.
7.1. Imaginary quadratic fields with non-trivial 3-class group. From the above we obtain, as a consequence, the following selection of illustrations (see Theorem 6.5, claiming that the F.O.P. lists are unbounded as B → ∞): Proof. If E_s(t_0 + 9t) is not a third power in ε_M but a local 3rd power at 3, it is 3-primary, in the sense that if ζ_3 is a primitive 3rd root of unity, then K(ζ_3, (E_s(t_0 + 9t))^{1/3})/K(ζ_3) is unramified (in fact 3 splits in this extension). From the reflection theorem (Scholz's theorem in the present case), 3 divides the class number of Q(√-3M), even when r > 1 in the factorization m(t) =: M r^2. The case of t_0 = 0 and s = ±1 is obvious. The second claim comes from Theorem 6.5 (see the numerical part below).
Program for lists of 3-class groups of imaginary quadratic fields.
Note that the case where E_s(t_0 + 9t) is a third power is very rare, because it happens only for very large t_0 + 9t giving a small Kummer radical M. One may verify the claim by means of the following program, in the case s = -1, valid for all t_0, where [M, Vh] gives in Vh the 3-structure of the class group of Q(√-3M); at the end of each output one sees the list of exceptions (cases of third powers), where the output [M, n] means that, for the Kummer radical M = M(t), E_{-1}(t_0 + 9t) = ε_M^n. We may see that any excerpt for t large enough gives no exceptions:
LISTS OF 3-CLASS GROUPS OF IMAGINARY QUADRATIC FIELDS
{p=3; B=10^5; L3=List; Lh=List; Lt0=List([0,4,5]);
 for(t=1, B, for(ell=1, 3, t0=Lt0[ell]; mt=(t0+9*t)^2+4; ut=(t0+9*t)/2; vt=1/2;
 C=core(mt,1); M=C[1]; r=C[2]; res=Mod(M,4) (...)
The bound for possible exceptions, ≈ 9321.76, gives a good verification of Heuristic 6.4. This also means that all the integers M larger than 9029 lead to non-trivial 3-class groups, and they are very numerous!
We note that some M's (such as 29, 74, 82, 85, ...) are in the list of exceptions despite a non-trivial 3-class group; this is equivalent to the fact that, even if E_{-1}(t_0 + 9t) ∈ ε_M^3, either the 3-regulator R_K of K is non-trivial or its 3-class group is non-trivial.
In this interval, all the 5-class groups obtained are non-trivial, except for s = -1 and M = 29, then for s = 1 and M = 21. From Remark 1.5, we compute log(155625629)/log(5^4 · 25 · 10^4) ≈ 0.99978777. Theorem 6.4 gives possible exceptions up to M (...). Consider the case p = 7, s ∈ {-1, 1}; exceptionally, we give the complete lists: |
00410881 | en | [
"phys.astr.im",
"sdu.astr.im"
] | 2024/03/04 16:41:24 | 2009 | https://hal.in2p3.fr/in2p3-00410881/file/cNodeV1_Report-1_-_Fabrizio09.pdf | Fabrizio Ameli
email: [email protected]
Stefano Russo
email: [email protected]
Gabriele Giovanetti
email: [email protected]
Fabrice Gensolen
email: [email protected]
Performance results of a prototype board for copper data transmission
The experience gained in designing submarine neutrino telescopes suggested exploring new ways of realizing the data transmission backbone at the detection unit level. In order to decrease the difficulties of integration and handling of the backbones, some effort has been spent in developing a backbone based on copper links, with simple tracts of cable connecting contiguous storeys. This work presents the general architecture of the system and describes an electronic board prototype designed to test the feasibility of the project, together with the first results obtained. The main goal of the experimental setup was measuring the recovered clock jitter under various conditions, with and without cables. The jitter measured on the cleaned clock amounts to hundreds of picoseconds, well below the sub-nanosecond time resolution required by this kind of experiment.
I. INTRODUCTION
Lessons learnt from current experiments developing neutrino telescopes led us to look for new ways of implementing data transmission at the detection unit (DU) level. The use of fiber optics in backbones usually implies high cost, mainly due to the constraints of maintaining the optical power budget. This implies a high cost of cables and connectors, substantial manpower effort for the integration (fiber handling is a delicate operation), the difficulty of testing the integrated system, the power required by electro-optical transceivers and, in the case of DWDM systems, the high price of the transceiver itself. Most of these issues are circumvented through the use of copper links: handling -and hence integration -is much easier, connectors are more common and cheaper than the optical counterpart, while the devices which implement the physical layer of the transmission system are inexpensive. On the other hand, the use of copper links requires a sophisticated implementation of the transmission system, both from a hardware and software point of view.
Recent technological progress, specifically in audio/video data transmission, made available on the market suitable devices capable of transmitting high data rates (many Gb/s) over long distances (hundreds of meters, depending on the medium). The idea behind this paper is to adapt such a proven technology for the purposes of a submarine experiment.
The basic unit of neutrino telescopes can be defined as the DU. The DU consists of many "storeys" vertically distributed in a structure as high as 800 m; each storey hosts photomultipliers (PMTs), usually grouped in 3 to 6 elements. The total number of sensors per DU ranges from 60 to 120. The telescope is far from the shore laboratory, typically it is placed at a depth of thousands of meters and at distances of many tens of kilometers, and produces an amount of data on the order of 1 Gb/s; these constraints require a telescope-to-shore data transmission system based on fiber optics. The data produced by a group of sensors are collected by the storey electronics and routed to on-shore through a DU backbone. The transmission backbone inside the DU has been successfully implemented in optical fibers in NEMO and Antares experiments [START_REF]NEMO website[END_REF], [START_REF]ANTARES website[END_REF], [START_REF] Ameli | The data acquisition and transport design for nemo phase 1[END_REF], where DWDM backbones based on Add and Drop modules have been realized.
In Section II the requirements of a copper system will be briefly discussed and, in Section III, both a possible implementation of the general architecture and the prototype board designed to verify the feasibility of the system are presented. After a description of the experimental setup in Section IV, preliminary results measured on the prototype board will be shown. Finally in Section V some conclusions are drawn on the performance of the system and on its applicability.
II. COPPER LINK HIGHLIGHTS
The short distances (maximum 50 m) between storeys and the relatively low data rate per storey led us to evaluate a new approach based on copper wires in the DU backbone. The main constraints taken into consideration in the design of a copper backbone were:
• a maximum inter-storey distance of 50 m;
• a reasonable data rate on the link of about 1 Gb/s;
• that the cable should be mechanically manageable;
• that storeys be connected by single tracts of cables.
The data rate over the backbone is highly asymmetrical: data flowing from on-shore to the storeys are for control and setup, i.e. the data rate can be as low as a few hundred kb/s per storey. In the opposite direction the data rate must support the transmission of physics data, at least 10 Mb/s per 10" PMT. For both directions we assume that the best (and only choice, if minimization of the cable count is required) is to transmit a synchronous bitstream which embeds both clock and data. The low speed of the control channel allows the receiver to recover the clock signal, guaranteeing the necessary sub-nanosecond precision required by this kind of experiment.
A low wire count is necessary in order to have a manageable submarine cable. The requirement that storeys should be connected with independent tracts, without signal extraction from the backbone by means of breakouts, aids in this respect: connectors can be realized with a low number of electrical pins.
III. SYSTEM ARCHITECTURE
The general architecture of the copper backbone has been already explained in other documents [START_REF] Ameli | Km3net: A proposal design for a detection unit data transmission system based on a copper backbone[END_REF]. Briefly, the daisy chain architecture, shown in fig. 1, requires that each node receives the slow serial data stream transmitted by the previous node on the blue link, recovers the clock embedded in the stream, extracts clock and data, cleans the clock and uses this de-jittered clock to transmit data to the next node in the chain and to provide the clock reference for transmission in the other direction. Of course, the clean clock is also used to feed each electronic device which needs a reference synchronous to the backbone clock. The Copper Node hosts an FPGA so that the daisy chain mechanism is completely user transparent: each node in the chain has to manage all the data flow and can "add and drop" its own payload; the node appears as a parallel interface to transmit and receive data. There are also controls and clock flags to manage timing issues and handshake protocols. This document describes the preliminary tests of the reduced Copper Node prototype board, the so called "cNodeV1" board. Actually the cNodeV1 board is a subset of a complete copper node which would be able to provide the required full duplex, asymmetrical full daisy chained communication link on copper media. The cNodeV1 implements only the slow path of the chain: data are serially transmitted at a rate of about 200 Mb/s.
The main goal of the prototype is to demonstrate that the full daisy chain scheme is capable of transmitting both clock and data serially on a copper medium, that the Bit Error Rate (BER) on data is acceptable, that the clock is recoverable, and that its quality is still adequate for neutrino experiments. This last requirement imposes a maximum jitter on the clock well below 1 ns. This being the major issue regarding data transmission on copper, the cNodeV1 board is intended to give an answer to this problem.
To avoid the constraint of choosing a specific transmission cable in advance, i.e. a given characteristic cable impedance, the line interface has not been embedded on the board. Instead, the use of high speed SMA connectors makes it possible to change the line driver, the equalizer and also the cable. Moreover, we do not address here the problems concerning either the impedance change, due to the high pressure in the submarine environment typical of a neutrino telescope, or the behavior of the system with real submarine connectors. Figure 2 shows a block diagram of the cNodeV1 board with the line interface.
IV. EXPERIMENTAL SETUP
The cNodeV1 can be plugged into a Xilinx ML50x evaluation board using an adapter already developed for another application: this configuration is shown in fig. 3. For the tests we used both a ML507 board, which hosts a Virtex-5 FX FPGA with a hardwired embedded PowerPC processor, and a ML505 board, where a MicroBlaze embedded processor has been synthesized in the Virtex-5 LX FPGA. To measure the BER, a dedicated module has been specifically written: the FPGA transmits a sequence of random bytes, coded according to the 8b10b protocol, to the transmitter, which serializes them; after the cable, the stream is received by the receiver, de-serialized and passed back to the FPGA. After an initial phase, the FPGA logically locks onto the received stream and transmitted data are compared against received data: in case of errors a counter is incremented and updated each second. A program has been written to manage the system: the cNodeV1 board can be initialized, the BER module can be switched on and off or reset, errors on the transmission can be forced to check that the BER measurement is effective (this is particularly useful when the link is too good to show errors in a reasonable time), and the BER results can be read by the processor. A simple RS232 connection is used for the communication.
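As a rough illustration of the logic of such a BER check (this is a simplified software model, not the actual FPGA firmware; the 8b10b coding and the lock/alignment phase are omitted, and all names and parameters are ours):
import random

def ber_test(n_bytes, error_prob=1e-6, seed=0):
    # Software model of the BER check: transmit pseudo-random bytes,
    # corrupt them with a given probability (standing in for the link),
    # compare received vs transmitted data and count the errored bits.
    rng = random.Random(seed)
    bit_errors = 0
    for _ in range(n_bytes):
        tx = rng.randrange(256)                 # pseudo-random transmitted byte
        rx = tx
        if rng.random() < error_prob * 8:       # crude per-byte corruption model
            rx ^= 1 << rng.randrange(8)         # flip one bit
        bit_errors += bin(tx ^ rx).count("1")   # accumulate errored bits
    return bit_errors / (8 * n_bytes)           # estimated bit error rate

print(ber_test(10**6))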
All the measurements have been taken using a LeCroy SDA Zi760, with 6 GHz of bandwidth and 40 GS/s of sampling rate. This instrument is specifically designed for serial data analysis, allowing the user to extract bitstream features and to characterize the clock performance of the system independently of the receiver.
A. Single hop setup
The single hop configuration consists of one cNodeV1 board plugged into an ML505 board. Fig. 4 schematically shows the block diagram of the system. The clock used for transmission is a local 16.384 MHz oscillator. The receiver, using an internal Clock and Data Recovery (CDR) module, extracts from the stream both clock and data, which are acquired by the FPGA. The recovered clock is also fed to the PLL which provides a synchronous and clean version of it.
In the next Sections the results obtained under different line conditions will be analyzed.
1) Single hop with a coaxial cable pair:
In this configuration data are transmitted to the receiver through a pair of high speed coaxial cables 20 cm long, without inserting any driver or equalizer: this is the best possible condition, which gives us an idea of the minimum jitter value attainable. Comparing the waveforms of the source clock and of the clocks before (CDR Clock) and after the PLL: the standard deviation of the TIE for the source clock is about 340 ps; even in this optimal case the TIE of the recovered clock, before being cleaned, has a standard deviation of 450 ps while, after applying the PLL, the jitter is reduced down to 10 ps.
2) Single hop with 2 m long CAT5 cable: With this setup, a CAT5 cable 2 m long has been inserted between the line driver and the receiver; the TIE standard deviation of the clock cleaned by the PLL is slightly worse than what we found previously in Section IV-A1 but it is still very good being about 13 ps for the clean clock. The TIE of the recovered clock is now 500 ps due to the longer cable.
3) Single hop with 30 m long CAT5e cable: The same performance values have been evaluated using a CAT5e cable 30 meters long. The TIE standard deviation for the clean clock is still about 15 ps, while the value for the recovered clock increased up to 550 ps.
B. Double Hop setup
In the double hop configuration depicted in fig. 5 we have one cNodeV1 prototype board plugged into the ML505 board and a second cNodeV1 which, plugged into the adapter, can work in a stand-alone fashion. Data are transmitted by the first node to the second which recovers the clock, cleans it by means of the PLL and retransmits data back to the first node which, in turn, recovers and cleans the clock. Measures are made to assess system synchronicity and clock jitter.
1) Double hop with 300 m long CAT5e cable and short coaxial cables: In the examined configuration, the first hop uses a 300 m long CAT5e cable with a driver and equalizer pair; the second hop is covered by 2 high speed coaxial cables 1 m long without any driver or equalizer. Each node recovers the line clock and cleans it by means of its own PLL: the clock from the first node shows a standard deviation of the TIE of about 19 ps while the clock from the second node has a value of 37 ps. The mean period for the two clocks is the same: 61.03563 ns (calculated over about 8 million time windows 50 µs each) with a standard deviation of 6 ps and a maximum peak-to-peak deviation of 90 ps. This confirms that the two nodes are synchronous and the instantaneous deviation is small.
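As a quick consistency check, the nominal period of the 16.384 MHz reference clock is 1/(16.384 MHz) ≈ 61.0352 ns, in good agreement with the measured mean period of 61.03563 ns.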
C. Jitter analysis performed by SDA analyzer
We also tried to estimate the BER on data by letting the SDA analyzer do the job for us. We transmitted the random data on the 300 m long CAT5e cable using both driver and equalizer; then the two differential signals driven by the equalizer were read by the instrument and subtracted to yield the single-ended serialized stream. Fig. 6 shows the jitter analysis performed on this signal: the eye diagram at a BER value of 10^-12 is still open by 20% of the unit interval, as extrapolated from the bathtub curve. The TIE time evolution does not exhibit any evident trend and the TIE histogram looks Gaussian, stressing that there are no deterministic jitter components. The estimated total jitter on the signal is less than 4 ns. The clock extracted from the stream will be cleaned using the PLL to provide the performance stated in IV-B1.
V. CONCLUSION
The preliminary results shown in this paper are very encouraging: the transmission of a 200 Mb/s stream on a CAT5e cable seems feasible even over distances longer than 300 m. Data received show a low BER value; the clock can be extracted from the serial stream and, after a cleaning step implemented by a PLL, the jitter is reduced to tens of ps. The tests used only two hops, but in the near future a higher number of boards will be cascaded to check the performance of a longer chain.
The effects of change in cable impedance due to the pressure of submarine environment coupled with the use of inexpensive connectors -not normally deemed suitable for use with fast differential signals -will be analyzed in the near future using a hyperbaric chamber and a setup based on commercial cables and connectors. Since the maximum distance between adjacent storeys is expected to be 50 m, we are optimistic that the performance at high pressures will be similar to that measured in this work, using standard connectors and cables within the 300 m length already studied.
Figure 1. Full Double Daisy Chain architecture.
Figure 2. Block diagram of the Master Chain block shown in fig. 1. The line interface is also present, with a cable driver and an equalizer.
Figure 3. Experimental setup with the cNodeV1 board plugged into the ML505 board through the adapter. The line driver transmits the signal through a CAT5e cable 30 meters long and the signal is received by the cNodeV1 receiver after equalization.
Figure 4. Single hop block diagram: the cNodeV1 board, plugged into the host board, drives the cable and receives back the data.
Figure 5. Double hop block diagram: the first hop consists of a cNodeV1 plugged into the ML505 board and acting as the chain source. The second hop is implemented with a cNodeV1 board working stand-alone. The driver and equalizer pair are inserted only in the path between the first and second node.
Figure 6. Jitter analysis provided by the LeCroy SDA instrument: the line differential signals after the driver and equalizer pair and a 300 m long CAT5e cable are analyzed.
The TIE is a first rough measure of the jitter: it represents the distance between the edges of the acquired clock and the expected edges inferred from the clock itself. See for example[START_REF]Understanding and characterizing timing jitter[END_REF]
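As an illustration of this definition, a simplified numerical sketch (not the SDA's algorithm; names, units and the choice of a fixed nominal period are ours — real analyzers typically fit the reference period) could compute the TIE from measured edge timestamps as follows:
def time_interval_error(edge_times, nominal_period):
    # TIE: deviation of each measured clock edge from the ideal edge grid
    # inferred from the clock itself (here: a grid with the nominal period,
    # anchored on the first measured edge).
    t0 = edge_times[0]
    return [t - (t0 + i * nominal_period) for i, t in enumerate(edge_times)]

# Example with a 16.384 MHz clock (nominal period ~61.035 ns); values in ns.
edges = [0.0, 61.04, 122.06, 183.12, 244.13]
tie = time_interval_error(edges, 1e3 / 16.384)
print([round(x, 3) for x in tie])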
ACKNOWLEDGMENTS
This work is supported through the EU-funded FP6 KM3NeT Design Study Contract No. 011937. |
04108826 | en | [
"shs.gestion"
] | 2024/03/04 16:41:24 | 2015 | https://hal.science/hal-04108826/file/V-MOSKALU_New-Social-Contract-for-Ukraine-VoxUkraine-2015.pdf | New Social Contract for Ukraine https://voxukraine.org/en/new-social-contract-for-ukraine Dr Violeta MOSKALU, founder of Global Ukraine Foundation Moskalu, V. (2015), New Social Contract for Ukraine, VoxUkraine, https://voxukraine.org/en/new-social-contract-for-ukraine/.
May 2015
A systemic analysis of the Ukrainian political crisis shows that inclusive leadership is the key success factor of reforms in Ukraine. Post-Maidan Ukraine needs a New Deal (a kind of agreement between civil society, business and government, a new social pact to build a modern state), centered around the rule of law and the effective management tools.
In Western democracies, the three main categories of actors (power, business, and civil society) are separated, and there are clear interaction rules to assure their cooperation and to provide a framework in order to avoid conflicts of interest. From this point of view, the real problem in reforming the Ukrainian political system is the need to cut the Gordian knot of political-oligarchic system where financial resources, political parties and the media are in the hands of the same people or clans.
Ukrainian crisis -how to change the system?
The oligarchic political system is a source of tension and the origin of a strong sense of injustice in society. That is why it is urgent and critical to dismantle this oligarchic system in order to save the country. New rules must elevate the Ukrainian political life to Western standards based on fundamental democratic principles that aim to reduce the conflicts of interests within the society. For this, the three branches of governmentlegislative, executive and judicial -should also be completely separated. Most of the necessary changes are known by the Ukrainians, the main points were even fixed in the coalition agreement signed after the parliamentary elections in October 2014.
The issue of injustice and the inequality in the distribution of national wealth is one of the key issues for changing the rules and reforming the old corrupted system that is why fighting massive corruption at the highest level is priority number one. The famous economist and Nobel laureate Joseph Stiglitz noted in his latest book that the issue of inequality and injustice is a political choice, the result of the accumulation of unfair political decisions and misplaced priorities.
How then to explain the difficulties in realizing and implementing the coalition agreement? Why are reforms not implemented even though everyone knows WHAT should be done? The answer is simple -the resistance to change expressed by the old system. Resistance to change is a well-known phenomenon; there is even a specific scientific approach to, and a particular understanding of, the key success factors in times of change. For successful change management in this reform period, the «resistance phenomenon» should be taken into account. Some people fear change because of instinctive psychological reactions. Fear of losing certain benefits (resources, influence, power, social status, etc.) explains the strong resistance to change.
A systemic analysis of the post-Maidan Ukrainian political situation clearly shows a separate category of «conservative» actors (certain representatives of the old political parties of the past, some bureaucrats and officials who are part of the system that needs to be changed), who talk a lot about reforms but do little or nothing (particularly in the fight against corruption or the purification of the judiciary). The paradox is that, on the one hand, they may take some initiatives for change (or at least feign it), but as long as they try to «win something» or «avoid any losses», the final results will be very disappointing, because too much energy is directed to supporting their own ambitions or strengthening personal positions. The last Ukraine-EU summit stressed the absolute necessity of finally beginning real reforms, because otherwise Ukraine risks losing the support of Western partners in the near future.
Meanwhile, most of the civil society is more mature than the political elite. Civil society has learned to observe and analyze real facts and concrete actions, rather than to rely on political promises. Thanks to new information and communication technologies, despite the fact that traditional media are under the influence of oligarchic clans, conscious citizens instantly realize everything that is done or not done by political leaders. The government did not know how (or did not want?) to restore public confidence in state institutions, the economic situation has deteriorated significantly, and the war in the East can no longer serve as an universal excuse. A careful analysis and comparison of obligations (in words) and real acts of post-Maidan political leaders shows the weakness of our political elites. Or their weak will to respect and to fulfill their recent political promises. Political will -which is defined as the «will to act», the «ability to act» and the «feeling that we are required to act» -is critical for real changes to occur.
The situation may seem paradoxical: the elite do not want real changes, in order to avoid losing their benefits; the business community and civil society understand that changes are inevitable. As publicly stated by former Finance Minister Oleksandr Shlapak (2014), the «political will exists within civil society», but the political will is not strong enough among the political elite, as we have seen through the lack of real actions, such as the fight against corruption at the highest political level. Close ties and strong relationships among political elites make implementing and enforcing policy changes particularly challenging.
The two key factors for successful changes are synergy research and inclusive leadership.
The Post-Maidan volunteer phenomenon shows that in Ukraine more and more people are mobilized to solve problems at local and national level, or in the promotion of the interests of Ukraine in the world. They want to act and they feel the need or even the duty to act, and they finally will change the system. The Ukrainian volunteers, or the phenomenon of «the-do-it-yourself-country» are the proof that in Ukraine there are more and more responsible citizens. It only remains to spread the momentum of solidarity and collective action. The basic rule of change management, the core idea of the famous French author Jean-Christian Fauvet is very simple: those who really want changes should look for allies, because very often we lose not because of the opponents, but due to lack of allies.
That is why constant communication is needed in order to involve all potential allies in the project, and to develop trust and the synergy of collective action based on cooperation. A very important point: trust does not mean "don't attack", but rather "don't defend yourself". As long as the other person has done nothing to lose trust, he/she can be treated as trustworthy.
At the same time, any government will resist creating tools to limit its power. Civil society, the intellectual and business environment should show an example of coordination, as during the Maidan. They must create the conditions of an "intellectual and economic Maidan".
Inclusive leadership and human capital investments for successful reforms in
Ukraine
Post-Maidan Ukraine needs a New Deal (a kind of agreement between civil society, business and government, a new social pact to build a modern state), centered around the rule of law and the effective management tools. The key words for this New Deal System are cooperation and trust. Because knowledge economy in the 21st century is built mainly through intellectual capital, trust and credibility are the central elements that are required for the development of collective intelligence. A series of scientific papers focused on the issue of trust and its determinants, the factors that «create» trust. The researchers highlighted the qualities that people must have in order to generate trust [START_REF] Coleman | Social Capital in the Creation of Human Capital[END_REF][START_REF] Mcallister | Affect -and cognition -based trust as foundations for interpersonal cooperation in organizations[END_REF]. These factors are competence, transparency, honesty and support (or compassion).
Intellectual capital consists of human capital (knowledge and skills that belong to each individual) and social capital (based on trust that enables people to cooperate). Strong social capital makes it easier for people to share their ideas and knowledge, and finally creates the conditions needed for the development of collective intelligence. The centerpiece of this new paradigm is thinking in terms of collective intelligence and investing in human capital to succeed in the globalization era, where cooperation and trust are important components of economic value added (cf. diagram).
© Moskalu, Violeta (2012), A socio-cognitive analysis of the employee ownership impact on value creation of French Public Companies, PhD in Management Sciences, University of Lorraine, Metz-Nancy, France.
Therefore, in the current context of changes within the political and socio-economic system in Ukraine, successful leaders are the ones who quickly realize that times have changed and that there is no other choice but to match the expectations of society. A lack of competence, transparency and integrity will significantly reduce the duration of political careers, because the basic capital of the 21st century will be the leader's authenticity. «Say what you do and do what you say» -if you want people to believe you, this will be the new postulate of political life. Times change, and Ukraine is changing. Now there is a great demand for a new elite to take the initiative and assume political and administrative responsibility for the future development of Ukraine. Ukraine needs competent, transparent, honest, sincere, and especially caring leaders who are able to put the collective interest and public service above their personal interests. As the famous English statesman Benjamin Disraeli wrote, «a politician thinks of the next election and a statesman thinks of the future generation».
04108836 | en | [
"shs.gestion"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04108836/file/2023-03-30_PROLOG_V-Moskalu_LUXEMBOURG.pdf | Dr Violeta Moskalu
email: [email protected]
A THEORETICAL FRAMEWORK FOR MANAGERIAL STUDIES OF DIASPORA ENTREPRENEURIAL ORIENTATION
Keywords: diaspora entrepreneurship, entrepreneurial situation, entrepreneurial orientation, trust, entrepreneurial support, social capital
Our research proposes a conceptual framework for understanding diaspora entrepreneurship and outlines a future research agenda to advance the field. The diaspora entrepreneurship is a unique phenomenon that involves the creation and development of new ventures by members of a diaspora community, who draw on their cultural, social, and economic ties to their homelands to create economic opportunities in their host countries, or vice versa (Moskalu, 2018).
We analyze the diaspora entrepreneurship phenomenon using an international business and entrepreneurship lens.
The two main scientific contributions of our research are 1) we have proposed a research design on diaspora entrepreneurship for the creation of public value, with an unprecedented intersection articulating different dimensions of the phenomenon of entrepreneurship; and 2) we have designed the future research agenda on diaspora entrepreneurship.
A THEORETICAL FRAMEWORK FOR MANAGERIAL STUDIES OF DIASPORA ENTREPRENEURIAL ORIENTATION
The concept of diaspora has been a topic of increasing interest in recent years, as the number of people living outside their home countries continues to rise. This phenomenon has been linked to a variety of outcomes, including economic development and the emergence of new business opportunities.
THE RESEARCH CONTEXT
"Diaspora" is a term used to describe a dispersion or scattering of people from their original homeland or geographic region to other parts of the world. The term comes from the Greek word "diaspeirein," which means "to scatter" or "to disperse". The concept of diaspora has been used to describe a wide range of historical and contemporary movements of people, including forced migration, voluntary migration, and economic migration. [START_REF] Stoyanov | The embedding of transnational entrepreneurs in diaspora networks: Leveraging the assets of foreignness[END_REF] defines "diasporians" as "immigrants and emigrants, including their descendants, who feel attracted by the country of their origin and maintain strong emotional relationships with it". Diaspora communities can be spread across the world and often maintain transnational connections and networks, including economic, political, and cultural ties with their homelands. These connections are facilitated by modern communication technologies and transportation systems (Moskalu, 2018).
In other words, diaspora is a "transnational community whose members (or their ancestors) emigrated or were dispersed from their original homeland but remain oriented to it and preserve a group identity" [START_REF] Grossman | Toward a definition of diaspora[END_REF]. In addition, it should also be understood that the host country of the diaspora, and in our case, diaspora entrepreneurs is the country to which they migrated to; and the home country is the country from which they migrated [START_REF] Krasniqi | Migration and intention to return: entrepreneurial intentions of the diaspora in post-conflict economies[END_REF].
Questions that have emerged over the past few decades address the differences, the "otherness" of migrants and diasporans as international entrepreneurs and approach internationalization from a variety of angles, bringing new empirical insights into the research on entrepreneurship. The complexity of their contextual setting, which extends beyond the area of their primary activity, needs to be highlighted, particularly in research on immigrants, migrants, and diasporans and their entrepreneurship (Moskalu, 2018).
THE PROBLEM AND THE THEORETICAL AND METHODOLOGICAL POSITIONING
Our research problem focuses on the link between the entrepreneurial spirit of the diaspora and the creation of public or social value, the main question being the outline of the design of research on entrepreneurship in the diaspora.
As for the methodological positioning, this contribution is the result of action research (fifteen years of work and observation "in immersion"), with a deliberate action to transform reality. This research was carried out with a double objective: to transform reality and to produce knowledge concerning these transformations.
Roy & Prevost (2013) consider that action research is a "research approach attached to the paradigm of pragmatism which starts from the principle that it is through action that we can generate scientific knowledge useful for understanding and changing the social reality of individuals and social systems".
In other words, action research finds its roots in action, in the need to act to change things.
While the traditional research process follows a linear path, action research adopts a more cyclical approach: the researcher must give himself the means to take a reflective look at the action as it unfolds.
We have therefore completed the theoretical framework of research on diaspora entrepreneurship with the contributions of [START_REF] Moskalu | Une analyse socio-cognitive de l'impact de l'actionnariat salarié sur la création de valeur des entreprises françaises cotées, Tome I et Tome II[END_REF] who demonstrated the key role of trust and social capital for the creation of intellectual capital, and ultimately for the value creation. This research demonstrates theoretically and empirically that intellectual capital is made up of human capital (knowledge and skills that belong to everyone) and social capital (based on trust that allows people to cooperate).
A strong social capital allows the actors involved to share their ideas and their knowledge more easily and thus creates the conditions necessary for the development of collective intelligence. This theoretical paradigm allows us to rethink the success factors of the collective intelligence of the diaspora entrepreneurship, in full expansion of the new sharing economy paradigm which requires strong investments in human capital, and where cooperation and trust are the major components for the value creation.
This in-depth analysis of the issue of trust as the strong element of social capital (at the heart of value creation) highlights its determining factors (which "create" trust) by highlighting the essential qualities that the Diaspora entrepreneur must develop to generate trust (competence, transparency, integrity and benevolent support) within the network.
THE CONCEPTUAL FRAMEWORK
Diaspora entrepreneurs are the people, migrants themselves or their direct descendants, who set up new ventures and carry out entrepreneurial activities that cross the national borders of both the host country and the country of origin and tie them together by adding value to both markets. Diaspora entrepreneurial orientation refers to the set of attitudes, values, and behaviors that characterize the mindset of diaspora entrepreneurs [START_REF] Moskalu | The strategic dilemma of the Ukrainian Ministry of Foreign Affairs: how to stop the extinguishing of fires, and to start, finally, to play chess? On the need to create a global system of people's diplomacy[END_REF]. It encompasses their willingness to take risks, their commitment to innovation and opportunity recognition, and their ability to adapt to changing circumstances in order to achieve their goals. Diaspora entrepreneurial orientation is influenced by a range of factors, including the cultural, social, and economic context in which diaspora entrepreneurs operate, as well as their personal characteristics, experiences, and motivations.
Our research proposes a conceptual framework for understanding diaspora entrepreneurship and outlines a future research agenda to advance the field. Based on the value creation theoretical model mixing the intellectual and financial capital [START_REF] Moskalu | Une analyse socio-cognitive de l'impact de l'actionnariat salarié sur la création de valeur des entreprises françaises cotées, Tome I et Tome II[END_REF] that underlines the key role the human and social capital play for organizational performance, we propose to explore the diaspora entrepreneurship characteristics such as their risk-taking capacity, innovativeness, autonomy, proactiveness through the prism of their innovation strategies and the factors that contribute to their success, including the benefits and challenges of using diaspora networks for international business.
The entrepreneurial mindset of diaspora, expatriates and migrants provides interesting resources for entrepreneurial activities both in their country of residence and in their country of origins, their knowledge, skills and their network they develop in both countries can strengthen the intellectual capital they use in their managerial activities.
The diaspora entrepreneurship is a unique phenomenon that involves the creation and development of new ventures by members of a diaspora community, who draw on their cultural, social, and economic ties to their homelands to create economic opportunities in their host countries, or vice versa (Moskalu, 2018).
We propose a conceptual framework that includes three dimensions of the diaspora entrepreneurial orientation: diaspora networks, diaspora mindset and psychological ownership and institutional context (rules, regulations, policies, cultural factors), to bridge the gap between theory, practice, and policy (Elo, Täube, Servais 2022) by using an international business and entrepreneurship lens to analyze the diaspora entrepreneurship phenomena.
Krasniqi and Williams (2019) point out that the "diaspora entrepreneurship can benefit from sharing of capital, technical knowledge, expectations of how business should be conducted, direct investment and the harnessing of further entrepreneurial activity". In addition, they highlight the importance of networks in diaspora entrepreneurship -they utilize mainly two ways: "first, to access resources that are unavailable or more expensive to acquire from other sources; and second, to provide access to markets for goods and services".
The diaspora networks can bridge the divide between the home and host countries by combining knowledge from both. This knowledge system equips members with the necessary skills and expertise to adjust to a new environment, as well as giving them access to the most up-to-date information in their field. [START_REF] Stoyanov | The embedding of transnational entrepreneurs in diaspora networks: Leveraging the assets of foreignness[END_REF]. [START_REF] Riddle | Transnational diaspora entrepreneurship in emerging markets: Bridging institutional divides[END_REF] highlighted the relationship between diaspora networks and international trade. They state that diasporas have been credited with facilitating international commerce, and a World Bank study of USA foreign direct investment (FDI) abroad found evidence to support the idea that diaspora's ethnic networks can affect FDI by providing information flows across borders and acting as contract-enforcement mechanisms.
It should also be noted that entrepreneurs from diasporas who bring their ideas, resources, and employment opportunities can have a major influence on the economic and social growth of their native countries. [START_REF] Riddle | Transnational diaspora entrepreneurship in emerging markets: Bridging institutional divides[END_REF].
Nevertheless, diaspora entrepreneurs possess a wealth of knowledge and experience from their home markets, as well as potentially other markets, which makes them more suited to international business operations. This highlights the importance of international experience for entrepreneurs, founders, and top management teams. Diaspora entrepreneurs are particularly well-equipped to succeed in international business due to their greater international experience than local managers. [START_REF] Etemad | Advances and challenges in the evolving field of International Entrepreneurship: The Case of migrant and diaspora entrepreneurs[END_REF].
The post-conflict economies can benefit from engaging their diaspora in investment for reconstruction [START_REF] Krasniqi | Migration and intention to return: entrepreneurial intentions of the diaspora in post-conflict economies[END_REF], diaspora entrepreneurs also appear to be particularly beneficial for countries that may not be as attractive to non-diaspora investors due to small domestic markets, inadequate infrastructure or less favorable structural characteristics [START_REF] Riddle | Transnational diaspora entrepreneurship in emerging markets: Bridging institutional divides[END_REF].
THE CONTRIBUTIONS OF RESEARCH
The two main scientific contributions of our research are 1) we have proposed a research design on diaspora entrepreneurship for the creation of public value, with an unprecedented intersection articulating different dimensions of the phenomenon of entrepreneurship; and 2) we have designed the future research agenda on diaspora entrepreneurship.
The design of research on diaspora entrepreneurship proposed in the article is built on a triple conceptual articulation around the notions of the entrepreneurial situation [START_REF] Schmitt Ch | Les situations entrepreneuriales: Définition et intérêts pour la recherche en entrepreneuriat, 9e Congrès de l[END_REF], entrepreneurial orientation (Miller, 1983[START_REF] Miller | revisited: A reflection on EO research and some suggestions for the future[END_REF] and the key role of trust and social capital in value creation modelization [START_REF] Moskalu | Une analyse socio-cognitive de l'impact de l'actionnariat salarié sur la création de valeur des entreprises françaises cotées, Tome I et Tome II[END_REF].
This research has highlighted the potential for diasporas to foster entrepreneurship and economic growth through their networks and resources. International business researchers can explore more deeply the relationship between diaspora and entrepreneurship, with a focus on how diasporic networks can facilitate entrepreneurial activities, with implications for policy makers seeking to promote entrepreneurship among diasporic populations.
Diaspora entrepreneurs are particularly well-equipped to succeed in international business due to their greater international experience than local managers, as well as the social networks that provide them with resources such as capital and information.
The future research agenda includes 1) the need for more empirical studies that examine the experiences of diaspora entrepreneurs, the role of diaspora networks in promoting entrepreneurship, and the factors that shape entrepreneurial behavior and outcomes in different institutional contexts; 2) the need for more comparative studies that examine the similarities and differences across different diaspora communities and host countries; 3) the need to explore the role of technology in diaspora entrepreneurship, including how digital platforms and communication technologies enable and support diaspora entrepreneurs; 4) the need for more research that addresses the practical implications of diaspora entrepreneurship, including how policymakers, business leaders, and diaspora communities can support and promote entrepreneurship, as well as 5) the potential impact of diaspora entrepreneurship on economic development and social inclusion.
A comprehensive overview of the key dimensions of diaspora entrepreneurship is needed to advance our understanding of this important and growing phenomenon. The proposed conceptual framework and research agenda provide a valuable roadmap for future research in this field, to examine the impact of diaspora networks on knowledge flows and entrepreneurship in developing countries, highlighting the potential for diaspora networks to promote economic development.
As for managerial contributions, we have demonstrated that the ability of the entrepreneur-facilitator of the diaspora network to build meaning and a forward-looking vision of the situation must go hand in hand with his ability to generate trust and to practice inclusive management in order to create value.
Like any research, our contribution obviously has several limitations while offering many perspectives. As for the validity of our conclusions, we must specify that our objective is not the generalization of results, but to propose new avenues of research regarding the diaspora entrepreneurship in international business research field. |
03812773 | en | [
"info.info-ro"
] | 2024/03/04 16:41:24 | 2022 | https://hal.science/hal-03812773/file/MIP_Based_SP_2022.pdf | Pegdwendé Minoungou
email: [email protected]
Vincent Mousseau
email: [email protected]
Wassila Ouerdane
email: [email protected]
Paolo Scotton
A MIP-based approach to learn MR-Sort models with single-peaked preferences
Keywords: Multicriteria Sorting, MR-Sort, Single-Peaked Preferences, Preference Learning
The Majority Rule Sorting (MR-Sort) method assigns alternatives evaluated on multiple criteria to one of the predefined ordered categories. The Inverse MR-Sort problem (Inv-MR-Sort) consists in computing MR-Sort parameters that match a dataset. Existing learning algorithms for Inv-MR-Sort consider monotone preference on criteria. We extend this problem to the case where the preference on criteria are not necessarily monotone, but possibly single-peaked (or single-valley). We propose a mixed-integer programming based algorithm that learns from the training data the preference on criteria together with the other MR-Sort parameters. Numerical experiments investigate the performance of the algorithm, and we illustrate its use on a real-world case study.
Introduction
In this paper, we consider multiple criteria sorting problems in which alternatives evaluated on several criteria are to be assigned to one of the pre-defined ordered categories C 1 , C 2 , ..., C p , C 1 (C p , respectively) being the worst (best, respectively) category.
Many multiple criteria methods have been proposed in the literature (see e.g. [START_REF] Doumpos | Multicriteria Decision Aid Classification Methods[END_REF], [START_REF] Zopounidis | Multicriteria classification and sorting methods: A literature review[END_REF]). We are interested in a pairwise comparison based method: the Non-Compensatory Sorting model (NCS, see [START_REF] Bouyssou | An axiomatic approach to noncompensatory sorting methods in mcdm, i: The case of two categories[END_REF][START_REF] Bouyssou | An axiomatic approach to noncompensatory sorting methods in MCDM, II: More than two categories[END_REF]). NCS assigns alternatives to categories based on the way alternatives compare to boundary profiles representing frontiers between consecutive categories and can be viewed as an axiomatic formulation of the Electre Tri method (see [START_REF] Roy | The outranking approach and the foundations of Electre methods[END_REF]). More specifically, we consider a particular case of NCS in which the importance of criteria is additively represented using weights: the Majority Rule Sorting (MR-Sort, see [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF]).
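To fix ideas, the MR-Sort assignment rule with monotone (gain) criteria can be sketched as follows; this is an illustrative implementation with made-up parameter names, and the formal definition, including the extension to single-peaked criteria, is given in Section 3:
def mr_sort_assign(alternative, profiles, weights, lmbda):
    # profiles[h] is the frontier between categories h+1 and h+2, sorted from worst to best;
    # the alternative "outranks" a profile when the total weight of the criteria on which
    # it is at least as good as the profile reaches the majority threshold lmbda.
    category = 1
    for profile in profiles:
        support = sum(w for a_j, b_j, w in zip(alternative, profile, weights) if a_j >= b_j)
        if support >= lmbda:
            category += 1
        else:
            break
    return category

# Toy example: 3 gain criteria, 2 categories separated by one profile.
print(mr_sort_assign([12, 7, 3], [[10, 5, 5]], [0.4, 0.3, 0.3], 0.6))  # -> category 2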
In real-world decision problems involving multiple criteria sorting, the implementation of a sorting model requires eliciting the decision-maker's (DM) preferences and adequately representing her preferences by setting appropriate values for the preference-related parameters. It is usual to elicit the sorting model parameters indirectly from a set of assignment examples, i.e., a set of alternatives with corresponding desired categories. Such preference learning approach has been developed for MR-Sort (Inv-MR-Sort, see, e.g. [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF], [START_REF] Sobrie | Learning monotone preferences using a majority rule sorting model[END_REF]), and makes it possible to compute MR-Sort parameters that best fit a learning set provided by the DM. Such a preference learning approach requires considering criteria involving monotone preferences (criteria to be maximized or minimized). This applies in the context of Multiple Criteria Decision Aid (MCDA), in which the decision problem is structured and carefully crafted through an interaction between the DM and an analyst. In contrast, we are interested in this paper in application in which the evaluations of alternatives on criteria do not necessarily induce monotone preferences. We illustrate hereafter such a situation in the two illustrative examples.
Example 1: Consider a veterinary problem in cattle production. A new cattle disease should be diagnosed based on symptoms: each animal should be classified as having or not having the disease. New scientific evidence has indicated that the presence of substance A in the animal's blood can be predictive in addition to the usual symptoms. Still, there is no clue as to how the level of substance A should be considered. Does a high level, a low level, or a level between bounds of substance A indicate sick cattle? The veterinarians' union has gathered a large number of cases and wants to benefit from this data to define a sorting model based on the usual symptom criteria and the level of substance A in the animal's blood. Hence, the sorting model should be inferred from data, even if the way to account for the substance A level is unknown.
Example 2: A computer-products retail company is distributing a new Windows tablet and wants to send targeted marketing emails to clients who might be interested in this new product. To do so, clients are to be sorted into two categories: potential buyer and not interested. To avoid spamming, only clients in the former category will receive a telephone call. To sort clients, four client characteristics are considered as criteria, all of them being homogeneous to a currency (e.g. €): the turnover over the last year of (i) Windows PCs, (ii) Pack Office, (iii) Linux PCs, and (iv) Dual boot PCs. As the company advertises a new Windows tablet, the first two criteria are to be maximized (the more a client buys Windows PCs and Pack Office, the more she is interested in products with a Windows system), and the third criterion is to be minimized (the more a client buys Linux PCs, the less he/she is interested in products with a Windows system). The marketing manager is convinced that the last criterion should be considered, but does not know whether it should be maximized or minimized, or whether preferences on it are single-peaked; a subset of clients has been partitioned into not interested/potential buyers. Based on this dataset, the goal is to simultaneously learn the classifier parameters and the preference direction for the last criterion.
In the previous examples, it is unclear to the DM how some of the data (level of substance A in blood, Dual boot PC turnover) should bear on the classification of alternatives (cattle, client). These examples correspond to single-peaked criteria, i.e. criteria for which preferences are defined according to a "peak" corresponding to the best possible value; on such criteria, the preference decreases with the distance to this peak. In other words, the peak corresponds to a target value below which the criterion is to be maximized, and above which the criterion is to be minimized. Such criteria are frequent in the medical domain (getting close to a normal blood sugar level), in chemical applications (getting close to a neutral pH), ... In MCDA, there exist works that account for the non-monotonicity of preferences in value-based models (see e.g. [START_REF] Despotis | Building Additive Utilities in the Presence of Non-Monotonic Preferences[END_REF][START_REF] Kliegr | UTA-NM : Explaining stated preferences with additive non-monotonic utility functions[END_REF][START_REF] Doumpos | Learning non-monotonic additive value functions for multicriteria decision making[END_REF], and Section 2). However, there does not exist, to the best of our knowledge, such work concerning pairwise comparison methods. This paper aims to extend the literature on MCDA for non-monotone criteria to outranking methods and, in particular, to MR-Sort. Specifically, we tackle the problem of inferring from a dataset an MR-Sort model with possibly non-monotone criteria. The challenge is that this inference problem is already known to be difficult with monotone criteria, see [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF].
More specifically, we assume that evaluations on each criterion are either to be maximized, to be minimized, or correspond to single-peaked (or single-valley) preferences. We propose a mixed-integer programming (MIP) approach to learn the MR-Sort parameters and the criteria types (gain, cost, single-peaked, or single-valley) from a dataset of assignment examples.
The paper is organized as follows. Section 2 reviews the existing works in the field of MCDA that consider criteria that are not necessarily monotone. The NCS and MR-Sort methods are presented and extended to the case of single-peaked (single-valley) criteria in Section 3. In Section 4 we specify the Inv-MR-Sort problem in the presence of single-peaked criteria, and a MIP-based algorithm is proposed in Section 5. Section 6 presents the performance of the algorithm on generated datasets and a real-world case study. The last section gathers conclusions and further research issues.
Related work
In Multiple Criteria Decision Aid (MCDA), preference learning methods require a preference order on each criterion's scale. Such a preference order directly results from the fact that alternatives' evaluations/scores correspond to performances that are to be maximized (profit criteria) or minimized (cost criteria), which results in monotone preference data. In multicriteria sorting problems, this boils down to the following: a higher evaluation on a profit criterion (respectively, on a cost criterion) favours an assignment to a higher category (respectively, to a lower category).
However, there are numerous situations in which the criteria evaluation is not related to category assignment in a monotone way. Such a situation is indeed considered in the induction of monotone classification rules from data.
Classification methods in the field of machine learning usually handle attributes (features) that are not supposed to be monotone. Some specialized methods have been proposed to account for monotone features (see [START_REF] Gutiérrez | Current prospects on ordinal and monotonic classification[END_REF], [START_REF] Cano | Monotonic classification: An overview on algorithms, performance measures and data sets[END_REF]), for decision trees [START_REF] Feelders | Monotone relabeling in ordinal classification[END_REF], or for decision rules [START_REF] Greco | Rough sets theory for multicriteria decision analysis[END_REF]. Some of these approaches have been extended to partially monotone data (see [START_REF] Pei | Partially monotonic decision trees[END_REF], [START_REF] Wang | Induction of ordinal classification rules from decision tables with unknown monotonicity[END_REF]). Błaszczyński et al. in [START_REF] Blaszczynski | Inductive discovery of laws using monotonic rules[END_REF] present a non-invasive transformation applied to a dominance-based rough set approach to discover monotonicity relationships (positive/negative, global/local) between attributes and the decision, considering both non-ordinal and ordinal classification problems. With their transformation applied to non-monotone data, they can deduce laws with interval conditions on attributes that are positively monotone in one part of the evaluation space and negatively monotone in the other.
In the context of multicriteria decision aid, several preference learning/disaggregation approaches consider non-monotone preferences on criteria. To the best of our knowledge, however, almost all these contributions consider a utility-based preference model, in which non-monotone attributes are represented using non-monotone marginal utility functions.
Historically, Despotis and Zopounidis [START_REF] Despotis | Building Additive Utilities in the Presence of Non-Monotonic Preferences[END_REF] were the first to consider single-peaked value functions with an additive piece-wise linear model. The UTA-NM method proposed in [START_REF] Kliegr | UTA-NM : Explaining stated preferences with additive non-monotonic utility functions[END_REF] allows for non-monotone marginals and prevents over-fitting by introducing a shape penalization. Also in the context of an additive utility model, Eckhardt and Kliegr [START_REF] Eckhardt | Preprocessing algorithm for handling non-monotone attributes in the UTA method[END_REF] define a heuristic pre-processing technique that encodes the original non-monotone attributes into a monotone space; in other words, each alternative x originally described by attribute values (x_1, ..., x_n) is encoded as (f_1(x_1), ..., f_n(x_n)), where f_i(x_i) intuitively corresponds to an "average" of the DM's ratings across objects that have value x_i on attribute i. Another contribution [START_REF] Doumpos | Learning non-monotonic additive value functions for multicriteria decision making[END_REF] proposes a heuristic approach to learn a non-monotone additive value-based sorting model from data.
Liu et al. [START_REF] Liu | Preference disaggregation within the regularization framework for sorting problems with multiple potentially non-monotonic criteria[END_REF] model sorting with a piece-wise linear additive model, using a regularization framework to limit non-monotonicity. Guo et al. [START_REF] Guo | A progressive sorting approach for multiple criteria decision aiding in the presence of non-monotonic preferences[END_REF] propose a progressive preference elicitation for multicriteria sorting using a utility model with non-monotone attributes. A framework to rank alternatives with a utility model using slope variation restrictions on marginals is proposed in [START_REF] Ghaderi | Understanding the impact of brand colour on brand image: A preference disaggregation approach[END_REF][START_REF] Ghaderi | A linear programming approach for learning non-monotonic additive value functions in multiple criteria decision aiding[END_REF]. Based on a mixed-integer program, [START_REF] Kadzinski | Preference disaggregation for multiple criteria sorting with partial monotonicity constraints: Application to exposure management of nanomaterials[END_REF][START_REF] Kadzinski | Preference disaggregation method for value-based multi-decision sorting problems with a real-world application in nanotechnology[END_REF] propose to disaggregate an additive piece-wise linear sorting model with different types of monotone (increasing, decreasing) and non-monotone (single-peaked, single-caved) marginal value functions. Recently, some contributions have aimed at inferring non-compensatory sorting models involving non-monotone criteria from data. Sobrie et al. [START_REF] Sobrie | A new decision support model for preanesthetic evaluation[END_REF] consider a medical application in which some attributes are single-peaked, and duplicate these attributes into two criteria (one to be maximized, one to be minimized). Moreover, [START_REF] Minoungou | Learning an MR-sort model from data with latent criteria preference direction[END_REF] proposed a heuristic to learn an MR-Sort model and criteria preference directions from data. [START_REF] Sobrie | A new decision support model for preanesthetic evaluation[END_REF] and [START_REF] Minoungou | Learning an MR-sort model from data with latent criteria preference direction[END_REF] are forerunners of the present work, but do not investigate in a systematic way how to learn MR-Sort models from non-monotone data, which justifies the present work.
In this paper, we extend the literature in the following way: we consider non-monotone preferences in the context of an outranking-based sorting model, whereas the literature mainly focuses on additive value-based preference models. We propose a learning-based formulation in which the MR-Sort sorting model and the (possibly non-monotone) structure of preferences on criteria are simultaneously inferred from a set of assignment examples.
3 NCS, MR-Sort, and single-peaked preferences
NCS: Non-compensatory Sorting
Non-compensatory Sorting (NCS) [START_REF] Bouyssou | An axiomatic approach to noncompensatory sorting methods in mcdm, i: The case of two categories[END_REF][START_REF] Bouyssou | An axiomatic approach to noncompensatory sorting methods in MCDM, II: More than two categories[END_REF] is an MCDA sorting model originating from the ELECTRE TRI method [START_REF] Figueira | Electre methods[END_REF]. NCS can be intuitively formulated as follows: an alternative is assigned to a category if (i) it is better than the lower limit of the category on a sufficiently strong subset of criteria, and (ii) this is not the case when comparing the alternative to the upper limit of the category.
Consider the simplest case involving 2 categories, Good (G) and Bad (B), with the following notations. We denote X_i the set of possible values on criterion i, i ∈ N = {1, . . . , n}; we suppose w.l.o.g. that X_i = [min_i, max_i] ⊂ R. Hence, X = ∏_{i∈N} X_i represents the set of alternatives to be sorted. We denote A_i ⊆ X_i the set of approved values on criterion i ∈ N. Approved values on criterion i (x_i ∈ A_i) correspond to values contributing to the assignment of an alternative to category G. In order to assign alternative a to category G, a should have approved values on a subset of criteria which is "sufficiently strong". The set F ⊆ 2^N contains the "sufficiently strong" subsets of criteria; it is a subset of 2^N up-closed by inclusion. In this perspective, the NCS assignment rule can be expressed as follows:

x ∈ G iff {i ∈ N : x_i ∈ A_i} ∈ F,  ∀x ∈ X    (1)
With more than two categories, we consider an ordered set of p categories C^p ▷ ⋯ ▷ C^h ▷ ⋯ ▷ C^1, where ▷ denotes the order on categories. Sets of approved values A^h_i ⊆ X_i on criterion i (i ∈ N) are defined with respect to a category h (h = 2..p), and should be embedded such that A^2_i ⊇ ... ⊇ A^p_i. Analogously, sets of sufficiently strong criteria coalitions are relative to a category h, and are embedded as follows: F^2 ⊇ ... ⊇ F^p. The assignment rule is defined below, for all x ∈ X, where A^1_i = X_i, A^{p+1}_i = ∅, F^1 = P(N), and F^{p+1} = ∅:

x ∈ C^h iff {i ∈ N : x_i ∈ A^h_i} ∈ F^h and {i ∈ N : x_i ∈ A^{h+1}_i} ∉ F^{h+1}    (2)
A particular case of NCS corresponds to the MR-Sort rule [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF], in which the families of sufficient coalitions are all equal, F^2 = ... = F^p = F, and defined using additive weights attached to criteria and a threshold: F = {F ⊆ N : Σ_{i∈F} w_i ≥ λ}, with w_i ≥ 0, Σ_i w_i = 1, and λ ∈ [0, 1]. Moreover, as the set of possible values on criterion i is X_i = [min_i, max_i] ⊂ R, the order on R induces a complete pre-order ≽_i on X_i. Hence, the sets of approved values on criterion i, A^h_i ⊆ X_i (i ∈ N, h = 2..p), are defined by ≽_i and by b^h_i ∈ X_i, the minimal approved value in X_i at level h: A^h_i = {x_i ∈ X_i : x_i ≽_i b^h_i}. In this way, b^h = (b^h_1, . . . , b^h_n) is interpreted as the frontier between categories C^{h-1} and C^h; b^1 = (min_1, ..., min_n) and b^{p+1} = (max_1, ..., max_n) are the lower frontier of C^1 and the upper frontier of C^p, respectively. Therefore, the MR-Sort rule can be expressed as:

x ∈ C^h iff Σ_{i : x_i ≥ b^h_i} w_i ≥ λ and Σ_{i : x_i ≥ b^{h+1}_i} w_i < λ    (3)
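To make rule (3) concrete, the short Python sketch below (ours, not taken from any implementation cited in this paper) applies the MR-Sort assignment for gain criteria whose frontiers are given in increasing order of categories; the weights, profiles and threshold in the toy example are purely illustrative.

```python
# Minimal sketch of the MR-Sort assignment rule (3), assuming gain criteria and
# frontiers b^2, ..., b^p listed in increasing order of categories.

def mr_sort_assign(x, profiles, weights, lam):
    """x: list of n evaluations; profiles: list of frontiers (one list of n
    values per category limit, from b^2 to b^p); weights: n non-negative
    weights summing to 1; lam: majority threshold."""
    category = 1
    for h, b in enumerate(profiles, start=2):
        support = sum(w for xi, bi, w in zip(x, b, weights) if xi >= bi)
        if support >= lam:
            category = h      # x outranks the lower frontier of category h
        else:
            break             # frontiers are nested, no need to check further
    return category

# Toy example with 3 criteria and 2 categories (Bad = 1, Good = 2)
weights = [0.4, 0.3, 0.3]
profiles = [[10, 10, 10]]     # single frontier b^2
print(mr_sort_assign([12, 9, 11], profiles, weights, lam=0.6))  # -> 2
print(mr_sort_assign([12, 9, 8],  profiles, weights, lam=0.6))  # -> 1
```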
It should be emphasized that in the above definition of the MR-Sort rule, the approved sets A^h_i can be defined using b^h ∈ X, interpreted as frontiers between consecutive categories, only if preferences ≽_i on criterion i are supposed to be monotone. A criterion can be defined either as a gain or as a cost criterion:
Definition 1. A criterion i ∈ N is:
- a gain criterion when x_i ≥ x'_i ⇒ x_i ≽_i x'_i;
- a cost criterion when x_i ≤ x'_i ⇒ x_i ≽_i x'_i.

Indeed, in the case of a gain criterion, we have x_i ∈ A^h_i and x'_i ≥ x_i ⇒ x'_i ∈ A^h_i, and x_i ∉ A^h_i and x_i > x'_i ⇒ x'_i ∉ A^h_i. Therefore A^h_i is specified by b^h_i ∈ X_i: A^h_i = {x_i ∈ X_i : x_i ≥ b^h_i}. In the case of a cost criterion, we have x_i ∈ A^h_i and x'_i ≤ x_i ⇒ x'_i ∈ A^h_i, and x_i ∉ A^h_i and x_i < x'_i ⇒ x'_i ∉ A^h_i. Therefore A^h_i is specified by b^h_i ∈ X_i: A^h_i = {x_i ∈ X_i : x_i ≤ b^h_i}.
We study hereafter the MR-Sort rule in the case of single-peaked preferences [START_REF] Black | On the rationale of group decision-making[END_REF].
Single-peaked and single-valley preferences
In this paper, we consider preferences that are not necessarily monotone on all criteria.

Definition 2. Preferences ≽_i on criterion i are:
- single-peaked with respect to ≥ iff there exists p_i ∈ X_i such that: x_i ≤ y_i ≤ p_i ⇒ p_i ≽_i y_i ≽_i x_i, and p_i ≤ x_i ≤ y_i ⇒ p_i ≽_i x_i ≽_i y_i;
- single-valley with respect to ≥ iff there exists p_i ∈ X_i such that: x_i ≤ y_i ≤ p_i ⇒ p_i ≽_i x_i ≽_i y_i, and p_i ≤ x_i ≤ y_i ⇒ p_i ≽_i y_i ≽_i x_i.
In an MCDA perspective, single-peaked preferences (single-valley, respectively) can be interpreted as a gain criterion to be maximized (a cost criterion to be minimized, respectively) below the peak p i , and as a cost criterion to be minimized (a gain criterion to be maximized, respectively) above the peak p i . Note also that single-peaked and single-valley preferences embrace the case of gain and cost criteria: a gain criterion corresponds to single-peaked preferences when p i = max i or single-valley preferences with p i = min i , and a cost criterion corresponds to single-peaked preferences when p i = min i or single-valley preferences with p i = max i .
When considering MR-Sort with single-peaked criteria, approved sets cannot be represented using frontiers between consecutive categories. However, approved sets should be compatible with preferences, i.e. such that:

x_i ∈ A^h_i and x'_i ≽_i x_i ⇒ x'_i ∈ A^h_i;  x_i ∉ A^h_i and x_i ≽_i x'_i ⇒ x'_i ∉ A^h_i    (4)

In the case of a single-peaked criterion with peak p_i, we have:

x_i ∈ A^h_i and p_i ≤ x'_i ≤ x_i ⇒ x'_i ∈ A^h_i;  x_i ∈ A^h_i and x_i ≤ x'_i ≤ p_i ⇒ x'_i ∈ A^h_i;
x_i ∉ A^h_i and p_i ≤ x_i ≤ x'_i ⇒ x'_i ∉ A^h_i;  x_i ∉ A^h_i and x'_i ≤ x_i ≤ p_i ⇒ x'_i ∉ A^h_i    (5)

Therefore it appears that, with a single-peaked criterion with peak p_i, the approved set A^h_i can be specified by two thresholds b̲^h_i, b̄^h_i ∈ X_i with b̲^h_i < p_i < b̄^h_i, defining an interval of approved values: A^h_i = [b̲^h_i, b̄^h_i].
Analogously, for a single-valley criterion with peak p_i, the approved set A^h_i can be specified using b̲^h_i, b̄^h_i ∈ X_i (such that b̲^h_i < p_i < b̄^h_i) as A^h_i = X_i \ ]b̲^h_i, b̄^h_i[.

Given a single-peaked criterion i for which the approved set is defined by the interval A^h_i = [b̲^h_i, b̄^h_i], consider the function ϕ_i : X_i → X_i defined by ϕ_i(x_i) = |x_i - (b̲^h_i + b̄^h_i)/2|, i.e., the absolute value of x_i - (b̲^h_i + b̄^h_i)/2. Then, the approved set can be conveniently rewritten as:

A^h_i = {x_i ∈ X_i : ϕ_i(x_i) ≤ (b̄^h_i - b̲^h_i)/2}.

In other words, when defining approved sets, a single-peaked criterion can be re-encoded into a cost criterion, evaluating alternatives as the distance to the middle of the interval [b̲^h_i, b̄^h_i], with a frontier corresponding to half the width of this interval. Analogously, given a single-valley criterion i for which the approved set is defined by A^h_i = X_i \ ]b̲^h_i, b̄^h_i[, using the same function ϕ_i the approved set can be conveniently rewritten as:

A^h_i = {x_i ∈ X_i : ϕ_i(x_i) ≥ (b̄^h_i - b̲^h_i)/2}.

Hence, when defining approved sets, a single-valley criterion can be re-encoded into a gain criterion, evaluating alternatives as the distance to the middle of the interval [b̲^h_i, b̄^h_i], with a frontier corresponding to half the width of this interval.
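The following small Python sketch illustrates this re-encoding; the interval bounds and the glycemia-like numbers are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the re-encoding above: a single-peaked criterion with approved
# interval [b_low, b_up] becomes a cost criterion measured as the distance to
# the interval's midpoint, with a frontier equal to half the interval width.

def phi(x, b_low, b_up):
    mid = (b_low + b_up) / 2
    return abs(x - mid)

def approved_single_peaked(x, b_low, b_up):
    return phi(x, b_low, b_up) <= (b_up - b_low) / 2   # cost-like test

def approved_single_valley(x, b_low, b_up):
    return phi(x, b_low, b_up) >= (b_up - b_low) / 2   # gain-like test

# Glycemia-like example: approved (normal) values assumed to lie in [0.9, 1.2]
print(approved_single_peaked(1.05, 0.9, 1.2))  # True  (inside the interval)
print(approved_single_peaked(1.40, 0.9, 1.2))  # False (hyperglycemia)
```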
Inv-MR-Sort: Learning an MR-Sort model from assignment examples
MR-Sort preference parameters, e.g. weights, majority level, and limit profiles, can be either initialized by the "end-user", i.e. the decision-maker, or learned through a set of assignment examples called a learning set. We are focusing on the learning approach. The aim is to find the MR-Sort parameters that "best" fit the learning set. We consider as input a learning set, denoted L, composed of assignment examples. Here, an assignment example refers to an alternative a ∈ A ⋆ ⊂ X, and a desired category c(a) ∈ {1, . . . , p}. In our context, the determination of MR-sort parameters values relies on the resolution of a mathematical program based on assignment examples: the Inv-MR-Sort problem takes as input a learning set L and computes weights (w i , i ∈ N ), majority level (λ), and limit profiles (b h , h = 2..p) that best restore L, i.e. that maximizes the number of correct assignments. This learning approach -also referred to as preference disaggregation -has been previously considered in the literature. In particular, [START_REF] Mousseau | Inferring an ELECTRE TRI model from assignment examples[END_REF], [START_REF] Zheng | Learning criteria weights of an optimistic Electre Tri sorting rule[END_REF] learned the ELECTRE TRI parameters using mathematical programming formulation (non-linear programming for the former, mixed-integer programming for the latter). In contrast, [START_REF] Doumpos | An evolutionary approach to construction of outranking models for multicriteria classification: The case of the ELECTRE TRI method[END_REF] propose an evolutionary approach to do so. Later, a more amenable model, the MR-Sort -which derives from the ELECTRE TRI method and requires fewer parameters than ELECTRE TRI -was introduced by Leroy et al. in [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF]. They proposed a MIP implementation for solving the Inv-MR-Sort problem. In contrast, Sobrie et al. [START_REF] Sobrie | Learning preferences with multiple-criteria models[END_REF] tackled it with a metaheuristic, and Belahcene et al. [START_REF] Belahcene | An efficient SAT formulation for learning multiple criteria non-compensatory sorting rules from examples[END_REF] with a Boolean satisfiability (SAT) formulation. Other authors proposed approaches to infer MR-Sort incorporating veto phenomenon [START_REF] Meyer | Integrating large positive and negative performance differences into multicriteria majorityrule sorting models[END_REF], and imprecise/missing evaluations [START_REF] Meyer | Handling imprecise and missing evaluations in multi-criteria majority-rule sorting[END_REF], and [START_REF] Nefla | Interactive elicitation of a majority rule sorting model with maximum margin optimization[END_REF] presented an interactive elicitation for the learning of MR-Sort parameters with given profiles values. Recently [START_REF] Kadzinski | Enriched preference modeling and robustness analysis for the ELECTRE Tri-B method[END_REF] proposes an enriched preference modelling framework that accounts for a different type of input. Lastly, [START_REF] Minoungou | Learning an MR-sort model from data with latent criteria preference direction[END_REF] proposed an extension of Sobrie's algorithm for solving the Inv-MR-Sort problem with latent preference directions, i.e. considering criteria whose preference direction, in terms of gain/cost, is not known beforehand.
In this paper, we aim to extend the resolution of the Inv-MR-Sort problem to the case where each criterion can be either a cost criterion, a gain criterion, a single-peaked criterion, or a single-valley criterion.
Exact resolution of Inv-MR-Sort with single-peaked criteria
In this section, we present a Mixed Integer Programming (MIP) formulation to solve the Inv-MR-Sort problem when each criterion can either be a cost, gain, single-peaked, or single-valley criterion. More precisely, the resolution takes as input a learning set containing assignment examples and computes:
- the nature of each criterion (either cost, gain, single-peaked, or single-valley criterion),
- the weights w_i attached to criteria and an associated majority level λ,
- the frontier between categories C^h and C^{h+1}, i.e. (as defined in Section 3) the value b^h_i if criterion i is a cost or a gain criterion, and the interval [b̲^h_i, b̄^h_i] if criterion i is a single-peaked or single-valley criterion.
For the sake of simplicity, we describe the mathematical formulation in the case of two categories; the extension to more than two categories is discussed in Section 5.6.
Let us consider a learning set L, provided by the Decision Maker, containing assignment examples corresponding to a set of reference alternatives A* = A*_1 ∪ A*_2, partitioned into the 2 subsets A*_1 = {a^j ∈ A* : a^j ∈ C_1} and A*_2 = {a^j ∈ A* : a^j ∈ C_2}.
We denote by J * , J * 1 , and J * 2 the indices j of alternatives contained in A * , A * 1 , and A * 2 , respectively.
In the MIP formulation proposed in this section, we represent single-peaked or single-valley criteria only. As discussed in Section 3.2, this is not restrictive because cost and gain criteria are particular cases of single-peaked (or single-valley) criteria, with a peak corresponding to one of the endpoints of the evaluation scale.
Variables and constraints related to approved sets and profiles
Suppose that criterion i is single-peaked and that the set of approved values is defined by A_i = [b̲_i, b̄_i]. Let us denote b^⊥_i = (b̲_i + b̄_i)/2 the middle of the interval of approved values. Consider an alternative a^j ∈ A* in the learning set; its evaluation on criterion i is approved (i.e., a^j_i ∈ A_i) if a^j_i ∈ [b̲_i, b̄_i]. The condition |a^j_i - b^⊥_i| ≤ (b̄_i - b̲_i)/2 guarantees that a^j_i ∈ [b̲_i, b̄_i]. This allows to rewrite the set A_i as A_i = {x_i ∈ X_i : |x_i - b^⊥_i| ≤ (b̄_i - b̲_i)/2}. To test whether a^j_i ∈ A_i, we define α^j_i = a^j_i - b^⊥_i, so that a^j_i ∈ A_i ⇔ |α^j_i| ≤ (b̄_i - b̲_i)/2. In other words, we re-encode criterion i as a cost criterion representing the distance to b^⊥_i, and accepted values correspond to α^j_i whose absolute value is lower than or equal to (b̄_i - b̲_i)/2 (i.e., half the width of the interval [b̲_i, b̄_i]). In the following, we denote b̂_i = (b̄_i - b̲_i)/2. Hence, in our formulation, the sets A_i are defined using two variables: b^⊥_i, representing the middle of the interval [b̲_i, b̄_i], and b̂_i, representing half of the width of this interval, allowing to define A_i as A_i = {x_i ∈ X_i : |x_i - b^⊥_i| ≤ b̂_i}.
In order to linearize the expression |α^j_i| = |a^j_i - b^⊥_i| in the MIP formulation, we consider two positive variables α^{j+}_i, α^{j-}_i (defined such that |α^j_i| equals α^{j+}_i + α^{j-}_i) and binary variables β^j_i verifying constraints (6a)-(6c), where M is an arbitrarily large positive value. Constraints (6b) and (6c) ensure that at least one of the variables α^{j+}_i and α^{j-}_i is null.

α^j_i = a^j_i - b^⊥_i = α^{j+}_i - α^{j-}_i    (6a)
0 ≤ α^{j+}_i ≤ β^j_i M    (6b)
0 ≤ α^{j-}_i ≤ (1 - β^j_i) M    (6c)
Let δ_ij ∈ {0, 1}, i ∈ N, j ∈ J*, be binary variables expressing the membership of evaluation a^j_i in the approved set A_i (δ_ij = 1 ⇔ a^j_i ∈ A_i). In order to specify the constraints defining δ_ij, we need to distinguish whether criterion i is a single-peaked or a single-valley criterion. In the first case, the single-peaked criterion is transformed into a cost criterion and the following constraints hold:

δ_ij = 1 ⇔ |α^j_i| ≤ b̂_i ⇒ M(δ_ij - 1) ≤ b̂_i - (α^{j+}_i + α^{j-}_i)    (7a)
δ_ij = 0 ⇔ |α^j_i| > b̂_i ⇒ b̂_i - (α^{j+}_i + α^{j-}_i) < M δ_ij    (7b)
δ_ij ∈ {0, 1}    (7c)

In the second case, the single-valley criterion is conversely transformed into a gain criterion, as follows:

δ_ij = 1 ⇔ |α^j_i| ≥ b̂_i ⇒ M(δ_ij - 1) ≤ (α^{j+}_i + α^{j-}_i) - b̂_i    (8a)
δ_ij = 0 ⇔ |α^j_i| < b̂_i ⇒ (α^{j+}_i + α^{j-}_i) - b̂_i < M δ_ij    (8b)
δ_ij ∈ {0, 1}    (7c)
In order to jointly consider both cases (7a)-(7b) and (8a)-(8b) in the MIP, we introduce a binary variable σ_i, i ∈ N, which indicates whether criterion i is a single-peaked (σ_i = 1) or a single-valley (σ_i = 0) criterion. When σ_i = 1, the constraints (9c) and (9d) concerning single-peaked criteria hold while the constraints (9a) and (9b) for single-valley criteria are relaxed, and conversely when σ_i = 0.

-M σ_i + M(δ_ij - 1) ≤ α^{j+}_i + α^{j-}_i - b̂_i    (9a)
α^{j+}_i + α^{j-}_i - b̂_i < M δ_ij + M σ_i    (9b)
M(σ_i - 1) + M(δ_ij - 1) ≤ b̂_i - α^{j+}_i - α^{j-}_i    (9c)
b̂_i - α^{j+}_i - α^{j-}_i < M δ_ij + M(1 - σ_i)    (9d)
δ_ij ∈ {0, 1}    (7c)
σ_i ∈ {0, 1}    (9e)

Lastly, in order to restrain the bounds of the single-peaked/single-valley interval within [min_i, max_i], we add the two following constraints:

b^⊥_i - b̂_i ≥ min_i    (10a)
b^⊥_i + b̂_i ≤ max_i    (10b)
Variables and constraints related to weights
As in [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF], we define continuous variables c_ij, i ∈ N, j ∈ J*, such that δ_ij = 0 ⇔ c_ij = 0 and δ_ij = 1 ⇔ c_ij = w_i, where w_i ≥ 0 represents the weight of criterion i, with the normalization constraint Σ_{i∈N} w_i = 1. To ensure the correct definition of c_ij, we impose:

c_ij ≤ δ_ij    (11a)
δ_ij - 1 + w_i ≤ c_ij    (11b)
c_ij ≤ w_i    (11c)
0 ≤ c_ij    (11d)
Variables and constraints related to the assignment examples
So as to check whether assignment examples are correctly restored by the MR-Sort rule, we define binary variables γ_j ∈ {0, 1}, j ∈ J*, equal to 1 when alternative a^j is correctly assigned and 0 otherwise. The constraints below guarantee the correct definition of γ_j (where λ ∈ [0.5, 1] represents the MR-Sort majority threshold).

Σ_{i∈N} c_ij ≥ λ + M(γ_j - 1), ∀j ∈ J*_2    (12a)
Σ_{i∈N} c_ij < λ - M(γ_j - 1), ∀j ∈ J*_1    (12b)
Objective function
The objective of the Inv-MR-Sort problem is to identify the MR-Sort model which best matches the learning set. Therefore, in order to maximize the number of correctly restored assignment examples, the objective function can be formulated as: Max Σ_{j∈J*} γ_j. Finally, the MIP formulation for the Inv-MR-Sort problem with single-peaked and single-valley criteria is given below (where M is an arbitrarily large positive value and ε an arbitrarily small positive value). Table 1 synthesizes the variables involved in this mathematical program.

Table 1: Variables of the mathematical program
- α^{j+}_i, domain R+, n × |A*| variables: first component of the absolute value |a^j_i - b^⊥_i|
- α^{j-}_i, domain R+, n × |A*| variables: second component of the absolute value |a^j_i - b^⊥_i|
- β^j_i, domain {0,1}, n × |A*| variables: binary variable indicating the sign of a^j_i - b^⊥_i
- σ_i, domain {0,1}, n variables: σ_i = 1 if criterion i is single-peaked, σ_i = 0 if it is single-valley
- γ_j, domain {0,1}, |A*| variables: γ_j = 1 if alternative a^j is correctly assigned by the model, γ_j = 0 otherwise
- δ_ij, domain {0,1}, n × |A*| variables: δ_ij = 1 if a^j_i ∈ A_i, δ_ij = 0 if a^j_i ∉ A_i
- c_ij, domain [0,1], n × |A*| variables: c_ij = w_i if a^j_i ∈ A_i (i.e., if δ_ij = 1), c_ij = 0 if a^j_i ∉ A_i (i.e., if δ_ij = 0)
- b^⊥_i, domain R, n variables: middle of the interval [b̲_i, b̄_i]
- b̂_i, domain R, n variables: half of the width of the interval [b̲_i, b̄_i]

Max Σ_{j∈J*} γ_j, subject to:

Σ_{i∈N} c_ij ≥ λ + M(γ_j - 1)    ∀j ∈ J*_2    (12a)
Σ_{i∈N} c_ij + ε ≤ λ - M(γ_j - 1)    ∀j ∈ J*_1    (12b)
Σ_{i∈N} w_i = 1    (13b)
c_ij ≤ δ_ij    ∀j ∈ J*, ∀i ∈ N    (11a)
c_ij ≥ δ_ij - 1 + w_i    ∀j ∈ J*, ∀i ∈ N    (11b)
c_ij ≤ w_i    ∀j ∈ J*, ∀i ∈ N    (11c)
b^⊥_i - a^j_i = α^{j+}_i - α^{j-}_i    ∀j ∈ J*, ∀i ∈ N    (6a)
α^{j+}_i ≤ β^j_i M    ∀j ∈ J*, ∀i ∈ N    (6b)
α^{j-}_i ≤ (1 - β^j_i) M    ∀j ∈ J*, ∀i ∈ N    (6c)
-M σ_i + M(δ_ij - 1) ≤ α^{j+}_i + α^{j-}_i - b̂_i    ∀j ∈ J*, ∀i ∈ N    (9a)
α^{j+}_i + α^{j-}_i - b̂_i + ε ≤ M δ_ij + M σ_i    ∀j ∈ J*, ∀i ∈ N    (9b)
M(σ_i - 1) + M(δ_ij - 1) ≤ b̂_i - α^{j+}_i - α^{j-}_i    ∀j ∈ J*, ∀i ∈ N    (9c)
b̂_i - α^{j+}_i - α^{j-}_i + ε ≤ M δ_ij + M(1 - σ_i)    ∀j ∈ J*, ∀i ∈ N    (9d)
b^⊥_i - b̂_i ≥ min_i    ∀i ∈ N    (10a)
b^⊥_i + b̂_i ≤ max_i    ∀i ∈ N    (10b)
c_ij ∈ [0, 1], δ_ij ∈ {0, 1}    ∀j ∈ J*, ∀i ∈ N    (13c)
α^{j+}_i, α^{j-}_i ∈ R+    ∀j ∈ J*, ∀i ∈ N    (13d)
β^j_i ∈ {0, 1}    ∀j ∈ J*, ∀i ∈ N    (13e)
b̂_i ∈ R, w_i ∈ [0, 1], b^⊥_i ∈ R, σ_i ∈ {0, 1}    ∀i ∈ N    (13f)
γ_j ∈ {0, 1}    ∀j ∈ J*    (13g)
λ ∈ [0.5, 1]    (13h)
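The paper solves this program with CPLEX (see Section 6). Purely as an illustration, the sketch below shows how the two-category formulation could be written with the open-source PuLP modeller; the variable names, the big-M value, the ε value and the use of observed minima/maxima as scale bounds are our own choices, not prescriptions of the method.

```python
# Illustrative sketch of the MIP above using PuLP (the authors used CPLEX).
# Inputs: X[j][i] = evaluation of example j on criterion i, y[j] in {1, 2}.
import pulp

def build_inv_mr_sort_mip(X, y, M=100.0, eps=1e-4):
    J, N = range(len(X)), range(len(X[0]))
    prob = pulp.LpProblem("Inv_MR_Sort_SP", pulp.LpMaximize)

    w   = [pulp.LpVariable(f"w_{i}", 0, 1) for i in N]
    lam = pulp.LpVariable("lambda", 0.5, 1)
    bp  = [pulp.LpVariable(f"bperp_{i}") for i in N]              # b^perp_i
    bh  = [pulp.LpVariable(f"bhat_{i}", 0) for i in N]            # b_hat_i (half width)
    sig = [pulp.LpVariable(f"sigma_{i}", cat="Binary") for i in N]
    gam = [pulp.LpVariable(f"gamma_{j}", cat="Binary") for j in J]
    ap  = {(i, j): pulp.LpVariable(f"ap_{i}_{j}", 0) for i in N for j in J}
    am  = {(i, j): pulp.LpVariable(f"am_{i}_{j}", 0) for i in N for j in J}
    bet = {(i, j): pulp.LpVariable(f"beta_{i}_{j}", cat="Binary") for i in N for j in J}
    dl  = {(i, j): pulp.LpVariable(f"delta_{i}_{j}", cat="Binary") for i in N for j in J}
    c   = {(i, j): pulp.LpVariable(f"c_{i}_{j}", 0, 1) for i in N for j in J}

    prob += pulp.lpSum(gam)                                       # objective
    prob += pulp.lpSum(w) == 1                                    # (13b)
    for j in J:
        for i in N:
            prob += bp[i] - X[j][i] == ap[i, j] - am[i, j]        # (6a)
            prob += ap[i, j] <= M * bet[i, j]                     # (6b)
            prob += am[i, j] <= M * (1 - bet[i, j])               # (6c)
            dist = ap[i, j] + am[i, j]
            prob += -M*sig[i] + M*(dl[i, j] - 1) <= dist - bh[i]            # (9a)
            prob += dist - bh[i] + eps <= M*dl[i, j] + M*sig[i]             # (9b)
            prob += M*(sig[i] - 1) + M*(dl[i, j] - 1) <= bh[i] - dist       # (9c)
            prob += bh[i] - dist + eps <= M*dl[i, j] + M*(1 - sig[i])       # (9d)
            prob += c[i, j] <= dl[i, j]                           # (11a)
            prob += c[i, j] >= dl[i, j] - 1 + w[i]                # (11b)
            prob += c[i, j] <= w[i]                               # (11c)
        score = pulp.lpSum(c[i, j] for i in N)
        if y[j] == 2:
            prob += score >= lam + M * (gam[j] - 1)               # (12a)
        else:
            prob += score + eps <= lam - M * (gam[j] - 1)         # (12b)
    for i in N:
        lo, hi = min(X[j][i] for j in J), max(X[j][i] for j in J) # scale bounds from data
        prob += bp[i] - bh[i] >= lo                               # (10a)
        prob += bp[i] + bh[i] <= hi                               # (10b)
    return prob

# usage sketch: prob = build_inv_mr_sort_mip(X, y); prob.solve()
```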
Interpretation of the optimal solution
Once the optimal solution to the above mathematical program is found, it is necessary to derive from it the corresponding MR-Sort model, i.e.:
- the nature of each criterion (either cost, gain, single-peaked, or single-valley criterion),
- the weights w_i attached to criteria and the associated majority level λ,
- the frontier between categories C_1 and C_2, i.e., the value b_i if criterion i is a cost or a gain criterion, and the interval [b̲_i, b̄_i] if criterion i is a single-peaked or single-valley criterion.
Criteria weights w i , and associated majority level λ are directly obtained from the corresponding variables in the optimal solution. The preference directions and criteria limit profiles are deduced as follows:
- Case σ_i = 1 (criterion i is represented as a single-peaked criterion in the optimal solution):
  • if b^⊥_i - b̂_i ≤ min_{j∈J*} {a^j_i}, then criterion i is a cost criterion, and the maximal approved value on criterion i is b^⊥_i + b̂_i, i.e. A_i = ]-∞, b^⊥_i + b̂_i] (see Fig. 1, case 3);
  • if b^⊥_i + b̂_i ≥ max_{j∈J*} {a^j_i}, then criterion i is a gain criterion, and the minimal approved value on criterion i is b^⊥_i - b̂_i, i.e. A_i = [b^⊥_i - b̂_i, +∞[ (see Fig. 1, case 2);
  • otherwise, i is a single-peaked criterion, and A_i = [b^⊥_i - b̂_i, b^⊥_i + b̂_i] (see Fig. 1, case 1).
- Case σ_i = 0 (criterion i is represented as a single-valley criterion in the optimal solution):
  • if b^⊥_i - b̂_i < min_{j∈J*} {a^j_i}, then criterion i is a gain criterion, and the minimal approved value on criterion i is b^⊥_i + b̂_i, i.e. A_i = [b^⊥_i + b̂_i, +∞[ (see Fig. 2, case 3);
  • if b^⊥_i + b̂_i > max_{j∈J*} {a^j_i}, then criterion i is a cost criterion, and the maximal approved value on criterion i is b^⊥_i - b̂_i, i.e. A_i = ]-∞, b^⊥_i - b̂_i] (see Fig. 2, case 2);
  • otherwise, i is a single-valley criterion, and A_i = ]-∞, b^⊥_i - b̂_i] ∪ [b^⊥_i + b̂_i, +∞[ (see Fig. 2, case 1).

Fig. 1: Three cases for single-peaked criteria
Fig. 2: Three cases for single-valley criteria
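These decoding rules can be summarized in a few lines of Python; the function below is our sketch (names and the numerical example are illustrative), mirroring the six cases above.

```python
# Given the optimal values of sigma_i, b_perp_i, b_hat_i and the observed
# evaluations on criterion i, recover the criterion type and its approved set.

def interpret_criterion(sigma, b_perp, b_hat, observed_values):
    lo, hi = min(observed_values), max(observed_values)
    left, right = b_perp - b_hat, b_perp + b_hat
    if sigma == 1:                       # represented as single-peaked
        if left <= lo:
            return "cost", ("<=", right)           # A_i = ]-inf, right]
        if right >= hi:
            return "gain", (">=", left)            # A_i = [left, +inf[
        return "single-peaked", ("in", (left, right))
    else:                                # represented as single-valley
        if left < lo:
            return "gain", (">=", right)           # A_i = [right, +inf[
        if right > hi:
            return "cost", ("<=", left)            # A_i = ]-inf, left]
        return "single-valley", ("not in", (left, right))

# Toy example with illustrative numbers
print(interpret_criterion(1, 1.0, 0.2, [0.5, 0.8, 1.05, 1.3, 3.0]))
# -> ('single-peaked', ('in', (0.8, 1.2)))
```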
Extension to more than 2 categories
Our framework can be extended to more than two categories, at the cost of adding supplementary variables and constraints to the mathematical program. So as to extend to p categories (p > 2), the sets of approved values A^h_i ⊆ X_i on criterion i (i ∈ N) should be defined with respect to a category level h (h = 2, . . . , p), and should be embedded such that A^p_i ⊆ A^{p-1}_i ⊆ ... ⊆ A^2_i. In the MIP formulation, the variables δ_ij, c_ij, α^{j+}_i, α^{j-}_i, β^j_i, b̂_i and b^⊥_i are then indexed by the category level h, and the assignment constraints (12a)-(12b) become:
• Σ_{i∈N} c^{p-1}_ij ≥ λ + M(γ_j - 1), ∀a^j ∈ C^p
• Σ_{i∈N} c^1_ij + ε ≤ λ - M(γ_j - 1), ∀a^j ∈ C^1
• Σ_{i∈N} c^{h-1}_ij ≥ λ + M(γ_j - 1), ∀a^j ∈ C^h, h ∈ {2, ..., p-1}
• Σ_{i∈N} c^h_ij + ε ≤ λ - M(γ_j - 1), ∀a^j ∈ C^h, h ∈ {2, ..., p-1}

Lastly, constraints on b̂^h_i and b^{h⊥}_i should be imposed so as to guarantee that the approved sets are embedded such that A^{p-1}_i ⊆ A^{p-2}_i ⊆ ... ⊆ A^1_i, i.e., [b^{(p-1)⊥}_i - b̂^{p-1}_i, b^{(p-1)⊥}_i + b̂^{p-1}_i] ⊆ [b^{(p-2)⊥}_i - b̂^{p-2}_i, b^{(p-2)⊥}_i + b̂^{p-2}_i] ⊆ . . . ⊆ [b^{1⊥}_i - b̂^1_i, b^{1⊥}_i + b̂^1_i].
Experiments, results and discussion
In this section, we report numerical experiments to empirically study how the proposed algorithm behaves in terms of computing time, ability to generalize, and ability to restore an MR-Sort model with the correct preference direction (gain, cost, single-peaked, or single-valley). The experimental study involves artificially generated datasets and ex-post analysis of a real-world case study.
Tests on generated datasets
In this section we are focusing on synthetic data.
Experimental design
Assuming a generated MR-Sort model M 0 representing perfectly the Decision Maker preferences, we first randomly generate n-tuples of values considered as alternatives (each tuple corresponding to n criteria evaluations). Then we simulate the assignments of these alternatives following the model M 0 and obtain, therefore, assignment examples which constitute the learning set L, used as input to our MIP algorithm. Alternatives are generated in such a manner to obtain a balanced dataset (i.e. equal number of assignments in each category). The Inv-MR-Sort problem is then solved using the proposed algorithm and, as a result, generating a learned model noted M ′ .
Generation of instances and model parameters. We consider a learning set of 200 assignment examples. Performance vectors of alternatives are drawn in an independent and identically distributed manner, such that the performance values are contained in the unit interval discretized by tenths. We then randomly generate profile values (either b_i, or b̲_i and b̄_i) for each criterion; these values are also chosen within the unit interval discretized by tenths.
In order to draw uniformly distributed weight vectors (see [START_REF] Butler | Simulation techniques for the sensitivity analysis of multi-criteria decision models[END_REF]), we uniformly generate |N| - 1 random values in [0, 1] and sort them in ascending order. We then prepend 0 and append 1 to this set of values, obtaining a sorted set of |N| + 1 values. Finally, we compute the difference between each successive pair of values, resulting in a set of |N| weights summing to 1. We randomly draw λ in [0, 1]. In order to assess the ability of the algorithm to restore preference directions, we consider q criteria out of n for which the preference direction is treated as unknown, and we uniformly draw a random preference direction among gain, cost, single-peaked and single-valley. For each single-peaked and single-valley criterion, the peak is uniformly drawn in the [0, 1] interval discretized by tenths 4 . Hence, the preference directions of these q criteria are assumed to be unknown, while the remaining n - q criteria are considered as gain criteria.
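As an illustration, the following Python sketch reproduces this generation protocol for gain criteria only (the paper additionally draws single-peaked and single-valley criteria with a random peak); the function names and the rejection-sampling loop used to balance the categories are our own assumptions.

```python
# Sketch of the generation protocol: uniform weights via differences of sorted
# uniforms, random profiles on a 0-1 scale discretized by tenths, and simulated
# assignments with a ground-truth model M0 (2 categories, gain criteria only).
import random

def random_weights(n):
    cuts = sorted(random.random() for _ in range(n - 1))
    cuts = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(cuts, cuts[1:])]   # n weights summing to 1

def random_scale_value():
    return random.randint(0, 10) / 10                # unit interval by tenths

def generate_m0(n):
    return {"w": random_weights(n),
            "lam": random.random(),
            "b": [random_scale_value() for _ in range(n)]}

def assign(model, x):                                # 2-category MR-Sort
    support = sum(w for xi, bi, w in zip(x, model["b"], model["w"]) if xi >= bi)
    return 2 if support >= model["lam"] else 1

def balanced_learning_set(model, n, size=200, max_tries=100000):
    per_cat, data, tries = size // 2, {1: [], 2: []}, 0
    while min(len(data[1]), len(data[2])) < per_cat and tries < max_tries:
        tries += 1
        x = [random_scale_value() for _ in range(n)]
        cat = assign(model, x)
        if len(data[cat]) < per_cat:
            data[cat].append(x)
    return data   # may stay unbalanced if one category is (nearly) empty for M0

m0 = generate_m0(5)
L = balanced_learning_set(m0, 5)
```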
Performance metrics and tests parameters. To study the performance of the proposed algorithm, we are considering three main metrics.
-Computing time: we consider here the time (CPU) necessary to solve the MIP algorithm.
- Restoration rate of assignment examples: as our MIP algorithm is an exact method, it is expected that the entire learning set L will be restored by M′. Therefore we assess the restoration performance on a test set which is run through both M0 and M′. This test set comprises randomly generated alternatives not used in the learning set, that is, assignment examples that the algorithm has never seen. This allows us to assess the restoration rate (also called classification accuracy in generalization, or CAg), that is, the ratio between the number of alternatives identically assigned by both M0 and M′ and the total number of alternatives.
- Preference direction restoration rate (PDR): considering the set of criteria whose preference direction is unknown, PDR is defined as the ratio between the number of criteria for which the preference direction has been correctly restored and the cardinality of this set.
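A minimal sketch of these two indicators, assuming the ground-truth and learned models are available as assignment functions and as lists of preference directions (names are ours):

```python
# CAg: share of test alternatives assigned identically by M0 and M'.
def classification_accuracy(m0_assign, learned_assign, test_set):
    hits = sum(1 for x in test_set if m0_assign(x) == learned_assign(x))
    return hits / len(test_set)

# PDR: share of hidden-direction criteria whose direction was correctly restored.
def preference_direction_restoration(true_dirs, learned_dirs, unknown_idx):
    hits = sum(1 for i in unknown_idx if true_dirs[i] == learned_dirs[i])
    return hits / len(unknown_idx)
```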
In order to account for the statistical distribution of all the randomly selected values, we independently select 100 different learning sets, each one associated with a randomly generated M 0 MR-Sort model. We then ran 100 independent experiments and aggregated the results.
In our experiments, the number of criteria n varies in {4, 5, 6, 7, 8, 9}, the number of criteria with unknown preference directions q varies in {0, 1, 2, 3, 4}, and the number of categories is set to 2. The test set is composed of 10000 randomly generated alternatives.
We executed experiments on a server endowed with an Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz, 80 cores and 384 GB RAM. The tests were performed using the Cplex solver version 20.1.0 [START_REF]IBM ILOG CPLEX Optimization Studio CPLEX User's Manual[END_REF] on the server using 10 reserved threads and limiting the computing time to a maximum of 1h.
Results
In the following we present the results of the randomly generated tests.
Computing time. Table 2 presents the median CPU time of the terminated instances (timeout set to 1 hour). The execution time increases with the number of criteria and the number of criteria with unknown preference direction up to n = 7 and q = 2. Beyond this limit, the execution time fluctuates, and we observe a relatively large dispersion of CPU times. For instance, when n = 7 and q = 3, the median CPU time is just above 1 minute (76.3 sec.), while the 90th percentile value exceeds 1 hour. Additionally, Table 2 shows the percentage of instances that terminated within the time limit of 1 hour. Unsurprisingly, the number of terminated instances decreases with both the number of criteria and the number of criteria with unknown preference direction. In particular, the rate drops from 95% with 4 criteria to 31% with 9 criteria in the model when q = 3.
Restoration rate of the test set. Regarding the classification accuracy (CAg) of the learned models (involving 4 to 9 criteria in the model and 0 to 4 criteria with unknown preference direction), the performance values are globally between 0.9 and 0.95, with 0.93 on average. We do not notice a significant trend over either the number of criteria or the number of criteria with unknown preference directions. However, these figures reflect the performance of terminated instances only. Therefore, the CAg rate could degrade when considering executions above the timeout, assuming these are the most difficult instances to learn.

Preference direction restoration rate. Figure 3 illustrates the evolution of the preference direction restoration rate (PDR). Globally, the PDR falls as the number of criteria in the model increases. In addition, this indicator degrades moderately with the number of criteria with unknown preference directions, with respectively 55% and 35% for q = 1 and q = 4 considering 9 criteria in the model. The results illustrated in Table 3 give more insight into the behaviour of the algorithm regarding PDR. We consider instances involving one criterion with unknown preference direction, q = 1 (it corresponds to criterion 1). We analyze how the importance of this criterion (w_1) affects the restoration rate of the preference direction. The PDR is averaged over the number of criteria in the model (n ∈ {4, .., 9}) and distributed over three intervals, [0, 1/(2n)], ]1/(2n), 2/n[ and [2/n, 1], which can be interpreted as three levels of importance of w_1 (low, medium and high, respectively). As expected, the average PDR rises with the importance of w_1: we observe a PDR of 44% for a low level of importance, whereas 74% and 78% correspond respectively to a medium and high level of importance of w_1. It appears that the MIP has more difficulty in correctly detecting the preference direction of a criterion when this criterion has low importance.
Discussion
The experiments carried out on randomly generated instances give us the following insights.
Although exact methods are typically computationally intensive, the computation time is relatively affordable for medium-sized models (less than 3 minutes for 200 alternatives in the learning set and up to n = 9 and q = 4 in the model when the timeout is set to 1 hour). Moreover, the computation time could be reduced, as our experiments were performed with a limited number of threads (10).
The algorithm can restore accurately new assignment examples based on the learned models (0.93 on average up to 9 criteria) and remains relatively efficient with the number of criteria with unknown preference directions. Extended experiments should be done without the limit of time to accurately predict the restoration rate in generalization with the increase of parameters n and q.
Our algorithm has difficulty restoring preference directions when the number of criteria grows while the learning set size is kept constant. The PDR also decreases as the number of criteria with unknown preference direction in the model increases, for similar learning set sizes (but remains greater than the 25% expected from a random choice). It would be instructive to examine the algorithm's behaviour in terms of PDR on non-terminated instances for more insight.
Finally, the restoration rate of a criterion's preference direction correlates with this criterion's importance in the model. It appears that the preference directions of criteria with importance below 1/(2n) are the most difficult to restore. These results hold for a learning set of fixed size (200); it would be interesting to investigate experimentally whether larger learning sets would make it possible to accurately learn the preference directions.
Tests on a real-world data: the ASA dataset
The ASA 5 dataset [START_REF] Lazouni | A new computer aided diagnosis system for pre-anesthesia consultation[END_REF] is a list of 898 patients evaluated on 14 medical indicators (see Table 4), enabling patients to be assigned to 4 ordered categories (ASA1, ASA2, ASA3, ASA4). These categories correspond to 4 different scores that indicate the patient's health. Based on the score obtained for a given patient, anesthesiologists decide whether or not to admit the patient to surgery. The relevance of the dataset for our tests relies on the presence of a criterion with single-peaked preferences, namely "Blood glucose level" (i.e. glycemia). For practicality, we restrict the ASA dataset to the 8 criteria most relevant for our experiments; they appear in bold in Table 4. To obtain two categories, we first divide the dataset into two parts: Category 2 representing patients in classes ASA1 and ASA2 (67% of the population) and Category 1 representing those in classes ASA3 and ASA4 (33% of the population).
In the following, we illustrate how to learn the model parameters and the preference type (gain, cost, single-peaked (SP), single-valley) of the criterion "Glycemia" with three different sets of assignment examples chosen in the original set of 898 patients. In this medical application, we suppose that the "Glycemia" criterion type is unknown and expect to "discover" a single-peaked criterion. We report for each experiment the number of distinct performances considered per criterion. First Dataset: we initially consider the whole original dataset, with all 898 assignment examples in the learning set, as input to our MIP algorithm. We infer the type (gain, cost, single-peaked, single-valley) of the criterion Glycemia and the MR-Sort parameters from this first dataset.
The inferred model given in Table 5 is computed in 40h33mn execution time. The obtained model allows restoring CA = 99.4% of the assignment examples in the learning set. However, in the inferred model, the glycemia criterion is detected as a cost criterion to be minimized (whereas we expect it to be inferred as single-peaked). Note that the inferred value for the limit profile on the glycemia criterion (1.18 g/l) makes it possible to distinguish patients with hyperglycemia from the others but does not distinguish hypoglycemia from normal glycemia (normal glycemia corresponds to [0.9,1.2]). This is due to the distribution of the glycemia values over the patients shown in Figure 4. This distribution shows that all patients with glycemia above 1.2g/l (hyperglycemia) are assigned to Category 1. However, some patients with normal glycemia [0.9,1.2] are also assigned to Category 1, and some patients with glycemia equal to 0.8 g/l or below (hypoglycemia) are assigned to Category 2.
In the following, we check if it is possible to restore the "correct" preference direction (i.e. single-peaked) with a subset of carefully selected patients. To do so, we will remove the patients with normal glycemia ([0.9, 1.2] g/l) assigned to category 1.
Second Dataset : in a second step, we choose to remove the 97 patients of the learning set assigned to Category 1 and whose glycemia values lie within [0.9, 1.2] g/l, i.e., with normal glycemia. Our goal is to foster the algorithm to retrieve a single-peaked preference for the glycemia criterion. The distribution of glycemia values in the new learning set of the remaining 801 patients is provided in Fig. 4.
Using this second learning set, we solve the inference problem with the MIP algorithm. Computation time is 56mn, and the inferred model (see Table 6) restores 99.8% of the learning set. Once again, the restoration rate is high. However, the glycemia criterion is still detected as a cost criterion to be minimized (instead of a single-peaked criterion). The inferred model does not distinguish patients with hypoglycemia from those with normal glycemia.

Third Dataset: Finally, we remove patients in Category 2 for which the glycemia value is lower than 0.9 (hypoglycemia). This new configuration leads to a dataset of 624 patients. In this third dataset, the distribution of glycemia values (see Figure 6) is such that hypo- and hyperglycemic patients are assigned to Category 1 while patients with normal glycemia are in Category 2. With this dataset, the MIP algorithm runs in 4mn30s and the results are presented in Table 7. The computed model restores all the assignment examples, and glycemia is now detected as a single-peaked criterion. Furthermore, the approved values [0.93, 1.18] can be reasonably interpreted as normal glycemia. This illustrative example shows that our model can infer an MR-Sort model and retrieve single-peaked criteria; however, to do so, the learning set should be sufficiently informative. Specifically, when inferring from a dataset a "ground truth" in which a specific criterion i is single-peaked with a set of acceptable values A_i = [b̲_i, b̄_i], it is necessary that some examples in the learning set are evaluated on criterion i below b̲_i, in [b̲_i, b̄_i], and above b̄_i.
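For illustration, the successive filterings described above could be scripted as follows with pandas; the file name and column names are assumptions of ours, not those of the actual ASA files.

```python
# Reproducing the three-dataset construction (sizes expected from the text:
# 898, 801 and 624 patients); "glycemia" and "category" are hypothetical names.
import pandas as pd

asa = pd.read_csv("asa.csv")                      # 898 patients, hypothetical file

# Second dataset: drop Category-1 patients with normal glycemia [0.9, 1.2] g/l
second = asa[~((asa["category"] == 1) & asa["glycemia"].between(0.9, 1.2))]

# Third dataset: additionally drop Category-2 patients with hypoglycemia (< 0.9 g/l)
third = second[~((second["category"] == 2) & (second["glycemia"] < 0.9))]

print(len(asa), len(second), len(third))
```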
Conclusion and future work
This paper proposes a MIP-based method to infer an MR-Sort model from a set of assignment examples when considering possibly non-monotone preferences. More precisely, we learn an MR-Sort model with criteria that can be of type (i) cost, (ii) gain, (iii) single-peaked or (iv) single-valley. Our inference procedure simultaneously infers from the dataset an MR-Sort model and the type of each criterion.
Our experimental test on simulated data shows that the MIP resolution can handle a dataset involving 200 examples and nine criteria. Experiments suggest that the correct restoration of the criteria type (i)-(iv) requires a dataset of significant size.
Our work opens avenues for further research. First, it would be interesting to test our methodology on further real-world case studies to assess and investigate our proposal's performance and applicability. Another research direction aims at pushing back the computational barrier: our MIP resolution approach faces a combinatorial explosion. The design of an efficient heuristic would be beneficial in this respect.
Fig. 3: Preference direction restoration rate (PDR) considering 1 to 4 criteria with unknown preference direction (q) (average performance over terminated instances)
Fig. 4: Distribution of patients' glycemia in the first dataset
Fig. 5: Distribution of patients' glycemia in the second dataset
Fig. 6: Patients' glycemia in the third dataset
Table 2: Median CPU time (sec.) of instances solved in 1h / 9th decile of CPU time, and proportion of terminated instances, with 4 to 9 criteria (n) and 0 to 4 criteria with unknown preference direction (q)
Table 3: PDR (averaged over n) according to the range of weight of criterion c_1
Table 4: Original criteria in the ASA dataset
Table 5: Inferred model with the first dataset (898 assignment examples)

| Attributes | #values | Direction | pref. dir. | b_i | w_i | pref. dir. (learned) |
|---|---|---|---|---|---|---|
| Age | 103 (origin) | min. | known | 72.9 | 0.01 | |
| Diabetic | 2 (origin) | min. | known | 0.99 | 0 | |
| Hypertension | 2 (origin) | min. | known | 0 | 0.01 | |
| Respiratory F | 2 (origin) | min. | known | 0.99 | 0.88 | |
| Pacemaker | 2 (origin) | min. | known | 0 | 0.02 | |
| Systolic BP | 24 (origin) | min. | known | 15 | 0.03 | |
| Diastolic BP | 17 (origin) | min. | known | 8.92 | 0.02 | |
| Glycemia | 82 (origin) | SP | unknown | 1.18 | 0.03 | min |

λ = 0.98
Table 6: Inferred model with the second dataset (801 assignment examples)

| Attributes | #values | Direction | pref. dir. | b_i | w_i | pref. dir. (learned) |
|---|---|---|---|---|---|---|
| Age | 103 (origin) | min. | known | 5.9 | 0 | |
| Diabetic | 2 (origin) | min. | known | 0.99 | 0 | |
| Hypertension | 2 (origin) | min. | known | 0 | 0.01 | |
| Respiratory F | 2 (origin) | min. | known | 0 | 0.01 | |
| Pacemaker | 2 (origin) | min. | known | -0.01 | 0 | |
| Systolic BP | 23 | min. | known | 15 | 0.01 | |
| Diastolic BP | 15 | min. | known | 8.5 | 0.01 | |
| Glycemia | 82 (origin) | SP | unknown | 1.18 | 0.96 | min |

λ = 0.99
Table 7: Inferred model with the third dataset (624 assignment examples)

| Attributes | #values | Direction | pref. dir. | b_i | w_i | pref. dir. (learned) |
|---|---|---|---|---|---|---|
| Age | 103 (origin) | min. | known | 3.3 | 0 | |
| Diabetic | 2 (origin) | min. | known | 0 | 0 | |
| Hypertension | 2 (origin) | min. | known | 0 | 0 | |
| Respiratory F | 2 (origin) | min. | known | 0.99 | 0 | |
| Pacemaker | 2 (origin) | min. | known | 0 | 0 | |
| Systolic BP | 23 | min. | known | 12.88 | 0.01 | |
| Diastolic BP | 15 | min. | known | 9 | 0.01 | |
| Glycemia | 73 (origin) | SP | unknown | [0.93, 1.18] | 0.99 | SP |

λ = 1
It should be noted that if the peak is drawn as an extreme value, the single-peaked (or single-valley) criterion actually corresponds to a monotone (gain or cost) criterion. |
04018464 | en | [
"math.math-ho"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04018464/file/2023%20Balacheff%20Note%20transpo%20proof%20F2.pdf | Notes for a study of the didactic transposition of mathematical proof
Nicolas Balacheff*
It is nowadays common to consider that proof must be part of the learning of mathematics from Kindergarten to University 1 . As is easy to observe when looking back at the history of mathematical curricula, this has not always been the case, either because, following an old pedagogical tradition of rote learning, proof was reduced to the formalism of a text and deprived of its meaning, or because, despite its acknowledged presence everywhere in mathematics, proof did not get the status of something to learn for what it is. On the long way from its absence as such in the past to its contemporary presence as a content to be taught at all grades, proof has had to go through a process of didactical transposition to satisfy a number of different constraints of an epistemic, didactical, logical or mathematical nature. I will follow a chronological order to outline the main features of this process, with the objective of better understanding the didactic problem that our current research is facing.
A note on didactic transposition

The concept
As for any other content, teaching and learning mathematics require, as far as possible, its complete and precise specification as a knowledge to be taught, be it of a conceptual, methodological or technical nature (e.g. integers, multiplying two binomials (FOIL), factorization, integration of a function). It must first be identified, then uttered and finally chosen. This is the social responsibility of several organizations, which include those who put the knowledge to use as well as decision makers. A complex process takes charge of its formalization as a content to be taught, whose outcomes are curricula, textbooks and other texts published and disseminated by the organizations in charge. It has been recognized since the origin of modern education that the knowledge in use and the knowledge to be taught share similarities while showing significant differences. These phenomena are conceptualized and modelled by the concept of didactic transposition coined by Yves [START_REF] Chevallard | Analyse des pratiques enseignantes et didactique des mathématiques : L'approche anthropologique[END_REF]. It refers "to the transformations an object or a body of knowledge undergoes from the moment it is produced, put into use, selected and designed to be taught, until it is actually taught in a given educational institution." (Chevallard & Bosch, 2014, p. 170).
Numerous organizations contribute to the decision to turn a piece of knowledge into a content to be taught and participate in shaping its transposition: professional mathematicians, various users from engineers to dealers, associations of teachers, bodies at the different layers of the educational institution and educational decision makers. The forces which interact are of a professional, political and social nature. Yves Chevallard approaches the related phenomena through "the study of the conditions enabling and the constraints hindering the production, development and diffusion of knowledge and, more generally, of any kind of human activity in social institutions" (ibid. p. 173). The didactic transposition is a human and social enterprise which study is now part of the Anthropological Theory of Didactic [START_REF] Chevallard | Analyse des pratiques enseignantes et didactique des mathématiques : L'approche anthropologique[END_REF][START_REF] Chevallard | What is a theory according to the anthropological theory of the didactic[END_REF]. In this theoretical framework, taking into consideration "transpositive phenomena means moving away from the classroom and being provided with notions and elements to describe the bodies of knowledge and practices involved in the different institutions at different moments of time." (Chevallard & Bosch, 2014, p. 172).
About the method
The process of didactic transposition is multi-stage, it involves several forms of a piece of knowledge, whose relations are shaped by the interactions between its different contributors. Four forms play a pivotal role: the scholarly knowledge, the knowledge to be taught, the taught knowledge and the learnt knowledge (Chevallard & Bosch, 2014, p. 170). Since the objective of these notes is to outline the dynamic of the didactic transposition of mathematical proof from an historical perspective, I will align my quest on the first two forms, the scholarly knowledge and the knowledge to be taught. This will set a limit to this exploration since the distance between the knowledge to be taught and the knowledge taught may be important, as we know that it is the case between the knowledge taught and the learned knowledge. Moreover, there are little historical data on what happens in classroom either from the teaching or the learning perspective. But this limit is not too severe given my purpose. I expect that the chosen approach for these notes informs us sufficiently precisely about the nature and the complexity of the phenomena underlying the transposition process.
These notes are based on published research on the history of mathematics education, especially the history of the teaching of geometry, readings of original treatises and textbooks, institutional comments and curricula. The study of these resources is faced with the fragility of knowledge which, paradoxically, despite a "constant form" does not keep a constant meaning (Kang & Kilpatrick, 1992, p. 3). This classic remark from linguistics and communication sciences has profound consequences for the epistemological analysis of processes dedicated to knowledge communication and dissemination, since they assume hypothetical students, teachers and classrooms whose perceptions and models are rarely documented (ibid. p. 5).
Hence, readers must keep in mind that the analysis and comments shared in these notes have a conjectural stance, though they paradoxically claim to make a sufficiently robust contribution to research on proof as a content to be taught. This robustness comes from the quality of the data and resources, and from their significance in relation to the phenomena identified.
Caveat: I use the French quotation marks « … » when inserting a quote translated by myself from the French, and classical quotation marks "…" when the quote is originally in English.
Geometry, the theoretical ground of proof

Euclid's Elements

Mathematical proof appeared very late as an explicit content to be taught, considering its early formalization by the Greeks in the 3rd century BC. It comes nowadays with a vision of writing mathematics with absolute rigour, as deductive reasoning based on explicit foundations, definitions and postulates. This is indeed an idealization of what underpins Euclid's Elements of geometry, which shaped the construction of the concept of proof and stimulated the development of mathematics as a discipline. This perennial reference has contributed to the constitution of standards of communication between mathematicians. Even if it has been contested and transformed in the course of the history of mathematics, it constitutes a landmark of its epistemology.
Euclid's Elements are considered one of the treasures of the ancient Greek legacy, though they were left aside for a long time, until the 12th century. Some geometry was probably taught for its practical inputs, but there is not much evidence about this period, when education was not organized in a systematic way; the quadrivium had neither teachers nor students, and domains other than mathematics had priority (grammar, rhetoric) (Høyrup, 2014, sect. 2 & 3). The first milestone of importance from the perspective of mathematics education, following Jens Høyrup, is, somewhere in the first half of the 12th century, the translation of Euclid's Elements from the Arabic, "presumably" by Adelard of Bath (fl. 1116-1142).
It is still too commonly claimed that the journey from Euclid BC to Euclid AD in the West only took a diversion through Arab countries. On the contrary, the Arabic translation of the Elements in the 8 th century was the subject of discussion among mathematicians of the time about the text and the usefulness of studying it. In particular, Sonja [START_REF] Brentjes | Teaching the Mathematical Sciences in Islamic Societies Eighth-Seventeenth Centuries[END_REF] refers to Al-Sizjī disagreeing "vehemently" with those who downplayed the study of Euclid, and insisting on "the need to study and work hard to become a good geometer" (ibid. p. 92). Al-Sizjī gives reason for the study of the Elements among which: "[to follow] the methods of them (these theorems and preliminaries) in a profound and successful way, so that you rely not only on the theorems and preliminaries and constructions and arrangements which we mentioned. But you must combine with that (your own) cleverness and guesswork and tricks. The pivotal factor in this art is the application of tricks, and not only (your own) intelligence, but also the thought of the experienced (mathematicians), the skilled, those who use tricks" (Brentjes, 2014, p. 92).
Then, the 10th century Islamic position was that "the beginner must learn the first theorems of Euclid's Elements" (ibid.). 2
Later versions of Euclid Elements derived from the first Adelard translation, possibly due to his students (Høyrup, 2014, p. 114), were marked by "didactical concerns": "at this point it is clear that the matter presented in the work had become the primary aim, while further utility for astronomy (and, still further, for astrology) had retreated into the background" (ibid.).
To a certain extent, because it was not compulsory, the Elements have been taught since then, but it was a challenge given the scarce educational material and the style of the lectures. This was the pre-Gutenberg era (ibid. 119). The printing of the Elements in Venice in 1482 radically changed access to the text. Controversies over the Elements and their translations developed until the Latin text of Commandin (1572), which seems to have reached some form of consensus (Loget, Héritage et réforme du quadrivium au XVIe siècle).
The interest in Geometry 3 grew during the 16th century, together with critical readings of Euclid's Elements concerning its logic, principles and order of propositions (E. Barbin & Menghini, 2014, p. 474). The perspective was theoretical, questioning the text, hence the Euclidean standard pattern of proof.
The criticisms of Euclid
Let us open this section with a quotation from the foreword of the Elemens d'Euclide of R. P. Dechalles de la Compagnie de Jésus (1660), first translated into French by Ozanam, member of the Royal Academy of Sciences, in 1677: « Having noticed for a long time, that most of those who learn the elements of Euclid, are very often disgusted with them, for not knowing what is the use of such apparently insignificant propositions, and nevertheless so useful; I thought it would be very appropriate, not only to make them as easy as possible; but also, to add some small uses, after each proposition, which would show their usefulness. This is what obliged me to change some demonstrations, which I judged to be too awkward, and beyond the ordinary reach of beginners, and to substitute a few more intelligible ones. » (Dechalles, 1660, sect. Avant-propos - my translation) This foreword to the Elements sets out the main issues that would be disputed in the centuries to come: the conflict between theoretical and practical views of geometry 4, and the necessity and complexity of proofs, with Euclid standing out as the key reference for learning mathematics.

2 Although difficult because of the methodological problems of working with a large historical corpus and avoiding cultural bias in the analysis (Brentjes, Pourquoi et comment étudier l'histoire de l'enseignement des mathématiques dans les sociétés islamiques entre 750 et 1500), the study of Islamic mathematics education in this period is most definitely needed.

3 I will write Geometry with a capital letter to refer to pure geometry, a body of theoretical knowledge, as opposed to practical geometry, a body of knowledge meant to be used to solve technical, professional or everyday problems. The latter is not well defined, but one may understand that this kind of knowledge claims some direct usefulness. The border between theoretical and practical is not in the form of the text; for example, Descriptive geometry has a utilitarian raison d'être and is based on a theoretical construction. Both were taught, and are still taught, but with radically different educational objectives and with different pedagogical approaches. Geometry was the natural terrain of the debates on the teaching of mathematical proof. This distinction may be debatable under the light of specific cases, and I thank the first readers of this note for their comments, but it is useful for guiding the analysis.
The theoretical character of Geometry, which gives priority to the logical organisation of the text, was denounced by Descartes. In his essay on the Rules for the Direction of the Mind (ca 1628) 5, he criticizes the ancient mathematicians (in fact, the Elements) for the priority given to convincing to the detriment of understanding, and thus for their inability to let the reason and meaning of proofs, and therefore of theorems, be grasped; that is to say, the understanding of Geometry itself. This criticism points to a tension between convincing and explaining that we now know is inherent in argumentative discourse. Descartes conceptualised this tension by distinguishing analysis and synthesis: the former "is the best and truest method of instruction", while the latter "is very suitable to deploy in geometry" 6. As a matter of fact, he sets the problem of the teaching of geometry in a way which is still relevant: how to manage the equilibrium between convincing and explaining. These criticisms of Euclid's Elements found a translation in the writing of the Géométrie of Arnauld (1667), who proposed a new organisation following the principles of analysis. Sylvestre-François Lacroix, more than a century later, wrote about Arnauld's Geometry: « This work is, I believe, the first one in which the order of the propositions of Geometry agrees with that of abstractions, considering first the properties of lines, then those of surfaces, and then those of bodies. » (Lacroix, 1799, p. xix).
In the first half of the 18th century, Étienne Bonnot de Condillac, in his Essay on the origins of human knowledge (1746), acknowledged Geometry as a model of rigour while criticizing its lack of, let us say, simplicity. His fundamental postulate claims the superiority of metaphysics for the formation of « a luminous, precise and extensive mind, and which, consequently, must prepare for the study of all the other sciences » (ibid. p. 13). While he takes geometry as a model for the construction of his essay, he criticises it for its failure to find « an order simple enough and easy enough to arrive at the obvious » (ibid.). His essay includes considerations on the communication of knowledge which had a long-lasting influence: "Finally, after having developed the progress of the operations of the soul and those of language, I try to indicate the means by which one can avoid error, and to show the order one must follow, either to make discoveries, or to instruct others of those one has made." (ibid. p.16).
In the same period, the mathematician Alexis Claude Clairaut published his Éléments de Géométrie (1741) with a preface which takes a clear didactical position rooted in a pedagogical observation: « [...] it's common for beginners to get tired & put off, before they have any distinct idea of what we wanted to teach them. » (ibid. p. ij). Clairaut searched for an approach « [...] bringing together the two advantages of interesting & enlightening the beginners ». (ibid. p. iij). Looking into the history of Geometry, he chose the problems of « measuring lands » (fr: arpentage) to give meaning and to avoid proofs of the obvious, because those interested in geometry « enjoyed exercising their minds a little; & on the contrary, they were put off, when they were overwhelmed with demonstrations, which were almost useless » 7 (ibid. p. x), but he warns the reader: « this is no ordinary treatise on land-surveying » (ibid. p. xij). However, it is not a textbook either. The Elements of Clairaut should be read as a manifesto contributing to the ongoing discussions on Euclid's Elements (Glaeser, À propos de la pédagogie de Clairaut : vers une nouvelle orientation dans l'histoire de l'éducation, RDM).
At the end of the 18th century, Euclid's Elements are the reference whose criticisms nourish the search for a text of Geometry, both for its theoretical establishment and for its communication. This search focused on the organisation of the text and on the need for proof for some propositions, the Euclidean standard remaining stable.
The 19th century, an epistemological dispute
The French organisation of higher education at the end of the 18th century widened the gap between the theoretical and the utilitarian approaches of geometry. Two different courses of study developed, corresponding to two different systems. On the one hand, a teaching of geometry oriented towards applications, essentially in engineering and military schools; on the other hand, a teaching of Geometry centred on geometry for itself, taught in the Normal schools 8 and Faculties. In this context, the criticisms of Euclid led not only to the writing of new Elements, but also to the writing of treatises with the objective of satisfying practical needs. A significant example is the contribution of the mathematician Gaspard Monge 9 to the project for a Public instruction: « There is an order of knowledge of an indispensable necessity for stone cutters, stone masons, carpenters, joiners, locksmiths, contractors of all kinds, painters, engineers of the bridges and roads, engineering officers [...]. The order of knowledge in question here is based on a particular geometry of three dimensions of which no well-made treatise exists; on a purely descriptive, but rigorous geometry, whose purpose is to represent by drawings that have only two dimensions objects that have three. » (Monge 1793 quoted by Eveline Barbin (2021, p. 104)). This « particular geometry » is Descriptive geometry, which is not a new geometry but a reliable, robust and efficient geometrical instrument designed on and with Geometry in order to manipulate graphical representations of geometrical objects modelling objects from the physical world. Rigour is evoked and required, but proof is not the central preoccupation. This orientation is very strong; it could be seen as a seed of what is now known as Applied Mathematics. It developed in France with the Grandes Écoles system for higher education, which had weaker links with general education and the secondary school system than the Universities, which were more theoretically oriented. I will continue this exploration focusing on the latter, where mathematical proof as a teaching object emerged at the end of the century.
The 19th century saw the nurturing of educational systems at national scales for primary education and secondary education as well, though the population having access to the latter was rather limited. First, secondary education was expensive; second, it was essentially oriented towards University education; third, it was reserved for boys. Two textbooks played a distinctive role because of their impact in France and abroad: the Éléments de géométrie (1794) of Adrien Marie Legendre and the Éléments de géométrie à l'usage de l'École Centrale des quatre nations (1799) written by Sylvestre-François Lacroix (E. Barbin & Menghini, 2014, sect. 4.1 & 4.2). They influenced teacher education and the teaching of geometry in secondary schools, with multiple editions throughout the century. The writing of each of these two books was driven by different objectives.
Adrien Marie Legendre acknowledged writing his Elements following the « method » of Euclid and Archimedes. He wrote in his preface: « in trying to equal or even surpass my models of accuracy, I also wanted to spare the reader as much trouble as I could, and I made my efforts to give the demonstrations all the clarity and brevity that the subject matter entails » (Legendre, 1794, p. vj-viij). He did not hesitate to use Algebra when relevant, because « it would be childish to always use a laborious method when it can be replaced by a much simpler and safer one. » 10 (ibid.). Legendre's Elements are of a theoretical nature. This is illustrated by his refusal to use the fifth of Euclid's postulates, hence his ever renewed search for a new proof of the theorem on the sum of the angles of a triangle.

10 « […] mon but a été de faire des éléments très rigoureux. J'ai suivi d'assez près la méthode des éléments d'Euclide, et celle du livre d'Archimède de Sphaera et Cylindro : mais en tâchant d'égaler ou même de surpasser mes modèles d'exactitude, j'ai voulu aussi épargner la peine du lecteur autant qu'il m'a été possible, et j'ai fait mes efforts pour donner aux démonstrations toute la clarté et la brièveté que le sujet comporte. Je suppose que le lecteur ait connaissance de la théorie des proportions, qu'on trouve expliquée dans les traités ordinaires d'arithmétique ou d'algèbre ; je lui suppose même la connaissance des premières règles de l'algèbre […] Pour nous, qui avons cet instrument de plus qu'eux, nous aurions tort de n'en pas faire usage s'il en peut résulter une plus grande efficacité. […] et il serait puéril d'employer toujours une méthode laborieuse tandis qu'on peut lui en substituer une beaucoup plus simple et aussi sûre. » (Legendre, 1794, p. vj-viij)
Sylvestre-François Lacroix inherits from Condillac, and from his apprenticeship as an assistant of Gaspard Monge. The long preface to his book - a « preliminary discourse » - advocates a priority of analysis over synthesis, of understanding over convincing, and argues for the educational value of Geometry, « [which] is perhaps, of all the parts of mathematics, the one that one should learn first; it seems to me very likely to interest children, as long as it is presented to them mainly in relation to its applications, either on paper or in the field. » (1799/1804 p. xxix) 11 However, while its structure is different, the body of the book is written according to the traditional Euclidean style.

11 « La géométrie est peut-être, de toutes les parties des mathématiques, celle que l'on doit apprendre la première ; elle me parait très propre à intéresser les enfants, pourvu qu'on le leur présente principalement par rapport à ses applications, soit sur le papier, soit sur le terrain. » (1799/1804 p. xxix)
Both books fix the geometric terms at the beginning of the text, following the Euclidean tradition. Then, they add second-order vocabulary defining the terms axiom, theorem, corollary and problem. With respect to the Greek tradition, this is an innovation 12.

12 Following Reviel Netz (1999, p. 98), Greek mathematicians did not define second-order terms, the metalanguage taking its terms from natural language. This innovation is not a creation of the mathematicians of the 18th century; second-order terms were defined in Geometry texts of the 17th century, maybe before.
The Elements of Legendre (1794) include among the second-order terms a definition of theorem and demonstration in the same sentence: « Theorem is a truth that becomes evident by means of a reasoning called demonstration. » (ibid. p. 4). The text is organised as a sequence of books, themselves made of a sequence of numbered Propositions. For each Proposition, a subtitle indicates its type, Theorem or Problem; then comes the text of the theorem or of the problem in italics, immediately followed by the text of its proof or its solution in roman characters. An important part of Legendre's Elements are the notes on some of the demonstrations.
The Elements of Lacroix (1798) define the mathematical terms and the second-order terms. He does not define the word demonstration, but indicates that a theorem is a statement which must be demonstrated. The structure of a theorem is made explicit, indicating that it has two parts, a hypothesis and a conclusion, and warning the reader that their roles cannot in general be exchanged (ibid. p. lxxxviij). Then the presentation of the text deviates from the Euclidean model. It is organised in two parts, themselves cut into sections with subsections entitled according to their theme. Within a section comes a sequence of numbered subsections whose subdivisions are labelled, first indicating the nature of the subsection (theorem, problem), then the nature of the related discourse (respectively demonstration, solution).
Although the terms demonstration and solution are not defined, they are clearly distinguished, hence explicitly drawing the reader's attention to their different functions. Other subsections are corollaries and remarks (or scholia); the former has the structure of a remark with no specific subdivisions, although it contains two different parts, the statement of the corollary and its justification.
Adrien Marie Legendre wrote his Elements during the Terror of the French Revolution. He was not teaching at the time and took the opportunity of this tragic break to revisit Euclid's Elements. It is the work of a mathematician, with a mathematical agenda 13. Nevertheless, his writing was driven by a concern for simplicity (Barbin, On the argument of simplicity in Elements and schoolbooks of Geometry), with an educated public in mind 14. An important aspect of the book are the Notes, which discuss and analyse certain proofs 15 or conceptual difficulties 16. As a textbook, it is remarkable that the Elements focused as much on Geometry as on proofs, which in themselves have to be understood and learned.

13 The report to the Conseil des Cinq-cents on the elementary books submitted to the competition opened by the law of 9 Pluviôse, Year II (Lakanal, Rapport fait au Conseil des Cinq-cents, par Lakanal, un de ses membres, sur les livres élémentaires présentés au concours ouvert par la loi du 9 pluviôse, an II : séance du 14 brumaire, an IV), cites the works whose manuscripts were presented and retained, giving the main reasons for the choices. For mathematics, it is noted that works that are « not very rigorous, and not very suitable for accustoming children's minds to exact reasoning » were excluded. The list of works chosen is followed by a note recommending Legendre, which, among the printed books, "must be placed in first place" because "his reputation is not disputed, even by envy". This is the only reason; in my opinion Legendre proposed his Elements without considering the very purpose of the competition.

14 See (Legendre, 1794, preface)
The style and organisation of the text of Lacroix's Elements share the same characteristics, which put the issue of proving to the fore. The content is organised in relation to the need for rigorous proofs without unnecessary complexity. Lacroix's Elements are a textbook written by a mathematician for an advanced level of education at the time. Aimed at teaching Geometry, it includes an unusually lengthy preface dedicated to method in mathematics, emphasizing and discussing the logic and rigour of mathematical discourse. It makes Geometry as much as proof the object of learning, as witnessed by this excerpt from the Preliminary Discourse: « I would add that one must not neglect to present in the geometric demonstrations, an example of the various forms of reasoning, to show how the rules of Descartes and Pascal are observed, and how the certainty of Geometry results from the precise determination of the objects it considers, and therefore each one which can only be envisaged under a very limited number of faces, lends itself to complete enumerations, which leave no doubt as to the result of the reasoning. Elements of Geometry treated in this way would in some way become excellent elements of logic, and would perhaps be the only ones that should be studied ». (Lacroix, 1799, p. xxviij).
The criticisms of Euclid's Elements, from a mathematical point of view and from the point of view of their usefulness (i.e. Geometry versus Practical geometry), have accompanied their dissemination from the outset. What changed at the end of the 18th century was the social and political context of the teaching of mathematics. Until then, Geometry was taught to a privileged class of people, mostly adults. That changed in France after the 1789 Revolution. The teaching of mathematics became part of a national educational policy. This was the case as well in most European countries (E. Barbin & Menghini, 2014, p. 475; Schubring, 2015, p. 242-244).
The need for better educated citizens and workers led nations to organize public educational systems, to develop primary and secondary education, and to establish institutions to train teachers: the "normal schools", as they were called after the name of their German precursors of the 18th century. In France, the first one, known as the École Normale de l'an III, was created in 1795, for four months, with the assignment of teaching the art of teaching to educators whose mission was afterwards to create Normal Schools in the townships 17. Dedicated to primary education, which had a political priority, these schools trained educators to teach practical geometry. The structuring of education beyond primary education, during the first third of the 19th century, distinguished different teachings of mathematics depending on the political views on the future of students. As Hélène Gispert commented in her "Mathematics Education in France": "to each social class 'its' mathematics: formation of the mind versus training for practice" (ibid. section 2.1). The need to legislate on educational contents elicited the epistemological and educational rupture between Geometry and Practical geometry. The theoretical nature of the former and the emphasis on the role of proof - as the expression of deductive reasoning - is without doubt the source of this rupture. The resources of the educational system, which limited the duration of studies and were subject to society's priorities, required making a choice.

17 (« 10. 30 octobre 1794 (9 brumaire an III). Décret relatif à l'établissement des écoles normales », 1992)
The massive development of basic education raised the need for elementary textbooks, hence the search for an efficient didactic transposition that considers the students and privileges understanding over convincing (Barbin, 2021, p. 106-107). But the notion of elementariness differs depending on whether one considers primary 18 or secondary education. The former targeted basic literacy and knowledge of practical value for citizens and for an industry which was rapidly growing 19; the latter targeted the acquisition of the foundations by students meant to enter higher education. We must bear in mind that only a small number of students, mostly boys 20, entered secondary education and were exposed to the teaching of Geometry. However, this teaching had little room compared to the study of Latin and the humanities (Gispert, 2002, p. 4). I focus for the rest of this subsection on secondary education.

18 Compulsory school lasted until the age of 12 at that time (Gispert, 2002, p. 6).

19 The geometry taught in primary schools favoured a concrete approach and know-how by mobilising drawing and manual work (D'Enfert, 2003, p. 7).

20 In France, during almost all the 19th century, « the lycée "was reserved for 3 to 4% of an age group and for young people only, a paid education, of culture, for which mathematics was disqualified, relegated to the final classes as an element of specialisation. » (Gispert, 2002, p. 4). Most girls were left out of secondary education; when they got access at the end of the century, it was with a special "female" course of study which could be qualified as "second-rate education" (ibid.).
The writing of Geometry textbooks was driven by arguments of simplicity 21, with a focus which varied from author to author, addressing the ordering of the theorems (logical versus natural), or the nature of geometrical objects (simple and elementary objects versus compound objects), or the nature of the first ideas (axioms, definitions), or eventually principles and proofs (E. Barbin, 2007, p. 226-236). In this context Legendre's Elements and Lacroix's Elements deserve special attention, because they were largely used and disseminated until the end of the 19th century. Lacroix's Elements, thanks to the centralized ruling of the educational system, took a monopolistic position in France, while Legendre's was the most disseminated internationally, being in some countries seen as a true rival of Euclid (Schubring, La diffusion internationale de la géométrie de Legendre : différentes visions des mathématiques). For instance, Nathalie Sinclair points to "an invasion of French mathematics: The geometry textbook of Adrien-Marie Legendre - and textbooks were the defining curriculum then - began taking the place of Euclid at the American universities, and the influence of the British waned." (Sinclair, 2006, p. 17).

21 The word simplicity translates here the French élémentation - which is of the same family as the word élément - introduced by Evelyne Barbin (L'écriture de manuels de géométrie pour les Écoles de la Révolution : Ordre des connaissances ou « élémentation »). It refers to « the ordering of a science, here elementary geometry, which seems to be the most appropriate for its teaching. The term "elements" is present in the title of the oldest mathematical work that has come down to us, that of Euclid dated from the 3rd century BC. » (ibid. p. 99).
With very different arguments developed in their prefaces, the Legendre and Lacroix textbooks evidence the emergence of mathematical proof - demonstration in the text - as an essential part of the learning of Geometry in secondary education, a level of education mostly attended by students aiming at long studies and entrance to universities.
The tension between the utilitarian and the theoretical nature of the curriculum at this level was not too high; the teaching of geometry had a theoretical coloration. More important was the tension between proof that explains and proof that convinces, to use the terms of contemporary discussions (Hanna, Proof, Explanation and Exploration: An Overview), or, in the words of the time, the tension between analysis and synthesis. The « Preliminary discourse » of Sylvestre-François Lacroix was dedicated to this issue. This would be one of the issues to be addressed in the 20th century with the democratization of secondary education.
Patricio Herbst calls the Era of Text 22 that period in which: "The study of geometry was done through reading and reproducing a text; such work would train the reasoning faculties of students. But, the texts do not hint at the existence of official mechanisms to verify or steer the evolution of students' reasoning." (P. G. Herbst, 2002b, p. 288 ff.). This discrepancy between intentions and means received particular attention in the USA in the last decade of the 19th century. In the USA, education has never been driven by a single state institution, but by many local agencies with a wide range of organisational, pedagogical and epistemological views. This created problems for all disciplines in recruiting students at college level, which the National Education Association addressed by creating the Committee of Ten in 1892. The committee's role was to help school districts and private schools make changes by providing arguments to support decisions based on what universities would expect. In mathematics (P. G. Herbst, 2002b, sect. 2), the diversity of approaches highlighted the tension between educating the mind and transmitting knowledge.
The Mathematics Conference, convened by the Committee of Ten in 1893, reached a consensus that the education of the mind of secondary school students should take priority, and that Geometry should have a role in the development of reasoning skills. " [It] recommended changes to the geometry curriculum to accommodate the tension between training mental faculties [i.e. justification] and imparting culturally valued geometric knowledge" (P. G. Herbst, 2002b, p. 295). How to teach students how to construct proofs in mathematics became an explicit question. The pioneering work of George Albert Wentworth had provided an answer by proposing a norm for layout in which "each distinct assertion in the demonstration, and each particular direction in the constructions of the figure, begins a new line; and in no case is it necessary to turn the page in reading a demonstration" (Wentworth, 1877, p. iv). The preface to the third edition 23 includes a section "For the teacher" with, among other recommendations: "The teacher is likewise advised to give frequent written examinations. These should not be too difficult, and sufficient time be allowed for accurately constructing the figures, for choosing the best language and for determining the best arrangement." (ibid. p.vi).
This was a precursor to the two-column form that dominated geometry teaching in the USA in the 20th century.
However, we can note that at the end of the 19th century proof was a named but implicit content to be taught, while teaching Geometry remained the explicit agenda.
The first part of the 20th century, proof and the formation of the scientific mind
The rapid development of the industrial economy and of manufacturing engineering in the early 20th century highlighted the need to improve the mathematical literacy and skills of the workforce at all levels. This concern was international (Nabonnand, Les réformes de l'enseignement des mathématiques au début du XXe siècle. Une dynamique à l'échelle internationale). The turn of the 20th century is also the time of the international organization of mathematicians, with the establishment of "The International Commission on the Teaching of Mathematics" (IMUK) in 1908 24 and the creation of the International Mathematical Union (IMU) in 1920. The creation of IMUK demonstrates the international concern for the development of the teaching of mathematics. One of the first decisions of this Commission is « to survey and publish a general report on current trends in mathematics education in the various countries » (H. F. Fehr, 1908, p. 8). It requires the survey to consider applied as well as pure mathematics, and recommends that it focus on the principles which should inspire the teacher, but it leaves aside curricula, which are the responsibility of nations.
The question of rigour received special attention. A report on this issue was presented to the IMUK delegates. The rapporteur, Guido Castelnuovo, proposed to limit the discussion to the upper secondary schools and to the teaching of geometry. The topic was the extent to which a systematic presentation of mathematics can be considered. A classification of the degrees of rigour was proposed:
« A) Entirely logical method - All axioms are stated; their independence is discussed; further development is rigorously logical. No appeal is made to intuition; primitive notions (point, etc.) are subject only to the condition of satisfying the axioms.
B) Empirical foundations, logical development - From the observation of real space, we deduce the primitive propositions on which the following logical development is based. Three subgroups can be distinguished: BA all axioms are stated, BB some of the axioms are stated, BC only those axioms which are not absolutely self-evident are stated.
C) Intuitive considerations alternate with the deductive method - Evidence is used whenever appropriate, without it being clear what is admitted and what is demonstrated.
D) Intuitive-experimental method - Theorems are presented as facts that are intuitive or can be demonstrated by experience, without the logical connection between these facts being apparent. » (H. Fehr, 1911, p. 462)
It appears that no country chose A or D. Guido Castelnuovo noticed that the Latin nations and the UK prefer B, and that the German nations are closer to C. His comments suggested an influence of culture and possibly of the economic context (esp. « industrialism »).
The exchanges underline the importance of a non-excessive rigour considering the average « intelligence » of the students: rigour must be compatible with teaching, and if learning geometry favours the development of logical reasoning, it is not necessary to go as far as setting up an axiomatic system (ibid. pp. 465-466). The reference to the psychology of the young is frequent in the justification of the choices made by nations. It was proposed to discuss in the future the organisation of the teaching of geometry and to study its psychological grounds.
From then on, modern mathematical education became a national stake in most nations, some henceforth searching for curricula balancing the applicative and the theoretical value of the teaching of Geometry (e.g. González & Herbst, Competing Arguments for the Geometry Course: Why Were American High School Students Supposed to Study Geometry in the Twentieth Century?).
The classification of the arguments for the Geometry course by Gloriana González and Patricio Herbst facilitates distinguishing and understanding the different rationales that shape the didactic transposition of mathematical proof (ibid. p. 13):
1. a formal argument that defines the study of geometry as a case of learning logical reasoning through the practice and application of deduction;
2. a utilitarian argument that geometry would provide tools for future work or non-mathematical studies;
3. a mathematical argument that justifies the study of geometry as an opportunity to experience the work of doing mathematics;
4. an intuitive argument that aligns the geometry course with opportunities to learn a language that would allow students to model the world.
These arguments are not mutually exclusive; several of them could contribute to the didactic transposition of geometry, but with different weights. With regard to mathematical proof, the "formal argument" and the "mathematical argument" support the raison d'être of its teaching. The other two arguments are less decisive, because this teaching cannot claim to provide a model of proving for all areas of knowledge, whether scientific or practical as the fourth argument suggests. It happens that proof is often seen as an obstacle to the learning of geometry for its lack of practical value. On the contrary, the argument of the mathematicians of the beginning of the century is that mathematical proof is constitutive of Geometry as a paragon of mathematics as a science.
At the turn of the 20th century, it was clear that the learning of deductive reasoning was an important educational objective. Mathematics gained more importance than the Humanities, which had until then been the educational priority. It was meant to play a privileged role in the formation of the scientific mind, as the French 1946 General instructions called it. Geometry was the chosen domain: « It is important to make the difference felt very early on between the certainty given by the geometric method and that resulting from the experimental method: it is on this condition that the need for demonstration will develop » (French Instructions of 2 September 1925). However, achieving this objective in the early grades proved to be a challenge:
« But is it possible to ensure understanding of mathematics among young pupils, especially in the sixth and fifth grades? The question is still being discussed and the instructions that followed the 1902 reform went so far as to prohibit theoretical explanations on certain points and in certain classes. One had to be content to have the rules learned and applied, for a well-fixed mechanism. » (French Instructions, 1923).
Decision makers searched for solutions by introducing pedagogical recommendations, as for example the following:
« As the hypotheses or data are recorded on the figure itself, by the means most likely to ensure their immediate vision and scope, the teacher would slowly deduce, with the help of the class if possible, the hypotheses or data; he or she would summarise the results acquired at each moment and have the pupils formulate them themselves. The pupils would no longer be confused by the assembly of terms accumulated in synthetic statements whose formation would be partly their own work. They would stop more at the most important ones: the theorems would take shape at the right moment; they would be fixed in the memory by the usual procedures. » (French Instruction of September 2, 1925).
The limitations of such an approach were anticipated: « faith in the correctness of the rule and confidence in the authority of the teacher contribute to delaying the awakening of the critical sense» (ibid.).
The search for the most efficient way of teaching mathematical proof was constant throughout this first part of the 20th century. The driving idea was to engage students in problem solving and to manage a seamless transition from the manipulation of objects to reasoning on abstract representations. We may say that decision makers and mathematicians understood the rupture between practical thinking and theoretical thinking, but looked for a way to bypass it instead of facing it. Here is further evidence of this approach:
« Guided by the teacher and first carrying out concrete operations applied to given objects, the child will acquire the abstract notion of an operation of a well-defined nature but concerning an indeterminate element. Then he will become capable of imagining that he is applying another operation to the result of the first one without having carried it out. Finally, conceiving the continuation of the mechanisms of the operations thus defined, he will be able to predict certain properties of the results: he will have carried out his first demonstration. » (French decree of 20 July 1960).
During this first part of the 20th century, the teaching of Geometry included exercises and problems, providing students with opportunities to craft proofs, either to achieve simple deductive tasks of one or two steps, or more complex ones requiring them to engage in problem solving; however, these more complex problems were often cut into parts making them easier to solve. In the end, although learning was supported by more significant activities, the basic approach consisted of observing, reading and replicating proofs.
Patricio Herbst identifies the results of the Committee of Ten as a turning point between pedagogical approaches following which students had the opportunity to use their reasoning on corollaries of theorems or on theorems not included in the main text of the course, namely the Era of Originals (P. G. Herbst, 2002b, sect. 4), and approaches which proposed activities aimed at learning what a proof is and at practising proving, namely the Era of Exercise (ibid. sect. 6). This evolution was accompanied by the development of a distinctive didactic tool, of which the layout standard proposed by Wentworth was a precursor: the two-column proof. The pattern of this layout is made of two lines forming a T. Above the horizontal line is written the statement to be proved; below that line, separated by the vertical line, two columns display the written proof, with the sequence of statements on the left-hand side and the warrant (the reason) of each of them on the right-hand side. It is commonly acknowledged that this distinctive layout was first used in the second edition of Schultze and Sevenoak's Geometry textbook in 1913 25. It was meant to be a tool for the students as well as for the teacher:
"[This arrangement in two columns] seems to emphasize more strongly the necessity of giving a reason for each statement made, and it saves time when the teacher is inspecting and correcting written work." (Shibli (1932) comment quoted by P. G. Herbst, 2002b, p. 297) Two-column proofs brought stability to the Geometry course in the USA, but over emphasizing a formal display of the logical structure of proofs, it tended to hide its role in knowledge construction. (P. G. Herbst, 2002b, sect. 7).
During this first half of the 20th century, proof acquired the explicit status of a mathematical tool to be taught and learned, but its learning was induced by the learning of Geometry, which had an exclusive focus in curricula. In Patricio Herbst's formula: "To know geometry and to be able to prove the theorems of geometry were indistinguishable." (P. G. Herbst, 2002b, p. 289).
The second part of the 20th century, proof liberated from Geometry
In the middle of the 20 th century, mathematics was present in all domains from natural to human and social sciences. Mathematical competences imposed themselves for their key role in the development of modern industry and economic sectors (D'Enfert & Gispert, 2011, p. 30;Gispert, 2002, p. 9). Mathematics was emerging as a universal language for accessing knowledge. Again, countries expressed the need for people better educated in mathematics.
On the academic side, the increasing distance between school mathematics and mathematics as a science and the intellectual influence of the French mathematicians' group Bourbaki and of its Elements, not to mention the Sputnik crisis in the US, led to a definitive rupture with the text of Geometry inherited from Euclid.
The Royaumont seminar on school mathematics in December 1959 26 gave the direction for the future. The Euclidean text was definitively considered obsolete from both a mathematical and a pedagogical perspective, but this left the mathematics education community with more problems than solutions. Dieudonné's exclamation « à bas Euclide » ("down with Euclid") attracted the attention of the general public, but did not account for the discussions on what the desired evolution should look like. There was a large consensus on the final goal of geometry instruction, viz. that after the early stages of intuitive learning, there should come « the breaking of the bridge with reality - that is, the development of an abstract theory. » (OECD 27 quoted by (Bock & Vanpaemel, 2015, p. 159)). The OEEC 28 official report was published in 1961, under the title "New Thinking in School Mathematics" (popularised as "New Math"), and "Mathématiques nouvelles" for the French version (ibid. p. 163).

26 "The origin of the Seminar must perhaps be placed as far back as the 1952 meeting of CIEAEM in La Rochette par Melun on "mathematical and mental structures", which had brought together Dieudonné, Choquet and Servais in dialogue with psychologist Jean Piaget and philosopher Ferdinand Gonseth. In several countries the reform of school mathematics was well underway by 1959, with a large number of specialist meetings on a regular basis" (Bock & Vanpaemel, 2015, p. 165-166).

27 Organisation for Economic Co-operation and Development
The movement spurred on by some mathematicians looked for an epistemological break. It led to two guiding principles for the design of new curricula (D'Enfert & Gispert, 2011, p. 35):
1. mathematics is a deductive science, not an experimental science.
2. mathematics forms a theory 29 which must bring together under the same structure knowledge that has hitherto been presented in a scattered manner.
Geometry had to evolve. This evolution was radical in the US, where Geometry was relegated to teacher training and became optional in schools (Sinclair, 2006, p. 73-74). In other places it was deeply transformed. In France it maintained its place in high school curricula but with a new face, in which the study of geometrical objects gave way to that of structures, much attention being paid to not confusing the concrete world with its mathematical model, by using different terminologies. The main influences leading to these evolutions were of different origins in the different countries. Dirk De Bock and Geert Vanpaemel, analysing the OEEC seminar at Royaumont, noticed that "In France the demand for modernization came from the universities and was aimed at introducing modern 'Bourbaki mathematics' in secondary schools; in the United States the renewal of mathematics education was urged by industry and politics, and aimed at the modernization of teaching methods" (Bock & Vanpaemel, 2015, p. 167). Moreover, the participants in this seminar, which was decisive for the mid-20th-century evolution, were not all aligned with Dieudonné's position, often quoted as the slogan of Modern mathematics 30. A much more balanced approach to the reform was being proposed (ibid. 152). Dieudonné himself outlined a curriculum "quite concrete, roughly starting from 'experimental' mathematics, concentrating on techniques and practical work, to a rigorous, axiomatic treatment of two- and three-dimensional space." (ibid. 157). This found a direct translation in the French official documentation of curricula: « Success will be achieved when the pupil, having become aware of the difference between an experimental verification, even if it is repeated a hundred times, and a demonstration, comes not to be satisfied with the first and to demand the second. » (Decree of 20 July 1960). However, the transition from so-called experimental verification to mathematical proof (i.e. demonstration) was radical; classically, the rupture happened at the 8th grade. As Gert Schubring emphasizes in "From the few to the many: On the emergence of mathematics for all", France engaged in a reform with no consideration for the needs of the different students' orientations towards vocational or university studies: "[For the 8th grade and the 9th grade], common to all the diverse curricular directions, the Commission had planned to teach the same contents, and according to the same spirit and methodology - conceiving this exclusively from the logic of a curriculum for those who would continue to university studies." (ibid. p. 250).
Axiomatics, which I consider the true heritage of Euclid, is of paramount importance in the reform. Proof is its backbone, axioms and definitions are its ground. Geometry is the privileged terrain for becoming acculturated to this new epistemology: « born from experience, [it] should appear to students as a true mathematical theory » at the end of the 8th year (French Decree of 22 July 1971). However, this does not apply to geometry only but to all mathematics. The learning of mathematics as a discipline should train students in deductive thinking, encourage them to be rigorous in logic, teach them to build a chain of deductions, and develop - in a constructive way - their critical mind 31.
The theoretical orientation of the New Math was criticised internationally, by both mathematics educators and policy makers, with - at least - two arguments: the break between mathematics and its applications, including its use by other scientific disciplines, and its irrelevance for a large population of students. This was well expressed in this judgement by the French pioneer of applied mathematics, Jean Kuntzmann (1976, p. 157):
« One could regret that it is not possible to lead the students to the deductive phase. In reality, nothing is lost because the philosophy of this phase: autonomy of mathematics organizing itself in view of its own objectives, is perfectly useless to the average Frenchman 32 (even of the year 2000). I affirm very clearly that for the average Frenchman, therefore for the teaching of the first cycle, the suitable philosophy is that of the conceptual phase. That is to say:
- Duality situation-model, fundamental for the uses of mathematics;
- Training in logical reasoning but without going as far as organized deductive theory (one will meet in everyday life occasions to reason, but few constituted deductive theories). »
In the USA, the New Math movement declined in the early 1970s under the pressure of the public concern reflected by the catch phrase "Back to basics", a decrease in students' performances and the criticisms of some leading mathematicians (Sinclair, 2006, p. 108 sqq). In fact, it had had strong opponents since the very beginning (e.g. Goodstein's review of New Thinking in School Mathematics: Synopses for Modern Secondary School Mathematics). Moreover, the rapid development of computer-based technologies and the growing evidence that computers would change the role of and the demands on mathematics called for questioning curricula. The National Advisory Committee on Mathematics Education (NACOME) published in 1975 recommendations for an evolution of compulsory school curricula which did not reject all the contributions of the New Math but redefined the "basics" (Hill, Issues from the NACOME Report), putting to the fore applications of mathematics, statistics and probability, and the use of calculators and computers. Remarkably, Geometry was not part of the basic mathematical skills listed by the NACOME report, but the work on reforms following the recommendations included it (Sinclair, 2006, p. 111). Proof and logical reasoning were downplayed in favour of a "wider conception of geometry" giving room to visualisation and intuition (ibid. p. 113); a move echoing the call of leading opponents for abandoning the "chief innovation" of the New Math: the logical approach to the teaching of mathematics (Kline, 1976, p. 451-453). By the end of the century, the NCTM Principles and Standards for School Mathematics (PSSM) made a synthesis of both positions:
"Geometry has long been regarded as the place in high school where students learn to prove geometric theorems. The Geometry Standard takes a broader view of the power of geometry by calling on students to analyse characteristics of geometric shapes and make mathematical arguments about the geometric relationship, as well as to use visualization, spatial reasoning, and geometric modelling to solve problems. Geometry is a natural area of mathematics for the development of students' reasoning and justification skills." (NCTM, 2000, p. 3)
The New Math movement faded away in all countries by the early 1980s. This end was a consequence of the constant criticism of curricula which, borrowing the words of José Manuel Matos 33, "render mathematics hermetic" for students as well as for their parents and most stakeholders, and a consequence of the lack of consensus among decision makers, mathematics educators and teachers as well as mathematicians themselves. It happens that the gap between the reform and the reality of the classrooms was such that it was not surprising to see the New Math cohabiting with the preceding classical teaching, or even being ignored, in particular where the educational system left enough autonomy to schools and teachers. In France, where mathematicians had a particular responsibility in launching the movement, the tension between the protagonists led to the creation of a Union of the users of mathematics opposing the reform 34.
The following reforms did not return to the old curricula. The conception of mathematics, its scholastic organisation, the content of its various components and the focus on mathematical activity rather than on the text of mathematics moved significantly. The debate initiated in the USA at the beginning of the 1970s lasted ten years before the release of consensual recommendations in the form of An Agenda for Action, endorsed by the National Council of Teachers of Mathematics in 1980:
"An Agenda for Action (NCTM 1980), recommended that problem solving should be the focus of school mathematics, that basic skills should be defined more broadly than simple arithmetic and algebraic calculation, that calculators and computers should be used at all grade levels, and that more mathematics should be required of students." (Fey and Graeber 2003, p. 553 quoted by (Kilpatrick, 2014, p. 331))
This NCTM Agenda led to a questioning of the teaching of proof which, through a metacognitive shift 35, had in practice evolved into the teaching of the two-column proof technique. It eventually motivated the recommendation in the 1989 Standards to increase attention to "deductive arguments expressed orally and in sentence or paragraph form" (p. 126) and to decrease attention to "two-column proofs" (p. 127) (quoted by P. Herbst, On proof, the logic of practice of geometry teaching and the two-column proof format).

35 (Brousseau, 1997, p. 26 ff.)
The creation of the International Congress on Mathematical Education in 1969 and of the international group for the Psychology of Mathematics Education in 1976 favoured the dissemination of the various positions and ideas, allowing international debates and exchanges between mathematics educators and teachers at an international level. Thus, the post-New Math orientation of the curricula was rather similar in most of the countries. For example, in South Asia:
"The Math Reforms lasted for 12 years, ending in the early 1980s, when it was realized they did not work and had to be stopped. Although many new topics introduced during the Math Reforms stayed on (e.g., Venn diagrams and statistics), the formal approach in teaching mathematics was replaced by the so-called problem-solving approach. In the years that followed, change in content was minor. The major change was in the teaching approach used in the classroom." (Lee, 2014, p. 388-my emphasis) Similar lines have been written in several other educational contexts 36 .
In France - which is an interesting case given the constraints of a strong and centralized educational administration and the influential position of professional mathematicians - the 1980s post-New Math era prepared a rupture in the mathematics education policies of the 21st century:
"Moreover, challenged since the early 1970s, including by its supporters who believed the reform did not correspond to their recommendations, the "modern mathematics" reform was abandoned in the early 1980s in favour of a teaching method that, envisioning mathematics in the diversity of its applications, placed the accent on problem solving and favoured "applied" components of the discipline. These two aspects now occupy a central place in mathematics teaching. At the same time, since the early 2000s, there has developed the ambition to make mathematics into a subject that allows students to throw themselves into a true research program, capable of developing their abilities to reason and argue but also to experiment and imagine." (Gispert, 2014, p. 239) But, Geometry, at the end of the 20 th century, had lost its special and somewhat isolated position in mathematics curricula. It is no longer the Geometry of the Euclidean educational tradition, nor is it the Geometry of the New Math. The study of geometrical figure-objects is part of its educational content along with geometrical transformations, with sometimes important differences among national curricula but this orientation is shared.
The emphasis is on making mathematics meaningful and on understanding; mathematical proof loses ground 37 to deductive reasoning, which opens up a broader conception of validation in the learning and teaching of mathematics. What one observes is an epistemological revolution more than a new educational one.

37 This claim may have exceptions, as is the case of Russia, where "Russian schoolchildren of the 1980s-1990s, and even the schoolchildren of today, spend much more time than their Western counterparts on algebraic transformations and proofs." (Karp, 2014, p. 320)
The 21st century, proof for all grades
Since 1995, the "Trends in International Mathematics and Science Study" (TIMSS) provides an instrument to get a picture of the institutional views of mathematics education 38 . Its objective is to study 4 th and 8 th graders' achievements in mathematics and science in all participating countries 39 . Since 2003, the publication of the assessment framework gives a view on teaching and learning which pays attention to including goals considered important in a significant number of countries. It can be seen as a consensual conception of the essential basis of the curriculum, although there are many differences, some of which are substantial. These documents provide a reliable basis for getting an idea of how proof and proving are perceived in the early 21 st century.
The design of the TIMSS assessments distinguishes two types of domains, the content domains and the cognitive domains: "The content domains define the specific mathematics subject matter covered by the assessment, and the cognitive domains define the sets of behaviours expected of students as they engage with the mathematics content." (TIMSS 2003 / O'Connor et al., 2003, p. 9) 40.

40 Similarly, the frameworks of the TIMSS 1995 and TIMSS 1999 assessments included content areas and performance expectations. The 2003 TIMSS assessment framework associates justify and prove; however, "to prove" must not be interpreted as providing a mathematical proof but as providing mathematical arguments. This interpretation is coherent with the need to have a formulation adequate for the 4th grade as well as for the 8th grade. "Proving mathematically" appears explicitly only at the latter grade in most curricula, when it does.
Issues related to validation are addressed in the sub-domain "reasoning" which is presented as follows:
"Reasoning mathematically involves the capacity for logical, systematic thinking. It includes intuitive and inductive reasoning based on patterns and regularities than can be used to arrive at solutions to non-routine problems. Non-routine problems are problems that are very likely to be unfamiliar to students. They make cognitive demands over and above those needed for solution of routine problems, even when the knowledge and skills required for their solution have been learned. Non-routine problems may be purely mathematical or may have real-life setting. Both types of items involve transfer of knowledge and skills to new situations, and interactions among reasoning skills are usually a feature. Most of the other behaviours listed within the reasoning domain are those that may be drawn on in thinking about and solving such problems, by each by itself represents a valuable outcome of mathematics education, with the potential to influence learners' thinking more generally. For example, reasoning involves the ability to observe and make conjectures. It also involves making deductions based on specific assumptions and rules, and justifying results." (ibid. p.26 -my emphasis)
The "cognitive domains" includes "Knowing", "Applying" and "Reasoning". Each of these is characterised by "a list of objectives covered in a majority of the participating countries, at either grade 4 or 8." (ibid. p. 9). In the case of Reasoning, the objectives expressed in behavioural terms are: Analyse, Generalize, Synthetize/Integrate, Justify. The latter was labelled Justify/Prove in 2003, but only Justify 41 remained for the following assessment campaigns. 2003 Justify/Prove "Provide evidence for the validity of an action or the truth of a statement reference to mathematical results or properties; develop mathematical 38 There are two major international assessment campaigns, PISA and TIMSS. The former assesses the learning achievement of 15-yearolds. The second one does it for 4th and 8th graders. I think the latter is more relevant to the issues I am addressing here. 39 They were 64 in 2019. 40 Similarly, the frameworks of the TIMSS 1995 and of the TIMSS 1999 assessments included content areas and performance expectations. The 2003 TIMSS assessment framework associates justify and prove, however "to prove" must not be interpreted as providing a mathematical proof but as providing mathematical arguments. This interpretation is coherent with the need to have a formulation adequate to the 4 th grade as well as for the 8 th grade. "Proving mathematically" appears explicitly only at the latter grade in most curricula, when it does.
The reference to mathematical proof being abandoned, the keyword which is chosen is "justification" with specific requirements: "reference to mathematical results or properties" (TIMSS 2007, TIMSS 2008), "reference to known mathematical results or properties" (TIMSS 2011). Then comes back the key expression "mathematical argument" (TIMSS 2015, TIMSS 2019) in a short and allusive statement compared with the preceding formulation.
Moreover, the section of the TIMSS 2003 document, entitled "Communicating mathematically" disappears in the following editions. It said that "Communication is fundamental to each of the other categories of knowing facts and procedures, using concepts, solving routine problems, and reasoning, and students' communication in and about mathematics should be regarded as assessable in each of these areas." (ibid p. 34).
These TIMSS assessment framework documents reflect on the one hand an institutional vision of a detachment of the teaching of proof from Geometry, and on the other hand an objective to introduce the learning of a proper way to address the question of "truth" in the mathematics classroom. This evolution of TIMSS bears witness to a trend in the curricula of a large number of countries. It strengthens a scholastic epistemological revolution which the following quote clearly exemplifies:
"One hallmark of mathematical understanding is the ability to justify, in a way appropriate to the student's mathematical maturity, why a particular mathematical statement is true or where a mathematical rule comes from." (NGA Center & CCSSO, 2010, p. 4 -my emphasis).
It should be noted that at that time research in mathematics education was reaching academic maturity. The scientific community had acquired the necessary professional tools to consolidate and to establish a science that had been asserting itself since the mid-1970s: international journals and conferences with clear scientific policies and quality control. The ICME conferences and the ICMI studies, under the umbrella of IMU, maintained the links between researchers in mathematics education and mathematicians. The creation of the ICMI awards in 2000, in keeping with the tradition of the mathematical community, is further evidence of this. All these are indicators that researchers in mathematics education are now members of the stakeholders' community 42 which contributes to the didactic transposition of the mathematics to be taught and learned.
Researchers in mathematics education have fully embraced the problems of teaching proof, which they claim is essential for the learning of mathematics. The number of articles and conference papers on the learning and teaching of proof and proving in the mathematics classroom has increased impressively since the pioneering work of Alan [START_REF] Bell | A study of pupils' proof-explanations in mathematical situations[END_REF]. The number of working groups and of study groups speaks to the dynamism of the community, as do the edited books contributing to close gaps in research (e.g. [START_REF] Boero | Theorems in school : From history, epistemology and cognition to classroom practice[END_REF][START_REF] Hanna | Proof and proving in mathematics education : The 19th ICMI study[END_REF][START_REF] Reid | Proof in mathematics education: Research, learning and teaching[END_REF].
In her preface to the book "Theorems in School" [START_REF] Boero | Theorems in school : From history, epistemology and cognition to classroom practice[END_REF], Gila Hanna writes in clear words that "[…] proof deserves a prominent place in the curriculum because it continues to be a central feature of mathematics itself, as the preferred method of verification, and because it is a valuable tool for promoting mathematics understanding" (ibid. p. 3). Paolo Boero's idea of this book was born in the context of the 21st PME conference (Pehkonen, 1997, vol. I-p. 179-198) following a research forum which demonstrated "the renewed interest for proof and proving in mathematics education" and that "the reconsideration of the importance of proof in mathematics education was leading to important changes in the orientation for the curricula in different countries all over the world" (Boero, 2007, p. 20).
In 2007, ICMI launched its 19th study on "Proof and proving in mathematics education" [START_REF] Hanna | Proof and proving in mathematics education : The 19th ICMI study[END_REF][START_REF] Barbin | L'écriture de manuels de géométrie pour les Écoles de la Révolution : Ordre des connaissances ou « élémentation[END_REF] whose Discussion document introduces the idea of "developmental proof" (ibid. p. 444):
"The study will consider the role of proof and proving in mathematics education, in part as a precursor for disciplinary proof (in its various forms) as used by mathematicians but mainly in terms of developmental proof, which grows in sophistication as the learner matures towards coherent conceptions. Sometimes the development involves building on the learners' perceptions and actions in order to increase their sophistication. Sometimes it builds on the learners' use of arithmetic or algebraic symbols to calculate and manipulate symbolism in order to deduce consequences. To formulate and communicate these ideas require a simultaneous development of sophistication in action, perception and language. The study's conception of "developmental proof" has three major features:
1. Proof and proving in school curricula have the potential to provide a long-term link with the discipline of proof shared by mathematicians. 2. Proof and proving can provide a way of thinking that deepens mathematical understanding and the broader nature of human reasoning. 3. Proof and proving are at once foundational and complex, and should be gradually developed starting in the early grades."
These features resonate with the arguments for the Geometry course proposed by Gloriana González and Patricio [START_REF] González | Competing Arguments for the Geometry Course: Why Were American High School Students Supposed to Study Geometry in the Twentieth Century?[END_REF] namely the formal, the utilitarian and the mathematical arguments. This categorization can be reused, substituting proof and proving for geometry, without losing its relevance.
The analysis from a developmental proof perspective of contemporary institutional texts that provide teachers with comments and pedagogical indications shows the necessary modulation of this teaching considering both the school levels, societal needs and its possibility in relation to mathematical requirements. Let us take the case of France [START_REF] Balacheff | Penser l'argumentation pour la classe de mathématique[END_REF]: from grades 1 to 3, the students' discourse must be argued and based on observations and research and not on beliefs; from grades 4 to 6, the teaching must contribute to students building the idea of proof and argumentation (e.g. by moving gradually from empirical validations to validations based solely on reasoning and argumentation). In grades 7-9, the challenge is to move from inductive to deductive reasoning, and to put this deductive reasoning into the form of a communicable proof (i.e. a demonstration in the French text) 43 . This is, with variations, what is observed internationally and reflected in TIMSS. The main point of divergence is the point at which acculturation to the socio-mathematical norm of mathematical proof is targeted; in many countries this is left to the upper secondary school level and often, but not always, to the learning of Geometry (e.g. in the US, Jones & Herbst, 2012, p. 263).
Although the institutions stress that the teaching and learning of proof should not be confined to Geometry, this domain remains an ecological niche for achieving this goal. Writing a "Spotlight on the Standard" for the NCTM journal "Teaching Mathematics in the Middle School", Edward A. [START_REF] Silver | Spotlight on the standards: Improving Mathematics Teaching and Learning: How Can Principles and Standards Help? Mathematics teaching in the Middle School[END_REF] made the remark that "Although many middle school students love to argue (about almost anything!), they need to learn to argue effectively in mathematics. The study of geometry offers many opportunities to gain experience with mathematical argumentation and proof" (ibid. p.23). As it happens, the discussion document of the 19th study reserves a special place for geometry when questioning the research community on the relation between proof and empirical science, "given geometry deals with empirical statements about the surrounding space as well as with a theoretical system about space" (Hanna & de Villiers, 2012, p. 451). The availability of Dynamic geometry microworlds resonates with this questioning, to the point that the designers of the study devote a whole section to it under the title "Dynamic geometry software and transition to proof", whose first question is: "To what extent can explorations within DGS foster a transition to the formal aspects of proof? What kinds of didactic engineering can trigger and enhance such support? What specific actions by students could support this transition?" (ibid. p. 449)
An epistemological rupture in need of an instructional 44 bridge
Coming back over several decades of study of the problems raised by the teaching of proof in the mathematics classroom, I realised that I lacked a comprehensive study of the history of this instructional objective. Research on the history of mathematics education provides us with a good deal of information and analysis, especially in the literature on the history of the teaching of geometry. My objective was to gather this information, adding when needed some evidence from primary sources, and to structure it in order to get a picture of the history of the teaching of proof which could be useful to carry out a study which is still to be done.
To conclude these notes, I will first outline what can be retained from a historical point of view, and then present arguments in favour of searching for a precursor concept of mathematical proof that allows the question of truth to be addressed in the early teaching of mathematics.
Milestones in a long journey in search of a solution for the early learning of proof
From c. 300 BC to the late 18th century, Euclid's Elements stood as the model of the text of "scholarly knowledge" 45 of Geometry, which is the material for the process of the didactic transposition. At the turn of the 19th century, under the growing criticisms of the Elements and a development of the progressive ideas on education, a scientific work started, pursuing the objective of writing a rigorous presentation of Geometry but with the associated intention of facilitating its understanding for a reader eager to learn it. Yet, it is a transpositive work insofar as it "improves the organization of knowledge and makes it more understandable, structured and accurate, to the point that the knowledge originally transposed is itself bettered." (Chevallard & Bosch, 2014, p. MS 2).
43 It is interesting to notice that the institutional discourse avoids the reference to abductive reasoning -Polya would write plausible reasoning. Possibly in fear of opening room to severe logical errors (often induced by natural reasoning), although abductive reasoning has a heuristic value and is the source of creative ideas.
44 Following and understanding the remarks of Keith Jones and Patricio Herbst (2012, p. 261-262), I use instructional as an interpretation -not a translation -of the French word didactique although it has a larger denotation; but instructional and didactical objective being tightly related this does not open serious misunderstanding in the context of this Note.
Until the end of the 18th century, Euclid's geometry was taught to a privileged class of society, in particular -borrowing the words of Legendre -to those who were devoted to mathematics; they were mainly adults. Along the 19th century, with the development of national policies, the challenge of teaching Geometry initiated a transposition process which progressively included learning issues with concerns for ever wider segments of the population. The cases of Dechalles, Legendre, Clairaut and Lacroix are significant milestones of this early period of the history of the teaching of Geometry and the way the issue of Proof was discussed and addressed. The former and the latter wrote textbooks with an explicit critical position towards the seminal Elements, while the other two were first writing treatises. But the work of the four of them evidences the awareness of the epistemic complexity of the project:
-the tension between proving and explaining
-the conflict between the abstractness of Geometry as a theoretical construction and its practical value as a tool for numerous human activities.
Proof and proving were a core concern for their role in the understanding of Geometry and in establishing the truth of geometrical statements. Legendre and Lacroix included demonstration among the metamathematical terms whose understanding was necessary for the presentation of their treatises. All discussed the need to prove, the difficulty and the clarity of proofs. However, despite being a named mathematical object, proof was not yet constituted as an object of teaching. Geometry was the topic at stake and the ecological niche for proof to make sense.
It is with the emergence of state-based educational policies that in the mid-19th century the didactic transposition process began in earnest. However, at the very beginning of the century, Sylvestre-François Lacroix was already aware of the epistemic complexity of an educational project for mathematics. His Elements can be considered the first textbook as such: "it is his historical merit to have substantially contributed to the restructuring of a poorly-organized and scattered corpus of mathematical knowledge, guided by educational objectives" (Schubring, 1987, p. 43).
The didactic transposition process of Geometry continued to develop in order to respond to the growing needs of both industry and economy, and natural sciences as well. In this dynamic, mathematical proof remained untouched in its Euclidean norms. Organized as a professional body at the turn of the 20 th century46 , mathematicians did recognize their responsibility towards mathematics education and contributed to the thinking on what mathematics should be as a content to be taught. Their concern was first to ensure that it kept its theoretical nature, not being reduced to a tool to the service of applications. They were aware of the problem of teaching students but they were not prepared to sacrifice their discipline; one of the first issues they considered was that of rigour. In fact, it was a question of deciding the acceptable limits of the didactic transposition of mathematical proof. This didactical process took another dimension when decision-makers published specification of the "knowledge to be taught". This took different forms in different nations depending on their educational organization and policy, but the movement was general.
Three key stages marked the spread of teaching of mathematics through the modern educational systems of the 20 th century and the first decades of the 21 st century. They determined the transposition of proof:
-First stage: The extension of the teaching of Geometry from grades 10-12 to grades 6-9, the latter being part of the compulsory school and hence of the teaching for all children. This required choosing a time for the introduction of mathematical proof. In general, the choice was to do it at grade 8. This was a real challenge and mostly a failure, revealing the impossibility of escaping a rupture between the nature of the mathematics "before" and "after" the will to teach proof was made explicit.
-Second stage: The New Math coup de force made mathematical proof a standard of validation in a unified mathematics. The rapid failure of this radical movement had the effect that, in most of the following curricula, mathematical proof was replaced by deductive reasoning associated with a priority given to problem solving.
-Third stage: The willingness to introduce proof in the teaching at all grades of the compulsory school. This objective could not be reached without renouncing the standards of mathematical proof for teaching at the earlier grades. This comes with a vocabulary now including the words argumentation, justification and proof, but without establishing a clear relation between the terms (and without understanding the consequences of this absence for teaching).
The didactic transposition is a never-ending process as it is tightly dependent on the evolution of the society, its priorities and shared epistemology, as well as on the evolution of mathematics itself and of the progress of knowledge on its teaching and learning.
How to answer the question of truth before the availability of mathematical proof?
Until the beginning of the 20th century, the idea of preparing the transition to mathematical proof preceding its introduction in the curricula was simply not considered. As it were, the difficulty of learning mathematical proof was recognized but viewed as the cost to pay for engaging in the learning of mathematics as a discipline. The failure of too many students in the context of the democratisation of mathematics education called for a response more effective than the one given when the psychology of development led one to think that the transition could by itself be made possible by the children's access to the formal operational stage 47 . It calls for an evolution of the school epistemology necessary to answer the question that best captures the contemporary situation: What should be taught before teaching mathematical proof? Or, better: how to answer the question of truth in the mathematics classroom before having mathematical proof available?
The evolution of the didactic transposition is evidence of a pragmatic response from international bodies (e.g. TIMSS) and national educational institutions: teaching must allow the development of an argumentative competence -i.e. reasoned justification -as part of the early learning of mathematics, before the learning of mathematical proof 48 . For its part, research has undertaken work and projects to answer these questions 49 . But there is still a lack of contributions and results that are robust enough to allow curricula and teaching approaches to be designed in a reliable and efficient way. Contemporary research on proof and proving paints a complex picture of the relationship between argumentation and proof. There is debate and perhaps still a lack of consensus, although some points seem to be accepted:
-the structure of the text of an argumentation and of the text of an elementary proof are not radically foreign to one another.
-argumentation and proof have close relationships in the problem-solving process, but the transition from argumentation to proof needs a specific work.
-to get the status of a proof, an argumentation has to go through a social process ensuring its collective acceptance by the students and by the teacher.
This closeness suggests that the question of the relation between argumentation and proof can have a response opening on a didactic transposition of proof in the form of an argumentation in the mathematics classroom acceptable from both a mathematical and a teaching perspective. However, argumentation has no formal status for the professional mathematician although it has a presence in the history of mathematics and in problem-solving processes. Then, with this absence of scientific status in contemporary mathematics: could argumentation in the mathematics classroom be a didactic transposition of mathematical proof adjusted to the exigencies and constraints of teaching and learning at the compulsory school grades?
Contemporary institutional texts, whether international or national, suggest a positive response, but do not share its foundations. What one gets instead, and in the first place what teachers get, is the idea of a seamless transition from arguing to proving in the mathematics classroom. Moreover, proof is present in the text of curricula and in their comments not as an object (i.e. content domain) but as a competence (i.e. cognitive domain). This implies that it cannot be taught directly but must be stimulated and developed in situations that have mathematical and social characteristics to justify and give access to socio-mathematical norms (P. G. [START_REF] Herbst | Establishing a custom of proving in American school geometry: Evolution of the two-column proof in the early twentieth century[END_REF][START_REF] Yackel | Sociomathematical Norms, Argumentation, and Autonomy in Mathematics[END_REF]. Research since the early 1970s has worked out characteristics of situations which engage students in establishing actively the validity of a statement (e.g. Brousseau, 1997, Chapitre 1, section 6) and the way that such situations challenge teachers (e.g. [START_REF] Ball | With an Eye on the Mathematical Horizon : Dilemmas of Teaching Elementary School Mathematics[END_REF]P. G. Herbst, 2002a;[START_REF] Lampert | When the Problem Is Not the Question and the Solution Is Not the Answer : Mathematical Knowing and Teaching[END_REF].
Andreas [START_REF] Stylianides | Proof and Proving in School Mathematics[END_REF] proposed a characterization from which we could start:
"Proof is a mathematical argument, a connected sequence of assertions for or against a mathematical claim, with the following characteristics:
1. it uses statements accepted by the classroom community (set of accepted statements) that are true and available without further justification; 2. it employs forms of reasoning (modes of argumentation) that are valid and known to, or within the conceptual reach of, the classroom community; and 3. it is communicated with forms of expression (modes of argument representation) that are appropriate and known to, or within the conceptual reach of, the classroom community." (ibid. p.291)
This characterization is appropriate, but it applies to any scientific discipline. It is too general, leaving open the main question for mathematics teachers and educators: what would be the specific characteristics to add to account for the case of mathematics?
Let us start from a remark: Mathematics develops on mathematics. This remark expresses the inward-looking epistemology which coins its form of abstraction. This does not contradict a mathematical activity which in many ways resembles the scientific activity, but as Christian [START_REF] Houzel | Histoire des mathématiques et enseignement des mathématiques[END_REF] put it: in mathematics the « already theorised knowledge... plays the role of the experimental instance » 50 . This is the origin of the radical abstractness of mathematics and of the specific nature of proof in this discipline.
The set of accepted statements -criterion 1 of Stylianides' proposal -is more than a set, and it is not a mere repertoire: it is a set organised as a system which constitutes the material and the milieu 51 for the mathematical work. It was the objective of the construction of such a system which ultimately drove the writing of Euclid's Elements at the same time that it introduced a rupture with the sensory world 52 . The organization of the set of statements is the consequence of the fact that any of its elements is related to a subset of the whole by the links a proof establishes.
In the context of the classroom, this structured set of statements is not a proper theory insofar as its evolution is agile, including new admitted elements when necessary, and the modes of argumentation may vary in their nature, having stronger roots in community consensus than in a formalized ground. For this reason, I suggest referring to it as a structured Knowledge base 53 . It will correspond to the first term Theory of the defining triplet of Mathematical Theorem in the sense of Alessandra Mariotti:
"Proof is traditionally considered in itself, as if it were possible to isolate a proof from the statement to which it provides support, and from the theoretical frame within which this support makes sense. When one speaks of proof, all these elements, although not always mentioned, are actually involved at the same time, and it is not possible to grasp the sense of a mathematical proof without linking it to the other two elements: a statement and overall a theory." (2006, p. 183) Moreover, we have to add two more constraints in order for an argumentation to reach the mathematical standard:
-on the one hand, that a common norm of argumentation is accepted and that any statement in the sequence of statements of an argumentation either is backed by an argumentation which meets the same requirements or comes from the knowledge base or,
-on the other hand, that it is ensured that any gap in the argumentation can be filled with an argumentation conforming to the agreed norm.
This means establishing a practice that requires a deliberate transition from a pragmatic conception to a rigorous conception of proving. That is to say, a student shift from the position of a practitioner to the position of a theoretician [START_REF] Balacheff | Beyond a psychological approach of the psychology of mathematics education[END_REF].
In the end, it is very unlikely that we will be able to find a solution for a seamless transition from arguing, in the general sense, to proving mathematically. For this reason, my position is to accept the creation of a didactical object: mathematical argumentation, and to work on its definition so that it provides a ground for building instructional bridges by creating the conditions for a sociomathematical norm to become a precursor of mathematical proof.
This object, mathematical argumentation, cannot be conceived as a transposition of the mathematical proof unless one considers that the "social" function of the latter, within the scientific community, is constitutive of it (Balacheff, preprint). This would be an epistemological as well as a theoretical error: although being the product of a human activity that is certified at the end of a social process, mathematical proof is independent of a particular person or group [START_REF] Delarivière | Mathematical Explanation: A Contextual Approach[END_REF]. This will not be the case for a mathematical argumentation in the classroom. The standardisation of proof in mathematics, in addition to the institutional character of its reference (mathematical knowledge), has required its depersonalisation, its decontextualization and its timelessness. Yet argumentation is intrinsically carried by an agent, individual or collective, and dependent on the circumstances of its production.
50 One could find cognitive consequences of this statement in [START_REF] Tall | Cognitive Development of Proof[END_REF]
51 Milieu is used in the sense of the Theory of didactical situation [START_REF] Brousseau | Theory of didactical situations in mathematics[END_REF]).
52 This is not contradictory with the use of mental experiments in some of the Elements' proofs, and with the recognition that the physical world and the other sciences contribute to the development of mathematics by the importation of certain intuitions, or by raising problems questioning mathematical concepts and models.
The characteristics of mathematical argumentation must not only distinguish it from other types of argumentation used in scientific or non-scientific activities, in order to guarantee the possibility of the transition to the norm of mathematical proof; they must also be operational when it comes to arbitrating the students' proposals and eventually institutionalizing them in order to organize and capitalize on them in the classroom. Mathematical argumentation requires an institutionalisation. The recognition of its mathematical character cannot be reduced to a judgement on its form alone. How, for example, can we arbitrate the case of the generic example that balances the general and the particular, whose equilibrium is found at the end of a contradictory debate seeking an agreement that is as little as possible tainted by compromise?
Finally, proof is both a foundation and an organiser of knowledge. In the course of learning, it contributes to reinforcing knowledge evolution and to providing tools for its organisation. In teaching, it legitimises new knowledge and constitutes a system: knowledge and proof linked together provide the knowledge base with a structure which can work as a precursor to the theoretical ground mathematics needs. The institutionalisation function of proof situations places explicit validation under the arbitration of the teacher who is ultimately the guarantor of its mathematical character. This social dimension, in the sense that scientific functioning depends on a constructed and accepted organisation, is at the heart of the difficulty of teaching proof in mathematics.
Table 1 - Expression of the objective "Justify" in the Assessment framework documents of the TIMSS from 2003 to 2019
This tension is still unsolved. The two conflicting questions "how to cater for the elite" versus "how to cater for the wider group of students for whom mathematics should be grounded in real world problem solving and daily life applications", as expressed by Gert [START_REF] Schubring | From the few to the many: On the emergence of mathematics for all[END_REF] in a critical analysis of the "Mathematics for all" movement, have not yet received proper responses.
Original reading Règles pour la direction de l'esprit(ca 1628/1953, p. 37-118)
Original reading Méditations, objections et réponses(1641/1953, p. 387 sqq.). English translations of the quotes from[START_REF] Cunning | Analysis versus Synthesis[END_REF]
Ceux qui s'intéressent à la géométrie "se plaisaient à exercer un peu leur esprit ; & au contraire, ils se rebutaient, lorsqu'on les accablait de démonstrations, pour ainsi dire inutiles" [Those interested in geometry "took pleasure in exercising their minds a little; and on the contrary, they were put off when they were overwhelmed with demonstrations that were, so to speak, useless"]
"Normal school" is the equivalent of the "College of education" or "Teacher training school" of the contemporary educational systems.
Gaspard Monge is the renowned creator of Descriptive geometry, this course given at the École Normale and École Polytechnique was published in 1799 after notes on his 1795 lectures (his assistant Sylvestre-François Lacroix had published in 1795 an "Essais de géométrie sur les plans et les surfaces courbes -ou "Élémens de géométrie descriptive").
The famous note on the proof of the invariance of the sum of the angles of a triangle (e.g. Note 2)
Patricio Herbst coined this expression for the USA, in my opinion it can be extended to Europe.
In 1954 IMUK changed its name to ICMI (International Commission on Mathematical Instruction) with the mission of "[the] conduct of the activities of IMU, bearing on mathematical and scientific education"[START_REF] Furinghetti | The First Century of the International Commission on Mathematical Instruction[END_REF].
Organisation for European Economic Co-operation
In French, even the name of mathematics changes, losing the mark of the plural to become "La Mathématique", on the model of "la physique" which is a singular word in French (but a plural in English "physics"). This did not last for long.
The section of the seminar report subtitled Sharp Controversy Provoked. "After some discussion, both groups modified their positions on the programme and reached general agreement on a set of proposals which did not remove Euclid entirely from the secondary-school curriculum"(OEEC, 1961, p. 47) (quoted by(Bock & Vanpaemel, 2015, p. 157)).
Excerpt from the French 29 April 1977 circular.
Only a third of the students engage then in the secondary school (D'Enfert & Gispert, 2011, p. 40).
In(Ausejo & Matos, 2014, p. 298)
The noosphere is "the sphere of those who "think" about teaching, an intermediary between the teaching system and society."(Chevallard & Bosch, 2014, p. MS 1)
The community of professional mathematicians emerged as such with journals and societies in the course of the XIX° century. A first milestone is the creation of the French journal "Annales de Mathématiques Pures et Appliquées", founded and edited by J. D. Gergonne. It was published from 1810 to 1831. The first professional mathematical society is the Wiskundig Genootschap, founded in Amsterdam in 1778, but most others were founded in the second half of the nineteenth century(Bartle, 1995, p. 3).
The work of Jean Piaget had a historical and a significant impact on curriculum specification, possibly not only for what Constructivism brought about children learning, but because of the clarity of the Piagetian stages and their apparent simplicity. In practice, stakeholders and decision makers reduced the levels of argumentation to two categories: before and after the formal operational stage. The natural cognitive development was considered the determining factor of the levels of argumentation. The thought of Piaget was somewhat more sophisticated than that (see e.g.[START_REF] Piaget | Remarques sur l'éduction mathématique[END_REF].
i.e. the Euclidean standard of proof, which is still the reference structure of the mathematical discourse in classrooms.
e.g.[START_REF] Bieda | Conceptions and Consequences of Mathematical Argumentation, Justification, and Proof[END_REF] Hanna & de Villiers, 2012, Chapitre 15)
Acknowledgements
I am grateful to Gert Schubring for his careful reading and attention to historical accuracy. I would particularly like to thank Evelyne Barbin, Gila Hanna, Patricio Herbst, Janine Rogalski and Nathalie Sainclair for their comments and suggestions. Of course, the responsibility for the entire paper, and especially for any remaining errors or misinterpretations, rests with me alone.
01287505 | en | [ "spi.meca.ther" ] | 2024/03/04 16:41:24 | 2015 | https://hal.science/hal-01287505/file/Numerical%20study%20of%20the%20wind%20patterns%20inside%20and%20around%20buildings.pdf
Lucie Merlier
Frédéric Kuznik
Gilles Rusaouen
Julien Hans
Numerical study of the wind patterns inside and around buildings
Introduction
A wide diversity of urban fabrics exists around the world, especially in traditional neighborhoods. They are generally linked with typical spatial configurations of buildings. In particular, open, compact and attached forms were developed depending on the local geography, climate and culture. The different levels of porosity that characterize these urban forms involve different very-local micro-climates and internal wind patterns. Indeed, the very-local interactions between mean winds and the shape and layout of urban structures determine to a large extent the flow structures that develop in the urban canopy layer. These aerodynamic phenomena affect in turn, among other things, urban ventilation processes, wind and thermal comfort as well as the climate at the upper scale of the city.
At the spatial scales of the building and street canyon, specific air flow structures develop within courtyards and other urban outdoor spaces linked with constructions. They depend on the orientation of these outdoor spaces in relation to the mean wind incidence, as well as openness, i.e. whether these outdoor spaces are external, partially or totally surrounded by a building or contained inside a building group. This paper presents a numerical study performed to analyze and better understand the aerodynamic processes leading to these flow structures developing around buildings and inside urban blocks depending on their topological features. Only forced convection processes are considered. The effects of the horizontal openness of courtyards and internal open spaces of urban blocks are more specifically examined. Providing information on the wind conditions next to buildings and on ventilation processes, such observations linking urban morphological properties to physical phenomena would support a better understanding of the urban heat island phenomenon on larger urban scales, as well as a more integrated and bio-climatic design of buildings and urban areas on smaller scales.
Computational model
Validation study
The study is based on numerical experiments, which are performed using computational fluid dynamic (CFD) models and the commercial software Ansys Fluent, versions 14.5 and 15 (Fluent inc. 2013). A Reynolds averaged Navier-Stokes (RANS) method is used. The model was preliminarily validated by comparison with high quality reduced-scale experimental data from wind-tunnel tests (CEDVAL, Meteorological Institute of Hamburg 2013) as well as detailed numerical predictions obtained using the lattice Boltzmann Method and a large eddy simulation approach (LBM LES, [START_REF] Obrecht | Towards Aeraulic Simulations at Urban Scale Using the Lattice Boltzmann Method[END_REF]. The accuracy of steady RANS predictions was evaluated in cases of an isolated and a regular array of rectangular obstacles. More specifically, the performance of two turbulence models, namely the realizable k-ε model (Rk-ε) and the Reynolds stress model (RSM), was evaluated. In comparison with k-ε turbulence models, the RSM accounts for anisotropic effects of turbulence on the mean flow.
For more details about the validation study, please refer to [START_REF] Merlier | On the Interactions between Urban Structures and Air Flows: A Numerical Study of the Effects of Urban Morphology on the Building Wind Environment and the Related Building Energy Loads[END_REF].
On the one hand, the Rk-ε model was found to predict flow structures that better match experimental data very close to the isolated obstacle. On the other hand, the RSM better predicts the recirculation in the cavity zone downstream of the rectangular block. In addition, this latter model better reproduces the vortical structure developing between obstacles in the multi-obstacle case, whereas no recirculation is predicted by the Rk-ε. RSM predictions are also generally in better agreement with the LBM LES results than the Rk-ε results are. Nonetheless, note that flow intermittency probably affects the RSM results: no clear stabilization of the mean flow profiles could be achieved in regions where the experimental documentation reports the flow as very unsteady. Hence, according to the validation study, the steady RANS RSM is able to reproduce the main physical phenomena, although imperfectly, being a steady RANS model. As a consequence, the actual simulations were performed using the steady RANS RSM.
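As an illustration of how such model-measurement comparisons are commonly quantified, the short Python sketch below computes two metrics widely used for micro-scale CFD validation, the factor-of-two ratio (FAC2) and the hit rate. The paper does not state which quantitative metrics, if any, were used, so the metric choice, the tolerance values and the velocity samples are assumptions made for the example.

import numpy as np

def fac2(sim, obs):
    # Fraction of points where the prediction lies within a factor of 2 of the observation
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    mask = obs != 0
    ratio = sim[mask] / obs[mask]
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

def hit_rate(sim, obs, rel_tol=0.25, abs_tol=0.05):
    # Share of points within a relative OR an absolute tolerance of the observation
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    err = np.abs(sim - obs)
    return np.mean((err <= rel_tol * np.abs(obs)) | (err <= abs_tol))

# Made-up normalized streamwise velocities along one measurement line
u_exp = np.array([0.12, 0.25, 0.41, 0.55, 0.68])
u_cfd = np.array([0.10, 0.27, 0.38, 0.57, 0.80])
print(fac2(u_cfd, u_exp), hit_rate(u_cfd, u_exp))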
Settings of the computational model
CFD simulations were performed for different building and urban block generic types, which were specifically designed for that purpose. These morphological types are based on an analysis and abstraction of urban textures that exist in different regions of the world, as well as an identification of the urban morphological factors that affect aerodynamic processes that develop in the urban canopy layer. Examples of the analyzed urban patterns can be found in [START_REF] Firley | Fluent release 14.5 user guide[END_REF] and [START_REF] Salat | Cities and Forms: on sustainable urbanism[END_REF]. To examine the effects of the topological properties of built structures, we focus on a cubic building, a U-shaped building and a patio building, as well as on a cube array and a continuous patio array. The height of each construction is H = 10 m. Figure 1 illustrates the different case studies currently considered.
The approach flow was designed using a preliminary simulation of a virtual 10 km long empty tunnel. According to the Davenport classification, the resulting mean wind profile corresponds to an intermediate roughness class between an open and a roughly open landscape [START_REF] Wieringa | New revision of Davenport roughness classification[END_REF]. The mean wind speed at 10 m height equals 4.3 m/s. The computational domain respects the recommendations of [START_REF] Tominaga | AIJ Guidelines for Practical Applications of CFD to Pedestrian Wind Environment around Buildings[END_REF] in terms of size and spatial discretization. Cell dimensioning down to 15 cm was used for complex cases. Mesh sensitivity was verified for the isolated building types but not for the arrays because of the mesh size already involved (more than 15-25 x 10^6 cells). The numerical schemes used are second order at least. Convergence was verified by monitoring mean velocity and turbulent properties profiles for several lines along the domain and by checking the overall evolution of the simulation residuals. Note that no stabilization of the profiles could be achieved for the continuous patio array case. Numerical instabilities periodically occurred. This might be explained by the physical complexity of the flow field, which might not show clear and stable recirculation zones. For this case, results were taken after 2 x 10^3 iterations during which no numerical instability occurred.
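To make the inflow specification concrete, the following sketch rebuilds a neutral logarithmic mean wind profile anchored at the stated reference point (4.3 m/s at 10 m). The aerodynamic roughness length z0 = 0.1 m is an assumed value chosen to lie between the "open" and "roughly open" Davenport classes; it is not a figure given in the paper.

import math

KAPPA = 0.41              # von Karman constant
Z0 = 0.1                  # assumed roughness length [m]
U_REF, Z_REF = 4.3, 10.0  # reference mean speed [m/s] at 10 m (from the text)

# Friction velocity deduced from the reference point of the log law
u_star = KAPPA * U_REF / math.log(Z_REF / Z0)

def mean_speed(z):
    # Neutral log-law mean wind speed at height z [m]
    return u_star / KAPPA * math.log(max(z, Z0) / Z0)

for z in (2.0, 10.0, 30.0, 100.0):
    print(f"z = {z:6.1f} m  ->  U = {mean_speed(z):4.2f} m/s")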
Results and analysis
Figures 2 to 15 show the 3D mean velocity streamlines as well as the 3D mean vorticity contours 1 around the generic constructions and within courtyards. Simulation results were post-processed using the free and open source software ParaView. Typical flow structures around sharp-edged obstacles basically develop:
-the horseshoe vortex, which begins upwind and extends downstream on the sides of obstacles
-the separated bubbles after the leading edges of obstacles
-the arch vortex downstream of obstacles.
Recirculation phenomena located in under-pressure zones generally correspond to low velocity zones. On the contrary, corner streams show high velocities. Within courtyards and canyons, vortical recirculation phenomena develop. Their features depend on the openness of the court or canyon involved, its orientation with respect to the wind incidence and its location with respect to the upstream leading edge of the built structure in case of arrays. In cases of isolated buildings, results show quite different flow patterns and recirculation phenomena in courts depending on their horizontal openness and orientation in relation to the approach flow. Regarding semi-open courts, the internal flow recirculation is often merged with the surrounding flows. Typically, the court of the U building 1 modifies the basic standing vortex as part of it is entrapped inside. A recirculation shaped as a semi arch vortex develops in the court of the U building 2 driven by the above and lateral flows. Note that such flow structures are observed because of the small depth of courts. When deeper, a self-contained vertical structure, which resembles to that developing within the patio, may develop [START_REF] Merlier | On the Interactions between Urban Structures and Air Flows: A Numerical Study of the Effects of Urban Morphology on the Building Wind Environment and the Related Building Energy Loads[END_REF].
The flow patterns developing within the open spaces of urban blocks show flow paths or recirculation phenomena, which more or less interfere with the surrounding flows. These different flow patterns are characterized by different levels of vorticity and mean velocities, creating higher wind speeds or rather sheltered zones in flow recirculation regions. Such vortical recirculation regions develop in courts, canyons and downstream obstacles. In comparison with the flow structures developing around corresponding isolated building types, the flow structures developing in arrays are altered because of the presence of the additional constructions. These constructions directly affect the basic recirculation phenomena or create a general group effect which modifies the driving aerodynamic forces. Essentially, corner streams are altered in the first line of cubes and the general recirculation that develops above the upstream part of the continuous patio array affects the vortical structures within the underlying patios. Two structures occur inside: one related to the top separated bubble and the second below. The different flow structures then evolve streamwise showing less defined shapes in the cube array because of the sheltering provided by the upstream obstacles and the convergence of lateral flows, and showing more usual shapes in patios.
Conclusion
This study uses CFD to analyze the flow structures that develop around built structures depending on their topological properties. A steady RANS approach is implemented. Generic case studies and forced convection processes are considered.
The presence of courts or surrounding constructions modifies flow structures with respect to the basic structures usually observed around isolated sharp-edged obstacles. The different basic flow structures are intensified, reduced or may even be prevented, depending on the topology of the built structure. Hence, the built design appears to be critical in defining urban air flows and therefore urban microclimates. The different flow patterns created may affect the pedestrian wind comfort, the convective cooling of surfaces, the turbulent heat removal as well as the building energy loads due to air infiltration and heat transmission through the building envelope. Nevertheless, to evaluate these effects in detail, a more detailed computational approach is required.
Figure 1: Case studies
Figure 2: Flow structures around the cube
Figure 3: Flow structures within and around the patio
1 Blue: 0.5 rad.s-1; green: 1.5 rad.s-1; red: 5.5 rad.s-1.
00972018 | en | [ "spi", "spi.nrj" ] | 2024/03/04 16:41:24 | 2013 | https://minesparis-psl.hal.science/hal-00972018/file/Planning-oriented%20yearly%20simulation%20of%20energy%20storage%20operation.pdf
Seddik Yassine Abdelouadoud
email: [email protected]
Robin Girard
email: [email protected]
Thierry Guiot
email: [email protected]
PLANNING-ORIENTED YEARLY SIMULATION OF ENERGY STORAGE OPERATION IN DISTRIBUTION SYSTEM FOR PROFIT MAXIMIZATION, VOLTAGE REGULATION AND RESERVE PROVISIONNING
The connection of generation units at the distribution level is expected to increase significantly in the future. This phenomenon will have impacts on the planning and operation of the electric system, both at the local (e.g. appearance of reverse power flows) and global (e.g. modification of the supply-demand equilibrium and of the reserve provisioning capabilities) level. In this context, storage devices are increasingly seen as a possible way to mitigate these impacts, with the caveat -compared to alternative solutions-that their versatility comes with high investment costs. Consequently, it is of the utmost importance to make sure that the full capabilities of these storage devices are put to use. Following this line of thought, this paper presents an algorithm suitable for planning purposes that simulates the yearly operation of distributed energy storage units connected at the medium voltage level to maximize the profit drawn from market operation while respecting voltage and reserve requirement constraints.
INTRODUCTION
In order to ensure the sustainability of the electric power system, the share of renewable energies in the production mix will increase in the future. For example, the European Union has set goals for its member states in order to attain a 20% share of renewable energy in its final energy consumption by 2020. This target will be partially met by integrating significant amounts of dispersed renewable energy generators (mainly photovoltaic (PV) and wind power) into the distribution grid. These developments will have considerable impacts on the design and operation of the electric system, both at the national and local level, some of which are detailed in the remainder of this introduction. The rest of this paper is organized as follows: first, we will outline the potential applications for storage in this context and how they are modelled, then we will present the methodology applied to solve the resulting optimization problem and end by applying it to a case study and discussing the influence of relevant parameters.
Impacts at the global level
The production side of current electric systems consists in a set of base, intermediate and peak load power plants that is optimal considering the load pattern -and its expected evolution-of the area served. Due to their inherent non-dispatchable nature, wind and PV power can be essentially considered, for this purpose, as negative loads. Consequently, a massive development of such production means will entail a change in the optimal distribution between base, intermediate and peak load power plants, as the resulting net load pattern will be different, and probably more variable, than the existing one. In a liberalized electricity market, this translates into a modification of prices patterns, consisting mainly in lowering prices when non-dispatchable generation levels are high and load levels are low. Taken even further, it can lead to the appearance of negative prices on the spot market.
Spinning reserves are used by system operators to compensate for unpredictable imbalances between supply and demand that can be caused by errors in load forecasting or unplanned outages of generation units. Thus, the massive introduction of non-dispatchable renewable generation units will have an impact on reserve requirement calculation, as the uncertainty on their forecasting is likely to have different characteristics from that of the load. However, it is not yet clear if this will translate into an increase in reserve requirements compared to the traditional rule of using the power generated by the largest unit connected to the grid (see [START_REF] Doherty | A new approach to quantify reserve demand in systems with significant installed wind capacity[END_REF] and [START_REF] Ortega-Vazquez | Estimating the spinning reserve requirements in systems with significant wind power generation penetration[END_REF] for contrasting results). What is clear nonetheless is that the replacement of conventional generators by non-dispatchable ones -who are unable to provide spinning reserves in their current operation mode-will induce a change in how reserves are provisioned, either by forcing conventional generators to operate further from their optimal set-points or by resorting to alternatives solutions.
Impacts at the local level
Traditionally, the distribution system has been designed to deliver low-voltage electricity to end-users from centralized power plants connected to the transmission system. It is thus operated under the assumption of unidirectional power flows. The integration of distributed energy sources challenges this assumption since, at high penetration level, it is possible that local generation will surpass local load. The first issue created by these new operating conditions concerns voltage levels. Indeed, current distribution systems have been designed to take full advantage of the permissible voltage range by setting the voltage near the upper limit at the substation and selecting line characteristics that ensure the voltage at the end of the feeder in peak load conditions is still above the lower threshold, while minimizing their cost. The consequence is that, in case of a reverse power flow, it is possible to encounter upper-limit voltage violation even if the magnitude of the reverse power flow is significantly lower than that of the expected peak load [START_REF] Liu | Distribution System Voltage Performance Analysis for High-Penetration Photovoltaics[END_REF].
THE POTENTIAL ROLE OF STORAGE IN THIS CONTEXT
Storage is often considered, among other alternatives, as a natural candidate to accompany a significant development of intermittent energy sources (see, for example, [START_REF] Schaber | Utility scale storage of renewable energy[END_REF]). But let's not forget that storage means are already today an integral part of some electric systems. For example, in France, 5 GW of pumped hydro are in operation, mainly to accommodate for a large inflexible nuclear generation fleet and provide ancillary services. For an exhaustive account of the roles that storage devices can play in an electric system, please refer to [START_REF] Eyer | Energy Storage for the Electricity Grid: Benefits and Market Potential Assessment Guide[END_REF].
In the introduction, we have outlined some of the expected impacts on the electric system of a shift to intermittent renewable energy sources. The objective of this section is to present how storage devices could help mitigate these and how to model such applications. Henceforth, we will operate under the assumption of the existence of a deregulated electricity market and an Active Distribution Network, as outlined in [START_REF] Kramer | Advanced Power Electronic Interfaces for Distributed Energy Systems[END_REF]. In particular, we suppose that the DSO has the possibility to control active and reactive power from distributed storage and generation means and has access to the relevant network physical quantities through a suitable ICT infrastructure. Furthermore, the DSO is responsible for the local supply-demand equilibrium downstream of the substation considered and is able to participate in the electricity market, where it is a price taker.
Energy Time-Shift
The first impact outlined in the introduction is the modification of the supply-demand equilibrium. If we assume that market prices are a relevant indicator of the state of the supply-demand equilibrium, an operator using a storage device to maximize its profit on the market by buying at low prices and selling at high ones (an application sometimes referred to as "price arbitrage" or "energy arbitrage", as in [START_REF] Graves | Opportunities for electricity storage in deregulating markets[END_REF]) will take part in the mitigation of such an impact.
As in [START_REF] Graves | Opportunities for electricity storage in deregulating markets[END_REF], we adopt a simplified model for energy storage devices, taking into account maximum power for charge and discharge, energy capacity and efficiency. In this light, the energy time-shift application can be modelled by the following set of equations:

\min \sum_{t \in T} C^{Market}_{t} P^{Market}_{t}   (1)

\forall t \in T: \; P^{Market}_{t} + \sum_{k \in K} P^{RE}_{k,t} + \sum_{k \in K} ( P^{dis}_{k,t} - P^{ch}_{k,t} ) = \sum_{k \in K} P^{Load}_{k,t}   (2)

\forall t \in T, \forall k \in K: \; SOC_{k,t} = SOC_{k,init} + \sum_{p=1}^{t} ( \eta^{ch}_{k} P^{ch}_{k,p} - P^{dis}_{k,p} / \eta^{dis}_{k} )   (3)

\forall t \in T, \forall k \in K: \; 0 \le P^{ch}_{k,t} \le P^{ch,max}_{k}, \quad 0 \le P^{dis}_{k,t} \le P^{dis,max}_{k}   (4)

\forall t \in T, \forall k \in K: \; 0 \le SOC_{k,t} \le SOC^{max}_{k}   (5)

\forall t \in T: \; | P^{Market}_{t} | \le P^{Market,max}   (6)

where P^{Market,max} is the maximal power transacted on the market.
(1) represents the minimization of the cost incurred by the DSO, (2) expresses the local supply-demand equilibrium for which the DSO is responsible, (3) models the evolution of the storage device state-of-charge, while (4), (5) and (6) deal with technical limitations.
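A minimal sketch of how (1)-(6) can be assembled with an off-the-shelf convex optimisation layer is given below. It is illustrative only: the horizon, the prices, the load and renewable profiles and the storage parameters are placeholder values, the sign convention (positive p_mkt means energy bought on the market) is an assumption, and cvxpy is simply one possible modelling tool, not the one used by the authors.

import numpy as np
import cvxpy as cp

T, K = 168, 3                                 # one illustrative week, 3 storage units
price = np.random.uniform(20.0, 80.0, T)      # market prices (placeholder data)
load = np.random.uniform(2.0, 6.0, T)         # aggregated load [MW] (placeholder data)
p_re = np.random.uniform(0.0, 3.0, (T, K))    # renewable production per node [MW] (placeholder)

eta_ch, eta_dis = 0.95, 0.95                  # charge / discharge efficiencies
p_max, soc_max, soc_init = 1.0, 4.0, 2.0      # power and energy limits [MW, MWh]
p_market_max = 10.0                           # maximal power transacted on the market

p_ch = cp.Variable((T, K), nonneg=True)
p_dis = cp.Variable((T, K), nonneg=True)
p_mkt = cp.Variable(T)                        # >0: bought on the market, <0: sold

# State of charge, eq. (3): cumulative charging minus discharging
soc = soc_init + cp.cumsum(eta_ch * p_ch - p_dis / eta_dis, axis=0)

constraints = [
    p_mkt + cp.sum(p_re + p_dis - p_ch, axis=1) == load,   # eq. (2), assumed sign convention
    p_ch <= p_max, p_dis <= p_max,                          # eq. (4)
    soc >= 0, soc <= soc_max,                               # eq. (5)
    cp.abs(p_mkt) <= p_market_max,                          # eq. (6)
]
prob = cp.Problem(cp.Minimize(price @ p_mkt), constraints)  # eq. (1)
prob.solve()
print("market cost over the window:", prob.value)

For a full year at hourly resolution the same construction applies with T = 8760, which is precisely what makes the dimensionality issue discussed later relevant.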
Spinning Reserve Provisioning
A potentially fair solution -in terms of reducing negative externalities -to the second impact mentioned in the introduction would be to devise a mechanism requiring non-dispatchable energy producers to provide spinning reserves as a function of the power produced, so that the diminished reserve provisioning capabilities of conventional generating units are compensated for at the system level. Hereafter, we assume that such a mechanism exists, that the reserve requirement is proportional to the power produced and that it is fulfilled by the storage devices deployed in the system considered. From a modeling perspective, this translates into the following set of equations:
∀t∈T:  Σ_{k∈K} ( P_dis,k^max − P_dis^{k,t} + P_ch^{k,t} )  ≥  UpReserve · Σ_{k∈K} P_RE^{k,t}    (7)

∀t∈T:  Σ_{k∈K} ( P_ch,k^max − P_ch^{k,t} + P_dis^{k,t} )  ≥  DownReserve · Σ_{k∈K} P_RE^{k,t}    (8)

where UpReserve and DownReserve are the coefficients defining the upward and downward reserve requirements as a proportion of the renewable power produced.
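For illustration, these requirements can be appended to the scheduling model of the previous sketch with a helper function such as the one below; the function name and the 10% coefficients are illustrative only.

def add_reserve_constraints(prob, T, pv, p_ch, p_dis, p_ch_max, p_dis_max,
                            up_reserve=0.10, down_reserve=0.10):
    # Appends the proportional reserve requirements (7)-(8) to a PuLP model
    # built as in the previous sketch.
    for t in T:
        # (7) upward reserve: unused discharge capability plus current charging power
        prob += (p_dis_max - p_dis[t]) + p_ch[t] >= up_reserve * pv[t]
        # (8) downward reserve: unused charge capability plus current discharging power
        prob += (p_ch_max - p_ch[t]) + p_dis[t] >= down_reserve * pv[t]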
Voltage Control
Henceforth, we assume that the storage devices and renewable energy units are connected to the grid through an Advanced Power Electronic Interface (APEI), as defined in [START_REF] Kramer | Advanced Power Electronic Interfaces for Distributed Energy Systems[END_REF], and that the substation transformer is equipped with an On-Load Tap Changer (OLTC), for which we use a simplified continuous model. We also assume that the grid downstream of the substation has a radial structure, which allows us to use the set of network equations defined in [START_REF] Venkatesh | An accurate voltage solution method for radial distribution systems[END_REF]
∀t∈T, for each branch j of the radial network (from sending node i to receiving node j):

(9)  active power balance along branch j, including the quadratic loss term R_j · ( (P_j^t)² + (Q_j^t)² ) / (U_j^t)²
(10) reactive power balance along branch j, including the quadratic loss term X_j · ( (P_j^t)² + (Q_j^t)² ) / (U_j^t)²
(11) (U_j^t)² = (U_i^t)² − 2 ( R_j P_j^t + X_j Q_j^t ) + Z_j² (S_j^t)²
(12)-(13) with Z_j² = R_j² + X_j² and (S_j^t)² = (P_j^t)² + (Q_j^t)²
(14) Q_j^min ≤ Q_j^t ≤ Q_j^max for the reactive power injected by the APEIs
(15) V^min ≤ V_j^t ≤ V^max and V_OLTC^min ≤ V_OLTC^t ≤ V_OLTC^max
METHODOLOGY
The optimization problem defined by equations (1) to (16) is non-linear, due to the presence of quadratic terms, and non-convex because of the loss terms ( (P_j^t)² + (Q_j^t)² ) / (U_j^t)² in equations (9) and (10). The control variables are the vectors of active and reactive power injected by the storage devices, the vectors of reactive power injected by the renewable energy APEIs and the vector of voltages downstream of the OLTC-equipped transformer. Equation (3) introduces time-coupling constraints that prevent us from solving the problem time step by time step. Consequently, if we want to solve it for a year with one-hour time steps, the dimensionality of the problem becomes very high. These combined characteristics make the problem difficult to solve, especially in the context of distribution system planning, for which a high number of varying instances have to be solved (see, for example, [START_REF] Martins | Active distribution network integrated planning incorporating distributed generation and load response uncertainties[END_REF]) in order to reach an optimal plan. Hereafter, we present the simplifications adopted to find a satisfactory compromise between optimality and computation time.
Partial time-decoupling
As explained before, the coupling between the time steps is induced by the evolution of the state of charge of the storage units. Moreover, these states of charge depend only on the active powers produced by the storage devices, which are mainly driven by the evolution of market prices. As these market prices often present strong daily and weekly patterns, we can expect that there is a time horizon above which no significant additional benefit can be captured. We verify this by solving (1) to (16) for a year with one-hour time steps and for various time horizons, while measuring the computation time. Results are presented in Table 1 and lead us to choose a 7-day time horizon.
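For illustration, the partial time decoupling can be organised as in the sketch below: the year is cut into 7-day blocks that are optimised independently, the final state of charge of one block becoming the initial state of the next. The function solve_block is a hypothetical stand-in for the linear programme of equations (1) to (8) and simply returns its inputs here.

def solve_block(block_prices, soc_start):
    # A real implementation would build and solve the PuLP model shown above,
    # restricted to this block, and return its cost and final state of charge.
    return 0.0, soc_start

hours_per_block = 7 * 24
yearly_prices = [50.0] * 8760              # hypothetical flat price series
soc_state, total_cost = 1.0, 0.0
for start in range(0, len(yearly_prices), hours_per_block):
    block = yearly_prices[start:start + hours_per_block]
    cost, soc_state = solve_block(block, soc_state)
    total_cost += cost
print(total_cost)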
Scheduling and voltage control decoupling
The observation of (1) shows that only active powers have an influence on the objective function, while reactive powers only play a role when voltage magnitudes are inadequate. For the remainder, we define three types of time steps: a time step is deemed non-critical if the network equations and constraints are not active, semi-critical if they are active but not binding, and critical if they are. From [START_REF] Liu | Distribution System Voltage Performance Analysis for High-Penetration Photovoltaics[END_REF], we know that, even at high levels of penetration of photovoltaic generators, voltage limit violations are infrequent events. To confirm that this still holds in the presence of storage devices, we execute simulations on the 69-bus medium-voltage network used in [START_REF] Venkatesh | An accurate voltage solution method for radial distribution systems[END_REF] for various levels of photovoltaic and storage penetration (defined as the ratio between the maximal apparent power of PV and storage connected at a given node and its annual maximal load) and compute the numbers of semi-critical and critical time steps.
Figure 1: Number of critical and semi-critical time steps
We thus observe that, even at unconventionally high levels of penetration, the number of critical time steps remains relatively low. This allows us to separate the complete problem into a linear multi-stage master problem of scheduling active powers, defined by (1) to (8), for which we further reduce the number of variables by aggregating the storage units with the same efficiency and discharge duration, and a single-stage non-linear non-convex problem, defined by (9) to (16), to which we add the objective of minimizing the quadratic deviation from the master problem results. We then recombine both results to find a solution that respects the constraints while being close enough to the optimal value of the objective function (an upper bound on the induced loss of optimality can be evaluated, since the results of the master problem give a lower bound on the cost function).
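For illustration, the decomposition can be organised as in the following skeleton, in which solve_master, run_power_flow and solve_voltage_correction are hypothetical placeholders for the aggregated scheduling programme (1) to (8), a radial load-flow computation and the non-linear correction problem (9) to (16); only the control flow is shown.

def solve_master(prices):
    # placeholder for the aggregated scheduling LP (1)-(8)
    return {t: {"p_ch": 0.0, "p_dis": 0.0} for t in range(len(prices))}

def run_power_flow(schedule_t):
    # placeholder for a radial load flow; returns extreme voltage magnitudes (p.u.)
    return {"v_min": 0.98, "v_max": 1.02}

def solve_voltage_correction(schedule_t):
    # placeholder for (9)-(16) with minimisation of the quadratic deviation
    # from the master schedule; here the schedule is simply kept unchanged
    return schedule_t

prices = [50.0] * 24
schedule = solve_master(prices)
for t, schedule_t in schedule.items():
    voltages = run_power_flow(schedule_t)
    if voltages["v_min"] < 0.95 or voltages["v_max"] > 1.05:   # critical time step
        schedule[t] = solve_voltage_correction(schedule_t)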
CASE STUDY
We apply this methodology to a case study consisting of the 69-bus network mentioned above, with residential-type loads connected at each node and scaled so that the annual maximal load is equal to the loads used in [START_REF] Venkatesh | An accurate voltage solution method for radial distribution systems[END_REF]. Reactive loads are defined so that the power factor remains constant and equal to the one used in [START_REF] Venkatesh | An accurate voltage solution method for radial distribution systems[END_REF]. We study the impact of the deployment of PV generators and storage units by varying the number of nodes equipped (N) and the level of penetration (L), as defined above. For each number of nodes equipped, the nodes are chosen in increasing order of driving-point resistance. Below, we present some of the results obtained.
CONCLUSION
First, we have outlined some of the opportunities offered to storage units by the expected deployment of renewable energy generators in the distribution system. Then, we have modelled these applications in the form of a non-convex non-linear optimization problem, which we separate into a master and a slave problem in order to make it tractable. We end by applying it to a case study and verifying that the results are coherent with what was expected. Future research will be conducted along three axes: improving the computational efficiency of the algorithm, adding other applications of storage to the model, and integrating it in an exhaustive distribution system planning process.
Figure 2: Influence of voltage limit violations on active powers for a critical time step
Figure 3:
Figure 4: (a) Voltage magnitude by node and by time step, sorted in increasing order of voltage magnitude
In equations (1) to (6): T is the set of time steps considered and K the set of buses considered; C_Market^t is the market price forecast and P_Market^t the algebraic power transacted on the market (equivalent here to the power transiting through the substation and positive when buying) during time step t; P_Loss^t are the aggregate network losses downstream of the substation; P_Load^{k,t}, P_RE^{k,t}, P_ch^{k,t}, P_dis^{k,t}, P_s^{k,t} and SOC^{k,t} are, respectively, the load, the renewable energy power injected, the charge, discharge and algebraic power injected in the grid, and the state of charge of the storage device, at bus k during time step t; SOC_k^max, P_dis,k^max and P_ch,k^max are the energy capacity, maximal discharge power and maximal charge power of the storage device connected at bus k, while P_Market^max is the maximal power transacted on the market.
Table 1: Evolution of optimality and computation time as a function of the time horizon

Time horizon (days) | Cost variation (%) | Computation time (s)
1   | 0     | 26
7   | -2.02 | 70
70  | -2.11 | 709
140 | -2.11 | 2614
|
04109117 | en | [
"sde"
] | 2024/03/04 16:41:24 | 2008 | https://hal.ird.fr/ird-04109117/file/Barthes_2008_Geoderma.pdf | Bernard G Barthès
email: <[email protected]>
Ernest Kouakoua
Marie-Christine Larré-Larrouy
Tantely M Razafimbelo
Edgar F De Luca
Anastase Azontonde
Carmen S V J Neves
Pedro L De Freitas
Christian L Feller
Texture and sesquioxide effects on water-stable aggregates and organic matter in some tropical soils
Keywords: Kaolinitic soils, Aggregation, Organic matter, Texture, Sesquioxides, Aluminium
Introduction
Soil organic matter (OM) and aggregation are important determinants of soil fertility and productivity, and are key factors in the global carbon (C) cycle. Indeed, soil OM is a source of nutrients, promotes structure and water retention, and is a substrate for soil heterotrophs [START_REF] Baldock | Role of the soil matrix and minerals in protecting natural organic materials against biological attack[END_REF]. Soil aggregation has a major influence on root development, on water and C cycling, and on soil resistance to erosion [START_REF] Kay | Soil structure and organic carbon: a review[END_REF].
Moreover, soil OM and aggregation are closely linked one to another: OM is considered an important aggregate binding agent, on the one hand, and may be physically protected against decomposition within stable aggregates, on the other hand [START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF][START_REF] Angers | Dynamics of soil aggregation and C sequestration[END_REF].
However, the stability of soil OM and aggregation as well as relationships between OM and aggregation are affected by the presence of sesquioxides. Indeed, the aggregating role of OM could be less effective in soils that include large amounts of such inorganic constituents (Tisdall and Oades, 1982;[START_REF] Six | Soil organic matter, biota and aggregation in temperate and tropical soils -Effect of no-tillage[END_REF]. Due to the role of sesquioxides in the stabilization of OM [START_REF] Dalal | Aggregation and organic matter storage in sub-humid and semi-arid soils[END_REF][START_REF] Guggenberger | Effect of mineral colloids on biogeochemical cycling of C, N, P, and S in soil[END_REF], the importance of aggregation in the stability of OM should also be re-examined for soils that are rich in sesquioxides. This is the case for low-activity clay (LAC) soils, which include clay minerals dominated by 1:1 phyllosilicates, as kaolinite, associated with crystallized and poorly or non-crystallized iron (Fe) and aluminium (Al) oxides and hydroxides. They cover about 60-70% of tropical areas [START_REF] Segalen | Les Sols Ferrallitiques et leur Répartition Géographique. Tome 1. Introduction Générale. Les Sols Ferrallitiques : leur Identification et Environnement Immédiat[END_REF], where they are mainly represented by Alfisols, Oxisols and Ultisols according to the US Soil Taxonomy (Soil Survey Staff, 1999) or by Acrisols, Ferralsols and Nitisols according to the FAO soil classification (FAO-ISRIC-ISSS, 1998). Though they are widely distributed, LAC tropical soils have received much less attention than soils from temperate areas, and knowledge on OM and aggregation in LAC tropical soils still remains fragmentary.
As far as LAC tropical soils are concerned, more information is thus useful to verify the importance of sesquioxides in soil OM and aggregate stability, and complementarily, to characterize the interrelations between OM and aggregation. The influence of texture on soil OM is more clearly established. Several studies, some of them regarding LAC tropical soils, showed that soil C content increases with clay content [START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF]. However, the role of texture in stable aggregation remains questionable (Le [START_REF] Bissonnais | Soil characteristics and aggregate stability[END_REF][START_REF] Amézketa | Soil aggregate stability: a review[END_REF], and requires additional investigations.
Soil OM may first be considered through the determination of total soil C content (Ct), as far as soils do not include carbonates. It can also be characterized by chemical fractionations, but the importance of the obtained fractions (e.g. fulvic and humic acids) towards major soil processes such as aggregation and OM mineralization is not yet well established [START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF][START_REF] Baldock | Interactions of organic materials and microorganisms with minerals in the stabilization of soil structure[END_REF]. Therefore, increasing attention has progressively been paid to procedures that involve physical fractionations of soil OM, e.g. particle size fractionations. Indeed, the characterization of OM associated with soil primary particles (sands, silts, clay) has proved to be a relevant approach to separate fractions that have different morphologies, composition, and dynamics [START_REF] Christensen | Physical fractionation of soil and organic matter in primary particle size and density separates[END_REF][START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF]. The characterization of water-stable aggregate distribution, which may be achieved using simple laboratory procedures, also provides useful information on soil quality and behaviour (Tisdall and Oades, 1982;[START_REF] Amézketa | Soil aggregate stability: a review[END_REF][START_REF] Barthès | Field-scale run-off and erosion in relation to topsoil aggregate stability in three tropical regions (Benin, Cameroon, Mexico)[END_REF].
The objectives of the present work were (i) to characterize the size distributions of stable aggregates and organic constituents associated with primary particles, (ii) to study how these distributions were affected by soil texture and Al-and Fe-sesquioxides, and (iii) how these distributions were interrelated, for a range of topsoil layers (0-10 cm) from LAC soils from sub-Saharan Africa and Brazil.
Material and methods
Sites and sampling
Composite soil samples were collected at 0-10 cm depth in 18 plots: five in southern Congo-Brazzaville (two sites in the same region), four in southern Benin (one site), and nine in Brazil, including three in Goiás (one site), three in São Paulo (three sites), and three in Paraná (one site). Table 1 provides information on location, climate, soil type, parent material, texture, clay mineralogy, and land use. The climate is tropical (Congo, Benin, Goiás) or subtropical (Paraná, São Paulo), ranges in mean annual rainfall and temperature being 1100-1600 mm and 23-27°C, respectively. Soils are mainly Oxisols and Ultisols, with sandy to clayey texture, and clay mineralogy is dominated by kaolinite. Further information on sites and soils was given by [START_REF] Kouakoua | Relations entre stabilité de l'agrégation et matière organique totale et soluble à l'eau chaude dans des sols ferrallitiques argileux (Congo, Brésil)[END_REF] for Congo, Goiás and Paraná, [START_REF] Barthès | Field-scale run-off and erosion in relation to topsoil aggregate stability in three tropical regions (Benin, Cameroon, Mexico)[END_REF] for Benin, and De Luca (2002) for São Paulo. Samples were generally collected using 0.25-to 0.5-L cylinders, air-dried at room temperature, then gently crushed using a mortar and pestle and sieved through a 2-mm mesh. Aliquots were finely ground to pass a 0.2-mm mesh, also using a mortar and pestle.
Aggregate fractionation
The distribution of water-stable aggregates was determined on 2-mm sieved air-dried soil samples using a test adapted from [START_REF] Kemper | Aggregate stability and size distribution[END_REF]. Four grams of soil were rapidly immersed into deionised water for 30 min then wet-sieved during 6 min, using a motor-driven holder lowering and raising 200-µm sieves into containers of water. On the one hand, the fraction > 200 µm was dried at 105°C then weighed. It was successively sieved into dispersive NaOH solution (0.05 M), dried at 105°C, and weighed to determine coarse sand content. Stable macroaggregate (> 200 µm) content was calculated as the difference between fraction > 200 µm and coarse sands. On the other hand, the fraction < 200 µm was transferred to a glass cylinder, which was shaken by hand (30 end-over-end tumblings) and left to stand during the correct settling time for the 0-20 µm fraction (pipette method). An aliquot of the fraction < 20 µm was then siphoned from the upper 10 cm of the suspension, dried at 105°C, weighed, and referred to the cylinder volume to determine microaggregate (< 20 µm) content.
Mesoaggregate (20-200 µm) content was calculated by difference between total soil and other fractions. Compared to the macroaggregate fraction (> 200 µm), which included genuine aggregates only due to coarse sand-correction, the meso-and microaggregate fractions included aggregates as well as primary particles of the same size. Measurements were made on four replicates per sample. It should be noticed that the aggregate fractions were not extracted thus were not available for further analyses.
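For illustration, the sketch below converts hypothetical wet-sieving measurements into the three aggregate contents defined above (in g kg-1 of 2-mm sieved soil); all masses and volumes are invented for the example and do not come from the studied samples.

sample_mass = 4.0        # g of 2-mm sieved soil used in the test
mass_gt_200 = 2.9        # g, oven-dried fraction > 200 um after wet sieving
coarse_sands = 0.2       # g, residue of that fraction after dispersion in NaOH
aliquot_mass = 0.0044    # g, dried aliquot siphoned from the < 20 um suspension
aliquot_volume = 0.025   # L
cylinder_volume = 0.25   # L

# stable macroaggregates (> 200 um) are corrected for coarse sands
macro = (mass_gt_200 - coarse_sands) / sample_mass * 1000
# microaggregates (< 20 um): aliquot mass referred to the cylinder volume
micro = aliquot_mass * (cylinder_volume / aliquot_volume) / sample_mass * 1000
sands = coarse_sands / sample_mass * 1000
# mesoaggregates (20-200 um) are obtained by difference with the total soil
meso = 1000 - macro - micro - sands
print(round(macro), round(meso), round(micro))   # g kg-1 soil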
Particle size fractionation of organic matter
The particle size fractionation of OM requires maximum soil dispersion with minimum particle alteration. It was carried out on 2-mm sieved air-dried soil samples of 40 g (sandy soils) or 20 g (other soils), in duplicate, following [START_REF] Gavinelli | A routine method to study soil organic matter by particle-size fractionation: examples for tropical soils[END_REF]. Each sample was pre-soaked overnight at 4°C in deionised water with 2.5 g L -1 sodium metaphosphate. It was then shaken with agate balls in a rotary shaker for 2 h (sandy soils) or 6 h (others).
Indeed, a long shaking duration is necessary to achieve maximum dispersion in clayey samples, but in coarse textured samples it may lead to an alteration of coarse organic particles by sands [START_REF] Feller | Utilisation des résines sodiques et des ultrasons dans le fractionnement granulométrique de la matière organique des sols. Intérêt et limites[END_REF]. Next the soil suspension was wet-sieved through 200-and 50µm sieves, successively. The fractions remaining in the sieves were washed with water and the washings added to the 0-50 µm suspension. The suspension < 50 µm was ultrasonicated for 10 min with a probe-type ultrasound generating unit, then it was sieved on a 20-µm screen, and the residual material obtained from the sieve was washed. The three fractions > 20 µm were dried at 60°C and weighed. The suspension < 20 µm was transferred to a 1-L glass cylinder, where water was added to bring the volume to 1 L. Then the cylinder was shaken by hand (30 end-over-end tumblings) and left to stand during the correct settling time for the 0-2 µm fraction (pipette method). The upper 30 cm of the suspension, which included particles < 2 µm only, was siphoned off. The process (i.e. water addition, shaking, settling, and siphoning) was repeated until the supernatant was clear thus the fraction 0-2 µm extracted completely (five cycles at least). The 0-2 µm suspension was then flocculated with strontium chloride, centrifuged, filtered, and the pellets were washed with water, dried at 60°C and weighed. For some samples (Benin, Saõ Paulo), the supernatant from centrifugation and the washing waters from previous sieving operations were grouped, then an aliquot was collected and filtrated at 0.2 µm for the analysis of soluble organic C. The 2-20 µm fraction, which remained at the bottom of the sedimentation cylinder, was also dried at 60°C and weighed.
All these fractions, separated from completely dispersed soil samples, were called dispersed fractions, as opposed to aggregate fractions. In the present paper, data regarding the dispersed fractions 50-200 and 20-50 µm have been combined (20-200 µm), as well as those regarding the fractions 2-20 and 0-2 µm (< 20 µm).
Analyses of carbon, pH, and iron and aluminium sesquioxides
Carbon contents of finely ground (< 0.2 mm) soil aliquots and dispersed fractions were determined by dry combustion using a Leco CHN 600 elemental analyzer (St Joseph, MI). In the absence of carbonates, all C was assumed to be organic. Soluble organic C was analyzed using a Shimadzu TOC 5000 analyzer (Kyoto, Japan). The pH in water was determined with a soil:solution ratio of 1:2.5. On whole soil aliquots, "free" Fe (Fe CBD ) and Al (Al CBD ) were extracted with citrate-bicarbonate-dithionite (CBD), "amorphous" Fe (Fe OX ) and Al (Al OX ) with ammonium oxalate, and all were analyzed using an Unicam P 900 (Cambridge, UK) atomic absorption spectrometer [START_REF] Rouiller | Méthodes d'analyses des sols[END_REF].
Statistical analyses
Simple and multiple linear correlation coefficients were calculated between variables, and their significance was denoted NS for p ≥ 0.05, * for p < 0.05, ** for p < 0.01, and *** for p < 0.001. Principal component analyses (PCAs) were also carried out on the dataset. The objective of PCA is to reduce the original variables to a smaller number of representative and uncorrelated factors, and to detect structure in the relationships between the original variables [START_REF] Lebart | Statistique Exploratoire Multidimensionnelle[END_REF].
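As an illustration only, this treatment can be reproduced with the scipy and scikit-learn libraries as in the sketch below; the 18-row data set is made of random placeholder values, not the measured data, and supplementary variables are simply correlated with the factor scores after fitting so that they do not contribute to factor definition. The sketch does not represent the software actually used for this study.

import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 18                                       # 18 plots
clay_fine_silt = rng.uniform(80, 870, n)     # g kg-1 soil, placeholder values
al_cbd = rng.uniform(0.7, 7.6, n)            # g kg-1 soil, placeholder values
macroaggregates = rng.uniform(160, 760, n)   # g kg-1 soil, placeholder values
ct = rng.uniform(5, 43, n)                   # g C kg-1 soil, placeholder values

def significance(p):
    # significance coding used in the text
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "NS"

r, p = pearsonr(al_cbd, macroaggregates)
print(f"r = {r:.3f} {significance(p)}")

active = np.column_stack([clay_fine_silt, al_cbd, macroaggregates])
active = (active - active.mean(axis=0)) / active.std(axis=0)   # standardization
pca = PCA(n_components=2).fit(active)
print(pca.explained_variance_ratio_)          # share of variability per factor

scores = pca.transform(active)                # plot coordinates on the factors
supplementary = (ct - ct.mean()) / ct.std()
print([float(np.corrcoef(supplementary, scores[:, k])[0, 1]) for k in range(2)])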
Results
Studied variables are presented in Table 2, and main correlations between them in Table 3.
Soil texture and contents in Fe-and Al-sesquioxides
The cumulative yield of dispersed fractions ranged from 95 to 101%, and averaged 99%. Clay plus fine silts (< 20 µm) ranged from 8 to 87%; they correlated negatively with coarse sands (> 200 µm), and to a lesser extent, with coarse silts plus fine sands (20-200 µm).
Soil content in Fe CBD ranged from 8 to 99 g kg -1 , and Al CBD , Fe OX and Al OX from 0.4 to 7.6 g kg -1 . All correlated positively one with another in general, particularly Fe CBD and Fe OX , and Al CBD and Al OX . Moreover, all correlated positively with clay plus fine silts, Al CBD especially. Clay plus fine silts tended to increase and coarse sands to decrease with land use intensification (i.e. cultivation, tillage, reduced surface cover) 1 . In contrast, the effect of land use on Fe-and Al-sesquioxides was not clear.
Soil aggregate distribution; relations with texture and Fe-and Al-sesquioxides
The cumulative yield of aggregate fractions plus coarse sands was 100% by definition, the mesoaggregate fraction being calculated by difference. Aggregate distributions were generally dominated by macroaggregates (> 200 µm): they included 42 to 82% of the mass of aggregate fractions (coarse sands being excluded), and represented a higher soil proportion than mesoaggregates (20-200 µm) in 16 out of the 18 samples. The proportion of microaggregates (< 20 µm) was very low. Macro-and mesoaggregate fractions did not correlate significantly (except on a coarse sand-free basis, due to the low proportion of microaggregates). The proportion of microaggregates did not relate clearly with other aggregate fractions.
Considering all 18 plots, the macroaggregate fraction correlated negatively with coarse sands and positively with clay plus fine silts, Al CBD , Al OX (Figure 1a), and to a lesser extent, with Fe CBD and Fe OX . It seemed that macroaggregates correlated with clay plus fine silts "indirectly" (relation A), as a consequence of their close negative correlations with coarse sands (relation B), which related negatively with clay plus fine silts (relation C). Indeed, the product of the correlation coefficients assigned to relations B and C was similar to the correlation coefficient assigned to relation A. Meso- and microaggregate fractions related less closely with texture and sesquioxides.
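This reasoning can be illustrated on simulated data in which coarse sands drive both macroaggregation and clay plus fine silts; the coefficients below are purely illustrative and are not taken from the studied samples.

import numpy as np

rng = np.random.default_rng(1)
sands = rng.normal(size=500)                              # coarse sands (standardized)
macro = -0.8 * sands + rng.normal(scale=0.6, size=500)    # relation B
fines = -0.9 * sands + rng.normal(scale=0.4, size=500)    # relation C
r_b = np.corrcoef(macro, sands)[0, 1]
r_c = np.corrcoef(sands, fines)[0, 1]
r_a = np.corrcoef(macro, fines)[0, 1]
print(round(r_b * r_c, 2), round(r_a, 2))                 # the two values are similar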
Soil carbon; relations with texture and Fe-and Al-sesquioxides
For the studied samples, Ct ranged from 5 to 43 g C kg -1 . The cumulative C yield in dispersed fractions ranged from 78 to 101%, and averaged 90%. Rather low recoveries could be attributed, at least partly, to soluble C. Indeed, when it was determined, soluble C accounted for 6 to 9% of Ct (data not shown), which agreed with data reported by [START_REF] Christensen | Physical fractionation of soil and organic matter in primary particle size and density separates[END_REF]. On average, the dispersed fractions > 200, 20-200 and < 20 µm included respectively 37, 15 and 27 g C kg -1 fraction (C concentrations). This corresponded to 1.5, 0.8 and 1.9 times Ct (C enrichments), represented 1.6, 3.1 and 13.2 g C kg -1 soil (C amounts), and accounted for 8, 17 and 65% of Ct, respectively. Size distributions of organic constituents were thus dominated by fine OM (46 to 80% of Ct). Considering all 18 soil samples, Ct correlated positively with C amount in every dispersed fraction, especially with that of fraction < 20 µm. Carbon amounts in the dispersed fractions 20-200 and > 200 µm correlated closely one with another, but correlated less closely with that of fraction < 20 µm.

1 The effects of land use on many soil properties are fairly well established (cf. end of section 4.7) and the present paper did not aim at discussing them extensively. However, increasing topsoil clay content upon land use intensification has not been often reported. It might be due to erosion, which generally increases with land use intensification and leads to the outcropping of sub-superficial layers that are more clayey than superficial ones in many LAC tropical soils.
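The derived quantities used in this subsection are linked by simple relations, illustrated below with the values of sample no. 1 of Table 2: the C amount of a fraction is its C concentration weighted by the fraction mass, and its C enrichment is that concentration divided by Ct.

ct = 35.0                                                              # g C kg-1 soil
fraction_mass = {"> 200 um": 38, "20-200 um": 104, "< 20 um": 857}     # g kg-1 soil
concentration = {"> 200 um": 76.2, "20-200 um": 32.1, "< 20 um": 27.6} # g C kg-1 fraction

for f in fraction_mass:
    amount = concentration[f] * fraction_mass[f] / 1000      # g C kg-1 soil
    enrichment = concentration[f] / ct                       # times Ct
    share = 100 * amount / ct                                # % of Ct
    print(f, round(amount, 1), round(enrichment, 2), round(share), "% of Ct")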
Considering all 18 soil samples, texture related clearly with the characteristics of several C pools (Figure 2). Clay plus fine silts correlated positively with Ct and C amount in the dispersed fraction < 20 µm (Figures 1b and 1c), with C concentration and enrichment in the dispersed fraction > 200 µm, and negatively with C enrichment in the dispersed fraction < 20 µm. Soil content in Al CBD , and in Al OX to a lesser extent, correlated closely with Ct and C amount in the dispersed fraction < 20 µm (Figures 1b and 1c), and less closely, with C amounts in the coarser fractions. By contrast, Fe CBD and Fe OX did not relate clearly with Ct and fraction C amounts. Multiple linear correlations between C amount in a given fraction, on the one hand, and two or more variables regarding texture and/or sesquioxides, on the other hand, were not closer than corresponding simple linear correlations or were not significant.
Possible determinants of soil aggregation and organic matter
Considering all 18 soil samples, soil content in stable macroaggregates correlated positively with Ct and with C amount in the dispersed fraction < 20 µm (Figure 3), but each of them correlated more closely with Al CBD (Figure 1). To a lesser extent, macroaggregate content correlated with C amount in the dispersed fractions > 200 and 20-200 µm, but each of them correlated more closely with Al OX . Multiple linear correlations between macroaggregate content, on the one hand, and Al-extracts and fraction C amounts, on the other hand, were not closer than corresponding simple correlations. Relationships between meso-or microaggregate fractions and Ct or fraction C amounts were not significant.
Data thus suggested that Al CBD , and Al OX to a lesser extent, were major determinants of soil aggregation and OM content in the studied soils. However, when considering each location separately, Al CBD and Al OX varied generally within narrow ranges, and the abovementioned correlations were not significant in general. By contrast, the effect of land use became important: land use intensification (i.e. cultivation, tillage, reduced surface cover) generally caused a decrease in the proportion of macroaggregates, in C concentration and amount in every dispersed fraction, thus in Ct. These effects have already been described for several of the studied sites [START_REF] Kouakoua | Relations entre stabilité de l'agrégation et matière organique totale et soluble à l'eau chaude dans des sols ferrallitiques argileux (Congo, Brésil)[END_REF][START_REF] Barthès | Field-scale run-off and erosion in relation to topsoil aggregate stability in three tropical regions (Benin, Cameroon, Mexico)[END_REF], and will not be detailed here.
Principal component analysis (PCA)
Three main PCAs were carried out, with active variables that contributed to factor definition, and supplementary variables that did not. Considering supplementary variables helped limiting the over-influence of linked variables (e.g. C amounts of fractions and their sum Ct). Clay plus fine silts, coarse silts plus fine sands, Al CBD and Fe CBD were always considered active variables. For the first PCA, C amounts in dispersed fractions were also considered active, while Ct, aggregate fractions, Al OX and Fe OX were considered supplementary. For the second PCA, fraction C concentrations were considered active, and Ct, fraction C amounts and aggregate fractions supplementary. Finally, for the third PCA, macro-and mesoaggregate fractions were considered active, and Ct and fraction C amounts supplementary.
On the whole, the three PCAs showed similar structure in the relationships between variables (the first PCA is presented in Figure 4). The first factor included 58-62% of the total variability, the second factor 20-22%, and the three first factors together 90-92%.
Macroaggregate content and Al CBD were always very closely associated, and were associated with C amount < 20 µm and Ct, confirming their strong interrelations. The coordinates of these four variables on the first factor ranged from -0.77 to -0.96. Clay plus fine silts were associated with this group of variables, but to a lesser extent. Amounts of C in the dispersed fractions > 200 and 20-200 µm were associated one with another, as well as the corresponding C concentrations. When Fe OX and Al OX were included in the analysis, they were associated with Fe CBD and Al CBD , respectively.
Discussion
Size distribution of stable aggregates
Stable macroaggregates (> 200 µm) dominated aggregate distributions in general. A high content in stable macroaggregates has often been reported for clayey LAC tropical soils [START_REF] Oades | Aggregate hierarchy in soils[END_REF][START_REF] Albrecht | Déterminants organiques et biologiques de l'agrégation : implications pour la recapitalisation de la fertilité physique des sols tropicaux[END_REF][START_REF] Six | Soil organic matter, biota and aggregation in temperate and tropical soils -Effect of no-tillage[END_REF] but less frequently for coarse-textured ones [START_REF] Dalal | Aggregation and organic matter storage in sub-humid and semi-arid soils[END_REF]Spaccini et al., 2001), where it has been considered that macroaggregation could be less stable [START_REF] Albrecht | Déterminants organiques et biologiques de l'agrégation : implications pour la recapitalisation de la fertilité physique des sols tropicaux[END_REF]. Thus stable aggregation was well developed in the studied soils, and this probably stands for most LAC tropical soils. Within aggregate fractions (i.e. on a coarse sand-free basis), the very close correlation between macro-and mesoaggregates suggested an aggregate hierarchy: macroaggregate disruption released mesoaggregates mainly, and conversely, macroaggregates resulted mainly from bounding of mesoaggregates. Aggregate hierarchy has not often been reported in LAC soils. [START_REF] Oades | Aggregate hierarchy in soils[END_REF] reported such a hierarchy for an Alfisol and a Mollisol (dominated by 2:1 clay mineralogy), but not for a clayey Oxisol. Six et al. (2000a) also found a higher degree of aggregate hierarchy in 2:1 than in 1:1 clay-dominated soils.
However, [START_REF] Feller | Aggregation and organic matter storage in kaolinitic and smectitic tropical soils[END_REF] reported a tendency to an aggregate hierarchy for an Ultisol, an Inceptisol and an Oxisol.
Size distribution of organic constituents
In the present study, the dispersed fractions > 200, 20-200 and < 20 µm included 8, 17, and 65% of Ct in average, respectively. This was consistent with data from tropical soils reviewed by [START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF], which indicated that proportions of Ct in the dispersed fractions > 50, 2-50 and < 2 µm generally ranged from 10 to 30%, 20 to 40%, and 35 to 70%, respectively. The present study also showed that Ct correlated positively with C amount in every dispersed fraction (in g C kg -1 soil), with fine fraction C especially. Thus changes in Ct resulted from changes in every size class, but depended more closely on those in the fine fraction. Studying 21 grassland topsoils from Saskatchewan to Texas, [START_REF] Amelung | Carbon, nitrogen, and sulfur pools in particle-size fractions as influenced by climate[END_REF] similarly reported positive correlations between Ct and C amount in every dispersed fraction, in the fine one especially, and found that the dispersed fraction < 20 µm included 58 to 99% of Ct. For a clayey Oxisol under bush savanna and pasture in Brazil, [START_REF] Roscoe | Soil organic matter dynamics in density and particle size fractions as revealed by the 13 C/ 12 C isotopic ratio in a Cerrado's oxisol[END_REF] also found positive correlations between Ct and fraction C amounts. Moreover, the present study showed that C amounts in the dispersed fractions 20-200 and > 200 µm correlated closely one with another but correlated poorly with C amount in the fine fraction. Similar relationships were found between fraction C concentrations (in g C kg -1 fraction). This suggested that organic compartments > 200 and 20-200 µm had similar dynamics, which differed from those of organic fraction < 20 µm. Indeed, several works supported the evidence that organic constituents > 20 µm are plant debris mainly, the coarser ones being less decomposed and having a more recognizable morphology than the finer ones, whereas organic constituents < 20 µm are more humified and have a slower turnover [START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF][START_REF] Baldock | Role of the soil matrix and minerals in protecting natural organic materials against biological attack[END_REF][START_REF] Guggenberger | Effect of mineral colloids on biogeochemical cycling of C, N, P, and S in soil[END_REF].
Influence of texture on soil aggregation
Macro-and mesoaggregate fractions correlated positively with clay plus fine silts.
However, this seemed to result largely from negative correlations between all three variables and coarse sands: an increase in coarse sands caused simultaneous decreases in clay plus fine silts and aggregate fractions, and there was no evidence of direct influence of clay plus fine silts on aggregate fractions, macroaggregates especially. This contrasted with the rather general agreement that clay content positively affects aggregate stability, as reported in reviews by Le [START_REF] Bissonnais | Soil characteristics and aggregate stability[END_REF] and [START_REF] Amézketa | Soil aggregate stability: a review[END_REF]. However, these authors mentioned contradictory results, especially when considering wide ranges of soils, because significant correlations between clay and aggregation have often been established within narrow ranges of soils. [START_REF] Dalal | Aggregation and organic matter storage in sub-humid and semi-arid soils[END_REF] did not consider that clays were important agents of aggregate stabilization in Oxisols and Ultisols, and [START_REF] Six | Soil organic matter, biota and aggregation in temperate and tropical soils -Effect of no-tillage[END_REF] did not find any correlation between clay content and aggregate stability over a wide range of soils from tropical and temperate areas.
Influence of texture on soil organic matter
The present study confirmed the positive relation between Ct and clay plus fine silts, which was reported for LAC tropical soils by [START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF]. When plotting Ct against clay plus fine silts (in %), slope coefficient was very close to that calculated by these authors (0.036 vs. 0.037). Indeed, soil texture is considered a key variable affecting Ct, as a result of electrostatic binding between negatively charged clay surfaces and organic colloids via cation bridges [START_REF] Ingram | Managing carbon sequestration in soils: concepts and terminology[END_REF][START_REF] Paustian | Environmental and management drivers of soil organic carbon stock changes[END_REF]. Due to sorption of fine organic constituents to fine mineral particles, C amount in the fine fraction increases with the proportion of fine primary particles, as confirmed by [START_REF] Feller | Aggregation and organic matter storage in kaolinitic and smectitic tropical soils[END_REF] over a range of tropical soils. As fine fraction C includes the highest proportion of Ct, Ct also increases with clay plus fine silts. Thus clayey soils contain more C, but the present study showed that C enrichment affected their coarse fraction mainly, whereas in coarse-textured soils C enrichment rather affected the fine fraction. [START_REF] Christensen | Physical fractionation of soil and organic matter in primary particle size and density separates[END_REF] also reported in his review that C enrichment in the fine fraction was inversely related to its proportion in the whole soil. [START_REF] Dalal | Aggregation and organic matter storage in sub-humid and semi-arid soils[END_REF] similarly noticed preferential C enrichment of the clay fraction in coarse-textured soils. Studying grassland topsoils from Canada to Texas, Amelung et al.
(1998) observed that C concentrations in the dispersed fractions 250-2000 and 20-250 µm were inversely related to their respective proportions in the whole soil, and suggested that it was largely explained by a dilution effect. The influence of texture on C concentration in coarse fractions could also be attributed to encrustation of plant debris by fine mineral particles, which are attached to mucilages produced by microorganisms that feed on debris (Tisdall and Oades, 1982;[START_REF] Angers | Dynamics of soil aggregation and C sequestration[END_REF]. This helped explain the positive correlation between clay plus fine silts and C concentration and enrichment in the dispersed fraction > 200 µm. Additionally, the present study showed that the relationship between the mass of the fraction and the C amount it contains was less clear for the dispersed fractions 20-200 and > 200 µm than for that < 20 µm. This suggested that soil OM dynamics were more affected by fine than by coarse primary particles. This was confirmed by [START_REF] Amelung | Carbon, nitrogen, and sulfur pools in particle-size fractions as influenced by climate[END_REF] for a range of North American grasslands, where C amount in the dispersed fraction < 20 µm correlated positively with clay plus fine silts, whereas C amounts in coarser fractions did not relate to texture.
Influence of sesquioxides on soil aggregation
Soil content in stable macroaggregates correlated closely with Al CBD, Al OX , and to a lesser extent, with Fe CBD and Fe OX . Many authors mentioned the importance of sesquioxides in aggregate stability, in LAC soils especially [START_REF] Kemper | Aggregate stability of soils from western United States and Canada[END_REF][START_REF] Oades | Aggregate hierarchy in soils[END_REF][START_REF] Bissonnais | Soil characteristics and aggregate stability[END_REF]Six et al., 2000b). The role of these compounds has been attributed to their flocculation capacity, to their binding effect of clay particles to organic molecules, and to their possible precipitation as gels on clay surfaces, but this was reported to affect mesoaggregation mainly [START_REF] Amézketa | Soil aggregate stability: a review[END_REF]. Electrostatic bindings between clays and oxides has also been mentioned, but due to their limited range of action, it has been hypothesized that these interactions result in macroaggregates that are not stable enough to resist fast wetting [START_REF] Six | Soil organic matter, biota and aggregation in temperate and tropical soils -Effect of no-tillage[END_REF].
While the predominant role of free Fe oxides in macroaggregate stability has been reported by several authors, for Oxisols especially [START_REF] Kemper | Aggregate stability of soils from western United States and Canada[END_REF][START_REF] Dalal | Aggregation and organic matter storage in sub-humid and semi-arid soils[END_REF], the importance of Al-containing crystalline sesquioxides, as observed in the present study, has not been strongly supported by the literature. The effectiveness of gibbsite in maintaining soil aggregation has been reported to be higher than that of Fe-oxides [START_REF] El-Swaify | Changes in the physical properties of soil clays due to precipitated aluminum and iron hydroxides: swelling and aggregate stability after drying[END_REF]. However, correlation between Al CBD and macroaggregation could not be attributed to gibbsite because: (i) gibbsite is little attacked by CBD thus releases little Al CBD (Jackon et al., 1986), and (ii) Al CBD was maximum in Congolese samples, which did not include gibbsite. Aluminium-substituted crystalline hematite and goethite are dissolved by CBD [START_REF] Jackson | Oxides, hydroxides, and aluminosilicates[END_REF], and were probably the main sources of non-oxalate extractable Al CBD in the studied samples. As macroaggregate content correlated moderately with Fe CBD , the aggregating role of crystalline Fe-sesquioxides seemed to result principally from the degree of Al-for-Fe substitution in their structure. Indeed, several authors reported that the specific surface area of crystalline Fe-sesquioxides increases with Al-substitution, mainly due to decreasing crystallite size [START_REF] Schwertmann | Iron oxides[END_REF][START_REF] Ruan | Dehydroxylation of aluminous goethite: unit cell dimensions, crystal size and surface area[END_REF]. As aggregation relates to specific surface area [START_REF] Goldberg | Factors affecting clay dispersion and aggregate stability of arid-zone soils[END_REF][START_REF] Bissonnais | Soil characteristics and aggregate stability[END_REF][START_REF] Amézketa | Soil aggregate stability: a review[END_REF], the increase in stable macroaggregates with Al CBD might thus result from the increase in specific surface area with Al-substitution on crystalline Fe-sesquioxides.
Influence of sesquioxides on soil organic matter
The present study also showed close relationships between organic constituents and Al-containing sesquioxides, coarse fractions (> 20 µm) relating with non-crystalline compounds preferentially (Al OX ), and fine fractions (< 20 µm), much more closely, with both crystalline and non-crystalline ones (Al CBD ). The affinity of OM to Al-and Fe-sesquioxides, either crystalline or non-crystalline, has been reported in the literature, regarding fine organic fractions as well as incompletely humified residues, which can be stabilized by sorption, by entrapment, or by complexation [START_REF] Guggenberger | Effect of mineral colloids on biogeochemical cycling of C, N, P, and S in soil[END_REF].
The role of amorphous Al and Fe in soil OM protection against biodegradation, through the formation of organo-mineral complexes, has been highlighted by numerous authors [START_REF] Boudot | Carbon mineralization in Andosols and aluminium-rich highland soils[END_REF][START_REF] Baldock | Role of the soil matrix and minerals in protecting natural organic materials against biological attack[END_REF]2000;[START_REF] Guggenberger | Effect of mineral colloids on biogeochemical cycling of C, N, P, and S in soil[END_REF]. [START_REF] Baldock | Role of the soil matrix and minerals in protecting natural organic materials against biological attack[END_REF] reported that Al-complexes resulted in a greater stabilization of OM than Fe-complexes, as observed in the present study: C amount in every dispersed fraction correlated more closely with Al OX than with Fe OX . Moreover, it is likely that the main sources of non-oxalate extractable Al CBD in the studied samples were Al-substituted crystalline Fe-sesquioxides, as mentioned in the previous section. As also mentioned, the increase in Al-for-Fe substitution causes an increase in specific surface area, and this in turn causes an increase in OM stabilization [START_REF] Baldock | Role of the soil matrix and minerals in protecting natural organic materials against biological attack[END_REF]2000;[START_REF] Kaiser | Mineral surfaces and soil organic matter[END_REF]. This helped to explain the correlations between Al CBD and Ct or fraction C amounts. Considering that micropores are preferential sites for OM sorption [START_REF] Kaiser | Mineral surfaces and soil organic matter[END_REF], the particular affinity of organic particles < 20 µm to non-oxalate extractable Al CBD might be due to the small crystallite size of Al-substituted Fe-sesquioxides, the micropores of which are accessible to small particles only.
Studying the mineral association of OM in the surface horizons of an Alfisol and an Oxisol, [START_REF] Shang | Organic matter stabilization in two semiarid tropical soils: size, density, and magnetic separations[END_REF] also observed the enrichment of the fine fractions in Al CBD , while Al OX was particularly concentrated in the light fractions (i.e. including coarse organic particles mainly).
Determinants of soil aggregation and organic matter content
Many authors observed positive correlations between stable macroaggregation and OM, which were attributed to the aggregating role of OM and to physical protection of OM within aggregates [START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF][START_REF] Angers | Dynamics of soil aggregation and C sequestration[END_REF][START_REF] Six | Soil organic matter, biota and aggregation in temperate and tropical soils -Effect of no-tillage[END_REF]. Several works reported that some organic fractions, such as polysaccharides or particulate OM, were particularly involved in these relationships (Tisdall and Oades, 1982;[START_REF] Kouakoua | Relations entre stabilité de l'agrégation et matière organique totale et soluble à l'eau chaude dans des sols ferrallitiques argileux (Congo, Brésil)[END_REF][START_REF] Baldock | Interactions of organic materials and microorganisms with minerals in the stabilization of soil structure[END_REF]. This was not supported by the present results, macroaggregation being more closely correlated with Ct than with any organic size fraction.
Moreover, considering all 18 soil samples, stable macroaggregation was more closely linked with Al CBD than with other parameters, organic ones especially. Tisdall and Oades (1982) already underlined that organic materials were not necessarily the main aggregating agents, especially in soils that included more than 10% sesquioxides. [START_REF] Oades | Aggregate hierarchy in soils[END_REF] also suggested that oxides were dominant stabilizing agents in the Oxisol they studied. [START_REF] Six | Soil organic matter, biota and aggregation in temperate and tropical soils -Effect of no-tillage[END_REF] similarly hypothesized that stable macroaggregation was less dependent on OM in highly weathered tropical soils than in temperate ones, because the formers had a greater potential to form physicochemical macroaggregates, in the presence of oxides especially. Moreover, C amount in any dispersed fraction was more closely linked with Al CBD or Al OX than with macroaggregation or texture. This suggested that the stability of OM was more dependent on its association with Al-containing sesquioxides than on physical protection within stable aggregates or sorption to clay particles. It could thus be assumed that Al-containing sesquioxides were the main determinants of stable macroaggregation and OM content in the LAC tropical soils under study. However, when each location was considered separately, the influence of land use increased markedly. The negative consequences of land use intensification (i.e. cultivation, tillage, reduced surface cover) on many soil properties are fairly well established. As far as tropical soils are concerned, they have been reported for topsoil aggregation [START_REF] Resck | Impact of conversion of Brazilian Cerrados to cropland and pastureland on soil carbon pool and dynamics[END_REF]Spaccini et al., 2001) and for total and fraction C [START_REF] Feller | Physical control of soil organic matter dynamics in the tropics[END_REF]Solomon et al., 2002;Vågen et al., 2005). This underlines that at the local scale, due to rather homogeneous mineralogical properties, land use remains an important determinant of soil OM content and aggregation.
Conclusion
Over the range of LAC tropical soils that were studied, aggregate distributions were dominated by stable macroaggregates (> 200 µm), and C distributions in primary particles by fine fraction C (< 20 µm). Macroaggregation was not clearly affected by texture, whereas total C and fine fraction C increased with clay plus fine silts. The results also suggested that Al-containing sesquioxides were major determinants of macroaggregate and OM stability. Indeed, macroaggregate content correlated positively with Ct and with C in every fraction, with fine fraction C especially, but it correlated more closely with CBD-extractable Al. This suggested that Al-containing sesquioxides had a more important aggregating role than OM in LAC tropical soils. Moreover, C amounts in fine (< 20 µm) and coarse (> 20 µm) fractions related more closely with CBD- and oxalate-extractable Al, respectively, than with stable macroaggregate content or texture. Thus the stability of soil OM was more influenced by its association with Al-containing sesquioxides than by physical protection within stable macroaggregates or sorption to clay particles. It could also be assumed that Al-containing sesquioxides played a dominant role in the relations between OM and aggregation. However, considering each location separately, soil content in sesquioxides varied within narrow ranges, and as it cannot be managed easily, the importance of land use on OM and aggregation remained fundamental.

References
Solomon, D., Fritzsche, F., Lehmann, J., Tekalign, M. and Zech, W., 2002. Soil organic matter dynamics in the subhumid agroecosystems of the Ethiopian highlands: evidence from natural 13 C abundance and particle-size fractionation. Soil Sci. Soc. Am. J., 66: 969-978.
Spaccini, R., Zena, A., Igwe, C.A., Mbagwu, J.S.C. and Piccolo, A., 2001. Carbohydrates in water-stable aggregates and particle size fractions of forested and cultivated soils in two contrasting tropical ecosystems. Biogeochemistry, 53: 1-22.
Tisdall, J.M. and Oades, J.M., 1982. Organic matter and water-stable aggregation in soils. J. Soil Sci., 33: 141-163.
Vågen, T.G., Lal, R. and Singh, B.R., 2005. Soil carbon sequestration in sub-Saharan Africa: a review. Land Degrad. Dev., 16: 53-71.
Fraction: 0.653**, 0.759***, 0.611**, 0.760***, 0.632** (* p < 0.05; ** p < 0.01; *** p < 0.001).
Figure 1. Relationships between clay plus fine silts (< 20 µm), citrate-bicarbonate-dithionite-extractable Al (Al CBD ), oxalate-extractable Al (Al OX ), on the one hand, and: 1a, soil content in water-stable macroaggregates (> 200 µm); 1b, total soil C content (Ct); and 1c, C amount in the dispersed fraction < 20 µm (Cgo, Ben, Goi, SPau and Par represent samples from Congo, Benin, Goiás, São Paulo and Paraná, respectively).
Figure 2.
Figure 3.
Figure 4. Principal component analysis (PCA) with seven active variables, in bold, and six supplementary variables, in italics (C>200, C20-200 and C<20 represent C amount in the dispersed fractions > 200, 20-200 and < 20 µm, respectively).
Table 1. Presentation of the studied sites, soils, and plots.

Country (state), city | Latitude, longitude | Mean annual rainfall and temperature | Soil type a (and parent material) | Texture | Clay mineralogy (K, H, Go, Gi) | Land use (plot No)
Congo-Brazzaville, Loudima | 04°00'S, 13°30'E | 1100 mm, 25°C | Typic Haplorthox / Orthic Ferralsol (schisto-calcareous sediment) | Clayey | +++, 0, ++, 0 | Savanna (1); Cassava (2); Natural fallow (3)
Congo-Brazzaville, Mantsoumba | 04°10'S, 13°30'E | 1400 mm, 25°C | Typic Haplorthox / Orthic Ferralsol (schisto-calcareous sediment) | Clayey | +++, 0, ++, 0 | Savanna (4); Cassava (5)
Benin, Cotonou | 06°24'N, 02°20'E | 1200 mm, 27°C | Typic Tropudult / Dystric Nitisol (sandy clay sediment) | Sandy clay loam | +++, ++, ++, 0 | Maize, no input (6); Fertilized maize (7); Maize-legume intercrop (8); Rotation of maize & maize-legume intercrop (9)
Brazil (Goiás), Goiânia | 16°S, 49°30'W | 1500 mm, 23°C | Typic Haplustox / Orthic Ferralsol (detritic-lateritic sediment) | Clayey | +++, ++, ++, ++ | Bush savanna (10); Maize/bean rotation (11); Bracharia pasture (12)
Brazil (São Paulo), Pradópolis | 21°22'S, 48°03'W | 1600 mm, 23°C | Typic Hapludox / Orthic Ferralsol (basalt) | Clayey | +++, ++, 0, ++ | Sugarcane, residue burning (13)
Brazil (São Paulo), Matão | 21°36'S, 48°22'W | 1500 mm, 23°C | Typic Hapludult / Orthic Acrisol (sandstone) | Sandy clay loam | +++, ++, ++, 0 | Sugarcane, residue burning (14)
Brazil (São Paulo), Serrana | 21°12'S, 47°35'W | 1500 mm, 23°C | Typic Quartzipsamment / Ferralic Arenosol (sandstone) | Sandy | +++, +, +, + | Sugarcane, residue burning (15)
Brazil (Paraná), Londrina | 23°23'S, 51°11'W | 1600 mm, 21°C | Typic Haplorthox / Rhodic Ferralsol (basalt) | Clayey | +++, ++, ++, ++ | Forest (16); Oat (17); Citrus orchard with legume intercropping (18)

a Soil Survey Staff (1999) and FAO-ISRIC-ISSS (1998).
K: kaolinite; H: hematite; Go: goethite; Gi: gibbsite. +++: dominant; ++: present; +: traces; 0: absent.
Table 2. Size distributions of aggregate and dispersed fractions, C contents of whole soil and dispersed fractions, and whole soil pH (in water) and contents in citrate-bicarbonate-dithionite- and oxalate-extractable iron and aluminium (for No, see Table 1).
Columns, from left to right: No | mass of dispersed fractions > 200, 20-200 and < 20 µm (g kg -1 soil) | mass of aggregate fractions a > 200, 20-200 and < 20 µm (g kg -1 soil) | total soil C content (g C kg -1 soil) | C concentration of dispersed fractions > 200, 20-200 and < 20 µm (g C kg -1 fraction) | C amount in dispersed fractions > 200, 20-200 and < 20 µm (g C kg -1 soil) | pH (H 2 O) | Fe CBD, Al CBD, Fe OX, Al OX (g kg -1 soil).
1 38 104 857 669 238 55 35.0 76.2 32.1 27.6 2.9 3.3 23.7 5.0 30.4 6.4 1.3 2.1
2 40 188 775 571 354 48 21.8 34.7 11.8 21.5 1.4 2.2 16.7 6.0 35.6 6.8 1.4 2.2
3 31 201 760 621 309 41 36.4 69.0 27.8 36.6 2.1 5.6 27.8 5.5 32.0 7.6 1.5 3.3
4 93 260 632 746 154 7 42.5 72.0 39.5 35.5 6.7 10.3 22.4 5.0 33.7 6.9 2.9 5.8
5 32 179 771 585 366 17 18.8 51.0 13.5 18.6 1.6 2.4 14.4 4.4 nd nd 1.2 1.9
6 524 202 226 164 207 18 5.1 1.4 4.5 11.1 0.7 0.9 2.9 5.1 9.0 0.7 0.5 0.4
7 625 216 145 210 132 13 6.5 1.3 6.9 22.8 0.8 1.5 3.3 5.2 7.7 0.7 0.4 0.4
8 611 202 175 238 97 11 11.4 2.1 11.4 30.3 1.3 2.3 5.3 5.0 8.6 0.7 0.4 0.5
9 590 208 192 246 148 17 8.5 1.5 7.7 21.4 0.9 1.6 4.1 5.2 8.4 1.0 0.4 0.4
10 81 449 469 615 313 2 22.6 21.3 8.0 34.0 1.7 3.6 15.9 5.5 35.0 4.8 2.2 nd
11 47 383 571 541 382 10 21.4 25.3 3.9 29.9 1.2 1.5 17.1 5.2 38.9 4.7 1.9 4.9
12 67 456 477 623 276 8 22.0 22.8 3.7 34.9 1.5 1.7 16.6 5.9 29.9 5.4 1.9 3.9
13 78 325 597 661 246 20 20.7 11.1 16.9 21.1 0.9 5.5 12.6 nd 98.8 6.0 1.7 3.1
14 503 300 192 329 143 12 7.4 1.4 4.8 27.2 0.7 1.5 5.2 nd 15.1 2.5 0.5 0.7
15 411 505 83 298 320 15 7.0 1.6 4.4 49.2 0.7 2.2 4.1 nd 11.2 1.5 1.0 0.6
16 47 146 766 752 189 13 30.8 118.1 35.3 22.6 2.6 5.2 17.3 6.8 87.1 5.1 5.3 5.4
17 39 101 868 551 388 23 17.8 71.9 21.7 14.9 0.5 2.2 12.9 5.7 70.6 4.5 5.3 2.7
18 29 122 834 639 309 30 23.0 87.4 18.9 18.9 1.1 2.3 15.7 5.9 72.0 5.1 5.2 2.8
nd: not determined.
a aggregate fraction > 200 µm did not include coarse sands; aggregate fractions 20-200 and < 20 µm included primary particles of the same size.
Table 3. Main correlations between the studied variables (C concentration in g kg -1 fraction, C amount and other variables in g kg -1 soil < 2 mm).
Columns: dispersed fraction mass > 200, 20-200 and < 20 µm | macroaggregates | Ct | fraction C concentration > 200, 20-200 and < 20 µm | fraction C amount > 200, 20-200 and < 20 µm | Fe CBD | Al CBD | Fe OX.
Acknowledgements
The authors thank Vincent Eschenbrenner for his help regarding the clay mineralogy of the Brazilian samples, and two anonymous referees for their helpful comments.
04109141 | en | [ "shs" ] | 2024/03/04 16:41:24 | 2022 | https://hal.science/hal-04109141/file/Undersanding%20human%20life%20Courgeau%20whole.pdf
This book addresses the challenge of understanding human life. We compare our life experience with the attempts to grasp it by astrologers, eugenicists, psychologists, social scientists, and philosophers. How have these various disciplines sought to give substance to an experience at once so intimate and so universal?
The main opposition in the list above lies between understanding and misunderstanding. For example, the astrologers' and eugenicists' approach, fully accepted in their day, is now largely viewed as a form of misunderstanding. To show why this is so, we examine their methodology. For practitioners of the other disciplines, their understanding may be limited by various methodological problems they encounter but are trying to overcome. We shall explore these issues as well.
("Economists had not predicted the onset of the crisis. Some predict that his marriage will not last.")
As we can see, the first definition is totally suited for examining the various forms of divination (mantic methods), while the second seems more appropriate to the examination of eugenics, which relies on conjectures that we shall show to be fallacies.
The second definition, however, is more general, for in this case the prediction will be able to rely on reasoning, which can lead to a far more accurate science than divination or an approach based on erroneous premises. That is how astronomy-initially indistinct from astrology in ancient times-came into its own as a full-fledged science thanks to the work of Galileo, Kepler, and Newton. The second definition also leads us to Part 2 of our book, which addresses the question: "What can one capture of a human life and how?" Prediction will provide us with an answer to the question of "how?"
In English, the Oxford Learner's Dictionary (9 th edition, 2015) tells us that the verb to predict appeared in the early seventeenth century with the same etymology. The dictionary, however, gives only one meaning:
To say, expect or suggest that a particular thing will happen in the future or will be the result of something.
Here, C 1 , C 2 , …, C k are statements describing the particular facts invoked; L 1 , L 2 , …, L r are the general laws; jointly, these statements will be said to form the explanans. The conclusion E is a statement describing the explanandum statement [. . .] Later on, he clearly states that he is talking about empirical generalizations, thus signaling his support for Hume's argument.
Conversely, the mechanistic approach concurs with the views of Francis Bacon (1620), who restored the deductive conception of induction (Franck, 2002, p. 290). We elaborate on Bacon's position throughout our book.
From the outset, in Chapter 5 of their work on Discovering complexity : The rejection of mechanisms, Bechtel and Richardson describe the oppositions to the mechanistic approach encountered in the history of science. These include the holistic position of vitalists in physiology (p. 95):
They thus affirm a version of holism according to which the properties of life are treated as properties of the whole that cannot be refined into the properties of the parts, even when relations are taken into account.
However, neither in that chapter nor in the rest of their book do they discuss systemic and holistic studies such as those by Maturana and Varela, despite the fact that Varela's Principles of biological autonomy was published in 1979. As we shall see, Bechtel did not recognize the importance of these studies until 2007.
The methods used by the mechanistic approach can be summarized by the term models, as Bechtel and Abrahamson (2005, p. 425) explicitly state:
Generically, one can refer to these internal and external representations as models of the mechanisms. A model of a mechanism describes or portrays what are taken to be its relevant component parts and operations, the organization of the parts and operations into a system, and the means by which operations are orchestrated so as to produce the phenomenon.
P Var
List of Figures and Map
Fig. 3.1 Change in search interest for "astronomy", "astrology", and "horoscope" in all countries since Jan. 1, 2004.
Fig. 3.2 Change in search interest for "astrology" and "astronomy" on YouTube in all countries since Jan. 1, 2008.
Map 3.1 Breakdown by country of search interest for astrology (grey) and astronomy (black).
Fig. 4.1 How the percentage of "eminent" members of the families of English judges decreases with the degree of kinship.
Fig. 5.1 Change in searches for "astrology", "cartomancy", and "chiromancy" in Switzerland since Jan. 1, 2004.
Fig. 8.1 Natural logarithm of instantaneous rates of migration estimated for men and women.
Natural logarithm of instantaneous rates of migration estimated for couples and from population register.

List of tables

Residential mobility analysis: effect of time since marriage, duration of residence (in years), and tenure status on the probability of moving by data set (parameter estimates with standard deviation in parentheses).
General introduction
1.1. Varieties of understanding or misunderstanding
The word "understanding" has many meanings (see, for example, [START_REF] Baumberger | What is understanding? An overview of recent debates in epistemology and philosophy of science[END_REF], as no consensus about its definition has emerged. For our purpose, it will be helpful to narrow our focus in order to better understand the aim of this work.
Let us begin by looking at the meaning of the term "to grasp," previously used to characterize the attempts made by different approaches to understand human life, and very often adopted by epistemologists (Hannon, 2021, p. 19). The definition of grasping comprises two main meanings, which are totally interconnected. The first is physical: to hold someone or something firmly. The second is figurative and applies more specifically to ideas: a person is said to grasp an idea if (s)he understands it fully, i.e., comprehends it perfectly. This work therefore examines the intimate comprehension that we can have of our own life and that of others. Is it possible that we can grasp the essence of this life even before it unfolds-as claimed by astrologers and, later, Galton's eugenics or the theory of inheritability? We shall also examine other approaches. For example, when analyzing memory, psychologists adopt another perspective and seek to share with us a more general view of the life of an individual, a group of individuals, or even an entire people. Scientists use methods approved by their community in order to gain a deeper understanding of life. Social scientists explore ways of going beyond the personal approach in order to provide a more complex picture of the social world in which we live. Psychologists focus on autobiographical memory, trying to see how people develop and use it. Conversely, philosophers will seek a more precise meaning of the nature of this comprehension, of the various approaches to it, of the notion of causality that can be attached to it, and so on.
Another important distinction is found in Wilhelm Dilthey's hermeneutic approach (1883) between comprehension (German: Verstehen) and explanation (German: Erklären). A detailed discussion of his work lies outside the scope of our book, but we refer the reader to the excellent study by Mesure (1990), Dilthey et la fondation des sciences historiques, on the establishment of historical science, which includes a detailed presentation of Dilthey's work and the critical reactions to it.
For our purposes here, suffice it to say that comprehension, initially viewed in the context of psychic life, was defined as literally reliving another person's lived experience. The definition was then substantially revised in the attempt to establish the sciences of the mind. As Mesure (1990, p. 231) clearly states: [. . .] comprehension indeed consists in taking lived experiences and building the whole that brings them together, and, from what was a mere sequence, achieving the emergence of a life in the proper sense of the word, i.e., a totality oriented toward an end that gives meaning to every one of its stages. In this sense as well, autobiography provides the model for history, for [. . .] its task will always be to overcome the heterogeneity of events or stages and bring to light-in their succession-the continuity of an unfolding, as if this were a life. 1 We can see here how comprehension is essential for autobiography and, at the same time, allows a reconstruction of history.
While used in the natural sciences, explanation is not excluded from the social sciences. First, the physical and biological constitution of man must legitimately be subjected to this form of reasoning. Second, as Dilthey recognized, there was the possibility of a descriptive and analytical psychology that did not set out to comprehend lived experiences.
All these ways of understanding life, however, come up against the elusiveness of life. The notion of elusiveness embraces all that is hard to understand-in all the meanings of the verb identified above. Clearly, we lack complete information on our own life. Entire facets of our existence elude us today, because we have either forgotten them or, on the contrary, deliberately erased them from memory. How will psychologists explain this forgetfulness to make it credible, and how will they deal with the possibility of recalling lost events? For scientists as well, this mechanism poses many problems. Social scientists search for the reasons that can drive a community to forget certain traumatic facts and the ways in which it manages to do so. Biologists grapple with the difficulty of defining life and grasping its emergence and development. Philosophers ask: how can one explain the significance of this act of forgetting events that are trivial or, on the contrary, traumatic? Moreover, the term "human life" can take on different meanings. The first encompasses all the events that mark our existence, events whose significance may partly or totally elude us. The second, more general meaning covers our biological life, connected to our environment and the other species living in it alongside us. The third meaning, which is more philosophical, concerns the spiritual life of human beings and their comprehension of it.

1. French text: … la compréhension consiste bien, en partant des expériences vécues, à construire l'ensemble qui les réunit et, de ce qui n'était qu'une simple succession, fait émerger proprement une vie, c'est-à-dire une totalité orientée vers une fin qui donne sa signification à chacune des étapes : en ce sens aussi l'autobiographie fournit le modèle de l'histoire, dans la mesure où, …, sa tâche consistera à surmonter l'hétérogénéité des événements et à faire paraître dans leur succession la continuité d'un déplacement, comme s'il s'agissait du cours d'une vie.
Alongside the verb "to understand," another term is key for defining the core argument of our book: the verb "to predict." If we can grasp something correctly, we should be able to predict it. But it is important to understand how this prediction will operate.
Let us begin by taking a closer look at the etymology and definition of the verb: praedicere in Latin is literally "to say beforehand," but its meaning can be multiple and varies across different periods and languages.
In French, the Dictionnaire de l'Académie française (2019) tells us that in the twelfth century the verb prédire signified "to issue an order," and that it did not acquire its current meaning until the fifteenth century. The word now has two definitions:

1. To reveal what will happen, under the effect of a divine inspiration or through an alleged divination, by resorting to magical practices or occult procedures. Les prophètes ont prédit la venue de Jésus-Christ. L'oracle de Delphes prédit à Œdipe qu'il tuerait son père puis épouserait sa mère. Les voyantes, les diseuses de bonne aventure font métier de prédire l'avenir.

("The prophets predicted the coming of Jesus Christ. The oracle of Delphi predicted to Oedipus that he would slay his father and marry his mother. The business of psychics and soothsayers is to predict the future.")

2. To announce what must happen on the basis of reasoning or conjecture. Les économistes n'avaient pas prédit l'arrivée de la crise. D'aucuns prédisent que son mariage ne durera pas.

("Economists had not predicted the onset of the crisis. Some predict that his marriage will not last.")

As we can see, the first definition is totally suited for examining the various forms of divination (mantic methods), while the second seems more appropriate to the examination of eugenics, which relies on conjectures that we shall show to be fallacies.

The second definition, however, is more general, for in this case the prediction will be able to rely on reasoning, which can lead to a far more accurate science than divination or an approach based on erroneous premises. That is how astronomy-initially indistinct from astrology in ancient times-came into its own as a full-fledged science thanks to the work of Galileo, Kepler, and Newton. The second definition also leads us to Part 2 of our book, which addresses the question: "What can one capture of a human life and how?" Prediction will provide us with an answer to the question of "how?"

In English, the Oxford Learner's Dictionary (9th edition, 2015) tells us that the verb to predict appeared in the early seventeenth century with the same etymology. The dictionary, however, gives only one meaning:

To say, expect or suggest that a particular thing will happen in the future or will be the result of something.

It provides various examples of the verb's use, but without clearly distinguishing between them: the factors that predict outcome in acute illness in the very old require further explorations; the good genes hypothesis predicts that females will prefer to mate with the healthiest males; and so on. Curiously, these examples do not specify how one can predict the future, and at no point do they envisage divination, which lost much of its impact in the twentieth century. Several other English-language dictionaries searched online (e.g. Cambridge and Collins) offer definitions very similar to Oxford's and just as vague.
The fuller versions of these dictionaries give us more details on the different meanings of the verb. The Oxford English Dictionary (OED) (2009), for example, offers four distinct definitions, of which only three are of interest to us here:
1. to say beforehand, foretell, give notice of, advise, charge.
2. a. to foretell, prophesy, announce beforehand (an event, etc.). b. to have as a deducible or inferable consequence; to imply.
3. to utter prediction; to prophesy.
4. to direct fire at with the aid of a predictor.
We can disregard the fourth meaning, as it is clearly not relevant to our purpose. Definitions 2a and 3 correspond to the mantic methods examined in Chapter 2, most notably astrology, discussed in greater detail in Chapter 3. Definition 1 fits eugenics, "which will say beforehand that it is possible to predict the future of a lineage." Definition 2b covers the issues analyzed in Part 2 of our book.
In sum, while the Oxford English Dictionary definitions overlap some of those given by the Académie Française, they do not enable us to draw clear distinctions. We shall give precedence here to the two Académie Française definitions.
Having defined the subject of our work, we must now see which methods will make it possible to objectivate life stories, and how and why they were developed. Next, and most important, we must see if these methods allow a truly scientific approach for analyzing life stories.
A multi-perspective methodological view
Life stories are by nature multidisciplinary. With their varying degrees of completeness, they serve as the basic tools in all social sciences and also in hermeneutics. We must therefore assess the different methods and approaches for studying them. Some methods regard biographies as partly predetermined by external factors.
One example is astrology-not only the sort practiced by ancient civilizations, but also the kind practiced today by many pseudo-savants. Its adherents claim that its specific methods are wholly logical and can even be axiomatized (Vetter, "Astrology's paradigm: its main axioms and implications"), but we need to examine their foundations and, above all, their scientific validity. It is also important to look at the concept of the scientificity of divination in Roman antiquity (Cicero, 44 B.C.E.), and to see which social groups subscribe to astrology today (Bauer, "Belief in astrology: a social-psychological analysis"). Lastly, we turn to astronomy, which was born at the same time as astrology but took a different path in the Renaissance, in order to become a scientific endeavor that led to the discoveries of Johannes Kepler and Isaac Newton.
Another, more recent approach regarded human life as predetermined by genes: eugenics. The methods used by eugenics, which totally negate individual freedom, were introduced by men whose scientific status was undisputed in their time, such as Francis Galton and Raphael Weldon. These "Ancestrians," as they were then known, were the complete opposites of the "Mendelians" (Mendel, Bateson, and others), most notably for the methods each side used to justify its findings. Fisher tried to reconcile the two camps in 1918, but-here as well-we need to examine whether the axioms he defined for his demonstration were verified.
The end of World War II spelled the end of the preeminent status of eugenics, which tried to dominate the social sciences not only under the Nazi and fascist regimes of the first half of the twentieth century, but also in many countries around the globe-for the stated purpose of improving humanity. However, the notion of human predestination continued to inspire political regimes of the "developed" world under the name of "hereditarianism." For example, the introduction of behavior genetics was based on Fisher's 1918 axiomatics, and sought to predict individual behaviors entailed by the presence of certain genes. We must assess such claims with caution and see whether human behaviors obey such rules.
Apart from these attempts to treat life stories as if they were predetermined, it is important to recall how the stories were generated by humanity. They have existed since earliest times, and they were initially produced as imaginary narratives before becoming more realistic. The methods used to craft them have varied across time, as we shall discuss.
The earliest known life story is that of Gilgamesh, the fifth ruler of the city of Uruk in Mesopotamia, who reigned nearly 2,500 years before our era. The text was discovered on Assyrian tablets in the mid-nineteenth century. It offers a fascinating and highly revealing insight into Mesopotamian culture, totally different from ours yet emotionally so close, in particular in its attitude to death. In Greek antiquity, Plato and Aristotle addressed the issue of Poetics, i.e., epic, tragedy, and comedy, which were, at the time, different ways of narrating one or more human lives. For Plato, art could only be the opposite of philosophy. In contrast, Aristotle saw art as an activity that helps us to understand the human mind.
In more recent times, different methodological approaches were developed for dealing with life stories. The first consisted of population science, founded by Graunt in 1662. It began to analyze several important phenomena of human lives: first deaths, then births, union formations, mobility, and so on. This approach allowed an "explanatory" vision of human behavior. For this purpose, however, the persons studied had to be anonymized. Population science can never predict the behavior of a given individual; it can only study a population. This did not prevent it from making its analysis ever more detailed by introducing a growing number of characteristics of individuals and an ever more complex time frame. Its progress was marked by paradigms such as cross-sectional analysis, longitudinal analysis, biographical analysis, and multilevel analysis. Each of these various methods successively took population science to a new level of complexity, without erasing most of the results obtained with previous methods.
The second, more modern approach was philosophical hermeneutics, introduced in the early twentieth century. Unlike population science, it gave precedence to the study of a specific biography-applicable to one or more individuals-in all its dimensions. It allowed the analysis of imaginary as well as real lives. This approach thus endorsed a "comprehensive" vision of the different forms of literature that present life stories in varying degrees of completeness, such as epic, tragedy, and the novel. It is, of course, possible and even necessary to analyze the actual lives of men and women using this approach in order to fully understand their complexity. From real-life narratives, the analysis aims to construct the complex that brings them together: from a simple succession of facts, it must produce a totality directed toward a purpose. In assessing the method's relevance, we shall see whether the results obtained from a very small number of cases studied can apply in a general way.
The third approach seeks to understand how our memory works. It has been followed by successive schools of psychology (chiefly structuralism, functionalism, behaviorism, cognitive psychology, and evolutionary psychology), neuroscience, psychoanalysis and other disciplines since the late nineteenth century. We shall concentrate on the memory of our own life story-our autobiography.
These specialties have used very different methods to analyze human memory. One method consists in analyzing how human memory works by focusing on a single individual-very often the researcher himself-or on a very small set of individuals. In 1879, the functionalist psychologist Francis Galton tried to recall the dates of events that he had memorized, and he compiled a typology of the different kinds of memory. In 1884, the neurologist John Hughlings Jackson used the clinical examination of a few patients to deduce a concept of mental processes and the unconscious. In 1895, from a detailed study of the case of Anna O., Sigmund Freud and Joseph Breuer drew the foundations of psychoanalysis. This method was still followed by later practitioners. In 1986, the cognitive psychologist Marigold Linton recorded the daily events of her life for twelve years, then tried to recall them freely in order to study her memory lapses. In 1991, the neurologists Goodale et al. identified an unconscious behavior of Mrs. D.F., suffering from "aperceptive visual agnosia"-a behavior that contradicted Freud's concept of the unconscious. In 2015, in a study involving only three patients, Naselaris et al. showed that they could memorize information received in visual as well as propositional form. We shall therefore examine this method as applied to the various cases described above and assess the underlying theories that could justify it.
A second method uses larger samples to verify these theories. In 1880, the functionalist psychologist Galton studied 100 adults' visual memory of their morning breakfast table. His results were therefore conditional upon his having observed a non-representative sample of the English population of his time. Indeed, psychologists very often use such samples, which can produce totally erroneous results. Moreover, these studies are often not replicated in order to validate their findings. This has led to the replication crisis, which has always been an underlying issue for psychological studies but climaxed in around 2015. We shall therefore need to examine these studies with great care.
Lastly, two methodological approaches emerged in the second half of the twentieth century and now seem to be the most robust for properly researching human life. The first is the general systemic approach, initially proposed by Ludwig von Bertalanffy in his General systems theory, but which has developed toward an autonomy approach for the biological and social sciences promoted by Humberto Maturana and Francisco Varela (1973). The second is the mechanistic approach, first proposed for biological studies (see, for example, the book by William Bechtel and Robert Richardson, 1993), but which has now flourished across the social sciences (see the book by Stuart Glennan and Phyllis Illari (2018)). These methodological approaches will be discussed in greater detail later, but we offer a brief presentation here.
The systemic approach was born of the dissatisfaction on the part of biologists, social scientists, and ecologists with the reductionism prevailing in the physical sciences. The concept of system is a new paradigm, which views the world as a complex organization that needs to be analyzed as a whole. If we cannot fully understand a phenomenon simply by decomposing it into more elementary units, we must apply an overall vision in order to understand how it works. This approach covers a vast field of research with different conceptualizations. For example, von Bertalanffy (1969, p. 31) shows how it can apply to psychology and the social sciences that we discuss here:
While classical association psychology attempted to resolve mental phenomena into elementary units-psychological-atoms as it were-such as elementary sensations and the like, gestalt psychology showed the existence and primacy of psychological wholes which are not a summation of elementary units and are governed by dynamic laws. Finally, in the social sciences the concept of society as a sum of individuals as social atoms, e.g., the model of Economic Man, was replaced by the tendency to consider society, economy, nation as a whole superordinated to its parts.
Few methods were initially available for systemic analysis. The main one is simulation-based modeling. One example is the model applied by Dennis and Donella Meadows' team (1972, 1992, 2004) to predict the future evolution of humanity. We shall examine its validity.
In the life sciences, particularly those concerning human life, the systemic approach was also applied by Varela et al. in The embodied mind. It treats human intelligence and memory as a whole that cannot be broken down into parts.
We discuss the systemic approach more fully in Chapter 9. Here, we shall simply mention the notion of autonomy, which serves as the link to the mechanistic approach, discussed later. Kepa Ruiz-Mirazo and Alvaro Moreno (2004, p. 240) define autonomy as: [. . .] the capacity of a system to manage the flow of matter and energy through it so that it can, at the same time, regulate, modify, and control: (i) internal self-constitutive processes and (ii) processes of exchange with the environment. Thus this system must be able to generate and regenerate all the constraintsincluding part of its boundary conditions-that define it as such, together with its own particular way of interacting with the environment.
The methods proposed to reveal autonomy fall into two broad groups. The first is based on "first-person data," i.e., collected from personal experiences. We may regard them, quite legitimately, as allowing the comprehension of a lived experience in hermeneutic terms. Indeed, Varela et al. (1993, p. 149) clearly state:
The term hermeneutics originally referred to the discipline of interpreting ancient texts, but it had been extended to denote the entire phenomenon of interpretation, understood as the enactment or bringing forth of meaning for a background of understanding.
While admitting that many authors challenge the basic assumptions of this non-objectivist approach, they believe it is necessary to link the study of human experience-as put into practice in a given culture-with the study of human cognition in neuroscience, linguistics, and cognitive psychology. They contrast first-person data with "third-person data," which allow the use of scientific methods such as encephalograms in order to access these physiological processes. We will see if this can provide a connection between hermeneutic comprehension and scientific explanation of human facts.
The approach described above was embodied in Dynamical systems theory and Systems biology in the late 1990s, broadening its scope from the study of a simple neuron to the most complex social systems. It has become highly mathematized but also highly diversified. Some authors argue that it leads to the emergence of a new explanatory paradigm differing from that of the mechanistic approach, discussed below. In their 2008 article "After the philosophy of mind: replacing scholasticism with science," Anthony Chemero and Michael Silberstein, while recommending a holistic approach to cognitive science, point out the problems this raises (p. 24):
However, the biggest pragmatic or practical problem with developing holistic science is obvious: explanatory and predictive successes are hard to come by when dealing with complex problems. The principle worry here is that too much holism makes science impossible.
They thus show the difficulty of defining a holistic science. We shall therefore argue, as Robert Franck did in 1995, for the need to transcend the opposition between holism and methodological individualism.
In contrast, the mechanistic approach does not regard the totality of human life as a unit that cannot be broken down. Rather, it focuses on a specific phenomenon in order to analyze its parts and see how they are organized for the purpose of producing the observed phenomenon. The method uses two strategies. The first, decomposition, allows the subdivision of the explanatory work so as to make it feasible and to make the system studied intelligible. As we can easily see, this strategy is contrary to the systemic approach.
The second strategy consists in localization, which assigns responsibility for specific functions to particular structures. Bechtel and Richardson set out these strategies in their seminal work, Discovering complexity: decomposition and localization as strategies in scientific research (1993).
The mechanistic approach is completely different from the covering law model proposed by Carl Hempel in 1942 in his article on "The function of general laws in history" and by Hempel and Paul Oppenheim in 1948 in their article on "Studies in the logic of explanation." The two authors adopt David Hume's empirical view (1748), according to which causal mechanisms are not observable, and they use his notion of induction as an empirical generalization of the facts. Hempel (1962, p. 10) explains what he means by the covering law explanation:
This explanatory account may be regarded as an argument to the effect that the event to be explained (let me call it the explanandum event) was to be expected by reason of certain explanatory facts. These may be divided into two groups: (i) particular facts and (ii) uniformities expressed by general laws. [. . .] If we imagine these various presuppositions explicitly spelled out, the idea suggests itself of construing the explanation as a deductive argument of this form:
C1, C2, …, Ck
L1, L2, …, Lr
________________
E

Here, C1, C2, …, Ck are statements describing the particular facts invoked; L1, L2, …, Lr are the general laws; jointly, these statements will be said to form the explanans. The conclusion E is a statement describing the explanandum statement [. . .] Later on, he clearly states that he is talking about empirical generalizations, thus signaling his support for Hume's argument.

Conversely, the mechanistic approach concurs with the views of Francis Bacon (1620), who restored the deductive conception of induction (Franck, 2002, p. 290). We elaborate on Bacon's position throughout our book.

From the outset, in Chapter 5 ("The rejection of mechanisms") of their work Discovering complexity, Bechtel and Richardson describe the oppositions to the mechanistic approach encountered in the history of science. These include the holistic position of vitalists in physiology (p. 95):

They thus affirm a version of holism according to which the properties of life are treated as properties of the whole that cannot be refined into the properties of the parts, even when relations are taken into account.

However, neither in that chapter nor in the rest of their book do they discuss systemic and holistic studies such as those by Maturana and Varela, despite the fact that Varela's Principles of biological autonomy was published in 1979. As we shall see, Bechtel did not recognize the importance of these studies until 2007.

The methods used by the mechanistic approach can be summarized by the term models, as Bechtel and Abrahamson (2005, p. 425) explicitly state:

Generically, one can refer to these internal and external representations as models of the mechanisms. A model of a mechanism describes or portrays what are taken to be its relevant component parts and operations, the organization of the parts and operations into a system, and the means by which operations are orchestrated so as to produce the phenomenon.

The authors present various forms of these representations, which differ greatly from the standard nomological scientific explanations: they provide diagrams to characterize them, and simulations to reflect on them. The models of mechanisms are developed for individual cases and are not represented in terms of universal formulations, as in the covering-law approach.
The power of models, however, is not unlimited. Agent-based models, which "pre-suppose rules of behavior and verify whether these micro-based rules can explain macroscopic regularities" (Billari and Prskawetz, 2003, p. 2), operate only at individual level and so do not avoid ad hoc and arbitrary explanations-as shown later.
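To make this concrete, here is a minimal illustrative sketch, in Python, of the kind of model meant here; the behavioral rule and its parameters are invented for the example and are not taken from Billari and Prskawetz. Each agent's propensity to marry rises with the share of peers already married, and this micro rule alone generates a macroscopic regularity: an S-shaped diffusion of marriage in the simulated population.

import random

# Minimal agent-based sketch: a micro rule (social influence on marriage)
# producing a macro regularity (an S-shaped diffusion curve).
random.seed(1)
N, YEARS = 1000, 40
married = [False] * N                 # initial state: nobody is married
for year in range(YEARS):
    share = sum(married) / N          # macro state fed back to each agent
    for i in range(N):
        if not married[i]:
            # micro rule: small baseline chance plus a social-influence term
            p = 0.02 + 0.3 * share
            if random.random() < p:
                married[i] = True
    print(year, round(sum(married) / N, 3))
# The printed share starts low, accelerates, then levels off near 1:
# an aggregate S-curve that no individual rule contains explicitly.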
In 2007, in a chapter on "Biological mechanisms: organized to maintain autonomy" published in a volume on Systems biology, Bechtel began a rapprochement with the advocates of the systemic approach. He wrote (p. 297):
Vitalists and holists play an important function when they remind mechanists of the shortfalls of the mechanistic accounts on offer. Ideas such as negative feedback, self-organizing positive feedback, and cyclic organization are critical for explaining the phenomena exhibited by living organisms. [. . .] These critical features are nicely captured in Moreno's conception of basic autonomy in which we recognize living systems as so organized to metabolize inputs to extract matter and energy and direct these to building and repairing themselves.
As it happens, this chapter follows the one entitled "A systemic approach to the origin of biological organization" by Alvaro Moreno in the same book, in which he defends the holistic views of Maturana and Varela (Autopoiesis and cognition: The realization of the living) from thirty years earlier-without mentioning the mechanistic approach. Yet all these authors agree on the notion of autonomy, which we defined earlier.
The notion of level lies at the root of many disagreements between the systemic and mechanistic approaches. Comparing the two, Chemero and Silberstein (2008, pp. 22-23) write:
There are of course many gradations of both positions, ultimately shading off into one another. Individualists can be more or less holistic, for example. Even having decided that good cognitive and neuroscience must confine itself to the boundaries of the head, there still remains the question of which scale of cognitive or brain activity to pitch the explanation at. At what "level" should we explain cognitive systems? Those explanations involving the more basic elements of a system (such as a single neuron) and the purportedly intrinsic or local properties of these elements are the more deeply individualistic in kind. Individualist explanations that focus on large scale and inherently relational features of cognitive systems such as functional features or large scale neural dynamics are the least individualistic.
Surely this amounts to a recognition of the need to go beyond holism and individualism, as Franck proposed in 1995. We return to this question in Chapter 9.
What is certain is that the mechanistic and systemic approaches must converge, as both traditions-which developed largely along independent paths-provide crucial insights for understanding human life (Bich, "Mechanism, autonomy and biological explanation").
Book outline
We now turn to a brief presentation of the parts and chapters of this book.
After this introductory chapter, Part 1 looks at how certain approaches may lead to misunderstanding human life before it unfolds.
Chapter 2 presents an overview of the approaches, some of whose more specific aspects are addressed in greater detail in the following chapters. We begin by noting the widespread practice of the divination arts in both past and present, pointing out their prevalence around the world. We distinguish between the "-logies" and the "mancies," the former making greater use of techniques closer to those of modern science than the latter. We go on to give the reasons for the closer examination of astrology in Chapter 3. Next, we discuss the history of eugenics, from its ancient roots such as texts by Plato and Aristotle, to the Enlightenment and the writings of early nineteenth-century French physicians. This introduces Chapter 4 on eugenics. Lastly, we examine the various meanings of "freedom," a concept that Chapter 5 addresses only in its current meaning. The notion emerged in Greece in the eighth century B.C.E. and later in Rome. Its definition at the time was very different from the one assigned to it by the monotheistic religions of the early modern age, i.e., free will. The scientific revolution that developed in the sixteenth century further modified the concept. Descartes gave it a philosophical dimension, while classical liberalism and the industrial revolution gave it a political dimension.
Chapter 3 takes a closer look at how astrology and astronomy, once indistinguishable, were deemed capable of predicting future events or the future of a human life. The two disciplines emerged in remote antiquity, most notably in Mesopotamia but also in ancient Egypt. By observing celestial phenomena such as the paths of planets and stars, it was believed that one could foretell future events both in human societies and in individual lives. The disciplines spread to the Greek world in later centuries in the form of horoscopes predicting the fate of a given person. However, the scientific revolution that took hold in the sixteenth century introduced a clear distinction between astrology as a divination art and astronomy as a science describing the movements of celestial bodies. Astronomy became a leading science, achieving major discoveries such as those of Kepler and Newton, whereas astrology was increasingly rejected. Yet, in the twentieth century and especially since the 1950s, there have been ever more attempts at a scientific demonstration of the validity or invalidity of astrology, and we examine their findings. We also try to identify the reasons why astrology, despite its rejection on scientific grounds, retains a strong worldwide appeal.
Chapter 4 focuses on a more recent period, from the late nineteenth century to the first half of the twentieth, when Galton's eugenics was increasingly recognized and practiced not only by specialized organizations but also by extreme political regimes. The new discipline sought to predict our future life from our genetic endowment at birth. As with astrology versus astronomy, eugenics came to compete with genetics-a product of Mendel's laws-and we examine how the two disciplines evolved in the early twentieth century. Eugenics became a formidable concept with the advent of fascist and Nazi regimes, which sought to impose the supremacy of certain "races" by eliminating other ethnic groups. World War II ended these regimes, making the promotion of eugenics impossible. However, many former eugenicists such as Osborn continued to pursue the same goals under another label: hereditarianism. We describe the different forms of this theory and show the reasons for their scientific and political failure.
Chapter 5 takes a broader view, showing first how these approaches-which regard individual lives as predestined-are not truly scientific but are "idols," to use Bacon's term for such investigations. At the same time, we verify that astronomy and genetics qualify as genuine sciences, again in the Baconian sense, and that we can state their main axioms. However, as the "idols" still enjoy widespread acceptance, we must explore why so many people continue to believe in them. We move on to a more general view of religious practices, which facilitate these beliefs in supernatural forces. Cicero's analysis, produced in a context of polytheistic religion, gives us a better insight into the motives for such beliefs. In more recent periods, monotheistic religions have prevailed, and psychology, bio-cultural anthropology, and cognitive sciences have tried to identify the factors driving these faiths. But the detailed examination of the findings does little to explain the complexity of these phenomena-and even less to account for atheism, whose prevalence is far from negligible. We conclude the chapter with a discussion of the current concept of freedom, which-some argue-offers people a way out of these beliefs, and an ethical discussion on the different approaches presented in this first part.
Part 2 looks at whether one can attempt to understand a human life, and at how different social sciences can do so despite some failures.
Chapter 6 looks at the earliest forms of life stories, dating from the invention of writing in the third millennium B.C.E. We then follow the evolution of these stories over time. We may qualify them as imaginary, although they originate in what are often real lives but magnified into epics, myths, novels, and other narratives that nourish our minds. First, we discuss the notion of genre, which aims to define more precisely the different ways of inventing or transforming lives so as to express deeper reflections on the society and culture in which they take shape. We begin with the Greek philosophers, Plato and Aristotle, who defined epic, tragedy, and comedy by their characteristics. The Middle Ages saw the appearance of a new genre: the novel. In our time, the focus has shifted away from the formal characteristics of genres to their underlying mental processes and the devices they use to bring readers to reflect on and live in their society. Here, we encounter the notion of "comprehension" proposed by hermeneutic philosophy as the opposite of "explanation." To better understand this approach, we take a closer look at how three examples of the genres were introduced and used. First, the Epic of Gilgamesh, which enables us to understand Mesopotamian civilization; second, Sophocles' tragedy Oedipus Tyrannus, which gives us an understanding of divine causality and human freedom among the Greeks; third, the romances on the life of Henri de Joyeuse, which give us an insight into Christian thought in seventeenth-century France. The chapter concludes with a fuller discussion of the notion of "comprehension."
In Chapter 7, we turn to real biographies of famous figures, trying to identify which aspects of their lives are highlighted and which are left out, depending on the period in which the events occurred. The aspects on which ancient biographers focused were clearly very different from those recorded today. Another crucial issue is the veracity of the facts gathered. Ancient biographies were often written long after the person's death, when the memory of that life had faded considerably and was influenced by external events. For biographies written during a person's lifetime, the biographer's interests are more important to keep in mind than those of the subject. These interests intermingle with the story told, turning the account into a view reflected by a distorted mirror. In the case of autobiography, we must pay critical attention to the aspects most frequently emphasized by the writer. The more recent interest in the life stories of more ordinary people should also be examined. The biography by Thomas and Znaniecki (1919) of Polish peasants who migrated to the United States offers a perfect example, followed by many other similar studies. At the same time as the "comprehensive" approach, another more "explanatory" approach to life stories emerged with the recourse to population sciences. We extensively describe the evolution of the paradigms adopted by these sciences and the relevance of the "explanatory" approach, which we compare with the "comprehensive" approach. We then discuss the methods used by the social sciences to analyze the outline of a human life recorded in one or more interviews and depending on whether the survey is retrospective or prospective. While the approaches to event histories may differ substantially from one science to another, the basic material is the same: the collection, in an interview, of a very small portion of the life of one or more persons. We describe these different methods in detail, including the statistical tools used for the analysis and the scientific foundations of the methods.
Chapter 8 looks at the problems relating to memory and its transmission, which are crucial in the collection of life histories. We focus on the results of psychological and psychoanalytical studies on human autobiographical memory since their beginnings in the late nineteenth century. Over time, psychology and psychoanalysis developed different schools, which placed varying emphasis on the importance of studying memory. They include structuralism, functionalism, psychoanalysis, behaviorism, cognitive psychology, and evolutionary psychology. We deal separately with psychoanalysis, whose connections to neuroscience-discussed in greater detail in Chapter 9-are significant. The structuralist school, which used introspection, makes no contribution to the study of autobiographical memory, but it does offer interesting findings on the memorization of syllables alone, not of life stories. The functionalist school is barely interested in biographical memory. Its only representative, Galton, provided some elements-some important, others debatable-which we analyze. The behaviorist school, which enjoyed great success in the United States in the first half of the twentieth century, totally rejected the study of mental imagery. In the end, it was the cognitive school that made memory the focus of its study in 1950. We show its many contributions to this field, with the use of data from official registers to verify major life events, and the use of neuroimaging. While evolutionary psychology contributes little to the study of memory, psychoanalysis takes us into new territory. This discipline, introduced by Freud in 1895, relied on Cajal's discovery of neurons in 1888 as possible storehouses of memory. We describe and critique the way in which Freud incorporated this notion into psychoanalysis with the aid of the unconscious, and compare this with the picture provided by today's neuroscience. The chapter's conclusion looks at the challenges to the psychological approach generated by the replication crisis and to the broader notion of "statistical significance."
Chapter 9 offers a general conclusion to the whole volume. We return to the various topics discussed and propose a synthesis. The first obvious conclusion is that it is impossible to grasp human life in its entirety; we can only capture a small portion of it. The choice of the portion is therefore essential in order for a social science to analyze it. It is up to researchers to define the limits of that portion, which are often dictated by their field of scientific expertise, the survey's cost, and its feasibility. We start with a detailed examination of the theories that, across the centuries, assigned a major but not exclusive role to demography. Before the nineteenth century, economics and population science were often practiced in tandem: the populationists believed that an increase in population would be matched by an increase in wealth, while a greater number of authors argued that population depended on wealth. In the late eighteenth century, new concepts emerged, most notably linked to the French Revolution. The notion of the perfectibility of the human species appeared in the writings of many authors such as Godwin and Condorcet, while Malthus developed a theological conception that went in the opposite direction. In the nineteenth century, the industrial revolution elaborated economic theories that made little room for demography. The theory of industrialism crowned the notion of perfectibility, whereas the theories of Proudhon and Marx were essentially economic. It was not until the twentieth century that Landry theorized the demographic revolution-freed from the grip of economics-through the notion of the rationalization of life. Most recently, the late twentieth and early twenty-first centuries have seen the emergence of three main theories that have been adopted in many social sciences but are far from having won unanimous acceptance among demographers: systemic theory, developed by von Bertalanffy and applied to demography, most notably by Meadows; agent-based models, developed in demography by Billari and Prskawetz; and viability theory, developed by Aubin and applied to demography by Bonneuil. We go on to examine theories of memory and show that artificial intelligence, despite its successes, cannot constitute a theory of human memory. We must turn to neuroscience to see a productive application of mechanistic theories by Bechtel, Craver, and others, who try to describe how the parts of a mechanism are organized to produce human memory-while voicing doubts about the theories' exhaustivity. We then discuss systemic theory, and in particular Maturana and Varela's autonomy theory, which focuses on the organism as a whole and on how memory allows it to activate self-maintenance. These two approaches, which developed independently at first, have encountered each other very recently. To conclude this book, we show how, in combination, they offer a more effective approach to more general biological phenomena, while hermeneutics resists a more scientific approach to human life.
Part I. How certain approaches may lead to misunderstanding human life?
Chapter 2. Predestination versus human liberty
In this first part, we restrict the meaning of prediction, discussed in the general introduction, to that of predestination. We must therefore examine in greater detail what the term can cover.
In French, the Dictionnaire de l'Académie française states that the term prédestination is attested by the twelfth century and was borrowed from the Christian Latin praedestinatio used as early as the fifth century by Saint Augustine. The dictionary distinguishes between a theological definition and a second definition that is less common but applicable to mantic methods:
1. Effect of the will of God, who for all eternity decides human fates and destines some persons-the chosen-to receive a special grace leading to eternal salvation. In the fifth century, Saint Augustine defended the reality of predestination against the supporters of pelagianism. Protestant theologians argued the existence of the predestination of reprobates. The Council of Trent reasserted, against Calvin, that predestination does not rule out free will.
2. Fact of being determined in advance, in an ineluctable and inevitable manner. The effects of predestination. Predestination to glory, misfortune, crime.2

As examined in greater detail in Chapters 3 and 4, the second meaning is wholly congruent with the definition of astrology (inability to escape one's fate) and eugenics (being determined ineluctably and inevitably by one's genetic endowment). In this chapter, we generalize the notion to all forms of divination, i.e., mantic methods. The first definition, linked to religion, will be discussed in Chapter 5.
Similarly, in English, the OED gives a synthetic definition of predestination:
The theory or the belief that everything has been decided or planned in advance by God or by fate and that humans cannot change it.
God and fate are thus combined here as entities presiding over predestination, without the Académie Française's distinction. We therefore prefer to keep the Académie's definition as the reference for our purposes here.
In the first section of this chapter, we note the diversity of divination methods around the world, and discuss the reason for our special focus on astrology in Chapter 3; in the second section, we describe the origin of eugenics in human history; in the third and final section, we discuss the past evolution of the notion of freedom.
Diversity of predictions in the world
We begin by examining how various signs have been used to predict future events. This approach, known as divination, establishes a connection between the sign used and the invisible forces supposedly controlling our world. Divination emerged very early in the history of humanity and took many forms.
Most of the divination arts have names ending in "mancy"-from the Greek μαντεία (divination)-which Plato, in Phaedrus (244), traces back to μανία (mental derangement). They comprise more than a hundred methods involving the use of different objects: yarrowmancy (yarrow stalks), acultomancy (pins or needles), astragalomancy or astragyromancy (knucklebones or other small bones), cartomancy (playing cards), chiromancy or palmistry (reading the lines on a person's palm), necromancy (invoking the dead), oneiromancy (interpreting dreams), taromancy (tarot cards), and others. 3 Many of these methods are still practiced today.
A less commonly used suffix is "-logy" from the Greek λογια (science). It designates methods that more closely resemble the sciences, such as astrology, i.e., divination by the study of heavenly bodies. Like the "-mancies," astrology is still practiced today, and we shall devote Chapter 3 to it.
The use of the two suffixes, in our view, reflects a more profound reality. Divination (or "manticism") involves randomness, that is, using events that cannot be predicted at the time they occur to foretell another, future event. Such unpredictable events include a card drawn at random by a cartomancer, the relative positions of beans cast by the favomancer, and the fortuitous signs observed in nature or living beings. By contrast, the "-logies" involve events that are more predictable through their reasoned observation, albeit-here as well-for the purpose of predicting another event in the future. For example, astrology and astronomy (whose suffix comes from the Greek νόμος or law) were practiced contemporaneously by the ancient Assyrians, Babylonians, Egyptians, Greeks, and others, who were already able to calculate the positions of heavenly bodies with reasonable accuracy. By the fifth century B.C.E., the Babylonians were using a system of celestial coordinates to determine these positions (Ossendrijver, "Ancient Babylonian astronomers calculated Jupiter's position from the area under a time-velocity graph").
The number of different mantic methods is considerable. They are documented in all civilizations, both ancient and modern. The monograph La divination (Caquot and Leibovici, eds, 1968) fully demonstrates this by describing such methods in countries around the world. We offer some examples here, with no claim to exhaustiveness.
In 1500 B.C.E., during the reign of Thutmose I, the oracle of Amon at Karnak predicted that his daughter Hatshepsut would eventually succeed him as pharaoh (Vandersleyen, L'Égypte et la vallée du Nil: De la fin de l'Ancien Empire à la fin du Nouvel Empire). The oracle is a form of theomancy or divination through the supposed inspiration of a divinity. Methods practiced by the ancient Egyptians include idolomancy or divination using images, lecanomancy or divination using oil drops, and oneiromancy.
The library of Ashurbanipal (668-627 B.C.E.), which includes Babylonian and Assyrian tablets, contains an abundance of texts on divination, of which the oldest are undated (Finke, "The Babylonian Texts of Nineveh: Report on the British Museum's 'Ashurbanipal Library Project'"). One text (Ashurbanipal inscription L4) states: I was calculating the liver [which is] an image of heaven together with the [most] competent oil [divination] experts. 4 The Babylonians used many mantic methods, such as haruspicy (inspection of animal entrails), lecanomancy, oneiromancy, and teratomancy (observation of monsters).
Auguste Bouché-Leclercq inventoried multiple divination methods among the Greeks and Romans in his four-volume compendium, Histoire de la divination dans l'antiquité (History of divination in Antiquity). Examples of Hellenic divination include chresmology (divination using pure intuition), cleromancy (casting objects), necromancy, oneiromancy, divination by instinctive actions of living beings, and divination by signs read in the structure of inanimate objects.
In China, the Hi ts'eu, a divination manual dating from the fourth or third century B.C.E., shows the exact correspondence between divination methods and the operations of nature (Granet, 1934, p. 75). The terms Yin and Yang occur frequently, linking divination to a vast set of Chinese methods and doctrines. The methods are highly diverse, ranging from cleromancy using yarrow stalks (I Ching) to cheloniomancy (using turtle shells), scapulimancy (using ox shoulder blades), and oneiromancy.
Madagascar's different ethnic groups are mainly of Indonesian origin. Over time, however, migrants from the Middle East, Africa, and Asia settled on the island. Their blending largely explains the diversity and originality of the island's divination practices (Molet, La conception malgache du monde surnaturel et de l'homme en Imerina, 2 tomes). For example, sikidy, or sikily, derives from the Arabic sikl (figure). It comprises two variants: the first, sikily alànana, consists of amathomancy (divination by drawing lines in the sand); the second, sikidy joria, is a form of favomancy or divination by casting seeds or other parts of plants (bean seeds, corn seeds, blades of grass, and particularly fano seeds). But other divination methods are also practiced, such as chiromancy, ornithomancy (observing bird behavior, flight or song), and metoposcopy or metopomancy (observing lines of the forehead).
In concluding this brief overview of mantic methods around the world, we should recall that they have been and are still practiced in all countries. The "-logies" will be examined in greater detail, more specifically astrology in Chapter 3. We shall explore the multiple origins of this ancient belief-which was supposed to allow an at least partial prediction of a person's life or of an event-how it then broke away from astronomy, and why it nevertheless remains so popular today.
The origins of eugenics
There is another way to predict an individual's future behavior, outlined in this section.
It is, in fact, a recurrent and very ancient approach, addressed in writings as early as those of Plato, who quotes Socrates in The Republic (V, 459):
Why, I said, the principle has been already laid down that the best of either sex should be united with the best as often, and the inferior with the inferior, as seldom as possible, and that they should rear the offspring of the one sort of union, but not of the other, if the flock is to be maintained in first rate condition. Now these goings on must be a secret which the rulers only know, or there will be a further danger of our herd, as the guardians may be termed, breaking out into rebellion. […] There are many other things which they will have to consider, such as the effect of wars and diseases and any similar agencies, in order to prevent the State from becoming either too large or too small. 5
Shortly after, he adds (V, 460):
The proper officers will take the offspring of the good parents to the pen or fold, and there they will deposit them with certain nurses who dwell in a separate quarter; but the offspring of the inferior, or of the better when they chance to be deformed, will be put away in some mysterious, unknown place, as they should be. Yes, he said, that must be done if the breed of the guardians is to be kept pure. 6
These texts thus imply a choice of spouse defined by quality standards ("the best" opposed to "the inferior") and a restriction of the right to procreate for the inferiors. This arrangement makes it possible to eliminate from the outset those individuals who might prove to be a burden or a danger to society. In particular, for consanguineous unions, Plato specifies (V, 461): "to prevent any embryo which may come into being from seeing the light," and, in the event of a forced delivery, "the parents may understand that the offspring of such an union cannot be maintained, and arrange accordingly." 7 Aristotle, while accepting the abandonment of deformed children, is more concerned about excess births. In Politics (VII, 1335b), he writes:
As to the exposure and rearing of children, let there be a law that no deformed child shall live, but that on the ground of an excess in the number of children, if the established customs of the State forbid that (for in our State a population has a limit), no child is to be exposed, but when the couples have children in excess, let abortion be procured before sense and life have begun; what may or may not be lawfully done in these cases depends on the question of life and sensation. 8
Aristotle's position on abandoning children thus differs from Plato's. He makes no allusion to infanticide, accepts exposure only for deformed children, and proposes abortion with the couples' consent. At the same time, however, he elaborates a theory of slavery as a natural practice (Politics, I, 2):
For that which can foresee by the exercise of mind is by nature intended to be lord and master, and that which can with its body give effect to such foresight is a subject, and by nature a slave; hence master and slave have the same interest.9
He describes the master-slave relationship here as the indispensable bond between that which commands by nature (ἄρχον φύσει) and that which is commanded (ἀρχόμενον) to ensure their common interest.
In this connection, we should bear in mind the large size of the slave population in Plato and Aristotle's time. A census conducted under Demetrius of Phalerum between 317 and 307 B.C.E.-about thirty years after Plato's death and ten years after Aristotle's-gives the following figures for the inhabitants of Attica: "the Athenians were found to amount to twenty-one thousands, and the metics to ten thousand, and the slaves to four hundred thousand" (quoted by Athenaeus, in Deipnosophistae, Book VI, 103 10 ). A more detailed study of the period (Van Wees, 2011) shows that the enumeration requires an adjustment. To be more precise, Van Wees finds 21,000 propertied citizens, 10,000 citizens without political rights, and 400,000 women, children, and slaves. This reduces the number of slaves to 323,000, "so that for every free person in Attica there were three slaves-according to Demetrius' census, at least" (Van Wees, p. 107).
Children were sacrificed in Rome as well, for the law required the exposure of all those whose parents did not want to raise them. On this subject, Cicero notes (De Legibus III,8,19): "as one of those monstrous abortions which, by a law of the Twelve Tables, are not suffered to live […]" 11 Indeed, the Pater familias, vested with the sacred right of life or death, was allowed to put his children to death deliberately.
In the Christian West, the elimination of deformed children was discontinued-at least officially-until the Renaissance, when some authors began to question religious prohibitions. Tommaso Campanella, in La città del Sole (The city of the Sun) (1602), clearly discusses the option of promoting eugenic unions:
Love is foremost in attending to the charge of the race. He sees that men and women are so joined together, that they bring forth the best offspring. Indeed, they laugh at us who exhibit a studious care for our breed of horses and dogs, but neglect the breeding of human beings. 12
He proposes an ideal society where parental choice is subjected to a detailed ritual, and generation is regarded as a collective good rather than a private one.
The eugenic idea re-emerged in the Enlightenment. In 1756, Vandermonde, a young physician, attempted to solve the mysteries of heredity and generation in Essai sur la manière de perfectionner l'espèce humaine (Essay on the means to improve the human species). Having observed human success in perfecting animals, he makes the following recommendation (p. 155): By the explanation of this system, we can easily see that one can perfect animals, by varying them in different ways. Why should we not work for the human species as well? By combining all the circumstances we have discussed, by grouping our rules together, we would be able to make men more beautiful as surely as one can rely on an able sculptor to hew a model of beautiful appearance out of a block of marble. 13
Vandermonde sees the "crossing of races" as a particularly efficient method of fostering the improvement of the human species. He accordingly suggests crossing individuals just as botanists graft plants or breeders cross animals.
In the nineteenth century, many French physicians defended the principle of selecting parents in order to combat degeneration. In 1801, Robert coined a term to describe this principle: mégalanthropogénésie, from the Greek roots for "great," "man," and "procreation." His goal was to create intelligent individuals by crossing elite men and women. He takes up Vandermonde's argument (p. ij): 14
[…] I have thought that the identity of physiological laws in man and animals allowed me to believe in the possibility of megalanthropogenesis within the social order, since it exists in the rural economy. 15
He concludes his work by appealing to the French government (p. 341): "could you, for an instant, neglect the reproduction of great men [?]16 "
The interested reader will find more details in Carol (1995) on the role of nineteenth-century French physicians in these medical practices, which, while not yet called eugenics, outlined a plan to improve the human species.
Darwin's half-cousin Galton coined the term "eugenics" in 1883 and elaborated a theory of heredity. The theory claims to have a scientific basis, namely, the scrupulous observation of genealogies and the Darwinian theory of evolution. In Chapter 4, we shall see how Galton and his successors tried to predict the future of children from what their parents had transmitted to them.
The notion of freedom
The last chapter of Part 1 delves further into the reasons that lead people to believe that their future is predetermined. Its title, "How and why to restrict freedom," will lead us to address the complex notion of "freedom," whose origin we shall now explore.
Many scholars have already discussed this complex concept at length. We refer the reader to Gary Brent Madison, The logic of liberty (1986), Jacqueline de Romilly, La Grèce antique à la découverte de la liberté (1989), Pierre Grimal, Les erreurs de la liberté (1989), and Peggy Avez, L'envers de la liberté (2017).
Rather than referring to a current definition of freedom, we begin by examining where and how it appeared in antiquity and the successive meanings it has acquired over time. We extend our survey to the modern age but not to the contemporary period, discussed in greater detail in Chapter 5. Similarly, we cannot reach back to cultures without a written language, so we shall need to restrict ourselves to those that have one.
Freedom in ancient civilizations
Ancient civilizations-including Assyrian, Egyptian, and Hebrew-have left us no trace of a word meaning "freedom," although some scholars have identified rudimentary forms of the concept [START_REF] Dietrich | Liberty, freedom and autonomy in the ancient world: a general introduction and comparison[END_REF]. Likewise, the major Asian religions that appeared around the fifth century B.C.E.-Buddhism, Taoism, and Confucianism-devoted scant attention to freedom. For instance, according to the scholars who have tried to find signs of its presence, Buddhism may or may not allow free thought (Federman, 2010, pp. 15-16). We shall not dwell on the topic here, for it is not our purpose to analyze texts in which the notion of freedom is not clearly stated. We begin by examining in greater detail how the Greeks and Romans first developed the concept of "freedom."
In ancient Greece, we find several occurrences of the word ἐλεύθερος (free) to express the fear of serfdom as early as Homer's Iliad (eighth century B.C.E.). But this notion of freedom did not acquire its full political meaning until the Median wars in the early fifth century B.C.E. In his History of these wars (written between 430 and 424 B.C.E., VII, 103), Herodotus clearly defines what such freedom meant to the Greeks in a dialogue between Xerxes I, king of Persia, and a former king of Sparta, Demaratus, in which Xerxes asks: […] were all equally free and were not ruled by one man, stand against so great an army? Since, as thou knowest, we shall be more than a thousand coming about each one of them, supposing them to be in number five thousand. If indeed they were ruled by one man after our fashion, they might perhaps from fear of him become braver than it was their nature to be, or they might go compelled by the lash to fight with greater numbers, being themselves fewer in number; but if left at liberty, they would do neither of these things: and I for my part suppose that, even if equally matched in numbers, the Hellenes would hardly dare to fight with the Persians taken alone. With us however this of which thou speakest is found in single men, not indeed often, but rarely; for there are Persians of my spearmen who will consent to fight with three men of the Hellenes at once: but thou hast had no experience of these things and therefore thou speakest very much at random.
Xerxes is clearly unable to understand this concept of freedom, and believes he can easily defeat the Greek army, ten times smaller than the Persian army. The Persians obey a master, and the Greeks-in particular the Athenians-have been free since 514 B.C.E., when the tyrants of Athens, the Peisistratids, were chased out of the city. However, they are free in a well-defined sense and they have a master-the law-that they fear and that does not allow them to retreat before an enemy. The Median Wars, which lasted twenty years from 499 to 479 B.C.E., ended in an overwhelming victory of the Greek cities, particularly Athens, against the Persians at the land battle of Plataea and the naval battle of Mycale.
Thereafter, the Greeks often proclaimed this freedom (ἐλεύθερία), most notably in their theater. For example, Aeschylus, in The Persians (472 B.C.E., 402-405), has the messenger state:
Now, sons of Hellas, now! Set Hellas free, set free your wives, your homes, Your gods' high altars and your fathers' tombs. Now all is on the stake!18
With the alliances between Greek cities, freedom lasted until 338 B.C.E., when Philip II of Macedon triumphed over the coalition of Greek cities at the battle of Chaeronea. Macedonia was an absolute monarchy, situated on the northern confines of the Greek cities. Greek freedom ended in Athens with the reform of the Athenian Constitution by the Macedonian general Antipatros, who restricted citizenship to the wealthiest and thus deprived more than half of the citizens of their civic rights.
But what is important here is to try to understand what the Greeks meant by freedom. For Athens, the concept-in essence democratic-applied above all to the management and defense of the autonomous city (polis). To prevent absolutist and arbitrary tyrannies such as that of Peisistratus, Cleisthenes (Herodotus, V, 66-69) established a new power structure in 508 B.C.E. called isonomy (ἰσονομία)-etymologically, the rule of equality-which came to be viewed as the first step toward democracy [START_REF] Lévêque | Clisthène l'Athénien[END_REF][START_REF] Fouchard | Des « citoyens égaux » en Grèce ancienne[END_REF]. Herodotus (430-424 B.C.E., III, 80), speaking through Otanes, praised its merits:
On the other hand the rule of many has first a name attaching to it which is the fairest of all names, that is to say 'Equality'; next, the multitude does none of those things which the monarch does: offices of state are exercised by lot, and the magistrates are compelled to render account of their action: and finally all matters of deliberation are referred to the public assembly. I therefore give as my opinion that we let monarchy go and increase the power of the multitude; for in the many is contained everything. 19
This rule of equality was, however, sufficiently vague to allow accommodations with practices of domination and slavery characteristic of many ancient societies including Greece. In Politics (I, 3-8), Aristotle defended the theory of slavery and concluded (I, 5):
It is clear, then, that some men are by nature free, and others slaves, and that for these latter slavery is both expedient and right. 20
Aristotle's defense seems shameful today, but that did not prevent him from discussing slavery at length before concluding that it was valid. Similarly, ostracism denied the citizen's right as an individual by authorizing his banishment from political life. Ostracized citizens included Themistocles in 471 B.C.E., despite his victory in the naval battle of Salamis in 480 B.C.E. Even more severe was the sentencing of Socrates to death by the Athenian judges in 399 B.C.E. for-among other charges-having corrupted youth. In his public teaching, Socrates personified doubt in all its forms. Having been offered the possibility of fleeing to escape execution, he refused, arguing that he could not oppose the laws of his city as he did not want to jeopardize the freedom of his fellow-citizens. It was only far later that Stoicism took up Socrates' philosophy and elaborated on the idea that "only the wise man is free, for he alone possesses an assured knowledge of Truth" (Grimal, 2004, p. 151). While founded by Zeno of Citium in Athens in the third century B.C.E., Stoicism reached its apogee in Rome, which is why we shall discuss it in fuller detail in connection with Roman freedom.
Contrasting with the democratic freedom of Greek cities was the Greeks' belief in inescapable fate and in the absolute power of the gods over their lives. In §2.1 above, we referred to Bouché-Leclercq's work Histoire de la divination dans l'antiquité (1879-1882), whose first two volumes are devoted to Hellenic divination. The same author also published a book on Greek astrology (L'astrologie grecque) in 1899, which showed how this practice, suffused with philosophy and mathematics, transformed its Oriental sister. This traditional belief in fate and the gods, attested well before Cleisthenes, was barely changed by his reforms. Its features remained, most notably: Moira or the three Moirai, who presided over the apportioned "lot" of each individual (god or human) by weaving the thread of life (Κλωθώ), unraveling it (Λάχεσις), and cutting it (Ἄτροπος); the gods and goddesses who could also influence fate; oracles, such as the Pythia in Delphi, who sought to decipher the messages sent by the gods; and the mantic methods that gave soothsayers access to knowledge beyond human understanding.
In sum, all citizens of a polity enjoyed the same political freedom provided they respected the city's written laws, but those very same individuals were subject to the laws of the gods. Many authors have tried to understand this dichotomy, which seems puzzling today but was perfectly understood by the Greeks. We quoted Aeschylus on freedom earlier; let us now see how Darius' ghost viewed the power of the gods (Aeschylus, The Persians, 472 B.C.E., 739-761): I see all; 'tis the end foretold. How swift the oracle hath sped! The word of Zeus, I knew, must be fulfilled; and lo, on Xerxes' head it falleth. I had looked for this not until many years were gone, but when man hasteth of himself toward sorrow, God will help him on.
Here is a spring of evils burst on us and ours, which all might know save him who, understanding not, in his hot youth, hath made it flow. He thought in fetters, like a slave, the holy Hellespont to bind, and Bosphorus, the stream of God, refashion to his mortal mind. With hammered bonds of iron he wrought for a great host a far-flung road, and, not in wisdom, dreamed a dream that man could match himself with God, subdue Poseidon! 21
In other words, according to his father Darius, Xerxes' defeat was due not only to himself but to several gods: Zeus, Io, and Poseidon. The text addresses the very same topic treated by Homer some three centuries earlier when narrating the death of Patroclus (Homer, Iliad, 16, eighth century B.C.E., 844-850):
For this time, Hector, boast thou mightily; for to thee have [845] Zeus, the son of Cronos, and Apollo, vouchsafed victory, they that subdued me full easily, for of themselves they took the harness from my shoulders. But if twenty such as thou had faced me, here would all have perished, slain by my spear. Nay, it was baneful Fate and the son of Leto that slew me, [850] and of men Euphorbus, while thou art the third in my slaying. 22 Here the entities responsible for the death of Patroclus are Zeus, Apollo, Moira (translated here as "Fate" but designated in the Greek text by μοῖρ'), Euphorbus, and Hector. Yet again, therefore, the death was caused by multiple agents.
Various authors have offered an explanation for this dual causality. [START_REF] Lesky | Göttliche und mensliche motivation im homerischen epos[END_REF], with his "double motivation" model, sees the two sides-human and divine-of the same coin, with a focus on the subjects' identities. [START_REF] Vernant | Ebauches de la volonté dans la tragédie grecque[END_REF], by contrast, argues that we should concentrate on the action envisaged rather than on the players. He speaks of the fundamental ambiguity of the tragic act, which reflects a debate between the "past of the myth" and the "present of the city." [START_REF] Darbo-Peschanski | Deux acteurs pour un acte. Les personnages de l'Iliade et le modèle de l'acte réparti[END_REF] elaborates on this approach by proposing a model of the "distributed" act; like Vernant, she focuses on the act's components rather than on identifying the actors, as Lesky did. From this standpoint, she shows that the act involves two agents, whether divine or human. More recently, a dossier on Lectures anthropologiques de l'agir dans l'antiquité ("Anthropological readings of acting in antiquity"), edited by [START_REF] Brouillet | Lectures anthropologiques de l'agir dans l'antiquité[END_REF], confirms and completes the interpretation by Vernant and Darbo-Peschanski, which we too find more convincing than Lesky's.
We shall now discuss the Roman concept of freedom more briefly, since it is sufficiently similar to that of the Greeks, at least until the end of Greek ἐλεύθερία. Roman libertas appeared in 509 B.C.E., almost simultaneously with Greek ἐλεύθερία, when the Romans toppled the last Etruscan king, Tarquin the Proud. However, it lasted longer than in Greece, for we can date its end to 27 B.C.E., when Octavius, who received the title of Augustus, set up the new institutions of the Roman Empire and thus abolished the freedom that had existed for nearly 500 years.
Roman freedom was essentially an individual freedom that guaranteed the legal status of each citizen. There was also an equivalent of Greek ostracism: one of the first acts of the city of Rome was to deprive Tarquinius Collatinus of his citizenship for the sole reason that his name was linked to the royal lineage. As in Greece, freedom in Rome was effectively the opposite of slavery. The Roman divinities, also similar to their Greek counterparts, controlled the fates of all individuals, and politicians had to consult the gods for all important decisions. However, political freedom exercised by an assembly of citizens was inconceivable-unlike in Athens, where public officials were chosen by lottery. In Rome, the privileges granted to noble families were the chief prerequisite for occupying such positions of authority.
Although it originated in Greece after the end of its political freedom, the main driver in shaping a better defined notion of Roman freedom in the third century B.C.E. was Stoicism. Despite their prolific output, the school's founders, particularly Chrysippus of Soli (ca. 280-206 B.C.E.), have left us only a few fragments (see Arnim, 1902-1925). It is notably thanks to Cicero (106-43 B.C.E.) during the period of Roman freedom, then to Seneca (ca. 4 B.C.E.-65 C.E.) under the Empire, that we have a fuller picture of the problem posed by the notion of freedom and its corollary, the concept of fate. We shall restrict ourselves to a concise overview of Stoicism without involving ourselves in a fuller discussion, which persists to this day (see, for example, [START_REF] Bobzien | Determinism and Freedom in Stoic Philosophy[END_REF], contested by Mikeš, 2016).
How can one hope for a space of freedom in a doctrine where fate (fatum) is the predominant force? The grammarian Gellius, in his Noctes Atticae (ca. 150-180), quotes Chrysippus to the best of his recollection:
"Fate," he says, "is an eternal and unalterable series of circumstances, and a chain rolling and entangling itself through an unbroken series of consequences, from which it is fashioned and made up." But I have copied Chrysippus' very words, as exactly as I could recall them, in order that, if my interpretation should seem too obscure to anyone, he may turn his attention to the philosopher's own language. 23How can human action have an effect on the world if fate is allpowerful? For the Stoics, human action does not occur outside fatum, but is one of its constituent elements. The Stoics must therefore broaden their vision of the world to include physics, logic, and ethics as a complex whole in which human reason can grasp the chain of events while forming part of it. Human action can therefore promote virtue when people train themselves to reflect and to perceive the order of things. Human action will then attain Stoic freedom, which Cicero defines as follows in Paradoxa Stoicorum (46 B.C.E., 34):
What then is freedom? Ability to live as you wish. Who then lives as he wishes, if not the one who pursues upright things, who rejoices in duty, whose way of life is considered and planned, who doesn't obey the laws because of fear, but follows and cultivates them because he judges that to be most advantageous, who says nothing, does nothing, in fact thinks nothing unless it is willingly and freely, whose every plan and undertaking proceeds from and returns to him, nor is there anything which has more power for him than his own will and judgement, to whom even that which is said to have the most power, Fortune herself, yields, since, as the wise poet said, she shapes herself according to each man's own character? So this happens only to the wise man, that he does nothing unwillingly, nothing sorrowfully, nothing under duress. 24
Acting freely in this manner thus consists in wanting what fate wants. Clearly, such freedom can be attained only by a handful of wise men, whose qualities are enumerated above. This particular vision of freedom is tied to Stoic philosophy as a whole, and cannot be treated as a separate entity. Accordingly, we have chosen not to elaborate on it as fully as we did with Greek freedom, for such a discussion would take us beyond the scope of our book.
Space also precludes a detailed account of Roman power and the extinction of Roman freedom, topics largely covered by Grimal (1989). After Caesar's assassination in 44 B.C.E., his chief heir Octavius succeeded in defeating Mark Antony and, under the name of Augustus, in becoming the head of a new imperial regime that replaced the Republic in 27 B.C.E. The notion of freedom then disappeared from the Roman Empire.
Freedom in the monotheistic religions
Rather than provide a detailed description of the concept of freedom in monotheistic religions, we shall focus on how they differed from the Greek and Roman notions of freedom.
The first monotheistic religion was Judaism, whose foundational text is the Old Testament (Hebrew Bible). As God is transcendent, immanent, omnipotent, and omniscient, man has no freedom in his presence. For example, Abraham, one of the first Jewish prophets, born in time immemorial, is asked by his god Yahweh to sacrifice his only son Isaac. Without uttering a single protest, Abraham carries out the order exactly as told (Gen. 22:1-19). It is only when he raises his knife to slay his son that an angel stops him and praises him for having followed his God's orders to the letter. Yahweh's prescriptions are legion, and often extremely fierce, but are always carried out with the greatest respect. For instance, when a man is found gathering wood on the Sabbath, Yahweh tells Moses to execute him, and the entire community stones him to death. The only freedom that counts, according to the Bible, is the emancipation of the Hebrews from slavery in Egypt or Babylon. We can conclude that Judaism requires of its faithful a total obedience to God's commandments, without allowing them any true freedom.
The Christian religions, which emerged after the death of Jesus, are also predicated on the single deity, in the form of the august Trinity. They could hardly accommodate the notion of freedom as formulated by the Greeks and Romans. Only God is truly free, but man is endowed with free will. This notion was elaborated in the earliest days of Christianity (second century), then with greater precision by Augustine of Hippo (354-430). In the second book of De libero arbitrio (387-391), the dialogue between Augustine and his disciple Evodius is devoted to free will (II, 1.1):
Augustinus. Do you know for certain that God has given man this gift, which you think ought not to have been given? Evodius. As far as I thought I understood in the first book, we have free choice of will, and we only sin as a result. 25
We must therefore examine how this notion gained ground against Greek and Roman thought, which prevailed before Jesus Christ. Unlike the Greco-Roman gods, a single God must be perfect, omnipotent, and-above all-the only free entity. Accordingly, he cannot be held responsible for evil; only the free will that he has granted to men can lead them to evil. Thus God's freedom and human free will cannot be identical. Yet the quest for a definition of freedom applicable both to humans and to God poses a problem that runs through the entire history of Christianity, for, at the same time, it regards man as being in God's image. In De Trinitate (XIV, 4, 6) Augustine clearly states how that image is to be found in the human soul: Therefore neither is that trinity an image of God, which is not now, nor is that other an image of God, which then will not be; but we must find in the soul of man, i.e., the rational or intellectual soul, that image of the Creator which is immortally implanted in its immortality. 26
The image of God dwells in the human soul, but when man chooses evil, the image loses its beauty and its colors fade. Augustine can then draw a fundamental distinction between free will (liberum arbitrium) and freedom (libertas). Free will becomes the prerequisite for attaining true freedom, which Augustine defines as follows (De libero arbitrio, I, 16, 32):
Then there is freedom, though indeed there is no true freedom except for those who are happy and cling to the eternal law. 27 For Augustine, freedom here denotes a specific situation in which the soul reaches perfection in its accord with God (Trego, 2005). This notion has nothing to do with the Greeks' political freedom but is closer to the Stoic concept, with fatum being replaced by God. Both entities stand above human actions, which must comply with their existence in order to have any value. Just as few Stoic sages want what fatum wants, so few Christians fully accept their God's will.
Over time, the notions of free will and Christian freedom underwent changes, most notably with Thomas Aquinas in the thirteenth century, and Luther and Calvin's Reformation in the sixteenth century. However, the basic outlines of the two concepts provided by Augustine were not substantially altered.
The third monotheistic religion is Islam, which appeared in the seventh century with the prophet Mohammed. In the Koran, the term "freedom" is used only in contrast to slavery. Indeed, at its origin, Islam was in favor of the enslavement of conquered peoples, as we can read here (Quran, 33, 52, transl. Sarwar):
Besides these, other women are not lawful for you to marry nor is it lawful for you to exchange your wives for the wives of others (except for the slave girls), even though they may seem attractive to you. God is watchful over all things. 28 Free will is also part of Islam, naturally in opposition to divine will (Quran, 2, 256, transl. Sarwar):
There is no compulsion in religion. Certainly, right has become clearly distinct from wrong. Whoever rejects the devil and believes in God has firmly taken hold of a strong handle that never breaks. God is All-hearing and knowing. 29 But while many Koranic verses, particularly in Surat 2 (The cow), recognize the legitimacy and existence of the other monotheistic religions (Judaism and Christianity), later jurisprudence abrogated some of these verses, on the grounds invoked in Surat 3 (The family of Amran, 85, transl. Sarwar):
No religion other than Islam (submission to the will of God) will be accepted from anyone. Whoever follows a religion other than Islam will be lost on the Day of Judgment. 30
Islam's position on freedom is ultimately close to that of the other monotheistic religions, despite its divergence on many other points. For a more contemporary view of the concept of freedom in Islam we refer the reader to Étienne (2006) and [START_REF] Madani | Freedom and its concepts in Islam[END_REF], who give very different accounts of what remains a highly topical issue.
In sum, the monotheistic religions rejected the Greco-Roman notion of freedom and introduced the opposition between human free will and God's will. If man chooses to obey the single god, he is therefore regarded as having been emancipated, and therefore as free.
Philosophical and political freedom
The scientific revolution began in the sixteenth century and took hold in the seventeenth and eighteenth centuries. Its chief philosophical leaders and exponents were Bacon, with the Novum Organum (1620), and Descartes, with the Discours de la méthode (Discourse on the method: 1637) and Les méditations métaphysiques (Metaphysical meditations: 1647). 31 What was the revolution's novel approach to freedom?
For Augustine, man's will (his free will) would not be free if it went against God's will. By contrast, Descartes (1647) argues that free will exonerates us from having to be God's subjects. In response to the sixth set of objections from various theologians, philosophers, and geometricians (p. 372), he writes: So human freedom relates to indifference very differently from how divine freedom relates to it. The thesis that the essences of things are indivisible isn't relevant here. For one thing, no essence that can be attributed to God can be attributed in the same sense to any of his creatures. Also, indifference isn't part of the essence of human freedom: we are free when ignorance of what is right makes us indifferent, but we are especially free when a clear perception impels us to pursue some object. 32 (transl. Bennett, 2006) Human freedom thus defined enables Descartes to engage in true scientific work focused on the facts of nature. That is what he does in his Discours de la méthode, for the purpose of managing reason properly and seeking truth in science. This mechanistic vision of science, which prevailed until the late nineteenth century, was challenged most notably by quantum mechanics and the various interpretations of its indeterminism.
While the Cartesian notion defines philosophical freedom emancipated from submission to God, another form of freedom developed in the eighteenth century. We find it in Montesquieu's L'esprit des lois (1748, Book XII, chapter 2, p. 296): Philosophical liberty consists in the free exercise of the will; or at least, if we must speak agreeably to all systems, in an opinion that we have the free exercise of our will. Political liberty consists in security, or, at least, in the opinion that we enjoy security. 33 (transl. Nugent) This political freedom, which is supposed to free humans from insecurity, is very different from the philosophical freedom described by Descartes. However, the two are closely tied. By detaching science from prejudices and false idols-as Bacon so aptly describes them-philosophical freedom is intimately linked to political freedom, which allows people to express this new scientific approach.
It is important to realize the degree to which these currents of thought were threatened by the Church. Galileo's sentencing in 1633 caused Descartes to withhold publication of the Traité du monde et de la lumière (The World, or Treatise on light), which did not appear in print until 1664, and his works were placed on the Index in 1663. Similarly, L'esprit des lois (The spirit of the laws) by Montesquieu was placed on the Index in 1751 and condemned by the Sorbonne.
At the start of the French Revolution, Condorcet, in his Esquisse d'un tableau historique des progrès de l'esprit humain (Sketch for a historical picture of the progress of the human mind, 1794), recognized the importance of Bacon, Galileo, and, above all, Descartes in this liberation of human minds. But the Terror silenced his ideas.
Later, in 1819, Constant gave a speech at the Athénée Royal de Paris on De la liberté des anciens comparée à celle des modernes (The liberty of ancients compared with that of moderns) in which he went one step further and contrasted Greek and Roman freedom with modern liberty (p. 603):
Individual liberty, I repeat, is the true modern liberty. Political liberty is its guarantee, consequently political liberty is indispensable. 34
This is a perfect expression of the liberalism that had been gradually taking shape in Europe since the sixteenth century. Constant proposed the representative system, in which the people give a proxy to a certain number of elected officials because they do not have the time to defend their interests themselves, as is necessary in modern nations. Constant does point out the dangers of relinquishing the right to share in political power, but he does not address the complexity of social hierarchies.
The goal then becomes-through the political authority thus defined-to ensure security and a defense against the enduring threat of violence between individuals. Unfortunately, this ideal collapsed in the disastrous wars of the twentieth century, made possible and infinitely deadlier by scientific progress and liberal industrialization. The guarantees of freedom came up against the worst fascist regimes and the annihilation of peoples deemed undesirable by them. As a result, the legacy of this idea of freedom proved hollow, that of a rationalism too confident in its own principles.
Conclusion
As noted at the outset, we have examined the changing concepts of freedom up to the modern age; in Chapter 5, we discuss which new forms may apply in our time.
However, our brief overview has already shown us the great differences between the concepts, despite the fact that they all seem valid in their respective social contexts. Even more importantly, the paradigms that allow the definition of specific forms of freedom are not erased by later paradigms, for they persist in human memory.
Each of these forms is intimately bound up with the culture and religion of the peoples considered, and it seems hard, if not impossible, to find a universal definition of the term. In fact, freedom is a continually renewed process, not an absolute, unattainable state. The process recycles elements of the earlier concepts of freedom, transforming them so as to adapt them to the latest world-view that is taking root. This requires a response to the problems posed by the new world-view while trying to circumscribe them in a new concept of freedom. At the same time, however, the need to adhere to the world-view will turn the aspiration to freedom into a desire for submission (Avez, 2017, p. 28).
Chapter 3 Astronomy and astrology: once indistinguishable, now clearly separate
Astronomy and astrology have been defined in many ways over the centuries, so we must begin by specifying exactly what we mean by the two terms. The Greek etymology of αστρονομία comprises άστρον and νόμος-literally, "the law of the stars"-which we shall define here as a science that studies the position, movements, structure, and evolution of heavenly bodies. The Greek αστρολογία is composed of άστρον and λόγος-literally, "discourse on the stars"-which we shall define here as a divination art that is based on the observation of heavenly bodies and seeks to determine their presumed influence on earthly events and human fate.
We begin by showing how astronomy and astrology were closely linked in antiquity-including Assyria, Babylon, Greece, and Egypt-to the point of being indistinguishable from each other. We then describe how these initial ties weakened over time, eventually resulting in two opposing approaches to our relations with celestial phenomena-the first becoming a science, the second a divination art. The final section discusses the present revival of scientific interest in astrology, both to condemn it as a superstition and to apply a new statistical approach to it. In conclusion, it is important to examine the reasons for astrology's enduring popular appeal.
Astronomy and astrology in antiquity
An exhaustive account of the relationship between astrology and astronomy lies beyond the scope of our book. Our focus, instead, will be on the prediction of earthly events and human phenomena from the astronomical positions of heavenly bodies either at a time prior to the events or at the birth of individuals.
Apart from Hebrew civilization, all ancient civilizations, whether in Asia, Europe, Africa or the Americas, were polytheistic. They viewed man and earthly events as being closely connected to the universe and, more specifically, to the heavenly bodies. This explains the simultaneous development of early forms of astronomy, to measure and explain the movements of celestial bodies across the sky, and of astrology, to link these movements to the fate of humans on Earth.
We shall not describe all the connections between astronomy and astrology in the different parts of the ancient world, but rather concentrate on a small number of civilizations in which the two were very closely tied. We refer the interested reader to the works of historians who have been exploring this vast field since the nineteenth century, notably including: Auguste Bouché-Leclercq, in his four-volume L'histoire de la divination dans l'antiquité (1879-1882) and L'astrologie grecque (1899); Franz Cumont, who illustrates the religious aspects of astrology in Astrology and religion among the Greeks and Romans (1912) and L'Égypte des astrologues (1937); David Edwin Pingree, whose The Yavanajātaka of Sphujidhavaja (1978), Jyotihsāstra: astral and mathematical literature (1981), and Astral science in Mesopotamia (1999) show how borrowings from Greek astrological treatises introduced a new astrology in India, still taught in its universities; Ulla Koch-Westenholz also examines in detail the evolution of astrological practices throughout the history of Mesopotamia in Mesopotamian astrology: an introduction to Babylonian and Assyrian celestial divination (1995); Francesca Rochberg Halton has published a translation of Babylonian Horoscopes (1998) with commentary and a more general overview of cuneiform texts in Before nature: cuneiform knowledge and the history of science (2016); John M. Steele provides a fuller account of astrology in Mesopotamia, Egypt, Greece, Rome, Byzantium, China, and India in The Circulation of Astronomical Knowledge in the Ancient World (2016).
Taking more specific examples, we shall try to describe the relationships between astronomy and astrology, to show if astrology allows a prediction of future events, and to determine whether such predictions are verified.
Astronomy and astrology in Mesopotamia
These two disciplines, now completely separate, were totally integrated in most ancient civilizations, particularly in Mesopotamia-a term used here to designate different elites living in the region. Their detailed study was made possible by numerous discoveries in the nineteenth and twentieth centuries.
The first step was the decipherment of cuneiform texts during the nineteenth century. This achievement was made possible by the equivalent of the Rosetta Stone for Egyptian hieroglyphs: the Behistun Inscription. Written in Old Persian, Elamitic, and Akkadian, it enabled Rawlinson and his assistants to decipher these three basic languages of Mesopotamia (1861-84).
The second step was the inventory of all the tablets found in the Mesopotamian archival and library sites. Notable examples include the following: in Nippur, some 30,000 tablets and tablet fragments were discovered, the oldest dating from the third millennium B.C.E. [START_REF] Gibson | Nippur, Sacred City of Enlil, supreme god of Sumer and Akkad[END_REF][START_REF] Hilprecht | Mathematical Methodological and Chronological Tablets from the Temple Library of Nippur[END_REF]; in Assur, over 4,300 tablets were found at 50 sites, of which the oldest dates from the ancient Akkadian period, 2330-2150 B.C.E. (Pedersén, 1985); in Nineveh, more than 3,000 tablets and fragments from the Royal Library of Ashurbanipal have been unearthed, dating from 668-627 B.C.E. The third step was the classification of these writings, which, among other things, made it possible to distinguish between astronomical texts, astrological texts, and texts belonging to both categories. This classification also identified many texts as copies of earlier versions, allowing a historical reconstruction.
Of the 1,594 literary and scientific tablets from Nineveh examined by Fincke (2003), for example, 746, or 46.8% of the total, concerned various divination methods, of which only 346 related to astrology and 13 to astronomy, often combined with astrology. While the Mesopotamians, as noted earlier, practiced many other divination methods, the predominant one was astrology.
Interestingly, astronomical research led to ever more sophisticated mathematical modeling of the movements of celestial bodies. [START_REF] Mansfield | Plimpton 322 is Babylonian exact sexagesimal trigonometry[END_REF] have shown that tablet Plimpton 322, dated between the nineteenth and sixteenth centuries B.C.E., is a sexagesimal (base-60) trigonometric table, far earlier than the first trigonometric tables of Hipparchus of Nicaea (180-125 B.C.E.). [START_REF] Ossendrijver | Ancient Babylonian astronomers calculated Jupiter's position's from the area under a time velocity graph[END_REF] has shown that the method developed by the "Oxford calculators" in the fourteenth century to formulate the "mean speed theorem"-the rule that a uniformly accelerating body covers the same distance as one moving throughout at the mean of its initial and final speeds-was already described in texts by Babylonian astronomers written between 400 and 50 B.C.E.
What exactly was the Mesopotamian concept of astrology? It was very different from our modern definition, namely, the use of the configuration of the planets, the sun, and the moon to determine a person's future at birth. For the Mesopotamians, the relevant notion is that of "judicial astrology," which involves the prediction of events concerning the king, the country, and the people (Neugebauer, 1945, p. 39). The belief that the entire universe is causally connected may be found in the Babylonian Diviner's Manual (Oppenheim, 1974, p. 204):
The signs on earth just as those in the sky give us signals. Sky and earth both produce portents though appearing separately, they are not separate (because) sky and earth are related. A sign that portends evil in the sky is (also) evil on the earth.
The correlations between the signs and what they signified gave rise to what they called "omens," which are pairs of independent elements: "on the one hand a sign in the natural world or social environment, and on the other an event in social life" (Rochberg, 2010, p. 19). They combined a scrupulous observation of the movements of the sun, the moon, and the planets with an interpretation of these phenomena as divine signs, which made it possible to establish the norms and anomalies by means of which to find the order of things. This was also an instrument of social control for the kings of these countries. As Guinan (2014, p. 105) says:
Not only do divination and law share the same casuistic form, the sun-god Šamaš is patron of both. During the day he enables justice to be transmitted to the king and through the king into the human arena. At night when he passes into the netherworld he presides over a divine court that issues divinatory decisions.
Once the idea of a parallel between celestial events and earth or human events is accepted, its use and development are a logical consequence.
For instance, Tablet 63 of the Enūma Anu Enlil (so called in reference to the opening words of its prologue: "When the gods Anu, Enlil and Ea designed heaven and earth") discusses the movements of the planet Venus during the 21-year reign of Ammisaduqa (ca. mid-seventeenth century B.C.E.). The tablet, fortunately preserved in the form of copies made at least a millennium later, contains the following omens:
[…] in month X, nth day, Venus in the west became visible: the harvest of the land will prosper.
[…] if Jupiter remains (in the sky) in the morning, enemy kings will become reconciled (transcription by Reiner, 1975, p. 29).
These "celestial omens," as the Mesopotamians called them, comprised a first part describing the observation of a celestial phenomenon and a second part that predicted a terrestrial event to come. The predicted events, however, were not inevitable. The scribes give fuller explanations in their letters to the sovereign. They offer solutions to avert such perils when foretold in omens. One solution is to perform a ritual called namburbi, meaning that the imminent misfortune predicted can be untied like a knot, so that it no longer holds together. When the sovereign's death is foretold, one can, for example, designate a substitute sovereign vested with all the real sovereign's powers. The real sovereign is thus protected and performs none of his duties for the 100-day duration of the omen. This scenario is explicitly described in letters containing such statements as: […] as regards the substitute king: if the farmer my lord agrees, he can go to his fate tomorrow, but if not, he may sit on his throne for the full 100 days (cited by Hunger, 2009, p. 70).
The true sovereign is called "my lord farmer" and "to go to his fate" is a euphemism for "to die."
In the second half of the first millennium B.C.E., new approaches were developed that may be regarded as the precursors of Greek horoscopes, discussed in the following section. Of approximately thirty known texts [START_REF] Sachs | Babylonian horoscopes[END_REF][START_REF] Rochberg | Babylonian horoscopes[END_REF], some have been dated with precision thanks to the astronomical information they contain. The earliest text dates from April 29, 410 B.C.E. However, most offer no predictions about the future of the persons concerned. Only one text dating from April 203 B.C.E. gives some predictions on the person's life after presenting the planetary data at the time of his birth:
He will be lacking property, .… His food (?) will not [suffice (?)] for [his] hunger (?). The property which he had acquired in his youth (?) will not [last (?)]. The 36th year (or: 36 years) he will have property. (His) days will be long. His wife, whom people will seduce (?) in his presence, will .… (or: His wife, in whose presence people will overpower him, she will bring (it) about] (?).) He will have …'s and women. He will see (?) profit. Between travels concerning property […] .... (translation by Rochberg, 1998, pp. 66-67, closely following the initial translation by Sachs, 1952, pp. 57-58).
This text marks a significant departure from the "celestial omens" for several reasons. First, its time horizon is not the immediate future but the longer term, and even the person's total lifetime ("His days will be long"), without, however, specifying its duration. Second, as the text does not concern a sovereign, it does not lend itself to the "namburbi" ritual. Third, it involves "personal astrology" rather than the former "judicial astrology." Events are not dated with precision, except for the date at which the person is supposed to become a property-owner.
The other texts provide only planetary data at the time of birth, but we can interpret them with the aid of surviving general documents. Astronomical texts called Almanacs contained the information needed to complete the horoscopes. Among others, Almanac TCL VI No. 14 found in Assur gives many examples:
If a child is born when the moon has come forth, [then his life (?) will be] bright, excellent, regular, and long. If a child is born when the sun has come forth, [then] […] If a child is born when Jupiter has come forth, [then his life (?) will be] regular, well; he will become rich, he will grow old, [his] day[s] will be long (translation by Sachs, 1952, p. 68).
The terms used here are very similar to those of the previous quotation, and describe the state of each planet at the time of birth. As in the earlier horoscope, no dating of events is provided.
We may therefore conclude that this extension of celestial divination probably served as a model for Greek astrology and that Greek astronomers used the Mesopotamian discoveries. However, Mesopotamian astronomy and astrology were always practiced by priests and remained a religious pursuit.
Hellenistic astronomy and astrology
The Babylonian Horoscopes [START_REF] Sachs | Babylonian horoscopes[END_REF] are wholly consistent with the Greek Horoscopes [START_REF] Neugebauer | Greek horoscopes[END_REF], and Greek astronomy carried on the research undertaken by Mesopotamian astronomers. The two disciplines remained closely linked, but the approach evolved because the two cultures were so different. We shall use "Hellenistic" to refer to a tradition followed more generally in the Mediterranean region from around the third century B.C.E. to the sixth century C.E.
While Mesopotamian astronomy relied largely on arithmetical processes, Hellenistic astronomy was inspired by geometric models.
The astronomer Eudoxus of Cnidus (408-355 B.C.E.), whose work is known to us only through quotations, developed a geocentric system comprising 27 spheres. In the third century B.C.E., Euclid defined the axiomatic bases of geometry, but he is less well known as the author of the Phaenomena [START_REF] Berggren | Euclid's Phaenomena. A translation and study of a Hellenistic treatise in spherical astronomy[END_REF]-a treatise on spherical geometry for the study of celestial phenomena. The heliocentric hypothesis was defended by Aristarchos of Samos (ca. 310-230 B.C.E.), whom [START_REF] Heath | Aristarchus of Samos. The ancient Copernicus[END_REF] calls the "Copernicus of antiquity." Heath provides a detailed, but now partly obsolete, history of Greek astronomy. In the Almagest (ca. 150 C.E.), Ptolemy uses his calculations to show that the positions of the planets can be explained only if the circles on which the planets move are centered not on the Earth but on a point at some distance from it. This model is thus neither strictly geocentric nor strictly heliocentric. It remained in use until Kepler's pure heliocentric system. It should be noted that Ptolemy also wrote an astrological treatise, the Tetrabiblos (ca. 168 C.E.), discussed later.
Let us now examine the differences between Hellenistic and Babylonian astrology as they relate to differences between Hellenistic and Mesopotamian culture.
Mesopotamian "celestial omens" implied a correspondence between human and celestial phenomena, some of whose consequences could be avoided by resorting, for example, to the "namburbi" ritual. In contrast, Greek astrology was anchored in the Stoic concept of life, where chance does not exist and all events are decided by fate. As Bouché-Leclercq clearly explains (1899, p. 31):
But what most notably predestined the Stoics to vouch for astrological speculations and seek demonstrative reasons for them was their unshakeable faith in the legitimacy of divination, of which astrology is merely a particular form. 35
These two doctrines, equally concerned with knowledge and prediction, thus had a reciprocal influence from the outset. We will examine Cicero's view (around 44 B.C.E.) of astrology and the Stoic concept of life in more detail in Chapter 5.
Interestingly, astrologers such as Vettius Valens (120-175) perpetuated this belief that all events are decided by fate. He spent at least twenty-five years (ca. 145-170 C.E.) writing a book in Greek known as Anthologiarum Libri, which contains nearly 125 horoscopes that are broadly correct in astronomical terms. Vettius visibly worked on data collected with reasonable accuracy by himself or his predecessors [START_REF] Neugebauer | Greek horoscopes[END_REF]. The horoscopes were intended as examples for the various astrological theories discussed in the work and are therefore not, strictly speaking, predictive. They aim to show how the theories could have predicted the events, which he presents as having already occurred in the past. For example, in 87 of the 125 horoscopes, he gives the age at death or the occurrence of a major crisis, justifying the prediction with his theories. Rochberg-Halton (1984, p. 117) sums up the difference between Mesopotamian and Greek astrology without emphasizing the underlying role of Stoicism:
The contrast between Babylonian and Greek methods and rationale for prognostication on the basis of celestial events can be expressed in terms of difference between a form of divination on the one hand, in which the deity provides ominous signs in the heavens to be read and interpreted by a specialist, and on the other, a mechanistic theory of physical causality, in which the stars and the planets themselves directly produce effects on earth.
However, Rochberg-Halton is obliged to admit an influence of Babylonian "omens" on the formulation of Greek astrological methods.
The first point to bear in mind is that there are only thirty or so Babylonian horoscopes dating from 410 B.C.E. to 68 B.C.E., versus over six hundred Greek horoscopes, dating from 71 B.C.E. to 621 C.E. [START_REF] Neugebauer | Greek horoscopes[END_REF]. As noted earlier, Greek horoscopes are mainly based on a Stoic conception of life in which all events are fated to occur, whereas the Babylonian "omens" are regarded as signs whose effects can be averted. Despite these major differences, Greek horoscopes seldom actually foretell events. The original documents compiled by [START_REF] Neugebauer | Greek horoscopes[END_REF] sometimes give a fairly vague prediction. In document 3 (p. 17), for example, the astronomical details of the case studied yield the following prediction: Take care for 40 days because of Mars (translation from Grenfell and Hunt, 1904, p. 256).
As we can see, the prediction is extremely vague. Only the literary texts, such as the writings of Vettius Valens, offer detailed predictions-but a posteriori.
We conclude our presentation of Greek astrology with Ptolemy's Tetrabiblos, which differs in many respects from the works of other contemporary astrologers such as Vettius Valens. The Tetrabiblos follows the Almagest, Ptolemy's astronomical treatise. Its main goal is to reformulate astrology as a natural science. Ptolemy no longer regards celestial bodies as capable of telling us about the future, but examines whether they can influence various terrestrial events. He never discusses individual cases, never compiles a horoscope, and never describes an astrologer's daily work. Ptolemy's interest in astrology is visibly confined to theory rather than practice (Riley, 1987). He embodies the transition toward the following period.
Astronomy ascendant, astrology discredited
This new period is marked by many major events that changed the vision of the world.
With the exception of Judaism, the prevailing religious belief in the earlier period had been polytheism. It waned as the new monotheistic religions-Christianity and Islam in their different forms-gained ground. Whereas polytheism lived at ease with astrology, monotheism rejected it, for the divinities that astrology saw in the heavenly bodies could in no way compete with the single God of the monotheists. Nevertheless, astrology continued to attract many monotheists despite the risk that they would be accused of heresy.
From the time of Moses, Judaism strongly rejected astrology. In Deuteronomy (ca. 7 th century B.C.E.), for example, we read:
And when you look up to the sky and behold the sun and the moon and the stars, the whole heavenly host, you must not be lured into bowing down to them or serving them. These the LORD your God allotted to other peoples everywhere under heaven (JPS Tanakh, 1985, Deuteronomy, chapter 4, verse 19).
Many other passages of the Hebrew Bible condemn astrology. Yet, outside the canonical Jewish literature, we find texts in the Judeo-Aramaic literature showing the use of astrology in the final centuries B.C.E. (Greenfield and Sokoloff, 1989).
Christianity, as well, rejected astrology from its earliest days. Augustine of Hippo (354-430), while admitting that he had been tempted by astrologers' doctrines in his youth (Confessiones), later condemned it violently:
But those who are of opinion that, apart from the will of God, the stars determine what we shall do, or what good things we shall possess, or what evils we shall suffer, must be refused a hearing by all, not only by those who hold the true religion, but by those who wish to be the worshippers of any gods whatsoever, even false gods (De civitate Dei contra paganos, ca. 410-427, V, 1, p. 178: translated by Dods, 1871).
He did recognize the influence of the Sun and other celestial bodies on a variety of physical phenomena, but denied their power over the human mind. This approach enabled thirteenth-century theologians to accept the influence of celestial bodies on many human behaviors while continuing to reject their influence on mind and will.
The numerous authors of the period who reintroduced astrology include Robert Grosseteste (1175-1253) and his disciple Roger Bacon (1214-1294) at the University of Oxford, and Albertus Magnus (ca. 1200-1280) in Germany. Cecco d'Ascoli (1269-1327), professor of astrology at the University of Bologna, summarizes this attitude perfectly:
Each [celestial sphere] does not create necessity with its motion,
But rather disposes the human creatures
Through its qualities; if the soul follows these
And abandons judgement it makes itself cowardly:
A slave, a thief, a stranger to virtue,
It divests itself of its noble habitus
(translation by S.B. [START_REF] Fabian | Cecco vs. Dante: correcting the Comedy with applied astrology[END_REF] of Acerba II.i; original Italian: Non fa necesità ciaschum movendo, / Ma ben dispone creatura humana / Per quallità, qual l'anima seguendo / L'arbitrio abandona e fàssi vile / E serva e ladra e, de vertute estrana, / Da sé dispoglia l'abito gentile).
While this approach was accepted by the Church in England and Germany, the Italian Inquisition condemned Cecco d'Ascoli to be burned at the stake and banned the publication of his works. These fortunately continued to circulate after his death, and the Acerba was published as early as 1473.
In Islam, astrology experienced a similar fate. In the Koran (Muhammad, ca. 632-634), Allah says in verse 65 of chapter 27:
Say, "No one in the heavens or the earth knows the unseen except God, and no one knows when they will be resurrected." 40 39 Original Italian text: Non fa necesità ciaschum movendo, Ma ben dispone creatura humana Per quallità, qual l'anima seguendo L'arbitrio abandona e fàssi vile E serva e ladra e, de vertute estrana, Da sé dispoglia l'abito gentile. 40 Original Arabic text:
Those who practice astrology therefore claim the knowledge that Allah alone possesses, and they offer to those who believe them that which they cannot possess. Astrology is therefore totally prohibited and is a major sin. However, by the eighth century, astrology was introduced into the Arab world by the Caliph Al-Mansur (714-775), who employed an astrologer at his court, and Abu Ma'shar (Albumasar) (787-886) generalized the belief in the influence of celestial bodies on human fate in his works, originally written in Arabic but soon translated into Latin.
All monotheistic religions thus rejected astrology, yet all resorted to it in their non-theological practices. For the reasons why it fell out of favor, we must look elsewhere.
In the early seventeenth century, Francis Bacon (1561-1626) formulated a new approach to scientific research (Novum Organon, 1620, I, 19):
There are and can be only two ways of searching into and discovering truth. The one flies from the senses and particulars to the most general axioms, and from these principles, the truth of which it takes for settled and immovable, proceeds to judgement and to the discovery of middle axioms. And this way is now in fashion. The other derives from the senses and particulars, rising by a gradual and unbroken ascent, so that it arrives at the most general axioms at last of all. This is the true way, but as yet untried (translated by L. Jardine and M. Silverthorne, 2000).
Bacon called this approach "induction," but the same term used by [START_REF] Mill | A system of logic, ratiocinate and inductive, being a connected view of the principles of evidence, and the methods of scientific investigation[END_REF] and his successors had a very different meaning, namely, the generalization of particular facts. While the scientists who followed Bacon-most notably astronomers-did not refer to him directly, he visibly influenced them (see, for example, [START_REF] Ducheyne | Bacon's idea and Newton's practice of induction[END_REF]. Induction allowed them to find the general laws governing the movements of celestial bodies: Kepler (1571-1630) discovered the three laws of planetary movement that now bear his name, and Newton completed this synthesis (Philosophiae Naturalis Principia Mathematica, 1687) with his theory of universal gravitation and its six axioms.
As a result, astrology was totally excluded from the scientific field and lost all influence on scientists. Bouché-Leclercq, in his Astrologie grecque (1899, p. III), clearly sums up the status of astrology in his day: I readily observe, indeed with pleasure, that few people are concerned with astrology today. While it lives on in the countries of the Orient, in our parts it belongs to the past and no longer interests anyone but historians. This observation is not totally accurate, for the late eighteenth century witnessed a revival of interest in astrology in some circles in England and the United States (on this topic, we recommend the DVD by Graves, 2014, which contains 42 works on the "English astrological revival" published between 1784 and 1884) and in France (Christian, 1870-71, including a detailed study of Louis XVI's horoscope, pp. 531-49).
In the next section, we discuss how astrology has been subjected to statistical testing in Western countries in modern times, even though it is no longer taught there.
Statistics and astrology in the current period
The current period is marked by the growing importance of statistics, which uses ever-larger volumes of data (big data). As a discipline, however, statistics is difficult if not impossible to define. Kendall (1943, p. 1) goes so far as to state:
Among the many subjects on which statisticians disagree is the definition of their science.
Suffice it to say here that statistics seeks to determine the possible relations between data sets on a given population. The term "seeks" effectively denotes the imperfect nature of the approach. Whereas astrology previously worked on individual cases, it now sets out to work on a population, that is, on an aggregate of individual cases. We begin with the tests performed to verify astrological arguments without involving astrologers. We then discuss the tests conducted on predictions made by astrologers themselves.
One of the first researchers to carry out tests to verify astrological claims was Paul Choisnard. A graduate of the École Polytechnique and author of many books on "scientific astrology," Choisnard tested his theories on his many friends and acquaintances. This introduced bias into his data, and his proofs cannot be confirmed by reproducing his experiments.
Adopting a stricter approach, Michel Gauquelin (1928-1991) and Françoise Gauquelin (1929-2007) took all necessary precautions to obtain unbiased samples, such as: selecting a phenomenon that may be repeated on a regular basis; studying the phenomenon several times using new data; and performing statistical tests to verify whether certain hypotheses are valid. For example, in a book published in 1960, Les hommes et les astres (Of men and celestial bodies), Michel Gauquelin used 25,000 cases taken from civil registration records in Western European countries, including 3,142 military leaders, 3,305 scientists, 1,485 sports champions, and so on. Those born when specific planets occupied particular positions (rising or culminating) exhibit a positive correlation between their career success and those positions: Mars and Jupiter for military leaders, Saturn and Mars for scientists, Mars for sports champions, and so on. But correlation is in no way synonymous with causality: the presence of hidden factors can explain a strong correlation or covariance between two variables that are not causally linked. For instance, a detailed study of French population cohorts born between 1931 and 1935 [START_REF] Courgeau | Interaction between spatial mobility, family and career life cycle: A French survey[END_REF] showed that the classic curve linking mobility to age disappears entirely if we factor in the various stages of the person's life in society (family stages, economic stages, political stages, and so on). We must therefore look for all the causes that may generate the correlation between variables, and Gauquelin's analyses are unfortunately limited by the available sources (civil registers and hospital records).
Gauquelin's findings sparked fierce debate in the scientific and astrological communities, and were subjected to a number of tests, particularly to verify the Mars effect among athletes. The fact that the effect did not apply to the entire population examined but mainly to its elite is problematic. Moreover, the effect is very visible for champions born before 1950, but tends to disappear for those born later. Gauquelin explains this by the medical procedures disrupting the natural birth process, but his argument is hardly convincing. For our purposes, the correlations should be established on the basis of a well-defined theory. However, no genuinely scientific theory has been proposed for this effect [START_REF] Good | Book review of: Birth times: a scientific investigation of the secrets of astrology[END_REF], and the cosmobiological theory suggested by Gauquelin fails to provide a sufficiently coherent explanation [START_REF] Eysenck | Astrology. Science or superstition[END_REF].
Gauquelin himself was well aware that his work had, in fact, nothing to do with astrology. In The scientific basis of astrology (1969, p. 145), he writes:
It is now quite certain that the signs in the sky which presided over our birth have no power whatever to decide our fates, to affect our hereditary characteristics, or to play any part however humble in the totality of the effects, random or otherwise, which form the larger part of our lives and mould our impulses to actions.
There is a great difference between an astrological effect that may emerge from statistical tests and a practice that leads to the formulation of predictions. In any event, the planetary effects detected by Gauquelin are far too weak to be of any value to astrologers.
The psychologist Hans Eysenck (1916-1997) was initially attracted by the Gauquelins' results and sought to verify the hypothesis of a link between the two main dimensions of personality-extraversion/introversion and emotionality/stability-and astrological signs at birth. The earth signs (Taurus, Virgo, and Capricorn) are regarded as practical and stable, whereas the water signs (Cancer, Scorpio, and Pisces) are emotional and intuitive. An initial article published with [START_REF] Mayo | An empirical study of the relation between astrological factors and personality[END_REF] gives the results of a survey of 2,324 persons: the results fully corroborate the hypothesis.
However, in his book on astrology co-authored with David Nias (1982), Eysenck reported that when he studied the personality profiles of 1,160 children aged 11-17, with their known birth dates, no effect was visible. Looking at the results of other, similar studies in which respondents' astrological knowledge was tested, one can establish with certainty that prior knowledge of astrology influences the results of such studies. Eysenck and Nias therefore conclude (p. 215):
In none of the more convincing studies we have surveyed is there any indication that we are dealing with an effect that is decisive enough to be of practical importance.
Astrologers, who had initially taken a very positive attitude toward Eysenck, turned against him completely once they saw his findings. They regarded themselves as having been betrayed by the psychologist.
As a final example of verification of astrological hypotheses without involving astrologers, we can cite the study by Geoffrey Dean and Ivan William Kelly (2003) on "time twins," i.e., persons born at nearly the same time and in nearby localities but of different parents. From the National Child Development Study in England, they extracted a sample of 2,100 time twins born in Greater London between March 3 and 9, 1958. In this period, Saturn was exactly near the horizon-a strong position for astrologers. Dean and Kelly had 110 characteristics measured at ages 11, 16, and 23, most of which are included in horoscopes. The resemblance between time twins for each characteristic is measured by the serial correlation between successive pairs of individuals. According to astrology, we should find a strongly positive serial correlation for these time twins. For the 110 characteristics examined in the aggregate, the study found a mean serial correlation of -0.003, not significantly different from zero. Similarly, when treated separately, the 110 serial correlations failed to support astrological assumptions.
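As a purely illustrative aside, the serial-correlation measure used by Dean and Kelly can be sketched in a few lines of code; the data below are synthetic and the variable names hypothetical, since their original dataset is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one measured characteristic of 2,100 "time twins",
# ordered by birth time (the real data are not reproduced here).
scores = rng.normal(size=2100)

# Serial correlation between successive pairs of individuals:
# each person's score is correlated with the next person's score.
serial_corr = np.corrcoef(scores[:-1], scores[1:])[0, 1]
print(round(serial_corr, 3))  # close to 0 when successive individuals are unrelated
```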
The preceding statistical tests did not directly involve professional astrologers but focused only on the astrological predictions implicit in individual births. We now turn to tests based on predictions by astrologers themselves.
For such tests to be valid, they must be conducted on a double-blind basis, so that neither the astrologers nor the tester know the answers to the questions asked.
To our knowledge, the first to have carried out such a test is Vernon [START_REF] Clark | Experimental astrology[END_REF]. In fact, he performed three tests, of which only the last was double-blind. For this third test, he sent ten pairs of horoscopes to thirty astrologers. The first horoscope concerned a person severely handicapped from birth with cerebral palsy. The second concerned a person with above-average intelligence who had never suffered severe illness. The astrologers, who had no information on these persons' lives, were asked to identify the horoscope of the cerebral palsy patient, which Clark himself did not know. The Student's test applied to the results suggested that there was one chance in a hundred of their having been obtained at random. But the small number of horoscopes tested (ten) weakened the results.
In the twenty-five years that followed, several roughly similar tests were conducted-with positive or negative results. Gauquelin (1973), for example, gave fourteen astrologers the horoscopes of three celebrities and asked them to identify the three. The result was very negative: a random choice would have yielded better results. In any event, these tests always involved a small number of astrologers and few horoscopes.
In 1985, Carlson published a new double-blind test on astrology in Nature. It was intended to assess both astrologers and the volunteers who had provided their dates of birth. The volunteers also responded to the California Personality Inventory (CPI), a test that assigns an individual score for each of eighteen typical situations such as passiveness, femininity, and masculinity (CPI profile). The astrologers were asked to describe the volunteers' personalities on the basis of celestial body positions at the time of birth. The volunteers were given three descriptions of their personality, one supplied by the astrologer, two picked at random. They were unable to recognize themselves in the astrologer's description. They were also presented with three CPI profiles: their own and two others chosen at random. Again, they were unable to recognize their own profile. The astrologers who had the volunteers' horoscopes at their disposal failed to recognize their CPI profile among the three offered. Carlson concluded that astrologers were unable to predict an individual's personality from his or her horoscope better than at random.
We believe this overly peremptory verdict should be nuanced. First, the initially planned size of the test population had to be sharply reduced because of refusals to participate. Of the originally planned group of ninety astrologers, fewer than twenty-eight actually took part (the author does not even give their final number). Furthermore, the fact that the volunteers failed to recognize their CPI profiles more accurately than the astrologers casts doubt on the validity of such profiles for non-psychologists.
Several later studies have tried to overcome these difficulties while preserving the double-blind method. McGrew and McFall (1990), for example, involved astrologers in the choice of test questions in order to obtain a profile more consistent with astrological practice. But the number of astrologers participating in the study was still low (only six). The similar study by [START_REF] Nanninga | The Astrotest. A though match for astrologers[END_REF] tried, in addition, to increase the number of astrologers participating in the test (44). The results of both tests were identical to Carlson's: astrologers are incapable of predicting an individual's personality better than at random. Many astrologers responded very negatively to all these statistical studies-whether or not they were invited to participate-arguing that they involved an excessive rationalization of something that belongs to the order of subjectivity [START_REF] Mcritchie | Cognitive bias in the McGrew and McFall experiment: Review of "a scientific enquiry into the validity of astrology[END_REF][START_REF] Mcritchie | Clearing the logjam in astrological research. Commentary article on Geoffrey Dean and Ivan Kelly's article 'Is astrology relevant to consciousness and psi?[END_REF]. For instance, McRitchie (2014, p. 34) writes:
Astrology is concerned with providing descriptions of one's personal potentials and how to make the best choice at different stages in life. This is not the same sort of information that is generated by psychological tests, which typically only measure the dimensions of psychology traits.
By denying any possibility of assessing the validity of astrology, in particular by means of psychological tests and statistics, he makes it unattackable but, at the same time, he drastically weakens its potential.
Dean and Kelly's response (2017) to his second article, published in 2016, offers arguments that further confirm this weakness.
For this purpose, they present meta-analyses of tests carried out by astrologers as well as by skeptics. As noted earlier, these tests were performed on small samples, undermining the reliability of their results. To be more precise, we can say that the 95% fluctuation interval is approximately equal to
\[
\left[\, p - \frac{1}{\sqrt{n}} \;;\; p + \frac{1}{\sqrt{n}} \,\right],
\]
where n is the sample size and p the estimated proportion of successes. A meta-analysis of a larger number of tests conducted with small samples yields results that are far more reliable than those obtained with each analysis taken separately. [START_REF] Dean | Meta-analyses of nearly 300 empirical studies: putting astrology and astrologers to the test[END_REF], who worked on 300 results of empirical studies, gives various examples of such analyses. First, to avoid all conflict with astrologers, he rejected no test. He collected 54 studies in which astrologers had to match horoscopes with individuals, as in Carlson's study, for a total of 742 astrologers and 1,407 horoscopes. A meta-analysis of these results confirms that astrologers are no better at predicting than a random choice. Similarly, in eighteen studies involving 650 volunteers and 2,100 profiles, the volunteers were unable to pick out their own profiles among those of other participants. This result too confirms Carlson's with many more volunteers.
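To make the effect of sample size concrete, here is a minimal sketch (the success rates and sample sizes are hypothetical, echoing the figures quoted above) comparing the approximate 95% fluctuation interval of a single small matching test with that of a pooled meta-analysis:

```python
import math

def interval_95(p, n):
    """Approximate 95% fluctuation interval [p - 1/sqrt(n), p + 1/sqrt(n)]."""
    half_width = 1.0 / math.sqrt(n)
    return round(p - half_width, 2), round(p + half_width, 2)

# A single small test, e.g. 10 horoscopes matched at the chance rate of 50%.
print(interval_95(0.5, 10))     # roughly (0.18, 0.82): almost uninformative

# Pooling many studies, e.g. 1,407 horoscopes in total.
print(interval_95(0.5, 1407))   # roughly (0.47, 0.53): narrow enough to be conclusive
```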
These studies contradict McRitchie's conclusion (2016, p. 175) that "[…] there is no reliable evidence against astrological theory and practice," for they demonstrate that by using the right statistical tools one can prove that astrologers are hardly capable of predicting a person's character with the sole aid of his or her horoscope.
Why is the belief in astrology still so strong?
The level of belief in astrology is still far from negligible. In France, a series of five surveys from 1982 to 2000 (Boy, 2004/5) shows that about 40% of the population believes in the explanation of personal character by astrological signs, and some 25% believe in predictions based on astrological signs (horoscopes), with weak variations from year to year. In England, Canada, and the United States, Gallup surveys conducted between 1975 and 2005 show that approximately 25% of adults answered "yes" to questions such as "do you believe in horoscopes?" [START_REF] Campion | How many people actually believe in astrology? The Conversation[END_REF]. It is worth noting how close the percentages of believers in horoscope predictions are in the different countries observed.
But can this type of survey accurately measure popular belief in astrology? Many authors are doubtful and have tried to ask more detailed questions. Martin Bauer and John Durant (1997), for example, use a British survey of 1988 that began by asking "Do you sometimes read a horoscope or a personal astrology report?". Respondents who answered "yes" were then asked "How often do you read a horoscope or personal astrology report?" and "how seriously do you take what these reports said?". While 73% replied "yes" to the first question, only 44% replied "often" or "very often" to the second, and just 6% took the results seriously or very seriously. Clearly, the wording of the question strongly influences respondents' answers.
Another way to measure popular belief in astrology is to use the "Google Trends" tool, which shows the changes in the number of searches for different terms since a given date-here, "astronomy," "astrology," and "horoscope." The curve does not show an absolute number but a ratio to the sum of the search volumes of all possible queries for the terms. Figure 1 shows the curve for worldwide searches for the three terms. The first conclusion is that interest in horoscopes is three times as great as interest in astrology and ten times as great as interest in astronomy, showing the predominance of the first term in the current period. While the search interest for "horoscope" peaks every January, when users are looking for predictions for the year ahead, there have been few variations for the rest of the year since 2004: a dip from 54% to 45% between February 2004 and August 2008, followed by a slight upturn from September 2008 to 68% in April 2014, and a near-stagnation until August 2018. The peak observed on Jan. 1, 2011 is linked to the announcement of a change in the horoscope, with all zodiac signs shifted by one position because the Earth's axis has drifted by precession over the past 3,000 years, a change proposed but not accepted by many practitioners. The search interest for "astrology" declined slightly from 30% in February 2004 to 15% in August 2011 then stalled, with persistent January peaks, but less sharp than those for "horoscope." The search interest for "astronomy" decreased from 18% to 4% in August 2011 then stagnated, but without January peaks.
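Such a comparison can be reproduced programmatically. The sketch below assumes the third-party pytrends package, an unofficial Google Trends client whose interface and rate limits may change; the exact figures also depend on when the query is run.

```python
from pytrends.request import TrendReq  # unofficial Google Trends client (assumed installed)

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    ["astronomy", "astrology", "horoscope"],
    timeframe="2004-01-01 2022-01-01",  # worldwide searches since Google Trends began
)

# Monthly relative search interest (0-100, scaled to the most searched term).
interest = pytrends.interest_over_time()
print(interest[["astronomy", "astrology", "horoscope"]].head())
```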
It is also interesting to see the popularity of the terms "astrology" and "astronomy" on YouTube, the world's most popular video site. The results are shown in figure 2. The curve shows the contrasting change in the search for videos whose title contains one of the two words: a decline from near 40% in January 2008 to 10% in January 2022 for "astronomy," versus a rise from 27% in January 2008 to near 100% in recent years for "astrology," with an intersection in mid-2013.
It is interesting to see the countries in which astrology predominates. Map 1 shows the distribution: Astrology ranks first in India, Nepal, Sri Lanka, the United Arab Emirates, Pakistan, and North America, while Europe (excluding the United Kingdom, Norway, and Sweden) and South America prefer astronomy.
We can conclude that, while the belief in astrology and horoscopes remains very strong in all countries, it is stronger in North America, India, Australia, and some other countries. As noted earlier, it is even still taught in Indian universities.
How, then, can we explain this contradiction between the rejection of astrology and horoscopes based on scientific proof, and the persistent popular belief in the two?
To begin with, astrologers have developed highly effective communication resources, including: major, frequent international conferences, such as the United Astrology Conference (UAC), a five-day event that has attracted over a thousand astrologers from more than forty countries since 1986; numerous international organizations such as the International Society for Astrological Research; many training centers for young astrologers, more than 100 of which are in the United States (see U.S. astrology classes: https://www.findastrologer.com/astrology-education/local-astrologyclasses/); and, as noted earlier, websites offering not only numerous videos but also abundant information on astrologers and their qualifications.
Many sociologists and psychologists have also tried to understand the interest in astrology. Their studies, however, date from before 2005 and so, unfortunately, cannot shed light on the recent changes that we have documented with Google Trends.
A very detailed study was attempted by Daniel Boy and Guy Michelat [START_REF] Boy | Croyances aux parasciences : dimensions sociales et culturelles[END_REF] and extended by Daniel Boy (2004-2005), who analyzes the results of twenty years of surveys on the subject in France. Boy finds no major change in belief over the two decades studied, and concludes that "supply and demand for parascience" remained "fairly constant" (Boy, 2004-2005, p. 59) during the period. However, we observed earlier that supply and demand have risen sharply worldwide since 2013. Lastly, examining the social groups most inclined to believe in these phenomena (women, young people, the middle classes, and the non-religious), Daniel Boy shows that the difficulty in mastering one's future may be one explanation for such beliefs.
Bauer and Durant conducted a similar study in Britain in 1997. Their conclusion (p. 69) is very close to that of the French study: indicated, may be experiencing difficulty in accommodating their religious feelings to life in an uncertain post-industrial culture. Paradoxically as it may seem, therefore, we conclude that popular belief in astrology may be part and parcel of late modernity itself.
They add that the belief in astrology is not due to mere ignorance but reflects a "deeper opposition both to the authority of science and to a certain conception of modernity" (p. 69).
A more recent study in the United States (Pew Research Center, 2009) offers another similar portrait of persons who believe more strongly than others in astrology: women, young people aged 18-29, the least educated, and the non-religious. The study adds groups that were not in the French and British surveys: Hispanics, African-Americans, Democrats, and liberals. Thus, in both Europe and the United States, sociological analyses converge toward a very similar model before 2010. But has the model persisted? Only analyses using Google Trends hint at a possible change that will require deeper investigation by means of new sociological surveys.
Conclusion
In this chapter, we believe that we have provided a negative answer to the question "Can astrology predict a human life?". Mesopotamian astrology allows a person to avert an outcome that, being ultimately not inevitable, is unpredictable. Even Hellenistic astrology, which the Stoics basically regarded as predictive, manages to foretell few events without the possibility of determining whether or not they have actually occurred. Lastly, statistical astrology, by its very definition, cannot predict any individual event, as Gauquelin clearly concluded.
In contrast, astronomy, which was indistinguishable from astrology in ancient times, became independent of it during the Renaissance and acquired the status of a science with the works of Kepler, Newton, and others.
Chapter 4 Eugenics and the theory of inheritability
If our life cannot be read in the heavenly bodies that preside over our birth, could it be that our heredity consists of the book in which our fate is already partly written? That is what Galton (1822-1911) [START_REF] Galton | The eugenic college of Kantsaywhere[END_REF] and his successors tried to demonstrate with eugenics. In this chapter we show the extreme misuses resulting from this conceptually flawed approach.
Eugenic research on heredity actually traces its roots to Galton's first cousin, Charles Darwin, whose On the Origin of Species by Means of Natural Selection (1859) provided the main inspiration. Darwin's theory of natural selection rests on a comparison between domesticated species and wild species. Its starting point is the observation of artificial selection by livestock breeders and farmers to obtain domesticated animal races or plants. Darwin deduces the existence of a natural selection operating on all living beings, which prompts him to state (p. 170):
Whatever the cause may be of each slight difference in the offspring from their parents-and a cause for each must exist-it is the steady accumulation, through natural selection, of such differences, when beneficial to the individual, that gives rise to all the more important modifications of structure, by which the innumerable beings on the face of this earth are enabled to struggle with each other, and the best adapted to survive.
The theory of the origin of species, which had a momentous impact on the science and culture of its time, was continuously elaborated later into a general theory of evolution. The focus of this chapter, however, is not evolutionary theory but eugenics. While we can draw a connection between the two despite their major differences, we shall not dwell further on the theory of evolution itself.
Galton read the first edition of On the Origin of Species, which he saw as opening up "an entirely new province of knowledge" (F. Darwin and S. Darwin, 1903, p. 129). His reading played a decisive role in awakening his interest in heredity. In an article published in 1865 on "Hereditary talent and character," Galton returns to the comparison between domesticated species and wild species. He recognizes that, for domesticated species, the physical structure of future generations is controlled by the breeder's objectives. He deduces that human physical, mental, moral, and intellectual characteristics must also be predictable and controllable. Galton supports his argument with many examples of heredity among scientists, writers, painters, musicians, chancellors, and others. He concedes that this does not constitute proper proof of his deduction (p. 158):
All that I can show is that talent and peculiarities of character are found in the children, when they have existed in either of the parents, to an extent beyond all question greater than in the children of ordinary persons.
However, this does enable him to set out the principle of what he later called eugenics (p. 319):
No one, I think, can doubt from the facts and analogies I have brought forward, that, if talented men were mated with talented women, of the same mental and physical characters as themselves, generation after generation, we might produce a highly bred human race, with no more tendency to revert to meaner ancestral types than is shown by our long established breeds of race-horses and fox-hounds.
Heredity can thus make it possible to predict the future of a lineage, just as astrology had made it possible to predict a person's future, but in very different conditions. It is no longer a matter of observing stars and planets, but of observing and acting upon lineages of human beings in order to have an effect on their descendants. But how should one go about this? That is the main problem posed by eugenics.
This initial text already discusses the notion that ancestral contributions are distributed in a geometrical series (p. 326):
The father transmits, on an average, one half of his nature, the grandfather one fourth, the great-grandfather one eighth; the share decreasing step by step, in a geometrical ratio, with great rapidity.
Galton elaborated on these points with great constancy in his later writings.
We begin by examining in fuller detail how eugenics took hold in the nineteenth century under Galton's guidance and developed throughout the first half of the twentieth century with the establishment of the Nazi and fascist regimes. Next, we look at the reasons for its apparent rejection after World War II and its transformation into a heredity-based theory aimed at controlling the growth and quality of the world population.
Galton establishes eugenics
After 1865, Galton committed himself to an intensive research program centered on heredity and set out to demonstrate its value by statistical means. In the process, he moved away from Darwin's thought-focused on the origin of species and natural selectionwith the aim of theorizing heredity in statistical language.
In 1869, he published Hereditary Genius. The first edition had little success, as he noted twenty-three years later in the "Prefatory chapter to the edition of 1892." Darwin, however, wrote a highly laudatory letter to Galton: "I do not think I ever in all my life read any thing more interesting and original …" (Pearson, 1924, p. 6). While the hereditary transmission of human physical capacities was accepted at the time, there was little support for the transmission of intellectual capacities. Even Darwin remained convinced that the main distinguishing criteria between human beings were "zeal and hard work."
To demonstrate that intellectual capacities can be transmitted in the same way as physical capacities, Galton generalizes the mathematical law of the frequency of errors, which [START_REF] Quetelet | Lettres à S. A. R. le Duc Régnant de Saxe-Cobourg et Gotha sur la théorie des probabilités[END_REF] had applied to human physical traits such as chest size and waist size. By analogy, Galton applies the law to human intellectual faculties such as talent and intelligence. He assumes that the statistical method appropriate for physical characteristics is valid for intellectual ones. The justification for this argument, however, is not very clear, as Galton himself recognizes in a later work (Natural Inheritance, 1889, p. 56):
It had been objected to some of my former work, especially in Hereditary Genius, that I pushed the application of the Law of Frequency of Error somewhat too far. I may have done so, rather by incautious phrases than in reality; but I am sure that, with the new evidence now before us, the applicability of that law is more than justified within the reasonable limit asked for the present book.
Pursuing this approach, he constructs a scale for each intellectual faculty measured by the reputation of the members of these classes, for example, their income or selections made by historians or critics. He then examines their relatives to determine the percentages of "eminent" kin at different degrees and thus show the transmission of their capacities. Figure 3.1 illustrates his findings for the ancestors and descendants of English judges since the Reformation. According to the table, 36% of their children are "eminent" versus 9.5% of their grandsons and 1.5% of their great-grandsons. By contrast, 26% of their fathers are "eminent" versus 7.5% of their grandfathers and 0.5% of their great-grandfathers. The geometric law predicted in Galton's 1865 article is thus verified, but only approximately. And this study, too literary in its approach, is still far from providing a fully satisfactory statistical demonstration of the role of heredity in the transmission of intellectual characteristics.
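The word "approximately" can be checked with elementary arithmetic: under a strict geometric law, each generational step should divide the share of "eminent" kin by the same factor. A small sketch using the percentages just quoted (purely illustrative):

```python
# Percentages of "eminent" kin of English judges, as quoted above.
descendants = [36.0, 9.5, 1.5]   # children, grandsons, great-grandsons
ancestors = [26.0, 7.5, 0.5]     # fathers, grandfathers, great-grandfathers

def step_ratios(shares):
    """Ratio of each generation's share to the preceding one."""
    return [round(b / a, 2) for a, b in zip(shares, shares[1:])]

print(step_ratios(descendants))  # [0.26, 0.16]: decreasing, but not by a constant factor
print(step_ratios(ancestors))    # [0.29, 0.07]: the geometric law holds only roughly
```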
In 1873, Galton published an article on "Hereditary Improvement" in which he further developed his theories. One of his proposals was that a society could be established to perform various scientific tasks such as surveys on human heredity or even maintain a register of eminent families. Darwin, while interested in the article, was rather skeptical about the society and the register (Pearson, 1930, vol. II, p. 176):
But the greatest difficulty, I think, would be in deciding who deserved to be on the register. How few are above mediocrity in health, strength, morals and intellect; and how difficult to judge on these latter heads.
Nevertheless, he remained convinced that "the object seems a grand one" and continued to engage in substantial correspondence with Galton until his death in 1882.
In fact, it was not until 1883 that Galton introduced the term "eugenics" in a work entitled Inquiries into Human Faculty and its Development (pp. 24-25):
[This book's] intention is to touch on various topics more or less connected with that of the cultivation of race, or as we might call it, with "eugenic" questions, and to present the results of several of my own separate investigations.
In a footnote he indicates that the term is derived from the Greek eugenes, which means that the individual is "hereditarily endowed with noble qualities." He proposes that individuals with "good" genes should be tested for these qualities and should be rewarded if they marry young and have children early.
To put this theory on a firm foundation, Galton developed the concepts of statistical regression and later of correlation in the 35 years after his 1865 article. We refer the interested reader to the chapter on "The English Breakthrough: Galton" in [START_REF] Stigler | The history of statistics. The measurement of uncertainty before 1900[END_REF]. The author presents and discusses Galton's statistical approach, whereas our focus here is its application to the eugenic doctrine.
In 1891, at the Seventh International Congress of Hygiene and Demography, Galton delivered a Presidential Address (1892) in which he set out a more detailed view of the doctrine without ever naming it. He clearly expressed his interest in the relative fertility of different social classes and "races." Using the example of bees, where the workers are sterile, he envisaged the sterilization of the members of a human community of low social value. He concluded that the improvement of future human generations is largely, albeit indirectly, under our control. His call had little impact on the demographic community.
These same demographers, however, began to show concern for fertility by social class in the Sixth Session of the International Institute of Statistics in 1897. While standard statistics did not make it possible to classify population movements by social class, their detailed existence for each urban quarter enabled Jacques Bertillon (1898) to characterize quarters by the degree of comfort of the majority of their inhabitants. He was thus able to compile a grid ranging from the poorest to the richest quarters for four major European capitals: Paris, Berlin, Vienna, and London. Comparing their total fertility rates (ratio of births to the number of women of reproductive age), he obtained table 3.1.
Source: Newsholme and Stevenson, 1906, p. 66.

The results, which are fully consistent for all four capitals, show that the fertility of the richest quarters is roughly one-third that of the poorest quarters. A similar study [START_REF] Newsholme | The decline of human fertility in the United Kingdom and other countries as shown by corrected birth-rates[END_REF] on London metropolitan boroughs conducted for the 1901 census yielded comparable results. This evidence of fertility differentials led a growing number of persons to reconsider their attitude toward eugenics. Galton also took advantage of these findings to give fresh impetus to his theory at the turn of the century.
His first step was to found the journal Biometrika with Weldon and Pearson in 1901, for the initial purpose of providing a mathematical basis for Darwin's theory of evolution. Next, in 1904, he sought to establish a laboratory with solid credentials to study eugenic issues. To this end, he set out to develop a fuller definition of eugenics, involving extensive discussions with University of London faculty members. The following definition was finally approved in October 1904:
The term National Eugenics is here defined as the study of the agencies under social control that may improve or impair the racial qualities of future generations either physically or mentally.
We can easily see how this definition differs from the one provided in 1883, with the emphasis on "agencies under social control" and the goal of improving racial qualities of future generations. Also in 1904, the Eugenic Record Office was established under Galton's supervision; in 1907 it became the Francis Galton Eugenic Laboratory, headed by Karl Pearson (1857-1936). Thus did eugenics enter British academia, and many eugenics laboratories were set up in other countries as well. In 1904, Germany founded the first eugenics journal, the Archiv für Rassen- und Gesellschaftsbiologie (Journal of Racial and Social Biology). In 1909, the first issue of Eugenics Review was published by the Eugenics Education Society, founded in 1907 with Galton as its first president. The year 1909 also saw the founding of the Swedish Society for Racial Hygiene. In 1910, the Eugenics Record Office was established in the United States. Eugenics thus took root rapidly throughout the world.
Despite this success, we must now consider the dangers of such a doctrine, for Galton viewed it as a new religion that must be introduced into the national conscience (Galton, 1905, p. 50). In the same article, he spelled out the more specific goal that he had in mind for it (p. 47):
The aim of eugenics is to bring as many influences as can reasonably be employed to cause the useful classes in the community to contribute more than their proportion to the next generation.
While he clearly defined his objective, he did not specify the means to attain it. As noted earlier, these "useful classes" were already the least fertile at the time. It is therefore hard to see how their fertility could increase, given that it was linked to the birth control desired by those very classes and probably almost unknown to the poorer classes. The only means to lower the fertility of the poorer classes would be to give them access to birth control, but would that be enough to ensure that their fertility rate would fall below that of the well-to-do classes? A final possibility might be to make the poorer classes sterile through means imposed by the rich.
Galton clearly envisaged the latter solution in his novel, The Eugenic College of Kantsaywhere (1911), first published on the centenary of his death in 2011. He describes a fictional society whose young citizens are tested to determine their place in society. The fittest must quickly choose a wife from among the women who obtained the same grade in the examination, and they must procreate promptly. The least fit are forbidden to reproduce, and if they disobey the ban, they are cast out of the society. This scenario foreshadows the practices applied by Nazi and fascist regimes a quarter-century after Galton's death.
The possible connections between Galton's heredity theory and Mendel's laws (1865) are also worth examining. First, it should be noted that Mendel's laws were not rediscovered until 1900 by three European botanists: De Vries in Holland, Correns in Germany, and Von Tschermak in Austria. After 1900, there are few references to Mendel in Galton's publications or correspondence. In a letter to Karl Pearson, he writes:
By the way I find that I have the honour of being born in the same year, 1822, as he was. (Pearson, 1930, p. 335)

Note the humor in this comment on their matching birth years, with no allusion to the importance of Mendel's work. However, the very first volume of the journal Biometrika, founded by Galton in 1901, contains an article by Raphael Weldon (1902) on Mendel's laws-albeit highly critical as to their universal validity-followed by many others, mostly by Weldon, in later volumes.
It is worth taking a more detailed look at the controversy that lasted from 1900 to 1906, the year of Weldon's death, between "ancestrians" and "Mendelians" [START_REF] Froggatt | The 'Law of ancestral heredity' and the Mendelian-ancestrian controversy in England 1889-1906[END_REF]. The "ancestrians" included Karl Pearson and Raphael Weldon, who supported Galton's "ancestral law of heredity," stating that ancestral contributions are distributed geometrically. Among the "Mendelians," William Bateson defended Mendel's laws, arguing that only the contributions of direct parents influenced children. In 1910, Bateson founded the Journal of Genetics with Reginald Punnett. Galton, who was 78 in 1900, stayed out of the dispute. Readers wanting a general view of genetic definitions and terminology will find them in the Annex to this chapter.
The controversy pitted two different approaches to genetics against each other. The "ancestrian" approach was based on a statistical regression analysis of a population. The aim was not to predict an individual's offspring, but to use a regression equation to link the presence of a character in an individual to its presence in ancestors of earlier generations. The geometrical series would then be roughly verified by observations. Weldon drew the following conclusion:
The degree to which a parental character affects offspring depends not only upon its development in the individual parent, but on its degree of development in the ancestors of that parent (Weldon, 1902, p. 248).

This is indeed a statistical distribution that does not imply an actual contribution of earlier generations. Moreover, it applies to continuous characteristics such as an individual's size.
By contrast, the "Mendelian" approach sought to predict the characteristics of the descendants of a given individual. This was an exercise not in statistics but in probability: what is the probability that a child will have a given characteristic that one of his two parents possesses? This enabled Bateson to respond to Weldon's article as follows:
The next step was at once to defend Mendel from Professor Weldon. That could only be done by following this critic from statement to statement, pointing out exactly where he has gone wrong, what he has misunderstood, what omitted, what introduced in error (Bateson, 1902, p. viii).
Here, we are dealing with a probabilistic biological law that links a characteristic observed in a child to the same characteristic in its parents. The law is applied to binary data, as the individual does or does not possess the characteristic.
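As an illustration of this probabilistic reading (a schematic single-locus example, not drawn from Bateson's own text), the chance that a child shows a characteristic carried by a dominant allele A can be computed directly from the parents' genotypes:

```python
from itertools import product

def offspring_genotype_probs(parent1, parent2):
    """Genotype probabilities for one locus, parents given as strings such as 'Aa'."""
    combos = ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]
    return {g: combos.count(g) / len(combos) for g in set(combos)}

probs = offspring_genotype_probs("Aa", "Aa")
p_dominant = probs.get("AA", 0) + probs.get("Aa", 0)  # child shows the dominant character
print(probs)        # {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}
print(p_dominant)   # 0.75
```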
In reality, this was a dialogue of the deaf, with each participant using his own specific language-Weldon, the biometrician, with his statistical approach to populations; and Bateson, the geneticist, applying his individual approach and using a simple probabilistic language with no knowledge of the mathematics employed in statistics. Their exchanges continued until Weldon's death in 1906.
Other voices, however, attempted to establish a potential convergence between the two approaches. In 1902, Yule sought to show that they could not be totally inconsistent-as Bateson claimed-but he failed to provide a perfect demonstration. In 1904, Pearson admitted that Mendel's laws and Galton's Ancestral Law of Heredity were not necessarily contradictory. But it was not until 1918 that Fisher tried to merge them, although he laid down axioms that needed to be verified for that purpose. We examine his approach in greater detail in the following section.
Development of eugenics in the first half of the twentieth century
In 1912, a year after Francis Galton's death, the First International Eugenics Congress was held in London. Dedicated to his memory, it was attended by more than 400 persons from all over the world, under the chairmanship of Major Leonard Darwin, Charles Darwin's son. In his presidential address, he stated that the main goal of the Congress was to promote the improvement of the "race," for natural selection was incapable of doing so. He cited the example of the United States, where eight states had already passed laws allowing or requiring sterilization of certain categories of individuals.
Ronald Fisher (1890-1962) joined the Eugenics Education Society in 1910. In 1914, he published an article in The Eugenics Review expressing his hopes as a eugenicist, and, in 1918, an important article in which he interpreted Galton's biometrical results with the aid of Mendel's laws.
To arrive at a more accurate analysis of the causes of human variability, he formulated the following assumptions: (1) there are polygenes that act additively; (2) they segregate independently;
(3) the influence of the environment is unrelated to that of genes; (4) the population is in Hardy-Weinberg equilibrium, i.e., with no inbreeding, migration, mutation, or selection; (5) to simplify the algebra, the number of polygenes is assumed to be infinite.
Together, these assumptions allow the variance of a phenotypic trait (\( \mathrm{Var}_P \)) to be decomposed into additive parts. A more detailed demonstration is provided in Vetta and Courgeau (2003, p. 406), yielding the following formulation:
\[
\mathrm{Var}_P = \mathrm{Var}_G + \mathrm{Var}_E = \mathrm{Var}_A + \mathrm{Var}_D + \mathrm{Var}_E \qquad [1]
\]
where \( \mathrm{Var}_G \) is the genetic variance, decomposed into its additive part \( \mathrm{Var}_A \) and its dominance part \( \mathrm{Var}_D \).
\( \mathrm{Var}_E \) is the environment variance. Obviously, if assumption (3) does not hold, we need to add an interaction term and a covariance term between the effects of genes and of the environment. This will preclude an additive formulation.
On the same assumptions, we can define "heritability in the narrow sense":
\[
h^2 = \frac{\mathrm{Var}_A}{\mathrm{Var}_P}.
\]
While these notions were used previously with somewhat different meanings, the concept of heritability was spelled out more precisely by [START_REF] Lush | Genetic Aspects of the Danish System of Progeny-testing Swine[END_REF], and animal and plant breeders started to use the "narrow" and "broad" definitions. However, even Lush (1949, p. 373) found that the estimates of heritability were surprisingly high:
If breeders have been selecting intensely and if heritability is as high as these estimates, the breed average should have been improving rapidly for many generations and should still be doing so. But the actual evidence does not indicate improvements that rapid. Admittedly the evidence on the actual rate of improvement is scanty.
These breeders tried to verify some of Fisher's assumptions in their tightly controlled animal or plant experiments, where subjects with different genotypes can be provided with a near uniform environment, allowing a prediction of responses to selection [START_REF] Crusio | Estimating Heritabilities in Quantitative Behavior Genetics: A Station Passed[END_REF][START_REF] Visscher | Heritability in the Genomic Era -Concepts and Misconceptions[END_REF]. However, given the complexity of the underlying gene action, such analysis had not gone beyond the black-box level [START_REF] Hill | Understanding and Using Quantitative Genetics[END_REF], and the non-verification of Fisher's other assumptions could lead to incorrect results. Part 3 will examine how Fisher's theories were taken up and expanded by behavior genetics in the 1970s.
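As a purely illustrative sketch (not Fisher's own derivation), one can simulate a population satisfying assumptions (1) to (4) with a finite number of loci and no dominance, and check both the additive decomposition of equation [1] (here with Var_D = 0) and the narrow-sense heritability Var_A / Var_P; all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n_individuals, n_loci = 100_000, 50

# Independent biallelic loci at frequency 0.5 (Hardy-Weinberg), purely additive effects.
genotypes = rng.binomial(2, 0.5, size=(n_individuals, n_loci))  # 0, 1 or 2 copies per locus
effects = rng.normal(scale=0.1, size=n_loci)                    # additive effect of each locus

additive = genotypes @ effects                                  # additive genetic value
environment = rng.normal(scale=1.0, size=n_individuals)         # independent environment
phenotype = additive + environment

var_a, var_e, var_p = additive.var(), environment.var(), phenotype.var()
print(round(var_a + var_e, 3), round(var_p, 3))  # nearly equal: Var_P ≈ Var_A + Var_E
print(round(var_a / var_p, 3))                   # narrow-sense heritability h²
```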
After this presentation of Fisher's article and its direct impact on research in population genetics, let us return to the development of eugenics after World War I.
The Second International Congress of Eugenics was held in New York in 1921 under the chairmanship of Henry Osborn (1857-1935), who played a major role in the dissemination of eugenic ideas. Fisher, one of the many attendees, gave three papers. At its close, the Congress adopted a resolution calling for an international organization, which was eventually founded in 1925: the International Federation of Eugenics Organizations (IFEO), headed by the American Charles Davenport. That same year, Davenport wrote a letter to Professor Hansen in which he stated:

But, alas, for the criminalistic and the degenerates, since it seems to be impracticable to put them all on ships and keep them traveling back and forth on the Atlantic Ocean thru the rest of their lives, I see no way out but to prevent breeding them (Davenport, 1925, p. 1).
He was thus advocating the elimination of the unfit from the population through measures such as forced sterilization and laws to prevent their reproduction.
The German eugenicists, who had not been invited to the Congress, nevertheless continued to publish. Erwin Baur (1875-1933), Eugen Fischer (1874-1967), and Fritz Lenz (1887-1976) published the Grundriss der menschlichen Erblichkeitslehre und Rassenhygiene (Principles of Human Heredity and Racial Hygiene). The notion of "race"-rejected by most researchers today-is discussed at length, and the book already praises the "Nordic race." In 1924, after French forces withdrew from the Rhineland, the Germans rejoined international discussions on eugenics.
Eugenic practices became increasingly politicized in the 1920s.
In the United States, for example, the Eugenics Record Office-founded in 1910, as mentioned earlier-set out to organize and perform eugenic research with a view to informing lawmakers, judges, and government officials. In 1921, Harry Hamilton Laughlin, its Assistant Director, published a very detailed study of the sterilization laws by then enacted in eighteen states, up from eight in 1911. By 1937, thirty states had laws on mandatory sterilization. In the same period, similar laws were passed in many other countries including Mexico and Argentina in the Americas, and Germany, Denmark, Norway, Finland, and Sweden in Europe.
In Italy, Corrado Gini (1884-1965) founded the Società Italiana di Genetica e Eugenica in 1919. After Mussolini assumed full powers as head of government of the Kingdom of Italy in 1925, Gini was able to play a hegemonic role in Fascist eugenics. At the International Congress of Eugenics in Rome in 1929, Davenport and Eugen Fischer wrote a memorandum to Mussolini urging him to implement eugenics at "maximum speed" because of the "enormous" risk that undesirable reproduction might run out of control (Sprinkle, 1994, p. 91). It was not until 1938, however, that Il Giornale d'Italia published the "Manifesto degli scienziati razzisti" (Manifesto of Racist Scientists), defining race as a biological concept and announcing the existence of a pure "Italian Race" of Aryan descent, from which the Jews were excluded (Casetti and Conca, 2015, p. 107). This text, prepared at Mussolini's request, led to a series of racial laws, particularly against the Jews, and their deportation to concentration camps.
More insidiously, Margaret Sanger (1879-1966) founded the American Birth Control League in New York in 1921 for the purpose of promoting the dissemination of contraceptive methods. But in an article published the same year, she added other goals:
We need one generation of birth control to weed out the misfits, to breed self-reliant, intelligent, responsible individuals. Our migration laws forbid the entrance into this country of paupers, insane, feebleminded and diseased people from other lands. Why not extend the idea and discourage the bringing to birth these same types within our borders. Let us stop reproducing and perpetuation [sic] disease, insanity and ignorance (Sanger, 1921, p. 1).
This comes very close to the eugenic objectives regarding misfits that Galton advocated in his utopian college of Kantsaywhere.
In 1925, Sanger organized the sixth International Neo-Malthusian and Birth Control Conference in New York, which included sessions on the "eugenic racial and public health aspect" (the first conference had been held secretly in Paris in 1900 because of the French government's ban on all forms of birth-control advocacy). The event also allowed her to clarify her stance in favor of negative as opposed to positive eugenics. Negative eugenic actions are targeted at persons who should not procreate, whereas positive eugenic actions aim to encourage those who should reproduce more. At the end of the Conference, the eugenicist Roswell Johnson (1877-1967) introduced the following resolution:
Resolved, that this conference believes that persons whose progeny give promise of being of decided value to the community should be encouraged to bear as large families, properly spaced, as they feel they feasibly can (Sanger, 1925, p. 163).
This explicitly positive resolution was adopted, but soon elicited comments and criticism in the U.S. and British press. Sanger responded with an editorial (1925) specifying that, while the American Birth Control League was in favor of population control-i.e., negative eugenics-it could not accept the resolution. Sanger explained that it was difficult for the League to predetermine individuals whose progeny could be valuable to society, or even to offer advice to adults capable of making their own decisions.
The coming to power of the Nazis in Germany consecrated the coalition of eugenicists and political leaders. Significantly, Germany had been the first country to publish a Journal of Racial and Social Biology45 in 1904, and to establish a Society for Racial Hygiene in 1905. 46 A further step was taken in 1927 with the establishment of the Kaiser Wilhelm Institute for Anthropology, Human Heredity and Eugenics47 in Dahlem, headed by Fischer, the German eugenicist previously cited. At the time, however, Fischer stated that the concept of race studies (Rassenkunde) should be treated in a scientific manner, independently of all other considerations. Meanwhile, the stock-market crash of 1929 and its dramatic repercussions on the labor market both in the U.S. and Europe (by 1933, one-third of the German working population was unemployed) brought Hitler to power on January 30, 1933.
By July 1933, the meeting of the Kuratorium (Board of Directors) of the Kaiser Wilhelm Institute "sealed the Faustian bargain between the Dahlem director and the Nazi state medical bureaucracy" (Weiss, 2010, p. 70). The agreement served as the basis for the Law for the Reestablishment of the Professional Civil Service and the Law for the Prevention of Genetically Diseased Offspring.48 The two laws banned all non-Aryans, particularly Jews, from all civil-service positions in Germany and allowed the forced sterilization of persons declared unfit by so-called racial hygiene experts, without possibility of appeal. As a result, ten million medical-genetic case files were prepared in 1,100 Health Offices by 2,600 government-employed doctors, assisted by 10,000 other physicians (Bock). One million citizens were singled out for sterilization and their cases processed by 205 Hereditary Health Courts.49 In 1935, two even more coercive laws were adopted in Nuremberg: the Law of the Reich Citizen50 stripped all non-Aryans of German citizenship, and the Law for the Protection of the German Blood51 prohibited unions or sexual intercourse with non-Aryans. While the term "non-Aryan" notably targeted Jews, the laws also applied to Poles, Czechs, Romani (gypsies), and other Slavs. This paved the way for the Shoah, which exterminated over five million Jews; Aktion T4, involving the extermination of nearly 250,000 physically and mentally handicapped persons; the Porajamos, which exterminated almost 500,000 Romani; and other actions. There is no need here to dwell on the horror of these exterminations, but it is important to stress that all were carried out in the name of eugenics.
Eugenics also had a powerful influence on the social sciences between the wars-most notably demography, our focus here.
The International Union for the Scientific Study of Population Problems (IUSSPP) was founded in 1928 (it became in 1947 the International Union for the Scientific Study of Population: IUSSP). Both its officers and its first congresses were clearly influenced by eugenicists. Corrado Gini-an ardent supporter of Mussolini's fascist regime, as noted earlier (Cassata, 2006)-was vice-president of the IUSSPP.
In 1935, the IUSSPP held its congress in Berlin under the chairmanship of the eugenicist anthropologist Fischer, mentioned previously. German biologists used the venue to promote extreme eugenic theories in 59 of the total 126 papers presented.
At the following congress, held in Paris in 1937, all papers containing elements of the radical Nazi doctrine were grouped together in the same section with papers from a few other participants opposed to it. The proceedings were published as Problèmes qualitatifs de la Population (UIESPP). The German-American anthropologist Franz Boas (1938, p. 83), who was of Jewish origin, responded clearly:
Lack of clarity in regard to what constitutes a type is the cause of the incredible amount of work produced for more than a century, but particularly by modern race enthusiasts.
This did not prevent German researchers, in their report on the congress, from stating that they had totally refuted the allegations by Jewish participants thanks to what they called the "sword of our science"52 (Weiss, 2005, p. 5). Also in the same section, new measures were proposed by the American Frederick Osborn (1938, p. 117), which exactly anticipated what would become behavior genetics:
We are also beginning to determine scientifically the extent to which psychological differences are due to external circumstances or, on the contrary, to genetic factors. At present, the latter seem to be the more influential.
He drew his conclusions from an examination of differences in intelligence between individuals from different social groups. He compared differences between individuals in the same group with differences between group averages. The demonstration is hardly convincing, but the basic idea is indeed there.
To sum up, after Galton's death eugenics spread across the world and led to the extermination of millions of people in the name of improving humanity. This tidal wave would appear to have died out at the end of World War II with the surrender of the fascist and Nazi regimes. In the next part, however, we shall see that this was not the case and that eugenics has continued to hold even greater sway, although it is no longer explicitly invoked.
A new face for eugenics
The postwar years were marked by swift progress in public health, unfortunately not always followed by an equivalent economic progress. As Alfred Sauvy noted:
[…] it is easier to produce a serum to save one million people than to provide food for them afterward (Sauvy, 1948, p. 10).
The discovery of the insecticide properties of DDT (Dichlorodiphenyltrichloroethane)-tested by the Allied armies during the war to control malaria- and typhus-carrying insects-led to spectacular results. Released on the market in 1945, it allowed the eradication of malaria in several countries. For example, in Ceylon (now Sri Lanka), the number of malaria cases plunged from nearly three million in 1946 to only 29 in 1964 thanks to DDT spraying, raising hopes that the battle against malaria had been won (Tren and Bate, 2001, p. 36). Many countries in Europe, North America, Asia, and Africa recorded similar results, leading the World Health Organization (WHO) to launch a malaria eradication campaign in 1955 chiefly based on the use of DDT. The campaign ended in 1969 after doubts emerged regarding its impact on wildlife. We shall return to this interruption, which had disastrous effects on the evolution of malaria. The relevant point here is the growth in world population not only in capitalist and communist countries but especially in the underdeveloped Third World countries, as Sauvy defined them in 1952:
The underdeveloped countries-the Third World-have entered a new phase. Some medical techniques can be introduced rather quickly, for a major reason: their low cost. An entire region of Algeria was treated with DDT against malaria: the cost was 68 francs per person. Elsewhere, as in Ceylon and India, similar results have been recorded. For a few cents, a man's life is extended several years. These countries thus register our mortality rate of 1914 and our birth rate of the eighteenth century. […] For this Third World too-ignored, exploited, and scorned like the Third Estate-wants to exist at last (Sauvy, 1952, p. 14).
This unprecedented population growth in what were then referred to as Third World countries-now renamed developing countriesenabled American eugenicists to re-establish their prominence. At the Third International Conference on Planned Parenthood in Bombay in 1952, Sanger created the International Planned Parenthood Federation (IPPF), aimed at extending to the Third World the goals of the American Birth Control League, discussed earlier. Now, however, the underlying objective was to counter the population growth entailed by medical methods. Back in 1950, Sanger had already stated (Lasker Award Address, p. 3):
We have gone far in the field of preventative medicine, let us now have a little preventative politics and a system of thinking that will probe to the roots and heart of this human problem.
The IPPF promoted family planning "for those who need it most," a phrase that evaded the question of whether the need was felt by the individuals themselves or by those who knew better than them (Connelly, 2006, pp. 221-222). That same year, the Indian Premier, Nehru, submitted to his parliament the world's first plan to limit a country's population.
At the IPPF's Fifth Conference in Tokyo in 1955, Pincus, contacted by Sanger in 1950, announced the possibility of the first oral contraceptive pill, which was being tested on mammals. The drug, called Enovid, was then tested on 123 women living in Puerto Rico, for such tests were banned and punishable by law in many U.S. states. However, many women dropped out of the program during the tests, leading Pincus-in an effort to sway the Food and Drug Administration (FDA)-to claim that the test had covered 1,279 cycles, with no mention of contraceptive effects. The FDA approved Enovid in 1957 for use in treating menstrual disorders. The true purpose, however, was contraception. In 1959, Enovid was resubmitted to the FDA but this time for birth control. Eight hundred women had been enrolled in tests but only 130 took the pill for a year or more. The FDA conducted a survey among 61 doctors, of whom 36 approved the pill, 14 admitted lacking sufficient experience, and 21 rejected it [START_REF] Eig | The birth of the pill[END_REF]. Despite this low endorsement rate, the FDA approved its use in May 1960. It was rapidly distributed in the U.S. despite opposition from the Catholic and Protestant churches; its dissemination was slower in Europe and very limited in the Third World, most notably because of its cost. Soon after the FDA decision, the IPPF informed Pincus that it would no longer fund his research. The pill, which Sanger had intended as a means to decrease fertility in poor countries, ended up being used mainly in the developed countries.
The Population Council was founded by John Rockefeller III53 in 1952, and Osborn-the eugenicist discussed earlier-was its president in 1957-59. The institution played a key role in the initial programs to limit the number of births-i.e., what has come to be known as birth control. As Osborn later wrote, in 1968 (p. 104): "Eugenics goals are more likely to be attained under a name other than eugenics." Poverty and the fact of living in a Third World country would replace the "dysgenic qualities of body and mind." Rather than allow these countries to achieve economic takeoff, the Population Council sponsored a vast campaign to restrict their fertility. It worked with the IPPF to launch large-scale programs to implant intra-uterine devices (IUDs) and perform vasectomies in India, Pakistan, South Korea, and Taiwan. These sterilizations were very inexpensive but were often carried out without the interested parties' clear consent, for example after giving birth, or in exchange for payment.
The United Nations Population Fund (UNPF) was established in 1969 with Rafael Salas as director. Salas had been the Executive Secretary of the Republic of the Philippines under President Marcos. His leadership of the new international institution came under strong criticism. In particular, when China introduced its one-child policy in 1979, the UNPF granted an initial subsidy of $50 million for 1980-1984 to help the Chinese government develop its population policy, for "the new 1979 Constitution explicitly advocates and promotes family planning" (UNPF, 1980, p. 3). In 1983, it awarded one of its first two United Nations Prizes for Population to the Chinese family planning minister, Qian Xinzhong.54 It is hardly necessary to point out that the coercive birth control measures enforced by Chinese policy were a new massacre of the innocents (Aird, 1990).
The main goal of all these newly created international institutions was to reduce fertility in Third-World countries, whether the local populations wanted it or not, and without envisaging their economic development as a means of curtailing their population growth. But another approach could be considered to reduce and even halt that growth: it would consist in not allowing the dissemination of the medical methods that had previously extended the lives of their inhabitants. In 1968, Ehrlich proposed this solution in his book The Population Bomb (p. 17):
[…] there are only two kinds of solutions to the population problem. One is a "birth rate solution," in which we find ways to lower the birth rate. The other is a "death rate solution," in which ways to raise the death rate-war, famine, pestilence-find us. The problem could have been avoided by population control, in which mankind consciously adjusted the birth rate so that a "death rate solution" did not have to occur.
Ehrlich was thus admitting that international organizations had failed to reduce the fertility of Third-World populations. Some readers may think that in the last sentence Ehrlich is making a contingent prediction, setting out a default (if we fail to limit population growth by lowering fertility, it will be limited by rising mortality); others may think that such a sentence condones a "death rate" solution, such as the ban adopted in 1972 on the use of DDT, discussed below. In the latter case, his proposal would not only deny Third-World populations access to medical methods but also revive all the scourges that had afflicted humanity in the past. This was indeed a new form of eugenics even worse than what Galton had envisaged: the goal was no longer to eliminate the unfit, but to eliminate as many Third-World people as possible, on the grounds that they were unfit for our living conditions.
The implementation of this second solution required a certain number of actions, particularly involving the WHO, in order to stop the distribution of effective medicines in the Third World. Ehrlich clearly indicated that this could concern DDT. The insecticide had already come under attack in the U.S. In her book Silent Spring (1962), Rachel Carson had criticized the use of DDT despite the fact that it had already allowed the eradication of malaria in the southern U.S. and southern Europe, and had reduced its impact on mortality in many Third-World countries. She questioned its harmlessness and claimed that its effects persisted for long periods after spraying, that it was carcinogenic, and that it accumulated in the fat of animals consumed by humans. But she offered scant evidence to support these accusations.
The tremendous impact of Carson's book on public opinion led the WHO to end its DDT-based malaria eradication campaign in 1969. In 1972, the new Environmental Protection Agency (EPA) in the U.S. banned the use of DDT, regarding it as toxic for humans and restricting its use to emergencies. The ban was soon adopted by other governments and heavily reduced the use of DDT for combating malaria.
The WHO turned to other methods, but they proved far less effective than DDT. In particular, pesticides such as methyl parathion-approved by the EPA in 1972 as a DDT substitute-were finally recognized as extremely dangerous in 1999. By the mid-2000s, several African countries reversed their policy with regard to DDT. For example, the government of Mozambique reauthorized the use of DDT in July 2005. It was not until 2006 that the WHO accepted DDT once again as a means to fight malaria:
Nearly thirty years after phasing out the widespread use of indoor spraying with DDT and other insecticides to control malaria, the World Health Organization (WHO) today announced that this intervention will once again play a major role in its efforts to fight the disease. WHO is now recommending the use of indoor residual spraying (IRS) not only in epidemic areas but also in areas with constant and high malaria transmission, including throughout Africa (WHO, 2006, p. 1).
Estimates of mortality due to malaria have varied considerably, from 655,000 for the WHO (2010) to 1.2 million for the Institute for Health Metrics and Evaluation (IHME). Despite this uncertainty, an article by Murray et al. (2012) offers a confidence interval for estimating the figure. World deaths due to malaria rose from 995,000 in 1980, the first year of their study (with a 95% confidence interval ranging from 711,000 to 1,412,000), to a peak of 1,817,000 in 2004 (with a 95% confidence interval ranging from 1,430,000 to 2,366,000). In 2005, the curve started to move downward, reaching 1,238,000 in 2010 (with a 95% confidence interval ranging from 848,000 to 1,591,000). Since then, it has declined continuously. It would be hard not to view this as a change in attitude toward policy with regard to the Third World, as Murray et al. clearly state (p. 428):
Our findings also signal a need to shift control strategies to pay more attention to adults-eg, they lend support to the strategy of universal coverage of insecticide-treated bednets among household members rather than focusing on women and children as was the case in the initial distribution campaign.
The authors, however, fail to explain how a policy of abandoning insecticides, including DDT, could have lasted thirty-three years from 1972 to 2006, despite the fact that it was the only method capable of preventing millions of malaria-induced deaths in Third-World countries (Roberts).
As regards scientific research, a new discipline-behavior genetics-emerged with the article by Jinks and Fulker (1970), although earlier publications, such as Fuller's Behavior Genetics, had already used the name without supplying its foundations. The discipline rests on all of the hypotheses and concepts formulated by Fisher in 1918 (see §3.2), applied to human populations. The year 1970 also saw the founding of the Behavior Genetics Association and its journal Behavior Genetics, inaugurating a flood of papers and books on a number of psychological or medical traits such as intelligence measured by IQ, personality, alcoholism, smoking, homosexuality, obesity, soda or fruit juice intake, and so on.
While we can control some or all environmental effects in plant and animal experiments, we cannot do so in human populations. As a result, Fisher's third assumption is not satisfied. However, Jinks and Fulker (1970) devised a method for measuring this genetics/environment (GE) interaction, which enables us to use heritability models for human populations. Eysenck (1973, p. 5) wrote that this paper "is the corner-stone on which any future argument about heritability may be based"; Hewitt, in his obituary of Fulker (1998, p. 165), regards it as "one of the most influential methodological papers in human behaviour genetics."
Let us see whether Fisher's assumptions still hold after 1970.
Most of the models used by behavior geneticists derive from variants of the analysis of variance by Jinks and Fulker. They have been applied mainly to twin data but may be generalized to other degrees of genetic relatedness. For example, Kohler et al. (1999, p. 260) write:
In this article we primarily follow the regression approach, for which the term DF analysis (after DeFries and Fulker 1985) has been coined, and the extension of this approach to probit and tobit models.
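To make the quoted "regression approach" concrete, the basic augmented DeFries-Fulker model, as it is usually written in the twin literature, takes the following form; the notation is ours and is not necessarily the exact specification estimated by Kohler et al.:

\[
P_{1i} = \beta_{0} + \beta_{1} P_{2i} + \beta_{2} R_{i} + \beta_{3}\,(P_{2i} \times R_{i}) + \varepsilon_{i},
\]

where \(P_{1i}\) and \(P_{2i}\) are the phenotypic scores of the two members of twin pair \(i\), and \(R_{i}\) is the coefficient of genetic relatedness (1 for monozygotic pairs, 0.5 for dizygotic pairs). In this formulation \(\beta_{3}\) is read as a direct estimate of narrow heritability and \(\beta_{1}\) as an estimate of the shared-environment component, which is why the model inherits all of Fisher's assumptions discussed here.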
They use such an approach to analyse Danish twin data on fertility for the 1870-1910 and 1952-1954 birth cohorts. Their analysis leads them to estimate such values as heritability in the narrow sense (h²), the ratio of dominance to total phenotypic variance (H² − h²), and the ratio of shared-environment variance to total variance. We have already criticized their approach to the heritability of fertility (Vetta and Courgeau, 2003). Let us recall our main objection.
Their analysis ignores assortative mating because they claim to have no information on it: in fact this coefficient between husband and wife must be nearly 1 (the "nearly" takes care of infidelity). For a population in Hardy-Weinberg equilibrium there should thus be no filial regression, and for a population far from genetic equilibrium this will create a complex problem. In any case, the standard twin model used for heritability analysis, based on Fisher's assumptions, is unsuitable for fertility analysis. Unfortunately this model is now used in most biodemographic analyses of fertility.
For other demographic phenomena, such as mortality or migration, a standard twin model may be used without our earlier objection. In this case, however, Vetta pointed out that there is an algebraic error in Jinks and Fulker (1970) and that when this error is corrected their method is not valid (Capron et al., 1999). In the rest of their paper, Jinks and Fulker used the correlations between relatives and environmental effects given by Fisher (1918) under assortative mating. Fisher's formulae, however, are not correct (Vetta). As human populations mate assortatively, the concept of heritability cannot be applied to them, so that Fisher's fourth assumption will not hold.
More generally, we can state that heritability estimates have no value for human populations, for which we cannot always control environments or levels of genetic variation by experimental means. Despite Plomin's assertion (2001, p. 1104) that "the genetics of behavior is much too important a topic to be left to geneticists," he oddly uses models devised by geneticists such as Fisher and Jinks, whose hypotheses now need to be tested. Aschard et al., while adding a broad range of hypothetical GE interactions, finally show that genetic information does not improve risk predictions for complex diseases.
The DNA structure was discovered by Watson and Crick in 1953. By 1960, biologists believed that humans might have two million protein-coding genes (Kauffman). However, the Human Genome Project eventually found only 19,797.56 That number is well below that of rice, whose genome has 50,000 genes, as do many simpler organisms. Let us see in greater detail how this research affects Fisher's hypotheses.
First, the polygenic hypothesis assumes that a trait is determined by a large number of polygenes either uniquely or in combination with polygenes associated with another trait. However, the human characteristics studied by behavior genetics are innumerable-including fertility, nuptiality, longevity, intelligence, personality, homosexuality, alcoholism, femininity, autism, manic depression, aggression, happiness, spatial and verbal reasoning, criminal behavior, obesity, vote choice, political participation, and so on. It is therefore implausible that they could be linked to such a small number of genes. Similarly, the human body produces well over a million proteins. The polygene hypothesis cannot explain this with only 20,000 genes. Therefore, Fisher's assumption (5) is not satisfied. Most crucially, Fisher did not know that genes are grouped together in 23 pairs of chromosomes: in meiosis (cell division), two characteristics undergo segregations that are either independent, if they are controlled by genes located on two distinct pairs of chromosomes, or totally linked, if they depend on genes located on the same pair of chromosomes. In reality, exchanges can occur between two chromatids, and genes can recombine. It can no longer be argued, therefore, that polygenes act independently (assumption 1), subject to independent segregation (assumption 2). Their transmission is thus impossible to quantify. As we have already seen that assumptions (3) and (4) are not verified, we can conclude that none of Fisher's assumptions is verified. As Gottlieb (2001, p. 6125) clearly states:
[…] it is now known that both genes and environments are involved in all traits and that it is not possible to specify their weighting or quantitative influence on any trait, […] this has been a hard-won scientific insight that had not yet percolated to the mass of humanity.
In other words, the use of the concept of heritability linked to Fisher's assumptions leads to a dead end.
However, behavior geneticists remained silent about these criticisms and kept making the same errors. Not even the advent of the genomic era reduced their audience. Rodgers et al. (2001, p. 187) had this to say about human fertility:
[…] the molecular genetic and behavioural genetic research lead to the same conclusions […] In the future, the important theoretical questions in this arena may well merge from the human genome project.
Many papers have been published in recent years using classical twin studies57 and genomic methods simultaneously (van Dongen et al.). These genomic studies try to link specific human behaviors with specific genetic markers. For example, genome-wide association studies (GWASs)-which define genomic regions associated with individual traits or complex diseases-have identified "around 2,000 robust associations with more than 300 complex diseases and traits" between 2006 and 2013 (Manolio, 2013, p. 549). These studies were designed to demonstrate the links between DNA and human traits and behaviors. However, as Manolio explained:
[the] initial euphoria […] has dimmed somewhat with the recognition that GWAS-defined loci explain only a very small proportion of different traits' heritability, [and] they have met considerable skepticism regarding their clinical applicability.
Moreover, when large numbers of genetic markers are screened, there is a high risk of false positive associations.
To deal with the problem of missing heritability, genome-wide complex trait analysis (GCTA) was developed more recently. It scans the genomes of thousands of unrelated persons in order to see if those who are concordant for a trait share more single nucleotide polymorphisms (SNPs) than those who are not concordant. It can therefore estimate the proportion of trait heritability that can be attributed to shared SNPs. These studies show that twin studies yield higher heritability estimates than GCTAs. For example, a twin study of "callous-unemotional" behavior finds a heritability of 64% compared with a GCTA finding of only 7%-a non-significant figure given the sample size (Viding et al.). Many GCTA-based studies even yield null heritability estimates. Similarly, heritability estimates based on structural equation modeling (SEM) produce estimates that are of the same magnitude as SNP-based estimates and largely below the values obtained using classical twin studies (Munoz et al.). In fact, they suffer from serious methodological problems and generate faulty estimates of the genetic contribution to variation in the majority of traits.
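For readers unfamiliar with these SNP-based methods, the quantity that such analyses estimate can be sketched as follows; this is the random-effects formulation usually given for GCTA-type estimators, written in our notation rather than that of the studies cited above:

\[
\mathbf{y} = \boldsymbol{\mu} + \mathbf{g} + \mathbf{e}, \qquad
\mathbf{g} \sim N(\mathbf{0}, \mathbf{A}\,\sigma_{g}^{2}), \qquad
\mathbf{e} \sim N(\mathbf{0}, \mathbf{I}\,\sigma_{e}^{2}), \qquad
\hat{h}^{2}_{\mathrm{SNP}} = \frac{\hat{\sigma}_{g}^{2}}{\hat{\sigma}_{g}^{2} + \hat{\sigma}_{e}^{2}},
\]

where \(\mathbf{y}\) is the vector of phenotypes of nominally unrelated individuals and \(\mathbf{A}\) is the genetic relationship matrix computed from the measured SNPs. The estimate \(\hat{h}^{2}_{\mathrm{SNP}}\) therefore captures only the variance tagged by those SNPs, which is one reason it falls so far below classical twin estimates.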
As a research approach, heritability is therefore a dead end. Heritability, it should be recalled, is a population concept. As noted earlier, narrow heritability is the proportion of additive variance in the total phenotypic variance of a trait, while broad heritability is the proportion of genetic (additive + dominance) variance in the phenotypic variance. Once the heritability of a trait has been estimated, there is nothing more to add apart from expressing sympathy with individuals displaying a low value for the trait, as they are "doomed" to keep displaying that lower value. Very little can be done to improve their genes or the genes in the population without selective breeding for the trait. We should also emphasize that behavior geneticists invariably use incorrect formulas in their models.
Scientists use mathematical models to test hypotheses. A scientist will compare the prediction of his or her model with an actual measurement and will accept or reject the model based on the relative accuracy of its prediction. This is not possible in behavior genetics (BG) models, which is why behavior geneticists keep finding different heritability estimates for a trait year after year. No progress is possible.
Since the deciphering of the human genome, the science of genetics has been undergoing important changes. As Charney (2012, p. 331) writes:
Recent discoveries, including the activity of retrotransposons, the extent of copy number variations, somatic and chromosomal mosaicism, and the nature of the epigenome as a regulator of DNA expressivity, are challenging the nature of the genome and the relationship between genotype and phenotype.
We refer the interested reader to more detailed papers on these topics: for retrotransposons, Sciamanna et al.; for copy number variations, Redon et al.; for chromosomal mosaicism, Templado et al. (2011); for epigenetics, Weaver et al. (2004). These studies indicate that DNA can no longer be viewed as the sole biological agent of heritability or be regarded as fixed at the moment of conception. It is necessary to introduce a broader point of view encompassing all these recent discoveries, some of which concern factors that continue to operate over the life course and may be environmentally responsive. Moreover, this action is no longer deterministic but highly stochastic (Kupiec, 2008), introducing a new handicap for heritability studies.
As Charney (2012, p. 332) clearly shows:
[…] the cumulative evidence of recent discoveries in genetics and in epigenetics calls into question the validity of two classes of methodologies that are central to the discipline: twin, family, and adoption studies, which are used to derive heritability estimates, and gene association studies, which include both genome-wide and candidate-gene association studies.
These developments of the postgenomic era call into question the validity of standard behavior genetics and the more recent behavioral genomics. There are now multilevel interactions involved in the network described by recent discoveries. Genes are situated a long way from their supposed phenotypic effects, exerted through different levels of biological organization under the influence of the environment (Noble). Yet today's behavioral researchers do not hesitate to advocate an approach based on behavioral epigenetics-i.e., an examination of the role of epigenetics in shaping human behavior. Some continue to use twin studies. Of the twenty-four commentaries to Charney's 2012 paper, only five still defended classical twin models. We refer the interested reader to Charney's detailed response to these commentaries and to the simplistic conception of human nature they have fostered. As another example, Tan et al. (2015, p. 138) state:
By treating gene expression or DNA methylation levels as molecular phenotypes, the classical twin design can be applied at different ages to explore the age-dependent patterns in the genetic and environmental contribution to epigenetic modification of gene activity, which can be linked to ageing-related phenotypes (e.g. physical and cognitive decline) and diseases.
Once again, the authors fail to take into account the complexity described earlier, and use simplistic paths from the genome, epigenome, and other factors to the phenotype.
Like those of their predecessors, the views of behavioral epigeneticists are radically at odds with recent research in the field of molecular genetics, biophysics, and systems biology-to name just three of several scientific disciplines that are not in agreement with their assumptions.
To sum up, the hypotheses on which behavior genetics is based are not verified, and the approaches used to try to confirm its results are increasingly regarded as fruitless. A more rigorous scientific examination clearly shows that behavior genetics is fundamentally unable to distinguish between genetic and environmental influences using the analytical tools that existed at its origin and the genomic or postgenomic discoveries of recent years. Instead of continuing down the blind alley of heritability, historical demographers should find it more fruitful to consider the social, economic, political, climatic, and geographic factors available for study.
Conclusion
While eugenic methods have been proposed by philosophers such as Plato ever since antiquity, the major developments in eugenics date from Galton's work in the late nineteenth century, in the wake of discoveries on heredity.
In reality, eugenics is a political attitude and not a scientific one, even though it was formulated by a scientist-Galton-and certain pseudosciences such as behavior genetics use its premises. Eugenics postulates the existence of persons regarded as disadvantaged and unfit by an "elite"-easily recognizable by its political power (Hitler or Mussolini), economic power (the Rockefeller dynasty), and scientific power (Galton and Fisher). It advocates either negative eugenics, leading to the sterilization and physical elimination of persons regarded as disadvantaged and unfit, or positive eugenics, promoting the reproduction of persons regarded as fit. The two policies often coexist, as in Nazi Germany, where sterilization, deportation, and the massacre of populations viewed as unfit went hand in hand with the promotion of the selective reproduction of the Aryan race.
Throughout this chapter, we have shown the main effects of eugenics from its introduction by Galton to the present.
Today, it has taken a more devious form, as it can now be practiced not just by a group of persons, but by single individuals vested with the power of acting on their offspring. Let us take a more detailed look at what this power represents and how it may be regarded as a new form of eugenics.
We refer to new biomedical procedures that are being increasingly recommended to parents for detecting potential genetic problems in the unborn child. Whereas eugenics used to be essentially political and thus a collective practice, is it possible that an individual form of eugenics is at work-with collective consequences?
In 1996, the philosopher Philip Kitcher clearly articulated the problem in his book The Lives to Come. He specifically discussed eugenic abortion (p. 199):
Individual choices are not made in a social vacuum, and unless changes in social attitudes keep pace with the proliferation of genetic tests, we can anticipate that many future prospective parents, acting to avoid misery for potential children, will have to bow to social attitudes they reject and resent. […] Laissez-faire eugenics is in danger of retaining the most disturbing aspects of its historical predecessors-the tendency to try to transform the population in a particular direction, not to avoid suffering but to reflect a set of social values.
Individuals do not make choices outside their society: the standards prevailing in the society will influence their choices.
For example, as a child's sex can now be determined well before its birth, this may induce abortions of girls by parents who want to have a boy. The sex ratio at birth, which stands at approximately 1.06 at the global level (106 male births per 100 female births), is far higher in certain countries. The growing recourse to selective abortions made possible by biomedical methods has led to sex ratios of 1.15 in China, 1.11 in India, 1.10 in Vietnam, and 1.13 in Armenia in 2017.
The fact that this imbalance of the sexes at birth may seem to be solely due to individual choices leads some authors to view individual eugenics as acceptable. Caplan et al. ("What is immoral about eugenics?", p. 1285), for instance, conclude their article with these words:
In so far as coercion and force are absent and individual choice is allowed to hold sway, then presuming fairness in the access to the means of enhancing our offsprings' lives it is hard to see what exactly is wrong with parents choosing to use genetic knowledge to improve the health and wellbeing of their offspring.
While such an argument is valid for a genetic malformation, it becomes very dangerous when a child's sex is concerned. These individual practices are shaped by the country's prevailing social and political rules, and have major collective effects such as an increased proportion of males remaining never married when reaching adulthood. In China, for instance, the sex ratio at birth started rising in 1979 as a result of the one-child policy, enforced until 2016. The consequence is already visible, but will intensify all the way to the end of the twenty-first century: a high proportion of males who will remain single. This can have dramatic effects, such as a rise in crime (Edlund et al.) and the difficulty of finding a wife (Guilmoto). The result of these behaviors can truly be described as a new form of eugenics, no longer collective but individual.
In conclusion, eugenics in all its forms leads individuals or human societies to harmful behavior whose effects-in many circumstances-can even be dramatic.
Annex: Definitions and genetic terminology
The name "behaviour geneticist" is used by two distinct groups of researchers. One group specialises in laboratory experiments on animals. Their experiments are well designed and well executed. We acknowledge their contribution to science and this chapter does not relate to their work. The other groups of "behaviour geneticists" owe their allegiance to [START_REF] Jinks | Comparison of the Biometrical Genetical, MAVA, and Classical Approaches to the Analysis of Human Behaviour[END_REF]. They do not conduct experiments but fit statistical models of the components of variance type to observed data. They could be described as observational behaviour geneticists. The parametric values obtained from fitted models, they believe, enable them to solve the nature-nurture problem. As not all readers of this volume are specialists in population genetics, we define the genetic terms and concepts used in this chapter.
The basic unit of human heredity is a "chromosome". The name arises from the fact that chromosomes have an affinity for certain stains (chroma = colour and soma = body) and is due to the 19th-century German biologist Walther Flemming. The fundamental hereditary material in a chromosome, DNA (deoxyribonucleic acid), is composed of a double-stranded helix of sugar phosphate held together by pairs of nucleotide bases, and it carries information by means of the linear sequence of those nucleotides. Humans have 23 pairs of chromosomes (46 chromosomes). In females all 23 pairs are identical. In males 22 pairs are identical but the 23rd pair, called the sex chromosome, is not identical. A gene, a segment of DNA, is situated on a chromosome. It can have many forms, which are known as "alleles". The average number of genes on a human chromosome is about 760. The exact position at which a gene is situated is called its "locus". On homologous chromosomes, an allele of the gene will be situated at the same position on each chromosome. The whole set of genes carried by a species is called the "genome" of the species. If a person has two identical alleles at a locus, he or she is "homozygous" at that locus; otherwise, "heterozygous". Among humans, germ cells (egg and sperm) are produced by a process called meiosis. It is the type of cell division that reduces the amount of genetic material, so that each egg and sperm has only 23 chromosomes. When a sperm fertilises the egg, each of the 23 chromosomes in the sperm joins its counterpart in the egg and the process forming a human begins with 23 pairs of chromosomes. An individual's "genotype" is the complete set of all alleles at all loci. The human genome has fewer than 20,000 genes.
Mendel was the first to study a qualitative trait. A Mendelian or qualitative trait is under the control of one gene residing on a chromosome pair. Assume this gene has two alleles, A and a, one on each chromosome. As we receive one allele each from mother and father, the population will consist of three genotypes, AA, Aa and aa, with respect to this gene (we do not distinguish between Aa and aA). When we can distinguish between the genotypes, the trait is known as a qualitative trait and we can study the effect of the gene. Blood groups are an example of a qualitative trait. A Mendelian trait may exhibit dominance. If, for example, allele A is completely dominant over allele a, then Aa looks like AA. If dominance is partial, then Aa will be nearer AA than aa. In the absence of dominance, Aa will be a mixture of the effects of alleles A and a.
Behaviour genetics is not concerned with qualitative traits. It is concerned with quantitative traits. A quantitative trait is determined by a large number of genes. Consider a second gene B. It will also have three genotypes, BB, Bb and bb. Thus, two genes will give rise to 9 genotypes (each of the three A genotypes combining with each of the three B genotypes, i.e. AABB, AABb, AAbb, … aabb). For n genes, the number of genotypes will be 3ⁿ. A quantitative trait, e.g. height, is measured on a continuous scale. Some genotypes may give rise to similar phenotypes and we may not be able to distinguish between these genotypes. Thus, there is no one-to-one correspondence between genes and their effects. Environment may also affect the trait, in which case an individual's "phenotype" may not be a true reflection of the genotype.
A behaviour geneticist collects data on the phenotype of a trait and then tries to make inferences about the genotype. Therefore, a phenotypic value needs to be associated with the underlying genotypic value or with the genotype. Without this association no genetic inference can be made. We therefore need new concepts not used in Mendelian genetics. "Genotypic value" is one of these new concepts. Regrettably, we can define it for one gene only and then, inappropriately, 'generalise' it. The genotypic values of the three genotypes AA, Aa and aa are defined as the regression of their phenotypic values on genotypic frequencies. We cannot find this line of regression because genotypic values are hypothetical constructs. Another new concept we need is that of "additive value". We play the same trick and define additive values as the regression of genotypic values on genotypic frequencies. Additive values are also hypothetical and may or may not exist. The deviations from this hypothetical regression of genotypic values on genotypes are called "dominance values". In Mendelian genetics dominance effects are real. In quantitative genetics they are random fluctuations from this regression. This distinction is not generally understood. The reason for constructing hypothetical concepts using statistical linear regression is that the originator of quantitative genetics, R. A. Fisher, did so in his 1918 paper. To explain the concept of additive values, textbook writers give genotypes AA, Aa and aa hypothetical values a, d and -a (please note that equally spaced values for the three genotypes would not reflect "dominance"). This, however, does not mean that they are "real". We emphasise that genotypic, additive and dominance values are hypothetical statistical constructs and may or may not exist.
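As an illustration of the textbook parameterisation just described, the standard single-locus formulas (the usual Falconer-style expressions under random mating, which are not the authors' own and which, as stressed above, presuppose conditions that do not hold in human populations) are:

\[
\bar{P} = a(p - q) + 2pqd, \qquad
V_{A} = 2pq\,[\,a + d(q - p)\,]^{2}, \qquad
V_{D} = (2pqd)^{2},
\]

where \(p\) and \(q = 1 - p\) are the frequencies of alleles A and a, the genotypes AA, Aa and aa are assigned the hypothetical values \(a\), \(d\) and \(-a\), \(\bar{P}\) is the population mean, and \(V_{A}\) and \(V_{D}\) are the additive and dominance variances entering the heritability ratios defined earlier in the chapter.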
Chapter 5 Why and how to restrict freedom
What can lead people to believe that another entity-whether a celestial body or a gene-can determine their future? The Methodos Series, in which this book is being published, is devoted to examining and solving the major methodological problems faced by the social sciences. My opening question, therefore, deserves fuller examination here. We begin by discussing the basic hypotheses of astrology and eugenics on the one hand, and those of astronomy and genetics on the other. Next, we explore the deep reasons that drive individuals to believe in astrology, eugenics, and-more broadly-in a religion. We conclude by discussing the negation of human freedom and the means to avoid it.
Axioms used to predict the future or establish a true science
In our chapter on astrology and astronomy, we noted that Francis Bacon proposed two approaches to research in the early seventeenth century. The first, commonly used in his time, rested on the premise that the most general principles were established and unassailable. The second, by contrast, started with factual observation and experimentation, in order to ultimately derive the axioms consistent with this thorough investigation. But what specific meaning should we assign to the term "axiom," which is present in both approaches?
If we consider a science as an enterprise to discover the principles governing things, such principles have been referred to as axioms ever since Euclid (Franck). However, as Bacon points out (Novum Organon, 1620, I, 23):
[…] they are either names of things which do not exist (for as there are things left unnamed through lack of observation, so likewise are there names which result from fantastic suppositions and to which nothing in reality corresponds), or they are names of things which exist, but yet confused and ill-defined, and hastily and irregularly derived from realities. 58 The first type of principles is therefore based on generalities, and Bacon classifies these notions into four categories that he calls "idols." 59 The Idols of the Tribe lead the human mind to look for more order and regularity in the world than it actually displays, as Bacon makes clear in the Novum Organon (1620, I, 46).
The Idols of the Cave arise in the human mind, which tries to construct a complete thought system from a mere handful of observations and ideas. The Idols of the Market Place stem from the words that relate to our understanding of the world. The Idols of the Theater are those that arise from philosophical systems such as religion and theology.
The second type of axiom uses the induction method61 (Franck, 2002, p. 289): "[…] induction consists in discovering a system's principle from a study of its properties, by way of experiment and observation."
Bacon, however, distinguishes between different types of axioms ranging from the lowest, which are close to experimentation, to the intermediate, which define the frontiers of research, and all the way up to the most general (Novum Organon, 1620, I, 104).
But the elucidation of our world's structure is never complete. The discovery of new properties-thanks, for example, to more efficient means of observation-can lead to the finding of new axioms. For example, Einstein's theory of relativity adds to the classic space-time axioms a new axiom on the invariance of the speed of light in a vacuum, verified by a multitude of experiments (Suppes).
Principles and axioms for astrology and astronomy
The principles used in astrology are, as Bacon already observed, of the first type. Some may not have been defined with great clarity in the past; others may have varied from one astrologer to another. All, however, consistently display certain features in their formulation. For our purposes, the so-called axioms, which are in fact principles, put forward by the astrologer Richard Vetter in 2000 seem fairly representative of the goal and content of astrology:
The principle of analogy or correspondence
The conception of time's quality and contents
The conception of the number's quality (the mainly geometric horoscope model) (Vetter, 2000, p. 84).
The first principle is common to all the other divination methods, as Vetter himself acknowledged (p. 85). Its symbols are metaphorical: they function as parables and rely on a certain parallelism between microcosm and macrocosm. The second is also the key principle of all oracular methods such as cartomancy. It regards time as full of significances-a sort of transcendent, unlimited storehouse. The moment when a person's life begins will reveal all of his or her qualities and potentialities to the astrologer. The third principle distinguishes astrology from the other divination methods by introducing numerology. The geometry of the position of the heavenly bodies at a person's birth yields his or her horoscope, which is a stylized model of astronomical realities.
Thus stated, these principles can be generalized to the other divination methods-justifying our initial choice not to examine all of these, which would have been an exercise in redundancy. However, as shown in Chapter 3, the principles above do not derive from a detailed observation of facts, but are posited from the outset as the basis of these approaches without questioning their soundness-which, as we have seen, is unverified. How can one assert the parallelism between microcosm and macrocosm when it can hardly be verified by experiment? This is clearly an example of the first approach defined by Bacon, which no longer corresponds to what we now regard as a science, but belongs more specifically to the Idol of the Tribe category, as Bacon already noted. Astronomy, by contrast, developed from the observation of celestial bodies over millennia, and this meticulous accumulation allowed Newton to define its axioms for classical mechanics with the aid of Baconian induction (Newton, 1687, pp. 12-13, translated into English by M. Andrew, 1779):
1. Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impress'd thereon.
2. The alteration of motion is ever proportional to the motive force impress'd; and is made in the direction of the right line in which that force is impress'd.
3. To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.62
As Stephen Ducheyne has clearly shown (2005), these axioms are of the second type recommended by Bacon. They remained valid until new observations at the atomic and subatomic level required their change with the introduction of quantum mechanics in the first half of the twentieth century. While Newton's axioms still apply as a limit case for long-distance observations of celestial bodies, they need to be modified for the observation of atomic particles.
Such is the fundamental difference between astrology and astronomy, which Bacon had identified by 1620. Is this difference equally valid for eugenics versus Mendelism?
Principles and axioms for eugenics and Mendelism
While the goal of eugenics is evident-to improve the racial qualities of future generations-Galton did not elaborate clear principles for it. We can nevertheless regard his hypotheses on intelligence, talent, and virtue-which he failed to justify-as constituting its principles.
To begin with, Galton viewed these intellectual capacities as subject to the same transmission as physical qualities, which, by contrast, are measurable. He deduced that a measure of intellectual capacities must surely exist, and that it can therefore serve to verify the generalization of the Law of Frequency of Error. His proposed measure for estimating individuals' "worth" is largely based on the prevailing political views of his time. His scale combines social class and merits, with "Ministers of State, heads of Departments, Bishops, Judges, Commanders and Admirals in Chief, Governors of colonies and other appointments" ranking highest (Galton and Galton, 1998, p. 100). Galton never actually justifies this measure in empirical terms. He himself pointed out the questions it could raise, yet he used it to quantify human intelligence. We could therefore treat it as a first principle to justify eugenics: intelligence, talent, and virtues are measurable.
The introduction of IQ by Binet and Simon in 1905-i.e., in Galton's lifetime-seems more convincing than previous attempts to measure intellectual capacities. But the two authors are far less bold than Galton, and clearly state the impossibility of measuring intelligence (pp. 134-195, translation by Elizabeth S. Kite, 1916):
This scale properly speaking does not permit the measure of the intelligence, because intellectual qualities are not superposable, and therefore cannot be measured as linear surfaces are measured, but are on the contrary, a classification, a hierarchy among diverse intelligences; and for the necessities of practice this classification is equivalent to a measure. 63

Despite these shortcomings, it was the U.S. adaptation of IQ by Lewis Terman in 1916-known as the Stanford-Binet test-that led to its acceptance as a unit measure of mental processes. Many versions were later developed. Terman was an influential member of the Eugenics Research Association, and his work was heavily informed by eugenics. For example, in his book The Measurement of Intelligence (1916, pp. 3-4), he clearly states his goal:
The trouble was, they were too often based upon the assumption that under the right conditions all children would be equally, or almost equally, capable of making satisfactory school progress. Psychological studies of school children by means of standardized intelligence tests have shown that this supposition is not in accord with the facts. It has been found that children do not fall into two well-defined groups, the "feeble-minded" and the "normal." Instead, there are many grades of intelligence, ranging from idiocy on the one hand to genius on the other. Among those classed as normal, vast individual differences have been found to exist in original mental endowment, differences which affect profoundly the capacity to profit from school instruction.

This classification into well-defined groups later served as the basis for racial segregation policies in U.S. schools. Terman claims that children with high IQ (of course, nearly all of them white) will become future leaders in science, industry, and politics. By contrast, children with low IQ, notably blacks, must be educated in separate classes, because: "Their dullness seems to be racial, or at least inherent in the family stocks from which they come" (Terman, 1916, p. 91).
This argument was taken up by Jensen in 1967 to try to prove that the Head Start Program for Black Children in the USA was useless (Vetta and Courgeau, 2003, pp. 409-410). Later, Herrnstein and Murray's The Bell Curve (1994) used the notion of unit intelligence to show that eugenics could lead to the emergence of a cognitive elite (Vetta and Courgeau, 2003, p. 410). Gould's review of the book in 1996 clearly shows its errors, for the authors believe intelligence can be measured by a single index, IQ-a claim disproved by experiment.
In other words, the assertion that intelligence is measurable-which we may view as an axiom posited by Galton-is not verified empirically and has a political connotation, to which eugenics has always given priority.
Galton's second claim, made in 1865, is that genius, talent, and character are hereditary. But what does he mean by "heredity," a concept that raised many questions even in his day? In 1872 (p. 394), he had this to say about the doctrines of heredity:
From the well known circumstances that an individual may transmit to his descendants ancestral qualities which he does not himself possess, we are assured that they could not have been altogether destroyed in him, but must have maintained their existence in a latent form.
He therefore postulated the existence of two largely independent elements: the individual's latent part, known to us only through its effects on offspring, and the manifest part. The latter is largely shaped by the person's living environment and only a very small share is passed on to the descendants, whereas the latent part is passed down in full. Galton was thus able to formulate a law of hereditary transmission in 1886 concerning the height of children with respect to that of their parents. Assuming the law to be universal, he concluded his article by stating (p. 261) that:
The results of our two valid limiting suppositions are therefore (1) that the mid-parental deviate, pure and simple, influences the offspring to 4/9 of its amount; (2) that it influences it to 6/11 of its amount. These values differ but slightly from 1/2, and their mean is closely 1/2, so we may fairly accept that result. Hence the influence, pure and simple, of the mid-parent may be taken as 1/2, of the mid-grandparent 1/4, of the mid-great-grandparent 1/8, and so on.
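For the reader's convenience (the arithmetic is ours, not Galton's), one can check that the two limiting values do indeed lie on either side of one half and that their mean is very close to it:

$$\frac{4}{9} \approx 0.444, \qquad \frac{6}{11} \approx 0.545, \qquad \frac{1}{2}\left(\frac{4}{9}+\frac{6}{11}\right) = \frac{49}{99} \approx 0.495.$$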
To obtain this result, Galton was convinced that he had demonstrated-from a sample of 205 parents and their 930 children-that the children were more "mediocre" 64 than their parents, for he had made his observations at the individual level. But Maraun and his co-authors, in "The mythologization of regression towards the mean," have clearly shown that such a "regression towards mediocrity" does not apply to individual entities: it is valid only at the aggregate level. They accordingly characterize Galton's approach as "mythological," for it demonstrates nothing at the individual level. Moreover, while this result is always obtained with two normal distributions-which Galton assumes to be valid here-it is not consistently obtained with non-symmetric or bimodal distributions (Schwartz and Reike, whose 2018 article focuses on "Regression away from the mean"). Lastly, the linearity that Galton assumes for his regression is not, in fact, fully verified (Wachsmuth et al., "Galton's Bend: A Previously Undiscovered Nonlinearity in Galton's Family Stature Regression Data") and the observed data do not fit the proposed model.

64 He uses "mediocre" to mean that the slope of the regression line linking the parents' height on the x-axis to their children's height on the y-axis is equal to 2/3.
Galton did not present a "statistical law of heredity" to describe bisexual offspring until 1897. As so often, he combines statistical calculations with a study of physiological or biological phenomena. He thus justifies his statistical results by arguing that they match his hypotheses on heredity (p. 403):
Now this law is strictly consonant with the observed binary subdivisions of the germ cell, and the concomitant extrusion and loss of one half of the several contributions from each of the two parents to the germ cell of the offspring. The apparent artificiality of the law ceases on these grounds to afford the cause for doubt; its close agreement with physiological phenomena ought to give a prejudice in favour of its truth rather than the contrary.
Having set out all the conditions that he sees as necessary for the law, he states it in the form of the series already presented in 1865 (p. 326) and in many of his articles:
The father transmits, on an average one half of his nature, the grand-father one fourth, the great-grand-father one eighth; the share decreasing step by step, in a geometrical ratio, with great rapidity.
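To make the arithmetic of this series explicit (the notation is ours, not Galton's), the ancestral contributions form a geometric progression of ratio 1/2 whose terms sum to unity, so that, taken together, the ancestry accounts for the whole of the inherited nature:

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{k=1}^{\infty} \left(\frac{1}{2}\right)^{k} = 1.$$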
This "law" assumes that all ancestors are known, whereas in fact it can be verified for only a small number of ancestors. But even if the unknown residuals are small, it hardly qualifies as a law. We shall regard it here as the second axiom of Galton's theory.
The assumption raised difficulties for many authors in the late nineteenth and early twentieth centuries. Pearson, for example, was initially a strong advocate of the law, which he called the "law of ancestral heredity." In 1897-1898, he wrote (p. 412): "with all due reservations, it seems to me that the law of ancestral heredity is likely to prove one of the more brilliant of Mr. Galton's discoveries." However, he expressed reservations, notably in regard to selection. He showed that while the law was verified, it did not discredit the Darwinian notion of selection applied to a continuous variation, as Galton had argued it did. In 1903, Pearson went even further by showing that the law was purely statistical and independent of any physiological or biological theory of heredity-a view contrary to Galton's, as noted earlier. Lastly, he showed that the geometric series of ratio 1/2 proposed by Galton is not applicable to many observations. Yule explicitly stated in 1902 (p. 204): "Being unable to accept Mr. Galton's law of heredity, a fortiori I cannot accept it as the law, and have therefore applied the phrase to a more general statement." His Law of Ancestral Heredity, offered as an alternative (p. 202), assumed that "the mean character of the offspring can be calculated with the more exactness, the more extensive our knowledge of the corresponding characters of the ancestry." Instead of Galton's rigid mathematical law, Yule proposed a more flexible version that allowed a more accurate prediction of the mean value of a trait when certain ancestors of an individual were known. Naturally, the "Mendelians" either rejected this axiom altogether or viewed it as an occasional and relatively insignificant consequence of Mendel's laws. For example, in his book on Mendel's Principles of Heredity (1909), Bateson wrote (p. 6):
Galton's method failed for want of analysis. His formula should in all probability be looked upon rather as an occasional consequence of the actual laws of heredity than in any proper sense one of these laws.
The positivism of Galton and his successors consisted in rejecting causality in favor of statistics and correlation, whereas Mendelism relied on an individual probability.
Galton's eugenics came 300 years after Bacon, who therefore could not classify it among his four idols. As we have seen, the essential aim of eugenics was political. In Bacon's day, however, all political goals lay in the hands of absolute monarchs claiming divine right. For example, James I, King of England and Scotland at the time of Bacon, had this to say about royal power in his work on The True Law of Free Monarchies, included in the volume of his collected works in 1616 (p. 193):
As there is not a thing so necessarie to be knowne by the people of any land, next to the knowledge of their God, as the right knowledge of their alleageance, according to the forme of government established among them, especially in a Monarchie (which form of government, as resembling the Divine, approcheth nearest to perfection, as all the learned and wise men from the beginning have agreed upon; […]

The King defended his divine right against the obstacles that Parliament sought to put in his way, and he regarded this body as merely the head court of the king and his vassals.
Since James I's accession in 1603, Bacon had enjoyed a fast-track career that propelled him to the position of Lord Chancellor in 1618. It was therefore impossible for him to directly challenge the monarchy and its absolute political power, for he was totally indebted to it.
After the collapse of the absolute monarchies, the policies introduced by governments have reflected the different approaches of their framers. Some have rested on solid scientific foundations. Many public-health policies, for example, supported by scientific discoveries such as vaccination, have allowed the eradication of many diseases that previously took a heavy toll on populations. Other policies, by contrast, have been based on idols in the Baconian sense and should be debunked as such.
Eugenics, for example, can be regarded as an Idol of the Market Place, for it uses "names of things which do not exist, . . . or exist, but yet confused and ill defined, and hastily and irregularly derived from realities" 65 (Novum Organon, 1620, I, 60). Its main principles are based on abstruse and scientifically unsubstantiated concepts. Its proposed policy line is "formed by the intercourse and association of men" 66 (Novum Organon, 1620, I, 43), and eugenics organizations have been set up around the world-leading, as noted earlier, to many crimes against humanity.
After World War II, while eugenics continued to be active politically under other guises such as birth control, the concept of heritability gave rise to a new idol: hereditarianism. Its purpose was to show that human behavior can be broken down into two additive parts: a genetic part and an environmental part. This approach qualifies as an Idol of the Tribe, for "The human understanding is of its own nature prone to suppose the existence of more order and regularity in the world than it finds" 67 (Novum Organon, I, 45). Behavior genetics uses Fisher's assumptions to attempt to connect genetics to eugenics-an endeavor that we have shown to be unacceptable.
This strong theoretical rejection of eugenics and hereditarianism leads us to examine whether Mendelism relies on axiomatics better supported by observation.
First, we must note that Mendel's very precise and detailed observations in 1865 initially produced not axiomatics but a set of conclusions. His discoveries were surprising in an age that knew nothing about chromosomes, genes, and DNA. However, what he called "traits" corresponded to what we now call "alleles," i.e., the sets of genes situated in the same locus. 68 By presenting the results of his experiments in simple terms, Mendel was able to develop principles, but they attracted little attention.
Not until his results had been rediscovered by various scientists at the turn of the twentieth century were his conclusions transformed into a small set of axioms. Despite some variations across the century (see Marks, 2000), these display some constant features.

66 Latin text: propter hominum commercium et consortium, appellamus.
67 Latin text: Intellectus humanus ex proprietate sua facile supponit majorem ordinem et aequalitatem in rebus.
68 Although the definition of all the terms used in genetics lies outside the scope of our book, it is useful to note here that a gene is "a unit of information that encodes a genetic characteristic," an allele is "one of two or more alternate forms of a gene," and a locus is "a specific place on a chromosome occupied by an allele" (Pierce, 2012, p. 46).

Curiously, it is the biometrician Weldon who, in 1902, formulated two initial principles of Mendelism (p. 229), which he called the Law of Segregation and the Law of Dominance-in order to criticize them. This triggered the controversy between "ancestrians" and "Mendelians" (see Chapter 3). After Weldon's death in 1906, Mendelism triumphed. In 1909, Bateson published Mendel's Principles of Heredity, in which he reproduced both principles-now substantiated-in exactly the same terms (p. 13). He noted, however, that the "dominance of certain characters is often an important but never an essential feature of Mendelian heredity." In 1916, Morgan proposed another law, the Law of Independent Assortment, which relied on his own work on the relations between genes and chromosomes. This law is often violated, however, for the genes present on the same chromosome are inherited together.
Of these various laws, we believe only the first-the Law of Segregation-can be regarded as a true axiom, of the highest type. It has since been verified by a large number of experiments and displays no exception. We can define it as follows:
A diploid individual possesses a pair of alleles for any particular trait and each parent passes one of these randomly to his or her offspring.
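As a simple illustration of this axiom (the worked probabilities are ours, stated in modern notation rather than Mendel's), consider a cross between two heterozygous parents, Aa × Aa. Since each parent transmits either allele with probability 1/2, and the two transmissions are independent, the offspring genotypes occur with probabilities

$$P(AA) = \frac{1}{4}, \qquad P(Aa) = \frac{1}{2}, \qquad P(aa) = \frac{1}{4},$$

which, when A is completely dominant over a, yields the familiar 3:1 ratio of phenotypes that Mendel observed in his pea experiments.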
This axiom is consistent with the theoretical behavior of chromosomes proposed by Sutton in 1903 in his article "The Chromosomes in Heredity" (p. 237):
Thus the phenomena of germ-cell division and of heredity are seen to have the same essential features, viz., purity of units (chromosomes, characters) and the independent transmission of the same.
It then took fifty years to gradually demonstrate that a chromosome is composed of two long molecules of deoxyribonucleic acid-DNA-wrapped around each other in a helix. Watson and Crick discovered the structure in 1953. It consists of a sequence of four types of nucleotides, which carry and reproduce genetic information.
Without describing the complex process in detail, we can say that the DNA nucleotide sequence allows the formation of the sequence of amino acids in a protein, which may be regarded as an elementary characteristic effectively manifested by an individual.
The other two laws have many exceptions and are not a key element of Mendelian heredity. We cannot regard them as general axioms in the Baconian sense, but only as intermediate axioms, requiring specific conditions to be valid.
The Law of Dominance states that certain alleles are dominant and others recessive, so that an individual with a dominant allele will only express the effect of that characteristic. Although it applies frequently, it led Bateson (1909, pp. 13-14) to write:
The consequences of its occurrence and the complications it introduces must be understood as a preliminary to the practical investigations of the phenomena of heredity, but it is only a subordinate incident of special cases, and Mendel's principles of inheritance apply equally to cases where there is no dominance and the heterogeneous type is intermediate in character between the two pure types.
The latter type of dominance-called incomplete dominance-cannot be regarded as a genuine law of hereditary transmission, for it concerns the phenotypic expression of a genotype.
The Law of Independent Assortment states that the alleles of genes for different traits segregate independently of one another. Again, far from being universal like the first law, it is very often disproved by experiment. This is due to the fact that a gene is linked to a chromosome: if the different traits depend on genes located on distinct chromosomes, their segregations will be independent; if they depend on genes located on the same chromosome, their segregations will be totally linked.
The discovery of DNA as the medium of Mendelian genetics thus crowned the theory by establishing the notion of a genetic program embedded in DNA and sufficient to explain the complete functioning of an organism.
However, the sequencing of the human genome, completed in 2003, shows that it contains only 22,000 genes, or one-half the number in a paramecium. This makes it hard to imagine that they can single-handedly govern human phenotypes, which are far more abundant. While they have allowed advances in the therapy of rare monogenic diseases, they are of little value for most diseases and phenotypic traits. Subsequent discoveries have deeply transformed the genetic approach, just as quantum mechanics has modified Newtonian theory. These discoveries include, among others, regulatory RNA or noncoding RNA, alternative splicing, epigenetic splicing, and metagenomic splicing. 69

In their book Ni Dieu ni gène (Neither God nor Gene), published in 2000, Jean-Jacques Kupiec and Pierre Sonigo restrict the Mendelian approach by proposing a broader theory (p. 194):
[…] heredity is not written in DNA. It is the result of random draws by natural selection. It is the principles of this selection that we must understand, rather than desperately hoping to read in genes that which is not written in them. 70

The theory that they offer for this purpose generalizes Darwin's theory of evolution by applying it to "the populations of cells and molecules, as we do for populations of plants or animals" 71 (p. 214).
We have thus shown that-as with astrology and astronomy-there is a clear distinction between eugenics and Mendelism reflecting the two different approaches defined by Bacon. Also, as with Newton's laws, new discoveries have led to a fuller, more satisfactory approach to Mendelism.
Reasons to believe in the prediction of the future
We must now ask why these approaches have had so much success throughout human history-in the case of astrology, since remotest antiquity; for eugenics, since the late nineteenth century, although it was already latent in far earlier times. We shall also try to understand why astrology, although now rejected as a science, continues to capture the attention of so many people. Similarly, eugenics-despite the avoidance of the term since the end of World War II-has been applied by many political leaders with gigantic resources at their disposal for the task. Hereditarianism is still practiced as a science by a great number of researchers, despite the obvious falsity of its scientific premises.
We therefore believe it is important to seek the deep reasons that lead so many persons to deny human freedom by putting their faith in these pseudosciences.
Why and how should one continue to believe in divination?
In this section, we discuss not only astrology, but also all the other forms of divination mentioned in our introduction. Many are still relatively widespread in various parts of the world, so we begin by examining to what extent.
All the graphs generated by Google Trends show the global supremacy of astrology since 2004, all other practices being negligible in English as well as French and Spanish. A more detailed examination of certain countries reveals the presence of cartomancy and chiromancy in Switzerland, although their frequency is less than one-tenth that of astrology. In France as well, searches for "cartomancy" and "chiromancy" are not insignificant, but less frequent than in Switzerland. It would have been interesting to compare these figures with those for other languages or other countries. For example, how prevalent is chiromancy today in Japan or China? However, the results obtained for three major languages-English, French, and Spanish-will suffice here, for the purpose of our study is not to be universal but to identify the main reasons that still drive people to believe in divination.
As noted earlier, all these forms of divination have played an important role in the past and in a large majority of civilizations. Let us examine how such methods were regarded in antiquity.
In the first century B.C.E., Cicero (106-43 B.C.E.) wrote De natura deorum (On the nature of gods), De divinatione (About divination), and, at the end of his life, De fato (Concerning fate). We shall focus here on De divinatione (around 43 B.C.E.), where he presents his views in the form of a dialogue between Marcus (himself), who mocks superstition, and his brother Quintus, who defends the values of divination. In what follows, we rely mainly on Quintus' arguments to understand the motives for belief in divination, for we have already described Marcus' arguments at some length.
Quintus begins by distinguishing the two main forms of divination (I, 11):
"There is, I assure you," said he, "nothing new or original in my views; for those which I adopt are not only very old, but they are endorsed by the consent of all peoples and nations. There are two kinds of divination: the first is dependent on art, the other on nature." 72In Latin, the words ars and scientia are equivalent, and Quintus uses them interchangeably to designate knowledge. However, as scientia differs from what we now call "science," we prefer to use "art" here. This art is mainly represented not only by astrology but also by various predictions based on the examination of entrails, lightning, and other phenomena. In addition to astrology, the Romans regarded other predictions as "arts," for they were codified in texts reflecting extensive observations. The Roman augurs, for example, had a highly elaborate-but secret-corpus of doctrine called "augural right." The sixteen augurs in Cesar's time served as the interpreters of Jupiter, who conveyed to them the approvals, warnings or refusals of the celestial will. Interestingly, Cicero was elected augur in 53 B.C.E. "Natural" divination was based on dreams and prophetic divinations. These were not codified in texts, but their predictive content could be revealed by the dreamer's or the prophet's inspiration. Quintus distinguishes these two forms of divination (I, 56):
As in seeds there inheres the germ of those things which the seeds produce, so in causes are stored the future events whose coming is foreseen by reason or conjecture, or is discerned by the soul when inspired by frenzy, or when it is set free by sleep. 73

He goes on to describe the reasons for Roman belief in divination.
The first and main reason is found in Stoic philosophy which, although born in Greece in ca. 300 B.C.E., was by far the prevailing current in the Roman period. Stoicism assigned a role to fate (fatum) as a force governing the universe-a chain of events. Quintus defines it as follows (I, 55):
Now by Fate I mean the same that the Greeks call εἱμαρμένη, that is, an orderly succession of causes wherein cause is linked to cause and each cause of itself produces an effect. That is an immortal truth having its source in all eternity. Therefore nothing has happened which was not bound to happen, and, likewise, nothing is going to happen which will not find in nature every efficient cause of its happening. 74

This deterministic vision of fate does not rule out human action but incorporates it in its causalities. While individuals are powerless to alter the events affecting them, they do control their attitudes and responses to them. This philosophical approach is also intimately linked to the existence of the gods and their intervention in human affairs. As Quintus states (I, 5):
My own opinion is that, if the kinds of divination which we have inherited from our forefathers and now practise are trustworthy, then there are gods and, conversely, if there are gods then there are men who have the power of divination. 75

Like that of the Greeks, Roman religion had many gods, but it gave precedence to Jupiter.
Another factor drove the Romans to believe in divination: their political regime. According to tradition, the augurs had existed since Rome's foundation in ca. 753 B.C.E. Cicero describes the event (De divinatione, I, 48), when Romulus and Remus, as augurs, and through the observation of birds, asked the gods to tell them which of the two would rule the city. The appearance of twelve birds on the side favorable to Romulus expressed the divine choice. Ever since, the augurs had served as representatives of Roman power and made all the key decisions:

I think that, although in the beginning augural law was established from a belief in divination, yet later it was maintained and preserved from considerations of political expediency.
Augural art was thus applied to Rome for domestic and external policy alike.
The Romans were also extremely fond of predictive readings of individual fates. In particular, Chaldean astrologers competed with the haruspices, who also predicted the future by examining animal entrails. The belief in these predictions was very strong, most notably after Caesar's death in 44 B.C.E., and prevailed until the fall of the Roman Empire in the West at the end of the fourth century C.E.
Marcus, for his part, criticizes and even mocks superstition, rebutting Quintus' arguments one by one in the second part of the work. He offers a lucid vision of the role of divination in Roman society, and his conclusion leaves the reader free to believe in it or not.
Divination thus appears to have been a general attitude-indeed, even a theory-espoused by the entire Roman population.
The theory assumes the existence of gods acting as a force that can communicate with humans by a variety of means. The force operates at every level of society-from Stoic philosophy to religion, politics, and the fate of individuals.
In modern times, several disciplines have sought to explain not only divination methods but, more generally, religious practices. These disciplines notably include sociology (Durkheim, Les formes élémentaires de la vie religieuse: le système totémique en Australie), history (Bouché-Leclerc, 1879), psychology (James, The varieties of religious experiences: a study of human nature), and anthropology (Lévy-Bruhl, Le surnaturel et la nature dans la société primitive; Evans-Pritchard, Witchcraft, oracles and magic among the Azande). Lisdorf (2007) presents these various approaches in detail. We shall focus here on the contributions of cultural, psychological, and cognitive approaches. They seek to understand not only the different forms of divination but, more generally, all religious experiences, which involve one irrational element (monotheisms) or several (polytheisms). This generalization is necessary to embrace the full spectrum of religious phenomena.
We can currently distinguish four main theories that aim to explain the evolution of these behaviors and beliefs. They have been developed in the past thirty years, chiefly in psychology, biocultural anthropology, and cognitive sciences. They seek to offer a fresh approach to the issues most widely discussed in our times concerning the origin, representation, and transmission of religion with the aid of the cognitive structures of the human mind.
The first theory is based on the work of Stewart Guthrie (1993), Pascal Boyer (1994), and Justin Barrett ("Exploring the natural foundations of religion"). Their initial hypothesis is that religious concepts have emerged from cognitive processes that-from earliest antiquity-offered solutions to numerous problems relating to the natural and human environment. These concepts can now be tested empirically by means of relevant experiments. Boyer (The Naturalness of Religious Ideas: A Cognitive Theory of Religion) explains the diffusion and persistence of "minimally counterintuitive" ideas, of which religious concepts are a subset, by the fact that they are more attractive than others. While some psychological studies support this hypothesis, Barrett himself (2000, pp. 10-11) believes that these concepts do not suffice to explain religious sentiments:

However, counterintuitive concepts such as invisible sofas rarely occupy important (if any) roles in religious systems.
Counterintuitive beings or objects of commitment in religious belief systems are most often intentional agents.
This leads Barrett to assume that, from remotest antiquity, humans have also possessed "hyperactive agent-detection devices" (HADDs) that would, for example, enable a hunter to detect an imaginary tiger in his surroundings in order to prevent the failure to detect a real tiger. While this is an attractive explanation of supernatural phenomena, its verification is problematic (Powell and Clarke, 2012). To begin with, no neural mechanism has yet been found to confirm the hypothesis. Most important, experiments show that there are not one but several neural systems to detect different types of agents (Zmigrod and colleagues, "The neural mechanisms of hallucinations: a quantitative meta-analysis of neuroimaging studies"). The theory also largely ignores the culture in which the individual lives. Boyer does distinguish between small archaic societies and more complex societies-our focus here-ranging from Mesopotamia to modern nations (2018, p. 121):
Religions appeared with large-scale kingdoms, literacy and state institutions. Before them, people had pragmatic cults and ceremonies, the point of which was to address specific contingencies, misfortune in particular.
However, he maintains his view that religious sentiments have a common origin. Yet it is obvious that people's cultural environments strongly determine their sentiments and beliefs. It is therefore worth examining whether other theories can better explain religious behavior.
The second theory derives from research on the cultural evolution of societies. It takes fuller account of individuals' living environments in order to explain their irrational beliefs. After all, why do beliefs vary from one place to another and from one culture to another (Gervais, "The Zeus problem: Why representational content biases cannot explain faiths in gods")? The best explanation is that people live in different cultures, espousing their beliefs, norms, and behaviors. Ritual behaviors accordingly function as "credibility-enhancing displays" (CREDs), making people more willing to believe in the existence of supernatural forces acting on their society. In the words of Norenzayan et al. (2016, p. 5), CREDs complement the previous approach:
Credibility-enhancing displays (CREDs), or learners' sensitivity to cues that a cultural model is genuinely committed to his or her stated or advertised beliefs. If models engage in behaviors that would be unlikely if they privately held opposing beliefs, learners are more likely to trust the sincerity of the models and, as a result, adopt their beliefs.
This theory seeks to explain the emergence and current prevalence of religions that it qualifies as "prosocial," as they are more efficient in promoting large-scale cooperation in our societies, and as the groups concerned can, ipso facto, be advantaged by the inclusive fitness of their genes, defined by Hamilton (1964, p. 8):
Inclusive fitness may be imagined as the personal fitness which an individual actually expresses in its production of adult offspring as it becomes after it has been first stripped and then augmented in a certain way. It is stripped of all components which can be considered as due to the individual's social environment, leaving the fitness which he would express if not exposed to any of the harms or benefits of that environment. This quantity is then augmented by certain fractions of the quantities of harm and benefit which the individual himself causes to the fitnesses of his neighbours. The fractions in question are simply the coefficients of relationship appropriate to the neighbours whom he affects: unity for clonal individuals, one-half for sibs, one-quarter for half-sibs, one-eighth for cousins, . . . and finally zero for all neighbours whose relationship can be considered negligibly small.

For example, several studies have shown a high positive correlation between belief in "prosocial" religions and the size or complexity of societies (e.g. Roes, "Belief in moralizing gods"; Botero, "The ecology of religious beliefs"). But correlation does not necessarily mean causality, and these studies should be viewed with caution. Moreover, the concept of inclusive fitness is an artificial construct incorporating additive effects of genes-as does behavior genetics, whose fallacies we exposed in the previous chapter. Hamilton himself, like Fisher, believed in eugenics. An article by Nowak et al. on "The evolution of eusociality" (2010) attacked and showed the limits of this concept, which has been used in biology for nearly sixty years. The subsequent discussion of eusociality's relevance is far from over (Birch, "Kin selection, group selection, and the varieties of population structures").
A third theory seeks to show that religion is an adaptive phenomenon by introducing cooperation between individuals. It explains the evolution of religions over time by concentrating on the costs and benefits of a religious attitude. Given the huge costs involved in the observance of a religion with its prohibitions and complex rites, we may initially imagine that such an approach is unlikely to yield results. However, its defenders use biologists' "signaling theory" to show that it does allow progress. The theory assumes that the greater the constraints placed on a group's members, the stronger its solidarity. Sosis ("Religion and intragroup cooperation: preliminary results of a comparative analysis of utopian communities") compared the stability of 277 religious communities and secular communities (e.g. socialist and anarchist) in the United States over a 120-year period spanning the nineteenth and early twentieth centuries. Over the entire period, secular communities had an average annual probability of dissolution four times as high as religious communities. A more detailed study of communities (Sosis and Bressler, 2003) incorporated the number of restrictions enforced (on the consumption of coffee, alcohol, tobacco or meat, on communication with other communities, on jewelry, and so on). It showed that the effect of these prohibitions on the longevity of religious communities was proportional to their number, whereas they had no effect on secular communities. The authors can therefore attribute this stability to rituals (p. 230):
Trust emerges because participants direct their ritual efforts toward the same deity or spirit. The ritual action itself signals belief in this nonmaterial supernatural entity, an entity whose existence can accordingly not be demonstrated. By directing rituals' referents toward the unfalsifiable, religions attach themselves to ultimate beliefs that are unverifiable and hence potentially eternally true. These ultimate sacred postulates are not subject to the vicissitudes of examination; they are beyond examination, making them much stabler referents than those employed by secular rituals.
Having thus demonstrated the fitness of religious phenotypes, they go on to investigate how religion fosters it through strategies that promote higher fertility among couples, for example by requiring stable unions and banning abortion (Sanderson, "Adaptation, evolution, and religion"). However, they fail to identify the phenotypes that can induce these behaviors (Powell and Clarke, 2008).
A fourth theory, developed by the anthropologist Maurice Bloch ("Why religion is nothing special but is central"), argues that the phenomenon to be explained is not religion but, more generally, the establishment by human societies of institutions linked to States. Given the problems encountered by the previous theories summarized above, Bloch offers the following proposal (p. 2055):
The alternative story I propose here avoids these problems. It argues that religious-like phenomena in general are an inseparable part of a key adaptation unique to modern humans. This is the capacity to imagine other worlds, an adaptation that I shall argue is the very foundation of the sociality of modern human society. This neurological adaptation occurred most probably fully developed only around the time of the Upper Palaeolithic revolution.
The author thus goes back to the period of the origins of astrology. As Émile Durkheim had recognized before him, astrology did not differ from astronomy at the time, and Bloch adds that it was tied to the political organization of complex social groups. He regards humanity as capable of handling the "transactional social" and the "transcendental social" simultaneously. We share the transactional with chimpanzees, whereas the transcendental is specific to us. The entry into the transcendental, like a symbolic second birth in many civilizations, opens mankind to imagination. It enables us to understand the roles played by individuals and the resulting social groups. It offers a simultaneous explanation of religious rituals and of the gods created as members of these transcendental groups.
This theory seems at odds with cognitive approaches, yet Bloch himself (2016)-although stating that he is not terribly fond of such labels-and his most recent defenders such as Connor Wood and John Shaver (2019) note that the theory does not repudiate the earlier approaches but reconciles them by opening up a new perspective (pp. 13-14). However, the very notion of transcendence, used in philosophy by many authors with very different meanings, is not sufficiently analyzed in a scientific manner. In his 2007 article, Bloch envisages the use of "mirror neurons" theory 76 to explain the notion of social transcendence-"the action of alter requires from us a part of the same physiological process" (pp. 289-290)-but he admits that this theory is contested (for more details, see Hickok, "Eight problems for the mirror neuron theory of action understanding in monkeys and humans"). Moreover, it was developed for humans as well as primates, contradicting Bloch's position on transcendence.
These four current theories are thus far from explaining the complexity of religious phenomena. Even Norenzayan et al. (2016, p. 16), while supporting the second theory, fully recognize their deficiencies:
Despite recent progress, the evolutionary study of religion is in its infancy, and important gaps remain in our knowledge and much work needs to be done to reach a more complete understanding.
We take the argument a step further by pointing out that these theories use concepts whose definition and value have been largely challenged by many studies. Such concepts include hyperactive agent-detection devices, credibility-enhancing displays, inclusive fitness, signaling theory, and mirror neurons. Only Bloch's original theory, which holds that "religion is nothing special," might be exempt from this weakness, but his additional words "is central" diminish its value considerably. In fact, his recent defenders, Wood and Shaver, now regard it as a mere sequel to earlier theories.
Psychological studies of religion, as well, have introduced instruments for measuring religious spirituality and assigned them a major role. The Daily Spiritual Experience Scale (DSES), for example, is regarded as "one of the most significant recent innovations in the conceptualization and measurement of religiousness and spirituality" (Ellison and Fan, 2008, p. 247). It is used in thousands of articles to show the positive effects of religion. A detailed analysis of these effects in Australia (Schuurmans-Stekhoven, 2011, p. 144) shows that they are incorrectly interpreted and concludes:
76 "A mirror neuron is a neuron that fires both when an individual performs an action and when the individual observes the same action performed by another individual." (Stemmer and Whitaker, eds., 2008, p. 237) The results suggest that those truly interested in discovering the causes of well-being would do well to remember that civil behaviour and dispositions are not exclusive to those high on spirituality or religiosity and once these broad variables are considered the effects for theistic experience and belief vanish.
A similar analysis of Japan (Schuurmans-Stekhoven, 2018) fully confirms these results for another culture.
These criticisms are well summarized by Luther Martin and Donald Wiebe (2012, p. 17):
Like semiotics, the history of Religious Studies has been one of simultaneous institutional success and institutional bankruptcy. On the one hand, there are now numerous departments, institutes, associations, congresses and journals dedicated to Religious Studies. On the other hand, the academic study of religion has failed to live up to earlier promises of theoretical coherence and scientific integrity; indeed, such promises have been severely undermined.
This conclusion is all the more striking as Wiebe ("Theory in the study of religion"), in total agreement with Martin, believed the opposite thirty years earlier.
A final comment: these studies generally ignore the notions of atheism and agnosticism, which have existed since antiquity and would require an analysis similar to that of religion (LaBouff et al., "Differences in attitudes toward outgroups in religious and nonreligious contexts in a multinational sample: a situational context priming study"). In the final section of this chapter, we shall return to these notions-which lie outside the field of religion-by drawing a connection between atheism and political anarchism, as Bloch initially did when he wrote (2008, p. 2058): "The creation of an apparently separate religion is closely tied to the history of the state."
Why and how should one still believe in eugenics in its present forms?
In Chapter 4, we examined the various approaches put forward to justify eugenics and hereditarianism. It is worth asking what drives many people today to believe in these doctrines, whose nefarious and criminal legacy we presented. They may seem different now, for they produce an abundant literature that-despite its pseudoscientific character-is promoted by a large community of researchers; at the same time, they offer a wide array of genetic tests to all of humanity, so all individuals can learn their future.
The geneticist Arnold Munnich (2008, p. xxi) describes this program clearly:
Have geneticists replaced card-readers and other fortune-tellers in our contemporaries' imagination? How have we come to accredit the notion that genetics would make it possible to understand everything, prevent everything, and avoid even the worst events? It's no doubt because, in the minds of many, genetics specializes in origins and destiny. The geneticist is a sort of oracle with a clear vision of the past and, consequently, of what the future holds for us. . . . And lo and behold, genetics has become a moral issue, a means of ideological pressure, or the opportunity for taking positions of a denominational nature, or even for taking action in the name of a given chapel or divinity . . .

There is only one point in this text with which we might take issue: card-readers and other fortune-tellers are hardly about to disappear. Having said that, let us examine the impact of behavior genetics and its means of persuasion.
The Behavior Genetics Association (BGA), founded in the U.S. in 1970, initially comprised 44 members who had manifested their interest in this line of research (Loehlin, 2009). Its activities were U.S.-centered until 1980, but it then began to hold international meetings in other countries-a sign that its approach was attracting a wider audience. Its publication resources also expanded steadily, and articles in its field were published by many journals such as Behavior Genetics, Acta Geneticae Medicae et Gemellologiae, Social Biology, Genes, Brain and Behavior, and Twin Research. The discipline found its way back first into American universities including the University of Colorado at Boulder, the University of Minnesota, and the University of Texas, then internationally at Kings College (London), the Vrije Universiteit (Amsterdam), and elsewhere. Interestingly, the list includes no Francophone universities. The BGA annual conferences are now attended by nearly 1,000 persons.
Behavior genetics has been abundantly criticized, most notably by renowned geneticists and psychologists such as Jerry Hirsch in his book To "Unfrock the Charlatans" (1981). BGA members, however, generally ignore these attacks, leaving the questions raised in abeyance. However, the issue of differences between black and white "races" sparked a major debate in the U.S. on the validity of the analyses that led to a genetic hypothesis concerning these differences. The members of the Genetic Society of America first voted overwhelmingly in favor of a report supporting Hirsch; in the end, his arguments were rejected and a watered-down version of the report was drafted and approved. This was in response to criticisms from a geneticist-mathematician who was well-known but not even mentioned in the report (letter from David Suzuki to Hirsch, 1984, quoted in the French edition of Hirsch's book). The episode gives an idea of the political power of behavior geneticists.
It is important to measure the impact of these studies on the general population. Psychological studies on the effect of the media, which disseminate the results of behavior genetics research, can measure their effect on public beliefs. The findings by Alexandre Morin-Chassé (2014) are essential to understanding this influence.
Morin-Chassé conducted a double-blind survey of 1,413 Americans. He divided his respondents into three groups and had each read one of three press articles. The first article reported the discovery of a "cancer gene"; the second was on the discovery of a "liberal gene" affecting people's political views; the third discussed a "debt gene" found in persons with negative credit-card balances. After the groups had read the articles, Morin-Chassé asked all respondents to indicate their belief that fourteen individual characteristics were influenced by genes, on an 11-point scale ranging from 0% to 100% genetically caused. These characteristics, which ranged from skin color to a preference between Apple and Microsoft, had already been tested in earlier surveys for their influence on respondents. The characteristics also included those discussed in the articles previously read by respondents.
The first group provided a ranking very similar to the one found in many studies (Schneider, "Genetic attribution: sign of intolerance or acceptance") covering the entire population: the more a trait is biological (such as skin color or size), the more it is perceived as gene-related. The second group attributed political behavior to genetic factors by a far greater margin than the other groups. It therefore revealed a strong influence of the earlier reading of the article about the "liberal gene." This influence, however, was also strong for all the other traits regarded as weakly related to genetics by the first group. The third group saw the genetic factor as the cause of negative credit-card balances by a larger margin than the other groups. This provides further confirmation of the effect of reading the article on the "debt gene." As with the second group, the article influenced views about traits weakly linked to genetics.
These findings clearly show the role of the press, which often overstates the impact of the results of behavior genetics studies, and the powerful influence of their articles on their readers' views.
Human freedom
Let us now return to the prevalence of atheism in today's populations. Atheism may be defined as the absence or the rejection of any belief in any divinity whatsoever, whereas agnosticism holds that the existence of a god or gods cannot be determined. Two surveys were conducted in 2005 and 2012 to measure religiosity and atheism worldwide on a sample of men and women across 39 countries for the first survey and 57 countries for the second, which covered more than 50,000 respondents. The following question was asked:
Irrespective of whether you attend a place of worship or not, would you say you are a religious person, not a religious person or a convinced atheist? (WIN-Gallup International, 2012, p. 3)

The responses allow a rough categorization into religious persons, agnostics, and atheists, for the non-response rates are generally low. In 2012, 59% of the world population described itself as religious, 23% as agnostic, and 13% as atheist; 5% did not answer the questionnaire. Between 2005 and 2012, the percentage of "religious" persons fell and that of agnostics and atheists rose.
A recent, more detailed, and more accurate study of atheists in the U.S. (Gervais and Najle, 2017) found a far higher Bayesian estimate of 26%. By extrapolation, the number of atheists worldwide would be about two billion. While confirming these results would require further investigation, we can see that the number of nonreligious people in the world is significant, and that we should not confine studies to religious persons-as many researchers do.
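The order of magnitude of this extrapolation is easy to verify (the calculation is ours and assumes, for the sake of illustration, that the U.S. estimate applies to a world population of roughly 7.5 billion at the time):

$$0.26 \times 7.5 \times 10^{9} \approx 1.95 \times 10^{9} \ \text{persons}.$$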
Next, what do we know about the origins of religious sentiment in archaic societies? We have gone back very far in the history of humanity-up to Mesopotamia-to find the origin of astrology and religions. But all these ancient civilizations were already States whose elaborate political and social organization could entail an obligation for their populations to observe their rules. What was the situation in earlier times, in which Boyer (Minds Make Societies) discerns no religious organization? The studies of Ukrainian "Mega-Sites" dating from 4000 to 3200 B.C.E., covering 100-400 hectares and with a population of several thousand people, show no sign of a centralized authority, of monuments or, no doubt, of an official religion (Wengrow, "Cities before the State in early Eurasia"). While hard to fully understand and transpose to today's world, these egalitarian societies appear to tell us that political and religious powers are neither necessary nor indispensable for a human society to exist. But this is merely an assumption whose robustness we cannot verify, given the remoteness of these cultures from the present.
The lack of written sources for archaic societies makes researchers dependent on the evidence collected on their populations by ethnologists and anthropologists over the centuries, evidence inevitably gathered from a Western point of view. Pierre Clastres (1974, p. 23) commented on such ethnocentrism:
A Copernican revolution is at stake, in the sense that in some respects, ethnology until now has let primitive cultures revolve around Western civilization in a centripetal motion, so to speak. (English translation by Robert Hurley) 77

Clastres' premature death at age 43 prevented him from completing that revolution. He did not, in fact, examine the role of religion. Rather, he confined his investigation to the role of political power in archaic Amazonian societies, while failing to answer David Graeber's question (2004, p. 23):
The most common criticism of Clastres is to ask how his Amazonians could really be organizing their societies against the emergence of something they have never actually experienced.
Despite this weakness, Clastres did make progress in the critique of modern anthropology.
Unfortunately, anthropology is still conducted using databases that collect observations by ethnologists and archeologists in the form of binary data files (did the society studied experience a given form of religious thought or not), such as the Standard Cross-cultural Sample developed by George Murdock and Douglas White (1969). These data are not exempt from the cultural bias singled out by Clastres, and we should be highly cautious in interpreting the findings of the studies based on them.
The study by Peoples et al. on Hunter-Gatherers and the Origin of Religion (2016) is open to such criticism. It is based on a sample of 33 hunter-gatherer societies, taken from the Standard Cross-cultural Sample. It shows that all these societies practiced animism-which the authors do not regard as a religion-but that only 15% had what they call "active high gods." Again, given the source used, Clastres' objections apply, and the findings are questionable.
The debate on religious sentiment in archaic societies is thus far from settled, and the ethnocentrism of our societies offers little prospect of progress.
Nevertheless, we can state that the establishment of an entity-whether celestial body, god or gene-that cannot be influenced by humans but determines their fate is an invention of complex societies endowed with a strong political authority, for the purpose of controlling their members by curtailing their freedom. Like religion, freedom is a concept with multiple meanings, and we must specify how we interpret the term here.
While the methodological objective of this book precludes a discussion of Jean-Paul Sartre's philosophy, we believe that it has best captured the notion of freedom as it pertains to our subject, for it eliminates the notion of divinity as a deterministic figure. Sartre spells out what he means by "freedom" in the following passage (1946, p. 37):
Everything is indeed permitted if God does not exist, and man is in consequence forlorn, for he cannot find anything to depend upon either within or outside himself. He discovers forthwith, that he is without excuse. For if indeed existence precedes essence, one will never be able to explain one's action by reference to a given and specific human nature; in other words, there is no determinism-man is free, man is freedom. Nor, on the other hand, if God does not exist, are we provided with any values or commands that could legitimise our behaviour. Thus we have neither behind us, nor before us in a luminous realm of values, any means of justification or excuse. -We are left alone, without excuse. That is what I mean when I say that man is condemned to be free. Condemned, because he did not create himself, yet is nevertheless at liberty, and from the moment that he is thrown into this world he is responsible for everything he does. (English translation by Philippe Mairet, 1956) 78

In fact, for Sartre, very few people are ready to accept and assume their freedom. They prefer to pin responsibility for their situation on someone else or something else in order to free themselves from it. This offers a perfect explanation for the strong persistence of the belief in an external god (for religion), or in an unfortunate gene passed down from our ancestors (for eugenics). This attitude has also been imposed on most individuals by the successive political regimes that have arisen throughout human history.

78 French text: . . . j'exprimerai en disant que l'homme est condamné à être libre. Condamné, parce qu'il ne s'est pas créé lui-même, et par ailleurs cependant libre, parce qu'une fois jeté dans le monde, il est responsable de tout ce qu'il fait.
Part II. What can one capture of a human life, and how?
Chapter 6 Imaginary life stories to forge and nourish our mind
In Part I, we showed that the divination arts and eugenics seek to demonstrate the predictability of a person's future, and we examined the reasons for their enduring success. Despite that success, we debunked their claims with specific arguments. We now turn to the many difficulties encountered when trying to capture even a portion of an individual life, the problems we face when attempting to analyze it, and the resources we can apply to overcome those obstacles.
Chapter 6 looks at the various ways of imagining or recording a life story, and the difficulties this involves.
From our earliest childhood our mind is shaped-like an iron blank reddened by fire-by myriad epics, tales, legends, myths, and other narratives. Once forged, our mind is again nourished throughout our lives by novels, films, plays, and other works, which revive that initiation by reinforcing it or-on the contrary-contradicting it, for our mind is ever watchful. This chapter takes a closer look at the various types of imaginary narratives. We offer a classification by focusing on those that recount the lives of one or more characters in a partial manner, without ever attaining exhaustiveness, given that those lives are so complex. We set aside the categories for which the narratives are secondary, artificial, or even unnecessary. We discuss the methods used for the purpose, and the importance attached by their authors to certain specific aspects of those lives.
We will then look in more detail at some of these imaginary life stories, in order to see how they are symbolic of their time.
First, the Epic of Gilgamesh, at the dawn of human civilizations, marks the advent of a patriarchal culture in opposition to the matriarchy that preceded it. The later epics of Homer in Greece, the Mahābhārata in India, and the Old Testament of the Jewish people are quite similar in form but mark very different cultures, like many other epics and myths around the world. We may also wonder why nowadays there seems to be a lack of new epics characterizing the decline of Judeo-Christian civilizations.
Second, Sophocles' tragedy Oedipus Tyrannus may be defined by the concept of the necessity of human fate. Aristotle preferred tragedy to epic, as it was more effective in condensing action. Like the epic, this dramatic genre has known a great number of forms, down to the present day.
Third, romances emerged in the Middle Ages and became important later as novels. We analyze here the devout romances inspired by the life of Henri de Joyeuse, presented as a model for the spiritual ascension of a penitent and a mystic, while omitting his life of luxury before his conversion. From the late eighteenth century the novel reached maturity, and it remains a major part of literature today.
We close with a general discussion of the role played in our lives by all these kinds of imaginary life histories.
Imaginary life stories
Our first question is: what are the distinctive features of these various kinds of narratives, some of which tell the story of one or more individuals either partially (by presenting an episode of their life) or more fully, but never exhaustively? The theory of this literature-its narratology-goes back to the earliest times. We outline different points of view on the narratives without going into excessive detail.
In Greek antiquity, Plato initially directed his attention to what the Greeks called ποίησις (poiesis) (Duchemin, 1985), a term that encompasses not only epic but tragedy, comedy, dithyrambic poetry, and even tales [START_REF] Manson | Platon et les contes de nourrices[END_REF]. Poiesis is also closely linked to music, and, more broadly, constitutes the Platonic notion of μίμησις (mimesis), which has two main meanings: imitation proper, and representation in general [START_REF] Brancacci | Mimesis, poésie et musique[END_REF]. Plato's purpose was not to define different kinds of narratives but to describe different types of imitations, which are not mutually exclusive. For instance, in the Republic (ca. 360 B.C.E., III, 394 b-c), he writes: there is one kind of poetry and tale-telling which works wholly through imitation, as you remarked, tragedy and comedy; and another which employs the recital of the poet himself, best exemplified, I presume, in the dithyramb; and there is again that which employs both, in epic poetry and in many other places, if you apprehend me.79

Thus the poet will imitate, or invent, or combine the two modes. A fuller analysis of Plato's critique of these imitations-which he vehemently excludes from his ideal Republic-lies beyond the scope of our discussion. However, for the reader seeking fuller information on the topic, we recommend Plato and the poets (Destrée, Herrmann, 2011) and Exiling the poets [START_REF] Naddaff | Exiling the poets. The production of censorship in Plato's Republic[END_REF].
Aristotle, in his Poetics (ca. 350 B.C.E.), goes further and addresses the generic constitution of poetry. He opens his work by stating (1447a):
I propose to treat of Poetry in itself and of its various kinds, noting the essential quality of each, to inquire into the structure of the plot as requisite to a good poem; into the number and nature of the parts of which a poem is composed; and similarly into whatever else falls within the same inquiry. Following, then, the order of nature, let us begin with the principles which come first.80

Aristotle is therefore indeed referring here to categories defined by their specific purposes. While still resembling Plato's types of imitation, the categories differ in that they are sufficiently distinct to be clearly separated. Aristotle eliminates Plato's proposed combination of modes, which is not suited to his specification, as the categories must be distinguishable. He recognizes only two modes of enunciation: narrative poetry and dramatic poetry. However, he continues to admit the epic genre and the dramatic genre (tragedy and comedy, although his book on comedy is now lost), as well as music-to which he adds painting as a form of imitation. He prefers tragedy to epic, most notably because it condenses actions better. His classification is based on the formal characteristics of the works more than on their deeper significance. A final comment on Aristotle's view of ποίησις: it clearly contrasts with Plato's, for it recognizes the importance of ποίησις to philosophy. For a fuller analysis, we refer the interested reader to Halliwell (1998), Aristotle's Poetics.
Neither Aristotle nor Plato discusses lyric poetry as defined today. They merely distinguish between several kinds of poetry, such as dithyrambs and elegies, without devoting great attention to them. It was only in the eighteenth century that Batteux (1746) extended Aristotle's genres to lyric poetry, which he defines as follows (p. 240):
The other kinds of poetry are mainly concerned with actions. Lyric poetry is entirely devoted to feelings. That is its manner, its essential aim.81

80 Greek text: περὶ ποιητικῆς αὐτῆς τε καὶ τῶν εἰδῶν αὐτῆς, ἥν τινα δύναμιν ἕκαστον ἔχει, καὶ πῶς δεῖ συνίστασθαι τοὺς μύθους εἰ μέλλει καλῶς ἕξειν ἡ ποίησις, ἔτι δὲ ἐκ πόσων καὶ ποίων ἐστὶ μορίων, ὁμοίως δὲ καὶ περὶ τῶν ἄλλων ὅσα τῆς αὐτῆς ἐστι μεθόδου, λέγωμεν ἀρξάμενοι κατὰ φύσιν πρῶτον ἀπὸ τῶν πρώτων.

81 French text: Les autres espèces de poésie ont pour objet principal les actions: la poésie lyrique est toute consacrée aux sentiments, c'est sa manière, son objet essentiel.
In other words, lyric poetry imitates feelings, just as tragedy evokes pity and fright.
Another type of literature also emerged in the Middle Ages as romances, but became important mainly in our times: the novel. But was it a new genre? For Dumézil (1968)-who included the cycle of Arthurian romances and Indian epics in the same complex-symbols and functions are expressed "either in epic narratives in the proper sense, or in romances inseparable from epic" (p. 25). From the standpoint of comparative mythology, it seems normal to regard romance and epic as one. However, to examine the cultural formation and genesis of the forms of the imaginary, we need to consider them separately. As [START_REF] Paquette | Epopée et roman: Continuité ou discontinuité[END_REF] admits (p. 36):
And if the epic-despite a common narrative system that it shares with the novel-must be regarded as a clearly distinct genre, it is because it imparts to the formation of the language that engenders it the significance of an absolute beginning.82

He argues that the epic is the history of the origins of an ethnic group, whereas the novel is a literary work about that group.
More recently, some authors have explored the significance of a literary genre and its boundaries. Such a classification assumes the existence of clear resemblance criteria that make it possible to distinguish between genres. [START_REF] Schaeffer | Qu'est-ce qu'un genre littéraire[END_REF], after giving a history of the various theoretical proposals to resolve this classification problem, concludes (p. 63):
[. . .] not one of Aristotle's few illustrious successors was able to go further than the author of Poetics; on the contrary, each endeavored to make the problems even more unfathomable than his predecessor already had.83

In the face of this failure, Schaeffer proposes the replacement of the theory of literary genres by four generic logical sequences to address any work (p. 185):
[. . .] any text is, indeed, an act of communication; any text has a structure from which one can extrapolate ad hoc rules; any text [. . .] is positioned relative to other texts, and thus has a hypertextual dimension; lastly, any text resembles other texts.84

Accordingly, he views the creation of a text in terms of discursive conventions, which can be either constitutive, regulatory or traditional. Schaeffer defines as constitutive (p. 159) the conventions that enter into the definition of a genre without, however, fully defining it. Regulatory conventions add new rules to a form of existing communication. Traditional conventions concern the meaning of linguistic expressions. Multiple types of discursive conventions usually apply to any given work, and the notion of genre is no longer very meaningful in this context. Nevertheless, genre is still commonly used as an analytical tool by many scholars and as a method for classifying texts in bookstores, libraries, and elsewhere. More recently, some authors have even tried to rehabilitate the notion of genre, no longer with the aid of formal classification criteria, as scholars did after Aristotle, but by arguing that works are the expression of a body of thought. As a result, the goal becomes to "understand" genres, i.e., to grasp their deeper meaning rather than focusing on reproducible criteria regarding their form and features.
Before Schaeffer, Peter Szondi (1974), discussing studies by the historian of antiquity Schlegel (1772-1829) and the philosopher Schelling (1775-1854), had already offered another approach to these texts. He would not define (p. 50):
[. . .] the various genres in a descriptive manner, based on their elements, but deduce them from a specific concept (in the case of tragedy, from the concept of necessity). The tragic hero faces this necessity as he confronts his fate, with the "moral independence" that Schlegel, in the same passage, attributes to Prometheus and Antigone-tragic hero and tragic heroine-but denies to the epic hero Achilles.85

Thus breaking with the classic conception of ποίησις as a set of reproducible generic traits, Szondi sets out to seek their origin. In another book (1975), Einführung in die literarische Hermeneutik (Introduction to literary hermeneutics), Szondi traces the development of hermeneutics through examination of the work of eighteenth-century German scholars. He introduces here the main role of hermeneutic theory in literary studies.
Later, [START_REF] Goyet | Penser sans concepts: fonction de l'épopée guerrière[END_REF] showed that epic allows a society to find radically new solutions to a major political, cultural, and social crisis by embodying it in characters in order to think it through and reconfigure the world. [START_REF] Vinclair | De l'épopée et du roman. Essai d'énergétique comparée[END_REF] compares the effect of epic and the novel on societies. He shows that one can characterize the novel by its ethical effort to emancipate the individual, and epic by its political effort to redefine common values [START_REF] Vinclair | Le roman fait l'épopée[END_REF]. This new path should extend to all literary genres and reverse the prior approach. No longer focused on the formal characteristics of genres-which had led Schaeffer to reject them-it describes them on the basis of their thought patterns, and the hermeneutic way in which they lead the reader to think and live (Bertho, 2016).
The notion of genre thus revived, we can now take a closer look at the genres of relevance to our discussion, as they examine lives that are imaginary yet well rooted in the society that produces them.
We begin with the genesis of epics, which-in Vinclair's words (2015, p. 348)-endeavor:

to think (through polylogy and trials) and to bring about [. . .], in common, new political values.86

The term epic derives from the ancient Greek ἔπος, "that which is expressed by speech," and ποιέω, from the verb "to create." The epic is thus a vast verse or prose composition that sets out a historical theme and celebrates the actions of a model hero or the great deeds of a group. For our purpose, we focus here on the actions or even the entire life of a hero. However, we must distinguish these epics-composed in the form of literary works-from oral myths existing in many versions. Hence the importance of specifying the differences between the two categories, which can be significant. It will be useful, for this purpose, to compare the approaches of two scholars: Vernant and Lévi-Strauss. Vernant (1974, p. 246) clearly sets out the differences between his approach, based on historical psychology (1965), and that of Lévi-Strauss, based on structural analysis (1958):
It will be noted in this respect that Lévi-Strauss works on a corpus of oral narratives offering a very large number of variants. The manner itself in which the research is conducted calls for a systematic comparison of the narratives in order to identify the formal elements that recur in each myth, according to relationships of homology, inversion, and permutation. At the same time, it rules out a philological analysis of each version. The problem is different in the case of a heavily structured and elaborate written work such as the Theogony or Works and Days. Here, one cannot give precedence to elements found, with greater or lesser changes, in other versions. One must strive to provide an exhaustive analysis of the myth in the details of its configuration.87

86 French text: …à penser (par polylogie et par épreuves) et à faire advenir […], en commun, de nouvelles valeurs politiques, …

87 French text: On notera à cet égard que Lévi-Strauss travaille sur un corpus de récits oraux offrant un très grand nombre de variantes. La manière même de la recherche appelle une comparaison systématique des récits pour en retenir les éléments formels qui se répondent de mythe en mythe, suivant des rapports d'homologie, d'inversion, de permutation. En même temps elle exclut une analyse philologique fouillée de chacune des versions. Le problème est différent dans le cas d'une oeuvre écrite, fortement charpentée et élaborée, comme la Théogonie ou Les travaux et les Jours. Il ne peut s'agir alors de privilégier les éléments qu'on retrouve, plus ou moins transformés, dans d'autres versions. On doit s'efforcer de donner du mythe, dans le détail de sa configuration, une analyse exhaustive.
As our focus here is on written epics, we shall set aside the structural approach, while recognizing its value for the study of oral myths. The problem is, in fact, similar to that of tragedy, discussed later. Lévi-Strauss' comparison-in La Potière jalouse (1985)-of Sophocles' Oedipus Rex (ca. 429 B.C.E.) with Labiche's comedy Un chapeau de paille d'Italie (1851) in no way describes the content of each play, but only their common form [START_REF] Nourry Salmon | Psychologie historique et analyse structurale chez J.-P. Vernant[END_REF]. By contrast, Vernant and Vidal-Naquet (1986) place Oedipus Rex in the context of its time and try to understand the thought process specific to Greek tragedy of the fifth century B.C.E. That is our preferred approach here as well. We can sum up the two approaches by saying that Lévi-Strauss joins Aristotle in focusing on the structure of generic traits, whereas Vernant joins Vinclair by looking at their genesis and thought patterns.
Let us examine a few selected examples of epics.
The oldest known epic is that of Gilgamesh, which narrates a part of the life of the fifth sovereign of the first dynasty of the Mesopotamian city-state of Uruk in ca. 2650 B.C.E. We discuss the epic in greater detail below [START_REF] Bottéro | L'épopée de Gilgameš[END_REF][START_REF] George | The Epic of Gilgamesh[END_REF].
Homer, believed to have lived around the eighth century B.C.E., is the author of the two Greek epics, the Iliad and the Odyssey, but both texts have a far older origin. The Iliad, for example, reflects a society in which countless petty rulers are subservient to a more powerful king, and its content presumably dates from the Bronze Age [START_REF] Severyns | Homère et l'histoire[END_REF].
In India, the Mahābhārata (the epic history of the Bhārata's descendants) began to acquire its current form in the fourth century B.C.E., but its origins may go back further. Some mythical traits may be Vedic or Pre-Aryan. This long Indian poem recounts a family drama spread across three generations. Dumézil's detailed analysis (1968) enables us to grasp its deep meaning.
The books of the Old and New Testament are not generally regarded as epics. This is surprising, for the life stories of the prophets do celebrate the actions of key figures such as Abraham, Moses, Daniel, and Jesus Christ, and they show the vision of the world shared by many peoples today. In reality, what is regarded as a Biblical epic is the adaptation of texts from various books of the Bible in dactylic hexameters produced from the early fourth century up to the Renaissance [START_REF] Faïsse | Epopée biblique entre traduction poétique et commentaire exégétique[END_REF]. However, the fact that the original texts form the bedrock of several current religions (Judaism, Christianity, and Islam) also sets the Bible apart from pagan epics.
By contrast, can we regard as epics the many epic poems produced in Europe from the sixteenth century onward, such as Ronsard's Franciade (1572) and Milton's Paradise Lost (1667)? In terms of their formal characteristics, these works are obviously epics. If, however, we look to them for radically new solutions for interpreting the world, we must admit that they are failed epics. As Vinclair clearly states (2015, p. 213):
Failed epics because, through a misguided interpretation of the theory of genres, the heroic poet (in keeping with the opinion of contemporary critics) thinks that by imitating the phenotypic properties he could achieve a work of the same kind as those of Homer or Virgil.88

88 French text: Des épopées ratées parce que, se fourvoyant sur la théorie des genres, le poète héroïque (suivant l'opinion des critiques de l'époque) pense que l'imitation des propriétés phénotypiques lui permettrait d'accomplir une oeuvre de même nature que celles d'Homère ou de Virgile.

Can we also conclude that the epic genre is now dead? We do not think so, for with the decline of Christian civilization, a new culture should arise. In the late nineteenth century, Nietzsche wrote in Die fröhliche Wissenschaft (1882, sect. 125):
God is dead! God remains dead! And we have killed him!89

One might have imagined that the decline of Judeo-Christian civilizations would foster the rise of a new epic in our world. That is certainly what Mallarmé intended with his planned Livre (Book), which he never wrote. Before Nietzsche, he had come to believe that God does not exist. As he wrote in a letter to Cazalis in 1867 (Correspondance 1854-1898, 1, 241):
After a supreme synthesis, I am slowly gaining in strength-unable, as you can see, to be distracted. But how much more unable I was, several months ago, first in my terrible struggle with that old and evil plumage-God-fortunately knocked to the ground. But as that struggle had taken place on his bony wing-which, by an agony more vigorous than I could have suspected in him, had swept me into the Shadows-I fell, victorious, madly and infinitely so [. . .] 90
90 French text: J'en suis, après une synthèse suprême, à cette lente acquisition de la force -incapable tu le vois de me distraire. Mais combien plus je l'étais, il y a plusieurs mois, d'abord dans ma lutte terrible avec ce vieux et méchant plumage, terrassé heureusement, Dieu. Mais comme cette lutte s'était passée sur son aile osseuse qui, par une agonie plus vigoureuse que je ne l'eusse soupçonné chez lui, m'avait emporté dans les Ténèbres, je tombai, victorieux, éperdument et infiniment …

To found this new godless civilization, glimpsed by Mallarmé and Nietzsche, a new epic was required. But neither writer was able to compose it, no doubt because of its sheer magnitude, and one hundred and fifty years later, religion still maintains a strong presence and is far from having vanished from the minds of our contemporaries.
Let us now turn to tragedy.
Greek tragedy emerged several centuries after Homer's epic and acquired its definitive form with Aeschylus (ca. 525-456 B.C.E.), when his play The Persians was first performed in 472 B.C.E. The term "tragedy" derives from the ancient Greek τραγῳδία, which combines τράγος (he-goat) and ᾠδή (song, sung poem). It first designated the ritual chant accompanying the goat's sacrifice at the feasts of Dionysus in archaic times. Later, as Szondi noted, tragedy was defined by the concept of the necessity of human fate.
We have also seen that, in formal terms, Aristotle preferred tragedy to epic, as it was more effective in condensing actions. This narrower scope could prevent it, at least in part, from telling a fuller and more elaborate life story. It would often concentrate on a specific episode of life. To quote Aristotle (Poetics, 1449b):
Epic poetry agreed with tragedy only in so far as it was a metrical representation of heroic action, but inasmuch as it has a single metre and is narrative in that respect they are different. And then as regards length, for Tragedy endeavours, as far as possible, to confine itself to a single revolution of the sun, or but slightly to exceed this limit, whereas the Epic action has no limits of time.91

91 Greek text: ἡ μὲν οὖν ἐποποιία τῇ τραγῳδίᾳ μέχρι μὲν τοῦ μετὰ μέτρου λόγῳ μίμησις εἶναι σπουδαίων ἠκολούθησεν: τῷ δὲ τὸ μέτρον ἁπλοῦν ἔχειν καὶ ἀπαγγελίαν εἶναι, ταύτῃ διαφέρουσιν: ἔτι δὲ τῷ μήκει: ἡ μὲν ὅτι μάλιστα πειρᾶται ὑπὸ μίαν περίοδον ἡλίου εἶναι ἢ μικρὸν ἐξαλλάττειν, ἡ δὲ ἐποποιία ἀόριστος τῷ χρόνῳ καὶ τούτῳ διαφέρει.

Aristotle's observation became the basis for the rule of unity of time championed by Boileau and Corneille in the seventeenth century, together with unity of action and place.
However, as noted earlier, if we drop Aristotle's definition-which is linked to generic traits-the recounting of past events in the unfolding of a tragedy made it possible to extend the life span considered.
For example, Sophocles' tragedy Οἰδίπους Τύραννος (Oedipus Tyrannus) begins when the king-the better part of his life behind him-consults the oracle in order to determine the cause of the plague raging in Thebes. The oracle, however, entails the successive recapitulation of all the major events that marked his life since birth. The necessity of his fate then becomes inescapable, up to the point when his wife/mother commits suicide and he blinds himself. We examine this tragedy as an example of the genre in a later section.
The use of the three-unities rule was not rejected until the nineteenth century, in Victor Hugo's preface to his play Cromwell (1828). Here is what he writes about the unity of time (p. 27):
Unity of time is no more solid than unity of place. Action, forcibly framed in twenty-four hours, is as ridiculous as if it were framed in the entrance. Every action has its own duration just as it has its specific place. Pouring the same dose of time into all events!
Applying the same measure to everything! One would laugh at a cobbler who wanted to fit the same shoe to all feet. Crossing unity of time with unity of place, like bars on a cage, in which-pedantically, following Aristotle-one introduces all the facts, all the peoples, all the figures that providence bestows in such massive amounts in reality! That's to mutilate men and things, that's to make history wince.92

In fact, he calls his work a "drama," not a "tragedy," highlighting a sub-genre of Greek tragedy that already existed in antiquity as τὸ σατυρικόν δρᾶμα (satirical drama). Oddly enough, though, the drama complies with unity of time, since it takes place on June 26, 1657, from three in the morning to noon. However, Hugo's later dramas and those of the other Romantics departed from the rule.
The twentieth century saw a revival of ancient tragedy in different forms: Eugene O'Neill's trilogy Mourning Becomes Electra (1931); as tragicomedy or dramatic comedy with La guerre de Troie n'aura pas lieu by Jean Giraudoux (1935); as drama with Les mouches by Jean-Paul Sartre (1943); or as dark drama with Antigone by Jean Anouilh, performed in 1944 and published in 1946. We may conclude that Greek tragedy can keep renewing itself and will never die.
Let us now turn to the new literary genre that emerged in medieval Europe, evolved during the Renaissance to take root across the centuries, and was given different names from one country and period to another. In English, it was first known as "romance" in the Middle Ages and became the "novel" at the end of the seventeenth century [START_REF] Millet | Novel et Romance. Histoire d'un chassé-croisé générique[END_REF][START_REF] Lee | The meaning of romance: rethinking early modern fiction[END_REF]. In French, German, and other languages, it has kept its name roman since the Middle Ages [START_REF] Lee | The meaning of romance: rethinking early modern fiction[END_REF]. As observed earlier, the romance or novel took on its particular characteristics with respect to the epic and developed its own way of thinking.
The words romance (English) and roman (French) derive from the Latin adverb romanice, meaning "in the manner of the Romans," i.e., in accordance with the spoken language, not the written language. The English novel derives from the old French nouvel, meaning "new, young, fresh, recent," i.e., a new form of literature. It should be noted, however, that a nouvelle in French is a sub-genre of the novel, being a short, dense story. In English, by contrast, romance has become a sub-genre of the novel.
If we consider the epic and the novel on the basis of the presence of reproducible generic traits and processes, we have seen that we could not distinguish between the two, for both told individual life stories. Yet, despite the many sub-genres that have appeared since the Middle Ages, Vinclair's detailed study of the novel leads him (2015, p. 348) to define it as seeking: In the seventeenth century, an abundant series of romances on the life of Henri de Joyeuse (1563-1608) fueled the genre still known as romans dévots (devotion romances), all written by clergymen. Their subject was the biography of an important member of a then famous family. Henri was the brother of Anne de Joyeuse, who played a key role during the reign of Henri III of France. Later we describe in detail how he became a full-fledged novelistic character in the seventeenth century.
The novel reached maturity in the late eighteenth century and its apogee in the nineteenth. As an example, we take Leo Tolstoy's (1828-1910) Война и миръ (War and Peace) (1868-1869), which, strictly speaking, represents a sub-genre-the epic novel-for it combines two overlying strata of narrative (Vinclair, 2015, p. 337): a novelistic stratum that recounts, as an experience of consciousness, how the characters "continue to lead their normal lives"; and a theological-historical stratum, which relates the progress of History, which cannot be controlled either by great men, who are its instruments, or-a fortiori-ordinary people, who are its spectators.94

94 French text: un plan romanesque qui raconte, sur le mode de l'expérience de la conscience, la manière dont les personnages « continuent à mener leur vie normale » ; et un plan théologico-historique qui raconte la progression de cette Histoire sur laquelle n'ont prise ni les grands hommes, qui en sont les instruments, ni a fortiori les hommes du commun, qui en sont les spectateurs.

Individuals are portrayed as unwitting instruments serving historical and social purposes. Although the novel has more than 550 characters, it focuses on the life stories of the three main characters: Natasha Rostova, Andrei Bolkonsky, and Pierre Bezukhov, from 1805 to 1820. Of the many studies devoted to it, we refer the reader to the article by Lee Trepanier (2011), which discusses a number of these analyses and weighs their merits.
In the twentieth and early twenty-first centuries, authors' interest in the genre was unabated, and new sub-genres developed such as science fiction, novels inspired by the Letterist movement, crime novels, and parodic novels.
We shall not dwell here on the other literary genres, which resort far less to biographical narratives, the category of relevance to our study. Aristotle's Poetics was supposed to contain a section on the theory of comedy, but it has not survived. However, in his discussion of tragedy, we find a definition of comedy (1449a):
Comedy, as we have said, is a representation of inferior people, not indeed in the full sense of the word bad, but the laughable is a species of the base or ugly. It consists in some blunder or ugliness that does not cause pain or disaster, an obvious example being the comic mask which is ugly and distorted but not painful.95

Clearly, his definition does not involve a life story, but merely a representation of "inferior" individuals. Similarly, de Guardia (2004) offers an in-depth analysis of the possibility of comedy as opposed to tragedy, and observes (p. 131):
Molière gave up the primacy of plot and shifted the focus of comedy to characters. The goal was no longer to tell a story that would cause laughter, but to describe people in a way that would cause laughter.96

The life story told in a comedy is thus purely conventional, and hence of little relevance to our study. Likewise, lyric poetry, with its exclusive focus on feelings-as Batteux had noted in 1746-has no use for life stories.
As for fables, tales, and legends, they are too condensed and imaginary to describe a detailed life story. However, as tales shape our minds from our earliest childhood, we may view them as equal to myths in their ability to impart meaning to our lives [START_REF] Bettelheim | The uses of enchantment. The meaning and importance of fairy tales[END_REF]. Tales offer us simple but profound images, which we can incorporate into our existence, rather than complex life stories as epics do (von Franz, 1970). We therefore exclude them from our analysis.
Let us now take a closer look at examples of the various kinds of life stories recounted in the major literary genres considered here.
The Epic of Gilgamesh
The Epic of Gilgamesh is more than 1,000 years older than Homer's epics and the Mahābhārata, making it the oldest known literary work along with the Atrahasis Epic [START_REF] Frimer-Kensky | The Atrahasis epic and its significance for our understanding of Genesis 1-9[END_REF], the equivalent of the Biblical Genesis. The Sumerian list of kings (ca. 2000 B.C.E.) tells us that Gilgamesh was the fifth ruler of the city of Uruk in Mesopotamia, after the flood. He reigned from ca. 2461 to 2400 B.C.E., according to Gertoux's estimate (2016, p. 61), which he obtained by synchronization with astronomical data and events occurring among other peoples (Sumerians and Egyptians).
There are many versions of the epic in Sumerian and Akkadian (see §3.1.1 for more details on Akkadian) spanning a period of about 2,000 years, but all are incomplete. By comparing them, one can reconstruct two main versions: an ancient Babylonian version, with many gaps, and a fuller standard version, translated into English by [START_REF] George | The Epic of Gilgamesh[END_REF] and into French by [START_REF] Bottéro | L'épopée de Gilgameš[END_REF], together with older texts. The text then vanished in ca. 250 B.C.E., to be rediscovered in the mid-nineteenth century on Assyrian tablets that could now be translated. Despite its many variants, the text of the epic displays great unity from its oldest version to its most recent version.
Through the fictional life story of Gilgamesh, the Mesopotamians-in Bottéro's words (1992, p. 294)-infused it with:
[. . .] the reflected image of their way of life and way of thinking, their culture, their desires, their problems, their values, and their limits, everything that informed their existence and gave it meaning, and-even beyond that-their more universally human reactions to the great issues of our fate [. . .]97
That is indeed what characterizes epic as a literary genre: the establishment of new values.
Although we cannot be certain of this, given the lack of earlier texts, the Epic of Gilgamesh would appear to reflect the suppression of early Mesopotamian matriarchal societies [START_REF] Gange | Avant les dieux, la mère universelle. Monte-Carlo[END_REF] by a new patriarchal hegemonic order represented by its hero's superhuman power. Gilgamesh is portrayed from the outset, because of his birth, as two-thirds God and one-third man-as the builder of the ramparts of Uruk to defend his city from attackers and as the mighty commander of his troops. He also acts as master of fertility by taking an adolescent girl away from her mother and her betrothed, exercising his right to possess her on her wedding night. Gilgamesh thus performs the three functions outlined by [START_REF] Dumézil | Mythe et épopée I, L'idéologie des trois fonctions dans les épopées des peuples Indo-Européens[END_REF]: sovereignty, military force, and fertility. However, his excesses against women lead them to ask the Mother-Goddess Aruru, who trained him, to calm his ardor. This marks the revival of the power of matriarchal societies, which Gilgamesh is attempting to suppress. Aruru raises Enkidu in the steppe, endowing him with a force equal to that of Gilgamesh. Enkidu, who resembles a god of wild beasts, is the opposite of the wholly civilized Gilgamesh, who must humanize him with the aid of a courtesan in order to bring him to Uruk, where he intends to fight him. When Enkidu arrives in Uruk, Gilgamesh takes part in a wedding where he must possess the bride before the spouse. Enkidu blocks the door to Gilgamesh and starts to fight him. The combat, however, ends with Gilgamesh acknowledging Enkidu's force, and the women's complaint remains unresolved.
The two now inseparable companions set out to conquer the Cedar Forest in order to find the timber lacking in the Uruk region. But the forest is guarded by Humbaba, a superhuman monster placed there by Enlil, the sovereign of the gods. Before departing on their dangerous expedition, the two companions seek advice from Ninsun, Gilgamesh's mother-goddess and lady of the wild cows. Once again, the epic tells of an intervention by an all-powerful goddess who warns her son of all the dangers of this risky adventure-warnings that he largely ignores.
After a long journey, the two companions battle Humbaba, behead him, and return to Uruk with their load of felled trees. On their return, Ishtar-chief goddess of Uruk, goddess of love, fertility, and war-asks Gilgamesh to marry her. This hierogamy (sacred marriage) would have legitimized his crown, as his father Lugulbanda had by marrying the goddess Ninsun. But Gilgamesh rejects her offer, noting that Ishtar's previous unions had brought misfortune to her consorts. As Lanoue points out (2016, p. 566):
However, a marriage with Ishtar would not have settled the succession issue, for their children would have received an ambiguous inheritance from their mother. Ishtar is a goddess of sex but also of war and death-two sides of the same coin presented in a metonymic syntagm: sex increases population, while war reduces it.98

Moreover, to assert the power of his sex, Gilgamesh would be ill advised to wed a powerful goddess. Yet again, therefore, his opposition to the matriarchate is what causes him to reject a union with Ishtar.
As a goddess, Ishtar complains bitterly to the supreme god Anu, asking him to create the Celestial Bull, who will lay waste the entire region of Uruk for her. Anu grants her request, and the bull descends to Uruk and begins his destruction. But Gilgamesh and Enkidu, on the strength of their success in the Cedar Forest, attack him and slay him. Since Gilgamesh's mother is the "lady of the wild cows," we realize that he has thus slain his own maternal root (Fognini, 2008, p. 53).
Unfortunately this dual attack on the gods spells death for Enkidu, who endures a slow, twelve-day agony in which he relives and reinterprets his life, even questioning Gilgamesh's friendship.
Enkidu's death leads Gilgamesh to embark on a long quest to understand the meaning of human life. He encounters the immortal ancestor, Uta-napishti-foreshadowing the Biblical Noah-who survived the flood with his family and all the animals that he had placed in his boat. After the detailed account of the flood, Uta-napishti asks Gilgamesh to spend seven sleepless days and nights in order to achieve immortality. On the very first day, however, Gilgamesh falls into a sleep that lasts the following seven nights and days. He thus cannot escape death. But just as Gilgamesh is leaving them, Uta-napishti's wife convinces her husband to let Gilgamesh reach the plant that prolongs life. Gilgamesh promptly seizes it, but on his way back to Uruk, he decides to bathe in a pool of fresh water and a snake steals the plant.
All he can do now is return to his city, which he administers until his death. This conclusion repeats the description in the first tablet exactly. It is only after his death that he is finally deified as the sovereign and judge of the dead in hell. This epic, which has remained very popular for nearly twenty centuries, thus marks the advent of a patriarchal culture in opposition to the matriarchy that preceded it. Even the female goddesses who prevailed before this revolution were masculinized: the goddess Aruru was replaced by male divine entities, and the great gods of local pantheons were invoked as being endowed with her capabilities [START_REF] Frank | Le fuseau et la quenouille. Personnalités divines et humaines participant à la naissance de l'homme et à sa destinée en Mésopotamie ancienne[END_REF].
There are many similarities between this epic and the later epics of Homer and the Mahābhārata. A recent work on these topics may interest readers seeking further information (Geler, 2014).
Sophocles' tragedy: Oedipus Tyrannus
The Greek tragic genre appeared in the late sixth century B.C.E., when epic-as Homer conceived it-ceased to be in step with Athenian political society. Having discussed Aeschylus' tragedies ( §2.3.1), we now turn to Sophocles' tragedy: Oedipus Tyrannus.
Our first question is: what else do we know about its hero, Oedipus? He is said to have lived in Thebes (Greece) around the thirteenth century B.C.E., i.e., more than a millennium after Gilgamesh. We have little information on earliest Greek antiquity. Linear B script was not deciphered until 1953, thanks to the linguist Michael Ventris. But very few of the surviving Linear B tablets contain information on the rulers of the time.
For slightly more information on the reign of Oedipus, we must look at Homer's epics. In the Odyssey (11, 271-280), we find a short passage on Oedipus:

I also saw the lovely Epicástë, mother of Oedipus; unknowingly, she'd shared in a monstrosity: she married her own son. And she wed him after he had killed his father. But the gods did not wait long to let men know what had been wrought. Yet since they had devised dark misery, the gods let him remain in handsome Thebes; and there, despite his dismal sufferings, he stayed with the Cadméans as their king. But she went down into the house where Hades is sturdy guardian of the gates; for she, gripped by her grief, had tied to a high beam her noose. But when she died, she left behind calamities for Oedipus-as many as the Avengers of a mother carry.99

99 Greek text: μητέρα τ᾽ Οἰδιπόδαο ἴδον, καλὴν Ἐπικάστην, ἣ μέγα ἔργον ἔρεξεν ἀιδρείῃσι νόοιο γημαμένη ᾧ υἷι: ὁ δ᾽ ὃν πατέρ᾽ ἐξεναρίξας γῆμεν: ἄφαρ δ᾽ ἀνάπυστα θεοὶ θέσαν ἀνθρώποισιν. ἀλλ᾽ ὁ μὲν ἐν Θήβῃ πολυηράτῳ ἄλγεα πάσχων Καδμείων ἤνασσε θεῶν ὀλοὰς διὰ βουλάς: ἡ δ᾽ ἔβη εἰς Ἀίδαο πυλάρταο κρατεροῖο, ἁψαμένη βρόχον αἰπὺν ἀφ᾽ ὑψηλοῖο μελάθρου, ᾧ ἄχεϊ σχομένη: τῷ δ᾽ ἄλγεα κάλλιπ᾽ ὀπίσσω πολλὰ μάλ᾽, ὅσσα τε μητρὸς Ἐρινύες ἐκτελέουσιν. English translation by A. Mandelbaum.
In this outline of the life of Epicaste (Jocasta), Oedipus' mother and wife, Homer does not mention their four children, for Epicaste's death occurs here just after her marriage, nor does he speak of the fact that Oedipus left the palace after blinding himself. In Homer's account, he continues to reign over the Cadmeans. Other versions say that, after Epicaste's death, Oedipus had two other wives who bore him his children [START_REF] Lacore | Traces homériques et hésiodiques du mythe d'OEdipe[END_REF].
Sophocles' tragedy dates from 429 B.C.E., in the period when Greek freedom was flourishing (see §2.3.1). We must therefore view the play in the context of Greek society of its time. As noted previously, the democratic freedom of Greek citizens came up against the belief in an inescapable fate, partly represented by Hellenic astrology (see §3.1.2). Being informed by the concept of necessity, tragedy served to show how an individual-thanks to freedom-could fight against the higher power of his or her fate.
Because of the need to concentrate on a short period of time, the tragedy unfolds at the point when Oedipus is about to lose power. However, by inserting timely reminders of past events, Sophocles reconstructs a fuller life story leading up to the present situation. As the tragedy is well known and has already been the subject of many studies, we shall simply focus on certain aspects of Oedipus' ambiguous life story, which is revealed during the entire course of the play.
At his birth, his parents, Laius and Epicaste, are reigning over Thebes. They consult the oracle of Delphi, who predicts that their son will kill his father. The parents decide to eliminate him and Epicaste asks a slave to leave him out in the open, which-in Greece-would entail his death. But the slave does not carry out the order and hands the boy to a shepherd, who takes him to Corinth and entrusts him to the local monarchs, Polybus and Merope. They adopt Oedipus and raise him as their son. When he reaches adulthood, however, a rumor claims that he was adopted.
Oedipus decides to consult the oracle of Delphi to find out the truth. But the oracle does not answer his question and tells him that he will slay his father and marry his mother. To avoid this, he leaves Corinth for Thebes. On the way, he meets an old man. They quarrel, and Oedipus kills him in self-defense without knowing that the man is his true father.
Oedipus arrives in Thebes, a city tormented by the presence of the Sphinx, who is devouring its inhabitants and visitors because they cannot answer his question (not revealed in the tragedy). Oedipus gives the right answer, freeing the city.
As a reward for his action, the city asks him to rule it and forces him to marry its queen, Epicaste. Oedipus reigns for about twenty years, and the couple have four children.
The play actually begins at this point, with the outbreak of an epidemic whose cause Oedipus wants to discover-a search that will lead him to reveal his own fate. This triggers a reversal, turning him from superhuman to subhuman [START_REF] Vernant | Mythe et tragédie en Grèce ancienne -1[END_REF]. When he puts his eyes out so as not to see the world any more, because he cannot bear it, he clearly states (Sophocles, 1330-1335):
It was Apollo, friends, Apollo who brought these troubles to pass, these terrible, terrible troubles. But the hand that struck my eyes was none other than my own, wretched that I am! Why should I see, when sight showed me nothing sweet? 100 Oedipus therefore distinguishes between the divine causality brought on by Apollo and his human action due to his misfortune. It is hard not to link this event to the death of Socrates (described earlier in §2.3.1) as a conflict leading him to accept the judges' sentence. Schelling (1914, p. 85) shows the implications for Sophocles' play:
It is by letting its hero battle the higher power of fate that Greek tragedy honored human freedom. So as not to transgress the barriers of art, tragedy had to ensure that he succumbed; but, to make up for this humiliation of human freedom torn away by art, it was also necessary-and this, also for the crime committed by fate-that he should undergo the punishment.101

100 Greek text: Ἀπόλλων τάδ᾽ ἦν, Ἀπόλλων, φίλοι, ὁ κακὰ κακὰ τελῶν ἐμὰ τάδ᾽ ἐμὰ πάθεα. ἔπαισε δ᾽ αὐτόχειρ νιν οὔτις, ἀλλ᾽ ἐγὼ τλάμων. τί γὰρ ἔδει μ᾽ ὁρᾶν, ὅτῳ γ᾽ ὁρῶντι μηδὲν ἦν ἰδεῖν γλυκύ.

101 German text: Die griechische Tragödie ehrte menschliche Freiheit dadurch, daß sie ihren Helden gegen die Uebermacht des Schicksals kämpfen ließ: um nicht über die Schranken der Kunst zu springen, mußte sie ihn unterliegen, aber, um auch diese, durch die Kunst abgedrungne, Demüthigung menschlicher Freiheit wieder gut zu machen, mußte sie ihn auch für das durch's Schicksal begangne Verbrechen büßen lassen. So lange er noch frei ist, hält er sich gegen die Macht des Verhängnisses aufrecht.

Schelling wrote this text in 1795 in a series of "philosophical letters," but it was not published in his lifetime. Other interpretations have been offered for Oedipus' reaction: Vernant and Vidal-Naquet (1972, p. 70), for example, do not see an opposition but, on the contrary, a union. We find Schelling's interpretation closer to Sophocles' likely intent, although it is impossible for us today to put ourselves in his state of mind or to surmise exactly what he wanted to convey to his audience.
Romances about the life of Henri de Joyeuse
For the lives of Gilgamesh and Oedipus, we lack contemporary evidence. All we have is an epic or a tragedy, written well after their death. In contrast, for the life of Henri de Joyeuse (1563-1608), we have enough documentation to compare with the novels written about his life.
The Duke Henri de Joyeuse was the brother of Anne de Joyeuse, the male favorite of Henri III, king of France. At age 18, he married Catherine de Nogaret de la Valette, who died in 1587 after giving birth to their daughter. His wife's death convinced him to become a Capuchin monk a month later. In 1592, his brother Scipio drowned in the Tarn during the siege of Villemur, and that same year Henri was allowed by his superiors to return to civilian life. First commander of the armies in Languedoc, he became Governor of the province and Marshal of France. The Edict of Nantes restored freedom of religion to French Huguenots in 1598, and Henri rejoined the Capuchins a year later. His life became that of an exemplary penitent and superior, whose sermons in Paris and the provinces ensured his success. At the same time, he was a mystic subject to ecstasies. Despite his weak health, he traveled to Rome in 1608 to attend the general chapter and died on the way back at the Convent of Rivoli in Italy. His remains were returned to Paris in a triumphal procession and buried in the church of the Feuillants Convent. This life story was reconstructed from several historical sources, most notably Raynal's Histoire de la ville de Toulouse [. . .] (1759, pp. 311-321).
Henri's life was thus marked by sharp contrasts-at once a courtier and intimate of Kings Henri III and Henri IV living in luxury in Toulouse and Paris, and a Capuchin monk whose fiery sermons and apostolic ardor drove him to warm up at a crossroads with beggars [START_REF] Brousse | La vie du révérend père, P. Ange de Joyeuse, prédicateur capucin, autrefois duc, pair et maréchal de France, & Gouverneur pour le Roy en Languedoc[END_REF].
After his death, he inspired what were known as "devout romances" (romans dévots) intended to promote a new form of salvation in Catholicism. The term references Vinclair's more general definition of the novel (2015, p. 248).
The first of these romances (du Lisdam, 1619) is devoted not to the life of Henri de Joyeuse, but to the conversion of Leopolda and Lindarche to religion, inspired by several examples including the life of the courtier-priest. While the text never mentions Henri by name, the account of his life was explicit enough for everyone to recognize him less than twenty years after his death. The narrative, however, does not try to conceal his "debauchery" and "worldly licentiousness" before his second retreat (p. 599):
It is said that he committed a thousand debaucheries, but that when he was thought to have wholeheartedly embraced worldly licentiousness, he ran to the fathers who had given him the monk's habit. He persevered with such constancy that his death was the end of his penance.102

The reader will notice that the use of the phrase "it is said" suffices to exonerate Henri from his excesses, all the more so as he returned to religious life after this six-year interval.
The second romance, by Jacques Brousse (1621), was published three years later. Here, Henri is clearly named and the work is dedicated to his only daughter. The episode of his return to secular life is now barely mentioned and the Duke is never described as a Marshal of France but always as a man of peace, even though he fought the Huguenots fiercely during his spell as a layman.
A third romance, by Philippe d'Angoumois, was published in 1625 under the title Les triomphes de l'amour de dieu en la conversion d'Hermogene (The triumphs of the love of God in the conversion of Hermogenes). The narrative incorporates an extensive account of Henri de Joyeuse's career as a model for the spiritual ascension of the former courtier Hermogenes. The author contrasts the Duke's life of luxury before his conversion with the rigors of his Capuchin life. There is no mention of his return to military and worldly life for six years. This omission erases an important part of his life in order to focus on his monastic career.
For a fuller description of these fictionalized biographies told by Catholic clergymen, we refer the reader to Nancy Oddo's account (2000, 2008) of their invention in the seventeenth century. Here, we should like to contrast them with d'Aubigné's romance Les aventures du baron de Faeneste (Faeneste IV), whose author set out to combat Catholic preaching methods.
The early seventeenth century was marked by the incessant religious controversy between Catholics and Protestants. In d'Aubigné's romance, Ange de Joyeuse is repeatedly cited for his sermons, which the author openly criticizes. While his claims seem too burlesque to be real, some (p. 799) are confirmed by the diary of his contemporary Pierre de l'Estoile (p. 598):
At that point the great preacher rolled his eyes, remained for a long time as if having fainted, and came back to his senses to dwell at length on the pains of the Passion, which he compared to all the pains he could recall, scorning all sorts of fevers and maladies, which he enumerated, and then slight wounds and other ills; then he fainted for the second time and, utterly transported by fury, pulled out of his pocket a rope made like a halter with a noose; he placed it around his neck, stuck out his tongue, and-according to some-would have strangled himself had he pulled really hard; the companions of the lesser observance rushed up and removed the halter rope. The entire vault resounded with the cries of spectators, who had changed their laughter into laments, the comic opening into tragedy, which, however, was a bloodless sacrifice.103

By thus ridiculing Father Ange, d'Aubigné castigated an individual at the same time as a religious order (Fantoni, 2011, p. 279). This shows how the romance could either legitimize a Catholic sentiment or condemn it from a Protestant standpoint.
What role will these imaginary life stories play in our own life?
In this chapter, we have seen that the method for analyzing these imaginary life stories may be "comprehensive" in the sense given by philosophical hermeneutics, for example by [START_REF] Dilthey | Einleitung in die Geisteswissenschaften[END_REF]. This was clearly shown by Szondi (1975) and his followers. Let us recall here some of its principles.
The term "comprehension" is polysemic, and we must begin by examining its multiple meanings. The first is to "comprehend" individual behavior, i.e., to understand the reasons for a person's acts and the significance he or she assigns to them. This applies perfectly to the writing of an imaginary life story. The process might also consist of arriving at a more general "comprehension" of a set of facts or events, i.e., producing an interpretation of the set in order to show its exact significance. In this case, the facts or events do not concern a particular individual but are more general and apply, for example, to the history of a social group. This second meaning is less relevant to us here, but is of the utmost interest to the historian.
We have discussed this "comprehensive" approach for imaginary lives (epics, tragedies, comedies, novels, and other genres), showing how it enables us to extract their key elements. As Mesure notes (1990, p. 231):
To begin with, it is confirmed that comprehension does consist in taking real-world experiences and building the set that brings them together; from what was a mere sequence, [comprehension] achieves the emergence of what properly constitutes a life, i.e., a totality directed toward an end that imparts meaning to each stage […]104

In this way, the object of the sciences of the mind obtains a deeper justification, and simultaneously its philosophical foundation.
In the introduction to this chapter, we noted the importance of these imaginary lives in our own life. Do the selected lives that we have presented in greater detail enable us to better identify the reasons for this importance?
Will they take us to the confines of the unfettered imagination of the inventors of these lives-albeit so different from ours-and simply entertain us?
If this hypothesis has some basis in fact, it is because our own life is so dull that we seek diversion through reading. Ettore Scola's film A Special Day shows us the life of a mother under Mussolini's fascist regime: she is neglected by her husband and confined to the repetitive chores of managing her many children and her household. While her husband and children go off to a large fascist celebration, she encounters by chance another outcast of the regime, a homosexual, who reveals another universe to her. Before his arrest by the police of the regime-which is ruthless toward those who do not think as it does-the man leaves her Alexandre Dumas' novel The Three Musketeers. The woman, who can barely read, may perhaps be able, with the aid of the novel, to open herself up to a world different from the regime in which she is held captive. This possibility is suggested by the film's ending, where she begins to decipher the book in secret. In this case, however, we discover another aspect of imaginary life stories: the change of course in the woman's life can go much further if she accepts the opening toward another person's life that the novel will offer her.
Actually, these imaginary life stories were not written for mere entertainment. Their substance is far deeper. To begin with, epics have served as foundations for civilizations for thousands of years. The Epic of Gilgamesh was the basis for the ancient Assyrian regimes, and its rediscovery in our time opens our minds to another way of thinking about our own civilization. Sophocles' tragedies nourished the Athenian regime in the fifth century B.C.E. by placing human freedom above the inescapable fate of the individual. Today, they still provide food for thought, as contemporary performances perfectly demonstrate. The romances on the life of Henri de Joyeuse give us insights on the role of the Christian religion-a role that persists to this day. All the other examples that we could have discussed contain important reflections on the society in which they were created, and now help us to better understand the society in which we live.
Reading these life stories-beyond the narratives of their author or authors-will therefore nourish our existence by offering another possible world that is essential to our lives, for those lives remain imprisoned in a regime whose degree of freedom can vary. The following chapter looks at these actually lived lives and shows the limits to our understanding of them.
Chapter 7 Real life stories to celebrate or to study humans
This chapter will examine how human memory develops with the aim of capturing life stories and what it records of them; we then explore how these processes are analyzed by the social sciences. We discuss the two main approaches: philosophical hermeneutics and the scientific approach to social science advocated by Bacon in 1620 and implemented by Graunt in 1662.
However, the non-fictionalized accounts of real individuals' lives will also be examined. Such texts have existed at least since the fifth century B.C.E. Skylax of Caryanda reportedly wrote a life of Heraclides, tyrant of Mylasa, in ca. 480 B.C.E. Although no text survives, it is cited in the Suda, a tenth-century Byzantine encyclopedia of the ancient Mediterranean world, under the title "The story of the tyrant (or king) Heraclides of Mylasa"105 (Momigliano, 1971, p. 29).
Several kinds of life stories later featured in human history, such as biographies of illustrious figures beginning in antiquity, individual journals (livres de raison) from the Middle Ages, private notebooks and diaries, and memoirs. This approach to life stories-both imaginary and real-should be viewed in connection with philosophical hermeneutics, which originated with Schleiermacher and Dilthey, and gained substance in the twentieth century with Heidegger, Ricoeur, and others.
By the seventeenth century, however, a more scientific approach to life stories emerged. It was linked to the beginnings of population and probability sciences, and it developed further in later centuries. This chapter takes a closer look at these narratives-whose detailed structure varies substantially from one author and one period to another-and the advent of scientific analysis of life stories.
Life stories to celebrate humans
The imagined life and the real life of Brother Ange de Joyeuse, which we analyzed in the previous chapter, could be similar yet very different according to the source (romances or historical documents). In this section, we show how these life stories or biographical narratives-viewed here as the closest reflection of the lives of the persons concerned-nevertheless select the events regarded as worthy of recall.
While the biographical genre emerged in earliest antiquity, it encompasses highly varied texts, described in greater detail below. We shall examine how the social sciences appropriated them as a research method.
In Alexandre Gefen's words (2004, p. 60):
A deceptively simple genre, biography stands at the crossroads of human life. It offers a field where the paradigms and skills deployed by social science, profane and sacred spirituality, and the symbolic forms specific to literature are confronted with one another.106
It is indeed impossible to transcribe all the instants of an individual life, and the events selected for inclusion in a life story are therefore subject to the biographer's somewhat arbitrary choice. Similarly, the choice of individuals as subjects of a biography is far from arbitrary and must be examined as well. We begin by discussing the method used for such choices.
There are three broad periods in which new forms of life stories appeared. The first is Greek and Roman antiquity, which saw the creation of life stories of which Arnaldo Momigliano's The development of Greek biography (1971) offers an interesting but partial account, since he deals mostly with ancient Greece. The biographies were those of rulers, leading philosophers, or poets. The second period, running up to the early twentieth century, saw not only the persistence of earlier forms of biography but also the development of life stories of less prominent persons who sought to leave a trace through their biographies. From the seventeenth century to the twentieth, many social sciences introduced full-fledged research methods, some of them more recently closely tied to the "event-history" (i.e., biographical) approach.
To conclude, we show that biography is inherently transdisciplinary, for it pertains to all the social sciences, serving as a tool to understand human societies. Consequently, it belongs to no single one of those disciplines.
Life stories in antiquity
As noted in the introduction to this chapter, the life-story genre goes back to at least the fifth century B.C.E. How much of the complexity of an individual life did it capture, and in what form? First, which broad categories of persons did ancient biographies cover, and what types of events did they relate? It is also important to distinguish autobiographies from biographies, before attempting to define a more general paradigm.
Ancient biographies concerned noteworthy figures: political leaders, philosophers whose acts and ideas influenced many other persons, and-starting in ancient Rome-poets and aristocrats. Most of these exceptional individuals made their mark on the history of their city or country.
The biographies were written by authors who frequently lived well after the life stories they recount. As a result, their narratives are less reliable, although they attempted to gather as much information as possible on their subject. It is therefore better to rely on autobiographies, even though their authors may embellish their actions when recounting them. An early and celebrated example is the Seventh Letter, which Plato is thought to have written to Dion's friends and companions after Dion's death. We use the phrase "thought to have" for many scholars have questioned the letter's authenticity, arguing that it was written by one of Plato's assistants ten or twenty years after Dion's death. The latest of these claims dates from 2015. Put forward by Burnyeat and Frede, it has been criticized by many commentators including [START_REF] Kahn | The pseudo-platonic seventh letter[END_REF]. While a discussion of its merits lies outside the scope of our study, we should point out that most commentators recognize Plato's style and thought in the text. For example, Momigliano (1971, pp. 60-62), after carefully examining the arguments presented, admits that the text is indeed an autobiography and not a biography by one of Plato's disciples. Similarly, an in-depth computer analysis of Plato's style led [START_REF] Ledger | Re-counting Plato: A computer analysis of Plato's style[END_REF] to conclude that the letter was the work of Plato himself. We find these arguments sufficiently convincing to view the text as autobiographical.
Plato not only gives us a detailed account of his three stays in Sicily between 388 and 361 B.C.E., but he also tells us about his youth and his hopes of setting up a truly republican government in Athens. In 404 B.C.E., the city was under the tyrannical rule of the Thirty, who had replaced the earlier Athenian democracy. Although democracy was restored a year later-albeit with a general amnesty-Socrates' death sentence in 399 B.C.E. caused Plato to question the Athenian regime. He wrote (Seventh Letter, 325c-d):
When, therefore, I considered all this, and the type of men who were administering the affairs of State, with their laws too and their customs, the more I considered them and the more I advanced in years myself, the more difficult appeared to me the task of managing affairs of State rightly. For neither Dion nor any other will ever voluntarily aim thus at a power that would bring upon himself and his race an everlasting curse, but rather at a moderate government and the establishment of the justest and best of laws by means of the fewest possible exiles and executions. Yet when Dion was now pursuing this course, resolved to suffer rather than to do unholy deeds-although guarding himself against so suffering-none the less when he had attained the highest pitch of superiority over his foes he stumbled. And therein he suffered no surprising fate. 108 Dion was unfortunately assassinated in 354 B.C.E., making all of Plato's efforts vain.
We have described Plato's letter in some detail, for it clearly illustrates his goal in writing this autobiography. He spells out his plan, explains the reasons, and shows his failure to implement it. By contrast, he barely mentions his repeated journeys between Athens and Syracuse, which were anything but plain sailing. For example, on his first return to Athens in 387 B.C.E., he was forced onto a Spartan boat whose crew, having stopped at Aegina-then at war with Athens-put him up for sale as a slave. Luckily, Anniceris of Cyrene, who knew him personally, bought him for twenty or thirty minas and sent him home to Athens (Diogenes Laertius, III, 20). Thus Plato did not set sail as an adventurer, but in order to have his political and philosophical theories put into practice.
108 Greek text: οὕτω μὲν γὰρ οὔτε δίων οὔτε ἄλλος ποτὲ οὐδεὶς ἐπὶ δύναμιν ἑκὼν εἶσιν ἀλιτηριώδη ἑαυτῷ τε καὶ γένει εἰς τὸν ἀεὶ χρόνον, ἐπὶ πολιτείαν δὲ καὶ νόμων κατασκευὴν τῶν δικαιοτάτων τε καὶ ἀρίστων, οὔ τι δι᾽ ὀλιγίστων θανάτων καὶ φόνων γιγνομένην: ἃ δὴ δίων νῦν πράττων, προτιμήσας τὸ πάσχειν ἀνόσια τοῦ δρᾶσαι πρότερον, διευλαβούμενος δὲ μὴ παθεῖν, ὅμως ἔπταισεν ἐπ᾽ ἄκρον ἐλθὼν τοῦ περιγενέσθαι τῶν.
Later, the Romans too produced excellent autobiographies, such as Julius Caesar's Commentaries: Commentarii de bello Gallico (ca. 51 B.C.E.) and Commentarii de bello civili (ca. 46 B.C.E.). We do not know for certain whether these texts were written day by day (or rather year by year), or, instead, at the end of the two conflicts in question. However, we can characterize them as a self-celebration rather than an autobiography covering all aspects of his life-whether military, political, or literary. This self-praise is cleverly orchestrated, for a large number of living witnesses could have contradicted his assertions. This explains the omission of events such as the crossing of the Rubicon, the river separating his province of Cisalpine Gaul from Roman Italy, which he did not rule. Julius Caesar's move sparked a civil war in 49 B.C.E. He had thus put himself in an illegal situation, for no general was allowed to cross the river with an army, but he makes no mention of this in his commentaries. Suetonius (I, 32), in his Lives of the Caesars, credited him-as did many later authors-with the famous phrase Alea jacta est (The die is cast) when he crossed the Rubicon. For more details on the historical distortion of Caesar's life, we recommend [START_REF] Rambaud | L'art de la déformation historique dans les commentaires de César[END_REF].
We conclude this overview of ancient biographies and autobiographies with Plutarch's Parallel Lives (ca. 100-110). The comparison between the lives of a Greek and a Roman, selected among the 23 pairs covered by Plutarch,109 shows that one can take a set of illustrious men of Greece and Rome and match individuals of the same weight-even in regard to vice-against one another (for example, Theseus versus Romulus, Alexander the Great versus Caesar, Demosthenes versus Cicero). Plutarch's biographies stand in contrast to those of many of his predecessors, as he explains in his preface to the life of Alexander (Plutarch, Alexander, 1, 2):
For it is not Histories that I am writing, but Lives; and in the most illustrious deeds there is not always a manifestation of virtue or vice, nay, a slight thing like a phrase or a jest often makes a greater revelation of character than battles where thousands fall, or the greatest armaments, or sieges of cities.110
His biographies describe the different facets of his subjects' personalities without dwelling on their actions.
From the Edict of Milan to the early twentieth century
When the cult of Christian martyrs was legitimized by the Edict of Milan in 313, hagiographies-i.e., the lives of saints-began to circulate in many regions of the world. As Saintyves noted (1907), these narratives merely followed the model of the lives of pagan gods. Because they highlight the saints' miracles, these stereotyped accounts have more in common with imaginary lives than with actual biographies and are of little value for studying real lives, so we shall not discuss them further.
The war memoirs of Xenophon and Caesar gave way to memoirs by persons who led more peaceful but no less illustrious lives, in which they recorded contemporary events that they witnessed or in which they took part. Their goal was to recount not their own lives but the events they experienced. Such narratives are thus less relevant for our purposes, even though the events may have affected the personalities or life stories of the people involved. The perfect example is the Mémoires-Journaux de Pierre de l'Estoile (1546-1611), a magistrate of the Parlement de Paris, who recorded for nearly thirty years the events that occurred in the reigns of Henri III and Henri IV of France.
Admittedly, biographies and autobiographies of prominent figures continued to be produced, covering an ever greater number of persons in ever more diverse walks of life. Emperors, kings, princes, and nobles were, of course, particularly well represented, as were their ministers, in proportion to the importance of their role. There was also a growing number of biographies or autobiographies not only of philosophers, but of scholars, writers, poets, painters, sculptors, and other noteworthy individuals.
Most significantly, however, persons of lesser social standing began to write their autobiographies and the lives of their families. This genre appeared in most European countries in the thirteenth century, and even earlier in other parts of the world, for example the ta'rikh in Islam in the eleventh century [START_REF] Makdisi | The diary in Islamic historiography[END_REF], and the Murasaki Shikibu Nikki in Japan in the same period.
Although their names vary from country to country (Libri di famiglia in Italy, Livres de raison in France, Diaries in England and the United States, and so on), these documents share many common features.
The Libri di famiglia first appeared in Italy in the early thirteenth century and have been the subject of numerous studies. They often seem linked to Libri amministrativi, which recorded the management of assets [START_REF] Mordenti | Les livres de famille en Italie[END_REF] and reflect the ability of craftsmen, traders, property owners, and legal professionals of that period to take a long-term view. The first known example, written in Calabria, dates from the 1230s [START_REF] Tricard | Les livres de raison français au regard des livres de famille italiens : pour relancer une enquête[END_REF]. It consists largely of a register of births, marriages, and deaths in a family. Soon, however, the Libro di famiglia became a distinct genre from the Libro amministrativo and spread across Italy. Its purpose was now to serve as a book of memory, updated continuously, and focused on the family (Mordenti, 2004, p. 794). The Libri became ever more common in the fifteenth and sixteenth centuries, then disappeared in the late nineteenth century. However, they are far from entirely truthful. Mordenti (2004, p. 795) clearly notes:
Consequently, to grant the information they contain an absolute value of objectivity and conformity to truth could only be a profound mistake. Cross-checks performed with the data handed down-as is the case for some eighteenth-and nineteenth-century books-have shown that the data are often inaccurate. At the very most, the "true" information conveyed by these texts (exactly as with autobiographies) lies in the degree of distortion, in the bias introduced by the writer when preparing the text. Their historical truth therefore lies, above all, in this sort of distorting gaze, which the memorial texts transmit to us and to which they "objectively" bear witness. 111 The Libri di famiglia are therefore the imperfect-but nevertheless highly valuable-witnesses to Italian society from the late Middle Ages to the late nineteenth century. They give us information not only on family life (pregnancies, miscarriages, births, nursing, illnesses, prescriptions and medical care, deaths, and epidemics), but also on family assets (properties, inheritances, dowries, debts and receivables) as well as on the family's broader social life (political offices, honors, education, skills, and trades) and, lastly, on unusual events (catastrophes, celestial signs, astronomical phenomena, prophecies, and dreams).
111 French text: Dès lors accorder à l'information qu'ils contiennent une valeur absolue d'objectivité et de conformité à la vérité ne peut qu'être profondément erroné : des recoupements effectués à partir de données transmises - comme c'est le cas pour certains livres du XVIIIe et XIXe siècle - ont démontré que celles-ci étaient souvent inexactes. Tout au plus l'information « vraie » que ces textes véhiculent (exactement comme pour les autobiographies) est-elle à rechercher dans le degré de déformation, dans le clinamen que le rédacteur introduit lors de sa mise en texte. C'est donc surtout dans cette sorte de regard déformant, que les textes mémoriels nous transmettent et dont ils rendent « objectivement » témoignage, que se situe leur vérité historique.
In France, the Livres de raison appeared somewhat later, as the earliest forms date from the fourteenth century [START_REF] Tricard | Les livres de raison français au regard des livres de famille italiens : pour relancer une enquête[END_REF] (p. 1006). Most were written by notaries, merchants, officers, clergymen, and teachers. Only a minority of Livres de raison were kept by aristocrats or, on the contrary, peasants. They were found all over France, but primarily south of the Geneva-Saint Malo line (Lemaître, 2006, p. 5). Whereas the Italian Libri emphasized family, the French Livres focused on raison (reason). The word derives from the Latin ratio, whose many meanings include balance-sheet, account, method, reasoning, and proof. For example, the Livre de raison long served as proof in court by providing a "reasoned" account of the family's actions and assets. In 1879, de defined it as follows:
The specific character of the Livre de raison, when properly kept, was to offer, in a few lines and in a simple manner, all that constituted the family and the household in moral and material terms. Its pages would record the genealogy of the ancestors, the biography of the parents, births, marriages, and deaths, the main events in the family, the growth of the family assets, i.e., the uses to which savings were put, the inventory of property, and the final advice given to children.112
Like the Libro di famiglia, the Livre de raison therefore ensured the preservation of a certain form of family memory. While there were differences in composition, content, and scope (Tricard, p. 1002), they are not relevant to our study. As in Italy, the Livres disappeared in the late nineteenth century.
In England, the habit of keeping a Diary (or Journal, a less common term) began to spread in the mid-fifteenth century. The oldest known Journal is anonymous and dates from 1442-1443. The author offers a daily chronicle, in Latin, of the main activities of his master Thomas Beckington [START_REF] Bochaca | Un voyage par mer d'Angleterre à Bordeaux et retour en 1442-1443 d'après A Journal by one of the suite of Thomas Beckington[END_REF]. By 1600, diaries had become commonplace among the nobility and bourgeoisie. They would record family mores, marriage, and births [START_REF] Bourcier | Les Journaux privés en Angleterre de 1600 à 1660[END_REF]. Ponsonby (1923, p. 1)
describes diaries in these words:
A diary, that is to say the daily or periodic record of personal experiences and impressions, is of course a very different thing from history, although some of the older diaries have been of great use in furnishing the historian with facts and giving him examples of contemporary opinions.
They were very common from the eighteenth century to the late nineteenth, but-unlike the Italian Libri and French Livres-they are still produced today. The Diary is more centered on its writer's impressions than the Italian and French texts, and has preserved all its appeal. These myriad documents record the events in the lives of individuals, generally from the middle class, whose proportion has risen in all countries.
From the twentieth to the twenty-first century
While Livres de raison and Libri di famiglia were gone by the early twentieth century, all the other forms of biography not only endured but became more diverse. Today, there are websites where members can write their own life stories with the aid of a narrator and print their text. Here are three examples from a very extensive list: https://www.bl.uk/projects/national-life-stories in the United Kingdom, https://lifestoriesaustralia.com.au/ in Australia, and www.entoureo.fr/ and www.leromandemavie.fr/ in France.
At the same time, a more general reflection on the significance of life stories-both imaginary and real-took shape.
As we previously said, philosophical hermeneutics sought to offer a more complex view of life stories by incorporating them into the "sciences of the mind" or "human sciences" (Geisteswissenschaften).
The term "hermeneutics" derives from the Greek ἑρμηνευτική τέχνη, meaning "the art of interpreting," for the god Hermes was the messenger and interpreter of the other gods' orders. Initially devoted to explaining Greek and Latin literary works, it later turned to the study of religious texts [START_REF] Rico | Aux sources de l'herméneutique occidentale: les premiers commentaires dans les traditions grecque, juive et chrétienne[END_REF]. Philosophical hermeneutics emerged in the early twentieth century. Here, we look at the insights it can give us into real life stories. First, Dilthey attempted to establish a philosophy capable of capturing human life. He argued that, since the unity of a physical person must be grasped over the course of the person's life, biography is the core of knowledge. While he did regard human life as unfathomable, he did not believe that the human sciences should be deprived of explanation. Mesure comments (1990, p. 214):
Rather than an "exclusive opposition" between explanation and comprehension, it would be fair to speak here, as Dilthey does explicitly, of a "reciprocal dependence between the two types of approaches." On the one hand, as we have just seen, explanation requires comprehension in order to meet the goal of intelligibility that defines it. On the other hand, and reciprocally, the identification of causal relationships is one of the means of revealing, between the different moments of a process or the different aspects of an era, the interdependence that makes them part of an interactive whole, to which the comprehensive approach then applies.113
Unfortunately, Dilthey never managed to achieve the emergence of a complex of human sciences in which explanation and comprehension would not be dissociated. His last book (1911) offers three "worldviews" (Weltanschauungen) for understanding life: the religious, the poetic, and the metaphysical. We may well ask, however, if these three categories cover all the civilizations that have followed one another over time. Is philosophy itself not a worldview created for and by a specific epoch?
Heidegger conducted his research along the hermeneutical path laid out by Dilthey, as his Cassel Lectures of 1925 show [START_REF] Gens | Heidegger. Les conférences de Cassel (1925) precedées de la correspondence Dilthey-Husserl[END_REF]. By then, however, he was already expressing his divergence from Dilthey, who, for his part, acknowledged the validity of natural sciences. The key point here is Heidegger's rejection of rationalism. In the mid-1920s, he regarded philosophy as something totally different from science, and he soon began to assert that science does not think. He elaborated on this position in many texts [START_REF] Perrin | Une guerre à couteaux tirés. Heidegger et le rationalisme[END_REF]. His 1952 lecture entitled Was Heißt Denken? (What Does Thought Mean?) recalls and expands the argument (p. 158):
Science does not think in the way that thinkers think. But it in no way follows that thought has no need to turn to science. The statement "Science does not think" implies no permission for thought to take its ease by engaging in storytelling.114
Heidegger saw the contemporary scientific approach, initiated by Francis Bacon and Descartes, as a mere technique that does not ask about the essence of things but proceeds in a mechanical manner by counting.
In his wake, Ricoeur, in his three volumes on Temps et récit (Time and narrative) (1983, 1984, 1985), develops philosophical hermeneutic theory by taking up the subjects discussed by Aristotle in the Poetics and by Saint Augustine in his Confessions concerning time. On the two thinkers, Ricoeur observes (Temps et récit. III Le temps raconté, p. 375):
What is surprising here is that Augustine and Aristotle confront each other not only as the first phenomenologist and the first cosmologist, but as men carried by two archaic currents issuing from different sources-the Greek source and the Biblical source-that later mingled their waters in Western thought.115
Ricoeur effectively links philosophical hermeneutics to ancient hermeneutics. Likewise, he considers that imaginary life stories (literary narratives) and real life stories, far from being mutually exclusive, are mutually complementary (Soi-même comme un autre (Oneself as another), 1990, p. 191). This is consistent with our intention to examine one category in the previous chapter and the other in the present chapter.
However, when Ricoeur seeks to reconcile hermeneutics and social science by rejecting the duality between explanation and comprehension, his arguments are not very persuasive. In Temps et récit, I, p. 154, for example, he writes:
Historical demography, i.e., demography in a temporal perspective, presents the biological evolution of humanity regarded as a single mass. At the same time, it identifies world rhythms of population that place longue durée [the long term] on a semi-millennial scale and challenge the periodization of traditional history.116
Rather than showing a reconciliation between historical demography and hermeneutics, this quotation perfectly demonstrates the incompatibility between the two approaches: the first examines humanity as a single mass, whereas the second examines humans as individuals.
Life stories to study humankind
In contrast, a more scientific approach to the same events emerged in the seventeenth century, driven by [START_REF] Bacon | Novum Organon[END_REF]. At the start of Chapter 5, we noted his inductive approach, which began with observation and led up to a true scientific analysis. The application of this approach to life stories totally changed the way they were measured and analyzed over time. 117 The starting point, however, was the examination of very few elements of the life story by John Graunt (1620-1674).
In his dedication to Robert Moray, [START_REF] Graunt | Natural and political observations mentioned in a following index, and made upon the bills of mortality[END_REF] presents himself as a follower of Bacon, describing his discourses on life and death as natural history. Graunt's Observations laid the foundations of a true population science. To achieve this goal, he begins with the measurement-now as exhaustive as possible-of deaths and some other phenomena.
The recording of certain human events was actually an old practice, but Graunt's totally novel approach turned it into a true measurement method suitable for an emerging population science. For this purpose, Graunt used the bills of burials, marriages, and christenings kept in England and Wales since 1538, as ordered by Thomas Cromwell. This record-keeping by the clergy was not yet a regular practice, particularly under the reign of Mary Tudor, marked by the persecution of Protestants. In 1598, Elizabeth I ordered the records to be kept in parchment books, along with the bills compiled since the start of her reign in 1558. But registration on a nearly continuous basis did not begin until the start of James I's reign in 1603. In 1653, Oliver Cromwell transferred responsibility for the registers from the clergy to elected members for each parish, and fees were introduced for each registration. The restoration of Charles II in 1660 spelled the end of these civil registers, which had become religious again by the time Graunt was writing his book. Bills of mortality-of which one of the oldest known dates from 1532-were primarily designed to give an idea of the number of deaths and their trends, particularly for deaths due to the plague, which was raging at the time. Bills of marriage, which began to record the names of godfathers and godmothers in 1557, were intended to curb the rise in divorces. Previously many people had been able to divorce by declaring that they had married their godfather or godmother's son or daughter. This invalidated the marriage, for the Church regarded it as a spiritual incest. Fears of a new tax proved warranted in 1653, when fees were introduced for all registrations. In sum, while the bills served religious, political, social, administrative, tax-related, and other purposes, they manifestly had no scientific purpose at the outset. Indeed, Graunt clearly noted how the registers were generally used by those who received them (p. 1):
[They] made little other use of them, then to look at the foot, how the Burials increased or decreased; And, among the Casualties, what had happened rare, and extraordinary in the week current: so as they might take the same as a Text to talk upon, in the next Company; and withall, in the Plague-time, how the Sickness increased, or decreased, that so the Rich might judge of the necessity of their removal, and Trades-men might conjecture what doings they were like to have in their respective dealings.
Moreover, the measurement of phenomena regarded as God's secret was somewhat of a challenge. As Graunt observed, discussing the population of London (p. 59):
I had been frighted with that misunderstood Example of David, from attempting any computation of the People of this populous place. This enumeration performed without an order from the Lord entailed three days of devastating plague on Israel.
To comply with Bacon's goal of starting from the facts in order to develop a science, Graunt used the bills to show regular patterns or features that could not be grasped without them. In fact, he set out to discover an underlying order for these phenomena, which at the time were regarded as acts of God and hence not amenable to forecasts or any other calculation. Let us look at the major directions of his research and the measurements that he associated with them.
As its unit, the first measurement adopted the event, specifically death (often for a stated cause), baptism, and marriage. For example, Graunt treated all observed deaths as equivalent, stripping them of their human, political, and religious complexity. We can see how closely this measurement resembles what Plato defined in antiquity by counting the number of units observed, without regard for the particularities of each unit. Likewise, in this context, Graunt could simply enumerate these facts, sometimes filtered by classification criteria such as causes of death (unnumbered page: Epistle dedicatory to the Honourable John Lord Roberts):
[. . .] so far succeeded therein, as to have reduced several great confused Volumes into a few perspicuous Tables, and abridged such Observations as naturally flowed from them, into a few succinct Paragraphs, without any long Series of multiloquious Deductions [.] However, he subjected the figures to critical analysis, estimated the degree of confidence with which he could accept them, and adjusted them. His book discusses in great detail these entire operations, which were specific to the direct measurement of events and intended to guarantee the reliability of his research findings.
Graunt then tried to go further by taking the individual as unit and attempting to estimate London's total population without distinguishing a subset that would be useful solely for military, political or religious purposes. This marks the emergence of the notion of statistical individual-stripped precisely of individual attributes-which allowed the introduction of a science of man. To achieve this goal, however, Graunt had no general population census available or even a partial census of the kind conducted in antiquity. To estimate the total population, he resorted to a concept-later called the "multiplier"-that amounted to an indirect measurement. The estimate was made under different assumptions and from various observed and measured facts. The basic assumption was that these facts maintain a constant, necessary relationship with the population and that their existence is an assessment criterion [START_REF] Moheau | Recherches et considérations sur la population de la France[END_REF]. Here, Graunt estimated the London population from various facts observed in it: deaths, births (whose number was assumed to be twice that of fertile women), families, surface area, and so on (see the details on the assumptions made for each of these estimates in Vilquin's notes to his 1977 French translation of Graunt's book).
These hypotheses are very rough, however, and cannot yield a truly satisfactory estimate of the population. Laplace (1783) later used the "multiplier" method-with the aid of a genuine population sample-to yield a more precise estimate of the errors committed.
Let us now examine another approach introduced by Graunt to measure the population from its deaths.
During his estimation test, Graunt realized that he lacked another essential measure for making progress in his analysis: his sources did not record the age at which the various events occurred. Determining age seemed essential for moving forward in these disciplines, although its value would be downplayed later. To proxy the proportion of deaths at each age, Graunt resorted to the notion of probability, which had only just made its first appearance in the scientific world and would prove indispensable for population sciences.
Before we continue, therefore, we must briefly outline how the new discipline of probability took shape through games of chance, shortly before the emergence of population sciences.
Probability arose in the discussion between Fermat and Pascal on wagers (1654a) and Huygens' first treatise on probability (1657). Pascal defined the new science as follows (1654b):
Thus, by combining the rigor of scientific demonstrations with the uncertainty of chance, and reconciling these apparent opposites, it can, drawing its name from both, rightfully claim this astonishing title: The Geometry of Chance.118
This announces the introduction of a new measure for approximating uncertain phenomena: mathematical expectation. But Pascal's studies were not published until 1665, while Huygens-who took them up-published his treatise in 1657. As Huygens clearly stated (p. 1):
Although in games depending entirely upon Fortune, the Success is always uncertain; yet it may be exactly determin'd at the same time, how much more likely one is to win than lose.
He now assigned a measure to this "chance," i.e., to this probability, making it possible to reason mathematically on games. The probability is intrinsically objective, for it implies the existence of events that can be repeated in identical conditions.
Graunt applied probability theory not to games but to humans, although it is doubtful that the events examined are capable of repeating themselves as in games of chance. When he set out to estimate the population of London, in the absence of a census, he resorted to probability as a means to deduce the population at risk from the number of deaths. For this purpose, he used the concept of fair game (p. 59):
Next considering, That it is esteemed an even Lay, whether any man lives ten years longer, I supposed it was the same, that one of any 10 might die within one year.
Unlike [START_REF] Hacking | The emergence of probability[END_REF], who claims that Graunt's probabilistic reasoning is correct, we have shown that it is actually far from perfect. We shall not describe his errors in detail here (see [START_REF] Courgeau | Dispersion of measurements in demography: a historical view[END_REF] but simply outline the principle of his method.
Graunt initially assumes that the annual probability of dying between ages 10 and 60 is constant. If so, although Graunt does not spell out his calculation, we can estimate the population aged 10-60 from the ratio of the sum of actual deaths to the annual probability. Thus, if we assume-as he does at the outset-that the probability of dying within ten years is 1/2, the constant annual probability is 0.067 and yields a multiplier of 14.925. With 10,000 deaths observed, we therefore arrive at a London population aged 10-60 close to 150,000. 119 That is a far cry from the 6-7 million estimated by men of great experience in this City, although Graunt does not include children under 10 and old people over 60 in his count. However, the estimate of his multiplier is not based on any precise measure and is therefore highly questionable.
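The arithmetic implied by this passage can be checked in a few lines. The sketch below is only a present-day reconstruction of the figures quoted above (a ten-year survival of one half, an annual probability near 0.067, and 10,000 observed deaths); it is not Graunt's own procedure.

```python
# Minimal sketch of the multiplier arithmetic quoted above (not Graunt's own wording).

ten_year_survival = 0.5                                  # "an even Lay, whether any man lives ten years longer"
annual_death_prob = 1 - ten_year_survival ** (1 / 10)    # constant annual probability, about 0.067
multiplier = 1 / annual_death_prob                       # about 14.93 living persons per annual death
observed_deaths = 10_000                                 # yearly deaths taken from the passage above

population_10_60 = observed_deaths * multiplier
print(f"annual probability of dying: {annual_death_prob:.3f}")
print(f"multiplier: {multiplier:.2f}")
print(f"estimated population aged 10-60: {population_10_60:,.0f}")   # close to 150,000
```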
From this calculation, the notion of an underlying life table clearly emerges. Graunt elaborates on it, starting with the idea that the distribution of age-specific deaths in ten-year periods after age 6 follows a geometric progression with a root of 64 and a ratio of 5/8, and not 1/2 as he assumed in the previous calculation. The estimates were refined in the hundred years that followed, until Wargentin's publication of a life table broken down by age and sex, based on data from the census introduced in Sweden in 1749.
The advent of exhaustive censuses in most European countries by the early nineteenth century made the preparation of further such estimates of population useless. A direct measurement of population size was finally possible, provided it was carried out within a brief lapse of time (to avoid double counting, notably of internal migration) and with great accuracy (to avoid omissions). Ideally, the censuses should have been conducted by agencies independent of political, religious, and tax authorities so as not to arouse fears of new taxes potentially generated by the operation. Unfortunately, that was not always the case. In France, for instance, the First Division of the Interior Ministry ordered préfets to enumerate the population of their départements in 1801, while the Statistics Bureau was in charge of analyzing and preparing the data for possible publication.
The census questionnaires yield not only the population of the entire country or its administrative subdivisions, but also exhaustive counts of the phenomena occurring in the population, provided that the phenomena are covered by one or more questions in the individual forms. The questions most often asked concern name, sex, current and earlier place of residence, date and place of birth, nationality, marital status, education, labor-market status and occupation, native language and language commonly used, and religion.
Meanwhile, vital events continue to be recorded in three main registers: birth registers, marriage registers, and death registers. Some countries also maintain a population register that records basic information on each individual, particularly on changes in place of residence, whose reporting is essential to the smooth functioning of the system. Such population registers provide a continuous record of internal migration. The registers are properly kept in a small number of countries, where the census becomes necessary only for checking their quality and providing information that they do not collect.
One can then estimate all age-specific rates120 for the phenomena studied, for a given year or period. Such estimates no longer even require a calculation of their precision, since the size of the populations measured makes that precision very high [START_REF] Courgeau | Probability and Social sciences. Methodological relationships between the two approaches[END_REF]. However, these simple rates-which express the ratio of occurrences of an event to a population-are inadequate for certain phenomena. For example, while we can determine a ratio of international migration to the population of the country concerned, internal migration between two areas of the same country requires what are called intensity indices. We need to calculate the ratio of migrants between two areas to the product of the population of origin and the destination population, with which they are directly related [START_REF] Ravenstein | The laws of migration[END_REF]. Similarly, to study marriage, which involves two populations at risk-never-married men and women-we could use comparable intensity indices, for example by age. But given that spouses are not of identical age in most human populations, such rates have hardly ever been used.
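The intensity index for internal migration described above reduces to a one-line calculation. The sketch below is purely illustrative: the function name and the figures are invented, not taken from any of the sources cited.

```python
# Illustrative only: intensity index of migration between two areas, i.e., the ratio
# of migrants to the product of the origin and destination populations. Invented figures.

def migration_intensity(migrants: float, pop_origin: float, pop_destination: float) -> float:
    """Ratio of migrants between two areas to the product of the two populations at risk."""
    return migrants / (pop_origin * pop_destination)

# Hypothetical example: 1,200 migrants from area A (500,000 inhabitants) to area B (300,000 inhabitants).
print(f"{migration_intensity(1_200, 500_000, 300_000):.2e}")
```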
To round out this arsenal of simple indices, the researcher will try to represent a set of measures in condensed form. Examples include synthetic or summary indices, which, like "total" rates for demographic events, replace a series of measures by a single figure measuring the intensity of a given event. For instance, the total fertility rate, or simply total fertility, is the sum of age-specific fertility rates calculated for a given year. It can be interpreted as the mean number of children that a group of women would have had in their lifetime if, at each age, their fertility had been equal to the rate observed for that year. This type of measure was widely used in population sciences, especially before World War II.
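As a minimal sketch of such a summary index, the total fertility rate can be obtained by adding up a schedule of age-specific rates; the rates below are invented and serve only to show how a whole series is condensed into one figure.

```python
# Illustrative only: the total fertility rate as the sum of age-specific fertility
# rates observed in a given year (single years of age 15-49; rates are invented).

age_specific_rates = {age: 0.10 if 25 <= age <= 34 else 0.03 for age in range(15, 50)}

total_fertility = sum(age_specific_rates.values())
print(f"total fertility: {total_fertility:.2f} children per woman")
```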
After the measurement of facts, let us look at the measurement of relationships between facts, when we have at our disposal-notably thanks to censuses-the number of people having experienced various phenomena, demographic or other. How do we identify and measure the potential relationships between these facts?
In the early nineteenth century, Legendre, Gauss, and Laplace proposed the use of the least-squares method-with successive improvements-for what effectively constituted a regression analysis. The method solves a linear equation system containing fewer variables than equations. But its use was long confined to astronomy and geodesy. The reason is that the regression coefficients owe their significance to an external theory such as Newtonian physics for astronomy, or geometry for geodesy, and the theory depends on a small number of abstract concepts defined by axioms. This method eliminates the random fluctuations introduced by the empirical measurement of phenomena, yielding an optimal measure. By contrast, the social sciences in the early nineteenth century could only observe the multitude of factors influencing human life, without being able to impose an order on them. Comte (1839) pushed his criticism of the use of probability in the social sciences to an extreme (27th Lecture, note 18):
It is the basic notion of assessed probability that I find directly irrational and even sophistic: I view it as essentially unfit to guide our conduct in any instance, except at most in games of chance. It would routinely lead us in practice to reject, as numerically implausible, events that will occur nevertheless.121
However, although some contemporaries endorsed Comte's view [START_REF] Poinsot | Discussion de la « Note sur le calcul des probabilités » de Poisson[END_REF], it was not shared by most of those who were working in the social sciences. For instance, in the same period, [START_REF] Quetelet | Sur l'homme et le développement de ses facultés, ou Essai de physique sociale[END_REF] and [START_REF] Cournot | Exposition de la théorie des chances et des probabilités[END_REF] wrote books to show that no mathematical tool other than probability could turn social science into a true science.
Let us now briefly describe122 how the measurement of relationships between social facts by means of regression methods gained ground in the nineteenth century and eventually established itself in the social sciences. While the researchers mentioned below worked in different fields (we indicate the main area of interest for each), all became involved in statistics. Their goal was to show how the measurement of human facts could be incorporated into the search for the connections between them, and how probability allows a clear measurement of those links.
In 1835, the statistician Quetelet's theory of the average man sought to develop a social physics, focused on the mean distributions of many physical and social facts observed in a great number of populations. Quetelet showed that an abundance of facts can be represented by a normal distribution, but he ultimately failed to provide a measurement method capable of classifying or interlinking them. Cournot (1848) spelled out the limitations of this approach:
The average man thus defined, far from being, as it were, the species type, would simply be an impossible man, or at least there are no grounds so far for regarding him as possible.123
In other words, Quetelet's methods, too close to those of physics, could not show the diversity of human responses to a situation. Nor could they connect sub-populations that were homogeneous in respect of a given phenomenon to other sub-populations homogeneous in respect of another phenomenon.
The statistician [START_REF] Lexis | Über die Theorie der Stabilität statistischer Reihen[END_REF] tried to give a better characterization of human heterogeneity by measuring the dispersion of a series of demographic rates, using an index that compares the observed dispersion of the series with the dispersion expected if all its terms shared the same underlying probability. An index value greater than unity means that the series is unstable, i.e., that we cannot regard its terms as having the same underlying probability. In fact, most of the series that he examined seemed unstable. Quetelet had found that all the distributions he observed were normal but was incapable of devising a means to classify them more fully. Lexis, in contrast, used a measure so precise that it left him practically incapable of identifying a stable series. Yet he did not supply a method for analyzing this instability either.
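A present-day rendering of such a dispersion index, often called the Lexis ratio, compares the observed variance of a series of rates with the binomial variance expected if every rate arose from the same underlying probability; Lexis's own formulation differed in detail, and the data below are invented.

```python
# Illustrative only: a Lexis-type dispersion ratio for a series of annual rates,
# each computed on a population of size n. A value well above 1 suggests the rates
# cannot be treated as draws from a single underlying probability. Invented data.

rates = [0.021, 0.019, 0.025, 0.018, 0.027]      # observed annual rates
n = 10_000                                        # population at risk each year

mean_rate = sum(rates) / len(rates)
observed_var = sum((r - mean_rate) ** 2 for r in rates) / (len(rates) - 1)
binomial_var = mean_rate * (1 - mean_rate) / n    # variance expected under one common probability

print(f"dispersion ratio: {observed_var / binomial_var:.2f}")
```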
The anthropologist Galton (1875)-whose views on eugenics we criticized in Chapter 4-did make some improvements in statistics. He went further by showing that under the apparent unity of an approximately normal distribution, as observed by Quetelet, one could identify a mix of very different populations-all of them, however, binomial or normal-when their number exceeded 17. In his 1886 studies on heredity, despite his unawareness of Mendel's contemporaneous research (1865), Galton isolated several pairs of normal sub-populations (e.g., parents/children, brothers/sisters) and introduced the conditional expectation that led him to the notion of regression. However, instead of using the least-squares method, he resorted to various other procedures for obtaining a rough estimate of the regression parameters. The economist Edgeworth, following in Galton's footsteps (1883), generalized Galton's analysis of the bivariate case to the multivariate case by introducing correlations between each pair of observed characteristics. Edgeworth even went so far as to state correctly the goal to be reached. In the end, however, he did not answer the questions raised and continued to use approaches other than least squares, leaving Yule (1897) to formalize and solve the problem.
The sociologist Durkheim (1895) also used a regression method on aggregated data, which he called the concomitant-variation method. However, he was unaware of the advances in the method in England and continued not to estimate the parameters. He wrote:
The reason is that, for [the method] to be demonstrative, there is no need to strictly exclude all of the variations that differ from the ones being compared. The simple parallelism of the values taken by the two phenomena, provided that it has been established in a sufficient number of sufficiently varied cases, proves a relationship between the two. The method owes this privilege to the fact that it arrives at the causal relationship not from the outside, as the previous methods do, but from the inside. 124 Durkheim applied the method to suicides (1897) to show, for example, that when the proportion of Protestants increased in the provinces of Prussia and Bavaria, the percentage of suicides there rose in a linear pattern. In other words, he assumed that the Protestant sub-population was sufficiently homogeneous in regard to suicide as to enable a verifiable effect to be deduced from this method using aggregate data. Durkheim concluded that religion manifestly influenced suicide. He applied the method to other cases such as the relationships of suicide to education and family size. However, as we shall see in §6.2.3, the method entailed a risk of ecological fallacy.
The statistician [START_REF] Yule | On the theory of correlation[END_REF], working in demographic economics, studied the links between pauperism and various personal characteristics. He too used a linear regression between aggregate variables, this time introducing ordinary least squares to estimate its parameters.
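To make the calculation concrete, the sketch below fits a linear regression by ordinary least squares to a handful of aggregate observations. It is only a present-day illustration of the kind of computation involved; the variable names and figures are invented and do not reproduce Yule's study.

```python
# Illustrative only: ordinary least squares on aggregate observations,
# in the spirit of Yule's regressions. All figures are invented.
import numpy as np

# Each row is an administrative unit: [aggregate variable 1, aggregate variable 2]
X = np.array([[1.2, 0.08],
              [2.5, 0.10],
              [0.8, 0.07],
              [3.1, 0.12],
              [1.9, 0.09]])
y = np.array([2.1, 3.8, 1.7, 4.6, 3.0])   # change in a pauperism measure for each unit

# Add an intercept column and solve the least-squares problem.
design = np.column_stack([np.ones(len(y)), X])
coefficients, *_ = np.linalg.lstsq(design, y, rcond=None)

print("intercept and slopes:", np.round(coefficients, 3))
```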
It is important to distinguish here between correlation and regression, which were often confused in the late nineteenth century. The correlation between two characteristics is symmetrical and involves no other theory than statistical theory. This explains the publication of many spurious correlations, such as the recent claims of a strong correlation between chocolate consumption in a given population and the per-capita number of Nobel prizes [START_REF] Messerli | Chocolate, consumption, Cognitive function, and Nobel laurates[END_REF] or serial killers. In reality, we should be looking for the economic, social, and cultural factors that lead people to consume chocolate and at the same time to reach a given level of education allowing potential access to a Nobel prize. That is precisely what a regression does when it introduces asymmetry between variables, as a consequence of a deeper analysis of the underlying causality.
124 French text: En effet, pour qu'elle soit démonstrative, il n'est pas nécessaire que toutes les variations différentes de celles que l'on compare aient été rigoureusement exclues. Le simple parallélisme des valeurs par lesquelles passent les deux phénomènes, pourvu qu'il ait été établi dans un nombre suffisant de cas suffisamment variés, est la preuve qu'il existe entre eux une relation. Cette méthode doit ce privilège à ce qu'elle atteint le rapport causal, non du dehors comme les précédentes, mais par le dedans.
Here, we have one characteristic that will depend on other characteristics. Naturally the analysis must be performed with the utmost rigor, and the underlying paradigm of the discipline concerned will play the leading role-as we shall see.
Yule was thus in a position to say that a regression allowed an assessment and comparison of the effects of different aggregate variables introduced to explain the changes in the measurement of pauperism that he was using. He no longer even needed to factor in their normality, provided that the regression was linear. 125 In Stigler's words (1986, p. 360):
In regression analysis, conditional probability made possible the very definition of the quantities about which the statistician was interested in making inferences.
However, the risk of omission of an important characteristic can always introduce undesirable effects, as Yule fully recognized. We could call this the effect of unobserved heterogeneity.
To sum up, the introduction of human lives into the field of science strongly altered the ways in which they might be viewed. They ceased to be seen as individual lives that cannot be compared with one another. On the contrary, the scientific goal became to search for configurations of relationships that can interconnect lives. However, throughout the period discussed here, that quest was essentially a science of the moment, i.e., one that used the observation of human facts at a specific instant without considering the length of time in which the person has been in a given state. The following period-from the twentieth to the twenty-first century-has been completing that observation with the aim of producing a fuller approach to human life.
125 Although this conclusion is not fully justified, it freed social scientists at the time from the obligation to perform complex normality tests for multivariate distributions.
Setting aside hermeneutics for the moment, let us examine how sociology can use life stories. Between 1918 and 1920, the sociologists Thomas and Znaniecki published their seminal study on The Polish peasant in Europe and America, in five volumes. The authors collected the life stories of fifty families of Polish immigrants to the United States through their correspondence. The result is an original, in-depth analysis of migration. It shows that family and community of origin play a more decisive role than the economic factor. The disaggregation effect of the American capitalist economy contrasts with the reconstruction of migrant families. In other words, Thomas and Znaniecki effectively draw inspiration from philosophical hermeneutics.
Their work initiated what came to be known as the Chicago School of American sociologists, who successfully analyzed the life stories of many communities settled in the United States, up to the outbreak of World War II.
Immediately after the war, marked by the horror of the concentration camps, some direct eyewitness accounts of life in the camps were published. Their number was small, so brutal were the living conditions there (on this point, see Pollak, 1990, p. 15). One outstanding book should be noted here: Les Françaises à Ravensbrück (French Women in Ravensbrück) (1965), which gathers many testimonies chosen for the way in which they reflect the reactions of female deportees. The sociologist Marie-Josée Chombart de Lauwe was a member of the editorial committee.
It was not until the 1970s that the subject of the camps truly entered the field of sociology. In 1973, Bertaux noted (p. 355):
In other words, when one has not gathered the socio-occupational histories of individuals, it is impossible to reconstruct them from mobility flows alone.126
Interestingly, Bertaux linked this approach to longitudinal analysis in demography, and particularly to the work of Louis Henry, which we discuss later. This led Bertaux to propose a new biographical approach in sociology in 1976, which would treat biographies "not as life stories but as narratives of practices" (p. 199). These practices, observable via a survey, enabled him to capture social relations between individuals-which, for their part, are not observable. This time, sociology would turn to explanation rather than comprehension.
By contrast, during the 1980s, Pollak undertook the task of "comprehending"-in Dilthey's sense-the concentration-camp experience. In La gestion de l'indicible (Managing the Unspeakable) (1986), he analyzed in detail the life story of a victim of the Nazi regime and showed how she managed to overcome that period by building a working career and a private life. Moreover, Pollak perfectly demonstrates (p. 32): […] that all individual histories and memories fit into a collective history and memory.127
From his close scrutiny of this particular narrative, Pollak extracted a hard core-a leitmotiv of sorts-that he identified in his 46 other long interviews of survivors. This gave him the possibility of "explaining" the survivors' behaviors, which he put to use in his book L'expérience concentrationnaire (1990), marking a convergence with the Chicago School approach.
Interestingly, Pollak's article appeared in the issue of the journal Actes de la recherche en Sciences Sociales that Bourdieu, the journal's editor and Pollak's doctoral dissertation supervisor, entitled L'illusion biographique-the very same title as Bourdieu's own article in the issue (1986). Without discussing Bourdieu's concept of "structural constructivism" in detail here, let us simply note that he invokes it in dogmatic fashion to launch a blistering attack on the biographical approach, which he wrongly rejects (p. 69):
This inclination to make oneself the ideologist of one's own life-by selecting, on the basis of an overall intention, certain significant events and by drawing connections between them that will give them coherence, such as [the connections] implied by their establishment as causes or, more often, as ends-meets with the natural connivance of the biographer, who has every reason, starting with his propensities as an interpretation professional, to accept this artificial creation of meaning.128
As we saw in our examination of Pollak's article, that "inclination" has nothing to do with the work of a scrupulous sociologist who pays close attention to the form of the testimony and the conditions in which it was recorded, to the examination of legal depositions, to the organization of the narrative, and so on. We are dealing here not with "an artificial creation of meaning" but with a far-reaching scrutiny of the interviewee's words. Heinich (2010, p. 429) quite rightly concludes her article with an attack on this fallacy:
And it's pitiful to realize-rereading it more than twenty years later-how far the superb intelligence that was Bourdieu's strayed into the typical form of foolishness of our time consisting of all-round suspicion, blind and systematic criticism.129
By attacking micro-history, Bourdieu tried to assert his presence in a field whose importance he failed to grasp.
On this topic, the Revue Française de Sociologie published an issue in January-March 1990 devoted to the biographical approach. One article in particular (de Coninck, Godard) describes three basic models that can be used in sociology: an archeological model, a change-centered model, and a structural model. The authors conclude (p. 51):
Identifying causal relations that go beyond individual cases allows a comparison of the few regularities thus revealed, but this does not mean that the researcher presumes to be able to predict individual paths. 130 While that is a far cry from Bourdieu's rejection of the biographical approach, the "explanation" of the relations does not concern individuals, of course, but only the population examined, as we shall see for population sciences.
In sum, sociology has focused at times on the "comprehension" of social facts, at times on their "explanation," showing a possible convergence of the two approaches but without ever proving it.
In the population sciences, the approach prevailing since the seventeenth century persisted until the end of World War II. Its underlying paradigm was that of cross-sectional analysis, which assumes that social facts exist independently of the people who experience them (statistical individuals). The facts are explained by the social, economic, political, religious, and other characteristics of the society as a whole. The approach is essentially based on aggregate measures.
129 French text: [. . .] cette forme de bêtise typique de notre époque que sont le soupçon généralisé, la critique aveugle et systématique.
130 French text: Mettre en évidence des relations causales qui dépassent les cas individuels autorise l'opération de comparaison sur les quelques régularités mises ainsi en évidence et ne signifie pas pour autant que le chercheur s'arroge une capacité de prédiction sur les trajectoires individuelles.
Although in use for over 280 years, the paradigm ran into a series of problems due to the characteristics of some of the measures. At a deeper level, its problems were due to its underlying assumptions.
First, the summary indices currently used in cross-sectional analysis can become misleading during periods when the timing of a phenomenon changes. For example, at the end of World War II, the cumulated first-marriage frequency largely exceeded unity, whereas the index might logically have been expected to remain consistently below unity-as in a real cohort. Henry (1966, p. 468) observed:
[. . .] in a recovery period, behavior is influenced by the earlier lag; accordingly, to assign to a fictitious cohort a series of indices observed in a recovery period means postulating the existence of a cohort engaged in a lifelong effort to make up for a delay that it had never experienced. 131 The period factors are actually experienced in life stages that differ considerably from one cohort to another. They may also entail different consequences, which summary indices utterly fail to capture.
Second, the regression methods used incorporate quantities aggregated on different criteria. This creates a strong risk of interpreting the results in terms of individual behavior-an outcome known as the ecological fallacy. Durkheim (1897) was in danger of committing the fallacy when he measured a positive connection between the percentage of Protestants in a region and its suicide rate. This finding rests entirely on the assumption that social facts exist independently of the people who experience them. Similarly, we have shown a positive link between the percentage of farmers and the percentage of migrants between Norwegian regions for men aged 22 born in 1948 (Courgeau, 2007). Later, we shall see that this assumption is totally invalidated at individual level, and that farmers actually migrate far less than the rest of the population.
131 French text: …au cours d'une période de récupération, le comportement est influencé par le retard antérieur ; attribuer à une cohorte fictive une série d'indices observés en période de récupération revient alors à postuler l'existence d'une génération qui, d'un bout à l'autre de sa vie, s'emploierait à rattraper un retard qu'elle n'aurait jamais pris.
The period vision of cross-sectional analysis strips human life of all its density, for the analysis looks at a given moment and assumes that the phenomena studied are determined by the characteristics of the population just before their occurrence. If, instead, we give priority to the time spent by individuals in a given state, we shall emphasize duration and be better able to describe the sequence of events.
In these conditions, it is useful to adopt another point of view on the phenomena without, however, fundamentally changing the measurement of the facts. These will continue to be measured by population censuses and register data. But we shall track them over the lifetime of a generation or cohort instead of examining them during a year or another given period.
The calculation of new types of indices, called probabilities (in French: quotients), sought to eliminate interferences between phenomena occurring simultaneously in a generation or cohort. The goal was to measure each phenomenon in its pure state, i.e., separating the effect of the phenomenon studied from those of the other phenomena, regarded as disturbing (Henry, 1972). For this purpose, we need to assume that the phenomena are independent of one another.
For example, to measure a probability of first marriage in a population experiencing mortality, emigration, and immigration simultaneously, we need to assume independence between these three phenomena and marriage in order to obtain a measure of marriage in its pure state. We can then show that the probability of marriage is approximately equal to the ratio of first marriages observed at a given age to the never-married population initially at risk minus one-half of the deaths and emigration flows plus one-half of the immigration flows occurring in this initial population.
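In symbols (our own notation, not Henry's), the approximation just described can be written, for age $x$, as

\[
q_x \;\approx\; \frac{M_x}{C_x - \tfrac{1}{2}\,(D_x + E_x) + \tfrac{1}{2}\,I_x},
\]

where $M_x$ is the number of first marriages observed at that age, $C_x$ the never-married population initially at risk, and $D_x$, $E_x$, and $I_x$ the deaths, emigrations, and immigrations occurring in this initial population.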
To be fully valid, however, our calculation will require another assumption. To study nuptiality, we have succeeded in eliminating the effect of certain disturbing phenomena-mortality, emigration, and immigration-but we have no certainty regarding the effect of many other characteristics that will influence the decision to marry. For the previous calculation to be fully valid, we must therefore posit a new hypothesis: the lifelong homogeneity of the population studied. In other words, a person's probability of marrying will not be affected by his or her education, occupation, nationality, religion, and so on. Although we can easily see how implausible this assumption is, we need it for now, and we shall see later the unfortunately vain attempts to remove it via differential longitudinal analysis.
We do have the option of generalizing this order-specific measure of the event to events for all orders combined by calculating, for example, age-specific fertility rates, which always eliminate the effect of the other disturbing phenomena.
Under these conditions, we can calculate summary indices of the intensity of phenomena across the life of a generation or cohort, and we can easily see that they do not display the disadvantages of cross-sectional indices. For instance, the intensity of first marriages will always be less than, or at most equal to, unity-unlike the cumulated first-marriage frequency. These indices will provide a better tracking of the changes in the phenomena studied by generation or cohort.
However, we must bear in mind that we cannot determine the indices until the phenomenon can no longer occur in the population studied: the age at menopause for fertility, and the total extinction of the generation or cohort for most other phenomena. This is obviously a major drawback. By contrast, period indices offered a cumulative measure that could be used immediately.
We noted above the need for the population to meet the homogeneity condition in order for the longitudinal analysis to be satisfactory and, at the same time, the need to waive that condition because of its lack of plausibility. The introduction of differential analysis was an attempt to resolve this problem. Let us see if the solution works.
Differential analysis seeks to study the occurrence of a given phenomenon in initial groups defined by characteristics such as education, occupation, and religion. The definition of notionally homogeneous groups can be problematic to begin with. As Lavoye and Mayer (1984, p. 145) noted: at best, from one or more well-known variables, we can determine the categories that may define homogeneous sub-populations (or sub-populations assumed to be homogeneous) and measure the demographic differences between these groups. But the determination of homogeneous groups is not always so simple, for we may want to identify subsets that have many characteristics in common [. . .]132
The authors clearly state the principle of this approach and some of its limitations when seeking a finer breakdown of the population studied. Let us take a closer look at the problem.
First, we need a measure of the variables that distinguishes between sub-populations. If we are dealing with characteristics measured by civil-registration records, we know that they capture a small number of individual characteristics, and only the use of fuller population registers, such as those of Denmark, 133 would enable us to perform such an analysis properly. For instance, when the register gives the person's occupation at the time of marriage, we can analyze family formation by initial occupational group, without actually being able to analyze the interaction between occupational change and fertility. By contrast, if we are dealing with characteristics measured by censuses, the situation is more complex, for these enumerations-typically conducted at ten-year intervals-will not allow us to define a person's status when he or she enters the group considered. A census will record the person's occupation at a different date from that of his or her entry into the subpopulation studied. For example, we will know someone's occupation not at the time of marriage, as before, but only in the census nearest the marriage date.
For a clearer view of the conditions that indices must satisfy in order to allow a differential analysis of this kind, let us look at a more specific example.
Suppose we want to analyze the legitimate fertility of a cohort of married women in the non-metropolitan areas of a given country. In addition to mortality and international migration, the analysis will obviously need to cover annual movements between metropolitan and non-metropolitan areas. While mortality and international migration can be easily incorporated into the estimated probabilities, the same is not true of movements to and from metropolitan areas, whose probability greatly exceeds that of deaths and international migration. Moreover, while we may assume that the first two disturbing phenomena are relatively independent of each other, the independence between migration and fertility is far harder to confirm. As noted earlier, the links between fertility and migration are too strong to be neglected.
But even if this analysis is feasible, would it have identified a homogeneous sub-population of married women? There is no reason to believe so. As Henry noted (1959, p. 32):
To determine exactly what is the practical impact of the heterogeneity of human groups, we would need to extend research in differential demography all the way to individual physical and psychological characteristics, taking care to study both the dispersion and the correlation of demographic indices within the fairly general groups that we have considered so far. 134 In a footnote (p. 25), he added:
134 French text: Pour savoir exactement, quelle est la portée pratique de l'hétérogénéité des groupes humains, il faudra pousser les recherches de démographie différentielle jusqu'aux caractéristiques individuelles physiques et psychologiques, avec le souci d'étudier à la fois la dispersion et la corrélation des indices démographiques à l'intérieur des groupes, assez sommaires, considérés jusqu'ici.
Given the practical difficulties, one cannot avoid asking whether the problem posed can be solved.135 Today, we can say that the problem cannot be solved by means of differential longitudinal analysis and that the only solution is an event-history approach.
Distinguishing between two sub-groups, such as women living in non-metropolitan or metropolitan areas, is not enough to permit a true differential analysis. As Lavoye and Mayer observed in the earlier quotation, it is essential to incorporate a very large set of characteristics for the analysis to be valid. This, however, will yield groups too small for a longitudinal analysis. Furthermore, we shall never be certain of having included all of the population's heterogeneity factors. There will always remain an unobserved heterogeneity whose effect on the estimated probabilities will be totally unknown. As we shall see, this problem does not occur in event-history analysis.
In conclusion, differential analysis does not allow population heterogeneity to be taken properly into account, forcing us to make do with the assumption that the population studied is homogeneous.
In light of the above, we can define the paradigm of longitudinal analysis by means of the following postulate: one can only study the occurrence of a single event, during the lifetime of a generation or cohort, in a population: [. . .] that maintains all of its characteristics and the same characteristics for as long as the phenomenon occurs136 (Blayo, 1995, p. 1504).
For the analysis to be feasible and for the measures of the phenomena studied to be meaningful, the population must be regarded as homogeneous and the disturbing phenomena must be independent of the phenomenon studied.
Books on longitudinal analysis, such as Henry (1972), devote a separate chapter to each phenomenon, since each has been isolated in a pure state: marriage, fertility, mortality, and migration.
Although it dispels some of the criticisms leveled against cross-sectional analysis, this approach raises some new problems of its own. First, probabilities in longitudinal analysis are calculated under the assumption of independence between the phenomena studied and the disturbing phenomena. This hypothesis is broadly verified for disturbing phenomena such as mortality and international migration in the study of marriage and fertility. For other phenomena, however, it is far more questionable. When we study marriage, for instance, the concurrent effect of cohabitation introduces a selection bias that eliminates from the population at risk a set of persons who undoubtedly exhibit special characteristics. This invalidates the independence assumption and makes the calculation of the corresponding probabilities largely meaningless.
Second, as the paradigm used allows the study of only one phenomenon, we cannot analyze concurrent events. Accordingly, studies of cause-specific mortality are not recommended, for the eradication of one cause of mortality will obviously change the probabilities of dying from other causes, in a manner virtually impossible to predict as long as the first cause exists. Likewise, "we must renounce the idea of studying a population in which several events allow entry" (Blayo, 1995), as in the example discussed above of women living in non-metropolitan areas. In other words, there are a great many cases indeed in which the postulate of longitudinal analysis makes it impossible to calculate clearly meaningful probabilities.
Third, as noted earlier, differential demography does not allow a proper examination of population heterogeneity. The breakdown of a given population into sub-populations that can be regarded as homogeneous soon yields groups too small to allow any meaningful longitudinal analysis.
In these conditions, making analytical progress requires the definition of a new paradigm with which we can validate the calculations and indices that are not valid or cannot be estimated in longitudinal analysis.
Census and register data were exhaustive for the main demographic phenomena, but covered very few other facts that would have provided a better understanding of the behavior of individual members of the population. To obtain this fuller information, we need to perform surveys, which-for reasons of cost and even feasibility-cannot be exhaustive. Surveys yield a different measurement of facts and a set of indices that, once again, must incorporate their own dispersion, so that we can draw more robust conclusions. As we shall see, this new form of measurement-unlike differential analysis-allows us to establish very clear connections between many facts as they unfold over time.
The method of choice for collecting a maximum number of events in a person's life history is the prospective or retrospective survey.
The prospective survey typically collects once a year the events that have occurred during that year. It is the best means for obtaining information that is reliable because of its great timeliness. Its main disadvantage is the need to wait many years after its start before using the event histories collected. For example, the prospective surveys of retirees carried out by Françoise Cribier's team (Cribier, Kich, 1999) had to track them from ages 65 to 90 to obtain final results. Another drawback is the risk of attrition, as many interviewees may eventually grow tired of responding.
The retrospective survey, instead, collects a large number of events that occurred in a sometimes distant past from an already elderly cohort. Its chief disadvantage is that respondents may have trouble recalling dates of remote events or may even forget some altogether. The survey that I conducted in 1981 (Courgeau, 1999), called Triple event history (work, family, and migration), covered cohorts born between 1911 and 1936, who had therefore lived a large part of their lives. Faced with respondents' difficulties in remembering old events, we performed a similar survey in a country maintaining population registers that allowed a quality check on the information gathered retrospectively (Poulain et al., 1991;Courgeau, 1991). Our analyses showed that, in most cases, dating errors act as background noise from which we can extract consistent information regardless of source: man, woman, couple or register. Memory therefore seems reliable in situations where the analysis requires it. Moreover, retrospective surveys are not subject to the attrition risk inherent in prospective surveys, since each respondent is interviewed only once.
This new approach focuses not on the event (marriage, birth, migration, change of occupation, and so on) but on the person's entire life history, regarded as a complex process in which the phenomena studied are in permanent interaction. The goal is to see how an event can influence the sequence of other events in a person's life. Similarly, time is not a discrete variable as in longitudinal analysis, measured over one or more years, but will be regarded as continuous. Lastly, a number of the events studied may be unobserved, for the observation period is limited by the date on which the retrospective survey is performed or the date when the prospective observation stops. The period is described as censored or truncated.
All these developments will lead to new measurement methods, which we cannot discuss in detail here, for they are based on complex mathematical and probabilistic concepts: martingale theory (Doob, 1953; Dellacherie and Meyer, 1980) and counting-process theory. To summarize this new approach, we can say that it regards transitions between many states as a multivariate counting process on which one can define a matrix of intensities of transition between each state (probabilities combined over time)-a matrix that changes over time. As we are dealing with survey data, the variance of the estimate is now necessary and is itself estimated simultaneously. We can thus eliminate the independence condition for the phenomena-which had been set in longitudinal analysis-and focus instead on a detailed examination of the multiple dependencies that may exist between them.
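To fix ideas, here is a schematic statement of that framework in the standard notation of counting-process theory (the notation is generic, not that of the surveys discussed below). If $N_{ij}(t)$ counts the transitions observed from state $i$ to state $j$ up to time $t$, and $Y_i(t)$ is the number of individuals at risk in state $i$ just before $t$, the process is governed by intensities $\alpha_{ij}(t)$ whose cumulatives $A_{ij}(t) = \int_0^t \alpha_{ij}(s)\,ds$ are estimated by the Nelson-Aalen estimator

\[
\hat{A}_{ij}(t) \;=\; \sum_{s \le t} \frac{\Delta N_{ij}(s)}{Y_i(s)},
\]

together with an estimate of its variance, as survey data require.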
By measuring the combined cumulative probabilities, we can study, for example, the links between family formation and migration to metropolitan areas (centered on Paris, Lyon, and Marseille) in France in the period 1925-1950 (Courgeau, Family formation and urbanization). We find that women's migration to metropolitan areas entails a steep fall in fertility, irrespective of birth order, whereas migration to low-urbanized areas increases fertility. What we observe, therefore, is a very rapid adjustment to the behavior of the host environment, although the adjustment differs according to the migrants' origin. Migration to metropolitan areas attracts women whose pre-migration behavior already closely resembles that of the host environment. By contrast, migration to less urbanized areas attracts women whose pre-migration fertility does not differ from that of large cities. The first type of migration reflects selective behavior, while the second type illustrates adaptive behavior. Naturally, we can perform the analysis in the other direction, to see how successive births influence female mobility. The results indicate a reciprocal dependence: the probability of migrating to metropolitan areas decreases with each birth, whereas mobility in the other direction rises after each birth. This example illustrates how a true differential analysis-impossible to conduct in a longitudinal framework-becomes entirely possible here.
This analysis also illustrates the complexity of dependencies that can be identified by measuring combined probabilities. We can speak of local dependence when only one process influences the other without reciprocity (Schweder, 1970) and total independence when the phenomena have no influence on one another. The latter condition, imposed in longitudinal analysis, rarely occurs in reality when we perform an event-history analysis.
Lastly, event-history analysis can incorporate the equivalent of the double rates that we described in cross-sectional analysis. In his study of drosophila mating, Aalen used a double rate measuring the number of male and female flies simultaneously at risk (Aalen, 1978). One could do the same for the study of marriage or human migration between areas but, to our knowledge, no such attempts have yet been made.
Whereas longitudinal analysis was incapable of measuring the effects of different personal characteristics on the phenomena studied, or did so very poorly, event-history analysis has no such difficulty. It thus allows us to waive the homogeneity condition for the population needed in longitudinal analysis.
The models used generalize the regressions discussed in our presentation of cross-sectional analysis, but now with the addition of the time factor. For this purpose, we could distinguish between two types of models-parametric and semi-parametric (Courgeau and Lelièvre, 1992)-but we shall confine our examination here to the second type, which is far more flexible to use. It achieves this flexibility by means of an underlying combined probability on which no parametric formalization of the duration-of-stay effect is imposed, while the effect of the various characteristics is estimated with the aid of parameters-hence the name "semi-parametric."
These models were initially proposed by Cox (1972), but without a true underlying theory. This was developed later by incorporating martingale theory and counting-process theory (Aalen, 1975), as was done with the measurement of facts. Such models can include individual characteristics either in multiplicative form (proportional-risk models or accelerated failure time [AFT] models) or in additive form as in the classic regression model. Again for reasons of space, we cannot provide an overview of the assumptions and mathematical estimation methods applied to the models. Let us see instead how the relevant characteristics are measured and give a simple example of the application of the models. The characteristics may be fixed, such as those of the respondent's parents, or on the contrary they may vary over time, such as the respondent's occupation. They are generally measured by surveys and can be embodied in variables that are binary (the individual displays or does not display the characteristic at a given time), polytomous (the individual does or does not display a multiple characteristic, which can be either nominal or ordered, at a given time) or continuous (a person's income at a given time).
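A minimal sketch in standard notation (not that of any particular study cited here) may help. In the multiplicative, proportional-hazards form, the instantaneous risk for an individual with characteristics $x(t)$ is

\[
h\bigl(t \mid x\bigr) \;=\; h_0(t)\,\exp\bigl(\beta' x(t)\bigr),
\]

where the baseline hazard $h_0(t)$ is left unspecified-the non-parametric part-while the vector $\beta$ is estimated from the data-the parametric part. One common additive specification simply replaces the product by a sum, $h(t \mid x) = h_0(t) + \beta' x(t)$.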
As an example of the use of such methods, we continue the earlier analysis of the links between family and urbanization (Courgeau, Family formation and urbanization), in which we now incorporate various characteristics of the population groups involved.
Let us see, for instance, the effect on the interaction between the birth of the third child and mobility between metropolitan and non-metropolitan areas of various characteristics such as the mother's education, position in her own family (eldest, number of siblings), father's occupation, own occupation, and so on-all of which are assumed to have a multiplicative impact on the combined probabilities. We can examine a few of these effects here in greater detail. Having a farmer father has an effect on the third birth, regardless of the mother's area of residence at the time of birth. If, however, the birth occurs before migration, it will be delayed, whereas after migration this effect disappears. Other aspects of behavior will not be affected by migration. A woman with many siblings will always be more likely to have a third child regardless of her area of origin. Once all these characteristics have been factored in, the effect described above-i.e., migration to metropolitan areas substantially reduces the probability of having a third child-will endure, whereas migration to less urbanized areas will continue to increase fertility.
Another issue deserves to be raised concerning these analyses, which cover a large number of personal states. As a result, the impact of age-essential in the earlier paradigms-is considerably weakened, whereas entries and durations of stay in these states become the dominant factors. For instance, in the study of internal migration (Courgeau, 1985), the age effect, which was strong before the introduction of these states, diminishes and even disappears altogether in certain generations once we have introduced states into family life, the workplace, and social life.
Like regressions on aggregate characteristics, event-history models depend on unobserved heterogeneity. But now we can measure the effects of characteristics, subject to conditions that are often met. When the unobserved characteristics are independent of those we observe, Bretagnolle and Huber-Carol (1985) successfully demonstrated that they reduce the absolute estimated values of the parameters corresponding to the characteristics observed but do not change their signs. As a result, if the effect of a characteristic had seemed significant when other characteristics were omitted, their introduction into the model will merely strengthen the effect of the first characteristic. In contrast, some characteristics that appeared to have no significant effect can become quite influential when we introduce the initially unobserved characteristics.
These findings are very important because they enable us to be sure of the meaning of the effects observed, despite our not knowing if we have introduced into the models all the characteristics influencing the duration of stay. We should note, however, that this holds only when the unobserved characteristics are independent of those already introduced.
The paradigm of the event-history approach can be summarized as follows: throughout his or her life, a person follows a complex trajectory that depends at a given moment on his or her earlier trajectory and on the information that he or she has acquired in the past (Courgeau and Lelièvre, 1996). The population to which these persons belong can now be regarded as heterogeneous and the phenomena observed are generally dependent on one another. In any event, the analysis will show if these conditions are met or not, whereas in longitudinal analysis the homogeneity of a population and the independence between phenomena were part of the paradigm.
The event-history approach therefore dispels some of the criticisms directed at longitudinal and cross-sectional analysis, but at the same time it raises new issues. First, it does not lend itself to the criticisms leveled at differential longitudinal analysis. We no longer need to break down the population into sub-populations too small to allow conclusions. The regression methods used in event-history analysis prove to be very powerful and do not require any decomposition of the population. However, we must be watchful of problems due to unobserved heterogeneity. We have already reported the results obtained when the heterogeneity was independent of the phenomena observed. To solve this broader problem, many researchers have tried to introduce an unobserved heterogeneity in the form of a function called frailty and have estimated its parameters (Vaupel and Yashin, 1985). Let us take a closer look at how they proceed, for they incorporate into their models a series of measures that we could describe as fictitious.
Frailty is an unknown function whose purpose is to model the underlying behavior of the members of the population observed. We use different frailty distributions to see how their introduction alters the effects of the characteristics observed. If they modify the effects in a manner that does not depend significantly on the underlying distribution, we may conclude that the introduction of frailty is useful. Unfortunately, the effects of the variables observed are strongly influenced by the hypothetical distribution of the unobserved variable. Trussell and Richards (1985) show that some distributions can even change the signs of certain parameters. The instability of these results casts doubt on the usefulness of introducing unobserved heterogeneity. In fact, we can show that, for the analysis of non-repetitive phenomena, there is only one model that can be estimated without unobserved heterogeneity, but when we try to introduce one, there are an infinity of models that adjust identically to the data with different estimated probabilities (Trussell et al., 1992). Here, the choice of a distribution to represent unobserved heterogeneity-with no other event-history information or other kinds of information on its form-seems of little use or even harmful.
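Schematically, and in generic notation rather than that of the authors cited, a frailty model multiplies the individual hazard by an unobserved positive random term $u$:

\[
h\bigl(t \mid x, u\bigr) \;=\; u\,h_0(t)\,\exp\bigl(\beta' x\bigr), \qquad u \sim \text{an assumed distribution (for example, a gamma distribution with mean 1)}.
\]

The sensitivity of the estimated $\beta$ to the choice of that distribution is precisely the instability discussed above.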
Second, the event-history paradigm eliminates the risk of ecological fallacy associated with the cross-sectional paradigm. As we are now working on individual data, the relationships identified operate at individual level. For instance, the positive link previously noted between the percentage of farmers and migrants in Norway will now appear as negative: farmers will migrate far less than the rest of the population. But, while this finding seems more normal than the surprising result of the cross-sectional analysis, it does not explain the earlier positive relationship. The reason is that, by eliminating the risk of ecological fallacy, the event-history approach can generate a risk of atomistic fallacy by neglecting the context in which behaviors occur.
We shall therefore turn to the contextual approach, followed by the multilevel approach, to see how they can enable us to resolve this difficulty.
Human beings do not live in isolation. On the contrary, they are closely involved in different social groups on which their existence strongly depends. By shifting our emphasis to these groups and identifying a plurality of levels, we abandon the dualist vision, which pits society in the cross-sectional approach against the individual in the event-history approach. In these conditions, as Franck (1995, p. 79) notes:
Once we have admitted the metaphysical or metadisciplinary concept of hierarchy, it no longer makes sense to choose between holism and atomism, and-as regards the social sciencesbetween holism and individualism. [. . .] the point is to find out the true connections between the different stages or levels, from top to bottom and from bottom to top. 137 We thus need to focus on these groups if we want to understand their behavior and identify new measures that will allow their inclusion in the population sciences.
Social groups can be highly diverse and variable from one society to another. We therefore cannot supply a description of them that will apply to all societies. We can only show their diversity.
A first type of universal grouping comes to mind immediately: the family. Although often treated as a single entity, it is already complex in itself. For example, we can work on the group of children to study phenomena such as the age of departure from the family home (Murphy and Wang, 1998), or on the group of parents to study their types of successive jobs (Courgeau and Meron, 1996). Other groupings of family or friends are also possible, such as the contact network (Courgeau, 1973) or the contact circle (entourage) (Bonvalet and Lelièvre, 2012), if we define them with precision.
Other communities can be considered, such as the business firm or government office where a person works, the class or school where a pupil studies, or the hospital or clinic where a patient is being treated.
Very often we shall need to fall back on geographic or administrative groups whose effect is less direct. However, it will be far easier to work on these groups, which are notably used for censuses and surveys. In France, for example, we can use geographic divisions such as the municipality (commune), city, département or region. Many data are gathered, aggregated, and published for these areas-on their population, mortality, fertility, health, education, economy, and so on. The multilevel analyses performed on these divisions often yield very significant results, suggesting that the official divisions are approximations of divisions that would be better suited for such studies, but for which no statistics are collected.
137 French text: Une fois admis le concept métaphysique ou métadisciplinaire d'hiérarchie, il n'y a plus de sens à choisir entre holisme et atomisme, et pour ce qui est des sciences sociales, entre holisme et individualisme. [. . .] il s'agit de savoir comment s'articulent véritablement les différents étages ou niveaux, du haut vers le bas et du bas vers le haut.
We can also conduct such analyses on groups of countries. The effect of national policies on the behavior of a country's population is obvious and makes this segmentation wholly relevant. For example, Wong and Mason (1985) employed it to study the use of contraceptive methods in a number of developing countries.
Such studies naturally require statistics that clearly distinguish the various groupings mentioned above. We also need information on their size, properties, and characteristics. Specialists generally classify the measures required for multilevel analyses into three broad groups (Lazarsfeld and Menzel, 1969). The first group consists of analytical variables, which we prefer to call aggregate characteristics. For instance, the number of pupils, the proportion of boys, and other variables are aggregated measures for a given class or school. The second group comprises structural variables based on relationships between individuals in a group. For example, the density of friendship ties in a class is a structural measure for the class. The third group consists of the general characteristics of each unit, such as the fact that a school is public or private.
To measure a relationship between an individual fact and personal and group characteristics, one solution is to aggregate the characteristics at different levels and include them in an eventhistory analysis. The analysis, therefore, still centers on the individual level, but the characteristics taken into account can be both specific to the person and specific to the groups of which he or she is a member. This becomes what is called a contextual analysis.
As noted earlier, the measures used are initially individual and are then aggregated in different ways. Often, the aggregation simply consists of the average of an individual characteristic for all members of a group. However, the aggregation may be of another kind, such as a measure of the dispersion of the characteristic in the group or other, more general measures of the group to which the person belongs.
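A contextual event-history model of this kind can be sketched, in generic notation, as follows: for individual $i$ in group $j$,

\[
h_{ij}(t) \;=\; h_0(t)\,\exp\bigl(\beta_1 x_{ij} + \beta_2 \bar{x}_j + \beta_3 z_j\bigr),
\]

where $x_{ij}$ is an individual characteristic, $\bar{x}_j$ its aggregate-for example, its mean-in group $j$, and $z_j$ a general characteristic of the group. The analysis thus remains at the individual level, while group-level terms enter the risk.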
Such an analysis will enable us to eliminate the risk of ecological fallacy, at least in part, for the aggregate characteristic will measure a construction that differs from its equivalent at individual level. It does not act as a substitute-as some authors of cross-sectional analyses believe-but as a characteristic of the subpopulation that will affect its members' behavior. At the same time, we avoid the atomistic fallacy, as we make allowance for the context in which the person lives. We should ask ourselves, however, if the inclusion of aggregate characteristics is entirely sufficient to take the context into account. As we shall see, a truly multilevel analysis is needed.
Before conducting one, let us see how the introduction of aggregate characteristics modifies our earlier analyses of migrations of farmers compared with other occupations in Norway. We now include both the fact that the person is a farmer and the percentage of farmers living in his or her region. A contextual analysis of this kind clearly shows us how to reconcile the contradictory results of the two previous analyses. First, we note that farmers are less likely to migrate than non-farmers. The probability is constant regardless of the percentage of farmers living in the region. Durkheim's hypothesis that social facts-here, migration-exist independently of the individuals who experience them is verified by the subpopulation of farmers, for their probability of migration remains identical whatever their region of origin. By contrast, we see that the fact of living in a region with a high percentage of farmers will increase the probability of migrating for other occupations. Thus Durkheim's hypothesis does not hold for non-farmers, and the contextual approach enables us to show this. One possible explanation, in regions with a high proportion of farmers, is the relative lack of non-farm jobs that leads other occupations to emigrate when seeking new jobs-all the more so given the large number of farmers.
The use of contextual models imposes highly restrictive conditions for their formulation. The models notably assume that individuals in a group behave independently of one another. In practice, the risk exposure of a member of a given group is more likely to depend on the risks encountered by other members of the same group. Overlooking this dependence generally produces biased estimates. Furthermore, we can show (Courgeau, 2007) that, in a contextual model, the relative risks for members of different groups are interlinked by strict relationships. Hence the importance of trying to remove the restrictions.
We could adopt the opposite solution, which consists in treating each group separately and performing an event-history analysis of each. This would totally free the analysis from the above constraints, if there are enough groups to ensure robust results. But this condition is seldom met, either for cost reasons in the case of surveys, or for more basic reasons, such as the size of certain classes when the analysis concerns education sciences. This can generate a very high error for the estimated parameters when the size of some groups is very small or the number of characteristics observed very high. If so, it will become almost impossible to obtain reliable results.
We must therefore seek a compromise between a contextual model that imposes overwhelming constraints and models that impose no constraints but make it almost impossible to produce significant estimates. The solution to this dual problem lies, in our view, in a multilevel model. Multilevel models impose constraints looser than those of a contextual model, but sufficient to yield clear and significant results, even when certain groups are restricted. To achieve this, multilevel models will introduce several levels-for example, in education sciences, one level for the pupil, another for his or her class, and so on (Goldstein, 2003). We can then show that it is possible to estimate parameters at these different levels but in the same model-a model that enables us to take into account the various characteristics, both individual and specific to the chosen levels, as well as random parameters specific to each level. Again, space precludes a detailed mathematical description of the underlying hypotheses and estimation of multilevel models. For their more detailed application to population sciences, we refer the interested reader to the studies by Goldstein-who first proposed an initial linear model in 1986 and later developed these methods for different types of mathematical models (2003)-and Courgeau (2007).
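The difference with the contextual sketch given earlier can be summarized, again in generic notation, by adding a random term specific to each group $j$:

\[
h_{ij}(t) \;=\; h_0(t)\,\exp\bigl(\beta' x_{ij} + \gamma' z_j + u_j\bigr), \qquad u_j \sim N(0, \sigma_u^2),
\]

where the group-level residuals $u_j$ loosen the strict relationships that a purely contextual model imposes on the relative risks of members of different groups, while still borrowing strength across groups when some of them are small.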
For a simpler illustration here, we take some of the results obtainable with the model. We return to the example of migration in Norway, now adding many characteristics (Courgeau, 2007). First, the multilevel model confirms the results of the contextual analysis by showing, in addition, that the introduction at individual level of the proportion of farmers in the region where non-farmers live cancels the random parameter of being a farmer at regional level. It also shows the effect of many other characteristics-both family-related and occupational-at individual level. Some new characteristics influence random parameters at regional level. For instance, having spent more than 12 years in education and being economically active introduce regional differences. This new approach, which we can describe as a multilevel event-history approach, enables us to flesh out the conceptual framework of the event-history approach without upsetting it. The new approach completes the previous one by introducing effects of more complex groups while continuing to study individual behavior.
In the new paradigm, individual behavior depends not only on the person's past history, viewed in all its complexity, but also on external constraints on the individual, whether or not he or she is aware of them. People's behavior may be influenced by their contact circle, composed of members of their more or less extended family, friends, and other work or leisure acquaintances. The living environment and information received from the press and television can also influence a person's future actions. More generally, pressure from the society in which people live can influence their behavior without their being fully aware of it. For example, people living in an environment with heavy unemployment or a severe lack of jobs in their economic sector may be more likely to migrate to a distant area than if they were living in a region with full employment in that sector. Likewise, we can now incorporate effects of broader characteristics into the analysis. For demographic or epidemiological studies, for instance, we can introduce the fact that the city has a hospital.
Such a paradigm allows us to reconcile discordant results obtained with the earlier paradigms, as the study of migration by Norwegian farmers and non-farmers showed. It is immune to both the ecological fallacy inherent in cross-sectional paradigm and the atomistic fallacy inherent in the event-history paradigm. However, it raises new problems, which will surely require solutions involving new measures.
First, what significance should we assign to the different aggregation levels that we can use? Some-such as family, contact network, contact circle, firm, class, and school-have clear meanings and pose no problem. Others-such as municipality, département, and region-will raise a number of problems, for they do not seem linked to a structure of our society but are defined more or less arbitrarily for administrative and geographic reasons. Admittedly, we can find justifications for them, as the administrative link can influence behavior through regulations specific to each level. We could also argue that these levels serve as approximations of other levels for which we have no measures. If so, however, we must try to gain a better understanding of these levels, which we may regard as fuzzy. For instance, if it were possible to define proper boundaries for "employment areas" (bassins d'emploi) or areas of influence around cities (Courgeau, Zones d'influence migratoire des villes : application à la région de Toulouse), their use would, no doubt, yield more satisfactory results than the use of a strictly administrative division.
Second, we have already noted that this paradigm, while incorporating multiple aggregation levels, uses an individual approach to explain behavior by characteristics measured at these levels. Hence the need to complement it with an approach to the behavior specific to each level, and then connect these behaviors together. For instance, isolated actions in a given community may generate awareness of a problem that actually concerns the entire community. This may lead policy-makers to take decisions at a more aggregated level, which can then apply to the entire community. These decisions will naturally affect individual behavior and may lead to new actions to neutralize their perverse effects, and so on.
Third, we believe it is essential to take into account the more detailed social structure of groups defined by criteria that are satisfactory for a multilevel analysis. For example, we previously mentioned the family-in itself, a complex group where each member plays a well-defined role that may differ from the roles of other members. We should take into account the interactions between group members and their changes over time in order to properly incorporate their social structure into our analysis. That is yet another difficult task, requiring the implementation of new measurement methods and new analytical tools.
We can now turn from this general view to consider the two main concepts without which no population science would be possible.
The first is the creation of an abstract fictitious individual, whom we can call a statistical individual as distinct from an observed individual. As Aristotle (330 B.C.E.) noted: "individual cases are so infinitely various that no systematic knowledge of them is possible." Graunt (1662) was the first to introduce the possibility of a population science that would set aside the observed individual and use statistics on a small number of characteristics, from which a statistical individual would be obtained. As Courgeau wrote in 2012: Under this scenario, two observed individuals, with identical characteristics, will certainly have different chances of experiencing a given event, for they will have an infinity of other characteristics that can influence the outcome. By contrast, two statistical individuals, seen as units of a repeated random draw, subjected to the same sampling conditions and possessing the same characteristics, will have the same probability of experiencing the event.
The key assumption allowing the use of probability theory here is that of exchangeability138 (de Finetti, 1937): n trials will be said to be exchangeable if the joint probability distribution is invariant for all permutations of the n units. We shall use it here for the residuals, given the explanatory characteristics measured for these individuals.
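Formally, exchangeability of the $n$ trials $X_1, \ldots, X_n$ means that, for every permutation $\sigma$ of $\{1, \ldots, n\}$,

\[
P\bigl(X_1, \ldots, X_n\bigr) \;=\; P\bigl(X_{\sigma(1)}, \ldots, X_{\sigma(n)}\bigr),
\]

a condition applied here to the residuals of the statistical individuals, conditional on their measured characteristics.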
The second concept is the statistical network, which differs from observed networks. It appeared more recently, for example in Coleman's work (1958). While observed networks may be as diverse as the infinite kinds of ties existing between observed individuals, statistical networks may be more precisely defined by using statistics on ties and choosing criteria to circumscribe them. Here as well, the key assumption allowing the use of probability theory is that, given the explanatory characteristics introduced at each level, the residuals are exchangeable.
Conclusions
The events that mark our life from birth to death are regarded as private, non-repeatable, and unique to each of us. They include the stages in our education, our first job, our partnerships, the birth of our children, our changes of residence, our job changes, our unemployment spells, our retirement, and so on. Other less decisive events can also play an important role in our lives, but they can be even harder to commit to memory. This raises the question of imparting meaning to life stories: what methods can do so, with what advantages and drawbacks?
As in the last chapter, we have shown that the "comprehensive" method developed by philosophical hermeneutics (Dilthey, 1883) can be used. But a newer one, the "explanatory" method of the social sciences, has also been used to make sense of these life histories.
First, the method used to collect life stories must be capable of capturing their essence, as the hermeneutic method requires. That raises a major problem to be resolved: down to what level of detail must the interviewer go, and how can the respondent's memory succeed in recalling all the details? The next chapter will take a closer look at these memory-related issues.
Second, it is only recently in the history of humanity, during the seventeenth century, that the idea of developing a population science emerged-a science with an "explanatory" purpose. Dilthey himself did not think that the "sciences of the spirit" should go without explanation, since he said that a descriptive psychology is also possible (1894). The goal of this explanatory purpose was to set aside all the inexpressible aspects of these facts of our life in order to tackle more precisely defined objects such as mortality, fertility, migration, occupational mobility, and marriage. But these objects do not lend themselves to examination in a single way. Over time, successions of paradigms were devised, each aimed at shedding new light on the objects. These different paradigms required different measures of the facts observed and different measures of the relationships between the facts. That is the path outlined across this chapter. It is, however, still far from complete.
To begin with, we must absolutely avoid believing that each of these paradigms has outlived its usefulness and has been replaced by a new one. Each actually represents a particular viewpoint on a complex reality. But these points of view have given us an ever fuller vision of the facts and the relationships between them. We can say that (Courgeau, 2009, p. 273):
[. . .] each new paradigm comes as a complement to the preceding one for the purpose of treating cases that lie outside of the latter's scope, while partly preserving some of the results obtained with its predecessor.
In truth, each paradigm defines its own objects, and we can apply Agazzi's proposition (1985, p. 51)-formulated for the natural sciences-to the social sciences:
[. . .] scientific progress does not consist in a purely logical relationship between theories, and moreover it is not linear. Yet it exists and may be interpreted as an accumulation of truth, provided we do not forget that every scientific theory is true only about its own specific objects.
The measures associated with each of these theories are also specific to their particular objects, while allowing the necessary cumulativity.
It was almost a century later that the cognitive approach to psychology took up these earlier studies to provide information on autobiographical memory. In particular, the problem of memory failures is critical for the use of autobiographical memory in a number of social sciences.
Other approaches are possible, however. They were developed most notably by neuroscience and psychoanalysis, two sharply contrasting disciplines born at nearly the same time.
Modern neuroscience, founded by John Hughlings Jackson in 1884, is based on the axiom that the brain is purely a sensorimotor machine. It defines a rigorous structure for the brain and the mind (Steinberg, 2009). As Jackson (1884, p. 739) clearly states: I particularly wish to insist that the highest centres-physical basis of mind or consciousness-have this kind of constitution, that they represent innumerable different impressions and movements of all parts of the body, although very indirectly, as certainly as that the lumbar enlargement represents comparatively few of a limited region of the body nearly directly.
Psychoanalysis was founded by Josef Breuer and Sigmund Freud in 1895 with their work Studien über Hysterie, which already points to the role of sexuality in hysteria. That same year, Freud drafted Entwurf einer Psychologie-not published until 1987-in which he presents the theoretical basis of psychoanalysis, grounded in the recent discovery of neurons by Ramon y Cajal in 1888. Despite their differences, both approaches are based on the study of nervous diseases. We shall describe their points of convergence and, at the same time, the reasons for their incompatibility.
We conclude with the replication crisis that has confronted psychology more recently and with the means to resolve it.
Psychology and verification of remembrances
As in the other chapters, our aim is not to give an overview of scientific psychological research on human consciousness, but to discuss more specifically its contribution to the understanding of autobiographical memory.
There are many books with titles such as A history of modern psychology (for example, those by Schultz and Schultz, Saugstad, and Ludden). They tend to focus on the major schools-or rather, assumptions-of scientific psychology since its founding by Wilhelm Wundt in 1873. They typically distinguish between structuralism, functionalism, psychoanalysis, behaviorism, cognitive psychology, and evolutionary psychology. This chapter will place special emphasis on psychoanalysis, with a more detailed discussion of the concept of the unconscious in part 7.2. For now, let us turn to the other approaches. Wundt (1832-1920), the founder of psychological structuralism (not to be confused with the French structuralist school), believed that mental functions such as sensation and perception could only be studied scientifically by introspection. In consequence, the only way to study memory was not through psychological experimentation, but by using methods viewed as non-experimental, such as sociology (Schultz and Schultz, 2011, p. 69).
Despite this, Ebbinghaus, while sharing the basic tenets of Wundt's structuralism, set out to study memory in his 1885 work entitled Über das Gedächtnis (On Memory). Introspection, which he practiced, made him the only subject of his research. This enabled him, by memorizing lists of syllables, to identify certain features of memory now accepted as standard. For example, he showed that the memorized syllables are forgotten quickly in an initial phase (only 44.2% are still recalled an hour later), and that the forgetting rate slows rapidly thereafter (21.1% are still recalled 31 days later). However, as his study never addressed autobiographical memory, we shall not examine his findings in further detail here. His approach was open to a severe objection: since different experimenters can obtain very different results through introspection, how can they assess the mechanisms of their own thought?
Functionalism freed itself of the analysis of psychological processes by studying the distribution of psychological characteristics in a population through physiological research, mental tests, and objective descriptions of behaviors. These researchers, however, took little interest in autobiographical memory. The exception was Galton-one of the movement's initiators-who began to conduct surveys on the subject in 1880.
In Chapter 4, we discussed Galton's eugenicism and critiqued it at length. We now turn to his innovative, experiment-based approach to psychology, and assess the validity of his results.
In 1879, Galton proposed an experimental psychometric approach, particularly for the study of memory, noting (p. 149) that:
[. . .] until the phenomena of any branch of knowledge have been subjected to measurement and number, it cannot assume the status and dignity of a science.
His article describes several experiments conducted-like those of Ebbinghaus-on a single subject: himself.
Galton begins by addressing an important question for further research on memory: are all events in a human life memorized or, on the contrary, are they mostly forgotten? In his own experience, absent constant reminders, these memorizations fade completely. He demonstrates this by trying to recall the dates of memorized events. For example, he lists the associations between ideas that he was able to memorize throughout his earlier life, and certain words chosen to evoke them. He gives the number of these associations. He can thus divide his memories into three periods: childhood, adulthood, and the period comprising very recent events. He was then led to distinguish between three types of memory: sensory memory, essentially visual, but also linked to sounds and smells; histrionic representational memory; and a more abstract memory. As we shall see, the cognitivists successfully revived this approach nearly a century later. While Galton believed that the results would be different for any other subject than himself, he did not attempt to demonstrate this in his article. How, then, can these experiments be pooled to obtain statistical results?
Galton attempted to do so in his second article, "Statistics of mental imagery," published in 1880. He asked 100 adults to describe the visual memory of their breakfast table from the very morning of the experiment. For this purpose, he developed a closed but very detailed questionnaire, in which he considered the "Illumination" of the image, the "Definition" of its different objects, and the "Colouring" of each object. He drew the surprising conclusion that men of science lack visual imagery (p. 303):
To my astonishment, I found that the great majority of the men of science to whom I first applied, protested that mental imagery was unknown to them, and they looked on me as fanciful and fantastic in supposing that the words "mental imagery" really expressed what I believed everybody supposed them to mean. They had no more notion of its true nature than a colour-blind man who has not discerned his defect has of the nature of colour. They had a mental deficiency of which they were unaware and naturally enough supposed those who were normally endowed, were romancing.

This claim was repeated, without proper discussion, in most of the psychology literature from the nineteenth to the twenty-first centuries, although some authors stated that the finding was not as obvious as it seems. Many later researchers noted the importance of their own visual memory (Brewer and Schommer-Aikins, 2006).
To begin with, Galton himself admitted that his sample was not statistically representative: it consisted of friends, and he only recorded the replies by men. He indicated that of these 100 men, 19 were Fellows of the Royal Society, but he did not give the affiliations of the other scientists. Pearson, in his Life, letters and labours of Francis Galton, published the response by Charles Darwin (Galton's half-cousin) to the questionnaire (p. 195), which clearly showed that this outstanding scientist was in full possession of visual memory. For example, concerning the "Definition" of objects, Darwin answered:

Some objects quite defined, a slice of cold beef, some grapes and a pear, the state of my plate when I had finished and a few other objects are as distinct as if I had photos before me.
Darwin also stated that he perfectly recalled faces of students whom he had not seen in 60 years, but that at the time of the survey he could speak to a man for an hour and no longer remember his face one month later.
In the same article, Galton sought to corroborate his results with a larger sample of students (172), which he divided into two groups: group A with the highest grades, group B with the lowest. The result obtained with scientists might lead one to expect that group A would display worse visual memory than group B. On the contrary, Galton concluded the following from his observation (p. 312):
I gather from the foregoing paragraphs that the A and B boys are alike in mental imagery, and that the adult males are not very dissimilar to them [. . .]

In sum, while the sample of 100 adults was in no way representative, the sample of 172 adolescents did not confirm the result obtained for the scientists.
The finding therefore seems biased by Galton's preconceived idea. Brewer and Schommer-Aikins (2006) believe that he may have been strongly influenced by the replies of the first two scientists whom he surveyed (the astronomer Herschel and the biologist Romanes) out of the thirty- or forty-odd scientists in his sample (Burbridge, 1994). Both reported very few images in their questionnaire. This, it is argued, caused a top-down interpretation of the data.
Another explanation of why the scientists could not answer Galton's questions is given by Schwitzgebel (2009) in his book Perplexities of consciousness, more specifically in the chapter on "Galton's other folly" (p. 52):
Consider also Galton's sceptical scientist who finds fallacy in supposing the existence of a "mind's eye" that sees "images". [. . .] Maybe, then, the difference between the scientists' and nonscientists' responses to Galton's questions reflects neither differences in their imagery (as Galton supposes) nor epistemic failure (as I suggest) but only differences in how strictly they interpret the word "see".
While we may never know the exact nature of Galton's error, we can state today that his reasoning was incorrect, for-as other tests confirm-scientists' visual memory is as good as that of nonscientists (Brewer and Schommer-Aikins, 2006).
Beyond questioning Galton's findings, the functionalists addressed the deeper significance of this subjective measure of the liveliness of images: just what is being measured exactly? First, Galton's questionnaire concerns short-term memory, since respondents are asked about an event occurring on the day of the survey. One could extend the questionnaire to images further removed in time. That is what the cognitivists attempted, as we shall see later.
Second, many psychologists of the early twentieth century challenged the validity of the approach. For instance, the American psychologist Thorndike (1907) set out to compare the subjective measurement of the liveliness and accuracy of visual imagery to other, more objective measures of the memory of the shape, size, number, and other characteristics of memorized objects. Following Galton's method, he divided his sample of 200 students into two groups: the good and less good "visualizers." He then administered ten standard memory tests concerning objects or persons encountered in everyday life. For example, he asked them how many pillars there were in front of the Columbia University Library. Admittedly, this approach is open to criticism. For example, some students might never have seen the pillars; others might have seen them but did not consider them in any way significant. Despite these objections, the result of the comparison is clear: there is no correlation between the measurement of the liveliness of visual images and memory-test performance. One year later, the British psychologist Winch (1908) tried to correct these flaws by showing his students specific objects and reached the same conclusion as Thorndike: there was no correlation between the two measures.

Despite these findings, Betts developed a "Questionnaire Upon Mental Imagery" (QMI) in 1909, more detailed than Galton's, with 150 items. He asked respondents to indicate the degree of clarity of the image evoked by a questionnaire item, for example, a red apple. The respondent had to rate the image on a scale of seven, ranging from a perfectly clear image to no image at all. These mental images were classified by origin of sensation: visual images, auditory images, movement images, tactile images, taste images, and smell images. However, the questionnaire was used by hardly anyone other than Betts himself, for the behaviorist approach completely ignored his work. The test was later reused by the cognitivists, and by Sheehan in a shorter version (1967).
In 1913, Watson published an article entitled "Psychology as the Behaviorist views It," also known as "The Behaviorist Manifesto." The text offers an approach to psychology centered exclusively on observable behaviors, which can be described objectively in terms of "stimuli" and "responses." For Watson, memory consists in establishing a habit (1924, p. 237):
By "memory," then, we mean nothing except the fact that when we met a stimulus again after an absence, we do the old habitual thing (say the old words and show the old visceral-emotionalbehavior) that we learned to do when we were in the presence of that stimulus in the first place.
As his approach is essentially descriptive, he does not address the interaction between stimulus and response. He thus totally rejects mental imagery, describing it as (1913, p. 175):
[. . .] a mental luxury (even if it really exists) without any functional significance.
We shall not dwell any further on this behaviorist approach except to note its widespread success, particularly in the United States, during the first half of the twentieth century.
It was the cognitive approach that would focus on memory from 1950. In their 1968 book entitled Mémoire et Intelligence (Memory and Intelligence), Jean Piaget and Bärbel Inhelder clearly stated why their approach differed from that of their predecessors (pp. 22-23):
Classic studies on memory have been surprisingly positivistic, i.e., confined to the inputs into and outputs from the black box [. . .], making the various factors of the stimulus vary with great care.

Of utmost relevance to our discussion is the distinction between episodic memory, the memory of events personally experienced and situated in time and place, and semantic memory. By contrast with episodic memory, semantic (or didactic) memory relies on symbolic and linguistic information. It consists of a set of permanent information not directly related to experienced events.
The volume edited by David Rubin, Autobiographical memory (1986), extended this approach to episodic memory by providing a synthesis of the studies on the topic. Autobiographical memory became a research priority again in the early 1950s, and we present the main findings below. One major theme is the time distribution of autobiographical memories.
In 1974, Crovitz and Schiffman took up the method recommended by Galton in 1879, which we described earlier. They applied it not to themselves (as Galton had), but to a sample of 98 students, whom they asked to associate 20 high-imagery, high-meaningfulness, and high-frequency nouns with a memory that they then had to date. The authors obtained 1,745 dated memories, ranging from one hour to 17 years earlier. They then decided to calculate the number of memories per hour by classifying the 1,745 memories into 60 groups. They obtained a roughly linear curve by plotting the results on a log-log graph, whose slope was estimated at -0.78. In other words, the frequency of memories declines steadily as a function of their age. This remarkable result was later confirmed and refined by many other studies on larger samples. First, in another study of students, but covering a larger number of classes, researchers showed that the curve does not follow the previous slope for childhood memories (before the age of 6 or so) but decreases far more rapidly (for example Rubin, 1982). This would seem to correspond to the start of the learning period for young children. Second, studies on persons of different ages ranging from 12 to 70 show that, for the period of twenty years prior to the survey, the previous slope is still roughly constant (it varies from -0.88 to -1.03). However, for older persons, there is a reminiscence bump, now observed between ages 10 and 30, often with a peak at ages 21-30 (for example Fitzgerald and Lawrence, 1984). This period corresponds to the entry into adulthood, which entails major changes in respondents' lives, such as entry into the workplace, formation of couples and families, and departure from the parental home. Rubin, Wetzler, and Nebes (1986) propose a model that allows for (1) the retention function observed over the 20 most recent years, (2) infantile amnesia, which concerns the first six years of life, and (3) the reminiscence bump, which only appears among respondents aged 40 and over.
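To make the shape of this retention function explicit, the log-log linearity reported above can be written out as a power law. The short formulation below is only an illustrative sketch based on the slope of -0.78 estimated by Crovitz and Schiffman; the constant C simply reflects the total number of memories collected:

\[ \log n(t) = c - 0.78 \log t \quad\Longleftrightarrow\quad n(t) = C\, t^{-0.78}, \]

where n(t) denotes the number of memories recalled per unit of time and t the age of the memory. On this reading, doubling the age of a memory multiplies its recall frequency by 2^{-0.78}, that is by about 0.58, a drop of roughly 40 per cent.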
Once they obtained these results, however, researchers could not go back in time to examine the remembered events, i.e., to verify the validity of their findings. These may be influenced by factors such as the choice of words used to stimulate respondents' memories. Rubin supplies the list of stimulus words used in a survey of 20 Duke University students aged on average 19.6 years, and of 20 community-dwelling subjects, aged on average 71.2 years (Rubin et al., 1986, p. 210):
Of the 20 stimulus words (avenue, baby, board, cat, dawn, coin, cotton, fire, flag, flower, friend, market, mountain, nail, picture, steam, storm, sugar, ticket and window), only the word baby seems to be associated with clear periods of the lifespan for which it might evoke memories.
As we can see, most of the words, except for baby, do not clearly evoke major events in respondents' lives such as marriage or partnership, exams, first job, and departure from their parents' home. Arguably, these events, even if they are old, are far more easily recalled than those evoked by the other words.
One way to overcome these drawbacks is to work on diaries kept by persons and to ask them about their memories of the events recorded. The book edited by Rubin contains a chapter by Marigold Linton, "Ways of searching and the contents of memory," offering an initial exploration of the real content of memory.
Drawing inspiration from the earlier-cited study by Ebbinghaus (1885), Linton spent 12 years (1972-1983) recording not syllables but the daily events of her life (at least two per day) that she then tried to freely recall after different lengths of time. One of her first results directly contradicted what Ebbinghaus had shown: the forgetting of these events followed a totally different curve from that of the forgetting of syllables. After a nearly total recall one year later, she observed a forgetting of approximately 5-6% per year for the subsequent periods. She also showed that the forgetting rate differed substantially with the type of events considered. This prompted her to distinguish events according to a hierarchy of types. The most general level is the mood tone, which leads to a distinction between negative and positive memories. Linton then realized that negative events are more often forgotten than positive ones in the years immediately following, but that many negative events remain in memory later on. The next level is the memory's general theme. By distinguishing between two broad themes, professional/work versus social/self, Linton observed that all the memories linked to the first consistently prevail over those linked to the second. The third level comprises what she labeled extendures (p. 57), which are sets of memories connected to significant life stages such as graduate studies. Their recollection depends on their importance in the remembrance period. The fourth level consists of the specific events occurring over a lifetime. They are fully memorized in the first year, but a growing number are forgotten by the second year. The last two levels, elements and details, are of lesser interest for us and are very soon forgotten.
As this study covers only a single individual, it is open to the same criticism previously directed at introspection, i.e., that it produces different results depending on the experimenter. Some findings, however, have been confirmed by later studies. The forgetting curve over time, for example, was reproduced by Bradburn et al. (1987), Wagenaar (1986), and others. We may conclude that, depending on the type of memory examined, its recollection may be more or less accurate and the time effect is variable. In particular, can events considered in the social sciences be perfectly remembered after a long period, which would justify the use of retrospective surveys to study them? For major events in people's lives (marriages, births and deaths of children, changes of residence, changes of employer), some countries keep registers, making it possible to determine the exact dates of the events. One can then question respondents retrospectively about the dates and compare their answers with the dates supplied by the registers.
For this purpose, we shall rely on the results of a survey conducted in 1988-1989 on a sample of 445 couples by the Demography Department of the Catholic University of Louvain (Belgium) and the French National Institute for Demographic Studies (INED) (an initial survey had been attempted in 1982, but with a sample of only 50 couples). Let us note that few surveys have examined the reliability of data collected in retrospective surveys by comparing them with the data recorded in population registers, for few countries have kept such registers since at least 1930 (the survey covers persons aged 40-59). The survey results were published in two articles (Poulain et al., 1992; Courgeau, 1992), while a later volume gives a more psychological analysis (Auriat, 1996).
The first important point is that the survey was conducted in very special conditions. The husband and wife were first questioned separately about the events of their common life, in order to test for a possible sex effect on memory. They were then questioned together in order to correct the errors made in the first test and to decide on a common reply. The third step was to check the population register in order to compile the dates of the events and assess the reliability of the dates recalled by respondents. Lastly, couples were invited to discuss differences between their answers and the register dates. For example, the register provides the date of the civil marriage, while respondents may have given the date of the religious marriage. Thanks to this final phase, one can avoid recording false errors.
The second notable feature is that the detailed analysis of memories of past demographic events shows that the events are in no way increasingly forgotten over time. First, dating errors by persons under 50 do not differ significantly from those made by respondents aged 50 and over. Second, older events are recalled just as accurately as recent ones. The various types of forgetting curves described above do not apply at all to these demographic events.
Yet dating errors do indeed occur, and they differ by type of event, ranging from minimal for family events to far greater ones for migrations. Moreover, they decrease according to the respondent, from a maximum occurrence for the husband to a minimum for the couple. For example, 93.0% of marriage dates given by husbands are accurate to within a month, rising to 98.9% for wives and 99.6% for couples. By contrast, migration dates are accurate to within a month only 61.8% of the time for husbands, 65.2% for wives, and 67.3% for couples. These differences are therefore related to the type of event remembered.
For all the events studied, however, the distributions are relatively symmetrical, indicating the absence of "telescoping" in the data. This phenomenon-widely discussed among psychologists-consists in perceiving past events as having occurred more recently than is actually the case (Auriat, 1996, p. 24).
As the highest error rate concerns migration, it is legitimate to ask whether this has a strong impact on the event-history analyses (as described in the previous chapter) of migration.
A simple initial analysis concerns the durations of residence in the locations occupied after marriage. We can measure the size of the errors with the instantaneous rates of migration (assumed to be constant for each observation year). Figure 1 compares the logs of the rates, h(t), estimated for men and women over durations t running from 0 to 19 years; Figure 2 compares these logs for couples with those drawn from the register data. We see that the curves are not identical, owing to differences in the dating of information obtained for men, women, couples, and population registers. Although different, however, the curves seem to intertwine perfectly: each one is in turn above, below, or between the others. All four might belong to the same distribution, the fluctuations being due to low numbers.
We can test this assumption by comparing differences between the moves actually observed in each group and the theoretical number of moves we would observe supposing identical behavior in all groups (Courgeau and Lelièvre, 1992, pp. 75-77).
Taking the 19 years of observation simultaneously, we obtain a chi-square statistic with three degrees of freedom equal to 1.045, well below the 5% critical value of 7.81, which does not contradict the assumption of identical behavior.
Let us now perform a more complex analysis involving a set of characteristics but still focused on post-marriage moves. As figures 1 and 2 show, we may assume that the probability of moving h follows a Gompertz law of parameter ρ; the characteristics examined Z have a multiplier effect on the probability, yielding the parametric model:
h(t; Z) = exp(βZ) ρ^t,
where β is a vector of parameters to estimate and t the time elapsed. Taking the probability of moving for a control group as a reference, we measure the effect of a variable by the exponential of the parameter estimated for this variable. Thus, when the parameter has a value of +0.485 for persons housed by their employer (Table 7.1), their probability of moving is 1.62 (= exp(0.485)) times higher than for the control group (tenants).
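To spell out why this multiplier reading holds whatever the duration considered, one can write the ratio of the hazards for two individuals who differ only by the characteristic in question. This is a sketch under the model as written above, not a new estimation:

\[ \frac{h(t;\, Z=1)}{h(t;\, Z=0)} = \frac{\exp(\beta)\, \rho^{t}}{\rho^{t}} = \exp(\beta) = \exp(0.485) \approx 1.62. \]

The Gompertz term ρ^t cancels out, so the relative risk of moving for persons housed by their employer is the same 1.62 at every duration of residence.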
The characteristics examined are: (1) duration between marriage and the start of the residence considered (under a year, 1-4 years, 5-9 years; the control group consists of residences starting ten years or more after marriage) and (2) number of children born before the start of this residence period. We then introduce tenure status, which is available only for the three sets of survey data (this information was not recorded in the population register); the control group here consists mostly of tenants.
Table 7.1 gives the results of this parametric analysis. We can then verify that all the characteristics have a similar effect whatever the data source. The only one that has no effect on residence duration is the number of children at the start of residence. All the other characteristics often have a very strong effect, and the examination of the standard deviations does not allow us to distinguish these effects by source.
Other examples discussed in this article (Courgeau, 1992) lead to the same results and to the following conclusion (p. 109):
This result is essential for all analysis of retrospective data. However, for maximum survey reliability, it is useful to collect the information from the wife and if possible with her husband present.
The cognitive approach has also allowed the study of memories of a partly autobiographical nature, which raise various problems.
Concerning visual memory, for example, we have already examined Galton's inconclusive results (1880). After having been completely denied by behaviorists, it eventually attracted renewed interest from cognitivists. They sought, in particular, to identify the relationship between a subjective measure of visual imagery and its objective measure by means of spatial capacity tests, and to analyze that relationship more fully.
In 1967, Sheehan developed a shorter version of Betts' questionnaire (1909), discussed earlier. In 1973, Marks proposed the Vividness of Visual Imagery Questionnaire (VVIQ), adopted by many cognitive psychologists and followed by many other similar tests such as the VVIQ-2, the Object-Spatial Imagery Questionnaire (OSIQ; Blajenkova et al., 2006), and the Vividness of Object and Spatial Imagery questionnaire (VOSI; Blazhenkova [formerly Blajenkova], 2016). Instead of the items listed by Betts, Marks used color photographs to stimulate visual memory. Psychologists have engaged in extensive discussions on the usefulness and validity of these questionnaires.
To begin with, many studies have tried to determine whether the differences in subjective visual imagery observed for a specific individual were strongly associated with other performances in objective visual perception. What follows is a simplified presentation of these discussions, which are far from settled.
The initial studies on Sheehan's questionnaire (1967) led Sheehan and Neisser (1970) to show that the results obtained on the mental imagery of 32 subjects were barely or not at all correlated with their memory of geometrical drawings. Marks (1973) noted that the mental imagery test used for this study covered seven forms of sensation, as we indicated earlier. He proposed a test more specifically focused on visual imagery, the VVIQ. After conducting it on 74 students, he asked them about certain details of the images they were shown. The results showed that the good "visualizers" scored higher than the less able ones, and significantly so.
Many other tests, however, disproved the existence of such a connection: Berger and Gaunitz (1977) repeated Marks' test but showed that the good "visualizers" did not outperform the less able ones. Richardson (1978, 1979) showed the lack of correlation between mental imagery and memory performance. Ernest found no relationship between a clear visualization of mental images and word recognition, both visual and auditory. Even Paivio (1986), despite his strong defense of the importance of visual imagery, concluded (p. 17) that:
[. . .] self-report measures of imagery tend to be uncorrelated with objective performance tests.

Chara showed that the VVIQ scores are not correlated with any of the "memory tasks" performed later. More recently, Þórudóttir (2020) found no relationship between the results of visual imagery tests and memory accuracy tests. Only McKelvie's 1995 quantitative review attempted to assess the contribution of the VVIQ test more positively. However, he found the test to be just a minor component for certain "criterion tasks" and noted that only further research would allow a more definitive assessment. As his bibliography includes nearly 250 articles, his conclusion may seem excessive!

Given such ambiguity about mental imagery tests, it is important to examine in greater detail the deeper psychological theories on which they can be based. These theories, developed in the 1970s, fall into two broad and opposing categories: propositional theories and imagery theories. While the theories within each category exhibit some differences, their principles are sufficiently clear for our purposes. A detailed presentation will therefore not be useful here. What is important is to see how they justify the use of subjective tests-or not.
Propositional theories were introduced in 1973 in Pylyshyn's article on "What the mind's eye tells the mind's brain: a critique of mental imagery." The author set out to answer the question: what is stored in visual memory? Contemporary psychologists believed that there were only two forms of mental representation: words and images. Pylyshyn refuted this notion and sought to show that, beyond this dual approach, memory is reducible to a single propositional structure. Accordingly, the image's near-perceptive characteristics are merely epiphenomena. Subjective tests of visual vividness thus clearly have no impact on memory.
Imagery theories were introduced by Paivio in 1971 and further developed by Kosslyn in his 1980 work, Image and mind. As noted earlier, however, Paivio took a critical stance regarding the correlation between mental imagery and memory performance. Imagery theories rebutted propositional theories point by point, arguing that imagery is not a flawed concept but, on the contrary, a valid concept in psychology. Paivio and Kosslyn proposed a theory according to which our intellectual activities involve two modes of symbolic representation-one visual, the other verbal. Kosslyn, Ball, and Reiser (1978) describe several experiments that, in their view, confirm the existence of visual images in our mental representations. In one experiment, participants are shown a map of an island with various objects. They are asked to memorize the map accurately by copying the positions of the objects on the map, which is then removed from their sight. They then hear the name of one object and are asked to visualize it on their memorized map and to concentrate their vision on the object. After five seconds, they hear the name of another object. They are then asked to shift their gaze to this new object and to press a button when they reach it. The time elapsed to reach it is very nearly identical to the time they would have spent in front of the real map. According to the experiment's designers, this result shows that the images are memorized in a nearly pictorial way. These findings were, however, disputed by many authors in the other camp. Pylyshyn (1981, p. 48) believed that Kosslyn and his colleagues were encouraging participants to act as if they were, in fact, viewing the real map and were then estimating the distance between two points:
The one empirical hypothesis is just this: When people imagine a scene or an event, what goes on in their minds is in many ways similar to what goes on when they observe the corresponding event actually happening.
Pylyshyn thus rejected Kosslyn's evidence of the existence of mental images.
The imagery approach gained strength, however, thanks to neuroimaging methods, introduced in the 1990s. In 2015, Pearson and Kosslyn (p. 10090) felt confident in proclaiming the end of the imagery debate after an experiment conducted that same year (Naselaris et al., 2015), using "a sensory multifeature-based encoding model" on three subjects. After fitting their model to perception data, Naselaris et al. showed that the same model could successfully predict the images recalled by subjects, in the same brain areas.
But is that truly certain? Also in 2015, Zeman et al. clearly identified a group of persons who claimed to have no visual imagery-an absence the authors named "aphantasia." Some respondents, however, reported an involuntary vision of a mental image. In 2018, Keogh and Pearson showed that persons suffering from "aphantasia" are incapable of activating their visual cortex owing to the lack of a retroactive connection from the frontal cortex. In 2020, Thorudottir et al. described the case of an architect with normal visual perception but altered mental imagery. This disproved the hypothesis that visual perception and mental imagery are governed by the same mechanism.
We may conclude that the problem raised by the subjective measurement of visual memory is still far from having been solved in a fully satisfactory manner. In our view, a subjective questionnaire-such as those of Galton, Betts, Sheehan, Marks, and others-will never allow a clear measurement of visual memory, for it is too complex to be captured by such a simple and reductionist medium. This straightforward observation effectively explains the difficulties encountered in the use of questionnaires. It shows how important it is for psychology to develop totally different concepts of subjective memory.
Next, we need to examine the cognitivist approach to collective memory. Just as Piaget pioneered the cognitivist approach to individual memory, so did Halbwachs pioneer it for collective memory. While he had already blazed the trail in his earlier studies, his key work on the subject was La mémoire collective (Collective memory, 1950). Written during World War II, it was published posthumously, as Halbwachs had been deported to Buchenwald, where he died in 1945.
The introduction of collective memory, necessarily linked to individual memory, is equivalent to the shift from the event-history approach to the multilevel approach in the social sciences, discussed in the previous chapter. As Halbwachs clearly stated (1950, pp. 23-24):
Besides, if collective memory draws its strength and duration from being supported by a set of persons, it is, however, individuals who remember, as members of the group.

While memory is an activity of individuals who are isolated from one another, these same individuals, as members of social groups, share with the other members a set of cultural tools that make their memory collective as well (Roediger and Abel, 2015).
Social scientists soon incorporated this theme into their work (Halbwachs being a disciple of Durkheim), but psychologists-the focus of our attention here-responded more slowly. Social scientists were mainly interested in the consequences of collective memory on numerous social and cultural phenomena, but neglected the ways in which these processes are formed.
Cognitivist psychologists, by contrast, sought to transcend social phenomena in order to identify the mechanisms that explain the formation and persistence of a collective memory (Hirst and Manier, 2008). Despite this desired objective, most methods used so far still belong to the standard field of cognitivist approaches to individual memory. As Hirst et al. (2018, p. 449) note, after presenting and discussing these approaches:
To be sure, a generalized theory of the psychology of collective memory is yet to be proposed, but the different approaches discussed here suggest that the field is rapidly moving forward.
However, one of the paths explored in their article-the bottom-up approach to the formation and preservation of a collective memory-seems sufficiently novel to us to deserve a closer look. Several experiments on different numbers of subjects and networks (Coman et al., 2016; Momennejad et al., 2019; Vlasceanu et al., 2021) have yielded promising results. These models, based on networks of relationships and conversations between individuals, try to show the conditions in which a collective memory can be formed. They introduce various means of communication between members of a single network or different networks to identify those that produce a collective memory rapidly or, on the contrary, those that prevent or delay its formation. This psychological research based on relationship networks can also be tied to multilevel research used in epidemiology, demography, sociology, and other disciplines (see previous chapter), but it is in its early stages and would require greater formalization. For the moment, it restricts experimental situations to a single network for each individual, whereas research in the other social sciences shows that the same person is linked to many networks of different kinds and with different goals.
Despite these many problems to solve in the various areas we have explored, we can say that cognitive psychology is still developing. It has the characteristics of a school of thought that has now come into its own, with its journals, laboratories, and international conferences-but also its convictions and assumptions.
Lastly, a word about an approach that has emerged more recently: evolutionary psychology. Based on Darwinian theory, it aims to show that psychological processes are linked to evolution and determined genetically. In studying the evolution of human memory, it has come up against many difficulties, as Nairne (2010, p. 28) spells out clearly:
As noted, there are no fossilized memory records, the heritability of cognitive processes remains largely unknown, and we can only speculate about the selection pressures that operated in ancestral environments.
While few researchers reject the notion that memory has evolved over the ages (Nairne and Pandeirada), the empirical bases of such a theory seem too weak to ensure its success. We shall therefore not discuss evolutionary psychology any further, as it contributes little to the study of autobiographical memory.
From the Freudian unconscious to the neurosciences
As noted at the start of §7.1, psychoanalysis diverged from the other psychological approaches by positing the existence of the unconscious. In this section, we therefore examine it in greater detail by comparing it with the view of the unconscious expressed in today's neurosciences.
Psychoanalysis was developed by Sigmund Freud (1856-1939) outside the current of the functionalist school, which viewed it as an outright heresy. Indeed, psychoanalysis regarded memory not only as a conscious function, but-primarily-as an unconscious one.
In 1895, Freud drafted a Project for a scientific psychology (Entwurf einer Psychologie), which remained unpublished until 1950, when it appeared as a volume edited by Marie Bonaparte, Anna Freud, and Ernst Kris. The Project presents a hypothetical theory of psychology based on the interrelationships between three types of neurons. As with the other schools, we shall confine our discussion of his approach to the aspects regarding memory.
The late nineteenth century was marked by the first images of neurons, most notably by Ramón y Cajal, who obtained them in 1888 and presented them before the Royal Society of London in 1894. His discovery proved that each nerve cell is a separate entity. For Freud (Project,p. 299), this offered the possibility of developing an explanation for memory:
A main characteristic of nervous tissue is memory: that is quite generally, a capacity for being permanently affected by single occurrences-which offers such a striking contrast to the behaviour of a material that permits the passage of a wave-movement and thereafter returns to its former condition. A psychological theory deserving any consideration must furnish an explanation of "memory."

The task was to explain these differences between the perceptive system, which receives energy and transmits it to memory, and memory itself, which stores the energy. For this purpose, Freud assumed that memory neurons (ψ) possess contact barriers, whereas perception neurons (φ) do not: they merely transmit excitation to the memory neurons and return to their prior state, ready to function again. This transmission takes place through "facilitation"-the passage of an excitation from one neuron to another-which causes a lasting alteration in the contact barriers of the ψ neurons. Denoting this state of the contact barriers as the degree of facilitation, Freud asserts that "memory is represented by the facilitations existing between the ψ neurones" (p. 300). But the degree of facilitation is not always identical, and will depend on the intensity of the impression of an event.
To explain consciousness, Freud introduces a third type of neurons, called ω neurons. We shall not discuss the content of consciousness, which he regards as radically distinct from memory, the focus of our interest here. For Freud, memory can be either conscious or unconscious. The important point for our purposes is that the emerged-i.e., conscious-part of memory is what the other schools of psychology either study (Psychological structuralism, Functionalism, or Cognitivism) or reject from their fields for being indescribable in empirical terms (Behaviorism, Evolutionary psychology). Freud, instead, concentrated on the immersed part-and therefore, in his view, the unconscious part-of memory, in order to try to make it conscious through his psychoanalytical work. As we shall see, this immersed part of memory is in fact very short-lived and not outside of time, as Freud believed.
Yet Freud was forced to admit that his distinction between neurons rested on no known evidence, and that the question of the nature of facilitation remained unresolved.
In his letter to Fliess of December 6, 1896 (Freud, 1887-1904, p. 207), Freud argued that several unconscious mnemonic recordings existed:
As you know, I am working on the assumption that our psychic mechanism has come into being by a process of stratification: the material present in the form of memory traces being subjected from time to time to a rearrangement in accordance with fresh circumstances-to a retranscription. Thus what is essentially new about my theory is the thesis that memory is present not once but several times over, that it is laid down in various kinds of indications [. . .]

Again, he did not know how many recordings there were, but believed that there were at least three and probably more. Energy is first captured by perception, which keeps no trace of what has occurred; it then flows to the unconscious (which is inaccessible to consciousness), and onward to preconsciousness, which can become conscious in certain conditions. In the specific case of repression, the communication between the unconscious and preconsciousness does not take place, preventing contents from accessing consciousness. But this memory, inaccessible to consciousness, remains present in unconscious memory and continues to act without being recognized. Ideally, the psychoanalyst should lead the patient to recognize this action and remove the repression by making him or her become conscious of the "forgotten" memory.
It should be noted, however, that after outlining preconsciousness in the Project as the seat of that which can be recalled, Freud never described it in detail in his later work.
In fact, Freud never published his theory, no doubt because it lacked a sufficiently robust base and perhaps even because it led to an impasse. The reason is that the classic associationism with which it remained linked had been criticized in the early twentieth century-most notably by Bergson-and was totally discarded in the second half of the twentieth century with the advent of cognitivism. Yet the associationist theory had been developed precisely to explain phenomena specific to memory and to human thought. Its central tenet is that mental life consists of associative chains of elementary facts of consciousness. On these grounds, it seeks to explain how our memory and ideas are produced.
Associationism, which predominated in Freud's day, was championed by many philosophers and psychologists (such as John Stuart Mill, 1843; Ribot, 1870; and Taine, 1870). Ribot (1870, p. 242) described the school's commanding position in psychology:
When we see Messrs. Stuart Mill, Herbert Spencer, and Bain in England; the physiologists M. Luys and M. Vulpian in France; in Germany, before them, Herbart and Miller, reduce all our psychological acts to various modes of association between our ideas, feelings, sensations, and desires, we cannot help believing [. . .]
In the same period, Ricoeur (1965b), in De l'interprétation. Essai sur Freud, attempted to incorporate psychoanalysis into hermeneutics. He tried to show that Freud does not seek to "explain" the genesis of the unconscious, but to "comprehend" it in the hermeneutic sense. However, Freud's analytical approach-discussed in detail below-went against this reductionist interpretation of psychoanalysis as a form of hermeneutics. This was clearly shown by Mi-Kyung Yi (2000, p. 260), in Herméneutique et psychanalyse, si proches … si étrangères, who objected to the reduction:
Let us begin by looking at what psychoanalysis is reduced to: a theoretical system of interpretative codes. Besides the fact that this reduction of psychoanalysis to a theory contradicts the Freudian priority assigned to the analytical method, the latter method is reduced to the application of the comprehension schema. Another, no less important consequence is that the analytical situation becomes a dialogue situation, and the analytical relationship becomes a comprehension relationship.

Even if psychoanalysis finds it difficult to systematize itself as a form of scientific knowledge, this can in no way justify a possible attempt by hermeneutics to capture its object.
Freud's truly original contribution is a new approach that enables him to explore what he calls the unconscious through circuitous paths. For him, the past is fully preserved in the unconscious, and consciousness simply throws light on the memories stored in the unconscious. To make these unconscious memories rise to the surface, he asks patients to tell him exactly all that comes to their minds for an hour, without any intervention on his part. Session after session, patients learn to stop concealing their intimate thoughts, to tell him their dreams, and eventually they take pleasure in talking unrestrictedly as their spontaneous thoughts lead them. Unwittingly, patients will jump from one recollection to another, and recent memories will be interspersed ever more often with older memories, not only of life with their parents but also of old dreams that intermingle with the memory of real events (for more details on this approach see Piaget, 1965, pp. 193-196). The psychoanalyst will then be able to unravel the strands of the unconscious-which will hence become visible-and enable the subject who came for therapy to gradually become aware of the facts leading to his or her current situation. This approach has been heavily criticized by most psychologists in the other schools except for the cognitivists, who, like Piaget, finally accepted the validity of the study of processes regarded as unconscious. Let us outline the most significant of these criticisms.
First, Freud's research was based on a very small number of cases analyzed, which are in no way representative of the population as a whole. There were only a dozen cases, including himself, and most of his patients were young, single, and highly educated. His results thus seem hard to generalize. Moreover, some cases ended up never being published, prompting Sulloway (1992, p. 160) to state:

Some of the cases present such dubious evidence in favor of psychoanalytic theory that one may seriously wonder why Freud even bothered to publish them. Two of the cases were incomplete and the therapy ineffective. A third case was not actually treated by Freud.
Second, there was no control whatsoever of the conditions in which he collected his data. He kept no verbatim record of what his patients told him, but worked on his notes made several hours after the sessions, and he did not archive them. He was therefore at liberty to reinterpret the words, spurred by the desire to find proofs of his theory.
Third, a good number of other theoreticians of psychoanalysis disagreed with many of his hypotheses, most notably concerning the predominant role of biological characteristics-in particular, sex-as the fundamental determinants of psychoanalytical behaviors.
We shall now briefly examine some of these theoreticians before moving on to the neurosciences, which have totally redefined the notion of the unconscious.
While Freud emphasized sexuality as the main driver of psychic problems, the psychoanalysts who extended his approach explored other avenues while preserving the basic elements of his approach.
Adler (1870-1937), for example, broke with Freud in 1911. He believed that human behavior was largely determined by societal rather than sexual forces. Accordingly, he emphasized the conservation instinct and the will to power. For this purpose, he proposed the concept of social interest, defined as an innate potential to cooperate with others in order to fulfill one's own destiny. A person's life style, Adler argued, is set by the age of four or five and is hard to change later. He speculated that birth rank has an enormous impact on a person's future. Unfortunately, many later results undermined many of his hypotheses. In any event, most of the objections of a more methodological nature to the Freudian approach also apply to the Adlerian approach.
Similarly, Jung (1875-1961)-whom Freud saw as his successor-broke his friendship with Freud in 1914 to establish what he termed analytical psychology, which contradicted many of his predecessor's theses. Jung criticized Freud for excessively restricting the unconscious by confining it to patients' past experiences repressed by the patients themselves. Jung replaced this individual unconscious with a collective unconscious, characterized by elements common to all individuals. He undertook a vast survey on the generality of these symbols, inherent in the myths, rituals, and sacred representations of primitive societies, both Western and Eastern. This partly ties in with our discussion of epic in Chapter 6. However, these "hereditary" symbols, which Jung assumed to be present by childhood, are more easily explained by the evolution of infant mentality than by the action of a mysterious heredity (Piaget, 1965, p. 211). In this light, Jung's collective unconscious seems quite useless. Lastly, most of the methodological objections to Freudian methods are valid for the Jungian approach as well.
The evolution of Freudianism was paralleled by the development of the neurosciences. Initially based on the dynamic version of associationism introduced by John Hughlings Jackson (1835-1911), they long remained under the influence of his theories (Jackson, 1884). Before Cajal's discovery of neurons in 1888, Jackson had already constructed a theory that assumed the existence of fibers (axons) serving as mediators between different parts of the nervous system. The theory focused on the study of reflex movements and proposed a hierarchy of nervous centers (p. 649):
The lowest centres are the most simple and most organised centres; each represents some limited region of the body indirectly, but yet most nearly directly; they are representative. The middle motor centres [. . .] are more complex and less organised, and represent wider regions of the body doubly indirectly; they are re-representative. [. . .] The highest motor centres are the most complex and least organised centres, and represent widest regions (movements of all parts of the body), triply indirectly; they are re-re-representative.
Nervous disease was accordingly seen as a regression toward a more archaic form of the nervous system. Conscious mental life therefore took place at the top level, while unconscious life was situated at the lower levels.
This dynamic associationism, while different from Freud's associationist schema, was no less objectionable for the same reasons noted earlier. In the 1980s, however, the neurosciences turned to the study of information processing and, for this purpose, had to engage in the observation of the processes involved.
They therefore sought to observe with precision the differences between conscious and unconscious thought, which Freud had merely stated to be self-evident without ever trying to observe them before theorizing them. The most salient question was the location of memory. Freud had regarded memory as independent of consciousness, believing it could be located in either the conscious or the unconscious part of the mind.
The neurosciences had to abandon Jackson's model, for it no longer matched the detailed observations made from the 1980s on. First, neuroscientists tried to define unconscious phenomena by observing individual behaviors that offered evidence of cognitive processes of which the individual is not truly aware.
The study of a neurological dissociation between the perception and grasping of objects (Goodale et al., 1991) made it possible to show the existence of unconscious mental processes that are not perceived by the subject but can be located in the higher stages of mental life. Indeed, the authors show (p. 155):
[. . .] that a person with brain damage may retain the ability to calibrate normal aiming and prehension movements with respect to the orientation and dimension of objects, despite a profound inability to report, either verbally or manually, these same visual properties.
Many other experiments showed that these mental processes are fleeting and disappear from our unconscious in just a few tenths of a second. However, the authors held on to the view that some sectors of the cerebral cortex, which they call the ventral pathway, handle conscious functions while others-the dorsal pathway-handle the unconscious ones (Naccache, 2006, p. 83).
This framework was superseded by the development of fMRI (functional Magnetic Resonance Imagery), which made it possible to show the brain "in action." In 2000, Rees et al. used fMRI to show that the ventral pathway can be activated unconsciously. This finding-confirmed by other studies-showed that no location in the human brain is specifically devoted to conscious or unconscious functions.
In a more elaborate synthesis of these approaches, [START_REF] Naccache | Le nouvel inconscient. Freud, le Christophe Colomb des neurosciences[END_REF] compared the Freudian unconscious and the neuroscientific unconscious.
The first reason to reject the Freudian conceptions concerns repression, which Freud saw as an unconscious defense mechanism. It stands in total contradiction to the most relevant neuroscientific experimental data and theoretical models. Anderson and Green (2001), for example, show that repression is a fully conscious and voluntary process of elimination of unwanted memories. However, they do raise the question of whether the suppression is total or if the memory can return much later. Other authors, such as [START_REF] Smith | Forgetting and recovering the unforgettable[END_REF], observe that these memories are not completely erased but leave a persistent trace in the brain. A more recent study by [START_REF] Wang | Reconsidering unconscious persistence: Suppressing unwanted memories reduces their indirect expression in later thoughts[END_REF] shows the need to reconsider the influence of repression on patients' mental health. They write (p. 90):

Does suppressing intrusive thoughts and memories, even successful, leave remnants of experience in implicit memory that discretely and perniciously influence mental life outside our awareness? To our surprise, and contrary to our own previous conjectures about the lingering influences of suppressed traces [. . .], the current study and others reported recently [. . .] suggest that this view is incorrect. The present research indicates that episodic retrieval suppression inhibits the semantic content underlying episodic traces. We found diminished accessibility of suppressed content measured on a task that participants view as correlated to the original suppression context; that shares no cues with the study episode; that prompts little awareness with the study episode memory; and that clearly could benefit from prior exposure.
Their study manifestly challenges the Freudian theory that ideas suppressed by patients can be made to re-emerge.
The second argument against Freudianism is that all neuroscience experiments support the assertion that the specific characteristic of the unconscious is its extreme evanescence (Naccache, 2006, p. 355):
As exponential decrease and immortality do not go well together, it is clear that the issue of the life expectancy of our unconscious mental representations is a second reason for the definitive abandonment by neuroscientists of the Freudian concept of the unconscious.

How, then, can we reconcile this evanescence with the importance that Freud attaches to the unconscious memories of early childhood? As Piaget showed in detail (1945, p. 199):
The memory of a child between two and three years old is still a blend of made-up stories and exact but chaotic reconstructions, and organized memory develops only with the progress of full intelligence.

In sum, there is no such thing as early childhood memory, for the child does not yet have an evocation memory capable of organizing these recollections.
For a detailed discussion of all the reasons for the incompatibility between the Freudian unconscious and the neuroscientific unconscious, we refer the interested reader to [START_REF] Naccache | Le nouvel inconscient. Freud, le Christophe Colomb des neurosciences[END_REF].
Given the multitude of small brain circuits that continuously produce unconscious mental representations, Naccache (2006, p. 272) defines consciousness as a neural network:

[. . .] there arguably exists a normal network totally different from these other circuits, whose content corresponds at each instant to the mental representation that we experience consciously. We shall call this unique neural network the "conscious global workspace."

Let us note that this is not an observation but a hypothesis, as indicated by the use of the conditional tense in the French original [rendered here by "arguably"]. An electrophysiological signature of conscious awareness would be needed here [START_REF] Sergent | Les bases neurologiques de la conscience: une revue des avancées récentes[END_REF]. The property of remote cerebral areas to communicate with one another could provide such a signature, but it has not yet been demonstrated.
Many neuroscientists have already recognized this definition of consciousness under the term Global Neuronal Workspace [START_REF] Dehaene | Experimental and theoretical approaches to conscious processing[END_REF]. If we accept it, then Freud, who thought he was analyzing the unconscious, actually revealed the modes of functioning of our consciousness (Naccache, 2006, p. 403):

I credit Freud with having invented a method of treatment-the analytical cure-in which the equipment used for the treatment relies exclusively on the manipulation of the conscious mental attitudes of the patient and the therapist, namely, psychoanalysis. This recognition of the literally vital role of conscious beliefs in the process of healing certain impairments of the mind is revolutionary.

This renewed appreciation for the technique described in detail above makes up for the partisan squabbles between Freud, Adler, Jung, and others, for all ultimately used the same psychoanalytical approach, albeit with many variants. However, neuroscience developments continue at a rapid pace. In particular, recent research has revealed the role of glial cells (or glia), which outnumber neurons. The study of their action on memory and many other behaviors is expanding and far from over [START_REF] Hemonnot-Girard | De nouvelles techniques pour dévoiler le rôle des cellules gliales du cerveau[END_REF]. Similarly, myelin-a form of electrical insulation surrounding neurons-has been shown to play a part in consolidating memory [START_REF] Steadman | Disruption of oligodendrogenesis impairs memory consolidation in adult mice[END_REF]. It is still too early to say whether these discoveries will revolutionize the neurosciences of memory.
Conclusions
We have deliberately refrained from devoting a chapter to parapsychology. The phenomena that it studies-such as telepathy, mesmerism, hypnotism, clairvoyance, apparitions and haunted places-are indeed too specific for our general approach to human life. Despite its intention, from the outset, to be scientific-as claimed by the Society for Psychical Research founded in 1882-it has struggled to establish itself as such.
Yet it is the examination of its methods that led to a more general challenge to the methods used by psychologists of all the schools reviewed here. Many authors had questioned the validity of the statistical results obtained by psychologists, but it was Bem's article (2011) on paranormal phenomena that ignited the controversy. Using psychological methods, Bem showed that eight of the nine experiments on paranormal phenomena proved their existence. In order to avoid protests by many readers, the editors of the Journal of Personality and Social Psychology, which had published the article, noted that the studies had been conducted in
keeping with standardized scientific practices in the field of experimental psychology, and that it would have seemed inappropriate to apply other practices to parapsychological studies [START_REF] Judd | Editorial comment[END_REF]. This led to a number of replications of the study in order to verify its results (for example [START_REF] Ritchie | Failing the future: three unsuccessful attempts to replicate Bem's 'retroactive facilitation of recall' effect[END_REF], who described three fruitless attempts to reproduce the study).
The episode also led to a challenging of acceptance procedures for more general articles on psychology, as replications of studies in the field are relatively uncommon. An analysis of publications in 100 psychology journals between 1900 and 2012 showed that about 1.6% of articles used the term "replication," and a more detailed analysis of 500 articles using the term showed that only 68% actually performed a replication, making a total replication rate of 1.07% (Makel et al., 2012, p. 537). This Replication Crisis, as it came to be known, peaked in 2015 with the publication of the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015, p. 943) in the prestigious journal Science. After replicating 100 experimental studies in psychology, the project found that:

Ninety seven percent of original studies had significant results (P < .05). Thirty six percent of replications had significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original results; and if no bias in original results are assumed, combining original and replication results left 68% with statistically significant effects.
The authors nevertheless concluded that these figures suggested there was room for improving reproducibility in psychology.
This cold shower triggered a reply in the same journal entitled "Comment on 'Estimating the reproducibility of psychological science'" from renowned psychologists [START_REF] Gilbert | Comments on "Estimating the reproducibility of psychological science[END_REF]. They pointed out that the populations to which replications are applied are very different from the original populations. For instance, an original study measuring the attitude of Americans to African-Americans was replicated on Italians, who share none of the stereotypes of Americans. This shows that the effects observed are conditioned by a large number of "confounders" (potential confounding factors), which need to be factored into comparisons of replication results [START_REF] Peters | Why most experiments in psychology failed: sample sizes for randomization to generate equivalent groups as a partial solution to the replication crisis[END_REF]. As a result, the samples needed to generate equivalent groups are far larger than those specified in the Open Science Collaboration study [START_REF] Peters | Knowing how effective an intervention, treatment, or manipulation is and increasing replication rates: accuracy in parameters estimation as a partial solution to the replication crisis[END_REF].
The statisticians Hung and Fithian (2020, p. 1084) [START_REF] Hung | Statistical methods for replicability assessment[END_REF] have also addressed the issue and reached similar conclusions. Moreover, they show that-because of biases due to the selection of significant effects-the Open Science Collaboration study data do not back up the findings set out in the previous quotation:
Our analyses point to several conclusions regarding effect shifts: First, that there are a few studies where we can be confident the effect in the replication study was significantly different than in the original study; second, that in aggregate, when effects do shift, they tend to decline (shift toward zero) in replications rather than increase; and third, that there is insufficient evidence to conclude that the vast majority of experimental effects simply evaporated upon replication.
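The statistical point about effects shrinking on replication can be illustrated with a minimal simulation (a sketch under our own assumptions, not the procedure used by Hung and Fithian): when only the estimates that cross a significance threshold are retained, the retained effect sizes are inflated, and unbiased replications of those same studies drift back toward the true value.

import numpy as np

rng = np.random.default_rng(0)
true_effect, sd, n = 0.2, 1.0, 20        # a small true effect studied with small samples
n_studies = 10_000
se = sd / np.sqrt(n)                     # standard error of each estimate

# Original studies: keep only the estimates that are "significant" and positive.
original = rng.normal(true_effect, se, n_studies)
published = original[original / se > 1.96]

# Unbiased replications of the published studies: same design, no selection.
replication = rng.normal(true_effect, se, published.size)

print(f"true effect:             {true_effect:.2f}")
print(f"mean published effect:   {published.mean():.2f}")    # inflated by selection
print(f"mean replication effect: {replication.mean():.2f}")  # shrinks back toward 0.2

Run as is, the published estimates average well above the true effect, while the replications recover it; the numbers themselves are illustrative only.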
This crisis can now be said to be over, with a reform of the statistical methods used in psychology. Moreover, the critique of studies performed on small samples-often just a few dozen persons-has led to a major revision of statistical methods used in scientific publications. For example, the use of the term "statistical significance", formerly recommended in most scientific journals (particularly in psychology), is now banned there. An entire issue of The American Statistician shows the reasons for this move, summarized in the editorial by [START_REF] Wasserstein | Moving to a world beyond "p < 0.05[END_REF]. Initially regarded as a tool to show that a result could warrant further scrutiny, "statistical significance" had become what the authors call a "tyrant," i.e. a prerequisite for publishing in a reputable journal. As the editors of The American Statistician make clear, this distinction is essential for updating publication guidelines in many disciplines, not only psychology.
Lastly, the once commonly accepted notion of objective probability has been increasingly challenged, and the tendency is now to replace it with that of Bayesian epistemic probability. The reader interested in the reasons for the shift will find a fuller discussion of them in our book on Probability and social science (2012).
Psychology is also facing a Theory Crisis, perhaps more fundamental than the Replication Crisis. A number of psychologists have called attention to the fact that its theoretical foundations are shaky ([START_REF] Klein | What can recent replication failures tell us about the theoretical commitments of psychology[END_REF]; [START_REF] Fiedler | What constitute strong psychological science? The (neglected) role of diagnosticity and a priori theorizing[END_REF]; [START_REF] Muthukrishna | A problem in theory[END_REF]; [START_REF] Rooij | Theory before the test. How to build highverisimilitude explanatory theories in psychological science[END_REF]; [START_REF] Eronen | The theory crisis in psychology: how to move forward[END_REF]; etc.).
In conclusion, while psychology has encountered a range of problems, their resolution has entailed many challenges to existing methods and an ever more systematic evolution toward a more scientific approach. How can we define this scientificity more accurately? That is the question we set out to answer in the next chapter.
Chapter 9 Mechanisms, systems, autonomy, hermeneutics, and understanding human life
As we have seen throughout this work, many if not all social sciences adopt different approaches to human life. None, however, genuinely tries to consider it in its full complexity. Each addresses only some of its aspects. Human life, therefore, appears to loom larger than all the social sciences while constituting one of their key elements.
At the end of Part 1, we outlined the reasons that led to the separation between astrology and astronomy, and between eugenics and genetics. The main reason was the confusion-already noted by Francis Bacon in 1620-between two types of approaches: the first begins by positing axioms, from which it deduces consequences; the second, by contrast, identifies axioms from the observation of the facts studied, in order to deduce the principles of a science. The first type of axiom leads to idols, which are not verified by experiment, as we have seen for astrology and eugenics. The second leads to a true scientific approach as we have seen for astronomy and genetics.
But this scientific approach cannot address the complexity of human life, of which we have sought to provide an overview in the previous chapters. As Frederick Suppe (1989, p. 65) said, it cannot deal with phenomena in all of their complexity, but can only focus on a small number of phenomena of human life that can be characterized by a small number of parameters.
In this final chapter, we shall therefore take a closer look at the main approaches used to understand the various aspects of life stories.
We have seen that the notion of mechanism served as the basis for Newton's theory of astronomy (1687), and for the Mendelian theory of heredity in the early twentieth century. In the late twentieth and early twenty-first centuries, biologists and neuroscientists extended the mechanistic approach to their disciplines. In the first volume of the Methodos Series, Franck (2002) broadened its scope of application to all the social sciences, showing its initial uses in a number of these disciplines.
More recently, several authors (e.g. [START_REF] Machamer | Thinking about mechanisms[END_REF][START_REF] Glennan | Rethinking Mechanistic explanation[END_REF][START_REF] Bechtel | Explanation: a mechanist alternative[END_REF] have offered more precise definitions of such "mechanisms," but we prefer the one given by Glennan and Illari (2018) in The Routledge handbook of mechanisms and mechanical philosophy (p. 2):
A mechanism for a phenomenon consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon. Illari and Williamson (2012, p. 119) had already proposed a very similar definition, and we refer the interested reader to their article, which gives the reasons why the definition can apply to different natural, biological, and social sciences.
However, as the mechanistic approach is a causal approach to events, we shall need to bring in another approach when attempting to account for the concept of aggregation level and for the emergence of a level.
An alternative (or complementary?) model to mechanistic causal analysis is the systemic approach, developed by Ludwig von Bertalanffy (1901-1972) over his lifetime and presented in his work, General system theory: Foundations, development, applications (1969). (It must be remembered that von Bertalanffy, because of his membership in the Austrian national-socialist party and his role as "biologist of the Third Reich," had to emigrate at the end of World War II to the United States, where he published his book.) His approach has since been endorsed by many authors including Rapoport, Boulding, and Meadows.
In the "Foreword" to his work (p. vii), von Bertalanffy defined his general systemic approach as follows:
[. . .] systems theory is a broad view which far transcends technological problems and demands, a reorientation that has become necessary in science in general and in the gamut of disciplines from physics and biology to the behavioral and social sciences and to philosophy. This led him to view a system as an organized whole with many interrelations between its parts-a whole that is not the sum of its parts. While proponents of the systemic approach direct their attention towards the whole organism and focus on how it achieves self-maintenance, mechanists direct their attention to how components of a mechanism are organized so that their activities produce a phenomenon (Bich, Bechtel, 2021).
The systemic approach now extends to a large number of disciplines, including not only social sciences but also biology, psychology, and technology. Examples include Maturana and Varela's autonomy theory, as well as dynamical system theory, to be discussed in §§ 9.3-9.4. Similarly, the system dynamics model approach is used to forecast future human population trends.
We shall also examine a third approach: the hermeneutic approach. We have already presented it briefly in the chapters on imaginary and real life stories, showing how the "comprehension" of these lives differs from their "explanation" by the social sciences. Our discussion was based on the work of Dilthey (1833-1911) [START_REF] Dilthey | Die typen der weltanschauung und ihre ausbildung in den metaphysichen system[END_REF]. In his wake, many authors such as Heidegger (1889-1976), Gadamer (1900-2002), and Ricoeur (1913-2005) developed different forms of hermeneutics, often linked to the phenomenology of Edmund Husserl (1859-1938), under the name of phenomenological hermeneutics, as indicated by Grondin's book entitled Le tournant herméneutique de la phénoménologie (2003). All these studies, however, remained focused on the "comprehension" of human lives. For example, Ricoeur (1990, pp. 191-192) writes:
As for the notion of narrative unity of life, one must also view it as an unstable mixture of storytelling and live experience. It is precisely because of the evasive nature of real life that we must resort to fiction in order to organize it retrospectively in the aftermath, even if this means regarding any type of plot-making borrowed from fiction or history as provisional and subject to revision.

Ricoeur thus brings imaginary and real life into the same category, recognizing the power of myths for organizing one's own life story.
Because of its purely philosophical nature, such an approach lies outside of the scope of our book, whose subject is methodology. In §9.3, however, we shall see that the phenomenologist philosopher Merleau-Ponty (1908-1961) supplied the basis for the autonomy theory of Maturana and Varela, who developed a new paradigm based not on the metaphor of the computer (see §9.1.1) but on that of living organisms. It is important to see how these enable us to understand a human life.
Rather than contrast the three approaches, our conclusion here will attempt to show how they are complementary in explaining aspects of a human life and what they contribute to understanding them. Glennan and Illari (2018) devote fourteen chapters of their book to the application of mechanisms in a variety of disciplines including physics, evolutionary biology, molecular biology, biomedicine, ecology, neuroscience, cognitive science, sociology, history, and economics.
How demographic theories consider human life
Surprisingly, there is no mention of demography, whose development began in the seventeenth century with Graunt's work (1662). One reason, no doubt, lies in the definition of demography in the IUSSP (International Union for the Scientific Study of Population) Multilingual Demographic Dictionary (1982):
[. . .] a science that aims to study human populations by considering their size, their structure, their evolution and their general characteristics, primarily from a quantitative perspective.
The definition leaves out the fact that human populations are characterized by the rules, values, and signs that differentiate them, and that we cannot speak of populations by restricting their study to the physical or material aspects of the societies in which they live. Moreover, the definition says nothing about the study of the disappearance of a human group or of a specific population, referring only to their evolution. Jared Diamond's book Collapse: How societies choose to fail or succeed (2005) clearly shows-with the aid of abundant information on the foundations of societies both ancient and modern-how these principles and the environment in which populations live allow us to understand their evolution over time, inevitably leading to their disappearance sooner or later. After describing and discussing the problems of our current society, Diamond is forced to conclude (p. 498) that this is a question faced by all human groups, one on which their survival or collapse depends. The quantitative approach used by demography does not enable it to answer the question. We must take a closer look at the reasons for this.
Having presented the various paradigms adopted by demography in Chapter 7, we now turn to the theories developed over the centuries to explain and not merely to describe demographic behaviors. A paradigm is a set of assumptions and values that form a way of viewing reality for a community of researchers. A theory applies the paradigm, with additional assumptions, to provide a more general explanation of a population's behavior in a given situation [START_REF] Courgeau | Multilevel synthesis. From the group to the individual[END_REF].
We must therefore pursue our analysis in this broader context. There have been such analyses in the past, and we begin by examining some of the solutions found-as well as some of the failures recorded. Space precludes a detailed examination of the theories, but we can describe the stages of their elaboration.
From the origin of population science to the nineteenth century
In 1760, Euler introduced the concept of what we now call "stable population"-in other words, the notion that if deaths outnumber births, a population will eventually disappear. Euler also realized that extraordinary calamities such as epidemics, wars, and famines disrupt this uniform growth or decline of a population, but he did not discuss their impact, which can be very significant.
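In modern notation (the symbols b, d, and λ are ours, not Euler's), the point can be written as a one-line geometric-growth sketch: with constant birth and death rates b and d,

P(t) = P(0) · λ^t,  where λ = 1 + b − d,

so that a constant regime in which deaths outnumber births (λ < 1) makes P(t) decline geometrically toward extinction, while λ > 1 yields the uniform growth that the calamities mentioned by Euler disrupt.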
Before the nineteenth century, economics and population science were very closely linked, giving rise to two opposing schools. The first, comprising what we can call populationists, believed that population growth produces an increase in wealth. This school, however, had few advocates. They notably included William Temple in the seventeenth century and Joseph Strucker and Moheau in the eighteenth. Their far more numerous opponents argued that population growth depended on wealth. Examples include Walter Raleigh and Josiah Child in the seventeenth century and Richard Cantillon and François Quesnay in the eighteenth. Physiocracy, championed by Quesnay (1694-1774), stated that all increases in population are due to increases in wealth.
Despite their opposition, both theories regarded economics and demography-then known as "political arithmetic"-as two closely tied subjects, whereas they are now very often treated separately. But the disciplines were still in their infancy, and their main concepts were a long way from being defined.
For the reader seeking more information on the two schools, we strongly recommend the reprints and commentaries published in the series Classiques de l'économie et de la population, founded by Alfred Sauvy at INED and continued first by Eric Brian, then by Jean-Marc Rohrbasser. Titles include: Cantillon, Essai sur la nature du commerce en général (1952), OEuvres économiques complètes et autres textes. François Quesnay (2005), and Recherches et considérations sur la population de la France par M. [START_REF] Moheau | Recherches et considérations sur la population de la France[END_REF].
In the late eighteenth century, new crucial factors came into play in the analysis of human lives, most notably owing to the French Revolution, which introduced novel concepts.
In 1793, William Godwin published An enquiry concerning political justice, in which he examined the effects of most governments, run by monarchs or aristocratic groups, on the populations under their stewardship. He observed that these rulers, forever at war against one another, sacrificed their populations to a perpetual thirst for conquest, with no concern for their aspiration to peace and prosperity. In contrast, he said (vol. 1, p. 11):
[. . .] that perfectibility is one of the most unequivocal characteristic of human species, so that the political, as well as the intellectual state of man, may be presumed to be in a course of progressive improvement.
Godwin made this notion of perfectibility the basis for his theory, leading him to propose a type of society that anticipated the political and economic ideas of anarchism as set out by [START_REF] Kropotkin | Anarchism[END_REF]. However, he did not have the courage to stand by his opinions. Meanwhile, the demographic revolution, or rather transition, had begun in France in the mid-eighteenth century.
Unlike the flames of the French Revolution, which burned out in a decade with Napoleonic imperialism, the demographic and industrial revolution endured.
The industrial revolution in the nineteenth century: the ascendancy of economics
In the following century, the industrial revolution inspired many theories. By contrast, the demographic revolution did not actually generate new theories until the twentieth century.
We begin with the theories inspired by the industrial revolution in the nineteenth century, which fall into two main categories.
The theory of industrialism, developed in the late 1810s, crowned the notion of perfectibility of the human species. Its main advocates were Charles Comte (1782-1837) and Charles Dunoyer (1786-1862). The theory stated that society is based on man's mastery of nature through industry, so it analyzed the social foundation of that mastery. For instance, in L'industrie et la morale considérées dans leurs rapports avec la liberté (1825), Dunoyer wrote (p. 13):
Industry prepares peoples for collective activity as well as for all the types of activity necessary to the development and conservation of the species. One need only open one's eyes to see that, in our day, the most industrious and most cultured populations are also the liveliest and those with the greatest political capability.

Like Jean-Baptiste Say (1803), Dunoyer stressed the importance of property rights as the basis of every industrial society.
In his Traité de législation (1826) and Traité de la propriété (1834), Comte advocated a society in which individuals would be entirely free to own and accumulate wealth. He then showed how government interventionism harms industrialism. While recognizing that the classes living off the fruit of their labor were far more populous than property-owners, he believed that in the event of distress (internal disorders or invasion by enemy armies) public aid should not interfere in the way a nation's products were distributed among the population (Comte, 1834, vol. 2, p. 488). The industrialist approach would thus condemn the more populous classes to excessive misery and, as Comte put it, to their "destruction" (id., p. 348). Conversely, if the rich were despoiled to the benefit of the poor, that would entail the "destruction" of the former, with the most dire consequences for the latter (id., p. 487). Lastly, a high fertility among the classes earning their livelihood from their wages alone would spell misery for the families formed with greater restraint, as the children of the former would contend with the latter for their subsistence (id., p. 350). Demographic trends were therefore wholly dependent on social conditions and could lead to the extinction of a class or even of the entire industrial society.
By contrast, Pierre-Joseph Proudhon (1809-1865) followed the path already outlined by Godwin: anarchism. In Qu'est-ce que la propriété (1840, p. 21), he proclaimed "La propriété c'est le vol!" ("Property is theft!"), but in his posthumous book Théorie de la propriété (1866), he noted his distinction between possession and property (p. 15): "I qualified only the latter as theft." Proudhon also foresaw the functioning of today's mutual insurance companies as early as 1846 in Système des contradictions économiques, et philosophie de la misère, in which he proposed (p. 527) "a law of exchange, a theory of MUTUALITY, a system of guarantees that determines the old forms of our civil and commercial societies." Our focus here, however, is on his critique of Malthusian theory. In Système, he presents the following calculation (pp. 493-494):
[. . .] with men marrying at the completed age of 28 years, women at 21; with nursemaids no longer used because of equality; with the duration of breastfeeding being reduced to 15 or 18 months; with the period of fertility potentially ranging from 15 to 18 months, it [. . .] as educated and capable workers are systematically childless or have only one or two children, it follows that this class is not increasing, and that it is having the utmost difficulty recruiting [. . .] 156
In a growing number of European countries, workers had stopped multiplying at the same rate as in the past, but the theory of demographic revolution did not effectively take hold until the twentieth century.
Karl Marx (1818-1883) was very sarcastic in his comparison of Malthus with Frederick Eden, author of The state of the poor (1797), in Das Kapital, Buch 1 (1867, p. 603):
If the reader thinks at this point of Malthus, whose Essay on Population appeared in 1798, I would remind him that this work in its first form is nothing more than a schoolboyish, superficial plagiarism of Defoe, Sir James Steuart, Townsend, Franklin, Wallace, etc., declaimed in the manner of a sermon, but not containing a single original proposition of Malthus himself. The great sensation this pamphlet caused was due solely to the fact that it corresponded to the interest of a particular party. (English translation by Samuel Moore and Edward Aveling, edited by Frederick Engels, 1887.)

In essence, Marx was an economist for whom demography can be understood only through economic theory [START_REF] Charbit | Capitalisme et population: Marx et Engels contre Malthus[END_REF]. For him, there was no universal law of population, but rather laws valid for each economic system-here, capitalism. Because of its importance, Marx's theory would require an entire volume. Here, we merely wish to point out its limited connections with demography. However, we may speculate that he would have cursed those who, claiming to follow his ideas, imposed the Soviet or Chinese economy based on a militaristic social organization and a planned economy, more Fascist than Communist at this point. In sum, theories on the industrial revolution prevailed throughout the nineteenth century. Few authors focused on the concurrent demographic revolution, which did not effectively capture researchers' attention until the twentieth century.
The demographic revolution in the twentieth century: the comeback of demography
Adolphe Landry (1874-1956) observed:

In eighteenth-century France, the population was conditional upon production-in particular, the production of foodstuffs-and it varied, if we confine ourselves to an approximation, in the same way as production. In today's France, population changes seem very largely unrelated to production; population does not vary on account of variations in wealth.

In other words, demography seemed to be freeing itself from the grip of economics, and this trend would gradually reach all the other countries at different dates. Landry saw the main cause of this revolution in the diffusion of the idea of rationalization of life (p. 60), adopted first in France, then in other countries. The other possible causes, which he lists, are far from having as great an impact.
While Landry predicted far in advance the depopulation of developed countries (cf. Hungary's negative growth rate since 1981), he failed to foresee the growth of the developing countries, which remains high although it has started to decline.
Today, the demographic revolution theory has turned into a theory of demographic transition, but it has become more complex by incorporating ever more numerous factors. We refer readers to Henri Leridon's 2015 presentation, in which he notes (p. 312):
So, we cannot but observe that there is no overarching, generally acknowledged theory of fertility, not even a small number of theories upon which demographers can agree or disagree, and which might serve as a foundation for ongoing debate.
Leridon cites many demographers who share this view, such as the authors of the Princeton study on the transition in Europe (Coale and Watkins, 1986), who confirm that none of the standard indicators can explain the decline in fertility in the countries observed. For our part, we should like to quote Neil Cummins on the French transition (2013), who concludes more specifically as follows (p. 473):
Demographic transition theory, the microeconomic theory of fertility, and the unified growth theory cannot explain why French fertility fell first in Europe because they all predict that fertility should have declined in England before anywhere else. Wrigley's proposition of a neo-Malthusian response cannot be valid as it was the richest terciles who reduced their fertility, and Weir's explanation, again, does not uniquely identify France. [. . .] The root causes behind the world's first fertility decline are still poorly understood.
It is important to realize that the decline in fertility began in France nearly 100 years before it did in England. For more details, see the arguments presented by [START_REF] Wrigley | The fall of marital fertility in nineteenth century France: exemplar or exception? (Part I)[END_REF] and [START_REF] Weir | Life under pressure: France and England, 1670-1870[END_REF], which Cummins rejects here as unverified.
Three contemporary theories
The application of the systemic approach to demography led [START_REF] Loriaux | Des causes aux systèmes[END_REF] to go beyond the causal approach for topics such as the demographic transition, the aging of current populations, and social protection. At the same time, however, he noted the absence of truly systemic methods apart from simulation-based modeling. The latter, for example, led [START_REF] Meadows | The limits to growth[END_REF][START_REF] Meadows | Beyond the limits: confronting global collapse, envisionig a sustainable future[END_REF][START_REF] Meadows | The limits to growth[END_REF] to develop a "system dynamics model" aimed at forecasting the future of mankind. They concluded that if humanity maintains its economic growth without factoring in environmental and social costs, it will experience a collapse by the mid-twenty-first century. The authors' simulation model leads to various scenarios depending on the initial assumptions, and the comparison of forecast developments with those observed makes it possible to identify the more plausible outcomes. Despite the many criticisms directed at the model's assumptions, such as those voiced by the economist Solow (1973), its results have been compared with actual developments. The latest study [START_REF] Herrington | Update to limits to growth. Comparing World3 model with empirical data[END_REF] shows that the model's scenarios, which display little divergence until 2020, predicted with a reasonably good approximation the trend actually observed over the 50 years since the first report. But can one regard humanity as a whole when the foundations of the cultures and societies that compose it are so diverse and even contradictory?

Moreover, while [START_REF] Loriaux | Des causes aux systèmes[END_REF] argued that-in demography-simulation models were the only possible systemic method, in biometrics, Harvey Goldstein proposed a new approach in 1986: the multilevel approach. The latter developed later in the social sciences, as we saw in Chapter 7; in demography, it was first applied in 1995 by Courgeau, then in 1998 with Baccaïni, and later in 2007 with a book entirely devoted to it. In 1998, Courgeau and Baccaïni wrote:
Is it reasonable to interpret the aggregated characteristics as the reflection of the social organisation in which we live, and the characteristic specific to each individual as the manifestation of individual liberty [. . .]?
The situation is even more complex, for there will always be a difference between the statistical individual and the observed individual.
However, a new theory has developed in the early twenty-first century, offering a very different view of the phenomena experienced by a population: agent-based modelling. Its principle is to deduce the events in a formal system from rules of conduct applied to theoretical agents, and then compare them with behaviors observed in reality (Billari, Prskawetz, 2003). The rules are based on individual behaviors and make it possible-so it is argued-to predict macroscopic regularities. It is thus a bottom-up approach, in which a population's aggregate behaviors emerge from rules applied to autonomous individuals. The important point here is that the approach seeks to "comprehend" human behaviors with the aid of simple individual rules capable of "explaining" macro behaviors. One could thus arrive at a synthesis of philosophical hermeneutics and scientific explanation.
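To make the bottom-up logic concrete, here is a deliberately crude sketch (the rule and all its parameters are hypothetical, chosen only for illustration and not drawn from any of the studies cited): each agent decides each year whether to have a child, with a probability that falls as the number of children already present among her "neighbors" rises, and the macro quantity of interest, the aggregate birth rate, is simply read off the simulated population.

import random

random.seed(1)
N, YEARS = 1000, 30
children = [0] * N                      # children already born to each agent

def birth_probability(i, children):
    # Hypothetical micro rule: the more children among an agent's ten
    # nearest "neighbors", the lower her probability of a birth this year.
    neighbors = [(i + k) % N for k in range(1, 11)]
    local = sum(children[j] for j in neighbors)
    return max(0.02, 0.25 - 0.002 * local)

for year in range(YEARS):
    births = 0
    for i in range(N):
        if random.random() < birth_probability(i, children):
            children[i] += 1
            births += 1
    print(year, births / N)             # macro series emerging from the micro rule

The numbers mean nothing here; the point is the procedure: the only behavioral assumption is the individual rule, and the aggregate series is an emergent output to be confronted with observed data.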
The initial problem with this approach, however, is how to define the rules with precision. Unfortunately, they are defined without a full discussion of their validity, which is often merely deduced by comparing expected behaviors with the aggregate behaviors of populations. This eliminates the need for observed data to explain the phenomenon, for the approach is based on simple rules of individual decision-making that could account for a given real-world phenomenon. As Burch notes (2003, p. 251):
A model explains some real-world phenomenon if a) the model is appropriate to the real-world system [. . .] and b) if the model logically implies the phenomenon, in other words, if the phenomenon follows logically from the model as specified to fit a particular part of the real world.
But how does one generate macroscopic regularities by using simple individual rules? Conte et al. (2012, p. 340) explicitly describe the difficulties encountered:
First, how to find out the simple local rules? How to avoid ad hoc and arbitrary explanations? As already observed, one criterion has often been used, i.e., choose the conditions that are sufficient to generate a given effect. However, this leads to a great deal of alternative options, all of which are to some extent arbitrary.
Without bringing into play the influence of networks on individual behaviors, it seems hard to obtain a macro behavior merely by aggregating individual ones. To obtain a more satisfactory model, one could introduce decision-making theories. Unfortunately, however, the choice of these theories is influenced by the researcher's discipline and can produce highly divergent results for the same phenomenon studied.
For a more detailed discussion of these theories, we refer the reader to three recent works: Eric Silverman, Methodological investigations in agent-based modelling (2018), Thomas Burch, Model-based demography (2018), and Jakub Bijak (ed.) Towards Bayesian model-based demography (2022).
To conclude this overview of theories, we turn to viability theory, developed by the mathematician Jean-Pierre Aubin (1939-) during his entire career. He gave a complete presentation of his theory, including a detailed account of its genesis and impact, in his 2010 book entitled La mort du devin, l'émergence du démiurge. Essai sur la contingence, la viabilité et l'inertie des systèmes (the work has not yet been translated into English, but its title would read The demise of the seer, the rise of the demiurge: essay on contingency, viability, and inertia of systems). Aubin has also published many books of a more mathematical nature, in English, including Dynamic economic theory (1997) and Viability theory: new directions (2011). He has promoted multidisciplinary studies on the subject, most notably in demography-the focus of our discussion here-with Noël Bonneuil. We provide a succinct account below.
Viability theory initially relies on what the author calls régulons in French, translated as regulees in [START_REF] Aubin | Dynamic economic theory. A viability approach[END_REF] and as regulons in Aubin et al., 2011. The term simply refers to the rules, values, and signs that structure and regulate all states of a given human society. Examples include economic goods in economics, individual behaviors in sociology, and cognitive states in psychology. Aubin (2010, p. 16) notes:
The difference between states and regulons resides in this: we know the actors who act upon the states; there is no consensus on the nature of those who govern the evolution of regulons. I shall use seer to denote the prototype of actors acting upon the states of the system, and demiurge to denote the entity that represents those mysterious mechanisms "regulating" the evolution with the aid of regulons.

Aubin admits that, unlike seers, he can neither name the entity that operates on regulons, nor explain why it does so. However, he shows that, in certain conditions, one can predict how organisms or populations will evolve relative to their environment and one can define their viability constraints.
To do this, Aubin does not use probability calculus, but differential inclusion calculus, which is based on the concept of open directions starting from the present moment. The concept generalizes the notion of differential equation and opens the door to membership of a set (Bonneuil, 2013, p. 73). The approach then tries to describe a given system's capacity for change by means of one or more differential equations in which the regulons are assumed to be proxied by measurable characteristics. In demography, for example, these measurable characteristics may consist of a birth rate, mortality rate, or rate of natural increase; in economics, they may consist of prices or consumption. This makes it mathematically possible to situate the evolution of the dynamical system studied amid a set of viable paths that satisfy certain constraints. The environment's viability kernel relative to the system is the subset of its states forming the point of departure for at least one viable evolution (Aubin, 2010, p. 668). This approach would offer a solution to the problem of the disappearance of societies, described by Diamond above, by avoiding non-viable paths. It is important to anticipate an evolution so as to be able to adapt to it in time.
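To fix ideas, here is a minimal numerical sketch of a viability kernel (the dynamics, the controls, the constraint set, and the grid are all hypothetical and far simpler than anything in Aubin's or Bonneuil's work): a state is kept in the kernel only if at least one admissible control sends its successor both inside the constraint set and inside the set of states still considered viable.

import numpy as np

# Hypothetical one-dimensional system: x(t+1) = x + 0.3*x + u, with control u
# chosen in {-0.1, 0, 0.1} and the constraint set K = [0, 1].
grid = np.linspace(0.0, 1.0, 101)
controls = (-0.1, 0.0, 0.1)

def successor(x, u):
    return x + 0.3 * x + u               # outward drift plus the chosen control

viable = np.ones(grid.size, dtype=bool)  # start from the whole constraint set
changed = True
while changed:                           # iteratively remove non-viable states
    changed = False
    for i, x in enumerate(grid):
        if not viable[i]:
            continue
        ok = False
        for u in controls:
            y = successor(x, u)
            if 0.0 <= y <= 1.0 and viable[int(round(y * 100))]:
                ok = True
                break
        if not ok:
            viable[i] = False
            changed = True

kernel = grid[viable]
print("viability kernel is approximately [", kernel.min(), ",", kernel.max(), "]")

With these made-up numbers the kernel comes out close to [0, 1/3] (the exact cutoff depends on the grid resolution): states near the upper bound are doomed because no admissible control can keep them in K, which is precisely the kind of non-viable path that the theory invites us to anticipate and avoid.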
The application of viability theory to many demographic problems, most notably by Bonneuil, shows that their solution requires us to take demography and economics into account simultaneously. For instance, to explain fertility fluctuations in the late twentieth century, Bonneuil, in a series of articles (1990, 1994, and 2017), proposes an approach using viability theory with two regulons: consumption and population growth. He recognizes (2017, p. 156) that: [. . .] the historical trajectories representing the West European countries move in the viability kernel associated with fertility norm n until they reach the boundary of this set. Then, couples must arbitrate between reducing their standard of living while maintaining the same level of fertility, and reducing their fertility to the norm n-1 while further increasing their consumption.
This explains these countries' fertility trajectories, which swung from high fertility starting in the mid-1940s (the baby boom) to a major decline in the 1970s (the baby bust).
While offering new perspectives in the social sciences, viability theory sets conditions that may not always seem fully justified. For example, its rejection of the influence of the past on the evolution of a population (Bonneuil, 2013, p. 72) runs counter to many theories that seek to explain that influence on population forecasts (Mazzuco and Keilman, 2020). Similarly, the search for a viability kernel becomes so complex as the number of the model's parameters increases that only simplified cases can be analyzed (Aubin, 1997, p. 31).
To conclude our discussion on theories, we refer the reader to a recent article by Joel [START_REF] Cohen | Mathematics is biology's next microscope, only better; biology is mathematics' next physics, only better[END_REF]: "Mathematics is biology's next microscope, only better; biology is mathematics' next physics, only better." His examples, ranging from Euler (1760) to [START_REF] Lotka | Théorie analytique des associations biologiques[END_REF] and others, give reason for taking all social sciences into consideration rather than biology alone. This opens the possibility that, despite the reservations expressed earlier, viability theory could provide a new paradigm for social sciences in the future, for its highly mathematical character and its general scope would offer an incentive to do so. Let us recall what we said about paradigms in the conclusion to Chapter 7: each paradigm represents a specific point of view on a complex reality, and this applies equally to the notion of viability.
How can we explain the complexity of memory and the human brain: from artificial intelligence to neuroscience
In this section, we present a range of sciences of the human mind, most notably artificial intelligence, neuroscience, and the nanosciences, all of which seek to explain how our brain and memory work.
The limits of artificial intelligence
In Chapter 8, we saw that Freud tried to introduce neurons into his theory of memory, but he was far from having all the elements needed to do so properly. Yet neurons are the very basis for the functioning of our brain and for our entire memory. Just half a century after their discovery by Ramon y Cajal, McCulloch and Pitts (1943) proposed a mechanistic model of the properties of neurons. For details, we refer the reader to Robert Franck's discussion in the first volume of the Methodos Series (2002, pp. 142-144), in which the present book is published: Franck shows that the structure of neural functions proposed by McCulloch and Pitts-without which the neural properties then known would not be what they are-allows the production of all those properties.
The model, coupled with the Turing machines introduced in 1936 (the theoretical ancestors of today's computers), with which McCulloch and Pitts were totally familiar, set the stage for artificial intelligence (AI). Its current developments in Artificial Neural Networks (ANNs) are increasingly significant. ANNs still perform calculations consistently with the neuron-based model designed by McCulloch and Pitts: the artificial neurons, now arranged in successive layers, pass signals to the neurons of the next layer, which are thereby activated or not. While McCulloch and Pitts were not particularly interested in memory issues, their model was adopted by many researchers working on learning and memory, such as [START_REF] Hebb | The organization of behavior[END_REF], [START_REF] Brindley | Nerve net models of plausible size that perform many simple learning tasks[END_REF], and [START_REF] Marr | A theory for cerebral neocortex[END_REF].
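A McCulloch-Pitts unit can be written in a few lines (a sketch of the 1943 threshold logic; the weights and thresholds below are chosen by us for illustration): the unit fires, outputting 1, when the weighted sum of its binary inputs reaches its threshold, and layers of such units feeding one another give the layered networks just described.

def mp_unit(inputs, weights, threshold):
    # Binary threshold unit: fire if and only if the weighted input sum
    # reaches the threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Logical AND and OR each realized by a single unit.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              "AND:", mp_unit((x1, x2), (1, 1), 2),
              "OR:", mp_unit((x1, x2), (1, 1), 1))

# Two layers: XOR obtained by feeding an OR unit and a NOT-AND unit into an AND unit.
def xor(x1, x2):
    h1 = mp_unit((x1, x2), (1, 1), 1)        # OR
    h2 = mp_unit((x1, x2), (-1, -1), -1)     # fires unless both inputs are on
    return mp_unit((h1, h2), (1, 1), 2)      # AND of the two intermediate units

print("XOR:", [xor(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]

Nothing in this sketch learns anything; it only computes fixed logical functions, and it is the later work on learning cited above that added modifiable connections.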
Although the model largely succeeded in explaining artificial intelligence, it proved unable to explain the functioning of the human brain. As early as 1946, John von Neumann, who had designed the first stored-program computer in 1945, started having doubts about the parallels drawn between computers and the human brain. In a letter to Wiener (Masani, 1990, p. 243), he wrote: [. . .] after the great positive contribution of Turing-cum-Pitts-and-McCulloch is assimilated, the situation is rather worse than better than before.
For von Neumann, the human brain is far more complex than a computer, and equating the two is a serious mistake.
Even McCulloch in 1948, in a lecture on "Cerebral Mechanisms in Behaviour" and a presentation entitled "Why the mind is in the head?," expressed doubts about applying his approach to the entire cortex (Jeffress, 1951, p. 55):
To understand its (the cerebral cortex) function we need to know what it computes. Its output is some function of its input. As yet we do not know, even for the simplest structure, what that function is. [. . .] Walter Pitts is analyzing them mathematically at the present moment and has yet no very simple answer. There is no chance that we can do even this for the entire complex.
In the ensuing discussion, von Neumann even argued against the possibility that memory could reside in neurons, but no clear response to the argument was provided. We shall see later how the neurosciences addressed the question. This clearly shows that the human brain is far more complicated than a machine and operates according to different logical structures than those of binary mathematics, most notably through mechanisms based on analogy.
After 1950, artificial intelligence (AI)-a term definitively adopted at the 1956 Dartmouth seminar-experienced major developments. AI was defined as intelligence that can be simulated by a machine, generally a computer. We shall examine it here with a view to determining whether it can surpass human intelligence (and, if so, in what conditions), or whether, on the contrary, it is of a different nature that prevents computers from thinking like humans.
First, several computer models were introduced to seek a better understanding of human intelligence. In 1958, Oliver Selfridge developed a pattern-recognition model called Pandemonium, whose main assumption was that letters are identified by their component lines; that same year, Frank Rosenblatt proposed a similar model called the Perceptron using parallel processing, which is closer to human intelligence. As he specifically stated in his 1961 report on Principles of Neurodynamics (p. 28):
Perceptrons are not introduced to serve as detailed copies of any actual nervous system. They are simplified networks designed to permit the study of lawful relationships between the organization of a nerve net, the organization of its environment and the "psychological" performances of which the network is capable. Perceptrons might actually correspond to parts of more extended networks in biological systems. [. . .] More likely they represent extreme simplifications of the central nervous system, in which some properties are exaggerated, others suppressed.
He thus clearly indicated the major difference between human and artificial intelligence. In 1969, Marvin Minsky and Seymour Papert offered a mathematical analysis showing that this parallel approach was a "dead end," both for artificial intelligence and for understanding human intelligence.
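The limitation that Minsky and Papert formalized can be seen in a short sketch (the learning rule is Rosenblatt's error-correction rule; the data and parameters are ours): a single perceptron trained on a linearly separable pattern such as AND ends up classifying it perfectly, whereas no choice of weights lets it reproduce XOR, which is not linearly separable.

def train_perceptron(samples, epochs=25, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = int(w[0] * x1 + w[1] * x2 + b > 0)
            err = target - out
            w[0] += lr * err * x1          # Rosenblatt's error-correction rule
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    hits = sum(int(w[0] * x1 + w[1] * x2 + b > 0) == t for (x1, x2), t in samples)
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in (("AND", AND), ("XOR", XOR)):
    w, b = train_perceptron(data)
    print(name, "accuracy:", accuracy(data, w, b))   # AND reaches 1.0; XOR cannot exceed 0.75

The failure on XOR is not a matter of too few training epochs: no single linear threshold can separate its two classes.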
In his posthumous book The Computer and the Brain (1958), von Neumann, whose doubts about artificial intelligence we noted earlier, concluded (p. 83):
Consequently, there exist here different logical structures from the ones we are ordinarily used to in logic and mathematics.
He thus fully recognized the difference between the language used by computers and that used by human biology.
However, in 1965, the mathematician Irving Good saw in the ultra-intelligent machine that "can far surpass all the intellectual activities of any man however clever" (p. 33) an "intelligence explosion" that would leave human intelligence far behind. Yet, nearly sixty years later, we are still waiting for that explosion.
To the contrary, many authors recognize the weaknesses of artificial intelligence. Terry Winograd, for example, despite his ardent defense of AI, has this to say about it in "Thinking machines: Can there be? Are we?" (1990, p. 168):
I will argue that "artificial intelligence" as now conceived is limited to a very particular kind of intelligence: one that can usefully be likened to bureaucracy in its rigidity, obtuseness, and inability to adapt to changing circumstances. The weakness comes not from insufficient development of the technology, but from the inadequacy of the basic tenets.
Similarly, many researchers have argued that "ANN[s] are known 'blind', or non-explanatory" (Franck, 2002, p. 127).
Without examining all the rationales and consequences of these models here, we shall try to assess what AI can contribute to the study of human intelligence and memory.
In practice, AI has proved capable of defeating humans in games with fully preset rules. For instance, on May 11, 1997, the world chess champion Kasparov-undefeated since 1985-faced the Deeper Blue program for the second time, after beating an earlier version in 1996. This time, the computer program prevailed with two wins, one loss, and three draws. Likewise for the game of go, eighteen years later: in October 2015, the AlphaGo program defeated Fan Hui, the European champion since 2013, winning five out of five matches [START_REF] Silver | Mastering the game of Go with deep neural networks and tree search[END_REF]. And on March 15, 2016, world champion Lee Sedol was beaten by AlphaGo four games to one.
But human intelligence has proved to be of an altogether different nature from AI when it comes to taking decisions in real life, where rules are generally not predetermined.
In 1990, the book The foundations of artificial intelligence, edited by Derek Partridge and Yorick Wilks, sought to provide a fuller view of the differences between the two intelligences. For example, in his article "Thinking machines: Can there be? Are we?," Terry Winograd is very clear about this (p. 167):
Computers, with their foundations of cold logic, can never be creative or insightful or possess real judgement. No matter how competent they appear, they do not have the genuine intention that is at heart of human understanding. The vain pretentions of those who seek to understand mind as computation can be dismissed as yet another demonstration of the arrogance of modern science.
He notes the parallel between human bureaucracy and AI processes, which reduce human judgment to the systematic application of explicit rules. Similarly, the philosopher Eric Dietrich, in his article "Programs in the search of intelligent machines: The mistaken foundations of AI," looked for the theoretical foundations of AI. He makes a very interesting comparison between AI in our time and astrology in antiquity. His view of astrology concurs with our assessment in Chapter 3. He shows the predominance of astrology throughout ancient times despite its many failed predictions. The same is true, he states, for AI (p. 229):
Artificial intelligence today is much like astrology was then. As I argued above, AI is based on a mistaken theoretical assumption: the idea that we now know what kind of computing thinking is, which in turn mistakenly draws support from the theory of computation. AI researchers have seemingly reasonable explanations for the shortcomings of their programs: an insufficient number of rules, the wrong heuristics, or not enough processing speed.
Dietrich believes that neuroscience, whose advances we shall discuss later, offers a far better theory of the processes underlying human thought.
In 2012, the World Congress on Natural Computing/Unconventional computing and its significance distinguished between AI and human intelligence and proposed a biological approach to the study of the latter. In the volume of Congress proceedings, Carl Lindley contributed an article on "Neurobiological computation and synthetic intelligence" arguing in favor of a truly biological approach to human intelligence:
An alternative approach in the synthesis of intelligence is to take inspiration more directly from biological nervous systems. Such an approach, however, must go beyond twentieth century models of artificial neural networks (ANNs), which greatly oversimplify brain and neural functions. The synthesis of intelligence based upon biological foundations must draw upon and become part of the ongoing rapid expansion of the science of biological intelligence.
He thus recognized the inadequacy of AI for explaining human intelligence.
However, Stephen Hawking, though a physicist, went so far as to predict that advances in AI would spell the end of humankind (BBC interview, December 3, 2014):
The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it would take off on its own, and redesign itself at an increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.
Admittedly, the belief that AI can surpass human intelligence seems to be enjoying something of a revival. Yet Ali Rahimi, receiving the Test-of-Time Award at the 2017 Neural Information Processing Systems (NIPS) conference, declared:
We say things like "machine learning is the new electricity." I'd like to offer another analogy: Machine learning has become alchemy. [. . .] For the physics and chemistry of the 1700s to usher in the sea change in our understanding of the universe that we now experience, scientists had to dismantle 2,000 years worth of alchemical theories. [. . .] I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge and not on alchemy.
Interestingly, he replaced astrology (which Dietrich cited) with alchemy, but the result is the same: the lack of serious criteria for a clear, unambiguous definition of AI and its subfield, machine learning. Erik Larson's 2021 book, The myth of artificial intelligence: why computers can't think the way we do, presents AI as mythical. He shows that AI relies on induction, not in Bacon's sense (see our Chapter 3.2), but in its traditional meaning of a generalization as defined, for example, by Hume (1739-1740, I, 3, 2, 6):
[. . .] that there can be no demonstrative arguments to prove, that these instances, of which we have had no experience, resemble those, of which we have had experience. AI functions precisely by using experience to improve its performance. For instance, it can outperform human intelligence in games because their rules are set once and for all (see our earlier discussion). By contrast, when faced with problems such as those encountered in scientific research or, more simply, in many areas of human life, AI is clearly unable to resolve them. Larson gives the example of Google's "deep learning," which, having identified two U.S. citizens as gorillas in a photograph, led Google to remove gorilla images from its system altogether. The rationale given was that, even if Google had tried to improve the system, the risk of confusion would have persisted.
However, by equating human intelligence with abduction, as defined by Peirce 160 in various ways over many years, Larson is-in our view-mistaken. While space precludes a detailed discussion of abduction here, we refer the reader to its critical examination by [START_REF] Frankfurt | Pierce notion of abduction[END_REF], [START_REF] Kapitan | In what way is abductive inference creative?[END_REF][START_REF] Kapitan | Pierce and the autonomy of abductive reasoning[END_REF], and Hoffman (1999), all of whom conclude that the notion is not valid.
We contrast AI with the concept of induction as introduced in 1620 by Bacon, who clearly opposes it to the traditional meaning of the term (Novum Organum, I, 105):
For the induction which proceeds by simple enumeration is childish; its conclusions are precarious and exposed to peril from a contradictory instance; and it generally decides on too small a number of facts, and on those only which are at hand. But the induction which is to be available for the discovery and demonstration of sciences and arts, [. . .]-which has not yet been done or even attempted, save only by Plato [. . .] While AI can proceed by means of a simple enumeration, we shall now see that neuroscience prefers to use the second form of induction defined by Bacon.
The contribution of neuroscience to explaining memory
We introduced neuroscience in Chapter 8 in order to compare the Freudian and neuroscientific conceptions of the unconscious. We take our discussion further here by recounting the development of neuroscience from the founding of the International Brain Research Organization [. . .]
160 References to Peirce's writings are from the Collected Papers of Charles Sanders Peirce, Cambridge: Harvard University Press. For example, he defined abduction in the 1903 Harvard Lectures on Pragmatism (Collected Papers, 5, 189) as follows: The surprising fact, C, is observed; but if A were true, C would be a matter of course; hence, there is reason to suspect that A is true.
Likewise, a great deal of scientific work remains to be done both to understand what an ideally complete neurobiological explanation of learning or memory would look like and to envision and develop the kinds of experimental techniques that would show convincingly that such explanation had been achieved.
As we shall see, despite many improvements, this mechanistic explanation of memory remains incomplete.
In his book Explaining the brain (2007), Craver goes further, stressing the difference between (a) representation and categorization-two customary operations in neuroscience (such as classifying neurons or glial cells into different types, or describing the brain as composed of different spatial areas)-and (b) the causal explanation. In particular, he emphasizes the difference between the hypothetico-deductive approach and the approaches that he refers to as U-models (pp. 40-41):
According to the unification model (henceforth, U-model), explanation is not a matter of deriving the explanandum phenomenon from laws but a matter of unifying diverse beliefs under a few simple argument patterns [. . .] The appeal of the Umodel derives from the fact that many of the most successful explanations in the history of science (such as Newton's laws, Maxwell equations, and Darwin's theory of evolution by natural selection) encompass large domains of phenomena within the purview of few basic argument patterns.
What is described here is indeed the mechanistic approach, which, for example, will try to explain the genesis of life stories with the aid of a small number of entities and activities.
We shall attempt to see how this approach can be applied not to life stories in all their complexity but to certain more specific elements that characterize them. Before that, however, let us examine in detail what the mechanistic approach means by "entities" and "activities." "Entities" are the parts used by the mechanism to perform the various "activities" that the mechanism must explain. "Entities" have properties enabling them to act in a wide variety of "activities."
In his book Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience (2007, p. 6), Craver describes the latter as follows:
Activities are the causal components in mechanisms. I use the term "activity" as a filler-term for productive behaviours (such as opening), causal interactions (such as attracting), omissions (as occurs in case of inhibition), preventions (such as blocking), and so on. In saying that these activities are productive, I mean that they are not mere correlations, that they are not mere temporal sequences, and, most fundamentally, that they can be exploited for the purposes of manipulation and control.
This quotation clearly shows that the mechanistic approach is not interested in the correlations that may exist between entities, but seeks to predict how the phenomenon will play out in certain conditions.
The same book highlights another key feature of this approach: its multilevel character. We already pointed out some advantages of the multilevel approach in the social sciences, but Craver attributes a greater importance to it in neuroscience research by stressing the latter's "mosaic unity." For this purpose, he contrasts the McCulloch-Pitts approach with that of Bliss, Lømo, and Gardner-Medwin (2007, pp. 241-243).
McCulloch and Pitts clearly aimed at reductionism (1943, pp. 100-101):
The "all-or-none" law of nervous activity is sufficient to insure that the activity of any neuron may be represented as a proposition. Physiological relations existing among nervous activities correspond, of course, to relations among the propositions; and the utility of the representation depends upon the identity of these relations to relations among the propositions. To each reaction of any neuron there is a corresponding assertion of a simple proposition. This, in turn, implies either some other simple proposition or the disjunction or the conjunction, with or without negation, of similar propositions according to the configuration of the synapses upon and the threshold of the neuron in question.
They were thus able to represent the various propositions of Boolean logic as configurations of synaptic connections and thresholds between neurons. A complex psychological phenomenon could therefore be reduced to a phenomenon occurring at the neural level.
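As a purely illustrative sketch of this reduction (the weights and thresholds below are chosen by hand for the example and are not taken from the 1943 paper itself), elementary Boolean propositions can be mimicked by all-or-none threshold units of the kind McCulloch and Pitts described:

# Illustrative sketch: all-or-none threshold units mimicking Boolean propositions.
def mp_unit(inputs, weights, threshold):
    # Fire (return 1) when the weighted sum of binary inputs reaches the threshold.
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):  # fires only when both inputs fire
    return mp_unit([a, b], [1, 1], threshold=2)

def OR(a, b):   # fires when at least one input fires
    return mp_unit([a, b], [1, 1], threshold=1)

def NOT(a):     # an inhibitory input silences the unit
    return mp_unit([a], [-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))

Each logical connective corresponds to a fixed configuration of synaptic weights and a threshold, which is what allowed McCulloch and Pitts to treat networks of such units as equivalent to propositional formulas.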
The approach adopted by Bliss, Lømo, and Gardner-Medwin does not prioritize an analysis at neural level. On the contrary, to flesh out their approach, they propose a top-down or a bottom-up analysis. In the words of Bliss and Lømo (1973, p. 350):
The amplitude of an evoked population potential depends on a number of factors, and it will be helpful to review these before considering the possible mechanism which might be responsible for long-lasting potentiation.
This acknowledgment is followed by a discussion on the various mechanisms that can influence memory.
Indeed, their approach focuses on an intermediate level that we may regard as "electrical synaptic" (Craver, 2002, p. S89), which includes entities such as neurons, synapses, and activities such as the generation and propagation of action potentials. But the memory mechanism is not confined to that level. Craver (2002, p. S91) sees at least four different levels:
To summarize, the mechanism sketch for memory is multilevel; its current description includes mice learning and remembering, hippocampi generating spatial maps, synapses inducing LTP, and macromolecules binding and changing conformation [. . .] Memory mechanisms must therefore be studied at all these levels in order to obtain a full picture.
The levels listed may be covered by various disciplines. Some have been in existence for a long time, while others have developed more recently within the broader context of neuroscience. For instance, the behavioral level of learning and memorization is studied mainly by general psychology, as discussed in Chapter 8, and biological psychology. The second level is studied mainly by cognitive neuroscience and computational neuroscience. The third level is studied by cellular neuroscience, the fourth by molecular neuroscience.
In 2007, Craver and Bechtel, in their article on "Top-down causation without top-down causes," examine this recourse to interlevel causes and clearly show (p. 562) that it is unwarranted:
The interlevel relationship is a relationship of constitution. Where there are mechanistically mediated effects, there is no need for the mysterious metaphysics of interlevel causation at all.
They assume, in particular, that the authors who resort to interlevel causation use a different definition of level than that employed in the mechanistic approach. [START_REF] Craver | Explaining the brain. Mechanisms and the mosaic unity of neuroscience[END_REF] seeks to go further by proposing a fuller analysis of levels. He draws a distinction between mechanism levels (relationships between a whole and its parts), size levels, and natural "mereological" levels (from the Greek μέρος, "part") such as elementary particles, atoms, molecules, organisms, and societies. Two or more items belong to the same mechanism level if they are part of the same mechanism. Craver can thus distinguish between "intralevel causal relations" and "interlevel constitutive relations." While the first are causal, the second are not. As he clearly states (p. 179):
To view LM (cf.: explanation of spatial memory, commonly said to have four levels) levels as causally related, one must violate the common assumption that causal relationships are contingent and that cause and effect must be wholly distinct.
As a result, there can be no confusion between the levels of a mechanism that involve causal relationships and "interlevel constitutive relations" that involve none. While causality requires a diachronic study, "interlevel constitutive relations" require a synchronic study.
In 2003, Robert Franck went even further, showing that the analysis of the emergence of a level is not an extension of the causal mechanistic approach, but in fact a system analysis of the relationships between the system generated by the new level and the factors from which it emerges. As noted earlier, this system analysis-which studies the overall functioning of biological systems at different organizational levels-was theorized by Ludwig von Bertalanffy in 1968. This approach contrasts with the then prevailing reductionist theories by viewing a system as a whole, with its relationships and interactions with other systems. Franck relies on this concept of system to state (2003, p. 188):
I will base myself upon concept of emergence in order to propose a theoretical definition of the notion of a social "level". Then, I will associate the concept of level with that of system, and try to show how causal analysis and system analysis can be combined, in order to do justice to relations of determination between social levels.
He thus treats the two analyses as totally distinct: the constraints of system analysis are not causal determinations, whereas the mechanistic approach seeks the causes of these mechanisms. Franck takes his argument one step further by pointing out why the emerging entity has properties that its components lack (p. 192):
A system emerges from the combination of its elements. These act upon each other, some in a unidirectional way, others in a reciprocal one, and we call them factors. These are the matter of the system. The system is nothing other than the factors which compose it, except that in a system the action of factors on each other conforms to a definite structure, a structure that we call the form of the system. It is because of this form that the system acquires properties the factors which compose the system don't have.
Such a constituent relationship between form and matter-which Aristotle called hylemorphism, from the ancient Greek ὕλη (matter) and μορφή (form)-sets the limits of possible actions for a given system and clarifies the nature of the levels. This creates a problem for causal mechanistic analysis, for one needs to take into account, at the same time, the organization of the system studied-which is not causal but systemic.
We can conclude that the analysis of a system's functions is as important as causal mechanistic analysis, and that the two should ideally be combined. But how can this be done? We examine the question in greater detail in §9.4.
Can autonomy theory be associated with hermeneutics?
When we study a small set of events in a person's life (birth, death, and migrations in demography; suicide and marriage in sociology; employment, unemployment, and exit from the workforce in economics, for example), we can develop a social science with its mechanisms. But when we examine a human life in all its complexity, we can no longer elaborate such a science, and we must consider the life in hermeneutical terms as an epic, a tragedy, a comedy, a novel, or another genre.
It is worth asking, however, if the two approaches are truly irreconcilable or if we can build a bridge to connect them.
As our book is concerned with methodology, we shall not describe in detail the philosophical origins of such a relationship-if one exists. We shall only indicate the broad directions followed by the philosophers who established the relationship.
Maurice Merleau-Ponty (1908-1961), in our view, best explained this approach, known as phenomenology (or phenomenological hermeneutics, if we accept Grondin's argument cited in the introduction to this chapter). Introduced by Edmund Husserl , phenomenology lies on the frontier of psychology, neurology, and sociology. In La Structure du comportement (The Structure of Behavior), Merleau-Ponty clearly states that behavior is the main cause of all stimulations, and he notes ([1943] 1963, p. 13):
[. . .] all the stimulations received by the organism are made possible, in turn, only by its earlier movements, which have ultimately exposed the receptor organ to external influences. As a result, one could also say that behavior is the prime cause of all the stimulations. The excitant's form is thus created by the organism itself, by its specific manner of exposing itself to actions from outside. In order to subsist, it must, no doubt, encounter a certain number of physical and chemical agents in its surroundings. But it is the organism itself-according to the specific nature of its receptors, the thresholds of its nerve centers, and the movements of its organs-that chooses the stimuli in the physical world to which it will be sensitive. 161 In other words, a perception is not simply constrained by the outside world, but also contributes to the representation of that world. While Merleau-Ponty relied on contemporary studies in psychology and neurology, his approach is totally philosophical. It expresses the emergence of reason in a contingent world-an endeavor comparable to that of an artist (for the painter's role, see his posthumous book, L'oeil et l'esprit [The Eye and the Mind], 1964).
The Santiago School of Biology, whose chief exponents were Francisco Varela (1946-2001) and Humberto Maturana (1928-2021), specifically applied this philosophical approach to neuroscience. Varela [START_REF] Varela | Neurophenomenology. A methodological remedy for the hard problem[END_REF] even called his approach neurophenomenology in order to underscore the connection. Let us examine the School's main features in some detail.
Rather than attempting to characterize biological systems by their components' physical properties, as the mechanistic approach does, the Santiago School looks for the characteristics in the systems' overall organization. This approach, therefore, is completely systemic. It sees the system as a general process whose characteristics are in fact inseparable. The basic concepts applied are autonomy, autopoiesis, and adaptation.
161 French text: … comme toutes les stimulations que l'organisme reçoit, n'ont à leur tour été possibles que par ses mouvements précédents, qui ont fini par exposer l'organisme récepteur aux influences externes, on pourrait dire aussi que le comportement est la cause première de toutes les stimulations. Ainsi la forme de l'exitant, est créée par l'organisme lui-même, par sa manière propre de s'offrir aux actions du dehors. Sans doute, pour pouvoir subsister, il doit rencontrer autour de lui un certain nombre d'agents physiques ou chimiques. Mais c'est lui, selon la nature propre de ses récepteurs, selon les seuils de ses centres nerveux, selon les mouvements des organes, qui choisit dans le monde physique les stimuli auxquels il sera sensible.
The concept of autonomy features in the title of Varela's book, Principles of biological autonomy (1979). The Santiago approach defines the term-which can apply to any system, whether biological or not-as the control by a structure of what constitutes the system, and the way in which the structure can handle disturbances from outside, with no need for an external reference for its operations (Varela, 1979, p. xv). Varela accordingly defines an autonomous system as follows (p. 55):
We shall say that autonomous systems are organizationally closed. That is, their organization is characterized by processes such that (1) the processes are related as a network, so that they recursively depend on each other in the generation and realization of the processes themselves, and (2) they constitute the systems as a unity recognizable in the space (domain) in which the processes exist. This closure does not mean that the system is shut off from the outside world in material and energy terms. That would be impossible. On the contrary, autonomous systems are thermodynamically far from equilibrium, and they exchange energy with their surroundings. The organizational closure refers to the self-referential network that defines the system as a unit.
We can thus see how much this approach owes to the vision of philosophers such as Merleau-Ponty, insofar as the organism both initiates and shapes its environment (Varela et al., 1993, p. 174).
The term autopoiesis derives from two Greek roots: αὐτός ("oneself") and ποιεῖν ("to produce"). This property, identified by [START_REF] Maturana | Autopoiesis and cognition. The realization of the living[END_REF], characterizes all living beings, as Varela states more simply (1997, p. 75):
An autopoietic system-the minimal living organization-is one that continuously produces the components that specify it, while at the same time realizing it (the system) as a concrete unity in space and time, which makes the network of production of components possible.
This definition implies that life can be characterized with the aid of the system's dynamics, which depend solely on its organization's structural composition and remain separate from its physical environment. As soon as the process is interrupted, the system dies.
We can thus define clear criteria for determining whether or not an organization is autopoietic: (1) it must have a semi-permeable frontier; (2) its components must be produced by a network of reactions inside that frontier; (3) the two preceding conditions must be interdependent (Thompson, 2007, p. 103). Thus, while a bacterium qualifies as autopoietic because it meets the three criteria, a virus does not, for its molecular components (nucleic acids) are not generated within itself, but in the host cell.
In light of these criteria, we may ask whether certain social systems are autopoietic or not [START_REF] Luhmann | The autopoiesis of social systems[END_REF]. [START_REF] Mingers | Can social systems be autopoietic?[END_REF] examines this possibility in detail. He shows that, despite major similarities with autopoietic theory, the notion of autopoietic social systems imposes an incredibly abstract and reductionist view of the social world. In a later work, [START_REF] Mingers | Can social systems be autopoietic? Bhaskar's and Giddens' social theories[END_REF] assesses the validity of autopoiesis for Bhaskar and Giddens' social theories. Again, he concludes that in these cases it is extremely hard to identify a social system's frontier in empirical terms. These observations are wholly consistent with Varela's on the same topic (1979, pp. 54-55):
[. . .] there have been some proposals suggesting that certain human systems, such as institutions, should be understood as autopoietic [. . .]. From this point of view, I believe that these characterizations are category mistakes: they confuse autopoiesis with autonomy [. . .]; we can take the lessons offered by the autonomy of living systems and convert them into an operational characterization of autonomy in general, living and otherwise.
These studies therefore nullify the hypothesis, proposed by certain sociologists, that social systems may be regarded as autopoietic. By contrast, social systems lacking a specific frontier may be viewed as autonomous (Hooker, 2013, pp. 772-773).
Lastly, adaptation is the ability of all living organisms to stay alive. Maturana and Varela (1987, p. 102) define it as follows:
If we turn our attention to the maintenance of the organisms as dynamic systems in their environment, the maintenance will appear to us as centered on a compatibility of the organisms with their environment which we call adaptation. If at any time, however, we observe a destructive interaction between a living being and its environment, and the former disintegrates as an autopoietic system, we see the disintegrating living system as having lost its adaptation.
While autopoiesis enables us to understand how an organism is alive, adaptation will allow us to understand how it can die if it ceases to be adapted to its environment. This also offers an explanation for the organism's dysfunctions, stress, fatigue, pathological conditions, the evolution of species, and other phenomena (Di Paolo, 2005, p. 439).
The views of the Santiago School on the evolution of species lie outside the scope of our work. Let us look instead at its working hypothesis (Varela, 1996, p. 343) before describing in greater detail its approach to actual cases: Phenomenological accounts of the structure of experience and their counterparts in cognitive science relate to each other through reciprocal constraints.
This hypothesis perfectly defines the dual aspect of the School's planned analysis, which begins by classifying the data it will use in three categories.
First-person data are the introspective data collected from personal experience, such as the data obtained through "contemplative meditative traditions." This is what Varela calls enaction, as describing the dynamic structure of lived experience. Such data can be considered purely hermeneutic, since they constitute interpretations that are inseparable from our bodies, our language, and our social history (Varela et al., 1993, p. 149). Varela indeed emphasizes the affiliation of his ideas with the philosophical school of hermeneutics, which shows how knowledge depends on being in a world that is inseparable from our bodies, our language, and our social history.
Second-person data are those collected by an interviewer from an interviewee. Finally, third-person data use first-person data to access physiological processes, which would otherwise remain opaque. They achieve this with the aid of the variability of responses obtained through scientific methods such as electroencephalograms [START_REF] Lutz | Guiding the study of brain dynamics by using first-person data: synchrony patterns correlate with ongoing conscious states during a simple visual task[END_REF]. While the use of the two types of data is not univocal, their relationship is, in fact, dynamic: third-person data can, in some cases, lead researchers to modify the method for collecting first-person data. Varela (1996, pp. 341-343) describes various kinds of experiments to illustrate this approach, but he clearly notes that these case studies do not constitute a proof of what he is proposing, which thus remains a working hypothesis open to criticism.
This ends our brief presentation of the biological autonomy approach, which has continued to develop over the past twenty years around the principles defined by Maturana and Varela. Let us simply quote from Alvaro Moreno and Matteo Mossio's book entitled Biological autonomy (2015), which documents the additions and alterations to the approach. The authors emphasize the central role of autonomy in their work (2015, p. xxix):
The autonomous perspective that we develop here endeavours then to grasp the complexity of biological phenomena, by adequately accounting for their various dimensions, specificities, and relations with the physical and chemical domains.
The focus on autonomy distinguishes their approach from that of Maturana and Varela, who attached greater importance to autopoiesis. We will see that this concept was later used by [START_REF] Bich | Mechanism, autonomy and biological explanation[END_REF] to try to link mechanistic theory to system theory.
Another development worth noting for our purposes is the micro-phenomenological approach [START_REF] Bitbol | La sphère d'intersubjectivité durant l'entretien micro-phénoménologique[END_REF], which gives a new perspective on biographical interviews. These are viewed (Bitbol and Petitmengin, 2017, p. 731) as offering:
[. . .] a disciplined, explicit "circulation" between first and third person analyses, which is the principle of the neurophenomenological approach to cognitive processes.
The biological autonomy approach thus treats biographical interviews in a totally different way from demography. It seeks to conduct a rigorous study of lived experience rather than analyze human lives from an external point of view.
Many authors, however, have hesitated to characterize the biological autonomy approach as fully scientific. Podgórski (2010, p. 85), for example, concludes his presentation of the theory as follows:
What is [its] value? It is certainly a very complicated theory, logically coherent, leaning on complicated language. This theory is not false or verifiable. Transferring it to another scientific discipline, it appears to be more or less adequate in a particular field of research. Is it worth today investigating the thought of the Chilean biologists? In my humble opinion-it is. Drifting among many of their ideas, expressed in difficult and hermeneutical-like language is challenging.
In the context of our book, devoted to understanding human life, we largely approve Podgórski's conclusion: the hermeneutic approach followed by Maturana, Varela, and others does not really enable us to understand human lives as we described them in Chapters 7 and 8, but it does pave the way for a scientific approach to human life that is entirely worth pursuing and differs greatly from the mechanistic approach discussed in §9.2.
In reality, the two approaches practically ignored each other in the late twentieth and very early twenty-first centuries. The mechanistic approach seeks to show how a mechanism's components are organized so that their activities produce a given phenomenon. The autonomy approach focuses instead on the organism envisaged as a whole, and aims to show how it succeeds in maintaining its integrity. Craver's book Explaining the Brain (2007) totally ignores the autonomy approach, whereas Thompson's book Mind in Life (2010) completely ignores the mechanistic approach. However, around 2007, some authors set out to identify their common features.
At this point, therefore, it is worth asking whether we could use the two theories simultaneously to better understand human behaviors.
How can we associate system biology, mechanisms, and autonomy?
Autonomy theory, discussed in the previous section, is indeed an extension of system analysis, for it aims to treat intelligence and memory as a whole. It is therefore effectively a biological systems theory. However, another approach emerged in the same period: dynamical systems theory. Although it is similar in principle to autonomy theory, we shall point out the major differences between the two.
Dynamical systems theory (DST) developed in the 1990s as a consequence of the Human Genome Project [START_REF] Ideker | A new approach to decoding life: System Biology[END_REF], although we can find earlier signs of it. It is open to the systemic interpretation of extensive quantitative data with the aid of mathematical and computer models, but also to an examination of the way in which cognitive phenomena are studied and conceptualized.
DST is presented in detail by Timothy van Gelder and Robert Port in Mind as motion: explorations in the dynamics of cognition (1996). They regard it as a new research paradigm (p. 10), for:
[. . .] dynamical and computational systems are fundamentally different kinds of systems, and hence the dynamical and computational approaches to cognition are fundamentally different in their deepest foundations.
DST has since achieved substantial progress in modeling biological and neural systems-ranging from simple neurons to large neural networks-as well as biological populations and complex systems. It is highly mathematized, for it tracks the course of a set of variables interacting with one another over time. Of the many books devoted to it, one of the most recent, Dynamical systems theory (2020) (Awrejcewicz and Grzelczyk, eds.), reports the latest mathematical advances in the field.
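To give a concrete sense of what "tracking the course of a set of variables interacting with one another over time" means, here is a minimal, purely illustrative sketch (the predator-prey equations and every parameter value are assumptions chosen for the example, not drawn from the works cited): two coupled variables are advanced step by step, each one's rate of change depending on the other.

# Illustrative sketch of a dynamical system: two coupled variables
# (a prey and a predator population) evolving together over time.
def simulate(prey=10.0, predator=5.0, steps=2000, dt=0.01,
             growth=1.0, predation=0.1, reproduction=0.075, death=1.5):
    trajectory = [(0.0, prey, predator)]
    for i in range(1, steps + 1):
        # Coupled rates of change: each variable's change depends on the other.
        d_prey = growth * prey - predation * prey * predator
        d_predator = reproduction * prey * predator - death * predator
        prey += dt * d_prey          # simple Euler integration step
        predator += dt * d_predator
        trajectory.append((i * dt, prey, predator))
    return trajectory

for t, x, y in simulate()[::400]:    # print a few snapshots of the joint evolution
    print(f"t={t:5.2f}  prey={x:6.2f}  predator={y:6.2f}")

Note that the two quantities here are features of the phenomenon itself rather than causally interacting components of a mechanism, which is exactly the contrast Bechtel draws in the passage quoted below.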
With its holistic emphasis, DST violently rejects the mechanistic tradition in biology, describing it as "reductionist." For example, van Regenmortel [START_REF] Regenmortel | Reductionism and complexity in molecular biology[END_REF] (p. 1015) writes:
The reductionist method of dissecting biological systems into their constituent parts has been effective in explaining the chemical basis of numerous living processes. However, many biologists realize that this approach has reached its limits. Biological systems are extremely complex and have emergent properties that cannot be explained, or even predicted, by studying their individual parts.
DST proponents therefore consider that, if we want to try to understand human life, we should examine the entire system in which biological phenomena occur. By contrast, the mechanistic approach seeks to understand a complex whole by breaking it down into its components and locating the phenomenon to be analyzed in certain parts of the system.
In 1998, Bechtel-one of the founders of the mechanistic approach, as noted in the introduction to this chapter-responded to the rejection by DST advocates in an article entitled "Representations and cognitive explanations: assessing the dynamicist's challenge in cognitive science." The thrust of his rebuttal concerns the validity of the explanatory model that forms the basis of DST. He shows that DST, in fact, is looking for laws and that it follows the covering law explanation (Bechtel, 1998, p. 311):
The difference and differential equations in these models are intended to describe patterns of linked change in the values of specified parameters in the course of the system's evolution over time. The parameters do not correspond to components of the system which interact causally. They are, rather, features in the phenomenon itself (e.g., the motivational value a person assigns to a particular consequence). [. . .] In this respect, these DST explanations better fit the alternative, covering law model of explanation presented earlier. In order to distinguish explanations from descriptions, proponents of the covering law model argued that the generalization from which the behavior of the particular instance is to be derived really had to be a law.
The covering law explanation claims (Bechtel, 1998, p. 307) "that a phenomenon was explained when a statement describing it was derived from statements specifying one or more laws and relevant initial conditions [START_REF] Hempel | Aspects of scientific explanation and other essays in the philosophy of science[END_REF]." Bechtel argues that the covering law has been largely discredited, most notably for the life sciences (including cognitive science), for which a mechanistic approach seems better suited.
In 2007, Bechtel rehearsed and elaborated some of these arguments in a new article on "Biological mechanisms: organized to maintain autonomy." He now clearly stated the need to move beyond the mechanistic approach (2007, p. 269):
Mechanistic explanations in biology have continually confronted the challenge that they are insufficient to account for biological phenomena. This challenge is often justified as accounts of biological mechanisms frequently fail to consider the modes of organization required to explain the phenomena of life. [. . .] Although some theorists would construe such self-organizing and self-repairing systems as beyond the mechanistic perspective, I maintain that they can be accommodated within the framework of mechanistic explanation properly construed.
The article, published in a book on Systems biology, clearly presents the two opposing approaches. The first is a systemic holistic approach, whose explanations are supported by the covering law. The second is a mechanistic reductionist approach, which breaks down biological systems into their molecular components. To mitigate this extreme reductionism, Bechtel proposes an approach using autonomy theory (see §9.3 above), defended by [START_REF] Maturana | The tree of knowledge. The biological roots of human understanding[END_REF].
Bechtel's proposal is in fact very similar to what Franck had suggested back in 1995, namely, that part of the solution to biological reductionism can and must be sought in an approach that avoids the dualist opposition between the whole and its parts (Franck, 1995, p. 79):
We leave these two cosmogonies and penetrate into another, and at once the question of reductionism changes meaning, for the aim is no longer to determine if the whole can be reduced to its elementary components, but to determine how the different strata or levels are actually linked-from top to bottom and from bottom to top. 162
In these conditions, choosing between holism and reductionism becomes pointless. However, it will be necessary to find out how to incorporate units at a given level into units at a higher level. This approach offers a better solution to the problem posed in §9.2 on how to take multilevel analysis into account in the mechanistic approach.
More specifically, Bechtel advocates associating the reductionist mechanistic theory with a holistic approach, which can deal with the gaps encountered by the theory (Bechtel, 2007, p. 297):
Ideas such as negative feedback, self-organizing positive feedback, and cyclic organization are critical to explaining the phenomena exhibited by living organisms. [. . .] These critical features are nicely captured in Moreno's conception of basic autonomy in which we recognize living systems as so organized to metabolize inputs to extract matter and energy and direct these to building and repairing themselves.
In other words, the combination of autonomy theory and mechanistic theory is indeed what ought to make it possible to explain the phenomena of human life and memory. This position has since been accepted by most mechanistic theoreticians [START_REF] Kaplan | Dynamical models: an alternative or complement to mechanistic explanation[END_REF][START_REF] Kaplan | The explanatory force of dynamical and mathematical models in neuroscience: a mechanistic perspective[END_REF][START_REF] Kaplan | Moving parts: the natural alliance between dynamical and mechanistic modelling approaches[END_REF][START_REF] Kaplan | Mechanisms and dynamical systems[END_REF]. However, Kaplan concludes his article as follows (2018, p. 278):
In addition to the challenge for the mechanistic approach to dynamic explanation identified earlier, more hard work remains for those wishing to defend a mechanistic perspective. For instance, dynamicists have also appealed to notions of emergence or downward causation to complex dynamical systems to argue for the limitations of the mechanistic perspective [. . .]. To date, defenders of the mechanistic approach have not sufficiently addressed this challenge.
Indeed, some authors, in their defense of the covering law explanation, continue to argue that Dynamical systems theory is independent of the mechanistic tradition [START_REF] Chemero | After the philosophy of mind: replacing scholasticism with science[END_REF][START_REF] Meyer | The non-mechanistic option: defending dynamical explanations[END_REF], but we have seen how this explanation falls short. Nevertheless, the mechanistic approach still faces many difficulties in fully assimilating autonomy theory.
For instance, in a very recent article on "Mechanisms, autonomy and biological explanation" (2021), Bich and Bechtel list the many problems still to be resolved before the two theories can be fully integrated. Comparing these theories, they write (p. 2):
The first group espouses explanation in terms of mechanisms, the second in terms of closed networks of constraints that maintain themselves. Although these two philosophical traditions have largely developed independently of each other, we argue that they should be constructively integrated, as each supplies ingredients the other tradition has ignored or not accounted for in sufficient detail.
In this way, much still needs to be done to achieve integration, but the authors clearly set out the path to follow, one that would lead to a successful association between mechanisms and autonomy.
In order not to oppose the two traditions but to show how they should be constructively integrated, we can say that recent developments in the autonomy tradition, such as those of [START_REF] Moreno | Biological autonomy. A philosophical and theoretical enquiry[END_REF], provide a bridge between them: they show why mechanistic explanation must be supplemented with other explanatory approaches. At the same time, the mechanistic tradition has confronted a number of challenges in specifying the phenomena to which it is applied and their boundaries: there is a difficulty in determining which entities constitute the mechanism and which lie outside it. It also struggles to account for organization: it may well identify the parts of a mechanism, but it does not allow us to understand how those parts succeed in producing its overall organization (Bechtel, 2007, p. 209).
We have now assembled all the elements of the three more general traditions that allow us to understand human life: systems biology (here, mainly the autonomy approach), mechanisms, and hermeneutics. At first sight, they appeared largely independent of one another, leading to three different approaches to human life.
However, we have seen that there has recently been growing interest in building a bridge between the autonomy tradition and the mechanistic one. Collaboration between the two approaches now appears necessary in order to understand how mechanisms are controlled to serve organisms.
As for hermeneutics, we have also seen how [START_REF] Varela | The embodied mind[END_REF] tried to link systemic biology with this approach. In 1996, for example, Varela used first-person data to describe the dynamic structure of lived experience. However, despite the various kinds of experiments, this hypothesis remains a working one, open to criticism. While we have not developed this more philosophical point here, [START_REF] Varela | The embodied mind[END_REF] bring Buddhist thought together with scientific thought in order to understand the embodied mind. This Buddhist thought shares many common points with relational hermeneutics, as Nicholas [START_REF] Davey | The turning word: relational hermeneutics and aspects of Buddhist thought[END_REF] clearly shows. However, Varela never proposes a scientific explanation of Buddhism, which, in fact, he contrasts with science. Even later, [START_REF] Thompson | Mind in life. Biology, phenomenology and the sciences of mind[END_REF], who was a co-author of that earlier book, makes no further reference to this Buddhist tradition.
As we have already shown, a true scientific explanation requires a drastic simplification of the phenomena to be studied [START_REF] Suppe | The semantic conception of theories and scientific realism[END_REF]. Only a small number of parameters abstracted from those phenomena may figure in such an explanation. However, an individual life is far too complex to be explained in this way, and this leads to the notion of comprehension as presented by Dilthey. This notion is no longer a scientific one, but it allows us to understand, in the sense of to comprehend, human lives in all their complexity without any appeal to an explanation that would be too restrictive. If social sciences such as demography have been able to develop event-history analysis, it is at the price of keeping only three main phenomena in their field: births, deaths, and migratory movements (Courgeau, Franck, 2007). To extend their field in order to understand the behaviour of human populations, from the individual to the societal level, would end up embracing everything. Only a comprehensive approach is able to give this view, taking a more ethical, religious, and political view of the society in which the individual lives.
The fact, for a person, of not being able to escape a fate or, for the course of events, of being determined in an ineluctable and
French text: Effet de la volonté de Dieu décidant de toute éternité du destin des hommes, et vouant certains d'entre eux, les élus, à recevoir une grâce particulière conduisant au salut éternel. Au e V siècle, saint Augustin défendit la réalité de la prédestination contre les tenants du pélagianisme. Des théologiens protestants ont soutenu la prédestination des réprouvés. Le concile de Trente réaffirma contre Calvin que la prédestination n'exclut pas le libre arbitre. Par affaibl. Le fait pour une personne de ne pouvoir échapper à un destin ou, pour le cours des choses, d'être réglé de manière inéluctable et fatale. Les effets de la prédestination. Prédestination à la gloire, au malheur, au crime.
See https://en.wikipedia.org/wiki/Methods_of_divination.
Translation by Fincke (2003), p. 111.
English translation by B. Jowett, 2015. Greek text: Δεῖ μέν, εἶπον, ἐκ τῶν ὡμολογημένων τοὺς ἀρίστους ταῖς ἀρίσταις συγγίγνεσθαι ὡς πλειστάκις, τοὺς δὲ φαυλοτάτους ταῖς φαυλοτάταις τοὐναντίον, καὶ τῶν μὲν τὰ ἔκγονα τρέφειν, τῶν δὲ μή, εἰ μέλλει τὸ ποίμνιον ὅτι ἀκρότατον εἶναι, καὶ ταῦτα πάντα γιγνόμενα λανθάνειν πλὴν αὐτοὺς τοὺς ἄρχοντας, εἰ αὖ ἡ ἀγέλη τῶν φυλάκων ὅτι μάλιστα ἀστασίαστος ἔσται. … τὸ δὲ πλῆθος τῶν γάμων ἐπὶ τοῖς ἄρχουσι ποιήσομεν, ἵν' ὡς μάλιστα διασῴζωσι τὸν αὐτὸν ἀριθμὸν τῶν ἀνδρῶν, πρὸς πολέμους τε καὶ νόσους καὶ πάντα τὰ τοιαῦτα ἀποσκοποῦντες, καὶ μήτε μεγάλη ἡμῖν ἡ πόλις κατὰ τὸ δυνατὸν μήτε σμικρὰ γίγνηται.
Τὰ μὲν δὴ τῶν ἀγαθῶν, δοκῶ, λαβοῦσαι εἰς τὸν σηκὸν οἴσουσιν παρά τινας τροφοὺς χωρὶς οἰκούσας ἔν τινι μέρει τῆς πόλεως· τὰ δὲ τῶν χειρόνων, καὶ ἐάν τι τῶν ἑτέρων ἀνάπηρον γίγνηται, ἐν ἀπορρήτῳ τε καὶ ἀδήλῳ κατακρύψουσιν ὡς πρέπει.
καὶ ταῦτά γ' ἤδη πάντα διακελευσάμενοι προθυμεῖσθαι μάλιστα μὲν μηδ' εἰς φῶς ἐκφέρειν κύημα μηδέ γ' ἕν, ἐὰν γένηται, ἐὰν δέ τι βιάσηται, οὕτω τιθέναι, ὡς οὐκ οὔσης τροφῆς τῷ τοιούτῳ.
Περὶ δὲ ἀποθέσεως καὶ τροφῆς τῶν γιγνομένων ἔστω νόμος μηδὲν πεπηρωμένον τρέφειν, διὰ δὲ πλῆθος τέκνων ἡ τάξις τῶν ἐθῶν κωλύῃ μηθὲν ἀποτίθεσθαι τῶν γιγνομένων· ὁρισθῆναι δὲ δεῖ τῆς τεκνοποιίας τὸ πλῆθος, ἐὰν δέ τισι γίγνηται παρὰ ταῦτα συνδυασθέντων, πρὶν αἴσθησιν ἐγγενέσθαι καὶ ζωὴν ἐμποιεῖσθαι δεῖ τὴν ἄμβλωσιν· τὸ γὰρ ὅσιον καὶ τὸ μὴ διωρισμένον τῇ αἰσθήσει καὶ τῷ ζῆν ἔσται.
Τὸ μὲν γὰρ δυνάμενον τῇ διανοίᾳ προορᾶν ἄρχον φύσει καὶ δεσπόζον φύσει, τὸ δὲ δυνάμενον τῷ σώματι ταῦτα πονεῖν ἀρχόμενον καὶ φύσει δοῦλον•διὸ δεσπότῃ καὶ δούλῳ ταὐτὸ συμφέρει.
Amore ha cura della generazione, con unir li maschi e le femine in modo che faccin buona razza; e si riden di noi che attendemo alla razza de cani e cavalli, e trascuramo la nostra.
French text: Par l'explication de ce système, on voit aisément que l'on peut perfectionner les animaux, en les variant de diverses façons. Pourquoi ne travaillerait-on pas aussi pour l'espèce humaine ? Il serait aussi sûr en combinant toutes les circonstances dont nous avons parlé, en réunissant toutes nos règles, d'embellir les hommes, qu'il est constant qu'un habile sculpteur peut faire sortir d'un bloc de marbre un modèle de la belle nature.
We give the page numbers of the second edition (1803), as the author states that the first edition (1801) was prepared in haste and the second edition is far superior.
French text: j'ai pensé que l'identité des lois physiologiques chez l'homme et dans les animaux, m'autorisait à croire à la possibilité de la Mégalanthropogénésie, puisqu'elle existe depuis longtemps dans l'économie rurale.
French text: pourrais-tu négliger un instant la reproduction des grands hommes[?]
For come, let me examine it by all that is probable: how could a thousand or ten thousand or even fifty thousand, at least if they are all equally free and not ruled by one man, stand against so great an army?
Greek text: ἐπεὶ φέρε ἴδω παντὶ τῷ οἰκότι· κῶς ἂν δυναίατο χίλιοι ἢ καὶ μύριοι ἢ καὶ πεντακισμύριοι, ἐόντες γε ἐλεύθεροι πάντες ὁμοίως καὶ μὴ ὑπ᾽ ἑνὸς ἀρχόμενοι, στρατῷ τοσῷδε ἀντιστῆναι; ἐπεί τοι πλεῦνες περὶ ἕνα ἕκαστον γινόμεθα ἢ χίλιοι, ἐόντων ἐκείνων πέντε χιλιάδων. [4] ὑπὸ μὲν γὰρ ἑνὸς ἀρχόμενοι κατὰ τρόπον τὸν ἡμέτερον γενοίατ᾽ ἄν, δειμαίνοντες τοῦτον, καὶ παρὰ τὴν ἑωυτῶν φύσιν ἀμείνονες, καὶ ἴοιεν ἀναγκαζόμενοι μάστιγι ἐς πλεῦνας ἐλάσσονες ἐόντες· ἀνειμένοι δὲ ἐς τὸ ἐλεύθερον οὐκ ἂν ποιέοιεν τούτων οὐδέτερα. δοκέω δὲ ἔγωγε καὶ ἀνισωθέντας πλήθεϊ χαλεπῶς ἂν Ἕλληνας Πέρσῃσι μούνοισι μάχεσθαι. [5] ἀλλὰ παρ᾽ ἡμῖν μὲν μούνοισι τοῦτο ἐστὶ τὸ σὺ λέγεις, ἔστι γε μὲν οὐ πολλὸν ἀλλὰ σπάνιον· εἰσὶ γὰρ Περσέων τῶν ἐμῶν αἰχμοφόρων οἳ ἐθελήσουσι Ἑλλήνων ἀνδράσι τρισὶ ὁμοῦ μάχεσθαι· τῶν σὺ ἐὼν ἄπειρος πολλὰ φλυηρέεις.
Greek text: 'ὦ παῖδες Ἑλλήνων ἴτε, ἐλευθεροῦτε πατρίδ᾽, ἐλευθεροῦτε δὲ παῖδας, γυναῖκας, θεῶν τέ πατρῴων ἕδη, θήκας τε προγόνων: νῦν ὑπὲρ πάντων ἀγών.
Greek text: πλῆθος δὲ ἄρχον πρῶτα μὲν οὔνομα πάντων κάλλιστον ἔχει, ἰσονομίην, δεύτερα δὲ τούτων τῶν ὁ μούναρχος ποιέει οὐδέν· πάλῳ μὲν ἀρχὰς ἄρχει, ὑπεύθυνον δὲ ἀρχὴν ἔχει, βουλεύματα δὲ πάντα ἐς τὸ κοινὸν ἀναφέρει. τίθεμαι ὦν γνώμην μετέντας ἡμέας μουναρχίην τὸ πλῆθος ἀέξειν· ἐν γὰρ τῷ πολλῷ ἔνι τὰ πάντα.
Greek text: Εἰ δ' ἐπὶ τοῦ σώματος τοῦτ' ἀληθές, πολὺ δικαιότερον ἐπὶ τῆς ψυχῆς τοῦτο διωρίσθαι• ἀλλ' οὐχ ὁμοίως ῥᾴδιον ἰδεῖν τό τε τῆς ψυχῆς κάλλος καὶ τὸ τοῦ σώματος.
Latin text: "Fatum est " inquit "sempiterna quaedam et indeclinabilis series rerum et catena volvens semetipsa sese et inplicans per aeternos consequentiae ordines, ex quibus apta nexaque est." Ipsa autem verba Chrysippi, quantum valui memoria, ascripsi, ut, si cui meum istud interpretamentum videbitur esse obscurius, ad ipsius verba animadvertat.
Latin text: Quid est enim libertas? Potestas vivendi ut velis. Quis igitur vivit ut volt, nisi qui recta sequitur, qui gaudet officio, cui vivendi via considerata atque provisa est, qui ne legibus quidem propter metum paret, sed eas sequitur et colit quod id salutare esse maxime iudicat, qui nihil dicit, nihil facit, nihil cogitat denique nisi lubenter ac libere, cuius omnia consilia resque omnes quas gerit ab ipso proficiscuntur eodemque referuntur, nec est ulla res quae plus apud eum polleat quam ipsius voluntas atque iudicium, cui quidem etiam quae vim habere maximam dicitur, Fortuna ipsa cedit, si, ut sapiens poeta dixit, suis ea cuique fingitur moribus? Soli igitur hoc contingit sapienti, ut nihil faciat invitus, nihil dolens, nihil coactus.
Evodius. Now explain to me, if you can, why God has given man free choice of will. For if man had not received this gift, he would not be capable of sin.
Latin text: Evodius. Iam, si fieri potest, explica mihi, quare dederit deus homini liberum arbitrium voluntatis, quod utique si non accepisset, peccare non posset. Augustinus. Iam enim certum tibi atque cognitum est deum dedisse homini hoc, quod dari debuisse non putas? Evodius. Quantum in superiori libro intellegere mihi visus sum, et habemus liberum voluntatis arbitrium et non nisi eo peccamus.
Latin text: Nec illa igitur trinitas, quae nunc non est, imago Dei erit; nec ista imago Dei est, quae tunc non erit: sed ea est invenienda in anima hominis, id est rationali, sive intellectuali, imago Creatoris, quae immortaliter immortalitati eius est insita.
The text was initially published in Latin under the title Meditationes de prima philosophia in 1641. The French translation by the Duc de Luynes and approved by Descartes also includes six series of Objections with the author's replies.
French text: Et ainsi l'indifférence qui convient à la liberté de l'homme est fort différente de celle qui convient à la liberté de Dieu. Et il ne sert ici de rien d'alléguer que les essences des choses sont indivisibles; car, premièrement il n'y en a point qui puisse convenir d'une même façon à Dieu et à la créature; et enfin l'indifférence n'est point de l'essence de la liberté humaine, vu que nous ne sommes pas seulement libres, quand l'ignorance du bien et du vrai nous rend indifférents, mais principalement aussi lorsque la claire et distincte connaissance d'une chose nous pousse et nous engage à sa recherche.
French text: La liberté philosophique consiste dans l'exercice de sa volonté, ou du moins (s'il faut parler dans tous les systèmes) dans l'opinion où l'on est que l'on exerce sa volonté. La liberté politique consiste dans la sureté, ou du moins dans l'opinion que l'on a de sa sureté.
French text: La liberté individuelle, je le répète, voilà la véritable liberté moderne. La liberté politique en est la garantie; la liberté politique est par conséquent indispensable.
French text: Mais ce qui prédestinait tout particulièrement les Stoïciens à se porter garants des spéculations astrologiques et à leur chercher des raisons démonstratives, c'est leur foi inébranlable dans la légitimité de la divination, dont l'astrologie n'est qu'une forme particulière.
Papyrus no. 804, Greek text: Φυλάττου ἕως ἡμερῶ(ν) μ χάριν τοῦ Ἄρεως
Original Latin text: Duae viae sunt, atque esse possunt, ad inquirendam et inveniendam veritatem. Altera a sensu et particularibus advolat ad axiomata maxime generalia, atque ex iis principiis eorumque immota veritate judicat et invenit axiomata media; atque haec via in usu est. Altera a sensu et particularibus excitat axiomata, ascendendo continenter et gradatim, ut ultimo loco perveniatur ad maxime generalia; quae via vera est, sed intentata.
French text: Je constate volontiers, et même avec plaisir, que peu de gens se soucient aujourd'hui de l'astrologie. Si elle est encore vivante et agissante dans les pays d'orient, chez nous elle appartient au passé et n'intéresse plus que les historiens.
Individual astrology, of course, has not disappeared in our time, but it remains confined within its former scope, which we assessed in critical terms earlier.
Map 3.1. Breakdown by country of search interest for astrology (grey) and astronomy (black.) Data source: Google Trends.
It seems that in Britain, as in Germany or France, belief in astrology is prevalent among particular social groups which, as we have
See Jacquard (1983) for the different usages of "heritability."
In German: Archiv für Rassen-und Gesellschafts-Biologie.
In German: Deutsche Gesellschaft für Rassenhygiene.
In German: Kaiser-Wilhelm-Institut für Anthropologie, Menschliche Erblehre und Eugenik.
In German: Gesetz zur Wiederherstellung des Berufsbeamtentums and Gesetz zur Verhütung erbkranken Nachwuchses.
In German: Erbgesundheitsgerichte.
In German: Reichsbürgergesetz.
In German: Gesetz zum Schutze des deutschen Blutes und der deutschen Ehre.
"das Schwert unserer Wissenschaft."
The Rockefeller Foundation supported eugenics research in Germany in the 1920s and continued to do so when the Nazis came to power.
The second prize was awarded to the Indian Premier Indira Gandhi, whose government tightened mandatory birth control procedures, including sterilization.
In their paper, Capron et al. state that Vetta found this error in 1974 and discussed it with Jinks, who acknowledged it. However, the editor of the Psychological Bulletin, in which the Jinks and Fulker paper originally appeared, refused to publish the correction. It was eventually included as an appendix to[START_REF] Hirsch | To "unfrock the charlatans[END_REF] and later in[START_REF] Capron | Misconceptions of biometrical IQists[END_REF]
This figure is provided, for example, in http://fr.slideshare.net/GenomeRef (as of 2015).
Twin models may be extended to other relatives, such as parents, siblings, spouses or offspring.
And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events when they are fulfilled, but when they fail, though this happens much oftener, neglect and pass them by.60
In other words, by the early seventeenth century, Bacon had already debunked astrology and other divination methods, whose axioms are not verified by experiment.
58 Latin text: ...aut enim sunt rerum nomina, quae non sunt (quemadmodum enim sunt res, quae nomine carent per inobservationem; ita sunt et nomina, quae carent rebus, per suppositionem phantasticam), aut sunt nomina rerum, quae sunt, sed confusa et male terminata, et temere et inaequaliter a rebus abstracta.
59 For more details: [START_REF] Courgeau | Are the four Baconian idols still alive in demography?[END_REF].
60 Latin text: Eadem ratio est fere omnis superstitionis, ut in astrologicis, in somniis, ominibus, nemesibus, et hujusmodi; in quibus homines delectati hujusmodi vanitatibus advertunt eventus, ubi emplentur; ast ubi fallunt, licet multo frequentius, tamen negligunt et praetereunt.
This form of induction should not be confused with ampliative induction, about which Bacon writes (Novum Organon, I, 105): "For the induction which proceeds by simple enumeration is childish; its conclusions are precarious and exposed to peril from a contradictory instance; and it generally decides on too small a number of facts, and on those only which are at hand." (Latin text: Inductio enim quae procedit per enumerationem simplicem res puerilis est, et precario concludit, et periculo exponitur ab instantia contradictoria, et plerumque secundum pauciora quam par est, et ex his tantummodo quae praesto sunt, pronunciat.) Yet it is this form of induction that was recommended by the empiricist tradition embodied by John Stuart Mill (1906) [START_REF] Galton | Hereditary improvement[END_REF] and many others.
French text: Cette échelle permet, non pas à proprement parler la mesure de l'intelligence, -car les qualités intellectuelles ne se mesurent pas comme des longueurs, elles ne sont pas superposables, -mais un classement, une hiérarchie entre des intelligences diverses; et pour les besoins de la pratique, ce classement équivaut à une mesure.
Latin text: … sunt rerum nomina, quae non sunt … aut sunt nomina rerum, quae sunt, sed confusa et male terminata, et temere et inaequaliter a rebus abstracta.
For more details on these discoveries, see L'hérédité sans gènes, 2013.
French text: …l'hérédité n'est pas écrite dans l'ADN. Elle résulte des tirages de la sélection naturelle. Ce sont donc les principes de cette sélection qu'il faut comprendre, plutôt que d'essayer désespérément de lire dans les gènes ce qui n'y est pas écrit.
French text:… appliquer strictement la théorie de l'évolution aux populations des cellules et de molécules, comme nous le faisons pour les populations de plantes et d'animaux.
Latin text: "Nihil," inquit, "equidem novi, nec quod praeter ceteros ipse sentiam; nam cum antiquissimam sententiam, tum omnium populorum et gentium consensu comprobatam sequor. Duo sunt enim divinandi genera, quorum alterum artis est, alterum naturae."
Latin text: Atque ut in seminibus vis inest earum rerum, quae ex iis progignuntur, sic in causis conditae sunt res futurae, quas esse futuras aut concitata mens aut soluta somno cernit aut ratio aut coniectura praesentit.
Latin text: Fatum autem id appello, quod Graeci εἱμαρμένη, id est ordinem seriemque causarum, cum causae causa nexa rem ex se gignat. Ea est ex omni aeternitate fluens veritas sempiterna. Quod cum ita sit, nihil est factum quod non futurum fuerit, eodemque modo nihil est futurum cuius non causas id ipsum efficientes natura contineat.
Latin text: Ego enim sic existimo, si sint ea genera divinandi vera, de quibus accepimus quaeque colimus, esse deos, vicissimque, si di sint, esse qui divinent.
French text: C'est de révolution copernicienne qu'il s'agit. En ce sens que, jusqu'à présent, et sous certains rapports, l'ethnologie a laissé les cultures primitives tourner autour de la civilisation occidentale, et d'un mouvement centripète, pourrait on dire.
French text: En effet, tout est permis si Dieu n'existe pas, et par conséquent l'homme est délaissé, parce qu'il ne trouve ni en lui, ni hors de lui une possibilité de s'accrocher. Il ne trouve d'abord pas d'excuses. Si, en effet, l'existence précède l'essence, on ne pourra jamais expliquer par référence à une nature humaine donnée et figée ; autrement dit, il n'y a pas de déterminisme, l'homme est libre, l'homme est liberté. Si, d'autre part, Dieu n'existe pas, nous ne trouvons pas en face de nous des valeurs ou des ordres qui légitimeront notre conduite. Ainsi, nous n'avons ni derrière nous, ni devant nous, dans le domaine lumineux des valeurs, des justifications ou des excuses. Nous sommes seuls, sans excuses. C'est ce que
Greek text: ὅτι τῆς ποιήσεώς τε καὶ μυθολογίας ἡ μὲν διὰ μιμήσεως ὅλη ἐστίν, ὥσπερ σὺ λέγεις, τραγῳδία τε καὶ κωμῳδία, ἡ δὲ δι᾽ ἀπαγγελίας αὐτοῦ τοῦ ποιητοῦ -εὕροις δ᾽ ἂν αὐτὴν μάλιστά που ἐν διθυράμβοις- ἡ δ᾽ αὖ δι᾽ ἀμφοτέρων ἔν τε τῇ τῶν ἐπῶν ποιήσει, πολλαχοῦ δὲ καὶ ἄλλοθι, εἴ μοι μανθάνεις.
French text: Et si l'épopée, en dépit d'un système narratif commun qu'elle partage avec le roman, doit être considérée comme un genre nettement différencié, c'est qu'elle donne à la formation du langage qui l'engendre la signification d'un commencement absolu.
French text: …aucun des rares illustres successeurs d'Aristote n'a réussi à aller plus loin que l'auteur de la Poétique, chacun s'ingéniant au contraire à rendre les problèmes encore plus insondables que ne les avait déjà rendus son prédécesseur
French text: …tout texte est en effet un acte communicationnel ; tout texte a une structure à partir de laquelle on peut extrapoler des règles ad hoc ; tout texte … se situe par rapport à d'autres textes, donc possède une dimension hyper textuelle ; tout texte enfin ressemble à d'autres textes
French text: …les différents genres de façon descriptive, en se basant sur leurs éléments, mais les déduisent d'un concept précis (dans le cas de la tragédie, du concept de nécessité). Cette nécessité, le héros tragique l'affronte comme son destin avec cette « indépendance morale » que Schlegel, dans le même passage, reconnaît à Prométhée et à Antigone, héros tragiques, mais dénie au héros épique Achille.
German text: Got ist todt! Got bleibt todt! Und wir haben ihn getödtet!
French text: L'unité de temps n'est pas plus solide que l'unité de lieu. L'action, encadrée de force dans les vingt-quatre heures, est aussi ridicule qu'encadrée dans le vestibule. Toute action a sa durée propre comme son lieu particulier. Verser la même dose de temps à tous les événements ! appliquer la même mesure sur tout ! On rirait d'un cordonnier qui voudrait mettre le même soulier à tous les pieds. Croiser l'unité de temps à l'unité de lieu comme les barreaux d'une cage, et y faire pédantesquement entrer, de par Aristote, tous ces faits, tous ces peuples, toutes ces figures que la providence déroule à si grandes masses dans la réalité ! c'est mutiler hommes et choses, c'est faire grimacer l'histoire.
French text : …à penser (par expérience et par polyphénie) et à faire advenir […] en solitaire, de nouvelles formes de salut.
Greek text: ἡ δὲ κωμῳδία ἐστὶν ὥσπερ εἴπομεν μίμησις φαυλοτέρων μέν, οὐ μέντοι κατὰ πᾶσαν κακίαν, ἀλλὰ τοῦ αἰσχροῦ ἐστι τὸ γελοῖον μόριον. τὸ γὰρ γελοῖόν ἐστιν ἁμάρτημά τι καὶ αἶσχος ἀνώδυνον καὶ οὐ φθαρτικόν, οἷον εὐθὺς τὸ γελοῖον πρόσωπον αἰσχρόν τι καὶ διεστραμμένον ἄνευ ὀδύνης.
French text : Molière a renoncé à la primauté de l'intrigue et a déplacé l'enjeu de la comédie vers les caractères. Il s'agit alors non plus de raconter une histoire en faisant rire, mais de décrire des hommes en faisant rire.
French text : … l'image réfléchie de leur façon de vivre et de penser, de leur culture, de leurs désirs et de leurs problèmes, de leurs valeurs et de leurs limites, de tout ce qui baignait leur existence et lui donnait un sens, et, par-delà encore, des réactions plus universellement humaines qui étaient les leurs devant les grands problèmes de notre destin …
French text : Cependant, un mariage avec Ishtar n'aurait pas réglé la question de succession, car leurs enfants auraient reçu de leur mère un héritage ambigu : Ishtar est une déesse du sexe, mais elle est aussi une déesse de la guerre et de la mort, deux faces de la même monnaie présentées sous un syntagme métonymique : le sexe augmente la population et la guerre la diminue.
French text : L'on dit qu'il fist mille débauches, mais que lors qu'on pensoit qu'il eust du tout donné son coeur aux licences mondaines, il alla se jetter entre ces
pères qui lui ont donné l'habit. Il a persévéré avecque tant de constance, que sa mort a esté le bout de sa pénitence.
French text : Là-dessus ce grand prédicateur tourna les yeux en la teste, demeura longtemps comme esvanouy, se reprend pour s'estendre sur les douleurs de la Passion, desquelles il fit comparaison avec toutes douleurs dont il peut se souvenir, mesprisant toute sorte de fievres et de maladies, qu'il cotta de rang, et puis les blessures legeres et les autres maux ; là il se pasma pour la seconde fois, et tout transporté de fureur, tira de sa poche une corde faite en licol avec le noeud courant ; il se la mit au col, tirant la langue, et pour certains se fust estranglé s'il eut tiré bien fort ; les compagnons de la petite observance y accoururent et lui osterent la corde du licol. Tout la voute retentissait de cris des spectateurs, qui avaient changé les ris en plaintes, l'entree comique en tragédie, laquelle fut toutefois sacrifice non sanglant.
French text: Il se confirme tout d'abord que la compréhension consiste bien, en partant des expériences vécues, à construire l'ensemble qui les réunit et, de ce qui n'était qu'une simple succession, fait émerger proprement une vie, c'est-à-dire une totalité orientée vers une fin qui donne sa signification à toutes les étapes ; …
Greek title: τὰ κατὰ Ἡρακλείδην τὸν Μυλασσῶν βασιλέα.
French text : Genre faussement simple, la biographie est située au carrefour des représentations de la vie humaine ; elle offre un champ où se confrontent les paradigmes et les compétences déployés par les sciences humaines, la spiritualité profane ou sacrée et les formes symboliques propres à la littérature.
Of the fifty lives recounted, only four are not paired: Artaxerxes, Aratos, Galba, and Otho.
Greek text: οὔτε γὰρ ἱστορίας γράφομεν, ἀλλὰ βίους, οὔτε ταῖς ἐπιφανεστάταις πράξεσι πάντως ἔνεστι δήλωσις ἀρετῆς ἢ κακίας, ἀλλὰ πρᾶγμα βραχὺ πολλάκις καὶ ῥῆμα καὶ παιδιά τις ἔμφασιν ἤθους ἐποίησε μᾶλλον ἢ μάχαι μυριόνεκροι καὶ παρατάξεις αἱ μέγισται καὶ πολιορκίαι πόλεων.
French text: Le caractère propre du Livre de raison, quand il était bien tenu, était de présenter en quelques traits, et avec simplicité, tout ce qui moralement et matériellement constituait la famille et le foyer. Sur ses pages on inscrivait la généalogie des ancêtres, la biographie des parents, les naissances, mariages et décès, les principaux événements du ménage, l'accroissement de ce ménage, c'està-dire l'emploi de l'épargne, l'inventaire des biens, les derniers conseils laissés aux enfants.
French text : Plutôt que d'une « opposition exclusive » entre explication et compréhension, serait il juste de parler ici, comme le fait explicitement Dilthey, d'une « dépendance réciproque entre les deux types de démarches » : d'une part, nous venons de l'apercevoir, l'explication appelle la compréhension pour achever le projet d'intelligibilité qui la définit ; d'autre part et réciproquement la mise en lumière de relations causales est un des moyens qui révèlent entre les divers moments d'un processus ou les divers aspects d'une époque cette interdépendance qui fait d'eux les éléments d'un ensemble interactif auquel s'applique alors la démarche compréhensive.
German text: Vorlesung. Die Wissenschaft denkt nicht im Sinne des Denkens der Denker. Aber daraus folgt keineswegs, daßt das Denken sich nicht an die Wissenschaften zu kehren brauche. Der Satz "Die Wissenschaft denk nichts" enthält keinen Freibrief, der dern Denken erlaubte, sich gleichsam freihändig dadurch zu bewerkstellingen, daß es sich etwas ausdenkt.
French text: L'étonnant, ici, est qu'Augustin et Aristote ne se font pas seulement face en tant que premier phénoménologue que premier cosmologue, mais en tant que portés par deux courants archaïques, issus de sources différentes la source grecque et la source biblique -, qui ont ultérieurement mêlé leurs eaux dans la pensée de l'occident.
French text: La démographie historique, c'est-à-dire la démographie en perspective temporelle, met en tableau l'évolution biologique de l'humanité considérée comme une seule masse. En même temps, elle fait apparaître des rythmes mondiaux de population qui installent la longue durée à l'échelle du demi-millénaire et remettent en question la périodisation de l'histoire traditionnelle.
The author undertook this analysis, in a less elaborate form and in French, in[START_REF] Courgeau | La mesure dans les sciences de la population[END_REF].
French text: Ainsi, joignant la rigueur des démonstrations de la science à l'incertitude du hasard, et conciliant ces choses en apparence contraires, elle peut, tirant son nom des deux, s'arroger à bon droit ce titre stupéfiant : La Géométrie du Hasard.
Because of errors in his reasoning, Graunt actually arrived at an estimate of 100,000.
For details of the rates computed, see[START_REF] Landry | Traité de démographie[END_REF].
French text: C'est la notion fondamentale de la probabilité évaluée, qui me semble directement irrationnelle et même sophistiquée: je la regarde comme essentiellement impropre à régler notre conduite en aucun cas, si ce n'est tout au plus dans les jeux de hazard. Elle nous amènerait habituellement, dans la pratique, à rejeter, comme numériquement invraisemblable, des événements qui vont pourtant s'accomplir.
For more details of the implementation of regression methods, see[START_REF] Courgeau | Probability and Social sciences. Methodological relationships between the two approaches[END_REF].
French text: L'homme moyen ainsi défini, bien loin d'être en quelque sorte le type de l'espèce, serait tout simplement un homme impossible, ou du moins rien n'autorise jusqu'ici à le considérer comme possible.
French text: Autrement dit il est impossible, quand on n'a pas recueilli les histoires socio-professionnelles des individus, de les reconstituer à partir des seuls flux de mobilité.
French text: … l'inscription de toute histoire et de toute mémoire individuelles dans une histoire et mémoire collectives.
French text: Cette inclination à se faire l'idéologue de sa propre vie en sélectionnant, en fonction d'une intention globale, certains événements significatifs et en établissant entre eux des connexions propres à leur donner cohérence, comme celles qu'impliquent leur institution en tant que causes ou, plus souvent, en tant que fins, trouve la complicité naturelle du biographe que tout, à commencer par ses dispositions de professionnel de l'interprétation, porte à accepter cette création artificielle de sens.
French text: Et c'est pitié, en y revenant plus de vingt ans après, que de réaliser à quel point la superbe intelligence qui fut celle de Bourdieu a pu se dévoyer dans
French text: dans le meilleur des cas, on peut partir d'une seule ou de plusieurs variables bien connues, déterminer des catégories susceptibles de définir des sous-populations homogènes (ou qu'on suppose telles) et mesurer les différences démographiques entre ces populations. Mais la détermination de groupes homogènes n'est pas toujours aussi simple, car on peut vouloir identifier des sous-ensembles qui aient en commun un grand nombre de caractéristiques […]
133 In Denmark, a single identification number for each individual provides an interconnection for 35 statistical registers (Thygesen, 1983).
French Text: Étant donné les difficultés pratiques, l'on ne manquera pas de se demander si le problème posé est soluble.
French text: [. . .] qui conserve toutes ses caractéristiques et les mêmes caractères tant que le phénomène se manifeste.
In this first paper on the topic, de Finetti called it equivalence.
French text: Les études classiques sur la mémoire sont demeurées étonnamment positivistes, c'est-à-dire limitées aux entrées et aux sorties de la boite noire [. . .], en faisant varier avec une grande ingéniosité les divers facteurs du stimulus mais sans dépasser les observables pour chercher à reconstituer l'intérieur de la boite.
Even if errors in the dating of past events are frequent, apparently these do not affect their logical sequence, or only very slightly so. This sequence is correctly memorized, and the errors only form a kind of background noise, which does not prevent coherent information from being drawn from all sources. Thus memory seems to be reliable where analysis needs it to be.
French text: Au reste si la mémoire collective tire sa force et sa durée de ce qu'elle a pour support un ensemble d'hommes, ce sont cependant des individus qui se souviennent, en tant que membres du groupe.
German text: Eine Haupteigenschaft des Nervengewebes ist das Gedächtnis, d. h. ganz allgemein die Fähigkeit, durch einmalige Vorgänge dauernd verändert zu werden, was einen so auffälligen Gegensatz gibt zum Verhalten einer Materie, die eine Wellenbewegung durchläßt und darauf in ihren früheren Zustand zurückkehrt. Eine irgendwie beachtenswerte psychologische Theorie muß eine Erklärung des >Gedächtnisses< liefern.
German text: Das Gedächtnis ist dargestellt durch die zwischen den ψ-Neuronen vorhandenen Bahnungen.
German text: Du weißt, ich arbeite mit der Annahme, daß unser psychischer Mechanismus durch Aufeinanderschichtung entstanden ist, indem von Zeit zu Zeit das vorhandene Material von Erinnerungsspuren eine Umordnung nach neuen Beziehungen, eine Umschrift erfährt. Das wesentlich Neue an meiner Theorie ist also die Behauptung, daß das Gedächtnis nicht einfach, sondern mehrfach vorhanden ist, in verschiedenen Arten von Zeichen niedergelegt.
For Freud, consciousness is a mere lighting-up, an "internal sense organ," whose only role is to throw light on existing associations resulting from resemblances and contiguities between unconscious memories. This means that he denies to conscious activity what for most contemporary authors is its essential characteristic, i.e., the constitution of thought, which is a real constructive activity. Freudism does not consider the problem of intelligence, which is a great pity, for consideration of the question of awareness in the act of comprehension and of the relationship between unconscious intellectual schemas and conscious "reflection" would certainly have simplified the theory of the affective unconscious.145
144 French text: Quand on voit MM. Stuart Mill, Herbert Spencer et Bain en Angleterre; des physiologistes, M. Luys et M. Vulpian en France, en Allemagne, avant eux, Herbart et Miller, ramener tous nos actes psychologiques à des modes divers d'association entre nos idées, sentiments, sensations, désirs, on ne peut s'empêcher de croire que cette loi d'association est destinée à devenir prépondérante dans la psychologie expérimentale, à rester pour quelque temps au moins, le dernier mode d'explication des phénomènes psychiques.
145 French text: D'une manière générale il conçoit la conscience comme un simple éclairage (un organe des sens interne) dont le rôle est uniquement de projeter sa
French text: Une autre conséquence non moins importante, c'est que la situation analytique devient une situation de dialogue et la relation analytique, un rapport de compréhension.
French text: La décroissance exponentielle et l'immortalité ne faisant pas bon ménage, il apparaît ainsi clairement que la question de l'espérance de vie de nos représentations mentales inconscientes constitue un second motif d'abandon définitif, par les neuroscientifiques, de la conception freudienne d'inconscient.
French text: La mémoire de l'enfant de deux à trois ans est encore un mélange de récits fabulés et de reconstitutions exactes mais chaotiques, et la mémoire organisée ne se développe qu'avec les progrès de l'intelligence entière.
French text: [. . .] il existerait un réseau normal absolument différent de ces autres circuits, dont le contenu correspondrait à chaque instant à la représentation mentale dont nous faisons l'expérience consciente. Nous appellerons ce réseau neuronal unique en son genre l'"espace de travail global conscient".
French text: Je crédite Freud d'avoir inventé une méthode de traitement, la cure analytique, dans laquelle le matériel utilisé pour soigner repose exclusivement sur la manipulation des attitudes mentales conscientes du patient et de son soignant, la psychanalyse. Cette reconnaissance du rôle à proprement parler vital des
To move to a world beyond "p < 0.05," we must recognize afresh that statistical inference is not-and never has been-equivalent to scientific inference.
French text: Quant à la notion d'unité narrative de la vie, il faut aussi y voir un mixte instable entre fabulation et expérience vive. C'est précisément en raison du caractère évasif de la vie réelle que nous avons besoin du recours à la fiction pour organiser cette dernière rétrospectivement dans l'après-coup, quitte à tenir pour révisable et provisoire toute figure de mise en intrigue empruntée à la fiction ou à l'histoire.
Thus, because we are rapidly advancing along this non-sustainable course, the world's environment problems will get resolved, in one way or another, within the lifetimes of the children and young adults alive today. The only question is whether they will become resolved in pleasant ways of our own choice, or in unpleasant ways not of our choice, such as warfare, starvation, disease epidemics, and collapse of societies.
French text: Tel est le but de l'ouvrage que j'ai entrepris, et dont le résultat sera de montrer par les faits, comme par le raisonnement, que la nature n'a marqué aucun terme au perfectionnement des facultés humaines ; que la perfectibilité de l'homme est réellement indéfinie ; que les progrès de cette perfectibilité, désormais indépendants de la volonté de ceux qui voudraient les arrêter, n'ont d'autre terme que la durée du globe où la nature nous a jetés.
French text: L'industrie prépare les peuples à l'activité collective comme à tous les genres d'activité nécessaires au développement et à la conservation de l'espèce. Il ne faut qu'ouvrir les yeux pour voir que, de notre temps, les populations les plus industrieuses et les plus cultivées sont aussi celles qui ont le plus de vie et de capacité politique.
French text: Dans la France du XVIII e siècle, la population était conditionnée par la productionpar la production des subsistances notamment --, et elle variait, si l'on veut s'en tenir à une approximation, comme la production. Dans la France d'aujourd'hui, les mouvements de la population apparaissent comme indépendants, dans une très grande mesure, de la production ; la population ne varie pas en raison des variations de la richesse.
French text: Nous quittons ces deux cosmogonies et nous pénétrons dans une autre, et du coup la question du réductionnisme change de sens car il ne s'agit plus de savoir si l'on peut réduire le tout à ses composantes élémentaires, mais il s'agit de savoir comment s'articulent véritablement les différents étages ou niveaux, du haut vers le bas et du bas vers le haut.
thanks to their connection to the broader object of population sciences.
In sum, the approaches and methods used by philosophical hermeneutics and population science seem incompatible. We may ask, however, whether there is a way to link them that would allow progress in the study of life stories. Many authors, including Dilthey himself, proposed advances in this field. We shall examine them after discussing human memory as the source of life stories.
Chapter 8 Autobiographical memory and its critics
Whenever we face a new problem, we recall similar ones encountered in the past and try to solve it with all the information available to us. Memory therefore serves to reveal our life story. This requires us to take a closer look at how memory plays such a role, and at the scientific methods used to demonstrate it.
While philosophers ever since antiquity have discussed memory and sought to incorporate it into their theories of the world, the first scientific studies on memory date from the nineteenth century.
In Theaetetus (ca. 360 B.C.E.), Plato proposes two images to represent memory: the wax tablet and the aviary. He introduces them to try to solve the problem of false judgments. We shall not discuss the arguments that lead him to reject both possible solutions, but we must observe that he never attempts a scientific test of their broader validity. His philosophical approach is of great interest, but we shall set it aside here to focus on the more scientific methods that offer further insights into memory, most notably the psychological approaches.
In his 1879 article on "Psychometric experiments," Galton paved the way for such scientific study. His work made it possible to develop more satisfactory psychological approaches to the recollection of past events. By contrast, his approach to visual memory (1880) yielded far more questionable results. Examining the various psychological approaches to memory, we see that the classic studies varied the factors of the stimulus with great ingenuity, but without going beyond the observables to try to reconstruct the inside of the box.139 Piaget and Inhelder took a different path. While their research concerned memory processes in general and not autobiographical memory in particular, their results, obtained for children of different ages, provide a better understanding of the link between memory formation and intelligence.
The authors first show that memory is not acquired at birth but improves with age, a process closely dependent on the development of intelligence. They then emphasize another result that is relevant to the rest of our chapter: there is no intrinsic difference, or difference in content, between a false memory and a true one (p. 468). Lastly, memory is, for the most part, a reconstructive activity: the past is not preserved in a sort of storehouse of imperishable memories; rather, memories are continuously renewed by other experiences. [START_REF] Atkinson | Human memory: a proposed system and its control processes[END_REF], taking into account the duration of information preservation, proposed a now classic model of three types of memory. The first, sensory memory, has a very brief life span and is barely distinguishable from perception. The second, short-term memory, consists in the ability to immediately recall information that has just been perceived. It fades just as rapidly and is lost in 30 seconds, unless it is transferred to long-term memory through control processes. The third type, long-term memory, allows the extended preservation of information no longer present in our environment.
Many later studies have led to distinctions between several types of long-term memory, designated by terms that vary from one author to another. Two broad forms of long-term memory introduced by [START_REF] Tulving | Episodic and semantic theory[END_REF] deserve mention here. The first, episodic memory, concerns specific events that have occurred in a person's life. Also called autobiographical memory, it is of the
that this law of association is bound to become dominant in experimental psychology, and to remain, at least for a while, the final way to explain psychic phenomena.144
While Freud's approach reshaped the notion of association, it remained too dependent on it. As noted above, the advent of the cognitivist era caused associationism to lose much of its appeal: association became a mere chapter of psychology again, but associationism became totally obsolete. We now know that association, far from being a primal fact, always results from a process of information encoding and from the mnesic structure that encodes it. Moreover, Mill's empiricism, with its notion of induction as generalization, had already been rejected back in 1620 by Bacon, who had replaced it with a fully inductive approach (based on Bacon's definition of the term), as described in Chapter 3.
Freud's notion of association turns up in one of the most insightful critiques of his work by the cognitivist Piaget (1965, p. 201):
General conclusion
second edition of his work (1796), and he rewrote the chapter on property (vol. 2, book VIII).
In 1795, a year after Condorcet's death, the first edition of his Esquisse d'un tableau historique des progrès de l'esprit humain was published in France. The English translation, Outline of an historical view of the progress of the human mind, appeared just a year later. The book was merely an initial version of the larger work that he had been preparing since 1772 and had been planning to publish. For details, see the volume edited by Schandeler and Crepel (2004). Like Godwin, Condorcet developed the notion of perfectibility of the human species (p. 4, English translation):
Such is the object of the work I have undertaken; the result of which will be to show, from reasoning and from facts, that no bounds have been fixed to the improvement of the human faculties; that the perfectibility of man is absolutely indefinite; that the progress of this perfectibility, henceforth above the control of every power that would impede it, has no other limit than the duration of the globe upon which nature has placed us.153 He showed how progress was achieved, at a varying pace, throughout human history. Accordingly, he believed that the average length of human life would increase up to a limit that he could not determine. His theory, however, was not based on Godwin's political concepts, and it led him to view such evolution as a general characteristic of the human species.
In contrast, Thomas Robert Malthus (1766-1834) proposed a theory of population in a philosophical pamphlet published in 1798. In his book of 1803, he developed it into a theological concept, pitting it against the notion of perfectibility.
At the very outset of the first essay, he clearly formulated the issue he set out to address (p. 2):
It has been said, that the great question is now at issue, whether man shall henceforth start forwards with accelerated velocity towards illimitable, and hitherto unconceived improvement; or be condemned to a perpetual oscillation between happiness and misery, and after every effort remain still at an immeasurable distance from the wished-for goal.
Rather than attack the proponents of the first scenario, he began by stating his case for the second. He argued that two factors were involved at the same time: demographic phenomena (above all, the then high fertility rate) and the "subsistence" needed for humanity's survival (here, mainly the earth's power to produce food for man). His first and main observation (p. 14) was that:
Population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetical ratio. A slight acquaintance with numbers will shew the immensity of the first power in comparison to the second.
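To make the contrast concrete, here is a minimal numerical sketch (an illustration added for clarity, using Malthus' own figure of a population that, unchecked, doubles every twenty-five years):

$$P_t = P_0 \cdot 2^{t}, \qquad S_t = S_0\,(1 + t), \qquad \frac{P_t}{S_t} = \frac{P_0}{S_0}\cdot\frac{2^{t}}{1+t} \to \infty \quad \text{as } t \to \infty,$$

where $t$ counts twenty-five-year periods: population runs through the geometrical sequence 1, 2, 4, 8, 16, …, while subsistence runs through an arithmetical sequence such as 1, 2, 3, 4, 5, …, so the gap between the two widens without bound.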
From these premises, it followed that if the human population did not act against them, it was bound to disappear sooner or later. Malthus naturally undertook to consolidate his theory by a deeper analysis of the society in which he lived, particularly in his 1803 text.
In the 1798 pamphlet, he went on to offer a lengthy criticism of Condorcet, Godwin, and other authors who defended the perfectibility of man. However, in Godwin's 1820 book entitled An enquiry concerning the power of increase in the numbers of mankind, Malthus' criticism was often considered as baseless and his assertions, such as the one concerning the population growth and subsistence, as resting on false postulates.
Far more significantly, these arguments put forward by a late eighteenth-century Anglican minister are indicative of a religious approach to demographic issues, which prevented him from discerning the major revolutions already under way. The first was the industrial revolution, which began in Great Britain in 1760 with the coal industry and steam energy: Malthus regarded it as secondary with respect to agriculture. The second was the
…it would be hard for the number of children issued from the same marriage to exceed five. If we deduce from this number:
Cases of sterility, widowhood, delays in marriage, accidents, interruptions …………...1.5
Deaths before marriageable age (the figure today greatly exceeds 50 p. 100) …………2.5
Unmarried persons ……………………………………………………………………...0.5
With the population thus increasing by only one-tenth in each period of about 30 years, it would double in three centuries.155
[START_REF] Sauvy | A propos d'un calcul démographique de Proudhon[END_REF] showed the flaws in this scenario, which would entail the extinction of the French population.
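A quick arithmetical check (added here as an illustration of Sauvy's point, not part of the original texts) shows why the scenario is untenable:

$$5 - (1.5 + 2.5 + 0.5) = 0.5$$

children per marriage would survive, marry, and reproduce; two parents leaving only half a reproducing child is far below replacement, so Proudhon's own figures imply rapid decline rather than the modest one-tenth increase per thirty-year period he announces.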
In Système, Proudhon also examined the contraception methods available in his day (pp. 450-464): the Fourier system (artificial sterility), Doctor G.'s system (extraction of the fetus, or eradication of seeds), the interruption system, and the three-year breastfeeding system; but he rejected them all because he believed that the problem was unresolved. He did not recognize the decline in fertility due to industrialization and higher living standards [START_REF] Charbit | Proudhon et le piège malthusien[END_REF], and his outlook ultimately echoed Malthusian pessimism. Many authors of the second half of the nineteenth century clearly noted the decline in fertility among workers. Leroy-Beaulieu (1868, p. 103), for example, recognized that:
155 French text: … le mariage ayant lieu pour l'homme à 28 ans révolus, pour la femme à 21 ; l'usage des nourrices disparaissant dans l'égalité ; la durée de l'allaitement étant réduite à 15 ou 18 mois ; la période de fécondité pouvant aller de 15 à 18 mois, le nombre des enfants issus d'un même mariage s'élèverait difficilement au dessus de cinq. Si l'on déduit de ce nombre :
Cas de stérilité, veuvages, retards dans le mariage, accidents, interruptions………...1.5
Morts avant l'âge nubile (le chiffre dépasse aujourd'hui de beaucoup 50 p. 100) ….2.5
Célibataires …………………………………………………………………………..0.5
La population n'augmentant ainsi que d'un dixième par chaque période d'environ 30 ans, le doublement aurait lieu en trois siècles.
…brain and that this could be achieved in ten years. He began his lecture by stating:
Our mission is to build a detailed, realistic computer model of the human brain.
His project was thus indeed driven by AI. In 2011, with thirteen high-ranking researchers, he submitted the Human Brain Project to the European Commission, which awarded him a ten-year grant for the colossal sum of 1.19 billion euros. To win the Commission's approval, he proposed that the project should involve not only AI researchers but also physicians and neuroscientists, for a total of 150 research teams from twenty-two countries. Soon, however, the project cut back the role of neuroscience and was contested in 2014 by an open letter to the Commission signed by more than 750 researchers, most of them neurobiologists, as well as by an article by Yves Frégnac and Gilles Laurent in Nature entitled "Where is the brain in the human brain project?" Among other things, the authors noted (p. 28):
There is no unified format for building functional databases or for annotating data sets that encompass data collected under varying conditions. Most importantly, there are no formulated biological hypotheses for these simulations to test.
As we can see, the project prioritizes the use of Big Data, leaving neuroscience by the wayside. In 2019, a decade after the TED Conference, Ed Yong published an article in The Atlantic entitled "The Human Brain Project hasn't lived up to its promise," in which he observed:
Ten years ago, a neuroscientist said that within a decade he could simulate a human brain. Spoiler: It didn't happen.
Yong thus confirmed what many neuroscientists had thought of the project back in 2014, but this had little impact on Markram's objectives.
Almost at the same time as the European Commission accepted the Human Brain Project, a similar program was set up in
…Organization in 1961 to that of the Society for Neurosciences in 1969 (https://www.sfn.org/about: Mission and Strategic Plan, updated 8/23/2019, p. 2), whose aim was to:
Advance the understanding of the brain and the nervous system by bringing together scientists of diverse backgrounds, by facilitating the integration of research directed at all levels of biological organization and by encouraging translational research and the application of new scientific knowledge to develop improved disease treatments and cures.
As we can see, neuroscience sets out to explore far more diverse fields than psychology.
Regarding AI, we can contrast it with the discovery by Tim Bliss, Terje Lømo, and Tony Gardner-Medwin (in three articles published in 1973) of visual long-term potentiation, as a component of a multilevel memory mechanism. Squire and Kandel (2000, pp. 110-111) provide an excellent presentation of this discovery:
[. . .] they attempted to see whether the synapses between neurons in the hippocampus had the capability of storing information. [. . .] They found that a brief high-frequency period of electrical activity (called a tetanus) applied artificially to a hippocampal pathway produced an increase in synaptic strength that lasted for hours in anesthetized animal and would, if repeated, last for days and even weeks in an alert freely moving animal.
The authors describe this long-term potentiation not as identical to memory, or as a sort of memory, but rather as part of a more complex memory mechanism.
However, these studies concerned laboratory animals, and it was not until 1996 (Chen et al.) that the first experiments on human visual memory in vitro, then in 2005 (Teyler et al.) in vivo, made it possible to show the generality of long-term potentiation and to develop a coherent paradigm for it.
In his exhaustive analysis of the earlier studies, Craver (2003, pp. 189-190) |
04109179 | en | [
"shs.hist",
"shs.anthro-se"
] | 2024/03/04 16:41:24 | 2000 | https://inalco.hal.science/hal-04109179/file/The_Origins_of_Status_Politics_Family_cl.pdf | Xiao-Planes Xiaohong
The Origins of « Status Politics »: Family clans and Factions in CCP's Top Leadership During the Cultural Revolution, 1966-1976
In a bid to realize his grand blueprint for an anti-capitalist, anti-revisionist regime, Mao Zedong smashed the Yan'an Round Table (the power structure established at the CCP's Seventh National Congress), endorsed the military circle led by Lin Biao and the radical civilian circle led by Jiang Qing and Kang Sheng, and launched a brand-new revolution that struck at the Party bureaucracy of the time. Yet these circles were not content to act merely as instruments of the Revolution. They preferred to grow their family clans' power and their factions' influence in order to scramble for supreme power. The exclusionism and cruelty of factionalism compelled those who lost favor, as well as their families, to throw all of their faction's resources into the fight, sweeping them into a desperate struggle. The power structure of the PRC's top leadership during the Cultural Revolution was characterized by numerous family clans and factions, a phenomenon in stark contrast to the CCP's conventional system and to the objective proclaimed by the Revolution: fighting against capitalist roaders. The resulting "Status Politics" seriously affected the proper functioning of public power, leading to abuses of power, an inequitable distribution of resources, and a distorted standard of values.
1 To know more about how the Yan'an Round Table went through the Gao-Rao affair and how the crisis ended, see Teiwes, Frederick C.
In Western countries, specialists in Chinese political history call the leadership formed during the Seventh National Congress in 1945 "the Yan'an Round Table". The term refers to the power structure established by the CCP's top leaders with Mao Zedong in command and the factions brought together, each comprised of military or civilian officials. In 1954, the leadership went through a crisis of division due to the Gao Gang-Rao Shushi affair.1 In the aftermath of the affair, the leadership was further consolidated and reinforced at the Eighth National Congress in 1956. The American sinologist Roderick MacFarquhar, a specialist in political history, pointed out that several years before the Revolution started, Mao found many of the first-front leaders "suspect because of their independent stature and authority, and surrounded himself with toadies whose loyalty was to himself rather than to the party, Marxism-Leninism, or their peers. Mao thus stripped China of a priceless asset, a united and capable leadership, the Yan'an Round Table, that 'select group' which had conquered China and guided it through the early travails of nation-building."2 After the Cultural Revolution started, thanks to Mao's support, Mao's wife Jiang Qing and the Central Cultural Revolution Group (CCRG), represented by Kang Sheng and Chen Boda, occupied high-ranking positions and held great power. Similarly, PLA Marshal Lin Biao, Mao's officially chosen successor, saw his family clan and his faction in the PLA amass great power before the September 13 Incident in 1971. It can be argued that the power structure of the PRC's top leadership during the Cultural Revolution was characterized by numerous family clans and factions, a phenomenon in great contrast to the CCP's conventional system and to the objective proclaimed by the Revolution: fighting against capitalist roaders (i.e., the forces that would lead a society down a "capitalist road"). Familial politics and factionalism spoiled the proper functioning of public power, which inevitably led to abuses of power and a severely inequitable distribution of resources. Drawing on recently published research on the subject, this paper attempts a preliminary study of the manifestations of families and factions in the CCP's top leadership during the Cultural Revolution.
In order to analyze the phenomenon of "familial politics and factionalism" in the CCP's top leadership during the Cultural Revolution, I would like to employ the concept of "Status Politics". The term "Status" is borrowed from Henry Sumner Maine, the nineteenth-century English historian of ancient legal systems, who used it in the context of civil law. Maine pointed out that "All the forms of Status taken notice of in the Law of Persons (Civil Law) were derived from, and to some extent are still colored by, the powers and privileges anciently residing in the Family."3 "Status Politics" implies that a person has been granted power and privileges thanks to his family clan and/or faction (a faction can be considered an extension of kinship), not because of his political achievements or his contributions to the power structure. Status Politics during the Cultural Revolution was characterized by its façade of revolutionary ideology. In addition to their status as Mao's wife and his officially chosen successor, Jiang Qing and Lin Biao were respectively entitled "the valiant standard-bearer of the Great Cultural Revolution" and "the deputy commander of the Proletarian Headquarters". At the same time, Mao himself was in possession of an absolute truth (preventing revisionism and the revival of capitalism through persistent revolution) and even became the embodiment of this "Truth". These titles created a context in which Mao's inner circle, made up of his relatives and staff, was granted the privilege of interpreting the Revolution, judging people, and even commenting on state affairs. This inner circle found its source of power in its closeness to the leader himself, a phenomenon very similar to the legal relation based on kinship: the positions of power of children and wife originate from the father or the husband, and thus the so-called "status" derives from the legal relation based on kinship (and the broader kinship network) rather than from one's political performance.
In modern society, Status Politics brings endless harm to public power and to those who make use of it. Mao Zedong himself was partly aware of this. According to some of China's officially published sources, around the time of the Ninth National Congress in April 1969, Mao began taking various measures in an attempt to restrain the growth of factionalism. However, the September 13 Incident in 1971 and the downfall of the Gang of Four proved the failure of his intense efforts. Lin Biao's and Jiang Qing's factions were carefully supported by Mao to be used as tools to wage the Cultural Revolution. But tools do not always follow the master's command. Once they entered the core of the power community from such a favorable position, they could not help following the logic of familial politics and factionalism, and so sought to grab supreme power and take over the regime. The endless disputes over the succession to the "pater familias" were beyond Mao's control. Moreover, familial politics and factionalism were highly exclusive. They could inflame divisions in views and standpoints, and then turn a power struggle into a life-or-death battle. This inevitably led to the desperate struggle of those who had lost favor. We will see hereinafter that most of the time even the losers relied on their families and former factions during their struggle, since these high-ranking leaders who fell into disgrace in political fights had absolutely no other support to mobilize. In addition, the Status Politics in which the high-ranking leaders engaged could result in a power mechanism and a social order that were inequitable and unjust. A person's position in the power structure could benefit his or her family, relatives and attendants, whereas failure in political fights could likewise drag down family, relatives and attendants.4 The power structure of the PRC's top leadership during the Cultural Revolution was highly person-centered and paternalistic. If we define the CCP's "top leaders" as key leaders who held ministerial, provincial, municipal, regional (autonomous regions and PLA military regions) or higher ranks5 at the beginning of the Cultural Revolution, we see that the political life of each of them was in the hands of Mao, the supreme pater familias.6 Under the power mechanism of Status Politics, every change in the status of a top leader implied a rise or fall in his power, in his family's power, and in the supporting resources he could mobilize. Not surprisingly, despite the omnipresence of brilliant revolutionary language, behaviors such as "seeking the back door" and "pulling strings" gradually spread and became the lubricant of the entire power mechanism of the Party, the government and the army.
3 Henry Sumner Maine: Ancient Law, Its Connection with the Early History of Society and Its Relation to Modern Ideas (1861). As we all know, the most famous quote from H. S. Maine was: "The movement of the progressive societies has hitherto been a movement from Status to Contract."
The present essay offers an initial interpretation of the evolution of the CCP's power mechanism during the Cultural Revolution. Given the limited sources available on the Cultural Revolution and the CCP's top leadership, it inevitably contains mistakes, and I look forward to receiving readers' criticism.

4 After the Cultural Revolution, many of those from Lin Biao's and Jiang Qing's factions and family clans considered that they had been treated unfairly. Indeed, as they claimed, their "unfair treatment" was very often due to ruthless handling by their affiliated institutions. Some of these institutions even refused to carry out decisions made by higher-ups that were meant to be put into effect. This can be considered one of the consequences of Status Politics.

5 It is quite difficult to determine the total number of in-service senior cadres holding these ranks and working for the central government, the local governments and the army. They probably numbered between one and two thousand. To get an idea of their background, we can refer to the analysis made by the historian Shen Zhihua of the Eighth Party Congress and of the composition of the Politburo. According to Shen, "As for the electoral principles, the CCP confined itself to the 'seniority-based system' and the idea of 'he who wins a kingdom rules a kingdom'. Mao Zedong suggested not to consider the electability for the Party Congress of most of the 'cadres of the 1938 era'. Consequently, the cadres with the minimum level of modern education were excluded from the CCP's leadership. Among 170 official and alternate Congress members, the great majority were career military men who had just returned from the battlefield. These seasoned fighters were used to following and issuing orders, but they lacked experience and essential knowledge in economic construction. This reflected the CCP's composition. By the time of the Eighth National Congress, among 10.73 million CCP members, 14% were workers and 69.1% were farmers, whereas intellectuals made up only 11.7%. This composition reflected the actual backgrounds of the chiefs leading central ministries, provinces, cities and autonomous regions. For Mao Zedong, his suggestion was something of a last resort." In Shen Zhihua 沈志華: Sikao yu xuanze: cong zhishi fenzi huiyi dao fanyou yundong 思考與選擇:從知識分子會議到反右派運動 (1956-1957) (Reflections and choices: the consciousness of the Chinese intellectuals and the Anti-Rightist campaign (1956-1957)), Zhonghua renmin gongheguo shi disanjuan 中华人民共和国史第 3 卷 (The History of the People's Republic of China, Volume 3), Chinese University of Hong Kong, 2008, p. 317. For further analysis, see p. 317 to 322.

6 Volume 6 of the Chronicles of Mao Zedong is full of Mao's instructions concerning each of the top leaders, indicating whether the person could be "liberated" and determining his or her political status. See: Mao Zedong nianpu 毛泽东年谱 (1949-1976) (Chronicles of Mao Zedong, 1949-1976)
Mao Zedong, Jiang Qing, their family and attendants
The structure of the CCP's top leadership was highly patriarchal. Men occupied the predominant positions in the core of the power community, while the number of women was very low. Of the handful of women, most worked as leaders of institutions such as the All-China Women's Federation and the cultural propaganda services, among them Cai Chang (Deputy Prime Minister Li Fuchun's wife), Kang Keqing (Zhu De's wife) and Deng Yingchao (Zhou Enlai's wife). Many male leaders had their young wives appointed as their secretaries, mainly in charge of daily life: Jiang Qing worked as Mao Zedong's secretary, Wang Guangmei as Liu Shaoqi's, Ye Qun as Lin Biao's, and so on. For some leaders, such as Liu Shaoqi (wife: Wang Guangmei) and Foreign Minister and Deputy Prime Minister Chen Yi (wife: Zhang Qian), the wives became public figures whose images were filmed, since they accompanied their husbands in diplomatic activities and often appeared in public. As for Jiang Qing, she only began to make frequent appearances as Mao's wife in the top circle two or three years before the Cultural Revolution. Thanks to Mao's significant support, Jiang Qing played an active role in the fields of culture and propaganda by promoting reforms of modern Peking opera, writing revolutionary model operas, and participating in the exposure and critique of "incorrect" songs, paintings, plays, films and other art pieces. In November 1965, as instructed by Mao, Jiang Qing took charge of the publication of "Notes on the New Historical Drama Hai Rui Dismissed from Office", an article considered to be the flashpoint leading to the Cultural Revolution. In February 1966, with Lin Biao's approval, Jiang Qing hosted the Military Literature & Arts Work Conference in Shanghai, and at the enlarged meeting of the Central Politburo in May 1966 she was appointed first deputy chief of the newly established Central Cultural Revolution Group (CCRG). Later, she became acting chief of the Group and consultant of the People's Liberation Army's Cultural Revolution Group 7 . According to the memoirs of Jiang Qing's secretary Yan Changgui and of other members of the CCRG, Jiang held huge authority in the Group, since she often passed on the latest "supreme orders" from Mao 8 . Chen Boda, the chief of the CCRG, was absolutely incapable of giving her orders; at the same time, Kang Sheng, consultant of the CCRG, was Jiang Qing's most powerful backer of the period. When receiving the Red Guards or the Rebel factions, or appearing before public assemblies, Jiang Qing often claimed that she came to visit the people "on behalf of Chairman Mao", declared her approval of or opposition to certain factions, and named the individuals who should be "unmasked" and criticized. For a long time, Mao tacitly allowed this kind of "status transplant". After the February Countercurrent, a joint effort by a group of veterans to oppose ultra-leftist radicalism, the (enlarged) brief meeting of the Standing Committee of the Central Politburo in charge of ordinary state affairs was suspended. Its responsibilities were taken over by the brief meeting of the CCRG, until the new Politburo was established at the Ninth National Congress in April 1969. That was the time when Jiang Qing's power was at its peak: quite often a single word from her could decide someone's political life, or even life or death. After the downfall of Lin Biao's faction, Jiang Qing and the leading members of the CCRG, Zhang Chunqiao, Yao Wenyuan and Wang Hongwen, gained enormous power. During the Ninth and the Tenth National Congresses, Jiang was a member of the Central Politburo. Within the administration, she enjoyed great influence over culture, science and education, and even had the final say in these fields. But Mao had always contained her attempts to join the Standing Committee of the Central Politburo and to grab even more power in state institutions. After 1974, Mao drew a line between himself and Jiang Qing by declaring publicly at a Politburo meeting that "She doesn't represent me; she only represents herself". Yet, for all that, Jiang Qing remained an irreplaceable key figure in the new campaigns started by Mao, such as "Counterattack the Return of Right-Deviationists", "Criticize Lin, Criticize Confucius", "Pro-Legalists and Anti-Confucianism", "Counterattack the Capitulationism in 'The Water Margin'", "Criticize Deng", and so on 9 .

7 The main sources of information for the pre-Revolution background are as follows: the aforementioned Roderick MacFarquhar, The Origins of the Cultural Revolution, t. III: The Coming of the Cataclysm, 1961-1966; Qian Xiangli 錢庠理: Lishi de bianju -- cong wanjiu weiji dao fanxiu fangxiu 歷史的變局--從挽救危機到反修防修 (1962-1965) (Historical Turn: Responding to Crisis and Combating Revisionism (1962-1965)), Zhonghua renmin gongheguo shi diwujuan 中华人民共和国史第 5 卷 (The History of the People's Republic of China, Vol. 5); Bu Weihua 卜偉華, "Zalan jiushijie" -- wenhua dageming de dongluan yu haojie「砸爛舊世界」--文化大革命的動亂與浩劫 (1966-1968) ("Smashing the Old World": Havoc of the Chinese Cultural Revolution (1966-1968)), Zhonghua renmin gongheguo shi diliujuan 中华人民共和国史第 6 卷 (The History of the People's Republic of China, Vol. 6), Chinese University of Hong Kong, 2008.

8 Yan Changgui and Wang Guangyu, Wenshi qiuxin ji 问史求信集 (Reflections on History), Beijing, Red Flag Publishing House, 2009.
Let's take a look at other members of Mao's family.
Li Na (1940-) is the only child of Mao Zedong and Jiang Qing. In 1965, she graduated in history from Peking University and then joined the Four Cleanups Movement for one year. In early July 1966, the CCRG set up an office in the Diaoyutai Hotel in Beijing, and Li Na, who changed her name to "Xiao Li", began to work in that office. Her main duties were inspecting institutions of higher education to gather information on their situation, getting in touch with Rebel leaders, and travelling with schoolmates on revolutionary tours. Li Na moved between Zhongnanhai, her parents' residence, and Diaoyutai, and it seems that her primary missions were inspecting the grassroots on behalf of her father and passing on information. Moreover, she was granted access to the same files as the members of the CCRG (see Yan Changgui, "Xiao Li (Li Na) zai Zhongyang wenge xiaozu" 肖力(李讷)在中央文革小组). In early 1967, she returned to her affiliated institution, The People's Liberation Army Daily (PLA Daily for short), where she rebelled and seized power with some others, an action supported by Lin Biao and approved by Mao himself. Half a year later, she rebelled again, displacing the newly appointed chief editor and party secretary, and became leader of the Chief Editor Leading Group of the PLA Daily (the equivalent of chief editor) 11 . However, at the Ninth National Congress, Mao Zedong did not accept that Li Na and Mao Yuanxin become delegates, even though many well-known Rebels became delegates or were even promoted to the exalted status of members of the CCP Central Committee 12 . In early 1970, she followed her father's instructions and went for training at the May Seventh Cadre School of the General Office of the Central Committee in Jiangxi.
At the Tenth National Congress in August 1973, both Li Na and Mao Yuanxin became delegates, but they did not join the CCP Central Committee. From 1974 to 1975, Li Na successively served as Party Chief of the CCP Pinggu County Committee and Deputy Secretary of the CCP Beijing Committee. She was apparently a practitioner loyal to her father and to the "revolutionary line" of the Leader. Although Mao did not make his descendants his heirs apparent, he allowed them to serve in top leadership positions. Among Mao's close relatives, both his nephew Mao Yuanxin (1941-) and his cousin's granddaughter Wang Hairong (1938-2017) received rapid promotions during the Cultural Revolution. In addition, they both played significant roles in politics thanks to their special connection with Mao 13 . This "special connection" grew out of a series of conversations about educational revolution between Mao and the two of them from 1964 to 1965 14 . Made known through the higher-education authorities, the army and some internal channels, these conversations were familiar to many students, especially the children of senior local and military cadres. Mao harbored distrust of and discontent with many cultural and educational institutions within the system; by talking with the younger generation of his family, he both passed on messages and listened to feedback from grassroots students. However, in Beijing and other cities, the colleges where the offspring of cadres were concentrated read these conversations as another kind of signal. Many of these schools' directors believed that the children of senior cadres were born to be successors to the revolutionary cause, and thus should be given priority in training over students from other family backgrounds. According to the memoirs of those concerned, their schools set up classes specifically for cadres' offspring, or organized political study conferences that only cadres' offspring could attend.
Mao Yuanxin graduated from the PLA Military Institute of Engineering (also called the "Harbin Military Institute of Engineering") in 1965 and was soon assigned by Wu Faxian, then commander of the Air Force, to a missile division in Yunnan, where he served as a platoon-level cadre 15 . In September 1966, following Mao's instruction, he joined the Rebels of his old school, the Harbin Military Institute of Engineering, and became leader of a famous Rebel faction. From spring to summer 1967 he returned to Beijing from the northeast to work as Zhou Enlai's liaison officer, helping Zhou deal with the problems of alignment between the Eastern and Western factions there. After that, he served as vice-president of the Revolutionary Committee of Liaoning Province, Secretary of the Liaoning Provincial Party Committee, and Political Commissar of the Shenyang Military Region. Mao Yuanxin had always been a loyal follower and practitioner of his uncle's revolutionary ideology and guidelines. The historian Shi Yun indicates that Mao Yuanxin should bear some responsibility for the case in which the Liaoning authorities executed Zhang Zhixin in April 1975; Zhang had been charged with criticizing the Cultural Revolution and pointing out Mao Zedong's mistakes. From 1973 to 1974, in response to Mao's expectation of an education reform, Mao Yuanxin promoted role models and school-management experiences designed to counterattack the "Return of Right-Deviationists" (for example, the Zhang Tiesheng affair, in which Zhang handed in a blank examination paper, and the Chaoyang Agricultural Institute's practice of having students "come from the communes and return to the communes after graduation"). Nevertheless, his political influence probably reached its peak at the end of Mao Zedong's life: from October 1975 to September 1976, he was the liaison officer between Mao Zedong and the Central Politburo. At that time, both Zhou Enlai's and Mao Zedong's health was deteriorating, which heated up the disputes over the succession among the different factions of high-ranking politicians. A strong believer in the Cultural Revolution, Mao Yuanxin took advantage of his role in passing messages to Mao Zedong, and it is said that he thus played a highly active role in the political struggle that led to the second downfall of Deng Xiaoping 16 .
Wang Hairong began working at the General Office of the Ministry of Foreign Affairs in 1965. From 1971, she served as Deputy Director of the Protocol Department of the Ministry of Foreign Affairs and as Assistant Foreign Minister. From July 1974 to February 1979, she was the influential Deputy Foreign Minister. She and her colleague, the interpreter Tang Wensheng, met Mao Zedong frequently, passing messages to him and communicating his instructions. Some historians believe that they had been planted by Mao within the Ministry of Foreign Affairs to watch Zhou Enlai. From 1973 to 1975, they became Mao's de facto liaison officers, and they were so powerful that even Jiang Qing, Kang Sheng and Deng Xiaoping had to count on them to send messages to Mao or to find out his opinions on relevant affairs.
Apart from his family members, Mao also placed his closest staff in important positions. Even before the Cultural Revolution, he had replaced Yang Shangkun, the head of the General Office of the CCP Central Committee, with Wang Dongxing, the long-time chief of his personal bodyguard force. In 1968, Mao sent several military officers (including Chi Qun) from the Zhongnanhai Security Guard Regiment, Unit 8341, as well as Xie Jingyi, a member of his confidential staff, to run Peking University and Tsinghua University. From then on, Chi Qun and Xie Jingyi remained leaders of the two universities for a long time. Following Mao's direct command, they took control of the two universities and made them the front line of the educational revolution, an important stronghold for shaping public opinion, and the testing ground for brand-new criticism campaigns ("Counterattack the Return of Right-Deviationists", "Criticize Lin, Criticize Confucius", "Pro-Legalists and Anti-Confucianism", "Counterattack the Capitulationism in 'The Water Margin'", "Criticize Deng", etc.). Chi Qun also held significant positions at the Ministry of Education and in the Science and Education Group of the State Council. Xie Jingyi, for her part, was particularly appreciated by Mao. She served as a member of the Standing Committee of the Beijing Municipal Committee and as a secretary of the Beijing Municipal Committee, and became a member of the CCP Central Committee at the Tenth CCP National Congress in 1973. Two years later, she became a member of the Standing Committee of the Fourth National People's Congress.
Mao Zedong started the Cultural Revolution as Supreme Leader; consequently, his family and those close to him gained a sort of special "political status": they were regarded, and regarded themselves, as born revolutionaries more than anyone else, and they were considered, and considered themselves, more capable of comprehending the will of the Leader and of obtaining his support. Their "revolutionary nature" was unparalleled, since it came from bloodline and marriage (Mao's closest staff can be seen as part of an extended kinship network). All these people derived their power and status from their common pater familias, the Leader, and they unconditionally approved the Cultural Revolution he had started. According to recent studies, Mao had no intention of anointing his wife or children as his successors, nor did he want to promote them into positions of power. Instead, during the last years of his life, he kept warning Jiang Qing and her faction not to build "mountain tops" and to avoid factionalism. However, by grooming family clans and factions and putting them in important positions, he deeply corrupted the core of the CCP's power community. Thus, a mechanism supposed to coordinate all factions degenerated into a battlefield where factions fought fiercely and rejected one another. Mao set a precedent that allowed other clans and factions to grow their influence by the same means.
Lin Biao and Ye Qun's family and its military circle
Lin Biao's faction was based in the army and consisted mainly of high-ranking officers from the First Front Army of the Red Army, which had fought during "the age of Ruijin", as well as from the First Army Group that he had reorganized. For Mao Zedong, this army was also his pillar of strength in the power structure 17 . From July 1967, Lin Biao's faction held great power over the major services and branches of the Central Military Commission: it took control of the Central Military Commission Office and was in charge of the army's day-to-day affairs, following instructions from Mao Zedong, Lin Biao and Zhou Enlai. After March 1968, when the acting Chief of Staff Yang Chengwu was purged, an inner circle took shape consisting of Lin Biao and Ye Qun, Huang Yongsheng, Wu Faxian, Qiu Huizuo and Li Zuopeng. At the Ninth National Congress in April 1969, Lin Biao became Vice-Chairman of the CCP Central Committee and Mao's officially chosen successor, and the five others (including Ye Qun) all became members of the Central Politburo. Both Lin Biao's faction and the Central Cultural Revolution Group (CCRG) had a large number of followers join the CCP Central Committee, and both sides thereafter became involved in intense disputes over the succession. Lin Biao's faction was marked by a strong family character. For example, when Lin used his health problems as an excuse for missing public events or important meetings, his wife Ye Qun often showed up in his place. In late 1965, she was involved in the Luo Ruiqing affair; when the Cultural Revolution started, she became a member and then deputy head of the All-Forces Cultural Revolution Group, Director of Lin Biao's office, and a member of the Central Military Commission Office. In February 1967, military and political leaders of the Central Military Commission and the State Council opposed the way in which the Cultural Revolution was conducted and had a series of stormy debates with the CCRG (the famous "February Countercurrent"). Consequently, Mao ordered these leaders to make self-criticisms; some were even forced to step aside. After that, Mao had Ye Qun attend, in Lin Biao's place, the enlarged meetings of the Central Politburo and the brief meetings of the CCRG organized by Mao himself or by Zhou Enlai. Ye Qun thus enjoyed the same privilege as Jiang Qing: taking part in the ultimate decisions on the CCP's and the country's day-to-day issues 18 . Generally speaking, Ye Qun remained cautious with Jiang Qing and avoided direct confrontation.

17 To know more about the factions that were Mao's pillar in the "Yan'an Round Table", see …

18 Wu Faxian: Suiyue jiannan -- Wu Faxian huiyilu 岁月艰难--吴法宪回忆录 (Rough Years: Wu Faxian's Memoir).
Although Ye Qun was apparently inferior to Jiang Qing in terms of status and position, she had the powerful army in her pocket. For both women, power depended on that of their husbands, but Ye Qun seemed to have far more resources to mobilize, and far more accessible ones, because the army is a system of its own within the state apparatus and is endowed with rich resources. As a mother, Ye Qun was extremely concerned about the future and the marriages of her daughter Lin Liheng and her son Lin Liguo 19 . To pick potential partners for her children, she never hesitated to resort to the mighty army for nationwide selection. As for their careers, the Lin siblings were appointed to serve in the Air Force, a service considered highly technical. Lin Liheng (1944-) started to work for Air Force News in 1965 and was promoted to Deputy Chief Editor of the newspaper after the beginning of the Cultural Revolution. Her younger brother Lin Liguo (1945-1971) majored in physics at Peking University. Since classes were suspended during the Cultural Revolution, Lin Liguo joined the Air Force in April 1967, served as military staff at the Air Force Command (some say he worked as an office secretary), and was soon admitted to the CCP. In October 1969, thanks to Wu Faxian, Lin Liguo was appointed deputy director of both the Office of the Air Force Command and its combat division. Wu recalled precisely in his memoir that the Lin siblings were promoted to positions ranked slightly lower than those of Mao Yuanxin and Li Na 20 . Wu Faxian told his colleagues at the Air Force that "The fact that Vice-Chairman Lin entrusted Liguo to us is a symbol of his trust in and support for the Air Force. We must do our best to train Liguo." Wu provided Lin Liguo with all kinds of conditions that asserted his authority and gave him the right to organize research teams and to visit subordinate units of the Air Force. Wu even allowed him to report directly to Lin Biao on certain Air Force missions and enabled him to mobilize and command the Air Force. The recklessness of Lin Liguo and his fellow young officers eventually led to the September 13 Incident, in which he died with his parents, Lin Biao and Ye Qun. Mao Zedong sharply criticized Lin Biao and his wife for letting their son abuse power: "Those are the few sworn followers, among which Ye Qun is the main character, and from last year to this year it becomes Lin Liguo. Even his parents can't believe that 21 ." Mao also accused Lin Biao of making his wife his office director; however, it was Mao himself who had approved this assignment, and it was also Mao who had allowed Ye Qun to attend, in Lin Biao's place, the enlarged meetings of the Central Politburo and the brief meetings of the CCRG.
To some extent, Ye Qun really did tend to manage the Central Military Commission like an extended family affair. When Yang Chengwu was purged in March 1968, she made the wives of Huang Yongsheng, Wu Faxian, Qiu Huizuo and Li Zuopeng the office directors of their husbands, a measure meant to ensure stability in the faction's backyard and prevent internal disputes from weakening its power. At the same time, these women were actively involved in the nationwide "spouse selection" for Ye Qun's son and daughter 22 . Ye Qun was also very concerned about the children of Huang, Wu, Qiu and Li, and she particularly cared about the young adult sons, because she saw them, like Lin Liguo, as the legitimate successors of their fathers in the army. Most of the children of Huang, Wu, Qiu and Li joined the army at the beginning of the Cultural Revolution and were promoted within a short time. After the September 13 Incident in 1971, these four men as well as their wives and children were implicated; most of them were imprisoned and kept under investigation for many years. Afterwards, many suffered from discrimination and rejection. For a long time, they had no way to find employment or to obtain adequate living allowances. They could only rely on the personal intervention of important figures such as Hu Yaobang or Zhao Ziyang to hope for a better situation, but even so, the staff did not always carry out the orders 23 .

19 Marshal Chen Yi's son Chen Xiaolu was Lin Liguo's schoolmate at Beijing No. 4 Boys' Middle School. He recalls: "My father never talked about his work with us. He didn't allow us to read documents either. But some children of high-ranking cadres had access to that kind of information, such as Lin Liguo. Lin Biao had been training him on purpose. He made his son read confidential documents and internal reference reports. That is why everybody at school loved listening to him sharing inside information. Cadres' children usually cared a lot about politics, so we often flocked together gabbing about that." "Chen Xiaolu fangwen jilu" 陈小鲁访问记录 (Interview with Chen Xiaolu), 1st May 2013 (the interview transcript has been reviewed by the interviewee).

20 Wu Faxian, Suiyue jiannan, p. 708-711, 769-771. According to Gu Xunzhong's "Wo zai kongjun budui jingli 'jiuyisan'" 我在空军部队经历"九•一三" (The September 13 Incident that I Witnessed at the Air Force): "In those years, the Air Force was the PLA's example set by Deputy Commander Lin Biao himself. When Mao Zedong put forward the idea of 'Following the example of the PLA', Lin Biao called on 'the PLA to follow the example of the Air Force'. His two children were assigned one after another to posts in the Air Force: in early 1965, his daughter Doudou became a journalist at Air Force News and was later promoted to Deputy Chief Editor of the newspaper; his son Lin Liguo worked as secretary of the Office of the CCP Committee of the Air Force (later renamed 'the Office of the Air Force Command') at the beginning of 1967, and then became deputy director of the Office and deputy director of the Air Force Command's combat division. Both of them had a skyrocketing rise, skipped several grades and were promoted to cadres of deputy division rank." http://prchistory.org/wp-content/uploads/2014/05/REMEMBRANCE-No-75-2011 年 9 月 13 日.pdf

21 Shi Yun and Li Danhui, p. 14.
The difficult situation of those who lost ground in political fights reveals the disadvantages and harms of "Status Politics", in which kinship determines one's power and profits, social position and reputation. Lin Biao was the chief of his military faction, and it is said that the September 13 Incident led to collective punishment for hundreds of thousands of military cadres, many of whom were purged from the PLA as part of the "three kinds of people". Similarly, in 1959 the criticism of "Peng Dehuai's anti-Party clique" extended down to the counties and even smaller areas, and as a consequence millions of little "Peng Dehuais" were rooted out. Yet that could more or less be explained by divergences over political guidelines. The political fights during the Cultural Revolution, however, usually aimed at power and benefits for the faction and the family, as well as at individual power and interests.
Struggles of factions that lost ground
During the Cultural Revolution, the political fights among high-ranking CCP politicians were extremely relentless. Once a male leader fell, his wife, children, and even his close relatives and subordinates might be subject to collective punishment. Facing the brutal situation, the family clans that fell into disgrace had no choice but to put up a fight, by mobilizing any existing contacts and resources. The family was obviously out front in this fight, especially its female members and children, while its original network became the main resource that it could mobilize. We may take a close look at this through Ye Fei, Zhong Qiguang, Chen Pixian and others who were the key leaders of the First Division of the New Fourth Army and the Central Jiangsu Military Region.
The New Fourth Army was built up in the early days of the Anti-Japanese War. It consisted of the guerrillas left behind in the south by the former Central Red Army, together with a portion of the Eighth Route Army sent south from Yan'an, the revolutionary base. During the Anti-Japanese War, this force built up the CCP base areas and expanded its armed strength in central China; during the War of Liberation it was reorganized into the East China Field Army and then the Third Field Army. After 1949, many New Fourth Army generals and cadres from its base areas became local or army leaders in East China. Before the Cultural Revolution, the first secretaries of "the six provinces and one city" of East China were Tan Qilong (Shandong Province), Jiang Weiqing (Jiangsu Province), Chen Pixian (Shanghai), Li Baohua (Anhui Province), Yang Shangkui (Jiangxi Province), Jiang Hua (Zhejiang Province) and Ye Fei (Fujian Province) 24 . Several of them had been members of the former New Fourth Army. The parallel military authority, the Nanjing Military Region, was in charge of East China's five provinces and one city, Shandong excepted. Its basic troops also came from the former New Fourth Army, the East China Field Army, and the later reorganized Third Field Army and East China Military Region. In Beijing, the respected senior figures of these local military and political officials were mainly Marshal Chen Yi and General Su Yu. The CCP authorities strictly forbade cadres from taking part in factional activities and periodically carried out criticisms and rectifications against "mountain tops" and factionalism. But as Chen Yi's son Chen Xiaolu said, "The 'mountain tops' within the party were the product of history. My father was the founder of the New Fourth Army in the Anti-Japanese War and the leader of the Chinese East Field Army in the War of Liberation. As he settled in Beijing after the liberation, he had more frequent exchanges with his old comrades of the New Fourth Army. When old comrades working in their local areas came to Beijing for a business meeting, my father would invite them to dinner." When the cadres of the former New Fourth Army had disagreements with other factions over personnel arrangements, Mao Zedong would also send Chen Yi to act as mediator 25 . When serious family difficulties or political setbacks arose, the cadres of the New Fourth Army and of the Central Soviet base area would also actively provide support 26 .

22 Qiu Huizuo's wife, Hu Min, was transferred to the PLA's General Logistics Department in October 1968 and worked as office director. After the September 13 Incident, she was imprisoned with hard labor for ten years because of her husband's implication. However, Qiu thinks that "Hu Min's real mistake was her active efforts to help Ye Qun 'select the beauty queen'. Such 'beauty pageants' reflected extremely corrupt morals and brought about serious problems…At that time, a lot of high-ranking cadres' wives contributed to the 'beauty pageants' of Ye Qun's family.

24 The Local Bureaus of the CCP Central Committee were established during the War of Liberation in 1945. When the "Gao-Rao affair" occurred in 1954, the six Local Bureaus of the CCP Central Committee (Northeast, North China, Northwest, East China, Central South, Southwest) were abolished to prevent local power from expanding. However, they were restored in January 1961. The "six provinces and one city" of East China affiliated with the East China Bureau were Shandong, Jiangsu, Shanghai, Anhui, Jiangxi, Zhejiang and Fujian.
25 "The 'mountain tops' within the party were the product of history. My father was the founder of the New Fourth Army in the Anti-Japanese War and the leader of the Chinese East Field Army in the War of Liberation. As he settled in Beijing after the liberation, he had more frequent exchanges with his old comrades of the New Fourth Army. When old comrades working in their local areas came to Beijing for a business meeting, my father would invite them to dinner. The review of ranks in 1955 was made according to one's background and all others aspects, while paying special attention to veterans from the First, the Second and the Fourth Route Army of the Red Army, the four major Field Forces, and the North China Field Army, etc. In 1958, when Mao heard about the friction between Peng Dehuai and Su Yu, he sent my dad to contact Su Yu for more details about the situation. Later, when the commander of the Fujian Military Region Han Xianchu (the Fourth Field Force) had conflict with the political commissar Ye Fei (the Third Field Force), Mao also asked my father to resolve it. Before 1964, during the Spring Festival, central leaders still had dinner parties and mass greetings, but as struggles within the CCP got more and more fierce, there were less contacts between them." Interview with Chen Xiaolu, 1 st May 2013. 26 For example, Shanghai Municipal Party Committee Secretary Chen Pixian and his wife have helped Chen Yi's son and Zhong Qiguang's daughter come to Shanghai for treatment. In summer 1958, the army fought against the "dogmatic" military line, consequently, Liu Bocheng, president of the Nanjing Military Academy, and Zhong Qiguang, political commissar, were publicly criticized. Afterwards, Ye Fei, the first secretary of the Fujian Provincial Party Committee, and his wife Wang Yugeng specially invited Zhong Qiguang and his In October 1966, the CCP Central Committee held a central working conference attended by key leaders of various provinces, municipalities, and autonomous regions to criticize the "bourgeois reactionary line". Liu Shaoqi and Deng Xiaoping were forced to review their mistakes. The purpose of this conference was to criticize the institutional repression of the rebellion of those who worked for the institution. Local officials got lost between being loyal to Mao Zedong's line and not understanding Mao's Cultural Revolution. Ye Fei told his wife Wang Yugeng that during the meeting Chen Yi had invited the first secretaries of provinces and cities in East China to dinner, admonishing everyone that "No matter how difficult it can be, we must adhere to principles and persist in struggle. We must not be a weathercock that goes with the flow 27 ." However, half a year later, Chen Yi, who was trying to uphold the principles of the party's system, was taken down by the stormy "February Countercurrent". From the end of 1966 to the beginning of 1967, key leaders of most provinces, municipalities and autonomous regions were struck down and detained for review, leaving their wives and children who stepped up to rescue their husbands and fathers.
Women soldiers of the New Fourth Army
Chen Danhuai, another son of Marshal Chen Yi, and Ye Weiwei, General Ye Fei's daughter, wrote an interesting book depicting the lives of three women soldiers of the New Fourth Army: their respective mothers, Zhang Qian and Wang Yugeng, and Ling Ben, the wife of another New Fourth Army general, Zhong Qiguang. The three women joined the Communist Party in the early days of the Anti-Japanese War and later married New Fourth Army generals. After marriage, they continued their revolutionary work while raising children. Their life stories were legendary, and they were all charismatic figures.
In the late 1950s, Zhong Qiguang was criticized as belonging to the so-called "wrong line". This made Ling Ben realize, earlier than her friends, that the intensifying struggles within the party could bring misfortune upon families. In early 1960, she made a pact with two of her New Fourth Army comrades, Wang Yugeng and Yu Ling (wife of Qiao Xinming, a cadre of the New Fourth Army): if one of them were one day in trouble, the other two would support each other and raise the children of the one in distress. In the 1960s, Zhong Qiguang and his wife moved to Beijing to work at the PLA Academy of Military Science. After the start of the Cultural Revolution, in the compound of the Academy of Military Science at the foot of the Western Hills in Beijing, Ling Ben led her children in standing by her husband, who was under criticism, and in helping and supporting the superiors and comrades who had also been struck down. Wang Yugeng, the director of the Fujian Provincial Education Department, and her husband, Ye Fei, the first secretary of the Fujian Provincial Party Committee, were the first to be struck down, and their children drifted from one place to another. Despite obstructions, Ling Ben kept the promise made to her friends and took in their two daughters, who were studying in Beijing, as well as up to 19 children of first secretaries of provincial committees in East China. Some of these young people were studying at universities in Beijing, some stayed in
Beijing to shelter from the turmoil, and others were sent to Beijing by their parents to gather information or to find ways to improve their parents' situation.
After the September 13 Incident in 1971, the political situation relaxed, and Wang Yugeng was released from detention and assigned to a military retirement home in the mountainous area of northern Fujian. She began to have frequent contacts with her children, relatives and friends, exchanged information, and made plans to rescue her husband Ye Fei. In the spring of 1972 she managed to send her daughter and son to Beijing to ask for permission to visit their father, who had been held for six years. The request was approved. When she herself was "liberated", she immediately wrote to Zhou Enlai asking to see her husband, and then requested that Ye Fei be hospitalized. In Beijing, Wang Yugeng and her children were looked after and helped by Chen Yi's wife Zhang Qian and other women comrades of the New Fourth Army, but instead of having the petition passed on by other people, she insisted that her family members hand it in themselves to the Bureau of Letters and Calls of the General Office of the CCP Central Committee at the West Gate of Zhongnanhai 28 . The women soldiers of the New Fourth Army clearly knew how to fight for the "legitimate" rights of their husbands and families, both formally and informally. In addition to Zhou Enlai, the government channel, Marshal Ye Jianying, the newly appointed Vice-Chairman of the Central Military Commission, soon became a second channel. With his military resources (such as the military hospitals), he could to a certain extent help ease the plight of former military and political officers. However, Zhou's and Ye's power was limited to granting visits, sending doctors, or approving hospitalization. The women soldiers soon understood that true "liberation" depended on Chairman Mao Zedong. Deng Xiaoping had written to Mao Zedong in August 1972 and was restored to office the following spring; this example inspired many senior CCP officials who were still waiting for their final judgement. Urged on by Wang Yugeng and Zhang Qian, Ye Fei wrote to Mao on June 17, 1973, asking that his custody be lifted and that he be allowed to work for the party. Fortunately, a week later, he received Mao's approval of his liberation and of a job assignment. Even so, he would have to wait a year and a half before actually receiving a new appointment 29 .
For senior CCP cadres who had been struck down during the Cultural Revolution, "liberation" was their most important "identity card", marking the beginning of the gradual recovery of political status and of various related rights: reappearing at regular party activities, reading internal reference documents, being treated in accordance with their rank, being assigned jobs, and so on 30 . When Chen Yi, the leader of the New Fourth Army, died in January 1972, Wang Yugeng had worried that it would become even more difficult to get her husband out of the woods. She envisaged bringing her children home to Baoding and living the life of ordinary people if there was no other way out. However, the September 13 Incident greatly changed the previous power structure, which expanded other factions' room for maneuver and turned around the fortunes of individuals and families.

28 This contact channel was likely agreed upon secretly between Zhou Enlai and some senior cadres. By sending letters, Zhou could keep in touch with them or their families.

29 Chen Danhuai and Ye Weiwei: Sange Xinsijun nübing…, p. 245-259. In January 1975, Ye Fei was appointed Minister of Transport.

30 "Reappearing at regular party activities, reading internal reference documents" meant being qualified to enjoy the party's trust and powerful enough to hold information. "Treatment in accordance with rank" referred to wages and all kinds of tangible or intangible material benefits, such as housing, a private car, medical treatment, and travel, as well as transportation and accommodation during travel. However, only "being assigned a job" meant recovering one's function and power.
"Sons of senior officials" as "correspondents"
In the early days, the families of senior CCP leaders were unstable. With the establishment of the Yan'an revolutionary base and of other bases in various places after the outbreak of the Anti-Japanese War, many cadres married (or remarried) and had children, and thus began to have stable, long-lasting families 31 . The 1940s and 1950s saw a baby boom among senior CCP cadres. Due to the absence of any notion of birth control and of contraceptive methods, and because the newly established revolutionary bases and then the CCP regime provided basic living conditions, most cadres' families had many children. At the beginning of the Cultural Revolution, many of these teenagers were old enough to attend college or high school. After their first few months of glory, many teenagers became "politically involved" in another form as their parents were struck down or pushed aside: they pounded the pavement looking for ways to secure their parents' liberation and comeback. Chen Xiaojin, the eldest son of Chen Pixian, the former First Secretary of the CCP Shanghai Municipal Committee, was one of the most active. There is a very detailed and vivid description of this period in his memoir My Experiences during the "Cultural Revolution" Years 32 .
Shanghai was the birthplace of the workers' rebellion and a base of the Cultural Revolution. At the end of December 1966, Chen Pixian, the first secretary of the municipal party committee and likely to be among the first affected, realized that he and his family might be left unprotected. He decided to send his eldest son, Chen Xiaojin (1944.11-), a third-year student at Shanghai Jiaotong University, to meet his former superior Chen Yi in Beijing, hoping that the latter would report to Mao Zedong and Zhou Enlai on the situation in Shanghai. At the same time, Chen Pixian also wished to know more about the ongoing Cultural Revolution in Beijing. A few days later, Chen Pixian sent his second son away from Shanghai through military connections. He also told his wife that if anything happened to him, she should take the children to the commander of the East Sea Fleet, Tao Yong 33 . After leaving Shanghai, Chen Xiaojin first went to Nanjing. He stayed with Liu Yan,

31 For early CCP leaders' families and children, see Zai Sulian zhangda de hongse houdai 在苏联长大的红色后代 (The Descendants of the Reds), edited by Du Weihua and Wang Yiqiu, Beijing, World Affairs Press, 2000; for the life of senior cadres' children during the Cultural Revolution, see Zhongguo gaogan zinü chenfulu 中国高干子女沉浮录 (The Rise and Fall of Chinese Senior Cadres' Descendants), compiled by Shi Xiang, Changchun, Jilin People's Publishing House, 1996. For the CCP's rules on senior cadres' marriage after the outbreak of the Anti-Japanese War, see the online article of the Museum of CCP History at Nanjing University, "Er ba wu qi tuan de jiehun zhengce" 二八五七团的结婚政策 (The Marriage Policy of the "2857 and County Regiment"): "In the context of the outbreak of the Anti-Japanese War, the central government duly implemented a new marriage standard for party members and military officers, government offices, and localities, based on the original marriage regulations in the revolutionary bases: the '2857 and county regiment'. Generally speaking, the standard was as follows: military cadres had to be at least 28 years of age, with 5 years of party membership and 7 years of military service, and hold at least regimental rank; local government or party cadres had to be at least 28 years of age, with 5 years of party membership, and hold county rank or at least section rank. […] When the Red Army arrived in northern Shaanxi, Wayaobao and Yan'an experienced wedding booms one after another. In Wayaobao, in the month of December 1935 alone, a large number of senior cadres got married, among them Liu Shaoqi, Zhang Wentian, Dong Biwu, Zhou Kun and Song Renqiong. Newly married couples emerged almost every day, and sometimes several couples got married in a single day." http://history.nju.edu.cn/dsbwg/show.php?id=1268&catid=94, viewed on June 13, 2013.

32 Chen Xiaojin 陳小津: Wo de wenge suiyue 我的文革岁月 (My Experiences during the "Cultural Revolution" Years), Beijing, Central Party Literature Press, 2009.

33 Chen Xiaojin: Wo de wenge suiyue, p. 126-127. Chen Pixian could not have imagined, however, that the brutality of factional struggle within the army would eclipse that in the cities. Tao Yong was one of the main generals of the First Division of the New Fourth Army; he was known for his enthusiasm and boldness and served as commander of the East Sea Fleet in Shanghai. Tao Yong's fate, however, turned out to be even more miserable than Chen Pixian's.
Unit" in Shanghai and the leaders of the municipal party committee. Following Hu Yaobang's suggestion, Chen Xiaojin immediately sent a letter to Mao Zedong through Central Committee delegate, Wang Zhen, who was one of the first generals to whom Mao gave shelter. In late August 1972, Chen finally met his long-parted parents. Considering that the situation in Shanghai was different from Beijing, Hu suggested that Chen Pixian write to Mao Zedong directly. He also told him through Chen Xiaojin that other veteran cadres of Shanghai who had been struck down also directly asked Mao for the right to see a doctor, to be hospitalized, to study party's documents and to participate in regular activities of the party. Since the "Criticize Lin, Criticize Confucius" campaign in 1974, the conflict had intensified between the pro-Cultural Revolution faction on one hand, and Zhou Enlai, Deng Xiaoping (who was restored to office) and others on the other hand. The disturbance of the former probably affected Mao Zedong's deployment of domestic and foreign strategies, therefore it had been repeatedly criticized by Mao. Thus, Hu Yaobang believed it was the perfect time to write to Mao, and he taught Chen Xiaojin how and what his father should write in the letter:
"With Chairman Mao, we must admit our mistakes. We must avoid two approaches: one is to be excessively moralistic, practicing self-criticism by cursing ourselves as worse than useless. Chairman Mao dislikes this kind of self-criticism. He would say that this must be written by the rebels of the Red Guards rather than by ourselves. The other approach should be avoided too. It consists of not mentioning any mistakes, claiming we are always right. So, is Chairman Mao wrong? Never write such a letter. You must tell your father that he must show deep feelings for Chairman Mao, that he has been missing Chairman Mao for many years, and he should ask Chairman Mao for a chance to be transformed and to learn more. 36 " After following the instructions, Chen Pixian was permitted to return home in November 1974, as expected. However, he was always under semi-surveillance, and there were neither conclusion of his investigation nor restoration of his wages and participation in regular activities of the party. Chen had no choice but send his son to Beijing again for help from old comrades such as Hu Yaobang and Su Yu. At that time, Deng Xiaoping was appointed to preside over the regular duties of the State Council and the CCP Central Committee, so Hu Yaobang suggested that Chen Pixian write to Deng. With Deng's active intervention, in July 1975, Chen was able to resume his participation in regular activities of the party that had been suspended for nine years, and was subsequently relocated to Beijing. He finally escaped the control of the pro-Cultural Revolution faction in Shanghai.
Around the Tenth National Congress of the CCP in 1973, even though the pro-Cultural Revolution faction was still powerful, many veteran cadres of the party, government and army were liberated, and many of them even entered the Tenth Central Committee. Chen Xiaojin wrote in his book: "In this context, many veteran cadres who remained free were trying their best to rescue their old comrades. Uncle Yaobang was the most active. And I naturally got involved in this matter because of my father. I became Uncle Yaobang's 'correspondent'. 37 " Of course, Chen was not the only "son of a senior official" who pounded the pavement for his parents. He often encountered his peers here and there, especially in the guest house of the Organization Department of the CCP Central Committee on Wanshou Road in Beijing, where many senior CCP officials from the ministries, as well as provincial and municipal cadres, lived with their families, waiting to be "liberated" or assigned a job. Since senior officials who had not been assigned a job were neither allowed
to read internal documents in a timely manner nor to attend party meetings, they had to rely on their children for high-level political information. These "sons of senior officials" were "all well-informed and had their own channels of information. They could quickly know about the latest news of struggles within the CCP inner circle. 38 " At the end of the Cultural Revolution, many senior cadres' children were involved in high-level political affairs, and their involvement stemmed from basic survival needs. Since no formal, legal channels existed, the spouses and children of senior cadres could only reach their goals through old friends and the network of former superiors and comrades-in-arms. Once restored to office, senior cadres found it legitimate to make arrangements for their own children, or to help one another make arrangements for the children. In this sense, the "Status Politics" of the Cultural Revolution contributed to the rise of family clans and factions.

36 Chen Xiaojin: Wo de wenge suiyue, p. 354.

37 Chen Xiaojin: Wo de wenge suiyue, p. 343.
The protection network of the army
Protection and supervision of the children of senior officials
The army is the backbone of the CCP's regime, and it was also what Mao Zedong depended on to start the Cultural Revolution and then to go from chaos back to order. The army is a relatively independent body in the political system, and it holds a great many resources. It could serve as an ideal sanctuary protecting the families of high-ranking cadres from the hazards and risks of political struggles. Before the Cultural Revolution, the children of senior CCP officials (especially the sons) mostly chose famous universities such as the Harbin Military Engineering Institute, Tsinghua University and Peking University to study science and engineering rather than joining the army. Air Force Commander Wu Faxian said in his memoir that in April 1967 Ye Qun had Lin Liguo join the army: she did not want him drifting into all kinds of activities after classes were suspended, and of course entering the Air Force was also a guarantee for her son's future. According to Wu Faxian, the Air Force was a highly technical service, so many senior cadres were willing to send their children to join it. Apart from Mao's nephew Mao Yuanxin, "the children or relatives of many leaders, including Zhou Enlai, Zhu De, Dong Biwu, Peng Zhen, Liu Bocheng, Ye Jianying, Li Fuchun, Li Xiannian, Yang Chengwu, Xu Shiyou, Han Xianchu, Wang Dongxing and Yang De, rushed to the Air Force one after another. The Air Force was swarmed with children of government and military leaders." 39 Beijing was home to many children of military and political officials, many of whom were among the earliest Red Guards at the beginning of the Cultural Revolution and heads of Red Guard groups. Influenced by the pedigree theory, they had a strong sense of superiority. However, after the campaign in which Liu Shaoqi and Deng Xiaoping were criticized for following the "bourgeois reactionary line", veteran cadres were struck down one after another, and their children were not pleased. In December 1966, some of them set up the "Capital Red Guard United Action Committee" (or "United Action Committee" for short) to openly challenge the Central Cultural Revolution Group (CCRG). In the end, they were suppressed by the police as a counterrevolutionary organization, and many members were arrested and detained. In early 1967, the Central Military Commission and the CCRG jointly formulated seven articles defining how the military should carry out the Cultural Revolution. Beyond the seven articles, Mao Zedong "added one more article about how to discipline the children of cadres." That became the eight articles of the "Order of the Central Military Commission" issued on January 28 40 .

38 Chen Danhuai and Ye Weiwei: Sange xinsijun nübing…, p. 252.

39 Wu Faxian: Suiyue jiannan…, p. 708-709.
Military Commission" issued on January 28 40 . Subsequently the suppression of the United Action Committee by the pro-Cultural Revolution faction became one of the topics that triggered fierce clashes during the "February Countercurrent". At that time, Chen Yi was left out for being involved in the "February political struggle". His son Chen Xiaolu had been very active among the middle school Red Guards in the early days of the Cultural Revolution. Although he did not directly participate in the United Action Committee, he was sent away from Beijing in April 1967 by Zhou Enlai to be protected and supervised by the army in order to avoid making more trouble for his father 41 . Soon, members of the United Action Committee were released by Mao. A lot of children of senior cadres joined the army at that time, because many of them lost their enthusiasm for the Cultural Revolution, probably also because they were under their parents' control. When the situation of political struggles is not clear yet, the army can serve as a security blanket for everyone concerned.
Getting drafted through the back door
When the large-scale "Up to the mountains and down to the countryside" movement started in late 1968, the army became students' first hope of employment. Many military officers arranged for their children to join the army, either to serve in various military institutions or to work in military schools. The September 13 Incident broke some of the old unwritten taboos. Many of the struck-down military cadres returned to their leadership positions, while other warm-hearted senior generals used their power to help the children of comrades-in-arms who had been persecuted to death, or the children of local cadres who were still sidelined. Senior generals arranged for these children to join the army, attend school or enter a factory. Ren Zhiqiang (1951-), a well-known party member and real estate tycoon, has openly described how he got drafted through the back door. Ren's father was one of Li Xiannian's subordinates, having served in the Fifth Division of the New Fourth Army; before the Cultural Revolution, he worked as Deputy Minister of Commerce. In January 1969, Ren Zhiqiang and his schoolmates went to Yan'an to join a production team. Within less than a year, he joined the army in Jinan, Shandong, with the help of his father's old comrades-in-arms. There he came across nearly a hundred young men who had joined the army through their parents' networks. According to Ren, this was because, after getting in touch with one another, the old comrades of the Fifth Division of the New Fourth Army who held important positions in the Jinan Military Region, the Jinan Air Force and the Shandong Provincial Military Region had decided to jointly help the children of the "struck-down capitalist-roaders" to join the army 42 . Ren Zhiqiang's unit belonged to the very famous 38th Troop of the Fourth Field Army, which had gone through the Long March and fought in the Anti-Japanese War. It therefore attracted lots of "sons of senior officials" through all kinds of channels: firstly, children of military cadres of all levels who had served in this troop during the war, i.e. its successive corps, division and regimental commanders; secondly, children of generals still serving in the army.
These generals used various networks to send their children to the 38 th Troop. Thirdly, children of those who did not serve in the army but had contacts with the former 38 th Troop cadres. Fourthly, children like Ren who used the power of their parents' comrades in the local military area and entered the 38th Troop through a formal recruitment channel, although their families had no historical tie to the Troop. Besides these four channels, other ways such as quota exchange could help to enter the Troop. The number of fresh men got drafted through these channels was so large that the 38 th Troop was called "three thousand sons of cadres". However, becoming a soldier was only a springboard. According to Ren, the promotion was another matter altogether. Generally, they must be excellent in the army to be promoted, although military cadres' children often got promoted faster than children of local cadres 43 . Ren Zhiqiang explained that Zhou Enlai had tacitly allowed the practice of "getting drafted through the back door" all over the country, with the purpose of protecting the descendants of the struck-down senior cadres. Joining the army and being promoted to cadres quickly evolved into the privileges of military officials and the interests of different military services. Before the "reform and opening up" opened the way for other channels of social mobility, the "children of the compounds" in Beijing and other military institutions usually enjoyed the priority to join the army. At the local level, getting drafted had also become the prerogative of cadres. Even at the grassroots level in the rural areas, children of cadres enjoyed this priority. The "back door" practice soon engulfed all other public sectors. Powerful people and those who had networks were always the first ones to be given access to enlistment, school admission, employment, CCP membership and promotion to cadres.
High-ranking military officers and the pro-Cultural Revolution faction
In early 1974 the "Criticize Lin, Criticize Confucius" campaign was launched. At the Central Mobilization Conference and the Politburo Meeting, Jiang Qing and the pro-Cultural Revolution faction of the Department of Education openly raised criticisms against the unhealthy "back door" practice of military officials arranging for their children to serve in the army, attend college, and join the diplomatic service. The targets of the criticism were clearly the army generals and the veteran cadres restored to office. At the same time, the criticism made innuendo about Zhou Enlai, while Ye Jianying, who presided over the work of the Central Military Commission, was the first to bear the brunt. The move was likely to provoke a new round of power struggles among high-ranking leaders. Mao seemed very dissatisfied with such a disturbance. He wrote an instruction to stop the mess: "This is a great matter; it involves millions of people from party branches to Beijing. There are also good people entering through the back door, there are also bad guys coming from the front door... The 'Criticize Lin, Criticize Confucius' campaign, if mixed with (the criticism of) the 'back door' practice, would be overshadowed. 44 " In 1975, during the "Criticize Deng" and the "Counterattack the Return of Right-Deviationists" campaign, the pro-Cultural Revolution faction once again put forward the proposition against "bourgeois rights", which was approved by Mao. In a series of speeches, he underlined that the so-called bourgeoisie consisted of "the capitalist roaders in power" in the party, which meant the privileged bureaucrats 45 . However, Mao failed to convince his former comrades-in-arms and subordinates to agree with the Cultural Revolution, and it was especially difficult to get the approval of veteran cadres and generals for the targets of the Cultural Revolution and the way it was conducted. The army was the most powerful clan that Mao couldn't bypass in his power structure. The generation of soldiers who went through the war supported each other. As for their children, they entered the army in large numbers and quickly occupied high positions 46 . Many people wrote in their memoirs that from the "Counterattack the Return of Right-Deviationists" campaign in October 1975 to the arrest of the "Gang of Four" in October 1976, many senior military generals and children of cadres claimed that they would "wage guerrilla warfare in the mountains" once the situation deteriorated. Before his death, Mao had been wholeheartedly promoting General Xu Shiyou to enter the Politburo.
During that time, Xu even threatened to lead troops northward to storm Beijing 47 . The number of military leaders who were determined to stand on the opposite side may have been small, but obviously their voices could not be ignored, especially after the Supreme Leader died. Both the pro-Cultural Revolution faction and those who were middle-level cadres before the Cultural Revolution and diligently supported by Mao afterwards would find it difficult to stand against high-level military generals with latent supportive forces and local political allies with historical ties to these generals.
42 Ren Zhiqiang 任志强: Yexin youya: Ren Zhiqiang huiyilu 野心优雅:任志强回忆录 (Graceful Ambition-Memoir of Ren Zhiqiang), Nanjing, Jiangsu Literature and Art Publishing House, 2013, p. 361-363. Ren highly appreciated the fact that veteran comrades-in-arms helped the children of "capitalist-roaders" to join the army. However, he forgot that taking the recruitment quota of Shandong province was actually depriving other people, especially peasants' children, of joining the army. The PLA's fresh recruits mainly came from rural areas, and before the "reform and opening up" in the 1980s, becoming a soldier was one of the very limited social mobility channels for rural youth.
43 Ren Zhiqiang: Yexin youya…, p. 437-438; -: "Wo dei zuo wo ziji" 我得做我自己 (I Should Be Myself), quoted from the chapter compiled by Mi Hedu: Memories and Introspection, Influential Men of the Red Guards Era--Oral History, Part 2, p. 370-371.
Conclusion
The high-ranking officers of the party, the government and the army before the Cultural Revolution were founders of various CCP armies and revolutionary bases, and leaders of long-term wars. Most of them were strongly convinced of Mao Zedong's authority and wisdom, and they highly agreed with his historical achievements in seizing power and re-integrating China. They could not understand the Cultural Revolution that Mao insisted on, but for Mao it was difficult to remain indifferent to these people who had such a high degree of recognition, because they had jointly created the history and they shared power. To a certain extent, Mao Zedong's relationship with the CCP's senior officials was very similar to that between an emperor or a tribal chief and his subordinates. He held the power of life and death over all civil and military officials, and never let go until the day of his death. Probably he never intended to eliminate all his comrades-in-arms and subordinates who followed him in the war to seize political power, but neither did he ever consider restoring the power structure before the Cultural Revolution. After the September 13 Incident in 1971, by regulating the personnel system, Mao used his power to achieve his goal: reforming the bureaucracy. In the reorganized party, government, military and state institutions, he tried to keep a power structure in which all generations of leaders and all types of factions - moderate or radical - coexisted.
He hoped that this power structure would not only ensure the smooth progress and transition of domestic and foreign affairs, but also prevent the purposes and the trajectory of the Cultural Revolution from being abandoned. However, the vision he laid out was already collapsing before he died. That was fundamentally because the success of this vision only depended on one person: the Leader who was identified as the supreme pater familias holding the supreme authority.
44 Shi Yun and Li Danhui: Nanyi jixu de "jixu geming"…, p. 379-340. Mao also said that he himself had used the back door. With the help of Xie Jingyi, he made actresses of the Zhejiang Art Troupe and waitresses of the Lushan Hotel become worker-peasant-soldier students at Peking University. For more information about the process and consequences of the "Criticize Lin, Criticize Confucius" campaign, see p. 329-383 of the book.
45 Idem, p. 613-618.
46 For example, Luo Ruiqing, a high-ranking military officer, had been struck down before the Cultural Revolution. He was in custody in hospital after the beginning of the Cultural Revolution. His wife was put in prison. His son Luo Yu (1944-) was a student at Tsinghua University and was also imprisoned for five years. After the September 13 Incident, Luo Ruiqing and his wife were released one after another, their children returned to the city, and the family finally reunited. In July 1975, pushed by Deng Xiaoping, Mao approved that Luo and some other old generals serve as consultants to the Central Military Commission. Ye Jianying wanted Luo Yu to join the army, which got Deng Xiaoping's approval. Thus, Luo Yu served in the equipment section of the PLA General Staff Department. At that time, a lot of children of high-ranking military cadres started to work in various military institutions. Many of them became heads of important branches of the military after the Cultural Revolution. After the Third Plenum of the Eleventh CCP Central Committee in 1978, Luo Ruiqing served as Secretary General of the Central Military Commission. Luo Yu became his secretary, dealing with official documents and working as Deng Xiaoping's correspondent. See Luo Yu 罗宇: Gaobie Zong canmoubu (Luo Yu huiyilu) 告别总参谋部(罗宇回忆录) (Farewell to the General Staff Department - Memoir of Luo Yu), Hong Kong, Open Books, 2015.
47 Shi Yun and Li Danhui: Nanyi jixu de "jixu geming"…, p. 688.
"
Qiu Huizuo 邱会作, Qiu Huizuo huiyilu 邱会作回忆录 (The Memoir of Qiu Huizuo), Hong Kong New Century Press, 2011, Volume 2, p. 958-963. 23 Cheng Guang 程光:Wangshi huimu 往事回眸 (Looking Back at the Past), Hong Kong North Star Press, 2012 (Cheng Guang is the second son of Qiu Huizuo) ; -, Guanghuai yu yinying -Huang Chunguang koushushi 光环与阴影-黄春光口述史 (Halo and Shadow -Oral History by Huang Chunguang), quote from chapter compiled by Mi Hedu 米鶴都: Guanghuan yu yinying -huiyi yu fansi koushu lishi zhisi 光环与阴影 -回忆与反思口述历史之四 (Halo and Shadow -Memories and Introspection, Oral History, Part 4), CNHK Publications Limited, 2013, p. 2-121 (This chapter quotes Qiu Luguang, Qiu Huizuo's eldest son. He made the remarks during an interview).
Vol. 6, edited by Party Literature Research Center, CCP Central Committee, 2013.
The main sources of information for the development of the Cultural Revolution are as follows: aforementioned Bu Weihua, "Zalan jiushijie"…; Shi Yun 史雲 Li Danhui 李丹慧: Nanyi jixu de "jixu geming"-
The main source of information for how Mao Zedong appointed Wang Hairong, Mao Yuanxin, Chi Qun, Xie Jingyi and others in this paragraph: Shi Yun and Li Danhui: Nanyi jixu de "jixu geming"…
Jianguo yilai Mao Zedong wengao 建国以来毛泽东文稿 (Mao Zedong's Manuscripts Since the Founding of the State),Vol. 11, Pékin, Zhongyang wenxian chubanshe, 1996, p. 96-97; p. 177-178.
The author would like to thank Mr Yu Ruxin for providing this information.
Shi Yun and Li Danhui, Nanyi jixude "jixu geming"…, p. 588-589, p. 613, p. 639-640.
the deputy commander of the Nanjing Military Area at the time. Liu was also a general of the Fourth Army and one of Chen Pixian's old colleagues in Shanghai. By mid-to-late January 1967, Commander Liu Yan's house was already full of children of old comradesin-arms and old subordinates. These children were networking away from home. In Beijing, Chen Xiaojin visited many of his father's former comrades-in-arms or current colleagues, but the "February Countercurrent" prevented him from visiting Chen Yi. After returning to Shanghai, he found out that both his parents were detained for investigation. He himself was detained by the authority of the new Jiaotong University to undergo reform through labor, before being assigned to a military reclamation farm in Hunan Province. After Lin Biao's September 13 Incident, his father's old comrades-in-arms, veteran cadres of Jiangxi Province such as Huang Zhizhen made their comeback 34 . Jiangxi Province, run by veteran cadres, was regarded by Chen Xiaojin as a "liberated area" during the Cultural Revolution. He first tried to be transferred to a factory in the provincial capital Nanchang in order to pound the pavement for his father under Huang Zhizhen's shelter. Similarly, at the same time, many of the children of senior cadres who had been struck down also rushed to Jiangxi -the "liberated area". They either went to school or worked, and most of them were taken care of by Huang Zhizhen. Huang also tried his best to look after many former CCP senior officials who were sent down to Jiangxi, including Deng Xiaoping, Chen Yun, Wang Zhen, Shuai Mengqi and others 35 .
Since the spring of 1972, Chen Xiaojin has frequently traveled between Beijing, Shanghai and his workplace, Nanchang (Jiangxi province), visiting his father's old acquaintances, friends and colleagues of the Red Army and the New Fourth Army, passing messages to elders who also lost power, analyzing the political changes with them, and seeking appropriate opportunities and ways to rescue his parents in distress. In Beijing, the elder who met Chen Xiaojin the most was Hu Yaobang, who was also Chen Pixian's old comrade in the Jiangxi Red Army. Before the Twelfth Plenum of the Eighth CCP Central Committee in October 1968, in order to make up half of the number of the Central Committee delegates required for the meeting, Hu Yaobang was liberated abruptly, but he was not assigned any job afterwards. Hu remained unemployed and stayed at home for a long time. Except for reading and meditating, he discussed a lot with children of old comrades-in-arms who came to see him from other cities. In comparison with their parents, these children were free-thinking and lively youngsters. Hu Yaobang shared with them his opinion about the political movements in CCP's history and about the Cultural Revolution, while giving them advice and suggestions, encouraging these children to come forward to rescue their parents.
The pro-Cultural Revolution faction in Shanghai being too powerful, Chen Pixian and his wife were detained and put under investigation for a long time since January 1967. Hu started from analyzing with Chen Xiaojin the reasons for Chen Pixian's investigation, and then advised Xiaojin to take his first step: asking for meeting his father. This advice was based on the fact that the spouses and children of senior cadres in Beijing obtained step by step visits, the right for health exam, the right to be hospitalized, the annulment of custody, and assigned jobs. However, this request was rejected by the "Chen Pixian Case He and his wife suddenly died for no reason one after another in 1967. Their seven children were expelled from home and drifted from one place to another. Later, these children depended on the Chens', the general Xu Shiyou and others to be resettled. For more information about Tao Yong, see p. 159-161 of the book. 34 Huang Zhizhen, Hu Yaobang, Chen Pixian, Tan Qilong, and others were all underage soldiers in Jiangxi Red Army. Huang also served with Chen and Tan in the New Fourth Army. 35 Chen Xiaojin: Wo de wenge suiyue, p. 248-253. |
04109195 | en | [
"math.math-dg"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04109195/file/Pseudo_Conformal_Actions_Mobius_Group.pdf | Mehdi Belraouti
Mohamed Deffaf
Yazid Raffed
Abdelghani Zeghib
Pseudo-Conformal
PSEUDO-CONFORMAL ACTIONS OF THE MÖBIUS GROUP
Introduction
A pseudo-Riemannian manifold is a differentiable manifold M endowed with a pseudo-Riemannian metric g of signature (p, q). Two metrics g 1 and g 2 on M are said to be conformally equivalent if and only if g 1 = exp(f )g 2 where f is C ∞ function. A conformal structure is then an equivalence class [g] of a pseudo-Riemannian metric g and a conformal manifold is a manifold endowed with a pseudo-Riemannian conformal structure. A remarkable family of conformal manifolds is given by the conformally flat ones. These are pseudo-Riemannian conformal manifolds that are locally conformally diffeomorphic (i.e preserving the conformal structures) to the Minkowski space R p,q i.e the vector space R p+q endowed with the pseudo-Riemannian metric -dx 2 0 -... -dx 2 p-1 + dy 2 0 + ... + dy 2 q-1 . The conformal group Conf(M, g) is the group of transformations that preserve the conformal structure [g]. It is said to be essential if there is no metric in the conformal class of g for which it acts isometrically. In the Riemannian case, the sphere S n is a compact conformally flat manifold with an essential conformal group. The Einstein universe Ein p,q is the equivalent model of the standard conformal sphere in the pseudo-Riemannian setting. It admits a two-fold covering conformally equivalent to the product S p ×S q endowed with the conformal class of -g S p ⊕g S q . It is conformally flat and its conformal group, which is in fact the pseudo-Riemannian Möbius group O(p + 1, q + 1), is essential. Actually the Einstein universe is the flat model of conformal pseudo-Riemannian geometry. This is essentially due to the fact that the Minkowski space embeds conformally as a dense open subset of the Einstein universe Ein p,q and in addition to the Liouville theorem asserting that conformal local diffeomorphisms on Ein p,q are unique restrictions of elements of O(p + 1, q + 1). Hence a manifold is conformally flat if and only if it admits a (O(p + 1, q + 1), Ein p,q )-structure.
In the sixties A. Lichnérowicz conjectured that among compact Riemannian manifolds, the sphere is the only essential conformal structure. This was generalised and proved independently by Obatta and Ferrand (see [START_REF] Obata | The conjectures on conformal transformations of Riemannian manifolds[END_REF], [START_REF] Lelong-Ferrand | Transformations conformes et quasiconformes des variétés riemanniennes; application à la démonstration d'une conjecture de A. Lichnerowicz[END_REF]). In the pseudo-Riemannian case, a similar question, called the pseudo-Riemannian Lichnérowicz conjecture, was raised by D'Ambra and Gromov [START_REF] Ambra | Lectures on transformation groups: geometry and dynamics[END_REF]. Namely, if a compact pseudo-Riemannian conformal manifold is essential then it is conformally flat. This was disproved by Frances see [START_REF] Frances | About pseudo-Riemannian Lichnerowicz conjecture[END_REF], [START_REF] Frances | The lorentzian Lichnerowicz conjecture for real-analytic, three-dimensional manifolds[END_REF].
The present article is the first of a series on the pseudo-Riemannian Lichnérowicz conjecture in a homogeneous setting [START_REF] Belraouti | On homogeneous holomorphic conformal structures[END_REF][START_REF] Belraouti | Conformal pseudoriemannian Lichnerowicz conjecture in homogeneous setting[END_REF]. The general non homogeneous case, but with signature restrictions, was amply studied by Zimmer, Bader, Nevo, Frances, Zeghib, Melnick and Pecastaing (see [START_REF] Zimmer | Split rank and semisimple automorphism groups of G-structures[END_REF], [START_REF] Bader | Conformal actions of simple Lie groups on compact pseudo-Riemannian manifolds[END_REF], [START_REF] Frances | Some remarks on conformal pseudo-Riemannian actions of simple Lie groups[END_REF], [START_REF] Pecastaing | Conformal actions of real-rank 1 simple Lie groups on pseudo-Riemannian manifolds[END_REF], [START_REF] Pecastaing | Conformal actions of higher rank lattices on compact pseudo-Riemannian manifolds[END_REF], [START_REF] Pecastaing | Essential conformal actions of PSL(2, R) on real-analytic compact Lorentz manifolds[END_REF], [START_REF] Melnick | The conformal group of a compact simply connected Lorentzian manifold[END_REF]). Let us also quote [START_REF] Leistner | Conformal transformations of cahen-wallach spaces[END_REF] as a recent work in the Lorentz case.
We are investigating in this first part the case where the non-compact semi-simple part of the conformal group is locally isomorphic to the Möbius group SO(1, n + 1). More exactly, we prove the following classification result. This Möbius situation will actually play a central role towards the general case treated in [START_REF] Belraouti | On homogeneous holomorphic conformal structures[END_REF].
Theorem 1.1. Let (M, [g]) be a conformal connected compact pseudo-Riemannian manifold. We suppose that there exists G a subgroup of the conformal group Conf(M, g) acting essentially and transitively on (M, [g]). We suppose moreover that the noncompact semi-simple part of G is locally isomorphic to the Möbius group SO(1, n + 1). Then (M, [g]) is conformally flat. More precisely (M, [g]) is conformally equivalent to
• The conformal Riemannian n-sphere or;
• Up to a cover, the Einstein universe Ein 1,1 or; • Up to a finite cover, the Einstein universe Ein 3,3 . Remark 1.2. It turns out that, in the first and third cases, the acting group G is locally isomorphic to the Môbius group, that is, G is simple. In the second case, the universal cover G is a subgroup of SL(2, R) × SL(2, R). It can in particular be SL(2, R) × SO(2).
Preliminaries
2.1. Notations. Throughout this paper (M, g) will be a compact connected pseudo-Riemannian manifold of dimension n endowed with a transitive and essential action of the conformal group G = Conf(M, g). We suppose without loss of generality that G is connected.
Fix a point x in M and denote by H = Stab(x) its stabilizer in G. Denote respectively by g, h the Lie algebras of G and H. Let g = s r be a Levi decomposition of g, where s is semi-simple and r is the solvable radical of g. Denote by s nc the non-compact semi-simple factor of s, by s c the compact one and let n be the nilpotent radical of g. Note that n is an ideal of g. Let us denote respectively by S, S nc , S c , R and N the connected Lie sub-groups of G associated to s, s nc , s c , r and n.
Let a be a Cartan subalgebra of s associated with a Cartan involution Θ. Consider s = s 0 ⊕ α∈∆ s α = a ⊕ m ⊕ α∈∆ s α the root space decomposition of s, where ∆ is the set of roots of (s, a). Denote respectively by ∆ + , ∆ -the set of positive and negative roots of s for some chosen notion of positivity on a * . Then
s = s -⊕ a ⊕ m ⊕ s + , where s + = α∈∆ + s α and s -= α∈∆ -s α .
For every α ∈ a * , consider
g α = {X ∈ g, ∀H ∈ a : ad H (X) = α(H)X}.
We say that α is a weight if g α = 0. In this case g α is its associated weight space. As [g, r] ⊂ n (see [START_REF] Jacobson | Lie algebras[END_REF]Theorem 13]) then, for every α = 0, g α = s α ⊕ n α , where
n α = {X ∈ n, ∀H ∈ a : ad H (X) = α(H)X}.
Moreover, the commutativity of a together with the fact that finite dimensional representations of a semi-simple Lie algebra preserve the Jordan decomposition implies that elements of a are simultaneously diagonalisable in some basis of g.
Thus g = g 0 ⊕ α =0 g α .
Finally we will denote respectively by A, S + the connected Lie subgroups of G corresponding to a and s + .
2.2. General facts. We will prove some general results about the conformal group G. We start with the following general fact: Proposition 2.1. We have that [s, n] = [s, r]. In particular the sub-algebra s n is an ideal of g.
Proof.
For this, let us consider the semi-simple S-representation in GL(r). It preserves n and thus has a supplementary invariant subspace. But [g, r] ⊂ n so automorphisms of r act trivially on r/n and hence [s, g] ⊂ s ⊕ [s, n] ⊂ s n. We deduce that s n is an ideal of g.
Next we will prove:
Proposition 2.2. The non-compact semi-simple factor S nc of S is non trivial.
Let us first start with the following simple observation: Proposition 2.3. If a conformal diffeomorphism f of (M, g) preserves a volume form ω on M , then it preserves a metric in the conformal class of g.
Proof. Let f be a diffeomorphism preserving the conformal class [g] and a volume form ω on M . Denote by ω g the volume form defined on M by the metric g. On the one hand, there exists a C ∞ real function φ such that ω = e φ ω g . Hence ω is the volume form defined by the metric e^{2φ/n} g. On the other hand, we have f * (e^{2φ/n} g) = e^ψ e^{2φ/n} g for some C ∞ function ψ. Thus f * ω = e^{nψ/2} ω. But f preserves the volume form ω, so ψ = 0, which means that f preserves the metric e^{2φ/n} g. As a consequence we get: Corollary 2.4. The conformal group G preserves no volume form on M .
Proof of Proposition 2.2. Assume that the non-compact semi-simple factor S nc is trivial. Then by [START_REF] Zimmer | Ergodic theory and semisimple groups[END_REF]Corollary 4.1.7] the group G is amenable. So it preserves a regular Borel measure µ on the compact manifold M . It is in particular a quasi-invariant measure with associated rho-function ρ 1 = 1 (in the sense of [START_REF] Bekka | Kazhdan's property (T)[END_REF]). Let now ω g be the volume form corresponding to the metric g. As the group G acts conformally and the action is C ∞ , the measure ω g is also quasi-invariant with C ∞ rho-function ρ 2 (see [START_REF] Bekka | Kazhdan's property (T)[END_REF]Theorem B.1.4]). Again by [START_REF] Bekka | Kazhdan's property (T)[END_REF]Theorem B.1.4], the measures µ and ω g are equivalent and dµ/dω g = 1/ρ 2 . This shows that µ is a volume form. Then one uses Corollary 2.4 to get Proposition 2.2.
In the general case, the essentiality of the action ensures the non-discreteness of the stabilizer H.
Proposition 2.5. The stabilizer H is not discrete.
Proof. If it were not the case then H would be a uniform lattice in G. But as the action is essential, there is an element h ∈ H that does not preserve the metric on g/h. So |det (Ad h )| ≠ 1, contradicting the unimodularity of G.
To finish this part let us prove the two following important Lemmas that will be used later in the paper: Lemma 2.6. Let π : S nc -→ GL(V ) be a linear representation of S nc into a linear space V . Then, the compact orbits of S nc are trivial.
Proof. Assume that S nc has a compact orbit C ⊂ V . Then the convex envelope Conv(C ∪ -C) is an S nc -invariant compact convex symmetric set with non empty interior. Thus the action of S nc preserves the Minkowski gauge . (which is in fact a norm) of Conv(C ∪ -C). But Isom (Conv(C ∪ -C), . ) is compact. So the restriction of the representation π to Conv(C ∪ -C) gives rise to an homomorphism from a semi-simple group with no compact factor to a compact group and hence is trivial.
Lemma 2.7. A linear representation π : s nc -→ gl(V ) of s nc into a linear space V is completely determined by its restriction to a ⊕ m ⊕ s + . More precisely, π snc (V ) = Vect π a⊕m⊕s+ (V ) .
Proof. It is in fact sufficient to show that π s-(V ) ⊂ Vect π a⊕s+ (V ) . For that, fix x ∈ s -α ⊂ s -and let a ∈ a such that Rx ⊕ Ra ⊕ RΘ(x) ∼ = sl(2, R) (see for example [15, Proposition 6.52]). Thus the restriction of π to Rx ⊕ Ra ⊕ RΘ(x) is isomorphic to a linear representation of sl(2, R) into V . Using Weyl Theorem we can assume without loss of generality that this last is irreducible. But irreducible linear representations of sl(2, R) into V are unique up to isomorphism (see for instance [START_REF] Hall | Lie groups, Lie algebras, and representations[END_REF]Theorem 4.32]). It is then easy to check that they verify π(x)(V ) ⊂ Vect π Ra⊕RΘ(x) (V ) (see [START_REF] Hall | Lie groups, Lie algebras, and representations[END_REF]Examples 4.2]). This finishes the proof.
Lie algebra formulation
3.1. Enlargement of the isotropy group. As the manifold G/H is compact, the isotropy subgroup H is a uniform subgroup of G. If H was discrete then it is a uniform lattice and in this case G would be unimodular. In the non discrete case, this imposes strong restrictions on the group H. When H and G are both complex algebraic it is equivalent to being parabolic i.e contains maximal solvable connected subgroup of H. In the real case, Borel and Tits [START_REF] Borel | Groupes réductifs[END_REF] proved that an algebraic group H of a real linear algebraic group G is uniform if it contains a maximal connected triangular subgroup of G. Recall that a subgroup of G (respectively a sub-algebra of g) is said to be triangular if, in some real basis of g, its image under the adjoint representation is triangular.
Let H * = Ad -1 Ad(H) Zariski be the smallest algebraic Lie subgroup of G containing H. By [12, Corollary 5.1.1], the Lie algebra h * of H * contains a maximal triangular sub-algebra of g. The sub-algebra (a ⊕ s + ) n being triangular, we get the following fact: Fact 3.1. Up to conjugacy, the sub-algebra h * contains (a ⊕ s + ) n.
Consider the vector space Sym(g) of bilinear symmetric forms on g. The group G acts naturally on Sym(g) by g.Φ(X, Y ) = Φ(Ad g -1 X, Ad g -1 Y ). Let ., . be the bilinear symmetric form on g defined by
X, Y = g (X * (x), Y * (x)) ,
where g is the pseudo-Riemannian metric, X * , Y * are the fundamental vector fields associated to X and Y and x is the point fixed previously. It is a degenerate symmetric form with kernel equal to h.
Let P be the subgroup of G preserving the conformal class of ., . . It is an algebraic group containing H and normalizing the sub-algebra h. In particular, it contains H * : the smallest algebraic group containing H. Using Fact 3.1 we get that up to conjugacy, the Lie algebra p of P contains (a ⊕ s + ) n.
Proposition 3.2. The Cartan sub-group A does not preserve the metric ., . . Proof. First, as h is an ideal of p, by taking the quotient of both P and H by H°, we can suppose that H is a uniform lattice of P and in particular that P is unimodular.
Assume that A preserves the metric ., . . On the one hand, the groups S + and N preserve the conformal class of ., . . On the other hand, they act on Sym(g) by unipotent elements. So the groups A, S + , and N preserve the metric ., . . But by Iwasawa decomposition (A S + ) is co-compact in S. Thus the S-orbit of ., . is compact in Sym(g) and hence trivial by Lemma 2.6. Therefore S and N are subgroups of P . This implies that for any p ∈ P , det (Ad p ) |g/p = 1. Indeed, the action of G on (s c + r)/n factors trough the product of the action of S c on s c by the trivial action on r/n. As P contains S and N , its action on g/p is a quotient of the action of S c on s c . But S c is compact, thus it preserves some positive definite scalar product and hence the determinant det (Ad p ) |g/p = 1. Now let h ∈ H such that Ad h does not preserve ., . . We have that
1 ≠ det (Ad h ) |g/h = det (Ad h ) |g/p det (Ad h ) |p/h
Finally we get det (Ad h ) |p/h ≠ 1, which contradicts the unimodularity of P.
3.2. Distortion. The group P preserves the conformal class of ., . . There exists thus an homomorphism δ : P → R such that: for every p ∈ P and every u, v ∈ g,
(1) Ad p (u), Ad p (v) = e δ(p) u, v = det (Ad p ) |g/h 2 n u, v
In particular if p ∈ P preserves the metric then δ(p) = 0 and
(2)
Ad p (u), Ad p (v) = u, v
Or equivalently
(3) ad p (u), v + u, ad p (v) = 0
It follows that if the action of p ∈ P on g is unipotent then δ(p) = 0. Therefore, the homomorphism δ is trivial on S -and N but not on A by Proposition 3.2. We continue to denote by δ the restriction of δ to A. We can see it alternatively as a linear form δ : a → R, called distortion, verifying: for every a ∈ a and every u, v ∈ g,
(4) ad a (u), v + u, ad a (v) = δ(a) u, v Definition 3.1.
Two weights spaces g α and g β are said to be paired if they are not ., . -orthogonal.
Definition 3.2. A weight α is a non-degenerate weight if g α is not contained in h.
Definition 3.3. We say that a subalgebra g is a modification of g if g projects surjectively on g/h. In this case g / (g ∩ h) = g/h.
Proposition 3.3. If the weight space g 0 is degenerate then up to modification, g is semi-simple and M = G/H is conformally flat.
Proof. On the one hand, g 0 ⊂ h implies that a ⊂ h. As h is an ideal of p, we get that s + = [s + , a] ⊂ h. On the other hand, r ⊂ g 0 + n ⊂ g 0 + [n, a] ⊂ h. Thus, up to modification, we can assume that g is semi-simple and that h contains a + s + . Now let α max be the highest positive root and let X ∈ g αmax . Then d 1 e X : g/h → g/h is trivial. Yet e X is not trivial. We conclude using [START_REF] Frances | Formes normales pour les champs conformes pseudoriemanniens[END_REF]Theorem 1.4].
A direct consequence of Equation 4 is that if g α and g β are paired then α + β = δ: applying Equation 4 to u ∈ g α and v ∈ g β gives (α(a) + β(a)) u, v = δ(a) u, v for every a ∈ a, so α + β = δ whenever u, v ≠ 0. This shows that if α is a non-degenerate weight then δ -α is also a non-degenerate weight. In particular, if 0 is a non-degenerate weight, then g 0 and g δ are paired and hence δ is a non-degenerate weight. In fact: Proposition 3.4. If 0 is a non-degenerate weight then δ is a root. Moreover s δ ⊂ h.
Proof. First we will prove that the subalgebras a and n δ are ., . -orthogonal. Let a ∈ a such that δ(a) ≠ 0. Using Equation 4 for a, u = a and v ∈ n δ , we get a, ad a (v) = δ(a) a, v . But v preserves ., . , thus by Equation 3, δ(a) a, v = 0. Hence a, v = 0 for every v ∈ n δ . We conclude by continuity. Now if δ was not a root then s δ = 0 and g δ = n δ . Thus a and g δ are orthogonal. This implies that a ⊂ h. But h is an ideal of p, so g δ = [g δ , a] ⊂ h. This contradicts the fact that g δ is paired with g 0 .
To finish we need to prove that s δ ⊂ h. If this was not the case then a would be orthogonal to g δ . Hence g δ ⊂ h which contradicts again the fact that g δ is paired with g 0 .
3.3. The isotropy group is big. From now and until the end we will suppose that the non-compact semi-simple part S nc of G is locally isomorphic to the Möbius group SO(1, n + 1). In this case the Cartan Lie algebra a is one dimensional and we have s nc = s -α ⊕ a ⊕ m ⊕ s α , where α is a positive root, a ∼ = R, m ∼ = so(n), and
s -α ∼ = s α ∼ = R n . Moreover, g ±α = s ±α ⊕ n ±α , g 0 = a ⊕ m ⊕ s c ⊕ r 0 , g β = n β for every β = 0, ±α and r = r 0 ⊕ β =0 n β .
In section 3.1 we saw that the isotropy group H is contained in the algebraic group P which turn out to be big i.e to contain the connected Lie groups A, S α and N . Our next result shows that the group H itself is big:
Proposition 3.5. The Lie algebra h contains a ⊕ s α ⊕ β =0 n β .
Proof. We have that a ⊂ h. Indeed, if 0 is a degenerate weight then we are done. If not, then δ is a root and a ⊂ g 0 is orthogonal to every g β with β ≠ δ. From the proof of Proposition 3.4 we know that a and n δ are orthogonal. Thus it remains to show that a and s δ are orthogonal. For that, let x ∈ s δ be non-zero; then Θ(x) ∈ s -δ and [x, Θ(x)] ≠ 0 in a. Now using Equation 3 and the fact that one of x or Θ(x) preserves ., . , we get ad x (Θ(x)), x = 0. But a is one dimensional, so it is orthogonal to s δ .
To end this proof, we have that h is an ideal of p and so
a ⊕ s α ⊕ β =0 n β = a ⊕ a, s α ⊕ β =0 n β ⊂ h ⊕ [h, p] ⊂ h.
As a consequence we get:
Corollary 3.6. If 0 is a non-degenerate weight, then δ = -α.
3.4. A suitable modification of g. We will show that g admits a suitable modification g . This allows us to considerably simplify the proofs in the next section.
More precisely, we have:
Proposition 3.7. The solvable radical decomposes as a direct sum r = r 1 ⊕r 2 , where r 1 is a subalgebra commuting with the semi-simple factor s and r 2 is an s-invariant linear subspace contained in h. In particular g = s ⊕ r 1 is a modification of g.
To prove Proposition 3.7, we need the following lemma:
Lemma 3.8. We have [s, n] = [s, r] ⊂ h.
Proof of Lemma 3.8. First we prove that [n, g 0 ] ⊂ h. For this, note that by the Jacobi identity and the fact that n is an ideal of g, we have [ β≠0 n β , g 0 ] = β≠0 n β , which in turn is a subset of h by Proposition 3.5. Thus one needs to prove that [n 0 , g 0 ] ⊂ h. We know that n preserves the metric ., . . So using Equation 3 for p ∈ n 0 , u ∈ g 0 and v ∈ g δ gives us: ad p (u), v + u, ad p (v) = 0. But once again by the Jacobi identity, the fact that n is an ideal of g and Proposition 3.5, we have ad p (v) ∈ g δ ∩ n = n δ ⊂ h. So ad p (u), v = 0, which means that [n 0 , g 0 ] is orthogonal to g δ . Using the fact that [n 0 , g 0 ] ⊂ g 0 and that g 0 is orthogonal to every g β
for β ≠ δ, we get that [n 0 , g 0 ] ⊂ h.
Next we have that s c ⊂ g 0 thus [s c , n] ⊂ [g 0 , n] ⊂ h.
Finally we finish by proving that [s nc , n] ⊂ h. On the one hand we have,
On the other hand, as s nc is semi-simple we have by Lemma 2.7 that [s nc , n] ⊂ Vect ([a ⊕ m ⊕ s α , n]) ⊂ h.
Proof of Proposition 3.7. The subalgebra [s, n] = [s, r] is s-invariant, so it admits an s-invariant supplementary subspace r 1 in r. But s acts trivially on r/ [s, n] and thus it acts trivially on r 1 . We take r 1 to be the s-invariant subalgebra generated by r 1 (in fact the action of s on r 1 is trivial).
It is clear that r 1 is a direct sum of r 1 and r 1 : an s-invariant subspace of [s, n]. Consider r 2 to be the supplementary of r 1 in [s, n] = [s, r]. It is s-invariant and by Lemma 3.8 we have r 2 ⊂ h.
The Möbius conformal group: a classification theorem
This section is devoted to prove Theorem 1.1. We distinguish two situations: when m is contained in h and when it is not. In this last one, we first consider the case where only the non-compact semi-simple part S nc is non trivial. Then deduce from it the general case. From now and until the end we will assume, up to modification, that g = s ⊕ r 1 .
4.1. The Frances-Melnick case. We suppose that the sub-algebra m is contained in h. Then we have the following proposition: Proposition 4.1. M is conformally equivalent to the standard sphere S n or the Einstein universe Ein 1,1 .
Proof. Assume first that g 0 is contained in h. Then by Proposition 3.3, M is conformally flat and after modification, r = 0. Moreover, g/h ∼ = s -α . This is because
Now suppose that g 0 is not in h. In this case g -α = g δ is paired with g 0 . But a, m and n δ are contained in h so s -α is paired with s c ⊕ (r 0 ∩ r 1 ). Note that m acts on s -α ⊕ (s c ⊕ (r 0 ∩ r 1 )) by preserving the pairing (in fact the action of m preserves the metric ., . ). On the contrary for n ≥ 2, m ∼ = so(n) acts trivially on r 0 ∩ r 1 and transitively on s -α -{0}, so n = 1. As the metric is of type (p, q), we conclude that the projection of s c ⊕ (r 0 ∩ r 1 ) on g/h is ∼ = R. Thus, after modification g = so(1, 2) ⊕ R = u(1, 1), h = a ⊕ s α = R ⊕ R and hence M is, up to cover, conformally equivalent to Ein 1,1 .
4.2. The non-compact semi-simple case. Here we suppose that m is not contained in h and that the compact semi-simple part s c and the solvable radical part r 1 are both trivial. We will show: Proposition 4.2. The pseudo-Riemannian manifold M is conformally equivalent to Ein 3,3 . By Corollary 3.6, δ is a negative root. In particular δ = -α and g -α is paired with g 0 . In addition g = s -α ⊕ a ⊕ m ⊕ s α and a ⊕ s α ⊂ h. We have: Proposition 4.3. The root space s δ does not intersect h. In particular the metric is of type (n, n).
Proof. If it were the case, then let 0 ≠ X ∈ s δ ∩ h. We have [[X, s -δ ] , X] = s δ so s δ ⊂ h. This contradicts the fact that g δ is paired with g 0 .
Consider the bracket [., .] :
and .∨. : s α ×s -α -→ a its projections on m and a respectively. Direct computations give us: Lemma 4.4.
The Cartan involution identifies s α and s -α , which when identified with R n , m acts on them as so(n). In this case, the map .∨. can be seen as a bilinear symmetric map from R n × R n to R n , and when composed with α gives rise to an m-invariant scalar product ., . 0 on R n . Moreover, by Lemma 4.4, for every x, X ∈ R n , X ∧ x is the antisymmetric endomorphism of R n defined by X ∧x(y) = X, y 0 x-x, y 0 X.
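As a quick check of the antisymmetry claim (a worked step added for the reader, using only the displayed formula for $X \wedge x$):
$$\langle (X\wedge x)(y), z\rangle_0 + \langle y, (X\wedge x)(z)\rangle_0 = \big(\langle X,y\rangle_0\langle x,z\rangle_0 - \langle x,y\rangle_0\langle X,z\rangle_0\big) + \big(\langle X,z\rangle_0\langle x,y\rangle_0 - \langle x,z\rangle_0\langle X,y\rangle_0\big) = 0,$$
so $X\wedge x$ is indeed a skew-symmetric endomorphism of $\mathbb{R}^n$ with respect to $\langle\cdot,\cdot\rangle_0$.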
Let x, X ∈ R n and consider P the plane generated by x, X. Then X ∧ x when seen as element of m ∼ = so(n) is the infinitesimal generator of a one parameter group acting trivially on the orthogonal P ⊥ of P with respect to the scalar product ., . 0 . Hence X ∧ x ∈ so(P ). More generally: Proposition 4.5. Let E be a linear subspace of R n and let x ∈ E. Consider c the Lie subalgebra of so(n) generated by {X ∧ x/X ∈ E}. Then c equals the Lie algebra linearly generated by {X ∧ X /X, X ∈ E}, which in turn equals so(E), the Lie algebra of orthogonal transformations preserving E and acting trivially on its orthogonal (with respect to ., . 0 ).
Proof. First we have c(E) ⊂ E and hence c ⊂ so(E). It is then sufficient to prove that c and so(E) have the same dimensions. For that let {x, X 2 , ..., X k } be a basis of E. Note that {X 2 ∧ x, ..., X k ∧ x, [X i ∧ x, X j ∧ x] , for 2 ≤ i < j ≤ k} are linearly independent. Thus c = so(E).
For every x ≠ 0 ∈ s -α consider:
Proof. By Proposition 4.5 we have,
This implies that
Hence, the projections Θ(Z x )\{0}, for x ∈ s -α , form a partition of g/h.
Next we prove:
Proposition 4.7. The pseudo-Riemannian manifold M is conformally flat.
Proof. We need to prove that the Weyl tensor W (or the Cotton tensor C if the dimension of M is 3) vanishes. Actually we will just make use of their conformal invariance property. Namely: if f is a conformal transformation of M then, (5)
We denote by x̄ the projection in g/h of an element x ∈ g. A direct application of Equation 5 gives us:
(1) W(x, ȳ, z) = 0 for every x, y, z ∈ s -α ;
(2) W(x, ȳ, m) = 0 for every x, y ∈ s -α and every m ∈ m;
Then, from Equation 5 we obtain:
In other words W(x, m 1 , m 2 ) ∈ Θ(Z x ). Now let x, y ∈ s -α , X ∈ s α and m ∈ m. Then again Equation 5 gives us:
But W(x, [X, ȳ] , m) ∈ Θ(Z x ) and W([X, x] , ȳ, m) ∈ Θ(Z y ). Thus, Proposition 4.6 gives us:
(1) If y / ∈ Θ(Z x ) then W(x, [X, ȳ] , m) = 0; (2) In the case y ∈ Θ(Z x ) and X ∈ Z x , we have W(x, [X, ȳ] , m) = 0 (3) If y ∈ Θ(Z x ) and X / ∈ Z x . Then because Θ(X) / ∈ Θ(Z x ) we have:
So as a conclusion we get W = 0.
We finish this section by proving Proposition 4.2: Proof of Proposition 4.2. First note that if n = 1 then m = 0. Thus we assume n ≥ 2. So far we have seen that M = SO(1, n + 1)/H is a conformally flat pseudo-Riemannian manifold of signature (n, n). Since the Lie algebra h contains a + s α , the group H° is cocompact in SO(1, n + 1). Therefore SO(1, n + 1)/H° is connected and compact, with a connected isotropy and hence simply connected. As M is connected, it covers SO(1, n + 1)/H° and thus equals it.
On the one hand, the Einstein universe Ein n,n is simply connected. Thus M is identified to Ein n,n . So SO(1, n + 1) acts transitively on Ein n,n with isotropy H. By Montgomery Theorem [19, Theorem A] any maximal compact subgroup in SO(1, n + 1), e.g. K 2 = SO(n + 1), acts transitively on S n × S n the two fold cover of Ein n,n .
On the other hand, the conformal group of Ein n,n is SO(n+1, n+1). A maximal compact subgroup of it is K 1 = SO(n + 1) × SO(n + 1). Up to conjugacy , we can assume K 2 ⊂ K 1 . Therefore, K 2 = SO(n + 1) acts via a homomorphism ρ = (ρ 1 , ρ 2 ) : SO(n + 1) → SO(n + 1) × SO(n + 1). If SO(n + 1) is simple, then:
-either ρ 1 or ρ 2 is trivial and the other one is bijective, in which case ρ(SO(n + 1)) does not act transitively on S n × S n , -or both are bijective, and ρ(SO(n + 1) is up to conjugacy in SO(n + 1) × SO(n + 1) the diagonal {(g, g)/g ∈ SO(n + 1)}. The latter, too, does not act transitively on S n × S n .
Hence SO(n + 1) must be non-simple, which implies n = 1 or n = 3. But n = 1 was excluded, and there remains exactly the case n = 3, for which M is conformally equivalent to Ein 3,3 . 4.3. The general case. In this section we will show Theorem 1.1 in the general case. We suppose that g = s -α ⊕ a ⊕ m ⊕ s α ⊕ s c ⊕ r 1 . Let us denote by m 0 = m ∩ h so that so(1, n + 1) ∩ h = a ⊕ s α ⊕ m 0 . A priori the subalgebra m 0 could be of any dimension in m. Nevertheless, the hypothesis that m is not contained in h restricts drastically the possibilities. So we have: Proof. If n = 2 then m = so(2). Hence [p, s α ] = a ⊕ m for any non-null p ∈ s -α . Recall that s -α preserves the metric, so by applying Equation 3 for p = v ∈ s -α , u ∈ s α we get s -α , m = 0. Thus m ⊂ h, which contradicts our hypothesis.
Assume that n ≥ 3 and suppose that m 0 has codimension less then n-1. Denote by M 0 the connected subgroup of SO(n) corresponding to m 0 .
If the action of M 0 on s -α ∼ = R n is reducible then M 0 preserves the splitting R d × R n-d and hence is contained in SO(d)×SO(n-d). Thus M 0 has codimension bigger than the codimension of SO(d) × SO(n -d) which in turn achieves its minimum if d = 1 or n -d = 1 and hence M 0 = SO(n -1). One can identify m 0 with so(E) for some n-1 dimensional linear subspace E of s -α . Let then e ∈ s -α such that s -α = Re ⊕ E. Fix a non zero element x ∈ Θ(E), we have ad x (e), X + e, ad x X = 0 for every X ∈ s -α and so in particular e, ad x e = 0. In addition by Proposition 4.5, [E, Θ(E)] = a ⊕ m 0 ⊂ h thus ad x e, X = 0 for every X ∈ E and hence ad x e is orthogonal to s -α . This implies that x ∧ e ∈ h ∩ m = m 0 = so(E) which contradicts the fact that x ∧ e is the infinitesimal rotation of the plane Re ⊕ Rx.
The last case to consider is when M 0 acts irreducibly. Let m ∈ m 0 , X ∈ s -α and y ∈ s c ⊕ r 1 then ad m (X), y + X, ad m y = 0. But ad m y = 0 and hence s c ⊕ r 1 is orthogonal to [m 0 , s -α ] which is equal to s -α by irreducibility. Thus s c ⊕ r 1 ⊂ h and we are in the non-compact semi-simple case. Therefore n = 3 and m ∼ = so(3). Non trivial Sub-algebras of so(3) have dimension one and are reducible. So the only left possibility is m 0 = m ∼ = so(3) which show that m ⊂ h and this is a contradiction.
End of Proof of Theorem 1.1. By Proposition 4.8, m 0 is of codimension n in m. But s -α is paired with g 0 = a ⊕ m ⊕ s c ⊕ (r 1 ∩ r 0 ). Thus s c ⊕ (r 1 ∩ r 0 ) ⊂ h and we are also in the non-compact semi-simple case. Therefore n = 3 and M is conformally equivalent to Ein 3,3 .
Mehdi Belraouti, Faculté de Mathématiques, USTHB, BP 32, El-Alia, 16111 Bab-Ezzouar, Alger (Algeria)
E-mail address: [email protected]
Mohamed Deffaf, Faculté de Mathématiques, USTHB, BP 32, El-Alia, 16111 Bab-Ezzouar, Alger (Algeria)
E-mail address: [email protected]
Yazid Raffed, Faculté de Mathématiques, USTHB, BP 32, El-Alia, 16111 Bab-Ezzouar, Alger (Algeria)
E-mail address: [email protected]
Abdelghani Zeghib UMPA, ENS de Lyon, France E-mail address: [email protected] |
04109252 | en | [
"info",
"spi.gciv.it"
] | 2024/03/04 16:41:24 | 2020 | https://hal.science/hal-04109252/file/paper9.pdf | Fabio Mensi
Angelo Furno
Rémy Cazabet
Traffic speed prediction in the Lyon area using DCRNN
Keywords: Traffic prediction, Machine learning on graphs, Graph neural networks
Introduction
In this paper, we deal with the problem of traffic speed prediction, i.e., forecasting, based on relevant historical data (e.g., previous speed values, topological features, etc.) up to a certain time t, what will be the average speed on a given set of road segments at time t + T , where T is the prediction horizon.
More specifically, we focus on short term prediction (i.e., a maximum T of one hour) for a subset of road segments in the road network of Lyon, France, by exploiting the past history of observed traffic speeds from a real dataset.
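One standard way to frame this task as a supervised learning problem is to pair a window of past observations with the speeds observed at the prediction horizon. The sketch below is only illustrative: the window length and horizon values are assumptions, not the settings used in the paper.
```python
import numpy as np

def make_samples(speeds, window=12, horizon=4):
    """Build (history, target) pairs from a speed matrix.

    speeds : array of shape (n_timesteps, n_segments) of average speeds
    window : number of past time steps fed to the model
    horizon: number of steps ahead corresponding to the horizon T
    """
    inputs, targets = [], []
    for t in range(window, speeds.shape[0] - horizon):
        inputs.append(speeds[t - window:t])   # history up to time t
        targets.append(speeds[t + horizon])   # value to predict at time t + T
    return np.stack(inputs), np.stack(targets)
```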
Lyon road network
Our dataset consists of floating car data, reconstructed from GPS trajectories collected from 20,000 cars on an average working day. Data is heterogeneously distributed and most of the signal relates to a minor subset of road segments during daytime. To address this issue we have developed a filtering pipeline to select a subset of road segments that meet several criteria, including high data availability, non-stationarity and noise.
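As a rough illustration of such a per-segment screening (a sketch only: the column names and thresholds below are assumptions for illustration, not the criteria used in the study), the selection could look like this:
```python
import pandas as pd

def select_segments(df, min_coverage=0.8, min_std=2.0):
    """Keep road segments with enough observations and a non-trivial speed signal.

    df: one row per (segment_id, timestamp) with a 'speed' column;
    min_coverage / min_std are illustrative thresholds.
    """
    kept = []
    for segment_id, group in df.groupby("segment_id"):
        coverage = group["speed"].notna().mean()   # data availability
        variability = group["speed"].std()         # rules out near-constant series
        if coverage >= min_coverage and variability >= min_std:
            kept.append(segment_id)
    return kept
```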
From the initial road network, consisting of 317,693 nodes, we extract a subgraph of 180 road segments, as highlighted in Fig. 1. This subset is affected by several congestion phenomena, making the traffic speed prediction task particularly challenging and the adoption of deep learning techniques, such as Graph Neural Networks (GNN), well justified.
2 Graph Neural Networks, the idea
In many domains, data is naturally represented in the form of graphs. In chemistry, molecules are made of atoms linked via chemical bonds; in e-commerce, customers and products are linked via their consumer relationship. Novel deep learning methods have been introduced to address those problems where data lies on graphs by exploiting their underlying network structure [START_REF] Scarselli | The graph neural network model[END_REF]. These methods are generally referred to as GNNs and they are typically implemented to solve problems such as node classification, graph classification or link prediction.
We focus our attention on a specific kind of GNN, namely Diffusion Convolutional Recurrent Neural Network (DCRNN), that has already shown promising results in traffic speed forecasting settings.
DCRNN
The DCRNN is a type of GNN specifically aimed at tackling spatiotemporal forecasting tasks, first introduced in an article by Y. Li et al. [START_REF] Li | Diffusion convolutional recurrent neural network: Data-driven traffic forecasting[END_REF]. In the original paper it was described and evaluated on a traffic speed prediction problem, reporting state-of-the-art performances.
Speed data can be represented as graph signals (time series) X t lying on the road network, modelled as a directed graph G, where road segments are represented as nodes and intersections as edges.
Diffusion Convolution operation
Spatial dependency is modelled following an analogy with a diffusion process.
Given a graph signal X ∈ R^{N×P}, with N being the number of nodes in the graph, and the diffusion transition matrices D_O^{-1} W and D_I^{-1} W^T, with D_O representing the out-degree diagonal matrix, D_I the in-degree diagonal matrix, and W the adjacency matrix of the directed graph G, the convolution operation is defined as:
X_{:,p} \star_G f_\theta = \sum_{k=0}^{K-1} \left( \theta_{k,1} (D_O^{-1} W)^k + \theta_{k,2} (D_I^{-1} W^\top)^k \right) X_{:,p} \quad \text{for } p \in \{1, \dots, P\} \qquad (1)
The filtered signal of a node i (1 ≤ i ≤ N ) is the result of combinations of the signals of the neighbors in the graph up to a distance of K steps from the node, weighted by the parameter θ ∈ R K×2 .
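A minimal NumPy sketch of Equation (1) may help fix ideas; the function and variable names are ours, and θ is taken, as in the equation, as a (K, 2) array of scalar filter weights:
```python
import numpy as np

def diffusion_convolution(X, W, theta):
    """Apply Equation (1) to each feature column of the graph signal X.

    X     : (N, P) graph signal (one row per node / road segment)
    W     : (N, N) adjacency matrix of the directed graph G
    theta : (K, 2) filter parameters; theta[k, 0] weights (D_O^{-1} W)^k,
            theta[k, 1] weights (D_I^{-1} W^T)^k
    """
    N, _ = X.shape
    d_out = W.sum(axis=1)
    d_in = W.sum(axis=0)
    # Random-walk transition matrices (guarding against isolated nodes)
    P_fwd = W / np.where(d_out[:, None] == 0, 1.0, d_out[:, None])
    P_bwd = W.T / np.where(d_in[:, None] == 0, 1.0, d_in[:, None])

    out = np.zeros_like(X, dtype=float)
    A_fwd, A_bwd = np.eye(N), np.eye(N)   # k-th powers, starting at k = 0
    for k in range(theta.shape[0]):
        out += theta[k, 0] * (A_fwd @ X) + theta[k, 1] * (A_bwd @ X)
        A_fwd, A_bwd = A_fwd @ P_fwd, A_bwd @ P_bwd
    return out
```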
DCGRU cell
The diffusion convolution operation accounts for the spatial dynamics component of the problem and its implementation is directly connected with the temporal dynamics part.
The temporal dependency is modelled through a recurrent neural network variant, the gated recurrent unit (GRU) [START_REF] Bahdanau | Neural machine translation by jointly learning to align and translate[END_REF]. To combine spatial and temporal modelling each matrix multiplication operation is replaced by the the diffusion convolution operation, described in the previous equation. The resulting modified GRU cell, called DCGRU, can be defined by the following equations:
r^{(t)} = \sigma\big(\Theta_r \star_G [X^{(t)}, H^{(t-1)}] + b_r\big), \qquad u^{(t)} = \sigma\big(\Theta_u \star_G [X^{(t)}, H^{(t-1)}] + b_u\big) \qquad (2)
C^{(t)} = \tanh\big(\Theta_C \star_G [X^{(t)}, (r^{(t)} \odot H^{(t-1)})] + b_c\big), \qquad H^{(t)} = u^{(t)} \odot H^{(t-1)} + (1 - u^{(t)}) \odot C^{(t)}
where X (t) , H (t) are the input and output (or activation) at time t, r (t) , u (t) are the reset gate and update gate, and C (t) is the candidate output, which contributes to the new output based on the value of the update gate u (t) .
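The recurrence in Equations (2) can be sketched as follows. This is an illustrative NumPy transcription rather than the authors' implementation; in particular, each gate here owns a parameter tensor of shape (K, 2, D_in, H), so that the diffusion convolution also mixes features and outputs the H hidden units expected by the gates.
```python
import numpy as np

def gconv(X, P_fwd, P_bwd, Theta, b):
    """Diffusion convolution with feature mixing.
    X: (N, D_in); P_fwd, P_bwd: (N, N) transition matrices; Theta: (K, 2, D_in, H)."""
    out = np.zeros((X.shape[0], Theta.shape[-1]))
    Xf, Xb = X.copy(), X.copy()
    for k in range(Theta.shape[0]):
        out += Xf @ Theta[k, 0] + Xb @ Theta[k, 1]
        Xf, Xb = P_fwd @ Xf, P_bwd @ Xb
    return out + b

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dcgru_step(X_t, H_prev, P_fwd, P_bwd, p):
    """One DCGRU step: X_t (N, P) input, H_prev (N, H) previous hidden state."""
    XH = np.concatenate([X_t, H_prev], axis=1)                      # [X^(t), H^(t-1)]
    r = sigmoid(gconv(XH, P_fwd, P_bwd, p["Theta_r"], p["b_r"]))    # reset gate
    u = sigmoid(gconv(XH, P_fwd, P_bwd, p["Theta_u"], p["b_u"]))    # update gate
    XrH = np.concatenate([X_t, r * H_prev], axis=1)
    C = np.tanh(gconv(XrH, P_fwd, P_bwd, p["Theta_C"], p["b_c"]))   # candidate output
    return u * H_prev + (1.0 - u) * C                               # new state H^(t)
```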
Results in the Lyon area
We have applied the DCRNN method to our case study, using data from October-November 2017, and compared it with simple baselines, such as historical average and naive forecasting, and other more traditional methods, such as ARIMA (AutoRegressive Integrated Moving Average) and VAR (Vector AutoRegression).
In Table 1, the performances of these techniques are reported according to commonly used error metrics. Performance improvement can be observed when using DCRNN, especially with shorter horizons and the Mean Absolute Error (MAE) metric (the metric the neural network optimizes during training). An additional study has been performed on two subareas, where we compare DCRNN to other classical deep learning methods, obtaining similar results, with DCRNN exhibiting better performance overall.
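For reference, the three error measures of Table 1 are computed in the usual way (a sketch; aggregation details across horizons and segments may differ slightly from the paper):
```python
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))
```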
Reusable DCGRU
In order to allow future model developments, the DCGRU cell should be readily usable, independently from its original context. Starting from the original implementation [START_REF] Li | Diffusion convolutional recurrent neural network: Data-driven traffic forecasting[END_REF], we have built the DCGRU cell using the latest version of Tensorflow, a popular machine learning library, in such a way that is compatible with Keras, a deep learning API, and does not rely on deprecated libraries.
Our goal is to provide code that allows for quick model upgrading and can be integrated with minimal effort4 . We have tested the newly implemented DCGRU layer on the prediction of a graph-based synthetic signal and compared it with other commonly used deep learning architectures. The DCGRU layer obtained the best performance, showing potential for employment in similar tasks.
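For readers unfamiliar with the Keras recurrent-cell contract, the skeleton below shows one possible way to package such a cell so that it plugs into tf.keras.layers.RNN. It is not the authors' code (that is available at the repository cited in the footnote); the class is a plain GRU-style cell, and the two matrix products flagged in the comments are where a DCGRU would instead apply the diffusion convolution.
```python
import tensorflow as tf

class MinimalGRUCell(tf.keras.layers.Layer):
    """GRU-style cell following the Keras custom-cell contract:
    a `state_size` attribute plus a `call(inputs, states)` method."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units

    def build(self, input_shape):
        in_dim = int(input_shape[-1]) + self.units
        # In a DCGRU these dense kernels are replaced by diffusion convolutions.
        self.w_gates = self.add_weight(name="w_gates", shape=(in_dim, 2 * self.units),
                                       initializer="glorot_uniform")
        self.b_gates = self.add_weight(name="b_gates", shape=(2 * self.units,),
                                       initializer="zeros")
        self.w_cand = self.add_weight(name="w_cand", shape=(in_dim, self.units),
                                      initializer="glorot_uniform")
        self.b_cand = self.add_weight(name="b_cand", shape=(self.units,),
                                      initializer="zeros")

    def call(self, inputs, states):
        h_prev = states[0]
        xh = tf.concat([inputs, h_prev], axis=-1)
        r, u = tf.split(tf.sigmoid(tf.matmul(xh, self.w_gates) + self.b_gates), 2, axis=-1)
        c = tf.tanh(tf.matmul(tf.concat([inputs, r * h_prev], axis=-1), self.w_cand) + self.b_cand)
        new_h = u * h_prev + (1.0 - u) * c
        return new_h, [new_h]

# The cell then drops into the standard Keras recurrent wrapper:
sequence_layer = tf.keras.layers.RNN(MinimalGRUCell(64), return_sequences=True)
```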
Future perspectives
We aim at upgrading the current DCGRU cell to make it able to exploit information other than traffic speed or graph-based signals. For example non-graph related temporal data (in a traffic prediction context this could mean weather, holidays, weekday, etc...), static graph-related features (e.g. road structure) or the adoption of attention mechanisms [START_REF] Veličković | Graph attention networks[END_REF] in the diffusion convolution operation.
Other efforts could be dedicated to redefine the traffic speed prediction problem in order to meaningfully consider those nodes that have been excluded in the filtering step (mostly urban road segments).
Fig. 1. Lyon subgraph; blue markers represent nodes.
Table 1. Performance comparison for different forecasting approaches, applied on the Lyon area subgraph.
T      Metric   Hist Avg.   Naive    ARIMA    VAR      DCRNN
15min  MAE      6.27        4.01     4.36     4.13     3.23
       MSE      152.83      79.83    85.84    69.33    59.80
       MAPE     9.70%       8.37%    9.63%    9.23%    7.39%
30min  MAE      6.27        5.43     5.78     5.05     4.07
       MSE      152.83      151.48   147.53   131.04   101.82
       MAPE     9.70%       12.11%   13.42%   12.07%   10.07%
60min  MAE      6.27        6.81     7.01     5.76     4.74
       MSE      152.83      295.93   256.92   232.44   167.13
       MAPE     9.70%       19.00%   19.35%   16.32%   14.54%
Code available at github.com/mensif/DCGRU Tensorflow2 |
04109341 | en | [
"shs.socio"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04109341/file/socsci-12-00320-v2.pdf | João Pedro Baptista
Anabela Gradim
Romy Sauvayre
email: [email protected]
Dissemination of a "Fake Miracle Cure" against COVID-19 on Twitter: The Case of Chlorine Dioxide
Keywords: diffusion, beliefs, chlorine dioxide, pseudoscience, science, complementary and alternative medicine, rationality, social network analysis, Twitter, misinformation, controversy
Introduction: Changing Beliefs: From Infodemic to "Miracle Cures"
Some beliefs persist and spread from century to century, while others fall into oblivion. The last two decades have revealed a growing increase in misinformation on the internet following the 1990s [START_REF] Barkun | Conspiracy theories as stigmatized knowledge[END_REF] and then on social media [START_REF] Zhou | Characterizing the dissemination of misinformation on social media in health emergencies: An empirical study based on COVID-19[END_REF]). Misinformation can be considered a virus that spreads out of control. According to the journalist John Zarocostas (2020), while the "infodemic" increases following each pandemic, that of COVID-19, which appeared in December 2019, has the particularity of being based on the preponderant influence of social networks. The COVID-19 pandemic then contributed to the dissemination of new beliefs and adherence to new cults, particularly in the field of health (MIVILUDES 2021). The content of these beliefs then evolves with our societies to include additional current aspects, such as distrust of authorities, conspiracy, or antivaccinism. Adherence to conspiratorial beliefs, alternative medicine, and rejection of political or medical authorities have affected the way citizens accept recommendations concerning COVID-19 (barrier gestures, physical distancing, etc.) and vaccination [START_REF] Soveri | Unwillingness to engage in behaviors that protect against COVID-19: The role of conspiracy beliefs, trust, and endorsement of complementary and alternative medicine[END_REF]. The misinformation widely disseminated via social networks amplified vaccine hesitancy [START_REF] Kanozia | Fake news", religion, and COVID-19 vaccine hesitancy in India, Pakistan, and Bangladesh[END_REF], which limited vaccine adherence (believing in the safety of vaccines) and acceptance (receiving vaccines).
At the same time, the craze for "miracle cures" grew during the COVID-19 pandemic. A "miracle cure" is defined in the present study as a type of complementary and alternative medicine (CAM) that offers unfailing effectiveness against a fatal disease, with safety and without side effects, but whose claimed efficacy is not proven by the scientific community. Beliefs and promotion are then linked. Some people promote a remedy as a "miracle cure", and some may believe in its "miracle" effectiveness.
In French-speaking countries, where the study takes place, the number of French newspaper articles extracted from Europresse, the requests by French internet users on Google extracted from Google Trends, and the French messages posted on Twitter extracted using the Twitter application programming interface (API) V2 peaked during the COVID-19 pandemic (Figure 1). Indeed, Europresse counted 9787 articles published between 2006 and 2021, mainly concentrated in 2020 (2319 articles) concomitant with the COVID-19 pandemic. The same evolution can be seen on Twitter: 87,564 tweets mentioning the terms "miracle cure" or "miracle cures" were posted between 2006 and 2021, and 52% were posted in 2020 and 2021 (i.e., 45,603 tweets).
During the COVID-19 pandemic, promises of healing through "miracle cures" have multiplied and spread. As revealed by Google Fact Check Explorer (Fact Check Tools n.d.) on 14 December 2021, there are three types of "miracle cures". The first are based on medicinal substances, such as molnupiravir, budesonide, bromhexine, ivermectin, and hydroxychloroquine [START_REF] Baker | From COVID-19 Treatment to Miracle Cure: The Role of Influencers and Public Figures in Amplifying the Hydroxychloroquine and Ivermectin Conspiracy Theories during the Pandemic[END_REF]. The second type of "cures" is chemical substances promoted long before the pandemic, such as hydrogen peroxide promoted by Dr. William Campbell [START_REF] Douglass | Hydrogen Peroxide-Medical Miracle[END_REF] or chlorine dioxide promoted by Jim V. Humble. The third type of "cure" is natural, proposing the ingestion of red onions, condensed milk, honey, pepper, ginger, vegetable soup, tea, hot water with lemon, and many other foods. These natural "cures" also invite the inhalation of salt water, clove, or orange vapor.
It might seem surprising to see rational individuals adhering to these types of cure not proven by science. However, this tendency to consider alternative promises of healing is found among a majority of French people. Indeed, in 2019, 68% of French people believed in the benefits of CAM, such as osteopathy, acupuncture, homeopathy, hypnosis, phytotherapy, sophrology, and meditation [START_REF] Odoxa | Baromètre santé 360. Les médecines alternatives et complémentaires[END_REF]. Almost half of the French (44%) even consider them to be more effective than conventional medicine. This leads some to replace conventional medicine with alternative medicine. Parallel to this trend observed among citizens, researchers are increasingly interested in the beneficial effects of CAM. A systematic review [START_REF] Badakhsh | Complementary and alternative medicine therapies and COVID-19: A systematic review[END_REF] published in September 2021 showed that CAM improved the physical and psychological symptoms of patients with COVID-19. How then can these "miracle cures" be considered as beliefs when science demonstrates the effectiveness of some of them?
Objective
The objective of this study is to explore the boundary between science and belief based on the case of chlorine dioxide, a toxic bleaching agent for textiles or paper that also has disinfectant properties (water, surfaces), presented as a "miracle cure" particularly effective against COVID-19. The aim is to measure, among French-speaking Twitter users, the spread of misinformation about and fact-checking of a "miracle cure" presented as scientifically proven.
First, the two primary forms of chlorine dioxide promoted will be described: the Miracle Mineral Solution (MMS) and Chlorine Dioxide Solution (CDS). Second, the content of messages posted in French on Twitter about chlorine dioxide and the links between users (tweeters) will be analyzed.
Using these data and academic articles published on chlorine dioxide, this study aims to explore how a belief perceived as irrational (ingesting a bleach derivative to treat COVID-19) can be legitimized by a scientific discourse and what consequences this has in the dissemination of "miracle cures" on a social network such as Twitter.
While the dissemination of fake miracle cures on social media can generate public health problems, to the best of our knowledge, there is a lack of studies on this issue. While health misinformation has been explored [START_REF] Waszak | The spread of medical fake news in social media-The pilot quantitative study[END_REF][START_REF] Rovetta | Global Infodemiology of COVID-19: Analysis of Google Web Searches and Instagram Hashtags[END_REF][START_REF] Suarez-Lledo | Prevalence of Health Misinformation on Social Media: Systematic Review[END_REF], the dissemination of fake miracle cure information has yet to be investigated. Thus, this study will reduce the research gap on this topic and provide insight into the mechanisms of the spread of medical beliefs and its interaction with science and misinformation.
Two Types of Chlorine Dioxide Miracle Cures
The Miracle Mineral Solution
The first type of "miracle cure" based on chlorine dioxide was "discovered" by Jim V. Humble, the founder of the Genesis II Church of Health and Healing (Genesis II Church of Health and Healing n.d.). Founded in 2010, this nonreligious church is located in Florida (United States v. Genesis II Church of Health and Healing 2020). "Bishop" Humble presented himself as a former aerospace "research engineer" (Humble and Loyd 2016) converted to "gold mining" [START_REF] Humble | MMS Health Recovery Guidebook[END_REF]. He promoted a miracle cure derived from bleach, which he called the Miracle Mineral Supplement (MMS), comprising chlorine dioxide (MMS1) or calcium hypochlorite (MMS2) [START_REF] Humble | MMS Health Recovery Guidebook[END_REF].
Humble said he discovered, as early as 1996, that chlorine dioxide quickly eradicated malaria during an expedition to South America. This product has, according to him, proven its effectiveness on "a wide range of diseases, including cancer, diabetes, hepatitis A, B, C, Lyme disease, MRSA, multiple sclerosis, Parkinson's, Alzheimer's, HIV/AIDS, malaria, autism, infections of all kinds, arthritis, acid reflux, kidney or liver disease, aches and pains, allergies, urinary tract infections, digestive problems, high blood pressure, obesity, parasites, tumors and cysts, depression, sinus problems, eye disease, ear infections, dengue fever, skin problems, dental issues, problems with prostate (high PSA), erectile dysfunction, and many others" [START_REF] Humble | MMS Health Recovery Guidebook[END_REF].
The Humble website also contains many testimonials classified by type of illness or condition. The most filled category is that of cancer with 111 testimonials (What Is MMS n.d.). That of the coronavirus contains 23 testimonials posted between March and November 2020. One attests to the effectiveness of the MMS treatment on COVID-19 in four doses of 24 drops (Coronavirus n.d.).
To convince his followers, as shown in the following quotation, Humble uses pseudoscientific arguments based on valid scientific knowledge but grants the MMS a broader scope of application than what science has demonstrated: "A great deal of evidence given by the FDA, EPA and various industrial corporations prove scientifically that MMS1 (chlorine dioxide) kills and or oxidizes pathogens and poisons in food, public water systems, hospitals, and even slaughter houses. It is our belief that the same thing can and does happen in the human body". [START_REF] Humble | MMS Health Recovery Guidebook[END_REF] In short, according to this fallacious reasoning, if chlorine dioxide purifies and disinfects water and surfaces and is used in settings such as hospitals, laboratories, and drinking water treatment, then it can also disinfect the water contained in the body of anyone who ingests it.
However, as early as 2010, the French Agency for the Safety of Medicines and Health Products (ANSM) warned against the ingestion of chlorine dioxide, stating that "no medical efficacy of this product has been proven" (ANSM 2010). The Food and Drug Administration (FDA) did the same in 2019 (FDA 2019), specifying the risks of ingesting MMS that the agency had identified in consumers: "severe vomiting, severe diarrhea, life-threatening low blood pressure caused by dehydration, and acute liver failure after drinking these products" (FDA 2019). In 2020, the FDA fined Humble, Mark Grenon, and his sons (Joseph and Jordan) for the unauthorized sale of products for treating COVID-19 (Center for Drug Evaluation and Research 2020). The Grenons were responsible for selling and promoting MMS within the Genesis II Church before being arrested and charged on the basis of the FDA report (The United States Attorney's Office, Southern District of Florida 2021). Finally, a study published in March 2022 demonstrated the toxicity of this product [START_REF] Peltzer | Risk of chlorine dioxide as emerging contaminant during SARS-CoV-2 pandemic: Enzyme, cardiac, and behavior effects on amphibian tadpoles[END_REF].
The Chlorine Dioxide Solution
The second type of "miracle cure" based on chlorine dioxide is the Chlorine Dioxide Solution (CDS) "invented" by Andreas L. Kalcker. Kalcker presents himself as a "biophysicist" (Kalcker 2021) who has been studying chlorine dioxide for over a decade [START_REF] Kalcker | Bye Bye Covid[END_REF].
Kalcker tried to scientificize the promotion of this "miracle cure" by removing the spiritual dimensions brought by Humble and his church, to give it medical and scientific probity. Indeed, in 2021, Kalcker published the book Bye Bye Covid (Kalcker 2021), which, he said, will change readers' "perspective on medicine". Chlorine dioxide is presented as a cure for COVID-19. Moreover, Kalcker claimed that the "CDS has brought healing in countless well-documented cases, and its effectiveness is now, despite what anyone might say, irrefutable" [START_REF] Kalcker | Bye Bye Covid[END_REF]). Kalcker compares the CDS and MMS. He claims that, unlike the MMS, the CDS does not cause adverse effects and that laboratory experiments with mice show that it "prolonged their lifespans up to 30%" [START_REF] Kalcker | Bye Bye Covid[END_REF]).
To reinforce this scientific "face" [START_REF] Goffman | On Face-Work: An Analysis of Ritual Elements in Social Interaction[END_REF], he founded the Coalición Mundial Salud y Vida (COMUSAV, "World Health and Life Coalition" in English) (Accueil n.d.), bringing together several thousand doctors using CDS (Insignares-Carrione et al. 2020). The COMUSAV has more than 46,000 subscribers to encrypted Telegram messaging. Members of the COMUSAV publish scientific articles and reports and give lectures alongside Kalcker and with members of the Liechtenstein Association for Science and Health (LVWG) (Home n.d.) based in Switzerland. The LVWG presents itself as an association aiming to bring together researchers, physicians, and funders. Its website provides numerous documents and publications oriented toward a single subject: demonstrating the effectiveness of chlorine dioxide for COVID-19. The scientific articles highlighted on the LVWG website and through speeches on the internet are signed by COMUSAV members attached to the LVWG or to the Jurica Medical Center (Centro Médico Jurica) located in Mexico. Dr. Aparicio-Alonso practices at this medical center. He specializes in traumatology and orthopedics (Conoce al Dr. Manuel Aparicio Alonso n.d.), and has published three peer-reviewed articles demonstrating the effectiveness of CDS for COVID-19 (Aparicio-Alonso et al. 2021a, 2021b, 2021c). However, the evidence for the effectiveness of chlorine dioxide is contested in other scientific works [START_REF] Baracaldo-Santamaría | Drug safety of frequently used drugs and substances for self-medication in COVID-19[END_REF][START_REF] De | Use of Ivermectin and Chlorine Dioxide for COVID-19 Treatment and Prophylaxis in Peru: A Narrative Review[END_REF].
Materials and Methods
To measure the diffusion of chlorine dioxide as a "miracle cure" against COVID-19, the terms "chlorine dioxide" and "COVID" were searched in French-speaking messages posted on Twitter between 1 December 2019 and 30 November 2021 and extracted using the Twitter API V2 Academic Research with a Python script request ("dioxide de chlore" AND COVID lang: fr). The following data were downloaded: tweet content, tweet ID, author ID, and creation date. A total of 1,252 messages were collected containing 596 unique tweets. An analysis of tweets per user was conducted to identify bot accounts, and fake users were removed from the collected data.
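As an illustration, the collection step described above could be scripted roughly as follows. This is a minimal sketch: the bearer token is a placeholder, the query string is adapted from the request reported in the text, and the pagination loop follows the documented behaviour of the v2 full-archive search endpoint available to Academic Research accounts at the time of the study.

```python
import requests

BEARER_TOKEN = "..."  # placeholder: Academic Research credentials
URL = "https://api.twitter.com/2/tweets/search/all"

params = {
    "query": '"dioxyde de chlore" COVID lang:fr',   # adapted from the query reported above
    "start_time": "2019-12-01T00:00:00Z",
    "end_time": "2021-11-30T23:59:59Z",
    "tweet.fields": "id,author_id,created_at,public_metrics",
    "max_results": 500,
}

tweets, next_token = [], None
while True:
    if next_token:
        params["next_token"] = next_token
    resp = requests.get(URL, params=params,
                        headers={"Authorization": f"Bearer {BEARER_TOKEN}"})
    resp.raise_for_status()
    payload = resp.json()
    tweets.extend(payload.get("data", []))  # tweet content, tweet ID, author ID, creation date
    next_token = payload.get("meta", {}).get("next_token")
    if not next_token:
        break
```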
As "retweeting rapidly disseminates messages across user networks, creating a wider sphere of influence" [START_REF] Harrigan | Identifying influencers on social media[END_REF], several data were collected and aggregated to create a new indicator, a "diffusion metric", to measure the dissemination of messages. The larger the "diffusion metric" is, the more the message is spread on social media, and the more influential it is likely to be. To obtain a diffusion metric for each tweet, the number of retweets, mentions, likes, and replies was summed. This new metric was then used to select the 100 most influential tweets (excluding duplicates and tweets posted by robots), i.e., the 100 most retweeted, liked, mentioned, and replied-to tweets. Indeed, as many messages are relayed, focusing on influential tweets is more informative for the study of information dissemination.
Following this step, the influential tweets were labeled in two categories: (1) a sentiment analysis category (pro, against, or neutral) and ( 2) an abductive categorization according to the recurrence of the content and the main arguments presented (effectiveness of the treatment, scientific evidence, fight against misinformation, etc.). The abductive approach, inspired by the philosopher Charles S. Peirce [START_REF] Peirce | Collected Papers of Charles Sanders Peirce[END_REF], consists of grouping the content of messages as faithfully as possible by abstraction. This approach then makes it possible to avoid defining categories a priori and thus avoid being far from the observable. Finally, this has the advantage of limiting the biases likely to influence the classification process.
The categories of these 100 influential tweets were then applied to the retweets to build a file of edges and nodes to visualize the links between tweeters using Gephi 0.9.2 software. The most relevant indicator for this study is the eigenvector centrality [START_REF] Bonacich | Eigenvector-like measures of centrality for asymmetric relations[END_REF] because it makes it possible to highlight the links in a graph between the most influential tweeters and thus better visualize the dissemination of information in the social network.
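The same centrality can be reproduced outside Gephi, for instance with NetworkX, assuming an edge file that lists one Source-Target pair per retweet or mention (file and column names are hypothetical):

```python
import networkx as nx
import pandas as pd

# Hypothetical edge list exported for Gephi: one row per retweet/mention.
edges = pd.read_csv("edges.csv")  # columns: Source, Target

G = nx.DiGraph()
G.add_edges_from(edges[["Source", "Target"]].itertuples(index=False, name=None))

# Eigenvector centrality highlights tweeters connected to other influential tweeters.
centrality = nx.eigenvector_centrality(G, max_iter=1000)
top_influencers = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:10]
```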
Results
The 100 most influential tweets, posted by 70 tweeters, gather 6287 "diffusion metrics" (likes, retweets, quotes, or replies). Their authors are followed by a total of 1,620,808 people (followers). It should be noted that only 3 tweeters have an identity verified by Twitter and practice journalism. They alone have 1,306,259 followers. These influential tweets were posted between 24 April 2020 and 28 November 2021, with the highest activity in August, September, and November 2021 (Figure 2). Note that, in this influential sample of tweets, the higher the number of followers, the number of accounts followed, and the participation in a discussion, the higher the diffusion metric, as Spearman's correlation analysis shows (respectively, r_s = 0.488, p < 0.001; r_s = 0.271, p < 0.05; r_s = 0.502, p < 0.001).
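These correlations can be reproduced with SciPy on the table of influential tweets built in the Methods section (a sketch; the column names are illustrative):

```python
from scipy.stats import spearmanr

# influential is assumed to be the 100-row table of most influential tweets.
for col in ["followers_count", "following_count", "reply_count"]:
    rho, p = spearmanr(influential[col], influential["diffusion_metric"])
    print(f"{col}: r_s = {rho:.3f}, p = {p:.4g}")
```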
As Table 1 shows, while the majority of tweets (77%) do not mention anyone in particular, almost a quarter of tweets (23%) mention people belonging to the COMUSAV promoting CDS (including Manuel Aparicio-Alonso and Andreas Kalcker). Mark Grenon, one of the promoters of the MMS, was also found. A sentiment analysis (Figure 3) shows that these influential tweets mainly relay content favorable to chlorine dioxide ("pro chlorine dioxide") and, more rarely, comments coming in opposition to this product ("con chlorine dioxide"). The "pro chlorine dioxide" messages not only are more numerous but also include 3.2 times as many "diffusion metrics" as the "con chlorine dioxide" messages. In addition, the "pro" tweets have 9.2 times fewer followers than the "con" tweets but were nevertheless more widely relayed.
As shown in Table 2, influential tweets mostly promote the effectiveness of chlorine dioxide against COVID-19 (50%) or argue that scientific evidence demonstrates its effectiveness (15%). A total of 69% of tweets present chlorine dioxide as an effective treatment, administered by doctors or having been scientifically proven. A third type of tweet attempts to deny or contest the effectiveness of this treatment. These debunking tweets are the most numerous in the full data but are poorly represented (14%) among the most influential tweets. Considering these data, fact-checking or any message aimed at reducing the spread of misinformation has a limited scope in the process of disseminating information on Twitter. This is confirmed by a network analysis of all the tweets and retweets collected. Indeed, the influence of tweets disseminated by a French media outlet dedicated to fact-checking ("Fact and Furious", which disappeared at the end of 2022 (La Rédaction 2022)) had a limited and circumscribed scope (see green group in Figure 4). In contrast, the "pro chlorine dioxide" network has more links and more people relaying tweets with arguments in favor of chlorine dioxide (in purple in Figure 4). Arguments based on scientific evidence (claiming that a study demonstrates the effectiveness of the product) and doctors claiming the effectiveness of chlorine dioxide have a wider reach on this social network. The pro chlorine dioxide group, formed by relaying the same types of tweets, is carried by three main influencers. Few discussions take place with the fact-checking group of tweeters "con chlorine dioxide" (in green in Figure 4).
Therefore, the "pro chlorine dioxide" constitute a homogeneous group in terms of beliefs that relay more what they present as excellent news, namely, to have scientific proof of the effectiveness of the treatment, which also allows them to oppose anti-COVID vaccination. In addition, their followers are invited to click on the links pointing to the Odysee website dedicated to hosting videos similar to YouTube (Odysee Help 2021; Pezet 2020). Created in September 2020, Odysee had more than one million subscribers at the end of 2021. It notably hosts the very controversial conspiratorial documentary Hold-Up on COVID-19. Links posted on Twitter lead tweeters to Odysee, where they can see videos promoting the CDS treatment against COVID-19 by Dr. Aparicio-Alonso introducing himself as a doctor (13,000 views as of November 2021) and by Kalcker introducing himself as a researcher (49,000 views as of August 2021).
Discussion
Dynamic of Misinformation Dissemination
This study aimed to explore the dissemination of false health information and, in particular, the promotion on Twitter of a "miracle cure" for COVID-19 during the pandemic. The number of tweets mentioning chlorine dioxide and COVID-19 had several peaks: a first peak in July 2020 and then three peaks in August, September, and November 2021. By comparison, a study looking at the evolution of Google queries about chlorine dioxide among the people of Mexico also showed peaks in July 2020 but differed at the level of the January 2021 peak [START_REF] Chejfec-Ciociano | Misinformation About and Interest in Chlorine Dioxide During the COVID-19 Pandemic in Mexico Identified Using Google Trends Data: Infodemiology Study[END_REF].
Furthermore, this study has shown by means of a network graph that messages promoting misinformation, even if they are likely to be quantitatively less numerous, spread more widely than those based on more reliable information. This finding aligns with previous results showing that social media promotes the widespread and rapid dissemination of conspiracy theories [START_REF] Cinelli | The COVID-19 social media infodemic[END_REF]) and false news [START_REF] Vosoughi | The spread of true and false news online[END_REF]) by their supporters [START_REF] Featherstone | Relationship of people's sources of health information and political ideology with acceptance of conspiratorial beliefs about vaccines[END_REF] and that misinformation about COVID-19 spreads more on social media if the content is presented positively [START_REF] Zhou | Characterizing the dissemination of misinformation on social media in health emergencies: An empirical study based on COVID-19[END_REF]. This phenomenon is also found in the dissemination of antivaccinism on the internet [START_REF] Kata | A postmodern Pandora's box: Anti-vaccination misinformation on the Internet[END_REF]. Finally, this observation was established on social networks during the pandemic: even if the antivaccine network is quantitatively smaller than the provaccine network, it extends more widely and thus tends to spread to a growing number of people, especially undecided people [START_REF] Johnson | The online competition between pro-and anti-vaccination views[END_REF]). In addition, the fact-checkers had few interactions with misinformation spreaders in the network; consequently, they had little influence on the COVID-19 misinformation spreading, consistent with previous findings [START_REF] Shahi | An exploratory study of COVID-19 misinformation on Twitter[END_REF].
The Dynamic of Adherence to Misinformation
How can we explain the adherence to and diffusion of "fake miracle cures"? The ethics of belief consists, as expressed by the philosopher William K. Clifford [START_REF] Clifford | The Ethics of Belief[END_REF], of giving up believing without sufficient proof to do so. However, people adhering to or relaying pro chlorine dioxide messages can justify their adherence by means of assertions by experts (doctors, researchers) and academic journals. Consequently, they have factual elements whose epistemic value increases with the status of the people promoting this treatment (doctor or researcher) and the process by which the administration of the proof was carried out (scientific method with publication of conclusive results in an academic journal). The justification of their beliefs is therefore both rational (2003) and deontological.
In the case of chlorine dioxide dissemination, three elements justify the belief adherence of the followers. First, physicians such as Dr. Aparicio-Alonso present an unambiguous discourse on the efficacy of the product against COVID-19, as when he was interviewed by Stew Peters on his podcast [START_REF] Stew | À ÉCOUTER POUR TELLEMENT DE RAISONS!!!![END_REF].
Second, in the most retweeted chlorine dioxide video since the start of the pandemic dated 20 August 2021 (Astrid Stuckelberger-Andreas Kalcker-dioxyde de chlore 2021), Kalcker presents himself as a biophysicist who has been researching this product for 14 years. He claims to have found "the best oxygen provider in the blood that eliminates pathogens, especially viruses". He also claims that nearly 5000 physicians practicing in more than 25 countries are using the CDS with great success. He then recalls that they have proven the effects of this product without any toxicity or side effects by publishing, in particular, three peer-reviewed articles. One of these articles is based on a study conducted on a very large corpus of patients (100,000) in collaboration with many doctors practicing at hospitals. Kalcker said that the CDS had a preventive effect on 90,000 patients and their families, who then did not contract COVID-19. Like Aparicio-Alonso, he claims a recovery rate of 99.3%. Finally, he claims to have been his first human research subject since he saved his own "life more than once with this solution".
Third, internet users have "scientific evidence" published in peer-reviewed articles. They can be found on the Kalcker site but also on other journals' websites. Aparicio-Alonso is a coauthor of these three articles published in August and September 2021 in the journals Journal of Infectious Diseases and Therapy and International Journal of Multidisciplinary Research and Analysis. These journals, like thousands of others, are referenced neither by the Journal Citation Reports (JCR) based on the Web of Science (WoS) nor in the Scimago Journal & Country Rank (SJR) based on Scopus. Therefore, these journals do not benefit from a reputation in the scientific field. However, they are also not listed among the predatory journals ((New) List of Predatory Journals-2023 2020; Potential Predatory Scholarly Open-Access Journals 2021), which would fully disqualify their contribution.
Fake Miracle Cure or Scientific Discovery of an Effective Treatment?
During the COVID-19 pandemic, there have been numerous potential treatments (remdesivir, lopinavir, interferon, dexamethasone, tocilizumab, etc.), which have led to several randomized clinical trials (Solidarity, Discovery, Recovery, and others), which are the most esteemed method in the medical field. In 2020, Professor Didier Raoult's team published a study on hydroxychloroquine [START_REF] Gautret | Hydroxychloroquine and azithromycin as a treatment of COVID-19: Results of an open-label non-randomized clinical trial[END_REF], which led many scientists to believe that it was an effective remedy for COVID-19 and then to conduct clinical studies to test this hypothesis. In "normal science" [START_REF] Kuhn | The Structure of Scientific Revolutions[END_REF], sometimes a study transforms knowledge, and then a process of verification and controversy ensues to confirm or dispute the results obtained. In the case of hydroxychloroquine, posterity and replication studies have not confirmed the substance's effectiveness against COVID-19.
The only clinical trial on chlorine dioxide to our knowledge was registered on the National Institutes of Health (NIH) website in April 2020 (Insignares-Carrione and Bolano 2020) and led to the publication of an article in March 2021 in the unclassified Journal of Molecular and Genetic Medicine (Insignares-Carrione et al. 2021). The article reports, based on a sample of 40 patients, the effectiveness of the substance against COVID-19. However, several indicators call for caution: the time between submission and publication of the article was only 7 days, whereas the process requiring evaluation by experts is generally much longer (several months); the journal is not classified; and the authors are not affiliated with universities or research institutes.
On 17 November 2021, another article appeared in the esteemed journal BMC Public Health demonstrating the effectiveness of chlorine dioxide in the prevention and treatment of COVID-19 in a sample of 3630 Peruvians. Articles in favor of the effectiveness of chlorine dioxide subsequently accumulated. On 16 July 2022, an article published in the journal Oral Diseases [START_REF] Soriano-Moreno | Factors associated with the consumption of chlorine dioxide to prevent and treat COVID-19 in the Peruvian population: A cross-sectional study[END_REF], classified by the JCR and SJR, concluded that chlorine dioxide can be effective against SARS-CoV-2. On 9 December 2022, the JCR- and SJR-ranked journal BioScience Trends published an editorial [START_REF] Asakawa | Focusing on development of novel sampling approaches and alternative therapies for COVID-19: Are they still useful in an era after the pandemic?[END_REF]) and correspondence [START_REF] Cao | Can nasal irrigation with chlorine dioxide be considered as a potential alternative therapy for respiratory infectious diseases? The example of COVID-19[END_REF]) citing nasal chlorine dioxide as a possible treatment for COVID-19. The authors cite two of the three articles by Dr. Aparicio. The correspondence [START_REF] Cao | Can nasal irrigation with chlorine dioxide be considered as a potential alternative therapy for respiratory infectious diseases? The example of COVID-19[END_REF]) concludes that chlorine dioxide may be an effective remedy but that further research is needed.
In the case of chlorine dioxide, the research conducted offers contradictory results, and some researchers speak of controversy [START_REF] Liester | The chlorine dioxide controversy: A deadly poison or a cure for COVID-19[END_REF]. Then, chlorine dioxide could no longer be presented as a "fake miracle cure" but as a potentially therapeutic substance analyzed, discussed, and published as any other substance studied during the pandemic for treatment purposes. This raises important questions about the boundary between science and misinformation. Indeed, science makes it possible to draw this border. However, when scientists have unverified beliefs and try to publish them, the beliefs become potential knowledge until the process of replication allows the scientific community to decide. Until then, it has been difficult to say that this is misinformation. This leads unverified knowledge to spread widely on social networks, as this study shows.
Conclusions
The COVID-19 pandemic and the uncertainties regarding the disease have generated high expectations in terms of care. Researchers around the world have worked hard and published more than 50,000 articles on COVID-19 in 2020 and 2021 according to the WoS. Along with this effort by scientists, pseudoscience has offered "miracle cures" against COVID-19.
This study focuses on the examination of the dissemination of a "miracle cure" on Twitter, namely, chlorine dioxide, a bleach derivative considered toxic by the FDA (FDA 2019) and in academic articles [START_REF] Peltzer | Risk of chlorine dioxide as emerging contaminant during SARS-CoV-2 pandemic: Enzyme, cardiac, and behavior effects on amphibian tadpoles[END_REF]. Chlorine dioxide has been promoted by a nonreligious movement since 1996 as a cure for all fatal diseases and recently by physicians as a cure for COVID-19 despite insufficient evidence of its efficacy against this disease [START_REF] Baracaldo-Santamaría | Drug safety of frequently used drugs and substances for self-medication in COVID-19[END_REF].
The results show that messages promoting misinformation, even if they are likely to be quantitatively less numerous, spread more widely than those based on more reliable information. This result aligns with those found in previous studies conducted on misinformation spreading. In addition, the analysis of messages posted on Twitter has shown that the dissemination of this "miracle cure" accelerated following the publication of peer-reviewed articles proving the effectiveness of chlorine dioxide on COVID-19 and the promotion on Odysee of very high success rates for patients treated by physicians. This shows that physicians, as well as famous or influential people, may generate a "super influencer effect" in medical misinformation that potentiates the infodemic problem.
This study also shows that the process of misinformation entered the sphere of scientific controversy. When scientists have unverified beliefs and try to publish them, the beliefs become potential knowledge until the process of replication allows the scientific community to decide. This allows unverified knowledge to spread widely on social networks. The boundary between science and misinformation is becoming blurred, to the point that, until proven otherwise, chlorine dioxide can no longer be called a "false miracle cure" but rather a controversial treatment against COVID-19.
Figure 1. Comparison of the normalized number of French newspaper articles, Google requests by French users, and French messages posted on Twitter between 2006 and 2021. Sources: Europresse, Twitter API V2, and Google Trends.
Figure 2. A total of 100 most influential tweets mentioning "chlorine dioxide" and "COVID" in French messages posted on Twitter, as a function of the date and the "diffusion metric".
Figure 3. Sentiments (pro, con, neutral) of the 100 most influential tweets mentioning the terms "chlorine dioxide" and COVID on Twitter.
Figure 4. Network graph of tweets and retweets mentioning the terms "miracle cure" (N = 1252) as a function of content type (color group) and eigenvector centrality (size of nodes), using Gephi 0.9.2 and the Force Atlas algorithm.
Table 1. Names mentioned in the most influential tweets.
Name    Number of Mentions    %
Manuel Aparicio-Alonso 9 36%
Andreas Kalcker 7 28%
Chinda Brandolino 3 12%
Astrid Stuckelberger 2 8%
Carlos Alvarado 1 4%
Denis Agret 1 4%
Mark Grenon 1 4%
Patricia Callisperis 1 4%
Table 2. Content of the 100 most influential tweets mentioning the terms "chlorine dioxide" and "COVID" on Twitter.
Tweet Content Number of Tweets %
Effective treatment 50 50%
Scientific evidence 15 15%
Fact-check 14 14%
Other 7 7%
Political or judicial information 6 6%
Product feedback 4 4%
Product danger 3 3%
Genesis II Church of Health and Healing 1 1%
Funding: This research received no external funding.
Data Protection Regulation:
The study is registered with the French National Centre for Scientific Research (CNRS) data protection officer under the following number: 2-22120.
Data Availability Statement: Data are available by contacting the corresponding author upon reasonable request. Note that, in accordance with Twitter's terms of use under the European General Data Protection Regulation, tweets cannot be shared (Twitter Controller-to-Controller Data Protection Addendum n.d.).
Conflicts of Interest:
The author declares no conflict of interest. |
04109445 | en | [
"sdv.mhep"
] | 2024/03/04 16:41:24 | 2021 | https://theses.hal.science/tel-04109445/file/va_Plessier_Aurelie.pdf | PREDICTIVE FACTORS OF THROMBOSIS AND VASCULAR RECANALISATION IN VASCULAR LIVER DISEASES IN THE ABSENCE OF CIRRHOSIS
In vascular liver diseases, a risk factor is found in more than 50% of cases, with several factors combined in more than 30% of cases. However, genetic and familial causes remain poorly characterized. Currently, in the absence of risk factors, the impact of long-term anticoagulant therapy on patency and thrombotic risk has not been demonstrated.
RESUME ETUDES DES FACTEURS PREDICTIFS DE THROMBOSE ET DE REPERMEABILISATION VASCULAIRE AU COURS DES MALADIES VASCULAIRES DU FOIE EN DEHORS DE LA CIRRHOSE
Au cours des maladies vasculaires du foie, un facteur systémique est retrouvé dans plus de 50% des cas et plusieurs facteurs systémiques dans plus d'un tiers des cas. Cependant, certaines causes génétiques et familiales restent mal ou non caractérisées. Enfin, en l'absence de cause, l'impact de la poursuite du traitement anticoagulant au long cours n'est pas démontré.
1. Première partie : préciser les facteurs héréditaires pouvant favoriser le développement des maladies vasculaires du foie.
-Etude 1. Nous avons comparé une cohorte de patients mutés sur les gènes de la maintenance des télomères (TRG) à 396 patients contrôles d'une population d'un centre de soin primaire. Les patients TRG ont été divisés en 2 groupes : i) « avec anomalie hépatique (AH) » (transaminases > à 30 UI/L et/ou anomalie à l'imagerie) ; et ii) « sans AH ». Parmi 132 patients porteurs de mutations TRG, 95 avaient une maladie hépatique (19 transaminases > 30 UI/L, 12 un foie dysmorphique et 64 les deux). La présence d'une mutation TRG était associée à un risque accru de maladie du foie : Hazard Ratio 12,9 IC95 % 7,8-21,3. (p<0,001). La biopsie hépatique chez 52/95 patients a identifié des lésions hétérogènes : 42 % de maladie vasculaire porto-sinusoïdale, 15 % de fibrose/cirrhose avancée et 13 % de NASH. La survie globale et la survie sans TH étaient respectivement de 79 % et 69 %. Un score FIB-4 ' 3,25 et au moins un cofacteur de cirrhose du foie sont associés à une faible survie sans TH.
2. Deuxième partie : Analyse de l'impact du traitement de la cause sur les complications de la maladie vasculaire du foie.
-Etude 2. Le pronostic particulièrement sombre de l'hémoglobinurie paroxystique nocturne -sans traitement -est lié aux thromboses artérielles et veineuses observées chez 30% de ces patients non traités. L'Eculizumab, anticorps anti C5, a permis de réduire l'incidence cumulative des événements thrombotiques entre 0.8 et 3%. Il n'y a pas de données publiées sur l'impact de l'Eculizumab sur l'évolution de la maladie vasculaire du foie en particulier. Nous avons observé un impact significatif de l'Eculizumab sur la survie avec une incidence de mortalité significativement plus faible chez les patients exposés vs non exposés. (IR 2.62 % patients-year; vs 8.7 % patients-year, IRR 2.99 [1][2][3][4][5][6][7][8][9][10] (p value= 0.035)
3. Troisième partie : En l'absence de cause forte de thrombose portale, impact du traitement anticoagulant sur le risque de récidive et analyse de facteurs de risque récidive -Etude 3. L'objectif principal de l'étude était d'évaluer l'efficacité préventive du rivaroxaban sur le risque de récidive de thrombose veineuse profonde de toute localisation, et de décès, chez des patients ayant une thrombose portale chronique, sans thrombophilie à haut risque, randomisés au rivaroxaban 15 mg/jour ou sans anticoagulation. Le critère d'évaluation principal était la survie sans thrombose. La survenue d'événements hémorragiques majeurs constituait un critère d'évaluation secondaire. Une analyse intermédiaire demandée par le promoteur, montrait un taux d'incidence de thrombose de 0 pour 100 personnes-années (PA) dans le bras rivaroxaban, et 19,71 pour 100 AP (IC à 95 % [7, 49 31,92]) dans le bras sans anticoagulation (valeur p du log-rank = 0,0008) avec une médiane de 11,8 mois (IC à 95 % [8,[8][9][10][11][12][13]2]). Sur la base de ces données, le comité indépendant de surveillance a recommandé de switcher les patients du groupe sans anticoagulation à un traitement anticoagulant. Après un suivi médian de 30,3 mois (IC à 95 % [29,9]), une hémorragie sévère est survenue chez deux patients recevant du rivaroxaban et un sans anticoagulant. Dans le groupe sans anticoagulation, les D-dimères > 500 ng/mL à 1 mois étaient associés à une récidive de thrombose (HR 7,78 [1,[START_REF] Bounameaux | Duration of anticoagulation therapy for venous thromboembolism[END_REF]), avec une valeur prédictive négative de 93,5 %.
Mots clés : maladies vasculaires du foie, anticoagulants, cause, thrombophilie, récidive thrombose, téloméropathie, gènes de la maintenance des télomères, hémoglobinurie paroxystique nocturne, Eculizumab, rivaroxaban 1. Specify hereditary factors that can promote the development of vascular liver diseases -Study 1. We compared a cohort of telomere maintenance genes (TRG) mutated patients with 396 control patients from a population of a primary care center. TRG patients were divided into 2 groups: i) "with hepatic abnormality (HA)" (transaminases> 30 IU / L and / or imaging abnormality); and ii) "without HA". Of 132 patients with TRG mutations, 95 had liver disease (19 transaminases> 30 IU / L, 12 with dysmorphic liver and 64 both). The presence of a TRG mutation was associated with an increased risk of liver disease: Hazard Ratio 12.9 95% CI 7.8-21.3. (p <0.001). Liver biopsy in 52/95 patients identified heterogeneous lesions: 42% portosinusoidal vascular disease, 15% advanced fibrosis / cirrhosis and 13% NASH. Overall survival and LT-free survival were 79% and 69%, respectively. A FIB-4 score > 3.25 and at least one cofactor of liver cirrhosis are associated with poor LT-free survival.
2. Analysis of the impact of treating the causes of vascular liver disease, on VLD survival and complications.
-Study 2. The poor prognosis of paroxysmal nocturnal hemoglobinuria -without treatment -is linked to arterial and venous thrombosis observed in 30% of these untreated patients. Eculizumab, an anti-C5 antibody, reduced the cumulative incidence of thrombotic events to between 0.8% and 3%. There are no published data on the impact of Eculizumab on the course of vascular liver disease. We observed a significant impact of Eculizumab on survival with a significantly lower incidence of mortality in exposed vs. unexposed patients. (IR 2.62% patients-year; vs 8.7% patients-year, IRR 2.99 [1][2][3][4][5][6][7][8][9][10] (p value = 0.035) 3. In the absence of a major cause of portal thrombosis, impact of anticoagulant treatment on the risk of recurrence and analysis of risk factors for recurrence -Study 3. The main objective of the study was to assess the efficacy of rivaroxaban on the risk of deep vein thrombosis recurrence, and of death, in patients with chronic portal thrombosis, in the absence of major risk factors of recurrent thrombosis, in petients randomized to rivaroxaban 15 mg / day or without anticoagulation. The primary endpoint was thrombosis-free survival. The occurrence of major bleeding events was a secondary endpoint. An interim analysis requested by the promoter, showed an incidence rate for thrombosis of 0 per 100 person-years (PY) in the rivaroxaban arm, and 19.71 per 100 PY (95% CI [7.49 -31.92]) in the no-anticoagulation arm (logrank p-value=0.0008) with a median follow-up of 11.8 months (95% CI [8.8-13.2]). On the basis of these data, the independent monitoring committee recommended switching patients from the group without anticoagulation to anticoagulation treatment. After a median follow-up of 30.3 months (95% CI [29.8-35.9]), severe bleeding occurred in two patients receiving rivaroxaban and one without anticoagulant. In the group without anticoagulation, D-dimers> 500 ng / mL at 1 month were associated with recurrent thrombosis (HR 7.78 [1.49-40.67]), with a negative predictive value of 93.5 %.
Key words: mutations of telomere-related genes (TRG), paroxysmal nocturnal hemoglobinuria, Eculizumab, rivaroxaban
REMERCIEMENTS
« Si tu veux tracer ton sillon droit, accroche ta charrue à une étoile. » Merci Dominique de m'avoir guidée toutes ces années pour tracer ce sillon afin que tous ensemble nous ajoutions une petite pierre à la recherche des maladies vasculaires du foie. Votre exigence à tout égard doublée de bienveillance est un exemple pour nous tous.
Merci aux
Vascular liver diseases: Definitions and classifications
I-Definitions
Vascular liver diseases (VLD) are rare diseases mostly occurring in the context of liver venous inflow or outflow obstruction. Non cirrhotic portal vein thrombosis (PVT), Budd Chiari syndrome (BCS), and porto sinusoidal vascular disease (PSVD) are the three most frequent diseases affecting the venous system, and although these disorders affect distinct venous sites, simultaneous involvement is frequently encountered. These diseases are related to alterations in hepatic macrocirculation (hepatic veins, portal vein) or microcirculation (portal venules). Even though the various types of VLD differ in presentation and etiology, they still share, in part, underlying causes and mechanisms.
The venous circulation of the liver involves the portal vein and its branches, which supply blood flow from the intestinal tract to the liver. The portal vein is formed by the union of the superior mesenteric vein and the splenic vein. It subdivides into left and right branches, then into segmental veins, and finally into terminal portal venules in the portal tracts. These venules drain into the sinusoids, which drain into the hepatic venous circulation, from the small hepatic veins to the inferior vena cava.
1-Non cirrhotic portal vein thrombosis (Non cirrhotic PVT)
Non cirrhotic PVT is defined as the obstruction of the extrahepatic portal vein and/or its right or left branches, associated or not with obstruction of other segments of the splanchnic venous axis. [1] It does not include isolated thrombosis of the splenic or superior mesenteric veins. Portal vein obstruction secondary to a malignant tumor (frequently but improperly referred to as malignant thrombosis) is considered a different entity, related to encasement or invasion of the veins by malignant tumors, including primary hepatobiliary malignancy, most often in the presence of cirrhosis. We will discuss only non-malignant non cirrhotic portal vein thrombosis. Non cirrhotic portal vein thrombosis is diagnosed either at a recent stage (so-called recent or acute) or at a chronic stage, in which porto-portal collaterals, developing as a sequela of portal vein obstruction, constitute a so-called portal cavernoma. These two stages of the same disease share similar causes (chapter 2). Complications of portal hypertension and mesenteric ischemia due to thrombosis extension to the mesenteric veins are the most dreaded complications of PVT [2,3]. The consequences of portal vein thrombosis depend on the extension of the thrombus. Mesenteric ischemia results from the extension of the thrombus towards the mesenteric venous arches. When ischemia is prolonged, an intestinal infarction may occur. Mortality from intestinal infarction is 20-50%; death is due to peritonitis and multivisceral failure. Intestinal resection following mesenteric-portal venous thrombosis is one of the main causes of short small bowel syndrome. Stenosis of the small intestine may be a late sequela of mesenteric venous ischemia. [4][5][6] Downstream of portal vein thrombosis, the hepatic consequences appear minor. Clinical and biological signs of hepatic injury are absent or transient. Collateral veins appear at the periphery of or within the structures adjacent to the obstructed portion of the portal vein: main bile duct, gallbladder, pancreas, gastric antrum, duodenum. When the obstruction of the trunk of the portal vein persists, the collateral veins tend to progressively enlarge to form a portal cavernoma. Despite this, portal pressure increases, which maintains hepatic blood perfusion. Episodes of gastrointestinal hemorrhage due to rupture of oesogastric varices or portal hypertension gastropathy may occur within an unpredictable period of time. The incidence of GI hemorrhage is 12-20% per year. [1,7] The varicose veins responsible for the hemorrhage may belong to a porto-systemic collateral circulation (esophageal, gastric or fundic varices) or to the veins of the cavernoma (gastric antrum and duodenum). Diagnosis of PVT is based on liver Doppler ultrasound findings combined with 3-phase contrast-enhanced imaging. Multiple classifications of PVT have been suggested, based broadly on anatomical location, degree of obstruction, presence or absence of collaterals, functional implications, or combinations of these. The latest classification of PVT includes 3 parameters: time course, transversal and longitudinal occlusion of the main PV, and response to treatment or interval change. [1]
2-Budd Chiari syndrome
BCS is defined as the obstruction of hepatic venous outflow, which can be located anywhere from the small hepatic veins up to the entrance of the IVC into the right atrium. [8] Hepatic outflow obstruction related to cardiac disease, pericardial disease, or sinusoidal obstruction syndrome (SOS) is excluded from this definition. BCS is classified into: a) primary, most often caused by thrombosis; and b) secondary, in the presence of compression by space-occupying lesions, or invasion by malignancy or parasites. Only primary BCS will be discussed in this manuscript.
BCS is a rare disease, and epidemiological data are scarce. Incidence and prevalence were respectively 0.68 and 4.1 in France in 2010 in a recent French study reporting prospective patients from hepatology units. A recently published meta-analysis reported an estimated pooled incidence of 1 case per million inhabitants per year. [9,10] Manifestations are diverse and range from an absence of symptoms to fulminant hepatic failure, depending on the site of obstruction, the number of hepatic veins obstructed, and the presence of hepatic venous collaterals. In a multicenter European study, ascites (present in 83% of patients), hepatomegaly (67%) and abdominal pain (61%) were the main clinical features. Diagnosis is made with liver Doppler ultrasound performed by an experienced operator aware of the clinical suspicion. Three-phase contrast-enhanced imaging is helpful to complete the assessment.
3-Porto sinusoidal vascular disease (PSVD)
Porto-sinusoidal vascular disease (PSVD) is a rare liver disorder, accounting for up to 10% of the causes of portal hypertension in Europe and North America. [11] It is characterized by i) microvascular liver abnormalities that may cause portal hypertension, and ii) the absence of cirrhosis. PSVD pathogenesis is unknown. PSVD has been associated with diverse systemic diseases (including, among others, various immune system or inherited disorders) which, however, are absent in around 30% of the patients. The course of PSVD is highly variable, ranging from the absence of clinical manifestations to life-threatening complications. PSVD mostly affects young adults. A large proportion of patients develop severe complications related to portal hypertension, including variceal bleeding (in 40% of patients), portal vein thrombosis (in up to 40% at 5 years), or ascites (in 20-50% at 10 years). Liver tumors and lung disorders can also occur. In patients with such severe complications, transjugular intrahepatic portosystemic shunt or liver transplantation has to be considered. By contrast, in other patients who do not have clinically significant portal hypertension, manifestations are limited to abnormal liver blood tests. Due to its rarity, PSVD is frequently overlooked or misdiagnosed as cirrhosis, and knowledge is scarce, coming from small-sized studies of heterogeneous patient cohorts, without long-term data. [11] There is no simple blood test or imaging technique that can lead to a diagnosis of non-cirrhotic portal hypertension. Histology is essential for diagnosis. Regarding the causes of non-cirrhotic portal hypertension, there are multiple terms in the literature for the same histologic findings in the liver, multiple diseases associated with the same hepatic histology, and possibly multiple histologic findings in the liver over time in the same patient. [12] As an illustration, the terms idiopathic non-cirrhotic portal hypertension, incomplete septal cirrhosis, non-cirrhotic portal fibrosis, non-cirrhotic intrahepatic portal hypertension, benign intrahepatic portal hypertension, and non-cirrhotic perisinusoidal hepatic fibrosis have all been used to refer to similar histology. Recently, the Vascular Liver Diseases Interest Group (VALDIG) proposed diagnostic criteria combining the exclusion of cirrhosis, particular histopathologic alterations, and specific portal hypertension manifestations (Figure 1). [11] The prognosis of patients with VLD is also determined by the underlying cause. Recent data suggest that it may be a determinant factor for survival, in particular in PSVD and PVT.
Figure 1: Definition of PSVD
II-Causes and mechanisms of thrombosis in VLD
Prothrombotic disorders are found in 65 to 85% of patients with BCS or PVT, but in only 20-40% of PSVD patients. By contrast, chronic inflammatory conditions are more frequently encountered in patients with PSVD and PVT than in those with BCS. There is evidence for a significant genetic influence in patients with PSVD. Still, in over 40% of PSVD patients the cause remains unknown. The reasons for these differences in causes among the various VLD, as well as the mechanisms underlying the specific sites of thrombosis according to the underlying conditions, remain obscure. Identified causes and their prevalence according to VLD are presented in Table 1. Evidence of a prothrombotic disease is an important consideration in the decision to continue long-term anticoagulant therapy.
1-Acquired conditions
A hereditary or acquired prothrombotic condition (Table 1) is found in approximately 10-50% of cases, and an exogenous risk factor for venous thrombosis (hormonal, etc.) is present in approximately 6-30% of cases. Several factors are associated in the same patient in 10% of cases, which is more frequent than expected by chance alone. Therefore, the identification of one risk factor should not halt the search for additional factors. However, some conditions are difficult to diagnose, such as coagulation inhibitor deficiencies and antiphospholipid syndrome. Indeed, a non-specific decrease in coagulation inhibitors and the presence of non-specific antiphospholipid antibodies at low titres may be the consequence (and not the cause) of portal thrombosis. Strict diagnostic criteria for antiphospholipid syndrome and inhibitor deficiencies are mandatory to characterise high- or low-risk thrombotic factors in VLD.
An antiphospholipid syndrome is confirmed if patients with portal thrombosis have one of the following findings:
- anti-cardiolipin antibodies > 40 or > 99th percentile;
- anti-β2GP1 antibodies > 99th percentile;
- a positive lupus anticoagulant, confirmed on two biological tests at least 12 weeks apart.

Coagulation inhibitor deficiencies are defined as follows:
- protein C deficiency: a functional protein C level of less than 70%;
- protein S deficiency: a functional protein S level of less than 60% in men, less than 50% in young women and less than 55% in postmenopausal women, in the absence of vitamin K antagonist treatment or oestrogen treatment in women, and in the absence of associated decreases in coagulation factors II, VII and X. If in doubt, family screening should be performed; genetic testing for known mutations may also be carried out;
- antithrombin deficiency: a functional antithrombin level of less than 80% amidolytic activity, in the absence of heparin or derivatives and of an associated decrease in coagulation factors II, V, VII and X.
a) Myeloproliferative neoplasm
Myeloproliferative neoplasm (MPN) is the most common cause of vascular liver disorder. Most patients do not meet the classical haematological diagnostic criteria because of haemodilution and hypersplenism as consequences of portal hypertension. Therefore, screening for MPN in patients with VLD, especially those with portal hypertension, should be performed irrespective of blood cell counts. Diagnosis of MPN is now easier by testing a peripheral blood sample for the JAK2 V617F mutation. Indeed, it has been detected in 90% of BCS patients with MPN and in 30-45% of all BCS patients. [13] This mutation is present in 21-37% of patients with portal thrombosis outside the context of cancer and cirrhosis. Finally, in approximately 20% of patients with portal thrombosis, no cause is found despite exhaustive explorations. Other somatic mutations, such as JAK2 exon 12 mutations, MPL mutations and mutations in the gene encoding calreticulin (CALR), are less frequently encountered in VLD patients. [14] Recent data identified patients at high risk of adverse hematologic outcomes (hematological transformation and poorer survival) among MPN/SVT patients. Indeed, in a cohort of 80 MPN/SVT patients, 29% of the cohort harboured at least one molecular risk factor: a JAK2 mutant allele burden ≥50%, or the presence of a chromatin/spliceosome/TP53 mutation. These high-risk patients had worse event-free survival (81% versus 100%, p=0.001) and 10-year overall survival (89% versus 100%, p=0.01) than low-risk patients. [15]

b) Paroxysmal nocturnal hemoglobinuria (PNH)

Paroxysmal nocturnal hemoglobinuria (PNH) is a rare acquired disorder of hematopoietic stem cells, related to a somatic mutation in the phosphatidylinositol glycan class A (PIG-A) X-linked gene, responsible for a deficiency in glycosylphosphatidylinositol-anchored proteins (GPI-AP). The lack of one of the GPI-AP complement regulatory proteins (CD59) leads to hemolysis. The disease is often diagnosed with hemolytic anemia, bone marrow failure and episodes of venous thrombosis. Venous thrombosis occurs in 15 to 30% of patients and mainly affects cerebral, deep limb or splanchnic veins. In a recent study, thrombosis was the major prognostic factor affecting outcome. Risk factors for thrombosis included older age, thrombosis at diagnosis and transfusions, while the risk decreased after immunosuppressive therapy. [16] Splanchnic vein thrombosis in PNH has been described in the literature, in larger studies for Budd-Chiari syndrome and in small studies or case reports for portal and mesenteric veins. Clinical presentations of patients with PNH and haemolytic anaemia are often associated with severe abdominal pain and an inflammatory syndrome, with no clear explanation. Improvement in imaging acquisition enhances the recognition of splanchnic vein thrombosis. Furthermore, promising results have been published using new therapeutic agents, such as eculizumab, a C5-inhibiting recombinant humanized monoclonal antibody. [17] The effect of eculizumab on thrombosis outcome in patients with splanchnic thrombosis is not well known.

c) Behcet's disease

Behcet's disease is highly prevalent in BCS reported around the Mediterranean basin, in Turkey (20-30%) and in Egypt (14.4%), whereas it is found in approximately 5% of patients with BCS in western countries. In a French cohort of BD patients, BCS had a strong prognostic impact, increasing the risk of death 9-fold.
Early treatment of BD with an immunosuppressive regimen may improve BCS outcome and avoid the use of recanalisation procedures in BCS patients with initially severe liver disease. [22,23]

2-Genetic conditions

a) AT, PC, PS deficiency

Antithrombin (AT), protein C (PC) and protein S (PS) deficiencies have been reported with highly variable prevalence in VLD (Table 1). Diagnosis of inherited AT, PC and PS deficiency based on plasma measurements is often difficult in these patients because liver disease has a significant influence on the synthesis and clearance of these coagulation inhibitors. Indeed, acquired abnormalities are frequently due to liver failure or acute thrombosis. Moreover, assessing the inherited character of these deficiencies by usual means requires family investigations, which may be difficult to perform or non-informative. Therefore, extrapolating these results to a permanent prothrombotic condition warrants caution.
b) Other syndromic diseases associated with genetic conditions
Other genetic conditions unrelated to the coagulation pathway have been described as possibly associated with vascular liver disease, and especially with PSVD. A genetic component is suggested by findings from exome sequencing in familial PSVD and by the association with Adams-Oliver syndrome, Turner's syndrome or telomeropathies in very limited studies. The susceptibility to PSVD in telomeropathies merits further evaluation in a large cohort of well-characterized patients.
A French pediatric survey of porto-sinusoidal disease found a syndromic pathology in 40% of cases (Noonan syndrome, Turner syndrome, etc.) and familial involvement in 16%, some with missense mutations of Notch1, a gene involved in vascular remodeling. In adults, porto-sinusoidal disease is also reported in syndromic pathologies such as Turner syndrome, genetic diseases such as cystic fibrosis, and telomerase dysfunction due to telomere-related gene (TRG) mutations. [24][25][26]
c) Telomere-related gene (TRG) mutations
Telomeres are essential to maintain chromosome integrity at each cell division. Telomerase repairs defective telomeres and regulates telomere length. Telomere shortening is associated with advanced age. Telomere maintenance is essential to avoid premature cell senescence due to telomere shortening; premature cell senescence and genetic instability are the consequences of telomerase dysfunction. [27] Rare germline loss-of-function variations in genes that encode components of the telomere repair complex, called telomere-related genes (TRG), accelerate telomere shortening and premature cellular senescence. Defective telomere repair has been causally associated with several human diseases of great diversity in severity, leading to several appellations in the literature: Hoyeraal-Hreidarsson syndrome and dyskeratosis congenita for the most severe forms. Telomere diseases comprise dysfunction of various organs, including bone marrow failure, immune deficiency, hepatic disorders, pulmonary fibrosis and an increased risk of cancer. Bone marrow failure is a major complication of TRG mutations. [28,29] Macrocytosis, cytopenias and bone marrow aplasia are the main characteristics. [30] Myelodysplastic syndrome and acute myeloid leukemia are the most frequently described cancers in TRG mutations, present in 75% of patients with cancer and a TRG mutation. [30] Haematologic disorders are the consequence of quantitative and functional alteration of the hematopoietic stem cells. The consequences are a quantitative and qualitative impairment of mature cells (leukocytes, platelets and red cells), associated with B- and T-lymphocyte alterations (figure). Similarly, it has been shown that pulmonary stem cells are affected in TRG mutation disorders; consequently, lung mesenchymal and epithelial cells can be equally affected (figure). [31] Thus, the pulmonary phenotype in TRG mutations varies from pulmonary fibrosis to isolated or associated pulmonary emphysema (figure). Unexplained interstitial pneumonia and obliterative bronchiolitis are other described pulmonary lesions. Environment has a major impact on pulmonary disease prognosis: smoking and toxic exposures worsen the outcome of idiopathic pulmonary fibrosis. Liver impairment in telomere diseases has been described in 9 studies which included 4-40 patients each, with a total of 86 patients. [32][33][34][35][36][37] The prevalence of liver impairment varies from 5 to 40%, according to patient selection. In these series, patients could either have an identified TRG mutation or a phenotype of telomeropathy in the absence of an identified mutation. In this setting, the prognosis of liver impairment is unknown, and possibly underestimated owing to the severity of involvement of other organs (lung, bone marrow, etc.). Several cases of non-cirrhotic portal hypertension have been described in association with this condition. Hepatopulmonary syndrome seems to be frequently associated with TRG mutations, identified in 25% of patients in a series of patients with pulmonary disease. [35] Data on the prevalence of TRG mutations in chronic liver disease are scarce. One multicentre European study identified TRG mutations in 2.6% of patients with cirrhosis of all causes, compared with 0.5% in a control group without cirrhosis (RR 1.859; 95% CI 1.552-2.227). [32] An overview of these patients' clinical characteristics suggested shorter telomeres, younger age and more aggressive liver disease.
Similarly, the identification of new TERT mutations in 3 patients with NASH, cirrhosis and HCC led to the hypothesis that TRG mutations may play a role in accelerating fibrosis and eventually HCC occurrence. [38] Clarification of the characteristics and outcome of liver disease in TRG mutations is crucially needed. Moreover, the impact of TRG mutations relative to other risk factors for liver disease has never been assessed.

a) Local causes

The prevalence of local causes varies according to the vascular liver disease, being 4 times more frequent in non-cirrhotic portal vein thrombosis than in Budd-Chiari syndrome. [3,20] Intra-abdominal septic foci are the most common. Local inflammation or infection, with or without a systemic inflammatory response, is responsible for a prothrombotic state. Indeed, activation of coagulation is a major response in the host's defense against infection. Leukocytes, platelets and endothelial cells play an important role in thrombus development, in combination with activation of the coagulation system [39]. Septic pylephlebitis is a septic portal vein thrombosis, frequently associated with the presence of intra-abdominal infection/inflammation, occurring in a territory drained by the splanchnic veins. It is associated with bacteremia in 44% of cases and is most often described after appendicitis or diverticulitis. CMV infection has both a local and a systemic role in activation of the coagulation system; nevertheless, only a few cases have been reported [40,41]. The characteristics and outcome of patients with PVT associated with CMV disease are not described. PVT can also occur after intra-abdominal surgery, following intraoperative vascular manipulation or postoperative local inflammation. The natural history of postoperative thrombosis is influenced by the type of surgery, the degree of occlusion and the type of anticoagulation used. The presence of abdominal neoplasia is associated with a complex and multifactorial coagulopathy; a prothrombotic state, compression or vascular invasion may favor the occurrence of thrombotic events. [42]

b) Central obesity and metabolic syndrome

Obesity is a major risk factor for cardiovascular disease. A high body mass index (BMI) is associated with venous and arterial thromboembolic risks. Suggested mechanisms for the development of obesity-related thrombosis are coagulation activation (hypofibrinolysis, increased factor VII and plasminogen activator inhibitor (PAI)-1), platelet dysfunction, and endothelial dysfunction. Recent studies suggest that central obesity and metabolic syndrome are risk factors for idiopathic portal vein thrombosis. Similarly, in a recent epidemiological study, 50% of patients with idiopathic BCS had a BMI >25 kg/m², compared with only 27% when a cause was identified. [9,21] Data on glucose metabolism in this population are needed.
c) Hormones and VLD
The association of BCS with the use of oestrogen-containing oral contraceptives has been evaluated in two case-control studies, covering 1970 to 1983 (OR 2.37, 95% CI 1.05-5.34, p < 0.02) and 1985 to 2000 (OR 2.4, 95% CI 0.9-6.2). Pregnancy also seems to be a trigger for BCS, based on the temporal association between the two conditions, although this is less clear for PVT. [43,44] Data on menstrual cycle disorders and endocrine parameters are lacking in women with VLD. Appropriate contraception remains challenging after a diagnosis of VLD, and there are currently no data to support the use of progestin-only contraception.
d) Other causes
Hypereosinophilic syndromes, granulomatous venulitis, ulcerative colitis and celiac disease have been described in VLD. Human immunodeficiency virus (HIV) infection is frequently associated with PSVD and PVT. Exposure to didanosine and an acquired decrease in protein S activity may contribute to the venopathy of PSVD.
III-Treatment

1-Medical therapy: Anticoagulation and treatment of the cause
A-Anticoagulation

a) Indications
The outcome of VLD has been recently evaluated in large prospective European studies. In BCS, a recent update from a large European cohort showed a good outcome with a stepwise strategy moving from less to more invasive treatments. [20,45] This strategy used anticoagulation therapy and, if inefficient, angioplasty, stenting, TIPS, and ultimately liver transplantation. Similarly, anticoagulation at the early stage of PVT has been shown to prevent thrombus extension and, in 40% of cases, to be accompanied by recanalization. [3] By contrast, for patients with chronic PVT, prospective follow-up data are not available. Retrospective data suggest that anticoagulation is not associated with excessive bleeding. However, in patients at risk of gastrointestinal bleeding due to portal hypertension and with a mild or moderate risk of recurrent thrombosis, the benefit-risk ratio of anticoagulation therapy is particularly unclear. Furthermore, the time point, if any, at which anticoagulation shifts from a beneficial to a detrimental effect has not been explored.
Patients with BCS receive anticoagulant therapy as soon as possible, for an indefinite period of time, in an attempt to reduce the risk of clot extension and new thrombotic episodes. Anticoagulation is the first step of a step-by-step therapeutic strategy, in which procedures of increasing invasiveness are performed according to the response to previous therapy.
Figure 5: Therapeutic strategy in Budd-Chiari syndrome
There are no randomised studies of anticoagulation in BCS. Nevertheless, the rationale for long-term anticoagulation includes: an improved outcome in non-transplanted as well as transplanted patients since the implementation of systematic anticoagulation; a high risk of thrombosis recurrence owing to the high rate of associated prothrombotic diseases; case studies describing improvement following oral anticoagulant treatment, worsening when treatment is stopped and a noticeable improvement when it is reintroduced; as well as recanalization in rare cases of thrombosed hepatic veins. [46] In most centers, low molecular weight heparin has been used, followed by long-term administration of coumarin derivatives, regardless of the underlying thrombotic risk factor. Primary and secondary prophylaxis of portal hypertension-related bleeding is similar to that for cirrhosis.
Patients with recent or acute PVT receive anticoagulant therapy for 6 months in the absence of a severe bleeding contraindication. Permanent anticoagulation is usually recommended in patients harbouring high-risk associated causes and in those with a past history of gut resection for mesenteric infarction. [1,8,47] In expert recommendations for deep vein thrombosis, high-risk associated causes are myeloproliferative neoplasms, antiphospholipid syndrome, homozygous or compound heterozygous factor II G20210A and factor V G1691A mutations, and a personal or first-degree family history of unprovoked venous thrombosis. The benefit of permanent anticoagulation in patients with low-risk factors for thrombosis is unknown.
Anticoagulation in patients with PSVD has been proposed to treat associated portal, splenic and mesenteric vein thrombosis and to avoid the development of chronic portal vein thrombosis and its associated complications, but also to prevent intrahepatic thromboses and, consequently, liver disease progression. This view is supported by a retrospective study from our group suggesting that patients receiving anticoagulation have a better outcome than those without. [12] Moreover, liver histological and imaging findings suggest that intrahepatic thrombosis could play a major role. The most common histological findings are thickening or obliteration of intrahepatic portal venules, which could result from previous thrombotic events, or nodular regenerative hyperplasia, in which atrophy of hepatocytes might be due to ischemia. On imaging, abnormal intrahepatic portal branches are observed in half of the patients. New portal vein thrombosis develops in 20% of patients at 2 years. This incidence is twice as high in patients infected with HIV as in HIV-negative patients. [19,48]
b) Type of anticoagulation
According to recommendations for deep vein thrombosis, patients are first treated with low molecular weight heparin (LMWH), in the absence of an invasive procedure, followed by oral anticoagulant treatment with coumarin derivatives. LMWH is stopped when the international normalised ratio (INR) is within the target range of 2 to 3.
In the absence of severe liver failure, portal hypertension, or portosystemic shunting, direct-acting oral anticoagulants (DOACs) may change the therapeutic scenario for patients requiring short- and long-term anticoagulation. The evidence gathered so far (from pre-approval pivotal trials to real-world post-marketing observational data) consistently confirms that DOACs are overall comparable to vitamin K antagonists (VKAs) in terms of safety, efficacy and effectiveness, and unequivocally documents a clinically relevant reduced risk of intracranial bleeding in the settings of non-valvular atrial fibrillation (AF) and venous thromboembolism [Kearon et al.; Kirchhof et al.]. The metabolism of DOACs is modified in renal and hepatic failure, to a different extent according to the molecule [Graff et al.; Potze et al.]. Thus, patients with coagulopathy or severe liver failure were excluded from the major studies. Data on DOACs in patients with VLD are limited, although available series provide some preliminary evidence on the efficacy and safety of these drugs in VLD patients [Intagliata et al.; De Gottardi et al.]. One series reported 94 patients from 17 centers, including 36 patients with cirrhosis, 58 with splanchnic vein thrombosis and only 4 BCS patients. Child-Pugh score was 6 (range 5-8) and MELD score 10.2 (range 6-19). Indications for anticoagulation were splanchnic vein thrombosis (75%), deep vein thrombosis (5%), atrial fibrillation (14%) and others (6%). The DOACs used were rivaroxaban (83%), dabigatran (11%) and apixaban (6%). Patients were followed up for a median duration of 15 months (cirrhotic) and 26.5 months (non-cirrhotic). One had recurrent portal vein thrombosis and five had bleeding. Treatment with DOACs was stopped in three cases. Renal and liver function did not change during treatment [De Gottardi et al.]. A second series compared rates of bleeding in cirrhotic patients treated with DOACs versus traditional anticoagulation (warfarin and low molecular weight heparin), over 3 years, from a research database. Clinical characteristics of the two groups were similar. There were no differences in bleeding between the two groups (p = 0.9), with two major bleeding events in the traditional anticoagulation group and one in the DOAC group [Intagliata et al.].
A third series analyzed patients with cirrhosis and atrial fibrillation treated with anticoagulation over a 3-year period for thrombosis or prevention of stroke. The primary outcomes were bleeding events and recurrent thrombosis or stroke: both groups had similar total bleeding events (8 DOAC vs 10 other, p=0.12) and similar rates of recurrent thrombosis [Chokesuwattanaskul et al.]. The results of these papers strongly suggest that this new class of anticoagulants could be safe in cirrhotic patients in the absence of severe liver failure, with an incidence of major bleeding and other drug-induced side effects close to that reported in patients treated with conventional anticoagulation [Intagliata et al.; De Gottardi et al.; Chokesuwattanaskul et al.; Hum et al.]. Data on DOACs in patients with SVT are very limited and, until more data are available, DOACs are currently not routinely recommended as first-line therapy in SVT in the absence of cirrhosis. Since DOACs are increasingly used off label in this indication, more data about safety and efficacy are urgently needed.
c) Bleeding complications in patients treated with anticoagulants for deep vein thrombosis or splanchnic vein thrombosis
The bleeding risk of patients treated with VKA for DVT was assessed in a meta-analysis of 33 studies and 4374 patients. It is significant, since the rate of major hemorrhage was 2.06 per 100 patient-years before 3 months (95% CI 2.04-2.08) and 2.74 per 100 patient-years after 3 months (95% CI 2.71-2.77). Fatal hemorrhage rates over these two periods were 0.37 (95% CI 0.36-0.38) and 0.63 per 100 patient-years (95% CI 0.61-0.65). The case-fatality rate was 13.4% (95% CI 9.4-17.4), with an intracerebral hemorrhage rate of 1.15 per 100 patient-years (95% CI 1.14-1.16). When treatment was continued beyond 3 months, the risk was 9.1% (95% CI 2.5-21.7), with an intracerebral hemorrhage rate of 0.65 per 100 patient-years (95% CI 0.63-0.68).
The annual risk of major bleeding on anticoagulant treatment is highly variable in observational studies, from 2% to 29%. Many scores have been developed to evaluate the risk of bleeding [Torn et al.; Gage et al.]. Severe bleeding now seems much lower with DOACs; indeed, the rate of intracranial hemorrhage is about half that observed with standard antithrombotic treatment [Caldeira et al.].

Table 2: HEMORR2HAGES bleeding score and RIETE registry bleeding score [Torn et al.; Gage et al.]

Current data estimate that the case-fatality rate of recurrent VTE is 8% for DVT and 12% for pulmonary embolism, and about 10% for major bleeding, measured during and after anticoagulation [Carrier et al.]. Bleeding and heparin-induced thrombocytopenia are more frequent and more severe in anticoagulated BCS patients than in those receiving anticoagulation for deep venous thromboembolism (VTE). Invasive procedures and portal hypertension are major causes of bleeding in anticoagulated BCS patients, while excess anticoagulation probably plays a secondary role. In a study of 94 consecutive patients, major bleeding episodes occurred at a rate of 22.8 per 100 patient-years [Rautou et al.; Zaman et al.].
In the study by Condat et al., the incidence of gastrointestinal hemorrhage was 12.5 per 100 patient-years (95% CI 10-15). The presence of large varices was an independent risk factor for gastrointestinal hemorrhage. Anticoagulant treatment, age, sex, age of the thrombosis, and history of gastrointestinal hemorrhage did not increase the risk of hemorrhage [2]. In the American study, the incidence of major hemorrhage was 6.9 per 100 patient-years. The risk of hemorrhage was significantly higher in cases of large varices (HR 2.63, 1.72-4.03; p<0.001) or treatment with warfarin (HR 1.91, 1.25-2.92; p=0.003).
In DVT, the risk of hemorrhage with DOACs is comparable to or lower than that with warfarin. In the literature, this risk is estimated at 0.7% for major bleeding versus 7% for minor bleeding. With DOACs, the risk of gastrointestinal hemorrhage is slightly higher, while the risk of severe bleeding, in particular intracranial hemorrhage, is lower. Limited retrospective data show a comparable bleeding risk with DOACs and other anticoagulants in splanchnic vein thrombosis.
Although the last decade has provided a marked impetus to the field of VLD, crucial information is still lacking, particularly in the areas of etiology, inherited factors, and treatment. The etiology is still unknown in 40% of patients; genetic data are incomplete; and therapeutic trials, although needed, have never been performed, so that treatment recommendations (for treatments with potentially fatal side effects in this population at high bleeding risk) rely on limited data.
d) Treatment of the cause
In BCS, the prevalence of underlying thrombophilia is high, since 87% of patients have an underlying risk factor for thrombosis and about 25% have several causal factors. Similarly, in PSVD the prevalence of inflammatory systemic diseases is high. Even though treatments are available for some of these causes (i.e., myeloproliferative neoplasms, antiphospholipid syndrome, paroxysmal nocturnal hemoglobinuria, and Behcet's disease), data on the impact of these treatments on VLD outcome or survival are very limited. Promising results have been presented in MPN: one published study showed that ruxolitinib is effective in reducing spleen size and disease-related symptoms, and one study published as an abstract showed improved survival with hydroxyurea and pegylated interferon [Derrode et al.; Pieri et al.]. More recently, two studies in BCS with Behcet's disease have shown that, in patients treated with anticoagulation, corticosteroids and immunosuppressive therapy, about two-thirds did not require further, more invasive treatment, whereas invasive procedures are usually performed in 60 to 90% of patients. In both cohorts, medical treatment alone was used in 67% and 62% of patients with Behcet's disease, compared with 30% of BCS patients without BD, with similar survival rates [22; Sakr et al.]. More data are needed in other acquired conditions.
IV-Risk of thrombosis recurrence in deep vein thrombosis and identified risk factors
Studies assessing the impact of permanent anticoagulation versus 3 months of anticoagulation in deep vein thrombosis are based on estimating the balance between the reduction in recurrent DVT and the increase in major bleeding. An estimation of both indicators then determines the recommendations for permanent anticoagulation. The risk of DVT recurrence is higher:
- after idiopathic events: 10% per year during the first two years, with an odds ratio of 2.4;
- in male subjects before the age of 60, with an odds ratio of 2-4;
- in patients with a persistently elevated D-dimer level (odds ratio of 2.3 compared with a normal level), especially in patients with low-risk factors [Avnery et al.];
- during the first two years after discontinuation of treatment.
Therefore, provoking factors for DVT, localization of the DVT, a past personal or familial history of DVT, male sex, D-dimer level, thrombophilia, and residual thrombus, balanced against the bleeding risk, are indicators that influence the decision to continue or stop anticoagulation after 3 months [Kearon et al.; Douketis et al.; Adeboyeje et al.; Marchiori et al.; Bounameaux et al.].

V-Risk of recurrence in VLD and identified risk factors

1-Risk of recurrence in Budd-Chiari syndrome with or without anticoagulant treatment
Even though there are no randomized studies comparing continued anticoagulation versus anticoagulation interruption, long-term anticoagulation is recommended in patients with BCS. The rationale for long-term anticoagulation includes the improved survival rate since the systematic introduction of anticoagulation in 1984, the high rate of high-risk thrombophilia, and the severity of liver failure in case of recurrent thrombosis. Currently, the risk of recurrent thrombosis in patients treated with anticoagulation seems low, although it may be higher in patients with myeloproliferative neoplasms [Stefano et al.; Hayek et al.].
In a French radiologic study of TIPS in BCS patients, myeloproliferative neoplasm was a risk factor for TIPS dysfunction during follow-up [25]. After liver transplantation (LT), a long-term anticoagulation regimen is not clearly recommended. In a study including 23 patients with LT, 5 patients who were not treated, or were inadequately treated, with antithrombotic therapy had recurrent thrombosis. Moreover, recurrent BCS after liver transplantation seems to occur in 21 to 27% of non-anticoagulated patients [Mentha et al.; Cruz et al.; Halff et al.].
2-Risk of recurrence or extension of portal thrombosis with or without anticoagulant treatment.
Currently, the optimal duration of anticoagulation in patients with SVT is unclear. Expert recommendations suggest permanent anticoagulation in patients with high-risk associated causes. The rationale for permanent anticoagulation in this population relies on multiple studies, mainly in myeloproliferative neoplasms, although all are uncontrolled retrospective studies. In a European cohort of 604 patients with splanchnic vein thrombosis, including 28% with cirrhosis, a thrombotic event occurred in 7.3 per 100 patient-years; in the 465 patients who received anticoagulation, the rate was 5.6 per 100 patient-years. A high recurrent thrombotic rate of 10.5 per 100 patient-years (6.8-16.3) was observed after anticoagulation discontinuation, whereas the recurrence rate while on VKA was still 3.9 per 100 patient-years. Common risk factors for SVT in this population were MPN in 20%, cancer in 22% and cirrhosis in 18%. The case-fatality rate of thrombotic events was 13.2% (95% CI 6.60%-24.15%). Male sex, solid cancer, myeloproliferative neoplasms, and unprovoked SVT were associated with an increased risk of vascular events [Ageno et al.]. In a European cohort of 181 patients with myeloproliferative neoplasms and splanchnic vein thrombosis, the incidence rate of recurrent thrombosis was 4.2 per 100 patient-years [Stefano et al.]; it was reduced to 3.9 per 100 patient-years in patients treated with VKA, whereas in the small fraction (15%) not receiving VKA it was as high as 7.2 per 100 patient-years. In a small Irish cohort of 14 SVT patients, recurrence of SVT occurred mostly in the setting of interventional liver procedures; recurrent thrombosis outside the splanchnic venous system occurred in 28.5% of patients, predominantly off therapeutic anticoagulation [Greenfield et al.]. In a series of 60 patients with chronic mesenteric venous thrombosis, the two factors that significantly improved survival were anticoagulant treatment and beta-blocker treatment.
Another recent retrospective study compared 512 patients with splanchnic vein thrombosis to 320 patients with DVT. This American study found results that differed from previously reported series: recurrence-free survival at 10 years was comparable in both groups (76% for splanchnic thrombosis and 68% for DVT), and anticoagulant treatment was not associated with prolonged recurrence-free survival. Only hormonal treatment was associated with thrombotic recurrence.
There is no assessment of the specific anticoagulant used, as patients in these retrospective studies received either heparins or VKAs without any real distinction between them in the analyses.
Thus, data remain controversial regarding the use of permanent anticoagulation in portal vein thrombosis. There are no convincing data in patients harbouring low-risk factors for thrombosis in this setting.
THESIS WORK

I-Background and aims
This work focuses on the usefulness of extensive knowledge and workup of causes in VLD patients to improve patient care and to stratify management accordingly. We wanted to explore new genetic causes and their impact on VLD prognosis, the impact of treating causes on prognosis, and the risk of recurrence when these causes were absent or considered low-risk factors for thrombosis.
The first work focuses on the description of liver disease in patients with TRG mutations. TRG mutations have been identified as a possible risk factor for cirrhosis, but further data suggest the occurrence of non-cirrhotic portal hypertension in patients harbouring these mutations. Therefore, the aim of this study was to characterize patients with liver disease and to assess the impact of these mutations as a new causal factor for liver disease.
The second work focuses on the usefulness of treating patients with paroxysmal nocturnal hemoglobinuria and splanchnic vein thrombosis with anti-complement monoclonal antibodies.
The third work focuses on the usefulness of anticoagulant treatment in patients with no or low-risk factors for thrombosis, and on identifying biomarkers of recurrence.
II-Results
1-Article 1
Introduction
Telomeres are repeated hexanucleotide/protein complexes located at the ends of linear chromosomes (1,2). Telomere length is incrementally shortened with each mitosis (3). When telomeres become critically short, a DNA-damage program is activated by the telomerase complex (1-3). Rare germline loss-of-function variations in genes that encode components of the telomere repair complex, called telomere-related genes (TRG), accelerate telomere shortening and premature cellular senescence. Defective telomere repair has been causally associated with several human diseases of variable severity, named telomere biology disorders (TBDs) or telomeropathies (4)(5)(6). Telomeropathies diversely combine dysfunction of various organs, including bone marrow failure, immune deficiency, liver disease, pulmonary fibrosis and a narrow cancer spectrum (acute myeloid leukemia, myelodysplastic syndrome, and squamous cell carcinoma) (7)(8)(9)(10)(11)(12)(13)(14). Telomeropathies have a highly variable penetrance (7,8).
Liver disease has been rarely described in telomeropathies (15,16). Based on a limited number of patients with biopsy, histopathological findings are varied (6,15,16). The outcome of patients with TRG mutations and liver disease is unknown (6,15). Therefore, the aims of this study were: (a) to determine whether TRG mutations are associated with an increased risk for liver disease; (b) to thoroughly characterize TRG mutations associated liver disease; and (c) to identify risk factors for a poor outcome in patients with TRG mutations.
Methods
Patients
This was a retrospective, multicenter observational study of patients with TRG mutations, identified i) through a genetic registry for TRG mutations (1990-2019); and ii) through clinical referral networks and rare disease registries (CRMVF, the national network for vascular liver disease; OrphaLung, the national referral center for rare lung diseases; and the referral center for aplastic anemia). A previously described community-based cohort of 1190 unselected subjects screened for liver disease was used for community controls (17).
TRG mutations cohort
Patients were included if they had a confirmed pathogenic TRG mutation and, for probands, at least one organ disease (blood, liver or lung) associated with TRG mutations, regardless of telomere length (18,19). Mutations were interpreted as pathogenic variations according to the American College of Medical Genetics and Genomics guidelines and the European Society of Human Genetics recommendations (18,19). The date of genetic diagnosis was defined as the date the blood sample was sent to the genetics laboratory. The date of first organ disease was defined as the date of diagnosis of one organ disease (blood, liver, or lung) associated with TRG mutations. The date of liver disease was defined as the date of the first visit identifying liver transaminase and/or imaging abnormalities. The date of lung disease was defined as the date of the first available abnormal chest CT scan or, when unavailable, as the date of lung biopsy. The date of blood disease was defined as the date of bone marrow biopsy or aspiration or, when unavailable, the date of the first visit identifying blood count abnormalities. The duration of liver disease follow-up extended from the date of liver disease to the last follow-up or death. Follow-up durations for lung and blood diseases were defined similarly.
Exclusion criteria were any of the following: the absence of a pathogenic genetic variant despite a clinical phenotype suggestive of a TRG mutation; Coats-like disease (CTC1 mutation); missing data; and absence of patient consent. The TRG mutations cohort was divided into 2 groups (Figure 1): a first group of patients with liver disease, characterized by AST and/or ALT >30 IU/L (20-24) and/or abnormal liver imaging (dysmorphic and/or hyperechogenic liver and/or features of portal hypertension); and a second group of patients with TRG mutations but without liver disease, i.e., with AST and ALT ≤30 IU/L and no liver imaging abnormality.
Control group
Controls were selected from a cohort of unselected subjects, aged 45 years or more, scheduled for a free medical check-up in a social medical centre (Hôpital Avicenne, AP-HP, Bobigny, France) between September 2005 and February 2008, who underwent liver stiffness measurement as a screening procedure for liver disease, as previously described by Roulot et al. (17). Subjects were stratified by age and matched 3:1 using a propensity score according to sex, overweight status, excessive alcohol consumption, arterial hypertension, and diabetes. In this cohort, liver disease was defined using the same criteria as in the TRG mutations cohort.
Pathogenic variations sequencing and telomere measurement
Sequencing was performed in the genetics laboratory of Bichat Hospital in Paris (the national referral laboratory for genetic diagnosis of TBD), except for 7 patients for whom molecular exploration was performed in another laboratory. Methods for sequencing are presented in the supplementary material. Rare variants were retained and classified according to the American College of Medical Genetics and Genomics guidelines (18,19). Variants identified in previous reports of telomeropathy were deemed disease-causing in this study. Disease-causing variants are subsequently referred to as confirmed TRG mutations.
Assessment of telomere length was performed by the flow-FISH method in the Hematology Laboratory of Robert Debré Hospital in Paris, as previously described (25). Telomere length was defined based on leukocyte telomere lengths measured relative to age-matched, apparently healthy individuals.
Patients' characteristics
Patients' characteristics, including risk factors for chronic liver disease [history of arterial hypertension, diabetes, hyperlipidemia, body mass index (BMI; overweight was defined as BMI >25 kg/m²), alcohol consumption (excessive alcohol consumption was defined as >100 g per week), and hepatitis B and C status], were collected.
For the group of patients with TRG mutations and liver disease, data were collected at the time of liver disease diagnosis, and follow-up outcomes were reported from the liver disease diagnosis. For the group of patients with TRG mutations but no liver disease, data were collected at the last follow-up visit that did not show liver disease, and follow-up outcomes were reported from the diagnosis of the first organ disease. Family history, defined as a history of TBD in any first- or second-degree relative, was also collected for all patients with TRG mutations.
For the control cohort, data on history of arterial hypertension, diabetes, hyperlipidemia, body mass index (BMI), alcohol consumption, hepatitis B or C virus infection, laboratory data, and liver stiffness were prospectively collected as part of the initial study (17).
Organ Assessment
Liver, lung, and blood assessments are described in the supplementary material. Data from dermatological examination, bone densitometry, immune disease, ophthalmological examination, and occurrence of neoplasia were also collected, when available.
Liver biopsies
Liver biopsies were reviewed in the pathology department of Beaujon Hospital, except for one patient for whom the review was performed by the local pathologist, according to the predefined grid of pathological features described in the supplementary material.
Statistical Analysis
Data are presented as median (interquartile range) or number (proportion), as appropriate. Comparisons of quantitative and qualitative variables were made using the Mann-Whitney test and the Chi-square or Fisher exact test, respectively.
The comparative control group was derived from a prospective cohort of 1322 subjects from a community-based population, aged 45 years or more, scheduled for a free medical check-up in a social medical center (Bobigny, France) between September 2005 and February 2008, who had liver stiffness measurement as a screening procedure for liver disease (17). We stratified patients with TRG mutations and controls according to their age (under 50, 50 to 60, 60 to 70 and over 70 years old). To control for confounding variables, we used propensity score matching in each age group. We estimated the propensity score for every patient using a logistic regression model. Covariates in the model included sex, excessive alcohol consumption (consumption >10 g daily), overweight status (BMI >25 kg/m²), arterial hypertension and diabetes. The model discriminated well between the TRG and control cohorts in each age group (area under the receiver operating characteristic curve: 0.83, 0.77, 0.93, 0.75, respectively), allowing for a 3-to-1 matching of patients.
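For illustration only, the sketch below outlines one way such age-stratified propensity score matching can be implemented. It is a minimal Python sketch with hypothetical variable names, not the study's actual code (analyses in the study were performed with SPSS 25.0); it estimates a propensity score by logistic regression and greedily matches each TRG-mutation carrier to the three nearest controls without replacement.

import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_3_to_1(df, covariates, group_col="trg_mutation"):
    """Estimate a propensity score and match each TRG patient to 3 controls."""
    # Propensity score: probability of belonging to the TRG-mutation group given covariates
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[group_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    cases = df[df[group_col] == 1]
    controls = df[df[group_col] == 0].copy()
    matched_idx = []
    for _, case in cases.iterrows():
        # Greedy nearest-neighbour matching without replacement, 3 controls per case
        nearest = (controls["ps"] - case["ps"]).abs().nsmallest(3)
        matched_idx.extend(nearest.index.tolist())
        controls = controls.drop(nearest.index)
    return pd.concat([cases, df.loc[matched_idx]])

# Matching is done separately within each age stratum, as described above
# (covariates and column names are assumptions for this sketch):
# covariates = ["male_sex", "overweight", "excess_alcohol", "hypertension", "diabetes"]
# matched = pd.concat(match_3_to_1(g, covariates) for _, g in df.groupby("age_group"))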
To identify variables associated with liver disease in the whole study population (TRG mutations cohort and controls), a binary logistic regression was performed. Hazard ratios (HRs) for binary logistic regression were provided with their 95% confidence intervals (CI).
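As an illustration of this step, the following minimal sketch (hypothetical column names; not the study's SPSS syntax) shows how the effect estimates and their 95% confidence intervals can be obtained from a binary logistic model by exponentiating the fitted coefficients.

import numpy as np
import statsmodels.api as sm

def liver_disease_associations(df, covariates, outcome="liver_disease"):
    """Binary logistic regression of liver disease on baseline covariates;
    exponentiated coefficients give the effect estimates with 95% CIs."""
    X = sm.add_constant(df[covariates])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    estimates = np.exp(fit.params)      # exponentiated coefficients
    ci = np.exp(fit.conf_int())         # 95% confidence intervals
    return estimates, ci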
To avoid survivorship bias, survival analysis was performed only in patients who had a diagnosis of both liver disease and TRG mutation after 2004. Univariate and multivariate Cox regression analyses were performed to determine prespecified baseline factors associated with overall survival and liver transplantation-free (LT-free) survival in patients with TRG mutations and liver disease. Multivariate analysis was performed on variables with p<0.01 in univariate analysis and for which more than 95% of the data were available.
Hazard ratios (HRs) for Cox regression were provided with their 95% confidence intervals (CI). Kaplan-Meier analysis was applied to overall and LT-free survival in patients with TRG mutations and liver disease. LT-free survival in patients with TRG mutations and liver disease according to prognostic factors was compared using a stratified log-rank test.
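For readers less familiar with these models, the following minimal Python sketch (using the lifelines package, with hypothetical column names; the study itself used SPSS) illustrates the type of Cox regression, Kaplan-Meier estimation and log-rank comparison described above.

from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# df is assumed to hold one row per patient, with follow-up time in months,
# an event indicator (LT or death) and baseline covariates.
cph = CoxPHFitter()
cph.fit(df[["time_months", "lt_or_death", "fib4_over_3_25", "liver_cofactor"]],
        duration_col="time_months", event_col="lt_or_death")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])

# Kaplan-Meier curve for one stratum and log-rank comparison between strata
km = KaplanMeierFitter()
high = df["fib4_over_3_25"] == 1
km.fit(df.loc[high, "time_months"], df.loc[high, "lt_or_death"], label="FIB-4 >= 3.25")
result = logrank_test(df.loc[high, "time_months"], df.loc[~high, "time_months"],
                      df.loc[high, "lt_or_death"], df.loc[~high, "lt_or_death"])
print(result.p_value)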
All tests were two-sided, and p≤0.05 was considered significant. Data handling and analysis were performed with SPSS 25.0 (SPSS Inc., Chicago, IL).
Results
Between October 2018 and August 2019, we screened 189 patients from 25 centers: 27 were excluded; 132 were included, comprising 95 (72%) with liver disease and 37 (28%) without liver disease (Figure 1).
TRG mutations cohort
TERT mutations were the most frequently identified mutations, observed in 78 cases (59%). TERT, TERC, RTEL1, DKC1, TINF2, PARN, and NHP2 mutations were similarly represented in patients with and without liver disease. Gene variants are reported in Supplementary data 1. The pedigrees of 3 families with TRG mutations are reported in Supplementary data 2 and showed more severe phenotypes and/or an earlier onset across generations. Among the 68 patients in whom telomere length was assessed, a shortened telomere length (<1st percentile) was identified in 89% of patients with liver disease, a proportion similar to that in patients without liver disease (Table 1 and Supplementary data 1).
In the TRG mutations cohort, ninety-four patients (71%) were male, and median age was 57 years. Ninety-five patients (72%) had liver disease, 92 (70%) had blood disease and 104 (79%) had lung disease (Table 1).
In total, 95 patients (72%) had liver disease: 19 (20%) with isolated serum transaminase elevation, 12 (13%) with at least one anomaly on imaging and normal serum transaminase values, while 64 (67%) had both.
Features of liver disease at diagnosis are described in Tables 1 and 2. All patients underwent at least one imaging test at diagnosis: Doppler ultrasound (n=88), abdominal CT scan (n=67) and/or hepatic MRI (n=48) (Table 2). Three patients had ascites and one had hepatic encephalopathy at diagnosis (Table 2).
Patients with liver disease presented telomeropathy-related lung, blood and cutaneous disease in 82%, 77% and 55% of cases, respectively (Table 1). Rheumatologic and ophthalmic diseases were reported in 39% and 30% of cases, respectively (Supplementary data 3). Only 3 patients had isolated liver disease. Mean platelet and leukocyte counts were lower and mean corpuscular volume was higher in patients with liver disease (p<0.005) (Table 1). The median age of patients with liver disease was similar to that of patients without liver disease.
Hepatic venous pressure gradient (HVPG), measured in 25 patients, was 6 (4-11) mmHg, and liver stiffness, measured in 54 patients, was 8.2 (5.6-14.9) kPa (Table 2). Liver stiffness was significantly higher in patients with large esophageal varices (grade II or III) (p=0.027) (Supplementary data 4).
Liver biopsy (n=52) identified pathological lesions specific for porto-sinusoidal vascular disease (PSVD) in 22 patients (42%), including 6 with advanced fibrosis; 7 patients had non-fibrotic NASH and 8 (15%) had advanced fibrosis in the absence of vascular lesions (Table 2, Supplementary data 5A and 5B). Among 14 patients with minimal lesions, mainly sinusoidal distension and minimal regenerative anomalies, 3 had severe portal hypertension, suggestive of additional PSVD in this context. TERT mutation, low platelet count, high ALP, and myelodysplasia were significantly associated with PSVD compared with cirrhosis (Supplementary data 6). Among the twelve patients with isolated transaminase elevation, or with transaminases ≤30 IU/L and hepatic dysmorphia, biopsy identified 3 with PSVD, 1 with cirrhosis and 3 with NASH.
Outcome in all patients with TRG mutations
Follow-up outcomes in patients with and without liver disease in the TRG mutations cohort (N=132) are presented in Supplementary data 7. After a median follow-up of 21 (12-54) months, ascites occurred in 14% of patients with liver disease, variceal bleeding in 13%, hepatopulmonary syndrome (HPS) in 13%, portal vein thrombosis in 5%, hepatic encephalopathy in 4%, and hepatocellular carcinoma (HCC) in 2%. The features of mutated patients who died of hepatic causes and/or underwent liver transplantation (N=15) are described in detail in Supplementary data 8.
Thirteen patients underwent lung transplantation; 12, liver transplantation (LT); and 5, hematopoietic stem cell transplantation. Among the 12 patients who underwent liver transplantation, 4 had hematopoietic stem cell transplantation (2 before and 2 after liver disease diagnosis) and 4 had lung transplantation after liver disease diagnosis (2 combined and 2 sequential). Eventually, 9 of the 12 patients with HPS received a liver transplantation, of whom only 1 had a double lung-liver transplantation. In one patient, HCC was diagnosed on the liver explant.
Outcome analysis in patients with TRG mutations and liver disease
In total, 89/95 patients (94%) were included in the survival analysis. Among them, 27 died, 15 before any transplantation and 5 after liver transplantation. Causes of death were hepatic, pulmonary, or hematological in 7, 18, and 2 patients, respectively (Supplementary data 7). Five-year overall survival and LT-free survival rates were 79% and 69%, respectively (Figure 2A).
In univariate analysis, DKC1 mutation, cofactor for liver disease, ascites, MELD score, FIB-4 score, liver stiffness, HVPG, and MCV were significantly associated with LT or death (Table 3). Telomere length <1st percentile was not significantly associated with LT or death (Table 3). In multivariate analysis, a FIB-4 score ≥3.25 and a cofactor for liver disease (alcohol consumption >100 g/week, HBsAg positivity, BMI >25 kg/m², ferritin >1000 µg/L) remained significantly associated with LT or death (Table 3). The LT-free survival rate was significantly better when the FIB-4 score was below 3.25 and in the absence of a cofactor for liver disease (p<0.001) (Figure 2B). Prognostic variables associated with overall survival in univariate and multivariate analysis are shown in Supplementary data 9.
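For readers who wish to reproduce the survival modelling, a minimal sketch in R (the software used elsewhere in this work for statistical analyses) is given below; the data frame `d` and its column names are hypothetical placeholders, not the actual study variables.

```r
# Minimal sketch of the survival analysis, assuming a data frame `d` with one row per patient.
# Hypothetical columns: time_months (follow-up), lt_or_death (1 = liver transplantation or death),
# fib4_ge_325 (FIB-4 >= 3.25), cofactor (cofactor for liver disease).
library(survival)

# Kaplan-Meier estimate of LT-free survival (Figure 2A-style curve)
km <- survfit(Surv(time_months, lt_or_death) ~ 1, data = d)
summary(km, times = 60)          # 5-year LT-free survival

# Univariate Cox model for one candidate predictor
cox_uni <- coxph(Surv(time_months, lt_or_death) ~ fib4_ge_325, data = d)
summary(cox_uni)                 # hazard ratio and 95% CI

# Multivariate model retaining FIB-4 >= 3.25 and a cofactor for liver disease
cox_multi <- coxph(Surv(time_months, lt_or_death) ~ fib4_ge_325 + cofactor, data = d)
summary(cox_multi)

# Log-rank comparison of LT-free survival by FIB-4 category (Figure 2B-style comparison)
survdiff(Surv(time_months, lt_or_death) ~ fib4_ge_325, data = d)
```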
Comparison of TRG mutations cohort and controls
Characteristics of patients in the TRG mutations cohort (n=132) versus patients in the control cohort, stratified by age and matched 3:1 with propensity score matching on sex, overweight status, excessive alcohol consumption, arterial hypertension, and diabetes (n=396), are reported in Supplementary data 10. Liver disease, defined as AST or ALT >30 IU/L and/or liver imaging abnormality, was significantly more frequent in patients with TRG mutations than in controls: 95/132 (72%) vs 80/396 (20%) (p<0.001).
Patients with TRG mutations had significantly higher liver stiffness than controls (7.4 (5.3-14.1) kPa versus 5.3 (4.3-6.4) kPa, p<0.001) and more frequent blood test abnormalities: lower platelet counts (127 (61-218) ×10⁹/L vs 245 (213-288) ×10⁹/L) and higher mean corpuscular volume (MCV) (92-103 fL vs 89 (86-92) fL; p<0.001).
In multivariate analysis, male sex, presence of a TRG mutation, excessive alcohol consumption, and BMI >25 kg/m² were independently associated with liver disease (p<0.001, p<0.001, p=0.007, and p<0.001, respectively) (Table 4).
Discussion
We found a high prevalence of liver disease (72%) in patients with TRG mutations. In previous reports, the prevalence of liver impairment varied from 5 to 40%, depending on patient selection, across 9 studies that included 4-40 patients each, for a total of 86 patients (15,16) (Supplementary data 11). In these series, patients could either have an identified TRG mutation or a phenotype of telomeropathy in the absence of an identified mutation (6,15,16,26).
In our study, a pathogenic TRG mutation was a mandatory inclusion criterion and was consequently present in 100% of the patients. Differences can also be explained by the heterogeneity of criteria used to define liver disease (Supplementary data 11). We chose to use the criteria currently applied to patients with metabolic disease or chronic viral hepatitis (20-23) and identified clinically relevant pathologic lesions in 60% of liver biopsies.
This study fills a knowledge gap by demonstrating that TRG mutations have a major independent impact on the occurrence of liver disease. Compared to unselected community-based controls, liver disease was far more prevalent (HR of 12.89 (7.87-21.35)) in TRG mutation carriers. Furthermore, when present, liver disease was more severe and more frequently associated with cytopenias and macrocytosis in patients with TRG mutations than in controls.
Data on the prevalence of TRG mutations in chronic liver disease are scarce. One multicentric European study identified TRG mutations in 2.6% of a group of patients with cirrhosis of all causes, compared to 0.5% in a control group without cirrhosis (RR 1.859; 95% CI 1.552-2.227). An overview of these patients' clinical characteristics suggested shorter telomeres, younger age, and more aggressive liver disease. Similarly, the identification of new TERT mutations in 3 patients with NASH, cirrhosis, and HCC led to the hypothesis that TRG mutations may play a role in accelerating fibrosis and, eventually, HCC occurrence (27-30) (Supplementary data 11). Thus, TRG mutation has been identified as a possible risk factor for cirrhosis, but further data also describe non-cirrhotic portal hypertension in patients harboring these mutations. Our data show that liver disease in patients with TRG mutations occurred at a younger age, with a past personal or familial history of lung, hematological, or cutaneous disease in the majority. Early and severe osteoporosis and ophthalmologic involvement occurred in 10% and 20% of cases, respectively. Apart from liver test and imaging abnormalities, lower mean platelet and leukocyte counts and higher mean corpuscular volume were significant distinctive features among TRG mutation patients with liver disease.
Centralized analysis of liver biopsies in more than half of the patients helped characterize the liver disease (16) and raised pathophysiological issues in three situations. First, among patients with features of portal hypertension, porto-sinusoidal vascular disease (31) was diagnosed in 22 (42%) cases (all had signs of portal hypertension on imaging or endoscopy) and was more frequently encountered than advanced fibrosis or cirrhosis (15%). A low hepatic venous pressure gradient (HVPG) suggested PSVD in these patients with portal hypertension (31).
Interestingly, patients with PSVD frequently had associated severe hematologic disease.
Second, the prevalence of NASH in our study was close to estimates for the general French population, contrasting with the low prevalence of diabetes (only 8%) in TRG-mutated patients with liver disease. Third, we identified "minor" non-specific lesions in 27% of patients, whose possible progression to specific lesions of PSVD or cirrhosis deserves clarification in the future.
Although TRG mutation is independently associated with liver disease, concomitant environmental cofactors for cirrhosis (BMI >25 kg/m², excessive alcohol consumption, etc.) were independently predictive of death or liver transplantation in this population. The diversity of associated liver lesions and risk factors for cirrhosis suggests a multifactorial pathophysiology.
Similarly, gene-environment interactions and shortened telomeres have been shown to severely impact prognosis in idiopathic pulmonary fibrosis (33): in an American cohort of 134 patients from 21 families with TERT mutations, almost all patients with interstitial lung disease (96%) were smokers or reported a pneumotoxic exposure (12).
In our series, telomere length was below the 1st percentile in 90% of tested patients, with no direct impact on prognosis. Conversely, liver stiffness ≥20 kPa, FIB-4 score ≥3.25, and HVPG ≥10 mmHg at baseline correlated with poor survival. Furthermore, liver stiffness (FibroScan®) correlated with the hepatic venous pressure gradient and the presence of grade II or III esophageal varices.
In cirrhosis, the association of liver stiffness with the degree of portal hypertension, as well as with patients' outcomes, is clearly established (34). Conversely, liver stiffness <10 kPa strongly suggests PSVD in patients with signs of portal hypertension, whereas liver stiffness between 10 and 20 kPa suggests the presence of an extrahepatic condition (35). An increasing amount of data supports the use of FIB-4 as a screening tool in general diabetic populations (36,37) and in predicting clinical prognosis in NASH and viral hepatitis (38-43). Thus, our findings suggesting that liver stiffness, FIB-4, and HVPG measurement are predictors of major outcomes in the TRG mutation context are consistent with the literature on other liver diseases but warrant confirmation.
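For clarity, the FIB-4 score used as a threshold throughout this work is computed from routine laboratory values with the standard published formula; the R helper below is a minimal sketch, and the example values are illustrative only, not patient data.

```r
# FIB-4 = (age [years] x AST [IU/L]) / (platelets [10^9/L] x sqrt(ALT [IU/L]))
fib4 <- function(age, ast, platelets, alt) {
  (age * ast) / (platelets * sqrt(alt))
}

# Illustrative example (not patient data): a 57-year-old with AST 60 IU/L,
# platelets 120 x 10^9/L and ALT 45 IU/L
fib4(age = 57, ast = 60, platelets = 120, alt = 45)   # ~4.25, i.e. above the 3.25 threshold
```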
In conclusion, TRG mutations dramatically increase the risk of liver disease, independently of other risk factors. PSVD and dysmetabolic cirrhosis are the two most frequently observed liver-related phenotypes in TRG mutation carriers, suggesting a multifactorial mechanism. These results support routine screening for TRG mutations in patients with unexplained liver disease (early onset age, personal or familial extra-hepatic disease…). When a TRG mutation is identified, patients with other causes of liver disease and/or a FIB-4 score ≥3.25 are at particularly high risk of severe liver failure, portal hypertension complications, or hepatocellular carcinoma (25,26).
* Including 75 previously reported patients
Liver disease was defined by raised serum transaminases (AST and/or ALT) >30 IU/L and/or abnormal liver imaging (dysmorphic and/or hyperechogenic liver and/or features of portal hypertension).
Table 2. Features of liver disease in patients with TRG mutations at the time of liver disease diagnosis (N=95).
A. Patients with liver disease (N=95). In medical charts, the information that the patient was a carrier of a mutation was indicated without the nomenclature.
Supplementary data 9. Univariate and multivariate analysis of factors associated with overall survival in patients with liver disease and TRG mutations (N=95).
We want to thank Kamal Zekrini and Dahia Secour for their collaboration.
Background and aims:
Primary Budd-Chiari syndrome (BCS) is a rare disorder defined as a blocked hepatic venous outflow tract by thrombosis at various levels from small hepatic veins to the terminal portion of the inferior vena cava [1]. Nonmalignant non-cirrhotic extrahepatic portal vein thrombosis (PVT) is characterized by a thrombus developed in the main portal vein and/or its right or left branches and/or splenic or mesenteric veins, or by the permanent obliteration that results from a prior thrombus [2] . The pathogenesis of these vascular liver diseases (VLD) is largely dependent on the presence of systemic prothrombotic conditions that promote thrombus formation in the respective splanchnic veins [3,4].
Four to nineteen percent of patients with BCS have paroxysmal nocturnal hemoglobinuria (PNH), and small studies or case reports have also described PNH in patients with portal and mesenteric vein thrombosis [1,5,6]. PNH is a rare acquired disorder of hematopoietic stem cells, related to a somatic mutation in the phosphatidylinositol glycan class A (PIG-A) X-linked gene, responsible for a deficiency in glycosylphosphatidylinositol-anchored proteins (GPI-AP).
The lack of GPI-AP complement regulatory proteins leads to hemolysis. PNH often presents with hemolytic anemia, bone marrow failure, and episodes of venous thrombosis. Venous thrombosis occurs in 15 to 30% of patients with PNH and mainly affects cerebral, deep limb, or splanchnic veins. Thrombosis is the major factor affecting outcome [7]. Risk factors for thrombosis include older age, thrombosis at diagnosis, and the need for transfusions. Recently reported approaches have used new therapeutic agents, such as eculizumab, a C5-inhibiting recombinant humanized monoclonal antibody [8][9][10]. In PNH patients, uncontrolled terminal complement activation and the resulting complement-mediated intravascular haemolysis are
Patients and Methods
Methods and results are reported according to the STROBE statement [12].
Design and setting
We performed a retrospective study including patients with PNH and a diagnosis of BCS or PVT made between 1997 and 2019 in centers of the Vascular Liver Disease interest Group ("VALDIG"). Diagnosis of PNH preceded or followed diagnosis of vascular liver disease. The study was conducted in accordance with the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the institutional review board (IRB n 2003/21, Paris; France).
Demographic and patient's data were collected at the diagnosis of vascular liver disease and during follow-up. Complications related to thrombosis, bleeding, portal hypertension or blood disease were recorded.
Definitions
BCS diagnosis was based on ultrasonography and/or multidetector computed tomography and/or magnetic resonance imaging and/or venography. Diagnostic criteria for PVT included recent portal and/or splenic and/or mesenteric venous thrombosis, or portal cavernoma. PVT patients with cirrhosis or abdominal malignancies were not included. In the VALDIG group, patients with BCS have been treated according to a stepwise therapeutic strategy since 1997 [1,5].
The first step consists of anticoagulation therapy, specific therapy for underlying conditions, and medical or endoscopic management of liver-related complications. Recanalization of stenoses using angioplasty or stenting is routinely considered. In patients who do not respond to this first-step therapy, transjugular intrahepatic portosystemic shunt (TIPS) is proposed and, as a final step, orthotopic liver transplantation (OLT). Patients with acute portal or mesenteric vein thrombosis have been treated with anticoagulants since the 1990s [3,6]. Patients with chronic portal vein thrombosis are treated with anticoagulation when a severe thrombophilic factor is identified, in case of a past history of mesenteric infarction or a personal or family history of deep vein thrombosis, or on a case-by-case basis otherwise [1,13,14]. Anticoagulant therapy is based on low-molecular-weight heparin during the acute phase or in patients at high risk of bleeding, with a switch to warfarin derivatives once the condition is stabilized [1,4,5,14].
PNH diagnosis was based on the detection of GPI-deficient WBCs by flow cytometry assay.
To avoid biases related to the evolution of flow cytometry techniques, a threshold of 5% was used to define a PNH clone. As previously proposed, PNH was categorized as follows [7]: the classic PNH subcategory includes patients with clinical evidence of intravascular hemolysis but no evidence of other defined bone marrow failure criteria (neutrophils and platelets were, per criteria, higher than 1,500/µL and 120,000/µL, respectively). The aplastic anemia-PNH syndrome (AA-PNH syndrome) subcategory includes patients diagnosed with PNH (with a PNH clone at diagnosis) and with at least 2 or 3 peripheral blood cytopenias (hemoglobin <100 g/L, platelets <80 × 10⁹/L, neutrophils <1,000/µL). Patients who did not fulfill the criteria of the last two groups were assigned to a third group called intermediate PNH: this category includes patients with myelodysplastic syndrome (MDS) or myeloproliferative neoplasm (MPN).
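The subcategory definitions above can be summarised as a simple decision rule. The R sketch below encodes them for a single patient record; the function name, input units, and example values are hypothetical, and the thresholds follow those stated in the text.

```r
# Classify PNH according to the definitions above.
# Hypothetical inputs: hemolysis (TRUE/FALSE), hb in g/L, platelets in 10^9/L, neutrophils per microliter.
classify_pnh <- function(hemolysis, hb, platelets, neutrophils) {
  cytopenias <- sum(hb < 100, platelets < 80, neutrophils < 1000)
  if (cytopenias >= 2) {
    "AA-PNH syndrome"
  } else if (hemolysis && neutrophils > 1500 && platelets > 120) {
    "Classic PNH"
  } else {
    "Intermediate PNH"            # includes MDS / MPN-associated cases
  }
}

classify_pnh(hemolysis = TRUE, hb = 95, platelets = 60, neutrophils = 900)   # "AA-PNH syndrome"
```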
Temporality of PNH and VLD was categorized as PNH before VLD, PNH after VLD, and contemporary diagnoses when PNH was diagnosed within a month of VLD diagnosis (before or after).
Eculizumab therapy was first available in 2007. In a sensitivity analysis, we considered only patients with a VLD diagnosis made after 2007 (after first availability of eculizumab).
The eculizumab (Soliris) dosing regimen consisted of a 4-week initial phase followed by a maintenance phase. Initial phase: 600 mg administered via a 25-45 minute intravenous infusion every week for the first 4 weeks.
Maintenance phase: 900 mg administered via a 25-45 minute intravenous infusion in the fifth week, followed by 900 mg administered via a 25-45 minute intravenous infusion every 14 ± 2 days [9].
Endpoints
The primary endpoint was mortality. Secondary outcomes were as follows: new venous thrombosis (including deep vein thrombosis, new splanchnic vein thrombosis, cerebral thrombosis, and TIPS thrombosis) occurring after VLD diagnosis; arterial ischemic events (arterial stroke, myocardial infarction, obliterating arteriopathy); bleeding events, including upper gastrointestinal bleeding (UGIB) or other severe bleeding; bacterial infection; and liver complications (including spontaneous bacterial peritonitis (SBP), UGIB, hepato-renal syndrome, hepatic encephalopathy, or ascites (resolved or recurrent)). Ascites resolution was assessed among patients with ascites at VLD diagnosis, and ascites recurrence among patients with ascites resolution.
Follow-up
Patients were followed according to local practice from the date of VLD diagnosis until death, last visit of follow-up, or 10 years of follow-up (predefined as the most clinically significant period) after VLD diagnosis, whichever occurred first. For five patients with bone marrow transplantation (BMT), follow-up was censored at the date of BMT (the patient being therefore considered as cured).
Investigations for thrombotic risk factors.
Patients were tested according to previously reported methods for the following thrombotic risk factors [4,5]: factor V R506Q mutation (factor V Leiden); G20210A factor II gene mutation; JAK2 V617F mutation; deficiencies in protein C, protein S, or antithrombin (regarded as primary deficiencies only in conjunction with a prothrombin index ≥80%); PNH; and anti-phospholipid antibodies. Oral contraceptive use was considered a thrombotic risk factor when taken within the 3 months preceding the diagnosis of VLD [15].
Statistical analysis
Data were described using median (interquartile range) or number (proportion) as appropriate.
To account for and control immortal time bias [16,17], we defined exposed and unexposed person-time using the start and end dates of eculizumab therapy. Incidence rates for individual outcomes were calculated by dividing the number of incident events by the number of person-years (PY) over the corresponding period (either exposed or unexposed).
Incidence rate ratios (IRR) were estimated by Poisson regression with their 95% confidence intervals (95% CI). In a sensitivity analysis, patients diagnosed and managed after 2007 (i.e., after eculizumab therapy became available) were analysed separately.
Analyses were performed using the R software (The R Foundation for Statistical Computing Plateform), version 3.6.
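As a concrete illustration of the person-time approach described above, the following R sketch estimates an incidence rate ratio between eculizumab-exposed and unexposed periods with a Poisson model; the data frame `periods` and its column names are hypothetical placeholders, not the study dataset.

```r
# One row per patient and exposure period; hypothetical columns:
# events (number of outcome events in the period), person_years (length of the period),
# exposed (1 during eculizumab exposure, 0 otherwise).
fit <- glm(events ~ exposed + offset(log(person_years)),
           family = poisson(link = "log"), data = periods)

# Incidence rate ratio (exposed vs unexposed) with 95% confidence interval
exp(coef(fit)["exposed"])
exp(confint(fit)["exposed", ])

# Crude incidence rates per 100 person-years by period
with(periods, tapply(events, exposed, sum) / tapply(person_years, exposed, sum) * 100)
```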
Results
Patient characteristics
Patients' characteristics are shown in table 1. Sixty-two patients (33 women, 53%) were included, 50 with BCS and 12 with PVT, with a median age of 35 (28-48) years. A JAK2 V617F myeloproliferative neoplasm was detected in 3 (7%) patients with BCS.
Sixty patients (97%) were treated with anticoagulants. The two patients who were not treated with anticoagulants had severe thrombocytopenia (<40,000 platelets) and/or active bleeding at diagnosis.
PNH characteristics
Median clone size among neutrophils was 80% (70-90), and hemoglobin 10 (8-11)
Patients with BCS
In patients with BCS, hepatic venous outflow obstruction was due to occlusion of one, two, and three hepatic veins in 7, 7, and 33 patients, respectively, with or without obstruction of the suprahepatic segment of the inferior vena cava (5 patients). Fifteen (31%) of the 50 patients with BCS also had PVT; four had an obstruction of the mesenteric vein (one had a mesenteric infarction with small bowel resection) and eight of the splenic vein. The duration between symptoms and BCS diagnosis was longer than 6 months in 17 (37%) patients. One, two, and three inconclusive imaging examinations of the liver were performed in 20, 7, and 1 patients, respectively, before the diagnosis of BCS was eventually confirmed. Ten patients underwent a liver biopsy. In 1 patient, imaging was inconclusive whereas liver biopsy found a "small vessel BCS".
Thirty-one (65%) patients had ascites and 16 (38%) had oesophageal varices (4 with variceal bleeding). Median Child-Pugh score, Clichy score, and Rotterdam score at diagnosis were 10 (8-11), 6 (5-7), and 1.2 (0.1-1.31), respectively.
Patients with PVT
Mesenteric vein obstruction was present in 6 (60%), splenic infarction in 1, and mesenteric infarction with bowel resection, in 1 patient. At diagnosis, 2 patients had oesophageal varices and none had bled.
Therapeutic management
All BCS patients received a stepwise therapeutic strategy, including anticoagulation (n=49, 98%), hepatic venous angioplasty/stenting (n=3; 6%), transjugular intrahepatic portosystemic shunt (n=16, 32%). No patient had liver transplantation.
Patients diagnosed and managed with VLD before and after 2007 were comparable in terms of initial severity, and management (table 3).
Outcomes
Hepatic or portal vein recanalization was obtained in 12 patients, all on anticoagulation therapy.
Overall, 17 (27%) patients died (8 from severe liver disease complications, 4 from infection and multiple organ failure, 3 from cancer or leukemia, and 2 from cerebral bleeding); fifteen of them had persistent ascites and portal hypertension at the time of death. Mortality was significantly lower with eculizumab therapy (table 4): 2.6 vs 8.7 per 100 PY, IRR 0.30, 95% CI 0.10-0.92 (p=0.035).
During follow-up, there were 25 venous thromboses, including 8 TIPS thromboses and cerebral thromboses. New venous thrombosis, in the splanchnic or extrasplanchnic veins, occurred less frequently during exposure to eculizumab: 2.62 vs 14.2 per 100 PY, IRR 0.22 (0.07-0.64).
Other secondary endpoints (bleeding, arterial ischemic lesions, ascites resolution and recurrence, infection, and liver complications) were less common during the exposed than the non-exposed period, although without reaching statistical significance (table 4). All patients but one continued long-term eculizumab therapy until bone marrow stem cell transplantation or death. In 8 patients, the dose of eculizumab was temporarily increased to 1,200 mg per infusion because of inefficacy on hematological manifestations and on complement blockade: among them, 6 patients had ascites and 4 had a TIPS. At the end of follow-up, one of these patients was deemed to require BMT because of severe bone marrow failure. This patient had persisting moderate ascites despite a functional TIPS (portocaval gradient below 8 mmHg).
One patient stopped eculizumab during follow-up, and had recurrent thrombosis despite anticoagulation. Eight (13%) patients underwent haematopoietic stem cell transplantation for aplastic anemia, including 4 who were previously treated with eculizumab.
Twenty-six patients (42%) still needed regular blood transfusions at the end of follow-up vs. 50 (81%) at diagnosis. Eighteen patients had a severe infection during follow-up, which was concomitant with a venous thrombosis episode in 3.
When restricting the analyses of primary and secondary outcomes to the period after 2007, the era of eculizumab therapy, we observed a significant reduction in venous thrombosis and arterial events, and a trend in favour of eculizumab efficacy on survival (table 5).
Discussion
The most important result from this multicentric cohort is the significant impact of eculizumab on mortality in patients with severe vascular liver disease complications of PNH, namely BCS or PVT, with an incidence rate of death of 8.74 per 100 PY in patients not treated with eculizumab compared to 2.62 per 100 PY in patients treated with eculizumab. To avoid immortal time bias, which may arise in cohort studies of drug effects, we used incidence rates by exposure period to assess the effect of eculizumab on endpoints [16,17]. This bias generally leads to an estimation in favor of the treatment under study by conferring a spurious advantage to the treated group. Assessing the effect of eculizumab by means of incidence rate ratios limits the bias induced by a "time-fixed" analysis [16,17]. The drawback of this methodology is that the study spans a long period of time (1997-2019), with eculizumab available only after 2007. Nevertheless, this period has the advantage that VLD management, other than eculizumab administration, was stable over time within the VALDIG group. Indeed, the management of BCS and PVT was homogeneous, in terms of anticoagulation therapy for all BCS or PVT patients and of the stepwise "BCS therapeutic strategy" in BCS, within the VALDIG group [4,5]. Most of these centers routinely record VLD patients in a registry for BCS or PVT, the so-called VALDIG registry, and manage the patients according to a protocol implemented for the ENVie studies [3,4]. It is also important to note that BCS and PVT patients were comparable before and after 2007 in terms of extension of thrombosis, severity of PNH, severity of VLD (number of BCS and/or PVT), and, in BCS patients, prognostic indexes (table 3). Finally, when restricting the analysis to the period from 2007 onwards, eculizumab similarly had a significant impact on thrombosis and arterial complications and showed a trend toward significance for survival. It is still unclear how eculizumab decreases mortality, as the immediate causes of death varied from severe liver disease complications to septic causes or cerebral bleeding. We did not find a significant impact of eculizumab therapy on liver or portal hypertension complications; conversely, we found that mortality and recurrent thrombotic events were reduced with eculizumab therapy. Indeed, the second main result of our study is that recurrent thrombosis was less common in PNH patients with VLD treated with eculizumab, with an incidence rate of recurrent thrombosis of 14.12 per 100 PY in patients not treated with eculizumab, significantly higher than the 2.62 per 100 PY observed in patients treated with eculizumab (table 4). Although it could have been hypothesized that patients with other causes for VLD would be at higher risk of thrombosis, this risk was not increased in the 11 patients with another cause.
One hypothesis to explain the reduction in mortality in VLD-PNH patients treated with eculizumab could be the persistence of prothrombotic mechanisms in non-eculizumab-treated patients despite anticoagulation. Preliminary results presented in 2012 seem to support this hypothesis: survival was 100% in 19 BCS patients after a median follow-up of 7 years on eculizumab therapy; recurrent thrombosis did not occur in 6/19 patients after late eculizumab therapy, while 3 had recurrent thrombosis prior to eculizumab administration [18]. Development of thrombosis despite anticoagulation therapy was a severe complication before the eculizumab era [7]. Initial studies showed a significant impact of eculizumab on the first incident episode of thrombosis in PNH patients: the thromboembolic event rate was reduced from 7.3 events per 100 patient-years before eculizumab to 1.07 events per 100 patient-years during eculizumab treatment (P<0.001) [9,11]. In these studies, the rate of a past history of thrombotic events at inclusion was 19 to 43% [8,9,11]. In our study, all patients (100%) had recent vascular liver disease, and 97% were treated with antithrombotic therapy.
To conclude, we show that recurrent thrombotic risk is much lower in patients with eculizumab therapy, although it still exists in very few patients with other possible local or general co-factors for thrombosis.
Combining PVT and BCS in this study was noteworthy, as it has previously been shown that PNH mainly causes thrombosis in small vessels, which are sometimes difficult to characterize [19], and involves multiple splanchnic veins sharing similar severe complications, such as mesenteric vein infarction [20]. One European study compared PNH to other causes of BCS and identified a significant increase in associated PVT [20]. This is clearly confirmed in the present study: venous thrombosis in other territories occurred in 40% of the patients, mesenteric or splenic infarction in 12%, and BCS was associated with PVT in more than 30% of BCS patients.
This study demonstrates that treating the underlying cause favorably impacts survival in VLD patients. Indeed, a treatable cause of thrombosis is common in VLD patients, including, for example, myeloproliferative neoplasms, paroxysmal nocturnal haemoglobinuria, or Behcet's disease, collectively found in 50-80% of patients with BCS and 40-60% of patients with PVT [1].
The management and treatment of causes could have a major effect on the course of liver disease, yet studies on this point are scarce [21]. This important result supports a recommendation for early treatment of underlying conditions in PNH/VLD. This study also confirms that other causes, such as myeloproliferative neoplasms or antiphospholipid syndrome, can be associated in VLD patients, and that diagnosing one cause should not preclude a complete screening for all causes, especially now that we know that treating PNH is effective on the course of VLD. It is also interesting to note that eight patients were deemed to require increased doses of eculizumab to reduce hemolysis and obtain complement inhibition, as previously shown [22]. Still, a few patients died despite eculizumab therapy or underwent BMT because of a persistently severe hematological prognosis. The impact of persisting ascites on the ability to efficiently block complement with eculizumab needs to be investigated.
In conclusion, eculizumab is significantly associated with lower mortality and fewer thrombotic recurrences in the overall VLD/PNH population. Nevertheless, close follow-up is warranted.
Introduction
Non-malignant extrahepatic portal vein obstruction in adults is predominantly related to thrombosis and is referred to as portal vein thrombosis (PVT). In the absence of cirrhosis, long-standing obstruction due to PVT causes portal hypertension despite the development of hepatopetal veins supplying the liver, called the portal cavernoma (1-4). PVT is associated with one or several underlying risk factors for thrombosis in 60-70% of cases (2,5-7). The spectrum of the latter ranges from conditions associated with a major prothrombotic risk factor (e.g., myeloproliferative neoplasms, antiphospholipid syndrome, or homozygous factor V Leiden) to conditions associated with a mild to moderate risk (e.g., heterozygous G20210A factor II or G1691A factor V mutations, low-titer antiphospholipid antibodies, C677T polymorphism in the MTHFR gene) (2,4,6). It should be noted that the underlying risk factors cannot be identified in up to 25% of patients using current techniques; thus, their risk of recurrent thrombosis is unknown.
The two major complications of long term PVT are gastrointestinal bleeding related to portal hypertension, and recurrent thrombosis. Intestinal infarction is the most severe outcome of recurrent thrombosis. Indeed, this event is associated with a mortality rate of 20 to 60%, while survivors may have severe disability due to extended resection or postischemic intestinal stenosis (1)(2)(3)(4).
Retrospective studies in patients with a previous history of PVT without cirrhosis have suggested that permanent anticoagulation therapy significantly decreases the incidence of recurrent thrombosis without increasing the occurrence or severity of gastrointestinal bleeding (8-11). Furthermore, a retrospective study in patients with portomesenteric vein thrombosis provided some evidence that anticoagulation therapy independently improved survival (10). In another multicenter registry study of incidentally diagnosed splanchnic vein thrombosis, anticoagulant therapy reduced the incidence of thrombotic events without increasing the risk of major bleeding (p<0.05) (12). However, this study was limited by a high proportion (28%) of patients with cirrhosis. The scope and strength of existing recommendations from scientific societies are limited by the absence of randomized controlled therapeutic trials (2-4,13,14). Thus, clarifying the benefit-risk ratio of anticoagulation therapy in patients with a previous history of PVT and without high-risk factors for recurrent thrombosis is an important issue. Moreover, the optimal anticoagulation management in PVT patients has not yet been determined. For the moment, the safety and efficacy of direct oral anticoagulants (DOACs) in patients with portal hypertension can only be evaluated from retrospective analyses of small groups of patients. Patients with hepatocellular carcinoma or childhood cavernoma were not eligible. Patient eligibility was reviewed by a central eligibility committee including 3 hepatologists and 3 hematologists.
Estroprogestative contraceptives were stopped before inclusion. The complete study protocol, including the list of eligibility criteria, is provided in the Appendices.
Outcomes
The primary endpoint was the occurrence of a venous thromboembolic event at any site and/or death within 2 years of randomization. In accordance with the Prospective Randomized Open, Blinded Endpoint (PROBE) methodology, and to limit the risk of misclassification due to the open-label design, an independent committee including a hepatologist and a radiologist blinded to the treatment group assessed all abdominal CT scans and clinical data (18).
Secondary endpoints included death, pulmonary embolism, deep vein thrombosis, major bleeding according to ISTH (International Society on Thrombosis and Haemostasis) guidelines definition (16), portal hypertension related bleeding, non-bleeding complications of portal hypertension (i.e. ascites, portal cholangiopathy), minor bleeding, liver toxicity (as defined by an otherwise unexplained increase in serum aminotransferase levels) and any other adverse events.
Randomization and treatment
A randomization sequence was used for treatment assignments, with stratification according to anticoagulant therapy at inclusion and center. Patients were randomly allocated in a 1:1 ratio to receive either oral rivaroxaban 15 mg once daily for the duration of follow-up, or no anticoagulant medication. A dose of 15 mg daily was chosen for the current study because PVT patients are at risk of portal-hypertension related bleeding.
The 15 mg dose was thus based on studies in atrial fibrillation that used a dose of 15 mg of rivaroxaban daily in patients at high risk of bleeding due to altered renal function (17).
Follow-up visits
Patients were seen at randomization (baseline) and, at 1, 3, and 6 months, then every 6 months thereafter for 24 months to 48 months. Demographic data were collected at baseline. Clinical, imaging and laboratory data were obtained at baseline and at each follow-up visit. At each follow-up visit, the investigator collected information on prespecified complications.
Sample size and statistical analyses
We calculated that a sample size of 296 patients was needed to detect a difference between 3.5% recurrent thrombosis in the non-treated group and 0.8% in the rivaroxaban group, with a power of 80%, a two-sided type I error rate of 0.05, and 6% of patients lost to follow-up. We assumed that 24 months would be needed to recruit patients, with a minimum follow-up of 24 months. However, after a temporary suspension of inclusions due to 5 thrombotic events in the non-treated group within 21 weeks of the start of the study (first interim analysis), the data monitoring and safety committee recommended that the study continue and that a second interim analysis be performed. The study protocol was therefore amended with an additional interim analysis planned after 10 thrombotic events.
Analyses of the primary and secondary outcomes included all patients who underwent randomization (intent-to-treat population). Analysis of the primary outcome was based on a group-sequential design with two interim analyses with O'Brien-Fleming stopping boundaries (i.e., 0.0005 and 0.014, respectively) and stratified two-sided log-rank tests comparing patients randomized to the rivaroxaban group with those in the no-anticoagulation group.
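For illustration, the O'Brien-Fleming boundaries quoted above (nominal two-sided levels of about 0.0005 and 0.014 at the two interim looks, leaving roughly 0.045 for the final analysis) can be reproduced in R with the gsDesign package, as sketched below; this is a didactic reconstruction under the stated assumptions, not the trial's original computation.

```r
# Group-sequential design with 3 equally spaced analyses (2 interim + 1 final),
# two-sided overall alpha = 0.05, classic O'Brien-Fleming boundaries.
library(gsDesign)

design <- gsDesign(k = 3, test.type = 2, alpha = 0.025, beta = 0.2, sfu = "OF")

# Nominal two-sided p-value boundaries at each look (approx. 0.0005, 0.014, 0.045)
2 * pnorm(-design$upper$bound)
```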
Inclusions were discontinued on January 19, 2018 after the second interim analysis (after 28 months of patient enrolment). All patients were thereafter offered anticoagulant therapy and followed up for at least 24 months after randomization and up to 48 months. Rivaroxaban 15 mg once daily was suggested, but the attending physicians and patients were allowed to make their own decision. Thus, there were two distinct periods in the study: Period 1 (randomized period), from randomization to January 19, 2018, when inclusions ended and patients in the no-anticoagulation group were offered a switch to anticoagulants; and Period 2 (follow-up period), from January 19, 2018 to the end of the study on January 30, 2020 (Supplementary figure 1).
Major and minor bleeding events, as well as non-bleeding outcomes, were analyzed in randomized patients according to the treatment effectively received throughout the entire study (both Period 1 and Period 2). We estimated person-time on treatment with rivaroxaban, on other anticoagulant therapies, and with no treatment using the start and stop dates of each type of treatment. The incidence rates of bleeding events and non-bleeding outcomes were calculated by dividing the number of incident events by the number of person-years during the corresponding period. Incidence rate ratios (IRR) were estimated by Poisson regression with 95% confidence intervals (95% CI).
Univariate Cox proportional hazards models were used to assess whether thrombophilia, other known risk factors for thrombosis, imaging data including the extension of thrombosis (intrahepatic, extrahepatic, and recanalized PVT), laboratory data (PT, coagulation factors and factor VIII, fibrinogen, transaminases) collected at randomization, and D-dimer and factor VIII values at one month after randomization were associated with recurrent thrombosis among patients randomized to the no-anticoagulation group. In this analysis, patients were followed from randomization until January 19, 2018 (second interim analysis and discontinuation of inclusions).
A P-value of 0.05 or less was considered to be statistically significant. Data handling and analysis were performed with SAS statistical software, version 9.4 (SAS Institute, Cary, NC, USA).
Results
Trial Population
Overall, 193 patients with PVT were screened by participating hospitals. The numbers of included and randomized patients are described in figure 1A. We analyzed 55 patients in the rivaroxaban group and 56 in the no-anticoagulation group (figure 1A). Median follow-up for the whole study was 30.3 months [29.8-35.9]. Three patients were lost to follow-up.
The median time from PVT diagnosis to inclusion was 1.7 [0.9-5.3] years.
Ninety-two patients were included in Hôpital Beaujon, and 19 patients in the seven other French centers (Supplementary Table 1).
Median follow-up for Period 1 (i.e., the randomized period) was 11.8 months [8.8-13.2]. Baseline characteristics were similar in the 2 study groups (Table 1). As shown in Table 1, low-risk thrombophilia was identified in 48 patients (50%). When combining low-risk thrombophilia and other minor identified risk factors for thrombosis (male gender, BMI >30, age over 60 years, non-O blood group, systemic disease, PSVD, induced 1st-degree family history ≥2, unprovoked 2nd-degree family history ≥1, factor VIII >150), at least one persisting low-risk factor was identified in 94% of patients. One patient with protein S deficiency had a previously identified detrimental PROS1 heterozygous mutation. When PVT was diagnosed, a local cause was found in 17 (15%) patients, including 2 liver abscesses, 6 infectious colitis, 3 acute pancreatitis, and 6 other splanchnic or systemic infections, all of which had been cured at inclusion. At inclusion, a portal cavernoma was present in 70% of patients. Obstruction of the portal vein trunk or concomitant obstruction of both the right and left portal branches was present in 73% of patients. The location of the intra- or extrahepatic obstruction according to the study group is shown in Table 1. A detailed description of the splanchnic vein obstruction at diagnosis is provided in Supplementary Figure 2. Porto-systemic collaterals were present in 53% of patients, esophageal varices in 29%, and gastric varices in 9%.
At randomization, 82 patients (74%) were on anticoagulation therapy. Nineteen of the 29 remaining patients not treated with anticoagulants at inclusion had had prior anticoagulation therapy and 10 had not.
Sixteen of the 29 patients without anticoagulants at inclusion were randomized to the rivaroxaban group and 13 to the no-anticoagulation group.
Primary Outcome
During Period 1 and up to the second interim analysis, venous thrombosis recurred in 10 patients, involving the splanchnic territory in 4 patients, the lower limb veins in 3, and pulmonary embolism in 3 (Supplementary Table 2). There were no deaths. The incidence rate of thrombotic events was 19.71 per 100 PY (95% CI [7.49-31.92]) in the control group, while no thrombotic events occurred in the rivaroxaban group. Event-free survival differed significantly between treatment groups (p=0.0008, Figure 2).
Risk factors for recurrent thrombosis
During Period 1, univariate analysis was performed in the no-anticoagulation group to identify risk factors for recurrent thrombosis, including thrombophilia factors, imaging data including the extension of thrombosis (intrahepatic, extrahepatic, and recanalized PVT), laboratory data, and D-dimer and factor VIII values at one month. None of the thrombophilia factors was associated with recurrent thrombosis. One patient with recurrent thrombosis (pulmonary embolism) had a previously identified detrimental PROS1 heterozygous mutation (Supplementary Table 2). D-dimer ≥500 ng/mL at 1 month was significantly associated with recurrent thrombosis (HR=7.78 [1.49-40.67]) (Figure 3), with a positive predictive value (PPV) of 37.5% and a negative predictive value (NPV) of 93.8%. Isolated and resolved causes of PVT (local causes or estrogen-containing oral contraceptives) were associated with an absence of recurrent thrombosis (Table 2).
Follow-up period (Period 2)
Period 2 started on January 19, 2018, and lasted until the last patient's visit on January 30, 2020. During Period 2, all randomized patients were followed for a mean of 20.6 months (±3.7 months). Anticoagulation therapy was proposed to all patients. Forty-six of the 55 patients from the rivaroxaban arm continued rivaroxaban, 4 switched to another anticoagulant, and 5 received no anticoagulants. Thirty-two of the 56 patients from the no-anticoagulation arm received rivaroxaban, 17 another anticoagulant, and 7 no anticoagulants (figure 1B).
During Period 2, 3 patients not exposed to recent anticoagulation developed 3 new thrombotic events (Supplementary Table 2). Additionally, patient #2, who had already had a recurrent deep vein thrombosis during Period 1 and was treated with warfarin, had a recurrent portal vein thrombosis associated with a liver abscess.
A detailed description of patients with recurrent venous obstruction is provided in Supplementary Table 2.
Secondary Outcomes
Major bleeding occurred in 3 patients throughout the study period, 2 in patients treated with rivaroxaban, and 1 in a patient not treated by an anticoagulant (Table 3). Rivaroxaban was not definitively interrupted in any of them. Complications were comparable throughout the periods of exposure (either DOACs, other anticoagulants or untreated period) for major bleeding, portal hypertension related bleeding and other complications.
Fifty-seven minor bleeding episodes occurred in 36 patients: 41 episodes in 24 patients in the rivaroxaban group vs 16 in 12 patients in the no-anticoagulation group. Bleeding episodes mainly involved epistaxis (23%), uterine bleeding (11%), and rectal bleeding (16%). Minor bleeding occurred more frequently during exposure to DOACs than during the period without treatment (IRR 4.86 [1.75 -13.50]). In one patient, recurrent minor bleeding episodes (rectal bleeding) led to permanent discontinuation of anticoagulation therapy.
Twenty-two other adverse events were reported in 17 patients, including 4 episodes of ascites and 3 episodes of portal cholangiopathy-related symptoms (Supplementary table 3). Four episodes of acute pancreatitis occurred in three patients treated with rivaroxaban, 2 of whom had taken codeine, a possible drug cofactor. None of these adverse events was directly related to rivaroxaban. There was no evidence of liver toxicity from rivaroxaban (supplementary figure 2).
Discussion
This first randomized controlled trial fills the knowledge gap on the risk of recurrent thrombosis in patients with a previous history of non-cirrhotic portal vein thrombosis.
Despite careful exclusion of high-risk factors for recurrent venous thrombosis, the incidence of thrombotic episodes was close to 20 per 100 PY in the 56 patients randomized to no anticoagulation therapy. The severity of recurrent thrombosis should be highlighted, as the splanchnic veins were involved in 4 cases and pulmonary embolism occurred in 3. It is important to note that the results of Period 2 confirm the high risk of recurrence found in Period 1, the randomized phase of the study. Indeed, during Period 2, 3/12 patients who stopped anticoagulation developed recurrent episodes of thrombosis. Thus, this study shows that a past history of portal vein thrombosis per se should be considered a major risk factor for recurrent thrombosis, even in the absence of recognizable prothrombotic conditions. It should be noted that although 50% of our patients had identified low-risk thrombophilia, the presence and nature of the underlying conditions had no relationship to recurrent thrombosis. This is the first randomized controlled trial in this setting; hence, previously published data cannot be fully compared with the present findings because of the retrospective design of those studies and the heterogeneous patient characteristics and management (8-11). Nevertheless, the incidence of recurrent thrombosis in the no-anticoagulation group in the present study is clearly higher than that reported in retrospective studies, which merged patients with and without anticoagulation therapy (8-11).
On the other hand, the 55 patients allocated to the anticoagulant group who received rivaroxaban 15 mg daily were fully protected from a new episode of thrombosis. The protective effect of rivaroxaban appeared so early in the study that early interruption was requested by the independent data monitoring and safety committee. These prospective findings clearly validate conclusions from retrospective data, showing that anticoagulation plays an independent role in preventing recurrent thrombosis in patients with non-cirrhotic PVT (8-12,18).
The benefit of preventing recurrent thrombosis with rivaroxaban must be balanced against the risk of bleeding during therapy. The modification of patients' management requested by the independent data safety committee (all patients were offered anticoagulation therapy during Period 2) limits a full comparison with the control group. Nevertheless, Period 2 allowed us to follow 99 of the 111 enrolled patients on anticoagulation therapy.
Most of the patients on anticoagulation therapy (78/99) received rivaroxaban 15 mg per day. As expected, the absolute incidence of minor bleeding episodes was high (16-24 per 100 PY) and significantly higher than in the absence of anticoagulation therapy. On the other hand, major bleeding (incidence 1.02 per 100 PY) and portal hypertension-related bleeding (incidence 1.5-2.7 per 100 PY) were rare. In the absence of portal hypertension, the overall risk of major bleeding during anticoagulation therapy with warfarin and DOACs, assessed in pivotal meta-analyses and real-life studies in atrial fibrillation, was found to be close to 5 to 6 per 100 PY (19-21). DOACs have been shown to be associated with an increased frequency of gastrointestinal bleeding compared to warfarin, although the severity of these episodes does not seem to be increased (22,23).
Furthermore, although based on limited data, the risk of bleeding in patients with cirrhosis and portal hypertension seems to be lower with DOACs than with vitamin K antagonists (24-27). Up to now, evidence for the safety and efficacy of DOACs in PVT in the absence of cirrhosis has been limited (15).
Our results do not show any increased risk of major bleeding or portal hypertension-related bleeding and strongly support the use of rivaroxaban 15 mg daily in this indication.
It is important to note that there were no signs of liver toxicity from rivaroxaban in the present study.
The findings of this study may pave the way for personalized treatment by clarifying the risk factors for recurrent thrombosis in the absence of anticoagulation therapy. Indeed, as in deep vein thrombosis (28-30), a D-dimer value <500 ng/mL at 1 month after discontinuation of anticoagulants is strongly associated with a low risk of recurrence, given a negative predictive value of 93.5%. None of the common causes of thrombosis was significantly related to the risk of recurrence. Obesity and advanced age were not identified as additional risk factors for recurrence. Furthermore, none of the patients who had mainly local infectious or inflammatory causes of PVT at diagnosis that had resolved by inclusion, or who took estroprogestative oral contraceptives, had recurrent thrombosis. Thus, in patients with isolated and resolved causes of PVT, discontinuation of anticoagulants could be considered based on D-dimer values at 1 month. These conclusions should be validated in a large independent group of patients.
Conclusion
In non-cirrhotic patients with a past history of PVT and without major prothrombotic risk factors, long-term oral rivaroxaban at 15 mg once daily reduced the incidence of recurrent venous thrombosis and did not increase the occurrence of major bleeding. A D-dimer value <500 ng/mL one month after anticoagulation is discontinued predicted a low risk of recurrence. As this is the first randomized clinical trial in the setting of PVT and thrombosis recurrence to date, the predictive value of D-dimer for the risk of recurrence should be validated in further studies.
CONCLUSION
I-Conclusion and perspectives
Our first study has shown the dramatic impact of TRG mutations on the risk of liver disease, independently of other risk factors, especially for PSVD. These results support routine screening for TRG mutations in patients with unexplained liver disease (early onset age, personal or familial extra-hepatic disease…). When a TRG mutation is identified, patients with other causes of liver disease and/or a FIB-4 score ≥3.25 are at particularly high risk of severe liver failure, portal hypertension complications, or hepatocellular carcinoma. Our second study assesses for the first time the impact of eculizumab for PNH on survival and complications in patients with vascular liver diseases. The impact on clinical practice is major in the fields of hematology, hepatology, and gastroenterology: for the first time, it is clear that in this specific setting of splanchnic vein thrombosis, early administration of eculizumab is mandatory, in addition to standard-of-care therapy for BCS or PVT, to avoid death from severe liver failure in BCS and from mesenteric infarction due to recurrent thrombosis in PVT. Early initiation of eculizumab and close monitoring of clinical and biological markers, concomitant with VLD treatment, prevent severe VLD and may avoid the use of invasive procedures. Our third study shows that in non-cirrhotic patients with a past history of PVT and without major prothrombotic risk factors, long-term rivaroxaban at 15 mg once daily reduced the incidence of recurrent venous thrombosis without increasing the occurrence of major bleeding. We identified D-dimer as a specific biomarker to predict the risk of recurrence. In conclusion, this thesis work provides new insights into the role of causes of vascular liver diseases and their impact on prognosis. It also discloses new strategies for therapeutic management, emphasizing the need to treat the cause of the VLD concomitantly with the VLD itself. It also challenges existing dogma, as patients with no identified cause are probably at higher risk of thrombosis and require permanent anticoagulation.
Additional work would be useful to clarify the role of telomerase mutations in the mechanisms of PSVD, in particular the effect of these mutations on telomere length, in leucocytes and in liver tissue, before and after liver transplantation.
NGS analysis, in addition to D-dimer measurement, in patients with currently no identified cause of PVT may also help to tailor personalized therapeutic strategies in these patients.
Figure 2: Hematologic stem cells are affected quantitatively and functionally in TRG mutations.
Figure 4: Hypothesis regarding TRG mutation impact in NASH, cirrhosis and HCC, in addition to acquired risk factors [38].
Article type: Original Research
Title: Mutations of telomere-related genes, a risk factor for liver disease: a multicentric study.
Keywords: Liver disease, telomere biology disorder, mutations of telomere-related genes, porto-sinusoidal vascular disease, nonalcoholic steatohepatitis, fibrosis, FIB-4 score.
Affiliations: (14) Centre Hospitalier Universitaire de Montpellier, Hépatologie, Montpellier, France; (15) Centre Hospitalier Régional Universitaire de Lille, Hépatologie, Lille, France; (16) Hôpital Foch, Hépatologie, Suresnes, France; (17) Hôpital Cochin, Hépatologie, Paris, France; (18) Hôpital Nord, Pneumologie, Marseille, France; (19) Centre Hospitalier Régional Universitaire de Lille, Pédiatrie, Lille, France; (20) Hôpital Foch, Médecine Interne, Suresnes, France; (21) Hôpital Universitaire Dupuytren, Hépatologie, Limoges, France; (22) Hôpital Paul-Brousse, AP-HP, Hépatologie, Villejuif, France; (23) Hôpitaux Universitaires de Genève (HUG), Pathologie, Genève, Suisse; (24) Centre Hospitalier Universitaire Vaudois, Pneumologie, Lausanne, Suisse; (25) Hôpitaux Universitaires de Genève (HUG), Hépatologie, Genève, Suisse; (26) Centre Hospitalier Universitaire - Haut-Lévêque, Hépatologie, Pessac, France; (27) Centre Hospitalier Universitaire, Hépatologie, Clermont-Ferrand, France; (28) Centre Hospitalier Universitaire, Hématologie, Clermont-Ferrand, France; (29) Centre Hospitalier Universitaire, Hépatologie, Rennes, France; (30) Centre Hospitalier Universitaire, Pneumologie, Rennes, France; (31) Centre Hospitalier Régional Universitaire de Lille, Pneumologie, Lille, France; (32) Hôpital Saint Louis AP-HP, Pneumologie, France; (33) Centre Hospitalier Universitaire de Caen Normandie, Hépatologie, Caen, France; (34) Centre Hospitalier Universitaire Amiens-Picardie Site Sud, Hépatologie, Amiens, France; (35) Hôpital Bichat-Claude Bernard AP-HP, Génétique, Paris, France; (36) Hôpital Avicenne AP-HP, Hépatologie, Bobigny, France; (37) Hôpital Beaujon AP-HP, Radiologie, Clichy, France; (38) Hôpital Purpan, Hépatologie, Toulouse, France; (39) Hôpital Avicenne AP-HP, Pneumologie, Bobigny, France; (40) Hôpital Beaujon AP-HP, Anatomopathologie, Clichy, France.
* Both authors contributed equally to this work.
Centres de Références des Maladies Rares (CRMR) (Referral centre for rare diseases): Centre de Référence des Maladies Pulmonaires Rares (Orphalung) (Rare Referral centre for Lung Disease); Centre de Référence des Maladies Vasculaires du Foie (CRMVF) (National network for Vascular Liver Disease); Centre de Référence Aplasies (Rare Referral centre for Aplastic Anemia)
This first cohort consisted of patients managed for TRG mutations in twenty-five centers. Screening for liver disease was performed between October 2018 and August 2019 in all known TRG mutation patients diagnosed between 1990 and 2019. Molecular exploration results were reviewed to confirm mutation pathogenicity. Patients or relatives gave signed consent for germline genetic analysis. These patients had been included in registers and consented to the use of their data for research purposes. The study was conducted in accordance with the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the institutional review board (IRB n° 2003/21, Paris, France).
Iron deposition on MRI - N/investigated (%): 14/48 (29)
* Cofactor for liver disease: excessive alcohol consumption >100 g/week, positive HBs Ag, metabolic syndrome (defined by the presence of at least 3 of the following criteria: hypertension, diabetes, overweight BMI >25 kg/m², HDLc <0.4 g/L in men and <0.5 g/L in women), BMI >25 kg/m², iron overload (ferritin >1000 µg/L). BMI: body mass index (kg/m²).
Figure 2A. Overall survival and liver transplantation-free survival in patients with liver disease and TRG mutations (N=89) (Kaplan-Meier); 89/95 patients who had a diagnosis of liver disease and genetic mutation between 2004 and 2019 were included in the survival analysis.
Figure 2B. Liver transplantation-free survival according to prognostic factors in patients with liver disease who received diagnoses of liver disease and genetic mutation between 2004 and 2019 (N=85).
TINF2 c.849delC, p.(Ser284Leufs*67) <1st TERC r.119delC <1st TINF2 c.873G>C,p.(Arg291Ser) <1st TERT c.2224C>T,p.(Arg742Cys) <1st TERC r.448 A>U <1st TERC r.58C>U <1st TERT c.2446C>G, p.(His816Asp) ; TERC r.434G>U <1st DKC1 § NA TERC r.110_113delGACT NA TERT c.2537A>G,p.(Tyr846Cys) <1st TERT c.2011C>T,p.(Arg671Trp) <1st RTEL1 c.2248C>T,p.(Arg750Cys) NA RTEL1 c.2869C>T, p.(Arg957Trp) NA TERT c.2843+1G>A NA TERC r.130G>A NA TERC r.37A>G NA TERT c.2935 C>T,p.(Arg979Trp) <1st RTEL1 c.2266-1G>C <1st RTEL1 c.3493dupC, p.(Gln1165Profs*22) <1st TERT c.2224C>T,p.(Arg742Cys) <1st TERT c.2383-2A>G NA TERT c.2159T>C, p.(Ile720Thr) NA TERT c.446T>A, p.(Leu149Gln) NA TERC r.313C>U <1st TERT r.448A>T <1st TERT c.3286C>T, p.(Leu1096Phe) <1st # TERC r.36C>U <1st TERT c.1657G>A, p.(Val553Ile) 25th TERT c.2225G>A, p.(Arg742His) <1st TERT c.2080G>A, p.(Val694Met) NA TERT c.2989G>A, p.(Val997Met) <1st TERC r.448A>U NA TERT c.2935C>T, p.(Arg979Trp) NA TERT c.2935C>T, p.(Arg979Trp) NA TERT c.2224C>T, p.(Arg742Cys) <1st TERC r.67G>A NA TERT c.2147C>T, p.(Ala716Val) <1st DKC1 c.851G>A, p.(Arg284Gln) <1st TERC r.35C>U <1st TERC r.235C>G 25th TERT c.2516C>T, p.(Thr839Met) <1st RTEL1 c.2946C>G, p.(His982Gln) 10th TERT c.2968C>T, p.(Gln990*) <1st & ? RTEL1 c.2892T>G, p.(Phe964Leu) ; c.3802T>C, p.(Cys1268Arg) <1st NHP2 c.122G>A, p.(Arg41His) <1st DKC1 c.1318G>A, p.(Glu440Lys) <1st TERT c.2968C>T p.(Gln990*) NA TERC r.441C>U NA TERT c.2287-2A>G <1st TERT c.2665C>T, p.(Arg889*) NA TERT r.182delG <1st TERT c.2227C>T, p.(Arg743Trp) 50-75th PARN § NA TERT c.1072C>T, p.(Arg358Trp) 10th TERT § NA TERC r.323C>G <1st TERC r.270G>C NA NHP2 c.289_290delAT, p.(Met97Valfs*2) <1st RTEL1 c.2767A>C, p.(Lys 923Gln) NA & ? RTEL1 c.394C>T, c.394C>T, p.(Arg132Trp) ; c.1537 dup, p.(Val513Gly fs*18) NA TERT c.2377G>A, p.(Glu793Lys) <1st TERC r.179U>C <1st TERT c.2147C>T, p.(Ala716Val) <1st TERT c.1630T>C, p.(Phe544Leu) NA TERC r.236C>T NA RTEL1 c.1451C>T, p.(Pro484Leu) NA RTEL1 c.146C>T, p.(Thr49Met) NA TERT c.1511C>T, p.(Ser504Leu) NA TERT c.3216G>A, p.(Trp1072*) <1st TERT c.2911C>T, p.(Arg971Cys) <1st TERT c.2093G>A, p.(Arg698Gln) NA TERT c.2911C>T, p.(Arg971Cys) <1st TERT c.2836T>C, p.(Tyr946His) NA TERT c.1864C>T, p.(Arg622Cys) NA TERT c.2287-2A>C NA TERC r.448A>U <1st TERT c.2945G>A, p.(Cys982Tyr) NA TERC r.65G>C NA TERT c.2225G>T, p.(Arg742Leu) <1st TERT c.1990 G>A, p.(Val664Met) <1st TERT c.2552A>T, p.(Asn851Ile) <1st TERT c.2678A>T, p.(Glu893Val) NA TERT c.2960T>C, p.(Leu987Pro) NA TERT c.1710G>C, p.(Lys570Asn) NA TERT c.2225G>A, p.(Arg742His) <1st TERC r.180C>U <1st TERT c.2911C>T, p.(Arg971Cys) NA TERT c.2911C>T, p.(Arg971Cys) NA TERT c.191C>T, p.(Pro64Leu) NA TERT c.2320C>T, p.(Arg774*) NA TERT c.3040G>C, p.(Ala1014Pro) NA TERT c.3040G>C, p.(Ala1014Pro) NA TERT c.265T>C, p.(Cys89Arg) NA B. Patients without liver disease (N=37). Patients Mutations (most mutations were identified at heterozygous state, excepted those indicated by a #) Telomere lengths* TERT c.2542G>A, p.(Asp848Asn) <1st TERT c.2542G>A, p.(Asp848Asn) <1st TERC r.67G>A NA RTEL1 c.2879 A>G, p.(His960Arg) 15th TERT c.1891C>T, p.(Arg631Trp) NA RTEL1 c.146C>T, p.(Thr49Met) NA RTEL1 c.146C>T, p.(Thr49Met) NA RTEL1 c.146C>T, p.(Thr49Met) NA RTEL1 c.146C>T, p.(Thr49Met) NA TERT c.22_43dup22, p.(Arg15ProfsX) NA TERT c.2011C>T, p.Arg671Trp NA TERT c.1990 G>A, p.(Val664Met) <1st TERT § NA PARN c.948_949del AT, p.(Val318Phe fs*9) NA PARN c.948_949del AT, p.(Val318Phe fs
for the French national network for vascular liver diseases
1 Hôpital Beaujon, AP-HP Nord - Université de Paris, DHU Unit, Service d'Hépatologie, Centre de Référence des Maladies Vasculaires du Foie, Inserm U1149, Centre de Recherche sur l'Inflammation (CRI), Paris, Université de Paris, ERN Rare liver, Clichy, France
BACKGROUND: Paroxysmal nocturnal hemoglobinuria (PNH) is identified in 2-10% of vascular liver disease (VLD) patients. Eculizumab is indicated in PNH patients with hemolysis and clinical symptoms indicative of high disease activity.
FINDINGS: Eculizumab reduces mortality and thrombotic recurrence in patients with VLD and PNH, in addition to standard-of-care therapy for VLD (anticoagulation and recanalization procedures according to response to medical therapy).
IMPLICATIONS FOR PATIENT CARE: Early initiation of eculizumab and close monitoring of clinical and biological markers, concomitant with VLD treatment, prevents severe VLD and may avoid the use of invasive procedures.
List of abbreviations: PNH, paroxysmal nocturnal hemoglobinuria; BCS, Budd-Chiari syndrome; PVT, portal vein thrombosis; VLD, vascular liver disease; JAK2, Janus kinase 2 gene; MPN, myeloproliferative neoplasm; VALDIG, European vascular liver disease network; SBP, spontaneous bacterial peritonitis; UGIB, upper gastrointestinal bleeding; TIPS, transjugular intrahepatic portosystemic shunt; OLT, orthotopic liver transplantation; BMT, bone marrow transplantation
Keywords: Paroxysmal nocturnal hemoglobinuria, aplastic anemia, Budd-Chiari syndrome, portal vein thrombosis, splenomegaly, arterial ischemic event, cohort study, mortality, eculizumab
years at the time of VLD diagnosis. Median follow-up after VLD diagnosis was 4.7 years (1.2-9.5). In 13 patients, PNH was diagnosed after VLD, with a median interval of 0.2 (0-1.7) years. In 4 patients, PNH was diagnosed at the time of VLD diagnosis, and in 45 patients PNH was diagnosed before VLD diagnosis with a median interval of 3.6 (1.3-7.1) years (table 2). Venous and arterial thrombosis elsewhere at diagnosis had occurred in 34 including 8 (13%) cerebral thrombosis.
Out of the 62 patients, 42 (68%) received eculizumab, including 29 within the first 4 months after VLD diagnosis; 34/42 (80%) had BCS and 8/42 (20%) had PVT. Eculizumab administration was delayed in 8 patients, with a median delay of 4.2 (3.4-5.9) years. In 4 of these 8 patients the diagnosis of PNH was reached after the diagnosis of VLD. Median eculizumab exposure time was 40.05 [9.33-72.55] months. All received eculizumab until death, bone marrow stem cell transplantation, or patient interruption (n=1). Eculizumab therapy according to BCS or PVT diagnosis is described in Figure 1.
Figure Legends
Figures and tables: 3 tables and 3 figures / 3 supplementary tables and 3
14 per 100 PY (95% CI [0.04-4.24]) during period 2: Patient #11 had stopped anticoagulation therapy for personal reasons 15 months before the recurrent episode of thrombosis. He had a protein S deficiency and portosinusoidal vascular disease. Patients # 12 and # 13 had superficial venous thrombosis and splenic vein thrombosis, 2 days and 23 months after anticoagulation interruption. (Supplementary Table
Figure 1A: flow chart, period 1
Figure 1B: flow chart, period 2
Figure 3
Thank you to Professors Ariane Mallat and Nathalie Ganne for agreeing to act as reviewers of this work. You each carry our voice and represent our efforts. Thank you to Professor Christophe Bureau for agreeing to be an examiner. Together you have worked to plough the furrow of our specialty so that it progresses with the AFEF, a place of exchange and collaboration. Thank you to Doctor Kannengiesser, for having taught me, with so much kindness, the genetic approaches to telomeropathies. Many friendships and fine memories of working with Doctor Didier Lebrec and Doctor Richard Moreau. Richard, it was these first research projects with you that made me want to continue along this path.
Thank you Danielle, Stéphane, and all the patients and friends of the AMVF. What a discovery it has been to work
with you.
To the dear departed: my father Jacques Plessier, Olga, Grane and Clothilde, Caroline, Manuel,
To my loved ones around me: to my mother Marie-Amélie Anquetil, to Suzanne
To Philippe, so supportive; to our dear children Arthur, Léo, Marin, Sarah, Victor, William,
Thank you Pierre Emmanuel, for being here and everywhere at once.
Thank you to the whole motivated team of the referral centre. To Kamal Zekrini and Djalila Seghier for your research work with patients and physicians, to Marie Santin for her role in empowerment training, and to Valérie de Bremand, without whom this work could not have been completed. Thank you to the entire French network for vascular liver diseases. Thank you to all my colleagues and friends at Beaujon, medical and paramedical staff, of vascular liver diseases, rare diseases, and of this thesis: Audrey, Sabrina, Olivier, Mohamed, Laure, Sophie, Odile, Julie, Lucie, Lucile, the Valéries, Onorina, Emmanuelle, Larbi, Claire, Dominique, Aurélie, Laurent, Tarik, François, Jacques and Françoise, Régis and Flore, Raphael, Olivier, Francisca, Vinciane, Violaine, Isabelle, Odile… and all the others… with whom I have shared, and still share, daily life.
To Nils and Djoher, Marine, Clément and Jean-Jacques. To Françoise, Valentine, Marie-Pierre, Benoit, Hamid and Jo, Marie-Odile, Caline, Isabelle, Alexandra, Pilar, and to the sailing companions, the Laurents, Florian, Fred and Maurizio.
TABLE OF CONTENTS
RESUME
ABSTRACT
ABREVIATIONS USED
Vascular liver diseases: Definitions and classifications
I- Definitions
   1- Non cirrhotic portal vein thrombosis (Non cirrhotic PVT)
   2- Budd Chiari syndrome
   3- Porto sinusoidal vascular disease (PSVD)
II- Causes and mechanisms of thrombosis in VLD
   1- Acquired conditions
      a) Myeloproliferative neoplasm
      b) Paroxysmal nocturnal hemoglobinuria (PNH)
      c) Behcet's disease (BD)
   2- Genetic conditions
   3- Risk of extension to portal vein thrombosis in PSVD with or without anticoagulant treatment
THESIS WORK
I- Background and aims
II- Results
   1- Article 1
   2- Article 2
   3- Article 3
   Annex 1
   Annex 2
CONCLUSION
I- Conclusion and perspectives
REFERENCES
Table 1: Prevalence of causal factors in vascular liver diseases [1,3,8,11,18-21]

Causal factor (prevalence, %) | PVT | BCS | PSVD
Acquired conditions
Myeloproliferative neoplasm (MPN)* | 20 | 40-50 | 10-15
Antiphospholipid Syndrome (APLS)* | 8-12 | 4-25 | 4-8
Paroxysmal nocturnal hemoglobinuria (PNH)* | 0-2 | 4-19 | NA
Behcet's disease* | 0-30 | 0-33 | NA
Hyperhomocysteinemia | 12-22 | 11-37 | NA
Hereditary or genetic conditions
Factor V Leiden | 5 | 6-32 | 0
Factor II gene mutation | 10-15 | 5-7 | 3
Protein C deficiency | 0-26 | 4-30 | 3
Protein S deficiency | 2-30 | 3-20 | 3
Antithrombin deficiency | 1 | 10 | 0
Risk factors for thrombosis
Oral contraceptives | 40 | 6-60 | NA
Pregnancy | 6-30 | 6-12 | NA
Metabolic syndrome | 25-47 | NA | NA
Local cause | 20-25 | 5 | NA
Systemic disease | 23 | 4 | 17
>1 risk factor | 50 | 45 | 5-15

*High-risk thrombotic associated causes: homozygous or composite heterozygous factor V Leiden or factor II gene mutation.

c) Behcet's disease (BD)
thromboses, 5 pulmonary embolisms, 8 mesenteric venous infarctions and 1 splenic infarction. In multivariate analysis, only the absence of thrombophilia and the presence of anticoagulant treatment were significantly associated with survival without new thrombotic events. A history of thrombosis, age, gender, and age at thrombosis were not predictive of thrombosis recurrence. With documented thrombophilia, the risk of recurrence in the portal system and of mesenteric or splenic infarction was 0.82 and 5.19 per 100 patient-years with and without anticoagulants, respectively (RR 6.3, 95% CI 1.3-30.4; p = 0.01). Only 2 patients had a mesenteric infarction on anticoagulants. Overall, this study suggests a benefit of anticoagulant treatment on the risk of thrombosis in patients with thrombophilia, without increasing the risk of bleeding. [2]

In a cohort of 44 PVT/MPN patients with long-term follow-up (median 5.8 years, range 0.4-21 years), recurrent thrombosis occurred in 12 patients (24%). It was the cause of death in 18% of the patients. [74]

In the retrospective study by Condat et al., which included 136 patients with acute or chronic portal venous thrombosis followed for 4 years, 84 patients received anticoagulant treatment, 30 of whom received treatment for a limited time only. Fifty-two patients did not receive anticoagulant treatment (5). Thirty-six thrombotic events were reported in 26 patients, including 18 lower limb venous
Table 3: Incidence of recurrent thrombosis per 100 patient-years in SVT cohorts
Recurrent thrombosis incidence per 100 patient-years: Ageno, N = 604; Condat, N = 136*
* With documented thrombophilia

Risk of extension to portal vein thrombosis in PSVD with or without anticoagulant treatment

Portal vein thrombosis is common in PSVD with portal hypertension and affects one third of patients at five years. HIV infection and the presence of variceal bleeding at diagnosis are independent risk factors for portal vein thrombosis. Anticoagulation is proposed in PSVD patients with or without portal vein thrombosis, although there is as yet no evidence to support anticoagulation in this context.
Table 1. Characteristics of patients with and without liver disease in the TRG mutations cohort (N=132).
Patients with Patients without p
liver disease liver disease
(N=95) (N=37)
One patient had TERT and TERC mutations, both classified as pathogenic.
*Family history was defined as a history of any telomere disease (first or second degree)
¤ ALP: alkaline phosphatase, ALT: alanine aminotransferase, AST: aspartate aminotransferase, BMI: body mass index (kg/m²), HSCT: hematopoietic stem cell transplantation, MDS: myelodysplastic syndrome, GGT: gamma glutamyl transferase, ILD: interstitial lung disease, TLCO: transfer factor of the lung for carbon monoxide.
*** Cancers in patients with liver disease: Chondrosarcoma, Cutaneous cancer without precisions,
Breast Cancer. Cancers in patients without liver disease: Kidney Cancer, two Cervical Cancers, Breast
Cancer.
13.6 (10.2-19.2) 13.6 (11.8-16.3) 0.665
Immunologic disease N/investigated 11/58 (19) 4/24 (17) 1.000
(%)
Cutaneous disease 51 (54) 15 (46) 0.163
Premature hair greying before the 34 (36) 7 (19) 0.060
age of 30
Skin hyper / hypopigmentation 18 (19) 5 (14) 0.460
Ungual dysplasia / dystrophia 15 (16) 2 (5) 0.150
Mucosal leucoplasia 12 (13) 4 (11) 1.000
Rheumatologic disease
Osteoporosis -N/investigated (%) 18/46 (39) 4/14 (29) 0.473
Ophthalmologic disease -N/investigated 9/29 (30) 0/9 (0) 0.085
(%)
Cancer history (other than blood
cancer)***
*
Table 3. Univariate and multivariate analysis of factors associated with liver transplantation-free survival in patients with liver disease in the TRG mutations cohort (N=95).
Transplantation-free survival
HR | 95% CI | p
Univariate analysis
Age at liver disease diagnosis 1.018 0.997 1.039 0.090
Age at lung diagnosis 1.006 0.982 1.029 0.637
Age at blood disease diagnosis 1.020 0.997 1.044 0.096
Age at mutation diagnosis 1.016 0.995 1.037 0.145
Male sex 1.327 0.457 3.856 0.604
TERT mutation 1.467 0.641 3.354 0.364
TERC mutation 0.464 0.139 1.541 0.210
DKC1 mutation 12.466 2.623 59.255 0.002
Telomere lengh <1st percentil 1.494 0.192 11.636 0.701
Cofactor for liver disease* 5.281 1.798 15.511 0.002
Excessive alcohol consumption** 2.813 1.040 7.607 0.042
BMI >25 kg/m ² 1.780 0.833 3.801 0.137
Ascites 5.978 1.335 26.775 0.019
Portal venous thrombosis 7.478 0.946 59.094 0.056
AST (IU/L) 1.012 0.999 1.025 0.063
ALT (IU/L) 0.976 0.956 0.997 0.026
ALP (IU/L) 1.005 1.000 1.010 0.021
GGT (IU/L) 1.004 1.001 1.008 1.008
Serum albumin (g/L) 0.860 0.801 0.925 < 0.001
Ferritin (µg/L) 1.001 1.000 1.001 0.046
Transferrin saturation (%) 1.040 1.012 1.069 0.004
*Cofactor for liver disease: excessive alcohol consumption >100 g/week, positive HBs Ag, metabolic syndrome (Defined by the presence of at least 3 of the following criteria: hypertension, diabetes,
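The hazard ratios in Table 3 come from Cox proportional-hazards models of liver-transplantation-free survival. As an illustrative sketch only (the data frame, column names and values below are invented, not the study data), this is how such a univariate hazard ratio is typically obtained with the Python `lifelines` package:

```python
# Illustrative sketch only: univariate Cox model for transplantation-free
# survival. The data below are synthetic and the column names are invented.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_months": [12, 34, 7, 58, 21, 45, 9, 60, 15, 30],  # follow-up time
    "event":       [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],          # 1 = death or liver transplantation
    "cofactor":    [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],          # e.g. a binary risk factor
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")

# exp(coef) is the hazard ratio reported in tables such as Table 3;
# print_summary() also shows its 95% confidence interval and p-value.
print(cph.hazard_ratios_)
cph.print_summary()
```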
Table 4. Variables associated with liver disease in the combined population (N=628) of the TRG mutations cohort (N=132) and propensity score-matched controls (N=396). Binary logistic regression; matching based on sex, overweight status, excessive alcohol consumption, arterial hypertension, and diabetes.
HR CI95% p
Univariate analysis
Male sex 2.96 1.93 4.55 < 0.001
At least one TRG mutation 10.14 6.45 15.94 < 0.001
Excessive alcohol consumption* 3.58 1.83 7.02 < 0.001
BMI > 25 kg/m² 1.57 1.09 2.26 0.015
Arterial hypertension 1.56 0.78 3.13 0.210
Diabetes 1.31 0.65 2.62 0.449
Multivariate analysis
Male sex 3.34 1.99 5.60 < 0.001
At least one TRG mutation 12.89 7.78 21.35 < 0.001
Excessive alcohol consumption* 2.96 1.34 6.51 0.007
BMI > 25 kg/m² 2.33 1.47 3.68 < 0.001
*According to WHO definition (>100g per week)
BMI: body mass index (kg/m²)
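Table 4 relies on propensity score-matched controls followed by binary logistic regression. The sketch below illustrates that general workflow on synthetic data; the covariates, the 1:3 matching ratio and the use of scikit-learn are assumptions made for illustration, not a description of the study's actual matching procedure.

```python
# Illustrative sketch of propensity-score matching + logistic regression on
# synthetic data (covariate names, matching ratio and effect sizes invented).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "male":         rng.integers(0, 2, n),
    "bmi_over25":   rng.integers(0, 2, n),
    "alcohol":      rng.integers(0, 2, n),
    "trg_mutation": rng.integers(0, 2, n),   # exposure of interest
})
# Synthetic outcome: liver disease more likely with the mutation and male sex.
logit = -2 + 2.0 * df["trg_mutation"] + 0.8 * df["male"] + 0.5 * df["bmi_over25"]
df["liver_disease"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

covars = ["male", "bmi_over25", "alcohol"]

# 1) Propensity score: probability of carrying the mutation given covariates.
ps_model = LogisticRegression().fit(df[covars], df["trg_mutation"])
df["ps"] = ps_model.predict_proba(df[covars])[:, 1]

# 2) 1:3 nearest-neighbour matching on the propensity score (with replacement,
#    for simplicity; the real study design and ratio may differ).
exposed = df[df["trg_mutation"] == 1]
controls = df[df["trg_mutation"] == 0]
matched_idx = []
for ps in exposed["ps"]:
    matched_idx.extend((controls["ps"] - ps).abs().nsmallest(3).index)
matched = pd.concat([exposed, controls.loc[matched_idx]])

# 3) Outcome model on the matched sample (note: sklearn applies L2
#    regularisation by default, which slightly shrinks the odds ratio).
out_model = LogisticRegression().fit(matched[["trg_mutation"] + covars],
                                     matched["liver_disease"])
print("Odds ratio for the exposure:", float(np.exp(out_model.coef_[0][0])))
```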
TRG mutations cohort: gene variants and telomere length. A.
Article 2: Paroxysmal nocturnal hemoglobinuria (PNH) and vascular liver disease (VLD): eculizumab therapy decreases mortality and thrombotic
During the study period, samples from biopsies or explants of liver transplant recipients were systematically collected, formalin-fixed, paraffin-embedded, stained for histological examination, and stored. Hematein and eosin, picrosirius and Perls stained tissue sections were retrospectively reviewed by an expert pathologist (Valérie Paradis) blinded to clinical, laboratory and imaging results.
FIB-4 score ≥3.25 2.278 0.887 5.851 0.087
Liver stiffness (FibroScan®) 0.991 0.917 1.071 0.815
Liver stiffness (FibroScan®) ≥20 kPa 1.654 0.269 10.160 0.587
complications
HVPG (mmHg) 1.127 (15 characters excluding space) 0.123 10.283 0.916
HVPG ≥10 mmHg Authors: 1.040 0.776 1.011 0.358 1.069 1.686 0.006 0.522
Age at lung disease diagnosis Leucokytes (10 9 /L) Histological features Aurélie Plessier 1 Lengh, millimeter Marina Esposito-Farèse 2 1.034 1.000 1.002 1.000 1.068 1.000 0.036 0.262
Age at blood disease diagnosis Hemoglobin (g/dL) Number of portal tract Anna Baiges 3 Porto-sinusoïdal vascular disease Akash Shukla 4 1.045 1.050 1.012 0.881 1.079 1.251 0.007 0.583
Age at mutation diagnosis Platelets (10 9 /L) Juan Carlos Garcia Pagan 3 Specific lesions Emmanuelle De Raucourt 5 1.037 1.000 1.009 1.000 1.067 1.000 0.010 0.607
Male sex MCV (fL) Isabelle Ollivier-Hourmand 6 RNH OPV Jean-Paul Cervoni 7 0.860 1.062 0.284 1.008 2.609 1.119 0.791 0.023
TERT mutation TERC mutation Tabacco exposure Multivariate analysis*** Victor De Ledinghen 8 Incomplete septal cirrhosis Zoubida Tazi 9 Unspecific lesions Jean-Baptiste Nousbaum 10 Sinusoidal distension René Bun 2 1.697 0.695 1.776 0.611 0.202 0.711 4.717 2.388 4.434 0.310 0.564 0.219
DKC1 mutation Cofactor for liver disease* Christophe Bureau 11 11.397 13.382 1.323 1.747 98.164 102.506 0.027 0.013
Telomere lenght <1st percentil FIB-4 score ≥3.25 Christine Sylvain 12 Cirrhosis Olivier Tournilhac 13 0.882 1.074 0.110 1.024 7.100 1.127 0.906 0.003
Cofactor for liver disease* Mathieu Gerfaud Valentin 14 Excessive alcohol consumption** François Durand 1 Advenced Fibrosis F3 Odile Goria 15 Cirrhosis F4 Luis Tellez 16 16.857 3.345 2.222 1.082 127.857 10.348 0.006 0.036
BMI >25 kg/m ² Agustin Albillos 16 Ascites Stefania Gioia 17 NASH Oliviero Riggio 17 Andrea De Gottardi 18 2.878 4.450 1.128 0.554 7.346 35.720 0.027 0.160
Portal venous thrombosis AST (IU/L) ALT (IU/L) ALP (IU/L) Audrey Payance 1 Hepatocellular ballooning Pierre-Emmanuel Rautou 1 Hepatocellular inflammation Louis Terriou 19 Steatosis Aude Charbonnier 20 SAF score 21 Laure Elkrief NAS score Regis Peffault de la Tour 22 Dominique-Charles Valla 1 12.920 0.994 0.962 0.996 1.506 0.971 0.933 0.987 110.810 1.018 0.992 1.005 0.020 0.608 0.014 0.393
GGT (IU/L) Nathalie Gault 2,23 Associated lesions Flore Sicre de Fontbrune 22 1.002 0.997 1.007 0.501
Serum albumin (g/L) 0.893 0.817 0.976 0.012
Ferritin (µg/L) Lobular cholestasis Iron overload 1.000 1.000 1.001 0.203
Transferrin saturation (%) 1.052 1.020 1.085 0.001
MELD score Normal liver biopsy 1.436 1.022 2.018 0.037
FIB-4 score 1.084 1.034 1.138 0.001
*Cofactor for liver disease: excessive alcohol consumption >100 g/week, positive HBs Ag, metabolic syndrome, BMI >25 kg/m², iron overload (ferritin >1000 µg/L). **According to WHO definition (>100g per week) ***Multivariate analysis was performed on variables with <0.01 significance in univariate analysis and for which more than 95% of the data were available. ALP: alkaline phosphatase, ALT: alanine aminotransferase, AST: aspartate aminotransferase, GGT: gamma glutamyl transferase, HVPG: hepatic venous pressure gradient, MCV: Mean corpuscular volume.
OPV: obliterative portal venopathy; NRH: nodular regenerative hyperplasia
2-
g/dL, LDH 736 (482-1744) IU/L; the size of the clone did not differ over time, before and after 2007 (Table 3); 21 patients (37%) had classic PNH, 12 (21%) had aplastic anemia-PNH syndrome (AA-PNH syndrome), and 23 (41%) had intermediate PNH. Among intermediate PNH, 7 had a diagnosis of MDS and 3 a diagnosis of MPN; the others did not fulfil the criteria for AA-PNH.
Table 1. Characteristics at Budd-Chiari syndrome (BCS) or portal vein thrombosis (PVT) diagnosis, risk factors for thrombosis, and BCS or PVT treatments.
Total PVT BCS
(n=62) (n = 12) (n = 50)
Table 2. Sequence of diagnosis and delay of diagnosis
Total PVT BCS
Table 3. Characteristics of BCS or PVT patients at diagnosis before and after 2007
SVT before 2007 SVT after 2007 p
Number 24 38
Age, years 36 (30-49) 35 (27-48) 0.64
Females 15 (62) 18 (47) 0.30
SVT 0.75
BCS 20 (83) 30 (79)
PVT 4 (17) 8 (21)
Acquired thrombophilia
MPNs 1 (6) 2 (8) 0.99
Antiphospholipid antibody syndrome 3 (16) 0 (0) 0.07
Hemoglobin, g/dL 9.9 (8.6-10.8) 10.3 (8.5-11.2) 0.47
Platelet count, x10 9 /L 78 (62-113) 115 (61-205) 0.10
Clone size%, IQR 83 (73-90) 83 (70.8 -95.5) 0.52
AST, U/L 58.5 (29.0-76.8) 54.0 [31.5-86.0) 0.90
ALT, U/L 33.5 (23.5-73.5) 40.0 (30.5-74.5) 0.52
Serum bilirubin, µmol/L 34 (16.8-55.2) 24.5 (15.6-50.8) 0.49
Serum creatinine, µmol/L 66 (55.3-86.3) 62.5 (53.2-85) 0.64
Serum albumin, g/L 35 (29-42.3) 32 (28-37) 0.20
Factor V, % 74 (67-84) 83 (60-93) 0.83
Child Pugh score (among BCS patients, N = 50) 10 (8.5-11.3) 10 (8-11) 0.68
Clichy score (among BCS patients, N = 50) 6 (5.3-6.7) 5.9 (5.5-6.7) 0.99
Anticoagulation 22 (92) 38 (100) 0.15
Stent 2 (8) 1 (3) 0.55
TIPS 7 (29) 9 (24) 0.77
Liver transplantation 0 (0) 0 (0) 0.99
Abbreviations: ALT, alanine transaminase; AST, aspartate transaminase; BCS, Budd-Chiari syndrome, MPN,
myeloproliferative neoplasm; PVT, portal venous system thrombosis;
Values are n (%) or median (interquartile range).
Table 4: Complications during follow-up
Complication | Exposure | Number of events | Person-years (PY) | Incidence rate per 100 PY | IRR (95% CI)
Death yes 4 152.6 2.62 0.30 (0.10 -0.92)
no 13 148.7 8.74
yes 5 152.6 3.28 0.41 (0.14-1.15)
Bleeding
no 12 148.7 8.07
yes 4 152.6 2.62 0.43 [0.13-1.41]
Liver complications*
no 9 148.7 6.05
yes 6 152.6 3.93 0.53 [0.2-1.44]
Infection
no 12 148.7 8.07
yes 1 152.6 0.66 0.16 (0.02-1.35)
Arterial ischemic event
no 8 148.7 5.38
yes 4 152.6 2.62 0.22 (0.07-0.64)
Venous thrombosis**
no 21 148.7 14.12
Among 33 patients with ascites
yes 13 80.9 16.1 0.29 [0.08 -1.05]
Ascites resolution***
no 11 86.8 12.7
Among 24 patients with resolved ascites
yes 1 71.5 1.40 0.97 [0.06 -15.5]
Recurrent ascites****
no 5 69.3 7.21
* Hepatic encephalopathy / hepatorenal syndrome / peritonitis / ascites
** Venous thrombosis included deep vein thrombosis, new splanchnic vein thrombosis, cerebral thrombosis and TIPS thrombosis occurring after BCS or PVT diagnosis.
*** For ascites resolution, analyses were performed among patients with ascites at BCS or PVT diagnosis.
**** For recurrent ascites, analyses were performed among patients with ascites resolution.
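The rates in Table 4 are crude incidence rates per 100 person-years, and the IRR is simply their ratio. As a worked check of the arithmetic for the first row (death), using the counts and person-years shown in the table:

```python
# Worked check of the incidence-rate arithmetic in the "Death" row of Table 4:
# rate = events / person-years (scaled to 100 PY); IRR = exposed / unexposed.
events_exposed, py_exposed = 4, 152.6        # deaths, person-years with eculizumab
events_unexposed, py_unexposed = 13, 148.7   # deaths, person-years without eculizumab

rate_exposed = 100 * events_exposed / py_exposed        # ~2.62 per 100 PY
rate_unexposed = 100 * events_unexposed / py_unexposed  # ~8.74 per 100 PY
irr = rate_exposed / rate_unexposed                     # ~0.30, as in the table

print(f"rate with eculizumab:    {rate_exposed:.2f} per 100 PY")
print(f"rate without eculizumab: {rate_unexposed:.2f} per 100 PY")
print(f"incidence rate ratio:    {irr:.2f}")
```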
Table 1. Characteristics of patients at inclusion
TOTAL (N=111) Rivaroxaban (N=55) No (N=56) anticoagulation p-value
Age 49.3 [41.9-60.8] 50.5 [41.6-60.8] 48.1 [42.3-61.5] 0.92
Male gender N % 64 (58) 31 (56) 33 (59) 0.78
Ongoing anticoagulation at inclusion N % 82 (74) 39 (71) 43 (77) 0.48
Past anticoagulation therapy in
patients without anticoagulation 19 (66) 9 (56) 10 (77) 0.43
at inclusion N %
Low-risk Thrombophilia N %
Heterozygous G1691A factor V mutation 1 (1) 0 (0) 1 (2) 1.00
Heterozygous G20210A factor II mutation 8 (7) 6 (11) 2 (4) 0.16
Protein S deficiency* 16 (15) 10 (18) 6 (11) 0.28
Protein C deficiency* 22 (20) 14 (26) 8 (14) 0.14
Antithrombin deficiency* 2 (2) 1 (2) 1 (2) 1.00
Hyperhomocysteinemia 24 (26) 12 (27) 12 (25) 0.80
Total thrombophilia 48 (50) 28 ( 60) 20 (42) 0.08
Other risk factors for
thrombosis at the time of PVT
diagnosis N %
Oestroprogestative or
pregnancy 3 months prior PVT 14 (30) 7 (29) 7 (30) 0.92
among 47 women
Systemic disease 1 (1) 1 (2) 0 (0) 0.50
Local risk factors at diagnosis
-Abdominal or pelvic surgery within 3 months prior PVT 13 (12) 8 (15) 5 (9) 0.36
-Local inflammation or
infection at the time of 17 (15) 8 (15) 9 (16) 0.82
thrombosis diagnosis
PSVD (out of 15 liver biopsies) 6 (5) 5 (9) 1 (2) 0.11
Other cardiovascular risk
factors
Past cerebral stroke N % 2 (2) 1 (1) 1 (1) 1.00
Type-2 diabetes N % 6 (5) 4 (7) 2 (4) 0.44
Arterial hypertension N % 19 (17) 14 (26) 5 (9) 0.021
BMI (kg/m2) 26.8 [24.0-30.2] 26.9 [24.0-29.9] 26.8 [24.1-30.3] 0.97
BMI >30 N % 29 (26) 13 (24) 16 (29) 0.59
Ongoing smoking N % 12 (11) 6 (11) 6 (11) 1.00
At least one persisting low-risk factors ** N % 102 (94) 47 (90) 55 (98) 0.06
Number of persisting low-risk
factors : N % 1 13 (12) 4 ( 7.5) 9 (16) 0.012
2 22 ( 20) 5 ( 9) 17 (30)
3 25 ( 23) 11 ( 21) 14 (25)
4 26 ( 24) 16 ( 30) 10 (18)
5 10 ( 9) 7 ( 13) 3 (5)
(3)
(11) 0.096 Data are presented as counts N (%) or medians (interquartile range) *According to WHO definition (>100g per week)
(33) -Retinal detachment 3 (33) -Keratoconus 1 (11) -Macular degeneration at early age 1 (11) -Congenital cataract 1 (11) -
13/32(41) -Variceal bleeding 12 (13) -
(13) -
(3) -
(45) 396 (100) < 0.001
Acknowledgements: Dahia Sekour (clinical research assistant, Hepatology department, Beaujon Hospital, APHP, France) for data collection, Kamal Zekrini (clinical research assistant, Hepatology department, Beaujon Hospital, APHP, France) for patient screening, Christelle Ménard and Claire Oudin (genetic laboratory technicians, Genetics department, Bichat Hospital, APHP, France).

Corresponding author: Dr Aurélie Plessier, MD, Service d'Hépatologie, Hôpital Beaujon, Assistance Publique des Hôpitaux de Paris, Clichy, France. Telephone: +33 1 40 87 55 01; Fax: +33 1 40 87 55 30; E-mail: [email protected]

Author's contributions (* the authors contributed equally): Concept and design: SS, AP, FS, RB, CK, DV, PER, VP. Acquisition of data: SS. Statistical analysis: SS, AP, PER. Interpretation of data: SS, AP, PER, DV. Drafting and critical revision of manuscript: SS, AP, FS, RB, CK, DV, PER, VP, EL, OG, JC, JMN, VC, BC, VB, JD, EJ, JBN, SH, AB, MM, SD, SH, VM, MRG, LT, FG, WAC, JEK, PC.

Acknowledgements: We would like to express our gratitude to Djalila Seghier, Kamal Zekrini and the French national network for vascular liver diseases for their collaboration in data acquisition, the Association des Malades des Vaisseaux du Foie (AMVF) and the data safety monitoring board: Pr Ariane Mallat, Pr Philippe Sogni, and Pr Michaela Fontenay for their scientific support, URC Paris Nord and Estelle Marcault for regulatory and logistic support, and Dr Didier Lebrec, Pr Sarwa Darwish Murad and Dale Lebrec for their critical and English revision of the manuscript.
Financial Support
This work was supported by the Centre de référence des maladies vasculaires du foie and a fund from the Assistance Publique - Hôpitaux de Paris (CRC 2017).

Financial Support: This work was supported by a grant Programme Hospitalier de Recherche Clinique (PHRC N° P110150) from the French Ministry of Health, and by a grant from the Association de Malades des Vaisseaux du foie (AMVF).

ClinicalTrials.gov identifier: NCT02555111. Word count: words (including references) (≤ 3500 words). References: 30.
Data are presented as counts N (%) or medians (interquartile range). GFR: glomerular filtration rate.
Data availability statement:
The datasets generated and analysed during the current study are not publicly available but are available from the corresponding author upon reasonable request.
ABBREVIATIONS USED:
APLS
Authors:
Sabrina Sidali (1,2), Raphaël Borie* (3), Flore Sicre de Fontbrune* (4), Elodie Lainey (5), Pierre-Emmanuel Rautou (1), Kinan El Husseini (6), Odile Goria (1,2), Jacques Cadranel (7), Jean-Marc Naccache (7), Vincent Cottin (8), Bruno Crestani (3), Vincent Bunel (3), Jérôme Dumortier (9), Emmanuel Jacquemin (10), Nousbaum Jean-Baptiste (11), Sandrine Hirschi (12), Arnaud Bourdin (13), Magdalena Meszaros (14), Sebastien Dharancy (15), Sophie Hilaire (16), Vincent Mallet (17), Martine Reynaud-Gaubert (18), Louis Terriou (19), Frédéric Gottrand (19), Wadih Abou Chahla (19), Jean-Emmanuel Khan (20), Paul Carrier (21), Faouzi Saliba (22), Laura Rubbia-Brandt (23), John-David Aubert (24), Laure Elkrief (25), Victor de Lédinghen (26), Armand Abergel (27), Tournilhac Olivier (28), Pauline Houssel (29), Stephane Jouneau (30), Ludivine Wemeau (31), Anne Bergeron (32), Yasmina Chouik (9), Antoine Coupier (9), Thierry Leblanc (5), Isabelle Ollivier-Hourmand (33), Eric Nguyen Khac (34), Hélène Morisse-Pradier (6), Ibrahima Ba (35), Françoise Roudot-Thoraval (36), Dominique Roulot (36), Valérie Vilgrain (37), Christophe Bureau (38), Hilario Nunes (39), François Durand (1), Catherine Boileau (35), Claire Francoz (1), Dominique Valla (1), Valérie Paradis (40), Caroline Kannengiesser (35), Aurélie Plessier (1).
Affiliations :
(1) Université de Paris, AP-HP, Hôpital Beaujon, Service d'Hépatologie, DMU DIGEST, Centre de Référence des Maladies Vasculaires du Foie, FILFOIE, ERN RARE-LIVER, Centre de recherche sur l'inflammation, Inserm, UMR 1149, Paris, France
(2) Centre Hospitalier Universitaire Charles Nicolle, Hépatogastro-entérologie, Rouen, France
(3) Hôpital Bichat AP-HP, Pneumologie A, Paris, France, Orphalung network
(4) Hôpital Saint Louis, Hematology and Transplant Unit, APHP, CRMR rare referral center for aplastic anemia, 75475 Paris Cedex 10, France
(5) Hôpital Robert Debré, Hématologie, Paris, France
(6) Centre Hospitalier Universitaire Charles Nicolle, Pneumologie, Rouen, France
(7) Hôpital Tenon AP-HP, Pneumologie, Paris, France
(8) Centre Hospitalier Universitaire Lyon Sud, Pneumologie, Pierre-Bénite, France
(9) Hôpital Édouard Herriot, Hépatologie, Lyon, France
(10) Hôpital Kremlin-Bicêtre AP-HP, Hépatologie Pédiatrique, Le Kremlin-Bicêtre, France
(11) Centre Hospitalier Régional Universitaire Morvan, Hépatologie, Brest, France

Results: Among 132 patients with TRG mutations, 95 (72%) had liver disease (19 with AST and/or ALT > 30 IU/L, 12 with a dysmorphic liver, and 64 with both), versus 80/396 (20%) of controls (hazard ratio 12.9, 95% CI 7.8-21.3, p<0.001). Liver biopsy, performed in 52 TRG patients with liver disease, identified porto-sinusoidal vascular disease in 42% and advanced fibrosis/cirrhosis in 15%. Patients with liver disease presented TRG lung, blood, cutaneous, rheumatologic and ophthalmic disease in 82%, 77%, 55%, 39% and 30% of cases, respectively. After a median follow-up of 21 months (95% CI 12-54 months), ascites, variceal bleeding, and hepatocellular carcinoma occurred in 14%, 13%, and 2% of TRG patients with liver disease, respectively. Overall survival and LT-free survival were 79% and 69%, respectively. A FIB-4 score ≥ 3.25 and at least one risk factor for cirrhosis were associated with poor LT-free survival.
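The FIB-4 score used above (with the usual 3.25 cut-off) is the standard fibrosis index combining age, AST, ALT and platelet count. The snippet below applies that standard formula; the patient values are invented for illustration.

```python
# Standard FIB-4 formula: (age [years] * AST [IU/L]) /
# (platelets [10^9/L] * sqrt(ALT [IU/L])). Patient values below are invented.
from math import sqrt

def fib4(age_years, ast_iu_l, alt_iu_l, platelets_giga_l):
    return (age_years * ast_iu_l) / (platelets_giga_l * sqrt(alt_iu_l))

score = fib4(age_years=55, ast_iu_l=62, alt_iu_l=48, platelets_giga_l=110)
print(f"FIB-4 = {score:.2f}; >= 3.25 (high-risk cut-off): {score >= 3.25}")
```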
Conclusion:
PSVD and dysmetabolic cirrhosis are the two most frequently observed liver-related phenotypes in TRG mutations, suggesting a multifactorial mechanism. Environment has a major impact on the prognosis of TRG-associated liver disease. TRG mutations constitute a strong genetic risk factor for liver diseases.

PSVD - N/investigated (%): 22/52 (42)
NASH - N/investigated (%): 7/52 (13)
Advanced fibrosis - cirrhosis - N/investigated (%): 8/52 (15)
Aspecific isolated lesions - N/investigated (%)
HVPG: hepatic venous pressure gradient. * Among the patients with aspecific isolated lesions, three had a liver stiffness > 7 kPa and three had severe portal hypertension.
Supplementary data 6. Genetic and general characteristics in patients with PSVD (N=22)
Lung Assessment
The following data were collected: toxic exposure (tobacco, fibrogenic exposure, avian or domestic precipitins), pulmonary functional signs, results from chest imaging, blood gases and pulmonary function tests (PFT) (vital capacity, carbon monoxide diffusion capacity), cardiac transthoracic contrast echography, and the occurrence of complications during follow-up.
Blood Assessment
For blood disease, results were obtained from blood analysis, electrophoresis of plasma proteins, bone marrow biopsy and aspiration, cytogenetic exam, and the occurrence of complication during follow-up.
Supplementary material: pathological features predefined for liver biopsy review.
Contact Information
Keywords
blocked with eculizumab treatment. Chronic administration of eculizumab resulted in a rapid and sustained reduction in complement-mediated haemolytic activity [9][10][11]. The EMA summary of product characteristics states that eculizumab is currently indicated in adult and paediatric PNH patients with hemolysis and clinical symptoms indicative of high disease activity, regardless of transfusion history [12]. Availability has therefore varied over the last 15 years, over time and between countries. The effect of eculizumab on the outcome of patients with vascular liver disease is unknown.
We aimed to assess the impact of eculizumab therapy on mortality, liver disease complications and thrombotic or bleeding complications, in patients with BCS or PVT.
still needed in these patients, who may still have severe, lethal complications, often triggered by bacterial infection. Prospective studies are warranted to evaluate whether early initiation of eculizumab (before 2 weeks) and complement blockade monitoring will help to avoid the use of stents and TIPS and prevent severe VLD in these patients. This is of particular importance in those patients with hematologic disease (aplastic anemia, MDS and MPN) who could be candidates for bone marrow transplantation.

Background. In non-cirrhotic patients with a previous history of portal vein thrombosis (PVT), the benefit of long-term anticoagulation in the absence of major risk factors for thrombosis is unknown. We assessed the effects of rivaroxaban on the risk of recurrent venous thrombosis.
Patients and Methods
In this multicenter, open label randomized controlled trial, we assigned patients with a history of previous PVT without major prothrombotic risk factors to rivaroxaban 15 mg/day versus no anticoagulation. Primary endpoint was 2-year thrombosis-free survival. The occurrence of major bleeding events was a secondary endpoint.
Results
The second interim analysis, requested by the sponsor after the occurrence of 10 thrombotic events in the 111 included patients, showed an incidence of thrombosis of 0 per 100 person-years (PY) in the rivaroxaban group and 19.71 per 100 PY (95% CI [7.49-31.92]) in the no-anticoagulation group (log-rank p-value = 0.0008). Based on these data, the independent safety monitoring board recommended switching patients from the no-anticoagulation group to anticoagulation.
After a median follow-up of 30.3 months (95% CI [29.8-35.9]), major bleeding occurred in two patients receiving rivaroxaban and in one without anticoagulant. In the no-anticoagulation group, plasma D-dimer concentrations ≥ 500 ng/mL one month after inclusion were significantly associated with recurrent thrombosis (HR 7.78 [1.49-40.67]), with a negative predictive value of 94%. There were no deaths.
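The 94% negative predictive value quoted for the D-dimer < 500 ng/mL rule is a simple 2×2-table quantity. The counts below are invented purely to make the arithmetic concrete; they are not the trial data.

```python
# Negative predictive value of "D-dimer < 500 ng/mL => no recurrence":
# among patients below the threshold, the proportion without recurrent
# thrombosis. The counts are invented purely to illustrate the arithmetic.
true_negatives = 33    # D-dimer < 500 ng/mL and no recurrence
false_negatives = 2    # D-dimer < 500 ng/mL but recurrence occurred
npv = true_negatives / (true_negatives + false_negatives)
print(f"NPV = {npv:.0%}")   # ~94%
```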
Conclusion
In patients with previous PVT without cirrhosis and without major prothrombotic risk factors, rivaroxaban reduced the incidence of recurrent venous thrombosis without excess bleeding risk. D-dimer < 500ng/ml one month after anticoagulation discontinuation predicted a low risk of recurrence.
Registration number /registry name: 2015-001190-10 /NCT02555111
selected patients with markedly different doses, agents, and underlying risk factors of thrombosis (15).
Thus, the goal of this trial was to assess the efficacy of rivaroxaban 15 mg/day to prevent the recurrence of thromboembolic events or death in patients with a prior history of noncirrhotic PVT and without major risk factors for thrombosis.
Methodology Design & settings
This randomized, open-label controlled trial was performed between September 16, 2015 and January 30, 2020 (NCT02555111), in 8 centers in France with experience in the management of vascular liver disease.
Ethics
The institutional review board (IRB Ile-de-France IV n°53-15 on September 2, 2015, and the French Medicine Agency on July 28 th , 2015) approved the protocol. All participants were fully informed about the study protocol and provided written informed consent.
Population
Eligible subjects were adult patients, with a previous history of PVT defined as chronic portal vein thrombosis (including those with portal cavernoma) or signs of recent PVT, without major prothrombotic risk factors, diagnosed more than 6 months prior to inclusion.
The diagnosis of portal vein thrombosis was based on previously defined EASL and AASLD guidelines (2,4), and patients may or may not have been treated with anticoagulants. For patients initially diagnosed with recent PVT: i) the event occurred more than 6 months prior to inclusion; ii) PVT was documented by a contrast-enhanced multiphasic CT or MR; iii) PVT was followed by recanalization or persistence of obstruction; and iv) thrombosis affected the trunk or the right or left branches of the portal vein, with or without splenic or mesenteric vein involvement, with or without ischemic intestinal damage. PVT was classified according to the classification suggested in the recent AASLD practice guidance document (4).
Major prothrombotic risk factors included myeloproliferative neoplasms, antiphospholipid syndrome, homozygous or composite heterozygous G20210A factor II or G1691A factor V mutations, and a personal or 1 st degree family history of unprovoked venous thrombosis.
Patients with intestinal resection (due to past mesenteric infarction), cirrhosis, |
04095742 | en | ["sdu.stu", "sde.mcg", "sdu.stu.gc"] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04095742/file/poster-portrait-A0-CARNON6.pdf
email: [email protected].
Sylvain Rigaud
Isabelle Techer
Management of dredged marine sediments in southern France: main keys to large-scale beneficial re-use
make some recommendations on future policy options for the sustainable management of dredged sediments.

Conclusion
• A sustainable approach to managing dredged sediments requires a change of view: rather than a waste, dredged materials need to be seen as valuable resources.
• Perception issues remain crucial challenges to those promoting sustainable sediments management through the dredging process and activities.
• National level support will be necessary for helping to change attitudes and to promote sustainable dredged sediments management within the existing policy framework or introducing a new policy to succeed the mission.
• An effective and favorable policy would be accomplished to achieve the sustainable utilization of dredged sediments for different applications and valuable products.
• It may be expected that sediment management may change in the future, requiring more sustainable and circular approaches to dredged sediment management.
Introduction
The presence of anthropogenic activities throughout the globe and the harbors generate an enormous amount of dredged sediments. Various research studies [START_REF] Slimanou | Harbor Dredged Sediment as raw material in fired clay brick production: Characterization and properties[END_REF][START_REF] Yoobanpot | Multiscale laboratory investigation of the mechanical and microstructural properties of dredged sediments stabilized with cement and fly ash[END_REF][START_REF] Zheng | Framework for determining optimal strategy for sustainable remediation of contaminated sediment: A case study in Northern Taiwan[END_REF][START_REF] Bagarani | The reuse of sediments dredged from artificial reservoirs for beach nourishment: technical and economic feasibility[END_REF][START_REF] Baptist | Beneficial Use of Dredged Sediment to Enhance Salt Marsh Development by Applying A 'Mud Motor': Evaluation Based on Monitoring[END_REF][START_REF] Ulibarri | Barriers and opportunities for beneficial reuse of sediment to support coastal resilience[END_REF][START_REF] Balkaya | Beneficial use of dredged materials in geotechnical engineering[END_REF][START_REF] Houlihan | Geoenvironmental Evaluation of RCA-Stabilized Dredged Marine Sediments as Embankment Material[END_REF][START_REF] Bianchini | Sediment management in coastal infrastructures: Technoeconomic and environmental impact assessment of alternative technologies to dredging[END_REF][START_REF] Wang | Recycling dredged sediment into fill materials, partition blocks, and paving blocks: Technical and economic assessment[END_REF][START_REF] Cezarino | Diving into emerging economies bottleneck:Industry 4.0 and implications for circular economy[END_REF][START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF] developed process technology to recycled contaminated dredged sediments, and it seems that there is the potential of value proposition and economic viability of the products. The integrated life cycle assessment (LCA) [START_REF] Soleimani | Environmental, economic and experimental assessment of the valorization of dredged sediment through sand substitution in concrete[END_REF], life cycle cost assessment (LCC) are also involved in economic analysis to provide insight for early stage decision-making on the valorization of the dredged sediments.
The development of technology to produce valuable materials from contaminated dredged sediments offers an opportunity to use it as an alternative to virgin resources that could save energy and optimize resource efficiency resource utilization, reduce emissions and potentially contribute to a circular economy. The choice of recycling option for dredged sediments is essential in economic, environmental, sustainability terms and the perspective of the circular economy.
4. An economic approach towards recycling dredged sediments for sustainable development
*Respect for the principle of proximity and economic aspects/realities
Managing waste with proper disposal facilities became a crucial challenge [START_REF] Lirer | Mechanical and chemical properties of composite materials made of dredged sediments in a fly-ash based geopolymer[END_REF][START_REF] Kamali | Marine dredged sediments as new materials resource for road construction[END_REF][START_REF] Norén | Integrated assessment of management strategies for metal-contaminated dredged sediments?[END_REF][START_REF] Marmin | Ocean & coastal management collaborative approach for the management of harbour-dredged sediment in the bay of seine (France)[END_REF][START_REF] Slimanou | Harbor Dredged Sediment as raw material in fired clay brick production: Characterization and properties[END_REF][START_REF] Yoobanpot | Multiscale laboratory investigation of the mechanical and microstructural properties of dredged sediments stabilized with cement and fly ash[END_REF][START_REF] Zheng | Framework for determining optimal strategy for sustainable remediation of contaminated sediment: A case study in Northern Taiwan[END_REF][START_REF] Bagarani | The reuse of sediments dredged from artificial reservoirs for beach nourishment: technical and economic feasibility[END_REF][START_REF] Baptist | Beneficial Use of Dredged Sediment to Enhance Salt Marsh Development by Applying A 'Mud Motor': Evaluation Based on Monitoring[END_REF]. Due to a shortage of land, the cost of waste dumping gradually increases. Unmanaged dumping of dredged sediments leads to damage to environmental and ecological systems. Using dredged sediments through the recycling process is a sustainable approach towards supplementing natural resources, reducing pollution and greenhouse gas (GHG), energy optimization, enhancing economic development [START_REF] Soleimani | Environmental, economic and experimental assessment of the valorization of dredged sediment through sand substitution in concrete[END_REF], improve sustainability on marine ecosystems.
Exploring the required mechanical, chemical, and physical properties of dredged sediments would lead to a most appropriate option for different applications:
The beneficial use of dredged sediments for various applications
Method: literature review on the subject 1. Production of dredged marine sediments in France and abroad
Recommendation on future policy options for the management of dredged sediments
The future policy options available to policymakers for managing dredged sediments are to promote the necessary schemes and facilities within national policy and regulatory frameworks, in alliance with industry partners. The problems related to the management of dredged sediments that need to be resolved in order to promote sustainable development are the following:
Keywords
"dredging" OR "marine sediments", "management", "waste recycling", "chemical compositions", "regulatory aspect", "disposal" AND "potential recovery". ▪ Policy for limiting the use of certain unsustainable methods or technology to specific industries ▪ The particular policy required for penalties wherever applicable ▪ Unfavorable market forces ▪ Legal/policy/governance challenges at the international and national ▪ Disposal capacity and process ▪ Critical barrier on transporting of dredged sediments to a different location regarding cost, safety, physicochemical properties, environmental issues, etc. ▪ Major investments ▪ Lack of government initiative in respect of promoting mission zero waste and to make realize that dredged sediments is wealth ▪ Delay in necessary approval and responses from the government official pertaining to dealing of dredged sediments ▪ Lack of policy frameworks, especially in the context of managing dredged sediments ▪ Transparency of information systems among the government and industry partner ▪ Introduce code and standards on the secondary product from dredged sediments so that users can use the products.
▪ Facilities for transportation of enormous amount of sediments
Major bottlenecks [START_REF] Ulibarri | Barriers and opportunities for beneficial reuse of sediment to support coastal resilience[END_REF] towards managing bulk usage of the dredged sediments are as follows.
Potential applications for reuse of dredged sediments
Example below of the strategy developed for the management of dredged marine sediments from harbor Carnon (south of France)
In France, the decision on sediment suitability for sea dumping or land disposal is primarily based on chemical findings, being the ecotoxicological analyses only advised to be done on guidelines. According to the concentration limits of the required chemical parameters (such as metals, trace elements and organic micropollutants), N1 (high quality, low pollutant concentrations) and N2 (medium quality, mid pollutant concentrations) levels are legally defined. Only in the cases where any of the compounds exceeds the N2 threshold, a biological characterization of sediments is needed to define their ecotoxicity [START_REF] Ospar | Oslo and Paris Conventions[END_REF][START_REF] Harrington | Economic modelling of the management of dredged marine sediments[END_REF][START_REF] Abidi | Characterization of dredged sediments of Bouhanifia dam: potential use as a raw material[END_REF][START_REF] Tessier | Study of the spatial and historical distribution of sediment inorganic contamination in the Toulon bay (France)[END_REF].
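The N1/N2 screening described above is, operationally, a per-parameter threshold comparison. The sketch below illustrates that decision logic; the threshold values are placeholders for illustration only, and the legally binding limits must be taken from the applicable French orders.

```python
# Hedged sketch of the N1/N2 screening logic for dredged sediments.
# Threshold values are PLACEHOLDERS; the binding limits are those of the
# applicable French orders, per regulated parameter (mg/kg dry weight).
N1 = {"Cd": 1.2, "Pb": 100.0, "Hg": 0.4}   # hypothetical "low" thresholds
N2 = {"Cd": 2.4, "Pb": 200.0, "Hg": 0.8}   # hypothetical "medium" thresholds

def classify(sample):
    """Return the management category implied by measured concentrations."""
    if any(sample[p] > N2[p] for p in sample):
        return "above N2: ecotoxicological characterization required"
    if any(sample[p] > N1[p] for p in sample):
        return "between N1 and N2: further assessment before sea disposal"
    return "below N1: low pollutant concentrations"

print(classify({"Cd": 0.5, "Pb": 80.0, "Hg": 0.2}))
print(classify({"Cd": 3.0, "Pb": 150.0, "Hg": 0.3}))
```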
Management options
(Based on concentration limits of some physical and chemical parameters such as particle size, metals, organic micropollutants)
Land management = Waste (ICPE thresholds)
Dredging operation and geotube dewatering, @ Carnon 2023 90% 10%
5. Bottlenecks preventing bulk re-use of dredged sediments

References

2. The French regulatory framework for the management of dredged sediments is complex
Dredged sediments (France) - regulatory characterization:
▪ Environmental Code, annex I of article L541-8
▪ Regulation 1357/2014/EU (waste hazard properties)
▪ Order of February 15, 2016
▪ Order of June 30, 2020

▪ Flexible licensing systems to process dredged sediments
▪ Safety policy
▪ Insurance scheme
▪ Startup facilities
▪ Financing facilities to set up a plant
▪ Emerging circular economy concept
▪ Approach to introduce codes and standards
▪ Policy to design life cycle assessment (LCA) to reduce sediments
▪ Policy for compulsory use of sediments as secondary products wherever applicable
▪ Policy for restricting the use of natural resources

Physical, chemical and biological characterization - other requirements:
▪ Determination of nitrogen and phosphorus for sediment discharged in sensitive areas
▪ Faecal contamination, to avoid impacts on shellfish, mariculture, or bathing areas
▪ Biological characterization of the affected site in case of sediment immersion, if at least one element exceeds the N2 threshold
▪ The particle size has to be considered in case of reuse for nourishment
▪ Leaching test to define land storage options (ICPE thresholds) |
01522265 | en | ["math.math-st"] | 2024/03/04 16:41:24 | 2015 | https://hal.science/hal-01522265/file/soubeyrand_haon-lasportes_2015.pdf
email: [email protected]
Emilie Haon-Lasportes
Weak convergence of posteriors conditional on maximum pseudo-likelihood estimates and implications in ABC
Keywords: Approximate Bayesian computation, Bernstein -von Mises theorem, Weak convergence. 2010 MSC: 62F12, 62Fxx, 65C60
The weak convergence of posterior distributions conditional on maximum pseudo-likelihood estimates (MPLE) is studied and exploited to justify the use of MPLE as summary statistics in approximate Bayesian computation (ABC). Our study could be generalized by replacing the pseudo-likelihood by other estimating functions (e.g. quasi-likelihoods and contrasts).
Introduction
Approximate Bayesian computation (ABC) has been developed to make Bayesian inference with models that can be used to generate data sets but whose probability distribution of state variables is intractable [START_REF] Marin | Approximate Bayesian computational methods[END_REF]. The intractability of this distribution makes impossible the application of the exact Bayesian approach, even by using numerical algorithms.
ABC provides a sample from the parameter space that is approximately distributed under a posterior distribution of parameters conditional on summary statistics. In general, this posterior does not coincide with the posterior distribution of parameters conditional on the full raw data. Here, we are interested in the specific case where (some of) the summary statistics are point estimates of parameters (PEP), as in Drovandi et al. (2011), Fearnhead and [START_REF] Fearnhead | Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation[END_REF], [START_REF] Gleim | Approximate Bayesian computation with indirect summary statistics[END_REF] and [START_REF] Mengersen | Bayesian computation via empirical likelihood[END_REF].
In the classical Bayesian framework, posteriors conditional on PEP can be viewed as specific cases of posteriors conditional on partial information [START_REF] Doksum | Consistent and robust Bayes procedures for location based on partial information[END_REF][START_REF] Soubeyrand | Inference with a contrast-based posterior distribution and application in spatial statistics[END_REF]. Here, we provide new results of weak convergence when PEP are either maximum likelihood estimates (MLE) or pseudo-maximum likelihood estimates (MPLE). The case where PEP are MPLE is of specific interest because, in ABC, it may be possible to compute MPLE via simplifications of the dependence structure in the model, and to use MPLE as summary statistics.
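To make the proposed use of MPLE concrete, the sketch below shows a minimal ABC rejection sampler in which a point estimate plays the role of the summary statistic. The model (an i.i.d. normal location model, where the maximum (pseudo-)likelihood estimate reduces to the sample mean), the uniform prior and the tolerance are toy choices that only illustrate the structure, not the models targeted by the paper.

```python
# Minimal ABC rejection sampler using a point estimate (standing in for an
# MPLE) as the summary statistic. Toy i.i.d. normal location model.
import numpy as np

rng = np.random.default_rng(1)
n = 200
observed = rng.normal(1.5, 1.0, n)          # "observed" data, true theta = 1.5

def mple(data):
    # In this toy model the maximum (pseudo-)likelihood estimate of the
    # location parameter is simply the sample mean.
    return float(data.mean())

s_obs = mple(observed)
tolerance = 0.05
accepted = []

for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)                # draw from the prior
    simulated = rng.normal(theta, 1.0, n)         # simulate a data set
    if abs(mple(simulated) - s_obs) < tolerance:  # compare summary statistics
        accepted.append(theta)

accepted = np.array(accepted)
print(f"accepted draws: {accepted.size}")
print(f"ABC posterior mean ~ {accepted.mean():.3f}, sd ~ {accepted.std():.3f}")
```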
The results of weak convergence that are provided in this note can be viewed as new extensions of the Bernstein -von Mises (BvM) theorem.
For parametric models from which independent observations are made, the BvM theorem (i) states conditions under which the posterior distribution is asymptotically normal and (ii) subsequently leads to the efficiency of Bayesian point estimators and to the convergence of Bayesian confidence sets to frequentist limit confidence sets [START_REF] Walker | On the asymptotic behaviour of posterior distributions[END_REF][START_REF] Freedman | On the Bernstein-Von Mises theorem with infinitedimensional parameters[END_REF].
Thus, the BvM theorem can be viewed as a frequentist justification of posterior distributions for the estimation of parameters. Numerous extensions of the BvM theorem have been proposed, for instance, when the model is semiparametric or nonparametric (Bickel and Kleijn, 2012;[START_REF] Bontemps | Bernstein-von Mises theorems for Gaussian regression with increasing number of regressors[END_REF]Castillo, 2012a,b;[START_REF] Castillo | Nonparametric Bernstein-von Mises theorems in Gaussian white noise[END_REF][START_REF] Rivoirard | Bernstein-von Mises theorem for linear functionals of the density[END_REF], when observations are dependent [START_REF] Borwanker | The Bernsteinvon Mises theorem for Markov processes[END_REF][START_REF] Tamaki | The Bernstein-von Mises theorem for stationary processes[END_REF], when the model is misspecified [START_REF] Kleijn | The bernstein-von-Mises theorem under misspecification[END_REF], and when the model is nonregular (e.g. when the true value of the parameter is on the boundary of the parameter space; [START_REF] Bochkina | The bernstein-von Mises theorem and nonregular models[END_REF].
Here, we extend the BvM theorem (i) when raw observations are replaced by the MLE (Lemma 1) or an MPLE (Lemma 2), and (ii) when the posterior conditional on an MPLE is approximated via ABC (Theorem 1). Using a posterior distribution (approximate or not) conditional on an MPLE, that was built by ignoring some dependences in the model, can be viewed as using a misspecified model like in [START_REF] Kleijn | The bernstein-von-Mises theorem under misspecification[END_REF].
The BvM extensions obtained in the classical Bayesian framework (Point (i) in the paragraph above) are viewed as stepping stones that lead to the BvM extension obtained in the ABC framework (Point (ii)). Advancing theory in ABC has generally no direct practical implications because assumptions that may be required to prove theorems cannot be checked for a real-life implicit stochastic model whose distribution theory is intractable.
However, showing an analytic result for a large class of theoretically tractable models may lead one to conjecture that the result also holds for some implicit stochastic models. Specifically, the work presented here allows us to conjecture that (i) an ABC-posterior distribution conditional on an MPLE is asymptotically normal and centered around the MPLE, and (ii) the resulting point estimates and confidence sets converge to their frequentist analogues.
Main notations
Observed data $\mathcal{D} \in \mathbb{D}$ are assumed to be generated under the stochastic model $M_\theta$ parametrized by $\theta \in \Theta$ with prior density $\pi$. The data space $\mathbb{D}$ and the parameter space $\Theta \subset \mathbb{R}^q$ ($q \in \mathbb{N}^*$) are both included in multidimensional sets of real vectors. The probability distribution functions (p.d.f.) of the model and the prior are defined with respect to the Lebesgue measure. Let $p(\mathcal{D} \mid \theta)$ denote the likelihood of the model and $p(\theta \mid \mathcal{D}) = p(\mathcal{D} \mid \theta)\pi(\theta)/p(\mathcal{D})$ the full sample posterior of the parameter vector $\theta$. The vector $\hat{\theta}_{ML} \in \Theta$ is the maximum likelihood estimate (MLE) of $\theta$: $\hat{\theta}_{ML} = \operatorname{argmax}_{\theta\in\Theta} p(\mathcal{D} \mid \theta)$. The posterior of parameters conditional on the MLE is $p(\theta \mid \hat{\theta}_{ML}) = p(\hat{\theta}_{ML} \mid \theta)\pi(\theta)/p(\hat{\theta}_{ML})$, where $p(\hat{\theta}_{ML} \mid \theta)$ is the p.d.f. of the MLE given $\theta$. Besides, we are interested in models whose likelihoods are not tractable because of the dependence structure in the data, but for which we can build tractable pseudo-likelihoods, say $\tilde{p}(\mathcal{D} \mid \theta)$. A pseudo-likelihood is generally built by ignoring some of the dependencies in the data [START_REF] Gaetan | Modélisation et statistique spatiale[END_REF][START_REF] Gourieroux | Pseudo maximum likelihood methods: Theory[END_REF]. The vector $\hat{\theta}_{MPL} \in \Theta$ is a maximum pseudo-likelihood estimate (MPLE) of $\theta$:
$$\hat{\theta}_{MPL} = \underset{\theta \in \Theta}{\operatorname{argmax}}\ \tilde{p}(\mathcal{D} \mid \theta).$$
The posterior of parameters conditional on the MPLE is $p(\theta \mid \hat{\theta}_{MPL}) = p(\hat{\theta}_{MPL} \mid \theta)\pi(\theta)/p(\hat{\theta}_{MPL})$, where $p(\hat{\theta}_{MPL} \mid \theta)$ is the p.d.f. of the MPLE given $\theta$.
Posterior conditional on the MLE
The full sample posterior p(θ | D) and the posterior conditional on the MLE p(θ | θML ) exactly coincide in specific cases (e.g. when the MLE are sufficient statistics), but do not coincide in general. Our aim, in this section, is to provide an asymptotically equivalent distribution for p(θ | θML ).
Bernstein -von Mises (BvM) theorems provide, for various statistical models, the asymptotic behavior of posteriors distributions. For example, following [START_REF] Walker | On the asymptotic behaviour of posterior distributions[END_REF] and Lindley (1965, p. 130), we consider a set D = (D 1 , . . . , D n ) of n i.i.d. variables drawn from a parametric distribution with density f (• | θ) with respect to a σ-finite measure on the real line, where θ is in Θ ⊂ R q . Under this setting and additional regularity conditions, the BvM theorem establishes the asymptotic normality of the full sample posterior (Walker, 1969, Theorem 2 and conclusion): the full sample posterior density of θ is, for large n, equivalent to the normal density with mean vector equal to the MLE θML and covariance matrix equal to Ω n ( θML ) -1 :
$$p(\theta \mid \mathcal{D}) \underset{n\to\infty}{\sim} \phi_{\hat{\theta}_{ML},\, \Omega_n(\hat{\theta}_{ML})^{-1}}(\theta),$$
where $\phi_{\mu,\Sigma}$ denotes the density of the normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$, and $\Omega_n(\alpha)$ is the $q \times q$ matrix with element $(i,j)$ equal to $-\partial^2 \log p(\mathcal{D} \mid \theta)/\partial\theta_i\,\partial\theta_j \big|_{\theta=\alpha}$.
To provide an asymptotically equivalent distribution for p(θ | θML ) as in BvM theorems, we assume in Lemma 1 (see below) that the MLE is asymptotically normal and consistent. For example, consider the same statistical model than above and assume that assumptions made in Lehmann and Casella (1998, Theorem 5.1 of the MLE asymptotic normality, p. 463) are satisfied. In particular, assume that data were generated with parameter vector θ. Then, the density of θML is, for large n and given θ, equivalent to the normal density with mean vector equal to the true parameter vector θ and covariance matrix equal to n -1 I(θ) -1 :
$$p(\hat{\theta}_{ML} \mid \theta) \underset{n\to\infty}{\sim} \phi_{\theta,\, n^{-1} I(\theta)^{-1}}(\hat{\theta}_{ML}), \tag{1}$$
where $I(\theta)$ denotes the $q \times q$ Fisher information matrix.
Lemma 1 (Asymptotic normality of the posterior conditional on the MLE).
Consider the modeling setting described in Section 2 and suppose that the MLE satisfies Equation (1) with non-singular matrix $I(\theta)$. Assume in addition that the prior $\pi$ is a positive and Lipschitz function over $\Theta$, that $\theta \mapsto |I(\theta)|$ (determinant of $I(\theta)$) and $\theta \mapsto x^{\top} I(\theta) x$ (for all $x \in \mathbb{R}^q$, $x^{\top}$ being the transpose of $x$) are Lipschitz functions over $\Theta$, and that the constant which arises in the Lipschitz condition for $\theta \mapsto x^{\top} I(\theta) x$, and which is a function of $x$, is also a Lipschitz function over $\mathbb{R}^q$. Then, when $n \to \infty$, the posterior density $p(\theta \mid \hat{\theta}_{ML})$ conditional on the MLE is asymptotically equivalent to the density of the normal distribution with mean vector $\hat{\theta}_{ML}$ and covariance matrix $n^{-1} I(\hat{\theta}_{ML})^{-1}$ over a subset $B_n$ of $\Theta$ whose measure with respect to this normal density is asymptotically one in probability:
$$p(\theta \mid \hat{\theta}_{ML}) \underset{n\to\infty}{\sim} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta), \quad \forall \theta \in B_n,$$
$$\lim_{n\to\infty} \int_{B_n} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)\, d\theta \overset{P}{=} 1.$$
Thus, over the subset B n which asymptotically contains all the mass of the normal density φ θML ,n -1 I( θML ) -1 (•), the posterior conditional on the MLE is asymptotically equivalent to this normal distribution.
From a frequentist point of view, the BvM theorem (which concerns the full sample posterior p(θ | D)) is a justification of the Bayesian approach for parameter estimation since the Bayesian confidence sets asymptotically coincide with the frequentist limit confidence sets [START_REF] Freedman | On the Bernstein-Von Mises theorem with infinitedimensional parameters[END_REF]. Lemma 1 shows a similar result for the posterior conditional on the MLE p(θ | θML ). Thus, Lemma 1 can also be viewed as a justification of the use of the posterior conditional on asymptotically normal MLE for parameter estimation. Note that results similar to the one provided by Lemma 1 have already been obtained for the estimation of an univariate location parameter;
see [START_REF] Doksum | Consistent and robust Bayes procedures for location based on partial information[END_REF] and references therein.
Regarding assumptions in Lemma 1, the asymptotic normality of the MLE (Equation ( 1)) requires classical but strong assumptions (even for the simple i.i.d. case). However, the asymptotic normality of the MLE has been obtained for numerous modeling and sampling settings, even in non-i.i.d.
cases. Lemma 1 is also based on a series of Lipschitz assumptions concerning the prior π and the Fisher information matrix I(θ). These assumptions are satisfied for classical distributions (e.g. when π is uniform on a bounded domain and when (D 1 , . . . , D n ) are independent normal variables with mean µ and variance 1).
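As a concrete illustration of this simple case, the following minimal Python sketch compares the posterior conditional on the MLE with its normal approximation for i.i.d. unit-variance Gaussian observations and a uniform prior on a bounded interval; the sample size, seed and grid bounds are illustrative choices, not values taken from this note.

```python
import numpy as np

# Illustrative check of Lemma 1: i.i.d. N(theta, 1) data, uniform prior on (-3, 3).
# Here the MLE (sample mean) is sufficient, so p(theta | MLE) is proportional
# to pi(theta) * phi(MLE; theta, 1/n), which we compare to N(MLE, 1/n).
rng = np.random.default_rng(0)
n, theta_true = 200, 0.7
data = rng.normal(theta_true, 1.0, size=n)
mle = data.mean()

theta_grid = np.linspace(-3, 3, 2001)
lik_mle = np.exp(-0.5 * n * (mle - theta_grid) ** 2)   # density of MLE given theta, up to a constant
post = lik_mle * np.ones_like(theta_grid)              # uniform prior
post /= np.trapz(post, theta_grid)                     # numerical normalization

approx = np.exp(-0.5 * n * (theta_grid - mle) ** 2) / np.sqrt(2 * np.pi / n)
print("max abs difference on the grid:", np.max(np.abs(post - approx)))
```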
Proof. Under the assumptions of the theorem of the MLE asymptotic normality (i.e. under Equation (1)), $p(\hat{\theta}_{ML} \mid \theta) = \eta_n(\theta) + \varepsilon_n(\theta)$, where $\eta_n(\theta) = \phi_{\theta,\, n^{-1} I(\theta)^{-1}}(\hat{\theta}_{ML})$ and $\varepsilon_n(\theta) = o(\eta_n(\theta))$ as $n \to \infty$. Therefore,
$$p(\theta \mid \hat{\theta}_{ML}) \underset{n\to\infty}{=} \frac{\{\eta_n(\theta) + \varepsilon_n(\theta)\}\pi(\theta)}{\int_\Theta \eta_n(\alpha)\pi(\alpha)\, d\alpha + \int_\Theta \varepsilon_n(\alpha)\pi(\alpha)\, d\alpha} = \frac{\eta_n(\theta)\pi(\theta)(1 + o(1))}{\int_\Theta \eta_n \pi + \int_\Theta \varepsilon_n \pi}.$$
The densities $\pi$ and $\eta_n$ being positive over $\Theta$ and the density $\eta_n$ converging to the Dirac distribution at the true parameter vector $\theta_0$ when $n \to \infty$, $\int_\Theta \eta_n \pi$ is positive and its limit, namely $\pi(\theta_0)$, is also positive. Besides, $\theta \mapsto p(\hat{\theta}_{ML} \mid \theta)$ being asymptotically equivalent to $\eta_n$, it also converges to the Dirac distribution at $\theta_0$ when $n \to \infty$, and $\int_\Theta p(\hat{\theta}_{ML} \mid \theta)\pi(\theta)\, d\theta \to \pi(\theta_0) > 0$. Therefore, $\int_\Theta \varepsilon_n \pi = \int_\Theta p(\hat{\theta}_{ML} \mid \theta)\pi(\theta)\, d\theta - \int_\Theta \eta_n \pi$ converges to 0, $\int_\Theta \varepsilon_n \pi = o(\int_\Theta \eta_n \pi)$ and
$$p(\theta \mid \hat{\theta}_{ML}) \underset{n\to\infty}{=} \frac{\pi(\theta)}{\pi(\theta_0)}\, \eta_n(\theta)(1 + o(1)). \tag{2}$$
Let $B_n$ be the ball of center $\theta_0$ and radius $r_n$ such that $r_n \to 0$ and $r_n\sqrt{n} \to \infty$ ($r_n$ converges to zero at a lower rate than $1/\sqrt{n}$). Since $\pi$ is Lipschitz (i.e. $\exists A_1 < \infty$, $\forall \theta_1, \theta_2 \in \Theta$, $|\pi(\theta_1) - \pi(\theta_2)| \le A_1\|\theta_1 - \theta_2\|$),
$$\pi(\theta) \underset{n\to\infty}{=} \pi(\theta_0)(1 + o(1)), \quad \forall \theta \in B_n. \tag{3}$$
Let us now derive an equivalent function for $\eta_n$, which can be written:
$$\eta_n(\theta) = \phi_{\theta,\, n^{-1} I(\theta)^{-1}}(\hat{\theta}_{ML}) = \frac{n^{d/2}\,|I(\theta)|^{1/2}}{(2\pi)^{d/2}} \exp\left(-\frac{n}{2}(\hat{\theta}_{ML} - \theta)^{\top} I(\theta)(\hat{\theta}_{ML} - \theta)\right).$$
Using the Lipschitz condition on $\theta \mapsto |I(\theta)|$ (i.e. $\exists A_2 < \infty$, $\forall \theta_1, \theta_2 \in \Theta$, $\big|\,|I(\theta_1)| - |I(\theta_2)|\,\big| \le A_2\|\theta_1 - \theta_2\|$), one can state that for all $\theta \in B_n$, $|I(\theta)|^{1/2} \underset{n\to\infty}{=} |I(\hat{\theta}_{ML})|^{1/2}(1 + o(1))$.
Besides, the two Lipschitz conditions concerning $\theta \mapsto x^{\top} I(\theta) x$ can be written as follows:
$$\forall x \in \mathbb{R}^q,\ \exists A_3(x) < \infty,\ \forall \theta_1, \theta_2 \in \Theta,\quad |x^{\top} I(\theta_1) x - x^{\top} I(\theta_2) x| \le A_3(x)\,\|\theta_1 - \theta_2\|,$$
$$\exists A_4 < \infty,\ \forall x_1, x_2 \in \mathbb{R}^q,\quad |A_3(x_1) - A_3(x_2)| \le A_4\,\|x_1 - x_2\|.$$
These conditions imply that over $B_n$, $|(\hat{\theta}_{ML} - \theta)^{\top} I(\theta)(\hat{\theta}_{ML} - \theta) - (\hat{\theta}_{ML} - \theta)^{\top} I(\hat{\theta}_{ML})(\hat{\theta}_{ML} - \theta)|$ is bounded from above by $2 r_n (A_3(0) + 2 r_n A_4)$. Therefore, $(\hat{\theta}_{ML} - \theta)^{\top} I(\theta)(\hat{\theta}_{ML} - \theta) \underset{n\to\infty}{=} (\hat{\theta}_{ML} - \theta)^{\top} I(\hat{\theta}_{ML})(\hat{\theta}_{ML} - \theta)(1 + o(1))$ and
$$\eta_n(\theta) \underset{n\to\infty}{=} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)(1 + o(1)), \quad \forall \theta \in B_n. \tag{4}$$
Using Equations (2-4), we obtain the first equation of Lemma 1:
$$p(\theta \mid \hat{\theta}_{ML}) \underset{n\to\infty}{=} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)(1 + o(1)), \quad \forall \theta \in B_n.$$
The second equation of Lemma 1 is shown as follows. Let $p \in (0, 1)$, and consider $R_{n,p}$, the region consisting of the vectors $\theta \in \Theta$ satisfying $n(\hat{\theta}_{ML} - \theta)^{\top} I(\hat{\theta}_{ML})(\hat{\theta}_{ML} - \theta) \le \chi^2_d(p)$, where $\chi^2_d(p)$ is the quantile of order $p$ of the chi-square distribution of order $d$ (i.e. the dimension of $\Theta$). Using the link between the normal and chi-square distributions, $R_{n,p}$ satisfies $\int_{R_{n,p}} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)\, d\theta = p$. Moreover, from a property of the Rayleigh quotient, $(\hat{\theta}_{ML} - \theta)^{\top} I(\hat{\theta}_{ML})(\hat{\theta}_{ML} - \theta) \ge \lambda_{\min}\{I(\hat{\theta}_{ML})\}\,\|\hat{\theta}_{ML} - \theta\|^2$, where $\lambda_{\min}\{I(\hat{\theta}_{ML})\}$ is the minimum eigenvalue of $I(\hat{\theta}_{ML})$. Therefore,
$$\forall \theta \in R_{n,p}, \quad \|\hat{\theta}_{ML} - \theta\| \le \sqrt{\lambda_{\min}\{I(\hat{\theta}_{ML})\}^{-1}\, \chi^2_d(p)/n}. \tag{5}$$
From the MLE asymptotic normality,
$$\lim_{n\to\infty} P\left((\hat{\theta}_{ML} - \theta_0)^{\top} I(\theta_0)(\hat{\theta}_{ML} - \theta_0) \le \chi^2_d(p)/n\right) = p.$$
Moreover, $(\hat{\theta}_{ML} - \theta_0)^{\top} I(\theta_0)(\hat{\theta}_{ML} - \theta_0) \ge \lambda_{\min}\{I(\theta_0)\}\,\|\hat{\theta}_{ML} - \theta_0\|^2$, where $\lambda_{\min}\{I(\theta_0)\}$ is the minimum eigenvalue of $I(\theta_0)$. Therefore,
$$\lim_{n\to\infty} P\left(\|\hat{\theta}_{ML} - \theta_0\| \le \sqrt{\lambda_{\min}\{I(\theta_0)\}^{-1}\, \chi^2_d(p)/n}\right) \ge p. \tag{6}$$
Since $\|\theta - \theta_0\| \le \|\hat{\theta}_{ML} - \theta\| + \|\hat{\theta}_{ML} - \theta_0\|$, one obtains, using Eqs. (5) and (6),
$$\lim_{n\to\infty} P\left(\forall \theta \in R_{n,p},\ \|\theta - \theta_0\| \le \left(\sqrt{\lambda_{\min}\{I(\hat{\theta}_{ML})\}^{-1}} + \sqrt{\lambda_{\min}\{I(\theta_0)\}^{-1}}\right)\sqrt{\chi^2_d(p)/n}\right) \ge p.$$
Since r n goes to zero more slowly than 1/ √ n, the previous inequality yields:
$$\lim_{n\to\infty} P\left(\forall \theta \in R_{n,p},\ \|\theta - \theta_0\| \le r_n\right) \ge p,$$
$$\lim_{n\to\infty} P\left(R_{n,p} \subset B_n\right) \ge p,$$
$$\lim_{n\to\infty} P\left(\int_{B_n} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)\, d\theta \ge \int_{R_{n,p}} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)\, d\theta = p\right) \ge p,$$
$$\lim_{n\to\infty} P\left(\int_{B_n} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)\, d\theta \ge p\right) \ge p.$$
The last inequality, obtained for any $p \in (0, 1)$, implies that $\int_{B_n} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)\, d\theta$ converges to one in probability when $n \to \infty$, i.e.
$$\lim_{n\to\infty} \int_{B_n} \phi_{\hat{\theta}_{ML},\, n^{-1} I(\hat{\theta}_{ML})^{-1}}(\theta)\, d\theta \overset{P}{=} 1.$$
Posterior conditional on an MPLE
Here, we propose a lemma analogous to Lemma 1 but concerning an MPLE (maximum pseudo-likelihood estimate) instead of the MLE.
Lemma 2 (Asymptotic normality of the posterior conditional on an MPLE).
Consider the modeling setting provided in Section 2. Assume that, given the vector $\theta$ under which the data $\mathcal{D}$ were generated, the p.d.f. of the MPLE $\hat{\theta}_{MPL}$ is equivalent to the normal density with mean vector $\theta$ and covariance matrix $g(n)^{-1} J(\theta)^{-1}$:
$$p(\hat{\theta}_{MPL} \mid \theta) \underset{n\to\infty}{\sim} \phi_{\theta,\, g(n)^{-1} J(\theta)^{-1}}(\hat{\theta}_{MPL}),$$
where $g$ is a positive increasing function such that $g(n) \to \infty$ and $J(\theta)$ is a positive-definite matrix. Assume in addition that the prior $\pi$ is a positive and Lipschitz function over $\Theta$, that $\theta \mapsto |J(\theta)|$ and $\theta \mapsto x^{\top} J(\theta) x$ (for all $x \in \mathbb{R}^q$, $x^{\top}$ being the transpose of $x$) are Lipschitz functions over $\Theta$, and that the constant which arises in the Lipschitz condition for $\theta \mapsto x^{\top} J(\theta) x$, and which is a function of $x$, is also a Lipschitz function over $\mathbb{R}^q$. Then, when $n \to \infty$, the posterior density $p(\theta \mid \hat{\theta}_{MPL})$ conditional on the MPLE is asymptotically equivalent to the density of the normal distribution with mean vector $\hat{\theta}_{MPL}$ and covariance matrix $g(n)^{-1} J(\hat{\theta}_{MPL})^{-1}$ over a subset $B_n$ of $\Theta$ whose measure with respect to this normal density is asymptotically one:
$$p(\theta \mid \hat{\theta}_{MPL}) \underset{n\to\infty}{\sim} \phi_{\hat{\theta}_{MPL},\, g(n)^{-1} J(\hat{\theta}_{MPL})^{-1}}(\theta), \quad \forall \theta \in B_n,$$
$$\lim_{n\to\infty} \int_{B_n} \phi_{\hat{\theta}_{MPL},\, g(n)^{-1} J(\hat{\theta}_{MPL})^{-1}}(\theta)\, d\theta = 1.$$
Lemma 2 justifies the use of the posterior conditional on the MPLE for parameter estimation because the Bayesian confidence sets that are provided by this posterior asymptotically coincide with the frequentist limit confidence sets obtained by maximizing the pseudo-likelihood.
The asymptotic normality of the MPLE required in Lemma 2 has been obtained for various models, especially random Markov fields and spatial point processes; see Gaetan and Guyon (2008, chap. 5), [START_REF] Gourieroux | Pseudo maximum likelihood methods: Theory[END_REF], Møller and Waagepetersen (2004, chap. 9) and references therein.
It has to be noted that information is lost when MPLE are used rather than MLE and, consequently, that estimation accuracy is decreased (e.g. this has been shown for simple Markovian models using asymptotic estimation variances (Gaetan and Guyon, 2008, chap. 5)).
Proof. Follow the proof of Lemma 1 by assuming that the radius $r_n$ of the ball $B_n$ satisfies $r_n\sqrt{g(n)} \to \infty$ instead of $r_n\sqrt{n} \to \infty$.
Approximate posterior conditional on an MPLE
Here, we derive implications of Lemma 2 in the framework of approximate Bayesian computation (ABC) when (some of) the summary statistics are MPLE (see Theorem 1 and Corollary 1). We consider the (simple) ABC-rejection algorithm based on independent simulations, on a set of summary statistics and on a tolerance threshold [START_REF] Pritchard | Population growth of human Y chromosomes: a study of Y chromosome microsatellites[END_REF]:
ABC-rejection. Perform the next 3 steps for i in {1, . . . , I}, independently:
• Generate θ i from π and simulate D i from M θ i ;
• Compute summary statistics $S_i = s(\mathcal{D}_i)$, where $s$ is a function from $\mathbb{D}$ to the space $\mathbb{S}$ of statistics;
• Accept $\theta_i$ if $d(S_i, S) \le \epsilon$,
where $d$ is a distance over $\mathbb{S}$ and $\epsilon$ is a tolerance threshold for the distance between the observed statistics $S = s(\mathcal{D})$ and the simulated ones $S_i$. The set of accepted parameters, say $\Theta_{\epsilon,I} = \{\theta_i : d(S_i, S) \le \epsilon,\ i = 1, \ldots, I\}$, forms a sample from the following posterior:
$$p_\epsilon(\theta \mid S) = \frac{\int_{B_d(S,\epsilon)} h(s \mid \theta)\, ds\ \pi(\theta)}{\int_\Theta \int_{B_d(S,\epsilon)} h(s \mid \alpha)\, ds\ \pi(\alpha)\, d\alpha},$$
where the ball $B_d(S, \epsilon)$ in the $d$-dimensional space $\mathbb{S}$ is the set of points from which the distance to $S$ is less than $\epsilon$, and $h(S \mid \theta)$ is the conditional probability distribution function of $S$ given $\theta$. When $\epsilon$ tends to zero and $d$ is appropriate, $p_\epsilon(\theta \mid S)$ is a good approximation of $p(\theta \mid S)$ under regularity assumptions (see Blum (2010) and Soubeyrand et al. (2013, Appendix A)): $p_\epsilon(\theta \mid S)$ and $p(\theta \mid S)$ are asymptotically equivalent. However, if $S$ is not sufficient, then $p(\theta \mid S) \neq p(\theta \mid \mathcal{D})$ and information is lost by using $S$ instead of $\mathcal{D}$.
Theorem 1 (Asymptotic normality of the ABC-posterior conditional on an MPLE). Consider the ABC-rejection algorithm that samples in the posterior $p_\epsilon(\theta \mid \hat{\theta}_{MPL})$ of $\theta$ conditional on the vector of summary statistics $S = \hat{\theta}_{MPL}$. Assume that when $\epsilon \to 0$, $p_\epsilon(\theta \mid \hat{\theta}_{MPL})$ converges pointwise to $p(\theta \mid \hat{\theta}_{MPL})$. Then, under the assumptions of Lemma 2, when $n \to \infty$ and $\epsilon \to 0$, the posterior $p_\epsilon(\theta \mid \hat{\theta}_{MPL})$ is asymptotically equivalent to the density of the normal distribution with mean vector $\hat{\theta}_{MPL}$ and covariance matrix $g(n)^{-1} J(\hat{\theta}_{MPL})^{-1}$ over a subset $B_n$ of $\Theta$ whose measure with respect to this normal density goes to one in probability and that does not depend on $\epsilon$:
$$p_\epsilon(\theta \mid \hat{\theta}_{MPL}) \underset{n\to\infty,\ \epsilon\to 0}{\sim} \phi_{\hat{\theta}_{MPL},\, g(n)^{-1} J(\hat{\theta}_{MPL})^{-1}}(\theta), \quad \forall \theta \in B_n,$$
$$\lim_{n\to\infty} \int_{B_n} \phi_{\hat{\theta}_{MPL},\, g(n)^{-1} J(\hat{\theta}_{MPL})^{-1}}(\theta)\, d\theta \overset{P}{=} 1.$$
As explained in the introduction, this result leads us to conjecture that, for some stochastic implicit models, (i) the ABC-posterior distribution conditional on an MPLE is asymptotically normal and centered around the MPLE, and (ii) resulting point estimates and confidence sets converge to their frequentist analogues. Corollary 1 in Appendix A provides an analogous result when the MPLE is used in conjunction with supplementary statistics.
Proof. This result is directly obtained from Lemma 2 by simply noting that the subset $B_n$ does not depend on $\epsilon$. Indeed, following the proof of Lemma 1, pointwise in $\theta$ we have:
$$p_\epsilon(\theta \mid \hat{\theta}_{MPL}) \underset{\epsilon\to 0}{=} p(\theta \mid \hat{\theta}_{MPL})(1 + o_\epsilon(1)) \underset{n\to\infty,\ \epsilon\to 0}{=} \frac{\pi(\theta)}{\pi(\theta_0)}\, \eta_n(\theta)(1 + o_n(1))(1 + o_\epsilon(1)).$$
Then, $B_n$ is used to provide an equivalent of $\pi(\theta)\pi(\theta_0)^{-1}\eta_n(\theta)$. This term does not depend on $\epsilon$ and, consequently, $B_n$ does not have to depend on $\epsilon$.
Example
The simplified example presented here illustrates the application of ABC conditional on an MPLE and a supplementary statistic. The model M θ under consideration is the following bivariate normal distribution:
$$\mathcal{N}\left(\begin{pmatrix} \mu \\ \mu \end{pmatrix},\ \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right),$$
parameterized by the mean $\mu$ and the correlation $\rho$; we set $\theta = (\mu, \rho)$. Observed data $\mathcal{D} = \{(D^{(1)}_k, D^{(2)}_k) : k = 1, \ldots, n\}$ are $n = 100$ vectors drawn under this normal distribution with $\mu = 0$ and $\rho = 0.5$. We use a uniform prior distribution $\pi$ over the rectangular domain $(-3, 3) \times (-1, 1)$. The maximum likelihood estimates of $\mu$ and $\rho$ are the empirical mean of $(D^{(1)}_k + D^{(2)}_k)/2$ and the empirical correlation of $(D^{(1)}_k, D^{(2)}_k)$, $k = 1, \ldots, n$.
Here, we applied ABC with the two following statistics:
$$S = s(\mathcal{D}) = \begin{pmatrix} \hat{\mu}_{MPL} \\ S_0 \end{pmatrix} = \begin{pmatrix} \frac{1}{n}\sum_{k=1}^{n} D^{(1)}_k \\ \frac{1}{n}\sum_{k=1}^{n} \mathbb{1}\{\operatorname{sign}(D^{(1)}_k) = \operatorname{sign}(D^{(2)}_k)\} \end{pmatrix},$$
where $\hat{\mu}_{MPL}$ is an MPLE of $\mu$ that uses only partial information contained in the sample (i.e. only the first component of sampled vectors), and $S_0$ is a supplementary statistic that gives the mean number of vectors in the sample whose components $D^{(1)}_k$ and $D^{(2)}_k$ have the same sign ($\mathbb{1}\{\cdot\}$ is the indicator function).
To assess the convergence of ABC when $\epsilon$ tends to zero, we applied ABC with varying $\epsilon$, with $I = 10^5$ simulations, and with the distance $d(S_i, S) = (\hat{\mu}_{MPL,i} - \hat{\mu}_{MPL})^2 + (S_{0,i} - S_0)^2$, where $S_i = (\hat{\mu}_{MPL,i}, S_{0,i})$ is the vector of statistics computed for the simulation $i$. As usual in ABC-rejection, instead of fixing $\epsilon$, we fixed the sample size $\tau$ of the posterior sample (i.e. the number of accepted parameter vectors); note that $\epsilon$ decreases when $\tau$ decreases.
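A minimal Python sketch of this ABC-rejection scheme for the bivariate normal example is given below; the statistics, prior bounds, distance and acceptance-by-sample-size rule follow the description above, while the seed, the reduced number of simulations and the helper names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, I, tau = 100, 10_000, 200   # I reduced from 1e5 for a quick run

def simulate(mu, rho, n, rng):
    # Draw n vectors from the bivariate normal N((mu, mu), [[1, rho], [rho, 1]]).
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([mu, mu], cov, size=n)

def stats(d):
    # Summary statistics: MPLE of mu (mean of the first components) and the
    # proportion of vectors whose two components share the same sign.
    mu_mpl = d[:, 0].mean()
    s0 = np.mean(np.sign(d[:, 0]) == np.sign(d[:, 1]))
    return np.array([mu_mpl, s0])

# Observed data generated with the true parameters (mu, rho) = (0, 0.5).
S_obs = stats(simulate(0.0, 0.5, n, rng))

# ABC-rejection: sample from the uniform prior on (-3, 3) x (-1, 1), simulate,
# and keep the tau parameter vectors with the smallest distances.
thetas = np.column_stack([rng.uniform(-3, 3, I), rng.uniform(-1, 1, I)])
dists = np.array([np.sum((stats(simulate(mu, rho, n, rng)) - S_obs) ** 2)
                  for mu, rho in thetas])
accepted = thetas[np.argsort(dists)[:tau]]
print("posterior means (mu, rho):", accepted.mean(axis=0))
```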
The sample size τ was fixed at values ranging from 10 to 5000. For each value of τ , we computed the local posterior probability (LPP) around the true parameter vector θ = (0, 0.5) as the proportion of accepted parameter vectors in the small rectangle [-0.015, 0.015] × [0.495, 0.505] whose center is θ = (0, 0.5) and whose sides are 200 times smaller than the sides of the parameter space (-3, 3) × (-1, 1). We expect that this LPP increases with the efficiency of the inference procedure. The LPP was computed for 50000 datasets and Figure 1 shows its mean and standard deviation when τ varies.
The mean LPP around the true parameters increases when the sample size τ (and ) tends to zero; meanwhile, the dispersion of the LPP increases. This is the signature of the classical bias-variance trade-off.
To automatically select the sample size $\tau$, we applied the procedure proposed by [START_REF] Soubeyrand | Approximate Bayesian computation with functional statistics[END_REF] where the distance between summary statistics is also optimized. In this procedure, the distance is weighted: $d(S_i, S; w_1, w_2) = w_1(\hat{\mu}_{MPL,i} - \hat{\mu}_{MPL})^2 + w_2(S_{0,i} - S_0)^2$, and the triplet $(\tau, w_1, w_2)$ is optimized under constraints using an integrated mean square error criterion. For one of the 50000 datasets simulated above, Figure 2 shows ABC-posterior samples obtained when $d$ is not weighted and $\tau$ is fixed at 5000, 1000 and 200, and when $d$ is weighted and $(\tau, w_1, w_2)$ is optimized.
Discussion
We provided a frequentist justification for the use of posterior distributions conditional on MPLE, both in the classical Bayesian framework and in the ABC framework. The asymptotic results presented above were obtained for a large but limited class of models satisfying regularity assumptions. In real-life studies where ABC is applied, these assumptions cannot be checked and, consequently, our asymptotic results may not hold. Therefore, it is crucial (i) to combine MPLE and supplementary summary statistics to tend to a set of sufficient summary statistics [START_REF] Joyce | Approximately sufficient statistics and Bayesian computation[END_REF], and
(ii) to apply a method for selecting, weighting or transforming the summary statistics to avoid to take into account non-relevant statistics. In Section 6, we used the weighted distance between summary statistics proposed by [START_REF] Soubeyrand | Approximate Bayesian computation with functional statistics[END_REF] where the weights are optimized with respect to an integrated mean square error. Other approaches could be applied. For example, Barnes et al. (2012), [START_REF] Joyce | Approximately sufficient statistics and Bayesian computation[END_REF] and [START_REF] Nunes | On optimal selection of summary statistics for approximate Bayesian computation[END_REF] propose a dimension reduction (that can be viewed as a binary weighting), and [START_REF] Wegmann | Efficient approximate Bayesian computation coupled with Markov chain Monte Carlo without likelihood[END_REF] proposes a PLS transformation followed by a binary weighting of the PLS axes (see also [START_REF] Blum | A comparative review of dimension reduction methods in approximate Bayesian computation[END_REF][START_REF] Fearnhead | Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation[END_REF][START_REF] Jung | Choice of summary statistic weights in approximate Bayesian computation[END_REF].
The strong implication of our approach is that an analytic work has to be made: the dependence structure of the model has to be simplified to write a tractable pseudo-likelihood and, eventually, to find an analytic expression for the maximizer. This additional work is however expected to yield relevant summary statistics directly informing (a subset of) the parameters.
In this article, we did not precisely define what is a pseudo-likelihood.
This deliberate choice is justified in the sense that the MPLE could be replaced, in Theorem 1, by any other estimates with similar normal weak convergence. Thus, our approach could be generalized by replacing the pseudo-likelihood by a quasi-likelihood or a contrast. Another extension of Theorem 1 could be derived by conditioning ABC on the score function of the pseudo-likelihood instead of the MPLE as proposed by [START_REF] Gleim | Approximate Bayesian computation with indirect summary statistics[END_REF], [START_REF] Ruli | Approximate Bayesian computation with composite score functions[END_REF] and [START_REF] Mengersen | Bayesian computation via empirical likelihood[END_REF].
Under the first set of assumptions of Corollary 1, we can also write that $p_\epsilon(\theta \mid \hat{\theta}_{MPL}, S_0) \underset{\epsilon\to 0}{=} p(\theta \mid \hat{\theta}_{MPL})(1 + o_\epsilon(1))$, where $p(\theta \mid \hat{\theta}_{MPL})$ is the posterior density of $\theta$ conditional on the MPLE with the modified prior $p(\theta \mid T_0) = p(T_0 \mid \theta)\pi(\theta)/p(T_0)$. Moreover, the existence of $T_0$ is a strong assumption that reduces the applicability of the corollary (a trivial but certainly rare example where $T_0$ exists is obtained when one assumes that $(\hat{\theta}_{MPL}, S_0)$ is a normal random vector; indeed, in this case $T_0$ can be defined as $T_0 = S_0 - E(S_0 \mid \hat{\theta}_{MPL})$). Nevertheless, if $(\hat{\theta}_{MPL}, S_0)$ is asymptotically normal, we conjecture that a result similar to Corollary 1 can be obtained.
Proof. The first part of the corollary is straightforward:
Figure 1: Mean (solid line), pointwise 95%-confidence envelopes (dashed lines) and standard deviation (dotted line) of the local posterior probability around the true parameter vector θ = (0, 0.5) as a function of the sample size τ.
Figure 2: ABC-posterior samples (dots) obtained when d is not weighted and τ is fixed at 5000 (top left), 1000 (top right) and 200 (bottom left), and when d is weighted and (τ, w1, w2) is optimized (bottom right). Dashed lines intersect at the true value (0, 0.5) of the parameter vector θ = (µ, ρ). The grey contour line gives the smallest 95%-posterior area obtained with the classical Bayesian computation.
$$\begin{aligned}
p_\epsilon(\theta \mid \hat{\theta}_{MPL}, S_0) &\underset{\epsilon\to 0}{=} p(\theta \mid \hat{\theta}_{MPL}, S_0)(1 + o_\epsilon(1)) = p(\theta \mid \hat{\theta}_{MPL}, T_0)(1 + o_\epsilon(1)) = \frac{p(\hat{\theta}_{MPL}, T_0 \mid \theta)\,\pi(\theta)}{p(\hat{\theta}_{MPL}, T_0)}(1 + o_\epsilon(1)) \\
&= \frac{p(T_0 \mid \hat{\theta}_{MPL}, \theta)\, p(\hat{\theta}_{MPL} \mid \theta)\, \pi(\theta)}{p(T_0)\, p(\hat{\theta}_{MPL})}(1 + o_\epsilon(1)) = \frac{p(T_0 \mid \theta)\, p(\hat{\theta}_{MPL} \mid \theta)\, \pi(\theta)}{p(T_0)\, p(\hat{\theta}_{MPL})}(1 + o_\epsilon(1)) = \frac{p(T_0 \mid \theta)\, p(\theta \mid \hat{\theta}_{MPL})}{p(T_0)}(1 + o_\epsilon(1)).
\end{aligned}$$
The term $p(\theta \mid \hat{\theta}_{MPL})$ can be viewed as a prior that is based on information contained in $\hat{\theta}_{MPL}$ and that is independent from the data $T_0$ used for the final inference. The second part of the corollary corresponds to Theorem 1.
Barnes, C. P., Filippi, S., Stumpf, M. P. H., Thorne, T., 2012. Considerate approaches to constructing summary statistics for ABC model selection. Statistics and Computing 6, 1181-1197.
Bickel, P. J., Kleijn, B. J. K., 2012. The semiparametric Bernstein-von Mises theorem. The Annals of Statistics 40, 206-237.
Acknowledgements. We thank Denis Allard, Rachid Senoussi and the reviewers for their suggestions. This work was supported by the ANR grant EMILE.
Appendix A. Combining an MPLE and supplementary statistics
Corollary 1. Consider the posterior $p_\epsilon(\theta \mid \hat{\theta}_{MPL}, S_0)$ of $\theta$ conditional on the vector of summary statistics $S = (\hat{\theta}_{MPL}, S_0)$ in which the ABC-rejection algorithm samples. Assume that when $\epsilon \to 0$, $p_\epsilon(\theta \mid \hat{\theta}_{MPL}, S_0)$ converges pointwise to $p(\theta \mid \hat{\theta}_{MPL}, S_0)$. Assume that there exists a vector $T_0$ independent of $\hat{\theta}_{MPL}$ such that $(\hat{\theta}_{MPL}, T_0)$ brings the same information on $\theta$ as $S = (\hat{\theta}_{MPL}, S_0)$.
Then,
$$p_\epsilon(\theta \mid \hat{\theta}_{MPL}, S_0) \underset{\epsilon\to 0}{=} p(\theta \mid T_0)(1 + o_\epsilon(1)),$$
where $p(\theta \mid T_0)$ is the posterior density of $\theta$ conditional on the subset of statistics $T_0$ given the modified prior $p(\theta \mid \hat{\theta}_{MPL})$.
If in addition the assumptions of Lemma 2 hold, the modified prior $p(\theta \mid \hat{\theta}_{MPL})$ is asymptotically equivalent to the density of the normal distribution with mean vector $\hat{\theta}_{MPL}$ and covariance matrix equal to $g(n)^{-1} J(\hat{\theta}_{MPL})^{-1}$ over a subset $B_n$ of $\Theta$ whose measure with respect to this normal density goes to one in probability and that does not depend on $\epsilon$. |
04109554 | en | [
"phys.meca.solid"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04109554/file/Complete_finite_strain_isotropic_thermo_elasticity_revised.pdf | Paul Bouteiller
Complete finite-strain isotropic thermo-elasticity
Keywords:
This paper deals with finite strain isotropic thermo-elasticity without any specific Ansatz regarding the Helmholtz free energy. On the theoretical side, an Eulerian setting of isotropic thermo-elasticity is developed, based on the objective left Cauchy-Green tensor along with the Cauchy stress. The construction of the elastic model relies on a particular invariants choice of the strain measure. These invariants are built so that a succession of elementary experiments, in which the invariants evolve independently, ensures the complete identification of the Helmholtz free energy and thus of the thermo-elastic constitutive law. Expressions idealizing these experimental tests are proposed. A wide range of hyperelastic models are found to be a special case of the model proposed herein.
Introduction
Elasticity is probably one of the most extensively discussed topics in the analytical mechanics literature over the last few centuries. Since Robert Hooke's linear relationship linking the current Cauchy stress σ to the linearized strain ε, many authors have studied small-strain but also non-linear elasticity, the latter being well suited to the study of materials, such as polymers, which undergo large deformations. The fundamental ground of this phenomenon, which can precede more complex non-linear events, is now a very well assessed subject, both theoretically [START_REF] Biot | Thermoelasticity and irreversible thermodynamics[END_REF][START_REF] Coleman | The thermodynamics of elastic materials with heat conduction and viscosity[END_REF][START_REF] Ogden | Non-linear elastic deformations[END_REF] and experimentally.
However, specific Ansatz are still commonly used concerning the Helmholtz free energy from which the elastic constitutive law is derived. Classical small-strain elasticity, linking the second Piola-Kirchoff tensor π to the right Green-Lagrange strain tensor E relies on a quadratic expansion of the Helmholtz free energy with respect to the small strain E. As regards finite strain elasticity, where such series expansion are no longer valid, the construction is based on a choice of the Helmholtz free energy according to invariants of the deformation tensor. Two main categories are often distinguished, depending on the state variables Email address: [email protected] (Paul Bouteiller) Preprint submitted to Elsevier May 30, 2023 chosen. On the one hand, Neo-Hookean [START_REF] Kim | A comparison among neo-hookean model, mooneyrivlin model, and ogden model for chloroprene rubber[END_REF], Mooney-Rivlin [START_REF] Kumar | Hyperelastic mooney-rivlin model: determination and physical interpretation of material constants[END_REF], or Yeoh [START_REF] Yeoh | Some forms of the strain energy function for rubber[END_REF] models postulate a polynomial expansion of the free energy with respect to the fundamental invariants of a strain tensor. The coefficients involved in the polynomial expansion are fitted to reproduce experimental data. On the other hand, stretched based models, such as Ogden's which develop the Helmholtz free energy as a power series, retain the positive eigenvalues of the right Cauchy-Green tensor C. Ogden models have been found to be very efficient for large strain elasticity of rubber like material [START_REF] Cassels | Nonlinear elasticity: theory and applications[END_REF]. Many articles have offered reviews of these different models and their respective pros and cons [START_REF] Chagnon | Hyperelastic energy densities for soft biological tissues: a review[END_REF][START_REF] Dal | A comparative study on hyperelastic constitutive models on rubber: State of the art after 2006[END_REF][START_REF] Khaniki | A review on the nonlinear dynamics of hyperelastic structures[END_REF][START_REF] Melly | A review on material models for isotropic hyperelasticity[END_REF].
The first objective of this paper is to present a choice of a triplet of kinematically significant invariants, allowing for the rigorous identification of the Helmholtz free energy, through a succession of simple elementary experiments and without a posteriori adjustment. These invariants are chosen so that they can evolve independently.
In Section 1, we will recall the main lines of construction of the finite strain isotropic thermo-elastic behavior. In Section 2, we will present the invariants of the left Cauchy-Green deformation tensor used for the construction of the elastic model and the resulting constitutive law. Then, we will study, Section 3, the successive elementary experiments which allow the identification of the Helmholtz free energy. Physically realistic expressions for the experimental data required to build the model will then be proposed in Section 4.
Reminder of finite strain elasticity
This section briefly presents the fundamental assumption of finite strain isotropic thermo-elasticity within an Eulerian setting. The formulation of the general theory is given in the thermodynamic framework by introducing the Helmholtz free energy as a function of a finite strain measure, and by exploring both the first and the second law of thermodynamics.
Framework
Let D m 0 ∈ R 3 be a material domain corresponding to the reference configuration of a continuous medium.
Particles labeled by x 0 ∈ D m 0 are transported in the current configuration to x t ∈ D m t by a one-to-one mapping φ t . Let us note F the deformation gradient of this mapping.
Let us consider two observers (reference frames) R and R. We denote Q 0 the orthogonal transformation linking these two reference frames at the initial time and Q t the orthogonal tensor linking these same two observers in the current configuration. A second-rank tensor, Y for the observer R, equal to Ỹ for the observer R, is called invariant (or "objective Lagrangian") [START_REF] Korobeynikov | Objective tensor rates and applications in formulation of hyperelastic relations[END_REF] if:
$$\tilde{Y} = Q_0 \cdot Y \cdot Q_0^{\top}, \quad Q_0 \in SO(3),\ \forall t. \tag{1}$$
Similarly, we will call "objective" any second-rank tensor $X$ satisfying the following equality:
$$\tilde{X} = Q_t \cdot X \cdot Q_t^{\top}, \quad Q_t \in SO(3). \tag{2}$$
The invariance or objectivity property of a tensor is intrinsic and comes either from its mathematical definition or from a physically motivated assumption. The first step in the construction of a constitutive law is the choice of a deformation measure, which cannot be simply F , among two large families:
• On the one hand, strain tensors based on the product $C = F^{\top} \cdot F$ or related tensors of the form $Y = \frac{1}{n}(U^n - \mathbf{1})$ with $U = \sqrt{C}$ (pure stretch). These deformation tensors are often called "right strain measures", "Lagrangian" or "invariant" because they obey relation (1). These strain measures are used to close the mechanical problem by relating the first or second Piola-Kirchhoff stress tensor $P$, $\pi$ (which are also postulated invariant) to the displacement field $u$.
• On the other hand, strain measures, called "left", "Eulerian" or "objective", and based on the left Cauchy-Green deformation $B = F \cdot F^{\top}$ (subsequently called "Finger tensor"), are more rarely considered to build elasticity models. These deformation measures effectively link the objective Cauchy stress $\sigma$ to the displacement field.
Elastic behavior
The finite-strain isotropic elastic behavior is based on the following three assertions:
1. Temperature and a strain measure are the only state variables.
2. The stress measure is a state function.
3. In all evolutions, the intrinsic dissipation is null.
The first assumption 1) forbids the presence of additional internal variables which could be the witness of phenomena such as plasticity, damage etc... But also anisotropy through a structural tensor [START_REF] Boehler | On irreducible representations for isotropic scalar functions[END_REF][START_REF] Zheng | Tensors which characterize anisotropies[END_REF]. The second assumption 2 excludes viscous effects and a possible strain rate D (D denotes the symmetric part of the spatial velocity gradient) dependency of the Cauchy stress. The last assumption 3 ensures reversibility in the sense that the entropy production other than thermal is zero. To satisfy these three requirements, most treatises devote the so-called "Lagrangian" approaches and relate the second Piola-Kirchoff tensor to the right Green-Lagrange deformation tensor. Eulerian formulations of elasticity are rarely considered (see nevertheless [START_REF] Miehe | Entropic thermoelasticity at finite strains. aspects of the formulation and numerical implementation[END_REF][START_REF] Simo | Computational inelasticity[END_REF]), especially since the Cauchy stress cannot be written as the simple derivative of a potential with respect to a strain measure.
Eulerian finite strain isotropic elasticity
Here, we choose to catch the motion with the left Cauchy-Green objective strain tensor B which we relate to the Cauchy stress σ whose physical meaning is perfectly clear.
The isotropy of the material, $\sigma(T, Q_t \cdot B \cdot Q_t^{\top}) = Q_t \cdot \sigma(T, B) \cdot Q_t^{\top}$ for all $Q_t \in SO(3)$,
and the Rivlin-Ericksen theorem [START_REF] Ogden | Non-linear elastic deformations[END_REF] imply that the Cauchy stress necessarily expands as a quadratic polynomial of the strain measure:
$$\sigma = K^B_0\, \mathbf{1} + K^B_1\, B + K^B_2\, B^2. \tag{3}$$
The coefficients $K^B_i$ are scalar functions of the temperature, but also of a triplet of invariants of $B$. Their expressions, as functions of the free energy, can be deduced from the nullity of the intrinsic dissipation (see [START_REF] Truesdell | The non-linear field theories of mechanics[END_REF] p.295) written on the current (spatial) configuration:
$$\Phi = -\rho\left(\dot{\psi}^m + \dot{T}\, s^m\right) + \sigma : D = 0. \tag{4}$$
For an isotropic material, the Helmholtz massic free energy is an isotropic function [START_REF] Boehler | On irreducible representations for isotropic scalar functions[END_REF][START_REF] Smith | On isotropic functions of symmetric tensors, skew-symmetric tensors and vectors[END_REF], $\psi^m(T, B) = \psi^m(T, Q_t \cdot B \cdot Q_t^{\top})$, $\forall Q_t \in SO(3)$, so that there exists $f^B_\psi$ such that:
$$\psi^m(T, B) = f^B_\psi(T, B_I, B_{II}, B_{III}), \tag{5}$$
where the fundamental invariants (B I , B II , B III ) denote the coefficients of the characteristic polynomial of B (main invariant). Note that any other triplet of invariants bijectively related to (B I , B II , B III ), is legitimate.
Thus, the "principal" invariants defined by B i = 1 i TrB i are often used to construct elastic laws as their partial derivatives with respect to the strain measure are straightforward. In the following, we will retain invariants whose kinematic meaning is clear.
According to the fundamental invariants (B I , B II , B III ), the isotropic finite strain thermo-elastic law reads, (see Appendix A but also [START_REF] Garrigues | Comportement élastique[END_REF][START_REF] Simo | Computational inelasticity[END_REF]):
$$\sigma = \frac{2\rho_0}{\sqrt{B_{III}}}\left[B_{III}\, \partial_{B_{III}} f^B_\psi\, \mathbf{1} + \left(\partial_{B_I} f^B_\psi + B_I\, \partial_{B_{II}} f^B_\psi\right) B - \partial_{B_{II}} f^B_\psi\, B^2\right]. \tag{6}$$
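For numerical use, relation (6) can be evaluated directly once the three partial derivatives of the free energy are available; the short Python sketch below assembles the Cauchy stress, with placeholder derivative values that are not identified from any experiment.

```python
import numpy as np

def cauchy_stress_from_invariants(B, rho0, dI, dII, dIII):
    """Evaluate law (6): Cauchy stress from the Finger tensor B and the partial
    derivatives (dI, dII, dIII) of the massic free energy with respect to the
    fundamental invariants (B_I, B_II, B_III) of B."""
    B_III = np.linalg.det(B)
    I3 = np.eye(3)
    return (2.0 * rho0 / np.sqrt(B_III)) * (
        B_III * dIII * I3 + (dI + np.trace(B) * dII) * B - dII * (B @ B)
    )

# Illustrative call with a simple-shear deformation gradient and arbitrary
# placeholder derivative values.
F = np.array([[1.0, 0.3, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
B = F @ F.T
print(cauchy_stress_from_invariants(B, rho0=1000.0, dI=2e3, dII=0.0, dIII=1e3))
```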
Elastic model based on volumetric split
Most materials exhibit completely different volumetric and isochoric responses so that we make use of the isochoric-volumetric commutative split of the gradient F [START_REF] Flory | Thermodynamic relations for high elastic materials[END_REF][START_REF] Simo | Variational and projection methods for the volume constraint in finite deformation elasto-plasticity[END_REF]:
$$F = \left(J^{\frac{1}{3}}\mathbf{1}\right) \cdot \left(J^{-\frac{1}{3}} F\right) = \left(J^{\frac{1}{3}}\mathbf{1}\right) \cdot \bar{F}, \tag{7}$$
$$\Rightarrow\quad B = \left(J^{\frac{2}{3}}\mathbf{1}\right) \cdot \left(J^{-\frac{2}{3}} B\right) = B^{sph} \cdot \bar{B}. \tag{8}$$
B sph is a spherical tensor whose only invariant is the volume expansion J:
$$J = \det F = \sqrt{\det B} = \sqrt{B_{III}}. \tag{9}$$
The isochoric Finger tensor $\bar{B}$ satisfies $\det \bar{B} = 1$ and is therefore represented by two invariants. We note, without any ambiguity, $\bar{B}_I$ and $\bar{B}_{II}$ the two fundamental invariants of the isochoric Finger tensor.
These invariants are trivially related to those of $B$ by:
$$\bar{B}_I = J^{-\frac{2}{3}} B_I = B_{III}^{-\frac{1}{3}} B_I \quad \text{and} \quad \bar{B}_{II} = J^{-\frac{4}{3}} B_{II} = B_{III}^{-\frac{2}{3}} B_{II}. \tag{10}$$
One can find a function $\bar{f}^B_\psi$ of the invariants $(T, \bar{B}_I, \bar{B}_{II}, B_{III})$ coinciding with the Helmholtz massic free energy:
$$\psi^m(T, B) = f^B_\psi(T, B_I, B_{II}, B_{III}) = \bar{f}^B_\psi(T, \bar{B}_I, \bar{B}_{II}, B_{III}). \tag{11}$$
The partial derivatives of the new function f B ψ are then obtained by simple algebraic calculations. The elastic law ( 6) is then transformed to give (see Appendix B):
$$\sigma = \frac{2\rho_0}{\sqrt{B_{III}}}\left[B_{III}\, \partial_{B_{III}} \bar{f}^B_\psi\, \mathbf{1} + \partial_{\bar{B}_I} \bar{f}^B_\psi\, \operatorname{Dev}\bar{B} - \partial_{\bar{B}_{II}} \bar{f}^B_\psi\, \operatorname{Dev}\!\left(\bar{B}^2 - \operatorname{Tr}(\bar{B})\,\bar{B}\right)\right],$$
$$\sigma = \frac{2\rho_0}{\sqrt{B_{III}}}\left[B_{III}\, \partial_{B_{III}} \bar{f}^B_\psi\, \mathbf{1} + \frac{\partial_{\bar{B}_I} \bar{f}^B_\psi}{B_{III}^{1/3}}\, \operatorname{Dev} B - \frac{\partial_{\bar{B}_{II}} \bar{f}^B_\psi}{B_{III}^{2/3}}\, \operatorname{Dev}\!\left(B^2 - \operatorname{Tr}(B)\, B\right)\right]. \tag{12}$$
$\operatorname{Dev} X = X - \frac{1}{3}\operatorname{Tr}(X)\,\mathbf{1}$ refers to the deviatoric (trace-less) part of a second-order tensor. [START_REF] Doll | On the development of volumetric strain energy functions[END_REF] emphasizes the deep link between the isochoric strain and the deviatoric part of the stress.
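The split (7)-(8) and the relations (10) are easy to check numerically; the following sketch does so for an arbitrary, purely illustrative deformation gradient.

```python
import numpy as np

# Numerical check of the volumetric/isochoric split (7)-(10).
F = np.array([[1.2, 0.4, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.1]])
J = np.linalg.det(F)
B = F @ F.T                       # left Cauchy-Green (Finger) tensor
B_bar = J ** (-2.0 / 3.0) * B     # isochoric part, det(B_bar) = 1

second_inv = lambda X: 0.5 * (np.trace(X) ** 2 - np.trace(X @ X))
B_I, B_II, B_III = np.trace(B), second_inv(B), np.linalg.det(B)

print(np.isclose(np.linalg.det(B_bar), 1.0))                          # isochoric
print(np.isclose(np.trace(B_bar), B_III ** (-1.0 / 3.0) * B_I))        # first relation (10)
print(np.isclose(second_inv(B_bar), B_III ** (-2.0 / 3.0) * B_II))     # second relation (10)
```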
Construction of the thermo-elastic model
The invariants used so far are natural, but inefficient for the identification of the Helmholtz free energy, as they evolve simultaneously in classical experimental tests. The cornerstone of the subsequent section is the choice of independent invariants associated with simple elementary experiments, allowing them to evolve one after the other.
State variables
First invariant of the isochoric Finger tensor. We introduce a first invariant γ s of the isochoric Finger tensor B, defined by:
$$\gamma_s = \sqrt{\bar{B}_I - 3} = \sqrt{\frac{B_I}{B_{III}^{1/3}} - 3}, \tag{13}$$
γ s is bijectively related to the maximum distortion δ max (see [START_REF] Garrigues | Cinématique des milieux continus[END_REF]):
$$\gamma_s = \sqrt{3\left((\delta_{max})^{\frac{2}{3}} - 1\right)} \quad \text{with} \quad \delta_{max} = \max_{Q \in SO(3)} \frac{1}{\left|\det\left(Q \cdot u_t,\, Q \cdot u'_t,\, Q \cdot u''_t\right)\right|}, \tag{14}$$
where (u t , u ′ t , u ′′ t ) denotes an initially orthogonal material direction triplet (at t = 0, det (u 0 , u ′ 0 , u ′′ 0 ) = 1).
The maximum distortion δ max reflects the maximum decrease of the solid angle formed by three initially orthogonal material directions. Moreover, in an isochoric simple shear test, defined by the transformation field (see Figure 1):
$$x_t = \varphi(x_0, t) = J^{\frac{1}{3}}\left(x_0 + \gamma Y\, e_x\right), \tag{15}$$
simple algebra yields $\gamma_s = \gamma$, so that $\gamma_s$ equals the magnitude of the motion.
Setting $\gamma_\perp = 0$ and $e = 1$ in (D.3) shows that the two fundamental invariants of (16) stay equal: $\bar{B}_{II} = \bar{B}_I$.
We define the invariant γ t , identically zero in any spherical evolution and in a simple shear test, by:
$$\gamma_t = \sqrt{\bar{B}_I - 3} - \sqrt{\bar{B}_{II} - 3} = \gamma_s - \sqrt{\frac{B_{II}}{J^{4/3}} - 3}, \tag{17}$$
γ t is always well defined (i.e. the terms under the roots are always positive) by concavity of the logarithm.
A closed form of its variations' domain is given in Appendix C.
Constitutive law with the new invariants set
The list of reduced state variables retained is thus $(T, J, \gamma_s, \gamma_t)$, see (9), (13) and (17). These relations are inverted as:
$$J = B_{III}^{\frac{1}{2}}; \quad \gamma_s = \sqrt{\bar{B}_I - 3}; \quad \gamma_t = \gamma_s - \sqrt{\bar{B}_{II} - 3} \quad \Longleftrightarrow \quad B_{III} = J^2; \quad \bar{B}_I = \gamma_s^2 + 3; \quad \bar{B}_{II} = (\gamma_s - \gamma_t)^2 + 3. \tag{18}$$
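In practice, the reduced state variables can be extracted from any Finger tensor by applying (9), (13) and (17) directly; the sketch below (our own helper function, exercised on an illustrative simple-shear state) also recovers the values expected from (18).

```python
import numpy as np

def reduced_invariants(B):
    """Return (J, gamma_s, gamma_t) defined by (9), (13) and (17) from the
    left Cauchy-Green tensor B."""
    J = np.sqrt(np.linalg.det(B))
    B_bar = J ** (-2.0 / 3.0) * B
    I1 = np.trace(B_bar)
    I2 = 0.5 * (np.trace(B_bar) ** 2 - np.trace(B_bar @ B_bar))
    gamma_s = np.sqrt(I1 - 3.0)
    gamma_t = gamma_s - np.sqrt(I2 - 3.0)
    return J, gamma_s, gamma_t

# Simple shear (15) with slip gamma = 0.5 and volume expansion J = 1.2:
# the invariants should come out as (1.2, 0.5, 0), consistently with (18).
gamma, Jv = 0.5, 1.2
F = Jv ** (1.0 / 3.0) * np.array([[1.0, gamma, 0.0],
                                  [0.0, 1.0, 0.0],
                                  [0.0, 0.0, 1.0]])
print(reduced_invariants(F @ F.T))
```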
The generic elastic law [START_REF] Doll | On the development of volumetric strain energy functions[END_REF], expressed in terms of the principal invariants (T, B I , B II , B III ) is transformed by applying the chain rule:
$$\sigma = \rho_0\left[\partial_J f_\psi\, \mathbf{1} + \frac{1}{\gamma_s J^{5/3}}\left(\partial_{\gamma_s} f_\psi + \partial_{\gamma_t} f_\psi\right)\operatorname{Dev} B + \frac{\partial_{\gamma_t} f_\psi}{(\gamma_s - \gamma_t)\, J^{7/3}}\operatorname{Dev}\!\left(B^2 - \operatorname{Tr}(B)\, B\right)\right]. \tag{19}$$
This is the most general isotropic thermo-elastic law, formulated with respect to the variables (T, J, γ s , γ t ).
Postulating that the free energy does not depend on the $\gamma_t$ invariant, i.e. $\partial_{\gamma_t} f_\psi = 0$, reduces (19) to the elastic law used by [START_REF] Miehe | Entropic thermoelasticity at finite strains. aspects of the formulation and numerical implementation[END_REF][START_REF] Pamin | Gradient-enhanced large strain thermoplasticity with automatic linearization and localization simulations[END_REF][START_REF] Simo | Associative coupled thermoplasticity at finite strains: Formulation, numerical analysis and implementation[END_REF][START_REF] Simo | Computational inelasticity[END_REF]. It should be noted, however, that in these papers, the expression of the free energy is presented as a hypothesis and is not deduced from experimentally assessed stresses.
These models, being limited to the triplet (T, J, γ s ), choose not to distinguish energetically two different deformations which would share the same maximum distortion δ max and thus the same invariant γ s .
Experimental identification
The thermodynamic construction presented herein follows the generic method presented by Garrigues [START_REF] Garrigues | Comportement élastique[END_REF][START_REF] Garrigues | Comportements inélastiques[END_REF].
Let fψ denote the free energy variation within the non-isothermal thermo-elastic process from a reference configuration, E 0 = (T 0 , 1, 0, 0) for which the free energy is set to zero, to an arbitrary configuration given by the state variables (T, J, γ s , γ t ):
$$\psi^m = f_\psi(T, J, \gamma_s, \gamma_t). \tag{20}$$
To define the Helmholtz massic free energy at any point $(T, J, \gamma_s, \gamma_t)$ of the state space, we define a particular path
$$E_0 = (T_0, 1, 0, 0) \xrightarrow{P^{(1)}} E_1 = (T, 1, 0, 0) \xrightarrow{P^{(2)}} E_2 = (T, J, 0, 0) \xrightarrow{P^{(3)}} E_3 = (T, J, \gamma_s, 0) \xrightarrow{P^{(4)}} E_t = (T, J, \gamma_s, \gamma_t),$$
such that $f_\psi$ is additively found, given the energy variation along every path:
$$\psi^m = g^{(1)}(T) + g^{(2)}(T, J) + g^{(3)}(T, J, \gamma_s) + g^{(4)}(T, J, \gamma_s, \gamma_t). \tag{21}$$
$g^{(1)}(T)$ is the Helmholtz massic free energy variation within $P^{(1)}$ (strain-locked heating):
$$\dot{T} \neq 0; \quad J = 1; \quad \dot{J} = 0; \quad \gamma_s = 0; \quad \dot{\gamma}_s = 0; \quad \gamma_t = 0; \quad \dot{\gamma}_t = 0; \tag{22}$$
$g^{(2)}(T, J)$ is the Helmholtz massic free energy variation within $P^{(2)}$ (isothermal spherical motion):
$$\dot{T} = 0; \quad \dot{J} \neq 0; \quad \gamma_s = 0; \quad \dot{\gamma}_s = 0; \quad \gamma_t = 0; \quad \dot{\gamma}_t = 0; \tag{23}$$
$g^{(3)}(T, J, \gamma_s)$ is the Helmholtz massic free energy variation within $P^{(3)}$ (isochoric simple shear test):
$$\dot{T} = 0; \quad \dot{J} = 0; \quad \dot{\gamma}_s \neq 0; \quad \gamma_t = 0; \quad \dot{\gamma}_t = 0; \tag{24}$$
$g^{(4)}(T, J, \gamma_s, \gamma_t)$ is the Helmholtz massic free energy variation within $P^{(4)}$ (isothermal isochoric isotrace motion):
$$\dot{T} = 0; \quad \dot{J} = 0; \quad \dot{\gamma}_s = 0; \quad \dot{\gamma}_t \neq 0; \tag{25}$$
with the initial conditions:
$$g^{(1)}(T_0) = 0; \quad g^{(2)}(T, 1) = 0,\ \forall T; \quad g^{(3)}(T, J, 0) = 0,\ \forall (T, J); \quad g^{(4)}(T, J, \gamma_s, 0) = 0,\ \forall (T, J, \gamma_s); \tag{26}$$
which leads to the cancellation of the following partial derivatives:
$$\partial_T g^{(2)}(T, 1) = 0; \quad \partial_T g^{(3)}(T, J, 0) = \partial_J g^{(3)}(T, J, 0) = 0,\ \ \forall T, J;$$
$$\partial_T g^{(4)}(T, J, \gamma_s, 0) = \partial_J g^{(4)}(T, J, \gamma_s, 0) = \partial_{\gamma_s} g^{(4)}(T, J, \gamma_s, 0) = 0,\ \ \forall T, J, \gamma_s. \tag{27}$$
The general form of the massic entropy is given by the Helmholtz relation:
$$s^m = -\partial_T f_\psi = -\partial_T g^{(1)} - \partial_T g^{(2)} - \partial_T g^{(3)} - \partial_T g^{(4)}. \tag{28}$$
The massic internal energy is deduced from the definition of the Helmholtz free energy:
$$e^m = f_\psi + T s^m = g^{(1)} + g^{(2)} + g^{(3)} + g^{(4)} - T\left(\partial_T g^{(1)} + \partial_T g^{(2)} + \partial_T g^{(3)} + \partial_T g^{(4)}\right). \tag{29}$$
Elementary evolutions
The identification of the four unknown functions g (1) (T ), g (2) (T, J), g (3) (T, J, γ s ) and g (4) (T, J, γ s , γ t ) comes down to experimental measures in a few experiments carried under the following ideal experimental conditions:
1) body forces are negligible;
2) state variables (T, J, γ s , γ t ) are uniform across the specimen.
3) experimental data are evaluated in a quasi-static setting so that the kinetic energy can be neglected;
g (1) (T ) will be determined by a simple energy balance whereas direct integration of experimental stress data shall clarify the rest of the Helmholtz free energy.
Strain-locked pure heating
The first evolution P (1) is a pure heating, strain locked, experiment: B = 1, J = 1, J = 0, γ s = 0, γs = 0, γ t = 0, γt = 0. We measure the algebraic massic heat q m (1) exp (T ) supplied to the system to go from T 0 to T . The conservation of energy between the states E 0 and E 1 is written (quasi-static evolution: ∆e m c = 0, no deformation: w m ext = 0):
$$e^m_1 - \underbrace{e^m_0}_{=\,0} = q^{m(1)}_{exp}(T), \tag{30}$$
which leads to the differential equation in $g^{(1)}$ from (29):
$$g^{(1)}(T) - T\left(\partial_T g^{(1)}(T) + \partial_T g^{(2)}(T, 1) + \partial_T g^{(3)}(T, 1, 0) + \partial_T g^{(4)}(T, 1, 0, 0)\right) = q^{m(1)}_{exp}(T),$$
$$\Leftrightarrow\quad g^{(1)}(T) - T\, \partial_T g^{(1)}(T) = q^{m(1)}_{exp}(T) \quad \text{(see (27))},$$
$$\Leftrightarrow\quad g^{(1)}(T) = -T \int_{T_0}^{T} \frac{q^{m(1)}_{exp}(\tilde{T})}{\tilde{T}^2}\, d\tilde{T}. \tag{31}$$
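The closed form (31) can be verified numerically: for any prescribed heat curve, the reconstructed $g^{(1)}$ must satisfy $g^{(1)} - T\,\partial_T g^{(1)} = q^{m(1)}_{exp}$. The sketch below uses an arbitrary placeholder heat curve (not measured data) for this check.

```python
import numpy as np

T0 = 293.15
q = lambda T: 450.0 * (T - T0)          # placeholder massic heat curve (J/kg)

# Build g(T) = -T * int_{T0}^{T} q(t)/t^2 dt by cumulative trapezoidal quadrature.
T_grid = np.linspace(T0, 393.15, 20001)
integrand = q(T_grid) / T_grid ** 2
integral = np.concatenate(([0.0],
                           np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T_grid))))
g = -T_grid * integral

# Residual of the ODE g - T g' - q, which should be close to zero.
g_prime = np.gradient(g, T_grid)
residual = g - T_grid * g_prime - q(T_grid)
print("max residual:", np.max(np.abs(residual[1:-1])))
```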
This experimental measurement is very difficult to perform, especially when the samples tend to contract (usually during cooling). We will see later, Section 3.1.5, that this ideal experiment can be replaced by a free thermal expansion followed by an isothermal spherical motion.
Spherical motion
In the second path P (2) , the deformation is purely spherical (B = J 2 3 1, γ s = 0, γs = 0, γ t = 0, γt = 0)
and isothermal ( Ṫ = 0). The elastic law [START_REF] Khaniki | A review on the nonlinear dynamics of hyperelastic structures[END_REF] proves that stress tensor must also be spherical:
σ (2) = ρ 0 ∂ J fψ 1, (32)
and is thus completely characterized by its mean stress.
In this experiment, we measure the pressure $p^{(2)}_{exp}(T, J)$, which is the opposite of the uniform normal stress applied on the boundary if hypothesis 2) above holds:
$$p^{(2)}_{exp}(T, J) = -\frac{\operatorname{Tr}\sigma^{(2)}}{3} = -\rho_0\, \partial_J g^{(2)}(T, J) \quad \text{(see (27))}. \tag{33}$$
The second contribution to the Helmholtz massic free energy variation then follows naturally.
$$g^{(2)}(T, J) = -\frac{1}{\rho_0} \int_{1}^{J} p^{(2)}_{exp}(T, \tilde{J})\, d\tilde{J}. \tag{34}$$
In the shock-wave mechanics community, the determination of this pressure with respect to the volumetric change (or equivalently to the mass density ρ) and the temperature, is often referred to as the "equation of state (EOS)" [START_REF] Akella | Static eos of uranium to 100 gpa pressure[END_REF].
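When $p^{(2)}_{exp}$ is only available as a measured or tabulated isothermal compression curve, (34) can be evaluated by numerical quadrature; one possible sketch is given below, with a placeholder pressure curve standing in for experimental EOS data.

```python
import numpy as np

def g2_from_pressure(J, p_of_J, rho0, n_pts=2001):
    """Evaluate (34) by trapezoidal quadrature of an isothermal pressure curve
    J -> p_exp(T, J) at fixed temperature."""
    Js = np.linspace(1.0, J, n_pts)
    return -np.trapz(p_of_J(Js), Js) / rho0

# Placeholder curve (not measured data): logarithmic pressure with bulk modulus kappa.
kappa, rho0 = 160e9, 7800.0
p_exp = lambda J: -kappa * np.log(J)
print(g2_from_pressure(0.95, p_exp, rho0))   # compressed state, J < 1
```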
Isochoric iso-T planar simple shear test
To perform an isochoric isothermal planar simple shear [START_REF] Boni | Application of the plane simple shear test for determination of the plastic behaviour of solid polymers at large strains[END_REF] in the (e x , e y ) plane with an initial volume expansion J, we impose the following position field on the particles:
$$x_t = J^{\frac{1}{3}}\left(x_0 + \gamma Y\, e_x\right), \tag{35}$$
where $x_0 = X e_x + Y e_y + Z e_z$ is the reference position of a particle. The Finger tensor $B = F \cdot F^{\top}$ in the orthonormal basis $(e_x, e_y, e_z)$ for this isochoric motion has already been given in (16). The strain tensor is uniform in the specimen, $\partial_{x_0} B = 0$, so that $\gamma_s$ is uniform as well and satisfies:
$$\gamma_s = \sqrt{\bar{B}_I - 3} = \sqrt{\gamma^2} = |\gamma|.$$
Stress tensor in the P (3) evolution. The general thermo-elastic law reads:
$$\sigma = \rho_0\left[\partial_J f_\psi\, \mathbf{1} + \frac{1}{\gamma_s J^{5/3}}\left(\partial_{\gamma_s} f_\psi + \partial_{\gamma_t} f_\psi\right)\operatorname{Dev} B + \frac{\partial_{\gamma_t} f_\psi}{(\gamma_s - \gamma_t)\, J^{7/3}}\operatorname{Dev}\!\left(B^2 - \operatorname{Tr}(B)\, B\right)\right]. \tag{36}$$
Given the equalities $\gamma_t = 0$ and $\partial_{\gamma_s} g^{(4)}(T, J, \gamma_s, 0) = 0$, simple calculations show that the tangential stress $\tau^{(3)}_{exp}(T, J, \gamma_s) = \sigma^{(3)}_{12}$ within this evolution is directly related to $\partial_{\gamma_s} g^{(3)}$ by:
$$\tau^{(3)}_{exp} = \frac{\rho_0}{J}\, \partial_{\gamma_s} g^{(3)}(T, J, \gamma_s). \tag{37}$$
Direct integration of the experimental data then gives the value of the function $g^{(3)}$:
$$g^{(3)} = \frac{J}{\rho_0} \int_{0}^{\gamma_s} \tau^{(3)}_{exp}(T, J, \gamma)\, d\gamma. \tag{38}$$
3.1.4. Iso-γ s iso-T isochoric double shear test. $P^{(4)}$ is an isothermal, isochoric, iso-$\gamma_s$ experiment. Several tests grant this triple condition. For the sake of simplicity, we introduce herein the "double shear test", even though this experiment does not scan the whole domain of variation of $\gamma_t$. In Appendix D, we present a wider two-parameter test family, which includes the present "double shear experiment" and which browses the whole domain of the variable $\gamma_t$.
Isochoric double shear test. Consider the motion defined by the following transformation (see also Figure 2):
$$x_t = J^{\frac{1}{3}}\left(x_0 + \sqrt{\gamma_s^2 - \gamma_\perp^2}\, Y\, e_x + \gamma_\perp Z\, e_y\right), \tag{39}$$
where $x_0 = X e_x + Y e_y + Z e_z$ is the reference position of a particle, $J$ is the volume dilatation inherited from path $P^{(2)}$ and constant in paths $P^{(3)}$ and $P^{(4)}$, and $\gamma_s$ is the slip inherited from path $P^{(3)}$ and constant in path $P^{(4)}$. The components of the Finger tensor $B = F \cdot F^{\top}$ in the orthonormal basis $\{e_x, e_y, e_z\}$ for this isochoric motion are:
$$F = J^{\frac{1}{3}}\begin{pmatrix} 1 & \sqrt{\gamma_s^2 - \gamma_\perp^2} & 0 \\ 0 & 1 & \gamma_\perp \\ 0 & 0 & 1 \end{pmatrix} \ \Rightarrow\ \bar{B} = \begin{pmatrix} 1 + \gamma_s^2 - \gamma_\perp^2 & \sqrt{\gamma_s^2 - \gamma_\perp^2} & 0 \\ \sqrt{\gamma_s^2 - \gamma_\perp^2} & 1 + \gamma_\perp^2 & \gamma_\perp \\ 0 & \gamma_\perp & 1 \end{pmatrix} \ \Rightarrow\ \bar{B}_I = \gamma_s^2 + 3. \tag{40}$$
This test continuously shifts the deformation from a simple shear test in the (e x , e y ) plane to the same state in the (e y , e z ) plane while keeping the first invariant γ s constant.
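The kinematics (39)-(40) can be checked numerically: for a fixed $\gamma_s$, the first isochoric invariant must remain equal to $\gamma_s^2 + 3$ while $\gamma_\perp$ sweeps the family. The sketch below performs this check with illustrative values.

```python
import numpy as np

def double_shear_B(J, gamma_s, gamma_perp):
    """Finger tensor of the double shear motion (39)-(40)."""
    a = np.sqrt(gamma_s ** 2 - gamma_perp ** 2)
    F = J ** (1.0 / 3.0) * np.array([[1.0, a, 0.0],
                                     [0.0, 1.0, gamma_perp],
                                     [0.0, 0.0, 1.0]])
    return F @ F.T

# gamma_s (hence the first isochoric invariant) stays constant while
# gamma_perp sweeps the test family.
J, gamma_s = 1.0, 0.6
for gamma_perp in np.linspace(0.0, gamma_s / np.sqrt(2.0), 5):
    B = double_shear_B(J, gamma_s, gamma_perp)
    B_bar = np.linalg.det(B) ** (-1.0 / 3.0) * B
    print(round(np.trace(B_bar) - 3.0, 12))   # equals gamma_s**2 for every gamma_perp
```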
Stress tensor in the double shear test. Injecting the strain measure deduced from (39) into the general law (36), one can see that the out-of-plane shear stress $\sigma^{(4)}_{13}$ is directly related to the derivative of the Helmholtz massic free energy with respect to $\gamma_t$. Indeed, using the expression of $B^2$ given by (D.2) together with (40), the tangential stress $\tau^{(4)}_{exp}(T, J, \gamma_s, \gamma_t) = \sigma^{(4)}_{13}$ in this evolution equals:
$$\tau^{(4)}_{exp}(T, J, \gamma_s, \gamma_\perp) = \frac{\rho_0}{J(\gamma_s - \gamma_t)}\, \partial_{\gamma_t} g^{(4)}(T, J, \gamma_s, \gamma_t)\, \gamma_\perp \sqrt{\gamma_s^2 - \gamma_\perp^2}, \quad \text{with } \gamma_t = \gamma_s - \sqrt{\gamma_s^2 + \gamma_\perp^2\left(\gamma_s^2 - \gamma_\perp^2\right)} \ \text{(see (D.9))},$$
$$\tau^{(4)}_{exp}(T, J, \gamma_s, \gamma_t) = \frac{\rho_0}{J(\gamma_s - \gamma_t)}\, \partial_{\gamma_t} g^{(4)}(T, J, \gamma_s, \gamma_t)\, \sqrt{(\gamma_s - \gamma_t)^2 - \gamma_s^2}. \tag{41}$$
The expression for the massic free energy g (4) is obtained once more by integrating the experimental data:
$$g^{(4)}(T, J, \gamma_s, \gamma_t) = \frac{J}{\rho_0} \int_{0}^{\gamma_t} \frac{\gamma_s - \tau}{\sqrt{(\gamma_s - \tau)^2 - \gamma_s^2}}\, \tau^{(4)}_{exp}(T, J, \gamma_s, \tau)\, d\tau. \tag{42}$$
When $\gamma_\perp$ evolves in the interval $\left[0, \frac{\gamma_s}{\sqrt{2}}\right]$, the invariant $\gamma_t$ takes its values in $\gamma_t \in \left[\gamma_s\left(1 - \sqrt{1 + \frac{\gamma_s^2}{4}}\right),\, 0\right]$ (see Appendix D).
Note that this out-of-plane shear stress $\sigma^{(4)}_{13}$ is purely "non-linear" in the sense that it arises from the quadratic term of the strain measure; therefore, it cannot be predicted by any "linearized" version of the elastic law or by any model neglecting the contribution of the second strain invariant $\bar{B}_{II}$.
The "double shear experiment" is not a standard laboratory test but can be performed using an efficient hexapod machine (Stewart machine [START_REF] Dalemat | Une experimentation reussie pour l'identification de la reponse mecanique sans loi de comportement: Approche data-driven appliquee aux membranes elastomeres[END_REF][START_REF] Stewart | A platform with six degrees of freedom[END_REF]).
3.1.5. Free thermal expansion (P (5) )
A direct measurement of the massic heat $q^{m(1)}_{exp}$ in the first path $P^{(1)}$ (strain-locked heating) is rather difficult. This experiment can be replaced by a stress-free thermal expansion ($p^{(2)}_{exp}(T, J^{(5)}_{exp}(T)) = 0$, $\forall T$), followed by an isothermal spherical deformation which brings the volume expansion $J^{(5)}_{exp}(T)$ back to 1.
Writing the conservation of energy for this evolution between the initial state $(T_0, 1, 0, 0)$ and the freely expanded state $(T, J^{(5)}_{exp}, 0, 0)$, it follows: $e^m_5 = q^{m(5)}_{exp}(T)$. Moreover, the massic internal energy in the freely expanded state $(T, J^{(5)}_{exp}, 0, 0)$ is also given by (29):
$$e^m_5 = q^{m(1)}_{exp}(T) - \frac{1}{\rho_0} \int_{1}^{J^{(5)}_{exp}} \left(p^{(2)}_{exp}(T, \tilde{J}) - T\, \partial_T p^{(2)}_{exp}(T, \tilde{J})\right) d\tilde{J}, \tag{43}$$
$$\Rightarrow\quad q^{m(1)}_{exp}(T) = q^{m(5)}_{exp}(T) + \frac{1}{\rho_0} \int_{1}^{J^{(5)}_{exp}} \left(p^{(2)}_{exp}(T, \tilde{J}) - T\, \partial_T p^{(2)}_{exp}(T, \tilde{J})\right) d\tilde{J}. \tag{44}$$
Equation (44) proves that, given the experimental pressure $p^{(2)}_{exp}(T, J)$ and the stress-free expansion massic heat $q^{m(5)}_{exp}(T)$, one may determine $q^{m(1)}_{exp}(T)$. The function $g^{(1)}$ in (31) is then finally written:
$$g^{(1)}(T) = -T \int_{T_0}^{T} \frac{q^{m(5)}_{exp}(\tilde{T})}{\tilde{T}^2}\, d\tilde{T} - \frac{T}{\rho_0} \int_{T_0}^{T} \frac{1}{\tilde{T}^2}\left[\int_{1}^{J^{(5)}_{exp}(\tilde{T})} \left(p^{(2)}_{exp}(\tilde{T}, \tilde{J}) - \tilde{T}\, \partial_T p^{(2)}_{exp}(\tilde{T}, \tilde{J})\right) d\tilde{J}\right] d\tilde{T}. \tag{45}$$
Synthesis
The complete identification of the finite strain thermo-elastic model relies on four experimental data:
1) $q^{m(1)}_{exp}(T)$: massic heat exchanged in the first path $P^{(1)}$ (strain-locked heating or cooling), with the condition:
$$q^{m(1)}_{exp}(T_0) = 0. \tag{46}$$
2) $p^{(2)}_{exp}(T, J)$: pressure in the spherical evolution $P^{(2)}$ (isothermal spherical deformation), with the condition:
$$p^{(2)}_{exp}(T, 1) = p^{(1)}_{exp}(T), \quad \forall T. \tag{47}$$
3) $\tau^{(3)}_{exp}(T, J, \gamma_s)$: in-plane shear stress in the isochoric planar shear test $P^{(3)}$, along with the condition:
$$\tau^{(3)}_{exp}(T, J, 0) = 0, \quad \forall T,\ \forall J. \tag{48}$$
4) $\tau^{(4)}_{exp}(T, J, \gamma_s, \gamma_t)$: out-of-plane tangential stress in the fourth path $P^{(4)}$, with the condition:
$$\tau^{(4)}_{exp}(T, J, \gamma_s, 0) = 0, \quad \forall T,\ \forall J,\ \forall \gamma_s. \tag{49}$$
$g^{(1)}, g^{(2)}, g^{(3)}, g^{(4)}$, which determine the state functions, are expressed in terms of these measurements:
$$g^{(1)}(T) = -T \int_{T_0}^{T} \frac{q^{m(1)}_{exp}(\tilde{T})}{\tilde{T}^2}\, d\tilde{T} \quad \text{or (45)}, \tag{50}$$
$$g^{(2)}(T, J) = -\frac{1}{\rho_0} \int_{1}^{J} p^{(2)}_{exp}(T, \tilde{J})\, d\tilde{J}, \tag{51}$$
$$g^{(3)}(T, J, \gamma_s) = \frac{J}{\rho_0} \int_{0}^{\gamma_s} \tau^{(3)}_{exp}(T, J, \gamma)\, d\gamma, \tag{52}$$
$$g^{(4)}(T, J, \gamma_s, \gamma_t) = \frac{J}{\rho_0} \int_{0}^{\gamma_t} \frac{\gamma_s - \tau}{\sqrt{(\gamma_s - \tau)^2 - \gamma_s^2}}\, \tau^{(4)}_{exp}(T, J, \gamma_s, \tau)\, d\tau. \tag{53}$$
The massic internal energy is a state function, so that (29) takes a simpler expression:
$$e^m = q^{m(1)}_{exp} + g^{(2)} - T\, \partial_T g^{(2)} + g^{(3)} - T\, \partial_T g^{(3)} + g^{(4)} - T\, \partial_T g^{(4)}. \tag{54}$$
For the numerical assessment of the model, it is not necessary to compute $g^{(1)}$ explicitly because this function appears neither in the constitutive law nor in the internal energy.
Volumetric/isochoric splitting assumption
As already mentioned, hyperelastic materials exhibit radically different volume and shear behavior. This explains a well-accepted splitting of the Helmholtz free energy with respect to the isochoric/volumetric part of the strain tensor (see [START_REF] Miehe | Entropic thermoelasticity at finite strains. aspects of the formulation and numerical implementation[END_REF][START_REF] Sansour | On the physical assumptions underlying the volumetric-isochoric split and the case of anisotropy[END_REF]). In this section, we emphasize that this partition comes down to a very natural assertion: the pressure remains unchanged in isochoric evolutions. With the previous results, the pressure in any state (T, J, γ s , γ t ) is:
$$-\frac{\operatorname{Tr}\sigma}{3} = -\rho_0\, \partial_J\!\left[g^{(2)}(T, J) + g^{(3)}(T, J, \gamma_s) + g^{(4)}(T, J, \gamma_s, \gamma_t)\right] = p^{(2)}_{exp}(T, J) - \int_{0}^{\gamma_s} \left(\tau^{(3)}_{exp}(T, J, \gamma) + J\, \partial_J \tau^{(3)}_{exp}(T, J, \gamma)\right) d\gamma$$
$$\qquad - \int_{0}^{\gamma_t} \frac{\gamma_s - \tau}{\sqrt{(\gamma_s - \tau)^2 - \gamma_s^2}} \left(\tau^{(4)}_{exp}(T, J, \gamma_s, \tau) + J\, \partial_J \tau^{(4)}_{exp}(T, J, \gamma_s, \tau)\right) d\tau. \tag{55}$$
The pressure in any state is a function of the tangential stresses $\tau^{(3)}_{exp}$, $\tau^{(4)}_{exp}$ measured during the paths $P^{(3)}$, $P^{(4)}$. If one performs the $P^{(3)}$ experiment from the initial state ($p^{(2)}_{exp}(T_0, 1) = 0$, no prior volume expansion), the Cauchy stress tensor generated is not deviatoric.
Hypothesis Optional Simplification. In isochoric deformation, the pressure does not vary. Both conditions ∂ γt Trσ = 0, ∂ γs Trσ = 0 and (55) lead to the differential equations:
τ^(4)exp + J ∂_J τ^(4)exp = 0 ⇒ τ^(4)exp(T, J, γ_s, γ_t) = τ^(4)exp(T, γ_s, γ_t)/J,
τ^(3)exp + J ∂_J τ^(3)exp = 0 ⇒ τ^(3)exp(T, J, γ_s) = τ^(3)exp(T, γ_s)/J.   (56)
Inserting these expressions into the general formula ( 52)-( 53), (56) obviously split the Helmholtz free energy into an isochoric/volumetric contributions:
ψ_m(T, J, γ_s, γ_t) = g^(1)(T) + g^(2)(T, J) + g^(3)(T, γ_s) + g^(4)(T, γ_s, γ_t).   (57)
This assumption is certainly valid over a wide range of loadings (it is, e.g., consistent with classical linearized elasticity), but it must be validated experimentally. Note that the complete model (50)-(53) can take into account the volumetric/deviatoric coupling, which is relevant for high-velocity impact [START_REF] Scheidler | On the coupling of pressure and deviatoric stress in hyperelastic materials[END_REF]. Indeed, extreme strain-rate experiments can induce finite elastic deformations before plasticity occurs, so that pressure-dependent shear moduli are required (see for instance the SCG model [START_REF] Banerjee | An evaluation of plastic flow stress models for the simulation of high-temperature and high-strain-rate deformation of metals[END_REF][START_REF] Steinberg | A constitutive model for metals applicable at high-strain rate[END_REF]).
Time derivative of the massic internal energy
The massic internal energy takes a rather elegant form:
e_m = q_m^(5)exp(T) - (1/ρ_0) ∫_{J^(5)exp(T)}^{J} [ p^(2)exp - T ∂_T p^(2)exp ](T, Ĵ) dĴ + (1/ρ_0) ∫_0^{γ_s} [ τ^(3)exp - T ∂_T τ^(3)exp ](T, γ̂_s) dγ̂_s + (1/ρ_0) ∫_0^{γ_t} (γ_s - τ)/√((γ_s - τ)² - γ_s²) · [ τ^(4)exp - T ∂_T τ^(4)exp ](T, γ_s, τ) dτ.   (58)
The time derivative of the internal energy, essential for the numerical implementation, is obtained by evaluating the partial derivatives of the massic internal energy with respect to its four variables (see Appendix E for details):
de_m/dt = (∂e_m/∂T) Ṫ + (∂e_m/∂J) J̇ + (∂e_m/∂γ_s) γ̇_s + (∂e_m/∂γ_t) γ̇_t.   (59)
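In a numerical implementation, the partial derivatives entering (59) would normally be supplied analytically (see Appendix E). As a purely illustrative alternative, the following sketch approximates them by central finite differences, assuming the massic internal energy is available as a Python callable em(T, J, gs, gt); the step size h is an arbitrary choice for this example.

def dem_dt(em, T, J, gs, gt, Tdot, Jdot, gsdot, gtdot, h=1e-6):
    # chain rule (59) with finite-difference partial derivatives of e_m
    dT  = (em(T + h, J, gs, gt) - em(T - h, J, gs, gt)) / (2.0 * h)
    dJ  = (em(T, J + h, gs, gt) - em(T, J - h, gs, gt)) / (2.0 * h)
    dgs = (em(T, J, gs + h, gt) - em(T, J, gs - h, gt)) / (2.0 * h)
    dgt = (em(T, J, gs, gt + h) - em(T, J, gs, gt - h)) / (2.0 * h)
    return dT * Tdot + dJ * Jdot + dgs * gsdot + dgt * gtdot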
Idealizations of elementary paths
The expressions (50)-(53) yield the "exact" thermo-elastic behavior of an isotropic material and rely upon four experimental results. In this section, we propose physically consistent expressions for the experimental stresses. We will see that some rather simple forms reduce our model to a few hyperelastic models classically found in the literature.
Pressure assumption
Hypothesis (Experimental pressure p^(2)exp). We assume the following form for the pressure p^(2)exp in any isothermal spherical deformation:
p^(2)exp(T, J) = p^(1)exp(T) - κ(T) ln J,   (60)
where p^(1)exp(T) is the pressure from the path P^(1) and κ(T) is a temperature-dependent bulk modulus.
The pressure (60) is reasonable (note for instance lim_{J→∞} p^(2)exp = -∞ and lim_{J→0} p^(2)exp = +∞), but predicts a finite strain energy under infinite compression J → 0. For further details about the volumetric part of the Helmholtz free energy from which the pressure is derived, we refer to [START_REF] Doll | On the development of volumetric strain energy functions[END_REF] and references therein. A more reasonable approximation of the experimental stress would be:
p^(2)exp = p^(1)exp(T) - (κ(T)/2) (J - 1/J),   (61)
⇒ g^(2)(T, J) = -(J - 1) p^(1)exp(T)/ρ_0 + (κ(T)/(2ρ_0)) [ (J² - 1)/2 - ln J ].   (62)
It's worth noticing that (62) coincides with that proposed by [START_REF] Simo | Computational inelasticity[END_REF] which is a variant of [START_REF] Ciarlet | Mathematical elasticity: Three-dimensional elasticity[END_REF].
For the sake of calculus simplicity, we will however keep the pressure given by (60) for the following, noting that the differences between (60) and (61) are very small for any volumic expansion up to 200%.
Both models come down to the so-called linearized elasticity when J ≈ 1, since ln J ≈ (1/2)(J - 1/J) ≈ Tr ε.
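The closeness of the two volumetric idealizations invoked above can be checked directly; the short snippet below simply tabulates ln J against (J - 1/J)/2 over volume expansions up to 200%, without asserting any particular tolerance.

import numpy as np
J = np.linspace(0.8, 2.0, 7)
print(np.c_[J, np.log(J), 0.5 * (J - 1.0 / J)])   # columns: J, ln J, (J - 1/J)/2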
g (2) is obtained using both (60) and (51):
g^(2)(T, J) = -(J - 1) p^(1)exp(T)/ρ_0 + (κ(T)/ρ_0) (J ln J + 1 - J).   (63)
If we replace the first path P^(1) by a free expansion P^(5) (in which one saves q_m^(5)exp(T) and J^(5)exp(T)) followed by an isothermal spherical compression bringing J back to 1, p^(1)exp(T) is obtained by evaluating (60) in (T, J^(5)exp). Remembering that the stress is zero in a freely expanded state, it comes:
p^(1)exp(T) = κ(T) ln( J^(5)exp(T) ).   (64)
Therefore, by reinjecting this expression (64) into the general formula (60), we deduce the expression of p^(2)exp:
p^(2)exp(T, J) = -κ(T) ln( J / J^(5)exp(T) ).   (65)
Hypothesis (Free-stress volume expansion J^(5)exp(T)). One can set arbitrarily for J^(5)exp(T):
J^(5)exp(T) = 1 + β(T - T_0),   (66)
where β refers to a free-stress expansion coefficient.
Inserting both the experimental pressure (64) and the free-stress volume expansion (66) in (63), g (2) (T, J) now yields:
g^(2)(T, J) = (κ(T)/ρ_0) [ J ln( J / (1 + β(T - T_0)) ) + 1 - J + ln(1 + β(T - T_0)) ].   (67)
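A quick symbolic check that the expression (67) is consistent with the pressure (65), i.e. that p = -ρ_0 ∂_J g^(2) and that the stress vanishes at J = J^(5)exp, can be carried out with sympy; the symbols below are generic and κ is treated as a constant with respect to J.

import sympy as sp
T, T0, J, beta, kappa, rho0 = sp.symbols('T T0 J beta kappa rho0', positive=True)
J5 = 1 + beta * (T - T0)
g2 = kappa / rho0 * (J * sp.log(J / J5) + 1 - J + sp.log(J5))
p = -rho0 * sp.diff(g2, J)
print(sp.simplify(p + kappa * sp.log(J / J5)))   # 0: recovers (65)
print(sp.simplify(p.subs(J, J5)))                # 0: stress-free state at J = J5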
Massic heat q_m^(1)exp
As a reminder, the first path P^(1) can be substituted by a free thermal expansion followed by an isothermal compression. We then obtain:
q_m^(1)exp(T) = q_m^(5)exp(T) + (1/ρ_0) ∫_1^{J^(5)exp} [ p^(2)exp(T, Ĵ) - T ∂_T p^(2)exp(T, Ĵ) ] dĴ   [see (44)].   (68)
The conservation of energy in a free expansion is then written according to (65):
q_m^(5)exp(T) = q_m^(1)exp(T) + (1/ρ_0) [ T κ(T) (∂_T J^(5)exp / J^(5)exp) (J^(5)exp - 1) + (κ(T) - T κ'(T)) (1 + ln(J^(5)exp) - J^(5)exp) ].   (69)
We can choose a simple function for q_m^(5)exp(T) or q_m^(1)exp(T), the other being determined by (69).
It is noteworthy that the idealizations of experimental curves presented in this section are only arbitrary examples that have no theoretical justification.
Hypothesis (Massic heat q_m^(5)exp(T)). A physically reasonable approximation to q_m^(5)exp(T) may be, for example:
q_m^(5)exp(T) = C_p (T - T_0),   (70)
where C p refers to a specific heat capacity in free expansion.
Calculation of g (3)
Hypothesis (In-plane shear stress τ^(3)exp). We assume that a temperature-dependent simple shear modulus µ(T) linearly links the shear stress to the shear parameter γ:
τ^(3)exp(T, J, γ) = µ(T) γ.   (71)
The shear stress (71) leads to a quadratic massic free energy with respect to γ_s that is also independent of the volume expansion J:
g^(3)(T, γ_s) = µ(T) γ_s² / (2ρ_0).   (72)
This Helmholtz massic free energy has for example been postulated in [START_REF] Simo | Associative coupled thermoplasticity at finite strains: Formulation, numerical analysis and implementation[END_REF]. We now choose to enhance the model considering the contribution of the second invariant B II through γ t .
4.4. Calculation of g^(4)
The out-of-plane shear stress is a purely non-linear effect and its interpretation in terms of a "classical" shear or compressibility coefficient is therefore impossible.
Hypothesis (Out-of-plane shear stress τ^(4)exp). We postulate that the shear stress τ^(4)exp in the fourth path P^(4) is given by:
τ^(4)exp(T, γ_s, τ) = α(T) √((γ_s - τ)² - γ_s²),   (73)
where α(T) refers to a non-linear shear modulus. We further recall that the invariant γ_t is always negative during the double shear test, so that the square root involved in (73) is well-defined. Using the expression (73), g^(4) now reads:
g^(4)(T, γ_s, γ_t) = (α(T)/ρ_0) ∫_0^{γ_t} (γ_s - τ) dτ = (α(T)/(2ρ_0)) [ γ_s² - (γ_s - γ_t)² ].   (74)
4.5. Idealized elastic law
With the idealizations of the experimental stresses (60), (71), (73) the g (•) functions were explicitly calculated (63), (72), (74). The derivation of the Helmholtz free energy yields the current Cauchy stress:
σ = ρ 0 ∂ J fψ 1 + 1 γ s J 5 3 (µ(T )γ s + α(T ) (γ s -γ t )) DevB + α(T ) J 7 3 Dev B 2 -Tr(B)B . (75)
The invariant γ t is upper bounded by γ s whereas the positivity of the second invariant J B 2 implies the lower bound:
J B 2 = 3 2 ∥DevB∥ ≥ 0 ⇒ γ t ≥ γ s 1 - 1 + γ 2 s 4 . (76)
Finally lim γs→0 γ t γ s = 0 so that (75) does not diverge when γ s → 0. As we will see in the following section, the expression (75) is algebraically equivalent to a standard Mooney-Rivlin model.
Analogy with the Mooney-Rivlin model
If we retain the idealizations (71), (73), the dependence of the massic free energy on the isochoric invariants is written:
g^(3)(T, γ_s) + g^(4)(T, γ_s, γ_t) = (1/(2ρ_0)) [ µ(T) γ_s² + α(T) (γ_s² - (γ_s - γ_t)²) ] = (1/(2ρ_0)) [ µ(T) (B̄_I - 3) + α(T) ((B̄_I - 3) - (B̄_II - 3)) ].   (77)
The right and left Cauchy-Green strain tensors C and B share their invariants B̄_i = C̄_i, as C and B are related through the rotation tensor R arising from the polar decomposition of the transformation gradient F = R • U = V • R.
Therefore the free energy (77) coincides exactly with that of a Mooney-Rivlin model [START_REF] Kumar | Hyperelastic mooney-rivlin model: determination and physical interpretation of material constants[END_REF]:
ψ_m^{M-R} = g^(1)(T) + g^(2)(T, J) + (C_01/ρ_0) (B̄_I - 3) + (C_10/ρ_0) (B̄_II - 3),   (78)
with the parameters C_01 = (µ(T) + α(T))/2 and C_10 = -α(T)/2.
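Since most finite element codes expect Mooney-Rivlin constants rather than the pair (µ, α) identified from the two shear tests, the conversion is a one-line helper; the sketch below follows the convention of (78), with C_01 multiplying B̄_I - 3 and C_10 multiplying B̄_II - 3, and the numerical values are purely illustrative.

def mooney_rivlin_from_shear(mu, alpha):
    # convention of (78): C01 scales (B_I - 3), C10 scales (B_II - 3)
    return 0.5 * (mu + alpha), -0.5 * alpha

def shear_from_mooney_rivlin(C01, C10):
    # inverse mapping back to the shear moduli (mu, alpha)
    return 2.0 * (C01 + C10), -2.0 * C10

print(mooney_rivlin_from_shear(80e9, 5e9))
print(shear_from_mooney_rivlin(*mooney_rivlin_from_shear(80e9, 5e9)))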
If we assume that the shear stress along the P^(4) path is identically zero, the massic free energy becomes independent of the γ_t invariant, and we then observe that the Helmholtz massic free energy evolves linearly with B̄_I - 3, which corresponds to the Neo-Hookean model:
ψ_m^{N-H} = g^(1)(T) + g^(2)(T, J) + (µ/(2ρ_0)) (B̄_I - 3).
Consequently our thermodynamic approach includes these two models which result from the experimental idealizations (71) (73).
More generally, if one looks at the in-plane shear stress curve τ^(3)exp as a function of the slip parameter γ, it is certain that a polynomial expansion fitting experimental data can be found. By symmetry, this polynomial expansion must necessarily contain only odd powers of the slip:
τ^(3)exp(T, γ) = Σ_i µ_i γ^(2i+1).   (79)
(79) then generates a generalized Neo-Hookean model (power series of the free energy with respect to the invariant B̄_I - 3).
Moreover, if we postulate by the same reasoning that the out-of-plane shear stress τ^(4)exp develops in the form:
τ^(3)exp(T, γ) = Σ_i µ_i γ^(2i+1)   and   τ^(4)exp(T, γ_s, τ) = Σ_i α_i √((γ_s - τ)² - γ_s²) (γ_s - τ)^(2i),   (80)
then one finds the following Helmholtz massic free energy:
ψ_m = (1/ρ_0) Σ_i [ (µ_i(T)/(2i+2)) (B̄_I - 3)^(i+1) + (α_i(T)/(2i+2)) ( (B̄_I - 3)^(i+1) - (B̄_II - 3)^(i+1) ) ] + g^(1)(T) + g^(2)(T, J),
ψ_m = (1/ρ_0) Σ_i [ C_i0 (B̄_I - 3)^i + C_0i (B̄_II - 3)^i ] + g^(1)(T) + g^(2)(T, J),   all coupled coefficients C_ij with i ≠ 0 and j ≠ 0 being zero.   (81)
That is, a generalized Mooney-Rivlin material [START_REF] Kumar | Hyperelastic mooney-rivlin model: determination and physical interpretation of material constants[END_REF] .
In the general case, the integral expressions given in section 3.2 yield the exact finite strain isotropic thermo-elastic constitutive law.
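In practice, the polynomial idealization (79) can be obtained by a least-squares fit restricted to odd powers of the slip. The snippet below illustrates this on synthetic shear data generated only for the example; the recovered coefficients µ_i are the ones feeding the generalized Neo-Hookean expansion discussed above.

import numpy as np
gamma = np.linspace(-0.5, 0.5, 101)
tau_meas = 80e9 * gamma + 3e9 * gamma**3          # synthetic stand-in for a measured curve
odd_basis = np.stack([gamma**(2 * i + 1) for i in range(3)], axis=1)
mu_i, *_ = np.linalg.lstsq(odd_basis, tau_meas, rcond=None)
print(mu_i)   # approximately [8.0e10, 3.0e9, 0.0]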
Conclusion
We have developed a complete finite-strain isotropic thermo-elastic model without any assumption concerning the form of the Helmholtz free energy. The identification of the latter relies on the successive realization of four elementary experiments, and more precisely the measurement of: a massic heat, a pressure, and two shear stresses. Experiments must be carried many times in order to scan all the accessible values of the state variables e.g. the double shear test P (4) must be performed for a whole collection of temperature T , volume expansion J, and slip γ s .
Some idealizations of the experimental stresses have been proposed. They can be replaced by any realistic curves, or even be given by the upscaling results of a large scale molecular dynamic simulation.
The well known Mooney-Rivlin model, and consequently the Neo-Hookean, are special cases of our methodology. They are derived from an assumption concerning the shear stress along the P (3) and P (4) paths.
The present work could admit several natural extensions among which:
• The description of an anisotropic medium by the introduction of structural tensors {N • t } representing the actual directions of anisotropy. Enhancing the Helmholtz free energy with these new unknowns necessarily yields cross-invariant effects, reflecting the current orientation of the deformation with respect to the anisotropic directions.
• A finite strain plasticity model with the addition of an objective plastic strain tensor B p in the arguments' list of the Helmholtz free energy. Taking this tensor into account adds 6 state variables.
Three of them are specific to the plastic strain B_p (two if one further assumes that the plastic evolution is isochoric); the three others orient the orthogonal trihedron of principal directions of B and B_p.
• The study of large-amplitude shock waves and more particularly shock tails. Indeed, after a large plastic phase which is out of the scope of this study, the elastic release wave on the shock tail can be considered as large strain. A proper coupling between both, an equation of state driving the pressure, and the deviatoric elastic law presented herein is a very interesting perspective.
III ∂ BI f B ψ , ∂ BII f B ψ = ∂ BII f B ψ ∂ BII B II = B -2 3 III ∂ BII f B ψ , ∂ BIII f B ψ = ∂ BI f B ψ ∂ BIII B I + ∂ BII f B ψ ∂ BIII B II + ∂ BIII f B ψ , ∂ BIII f B ψ = -∂ BI f B ψ B I 3B III -∂ BII f B ψ 2B II 3B III + ∂ BIII f B ψ . (B.1)
And the elastic law (6) now reads:
σ = 2ρ 0 √ B III B III ∂ BIII f B ψ -∂ BI f B ψ B I 3 -∂ BII f B ψ 2B II 3 1 + B -1 3 III ∂ BI f B ψ + B 1 3 III B I B -2 3 III ∂ BII f B ψ B -B -2 3 III ∂ BII f B ψ B 2 , σ = 2ρ 0 √ B III B III ∂ BIII f B ψ -∂ BI f B ψ B I 3 -∂ BII f B ψ 2B II 3 1 + ∂ BI f B ψ + B I ∂ BII f B ψ B -∂ BII f B ψ B 2 . (B.2)
This formula is more elegantly written using the following relations, which hold for any tensor of order 2:
X_II = (1/2)(X_I² - Tr X²) ⇒ -(2/3) X_II 1 + X_I X - X² = -(1/3)(X_I² - Tr X²) 1 + X_I X - X² = -Dev( X² - Tr(X) X ).   (B.3)
σ = (2ρ_0/√B_III) [ B_III ∂_{B_III} f̃_ψ 1 + ∂_{B̄_I} f̃_ψ Dev B̄ - ∂_{B̄_II} f̃_ψ Dev( B̄² - Tr(B̄) B̄ ) ],
σ = (2ρ_0/√B_III) [ B_III ∂_{B_III} f̃_ψ 1 + B_III^(-1/3) ∂_{B̄_I} f̃_ψ Dev B - B_III^(-2/3) ∂_{B̄_II} f̃_ψ Dev( B² - Tr(B) B ) ].   (B.4)
Let (λ_i), i = 1..3, be the principal stretches of the isochoric Finger tensor B̄. We further assume that the invariant γ_s remains fixed. The following system is necessarily verified by the triplet (λ_i):
λ_1² + λ_2² + λ_3² = γ_s² + 3 = B̄_I,   (C.1)
λ_1² λ_2² + λ_2² λ_3² + λ_1² λ_3² = B̄_II,   (C.2)
λ_1 λ_2 λ_3 = 1.   (C.3)
By combining the equations (C.1) and (C.3), we deduce:
λ_2⁴ + (λ_1² - B̄_I) λ_2² + 1/λ_1² = 0.   (C.4)
(C.4) has a positive or zero discriminant:
∆ = (λ_1² - B̄_I)² - 4/λ_1² ≥ 0 ⇒ λ_2±² = [ B̄_I - λ_1² ± √( (λ_1² - B̄_I - 2/λ_1)(λ_1² - B̄_I + 2/λ_1) ) ] / 2.   (C.5)
The positivity of the term under the square root implies that (C.4) has real solutions if and only if the eigenvalue λ 1 evolves in the interval:
λ ∈ [X 2 , X 0 ] with X k = 3 √ α cos 1 3 arccos -(α) -3 2 + 2kπ 3 with α = B I 3 ≥ 0. (C.6)
We can then express γ_t as a function of the unique variable λ_1:
B̄_II(λ_1) = λ_1² λ_2±² + 1/λ_1² + 1/λ_2±² = λ_1² B̄_I - λ_1⁴ + 1/λ_1² ⇒ γ_t(λ_1) = γ_s - √( B̄_II(λ_1) - 3 ).   (C.7)
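For a given γ_s, the evolution domain of γ_t can also be obtained numerically by sweeping λ_1 and evaluating (C.7), keeping only the values for which the discriminant of (C.5) is non-negative; the brute-force scan below is only an illustration and avoids hard-coding the closed-form bounds (C.6).

import numpy as np

def gamma_t_range(gamma_s, n=200001):
    BI = gamma_s**2 + 3.0
    lam1 = np.linspace(1e-2, np.sqrt(BI), n)       # lam1^2 cannot exceed B_I
    a = lam1**2 - BI
    disc = a**2 - 4.0 / lam1**2
    ok = disc >= 0.0                               # admissibility condition, cf. (C.5)-(C.6)
    lam1, a, disc = lam1[ok], a[ok], disc[ok]
    lam2sq = 0.5 * (-a + np.sqrt(disc))            # '+' root of (C.4); the '-' root swaps lam2 and lam3
    lam3sq = 1.0 / (lam1**2 * lam2sq)
    BII = lam1**2 * lam2sq + lam2sq * lam3sq + lam1**2 * lam3sq
    gt = gamma_s - np.sqrt(BII - 3.0)
    return gt.min(), gt.max()

# full evolution domain at gamma_s = 3 (cf. Figure C.3a); the double shear test only spans the subset (C.8)
print(gamma_t_range(3.0))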
Fixing γ_s successively, we plot the evolution domain of γ_t (see Figure C.3a) and compare it to the set that can be scanned by the double shear test:
γ_t ∈ [ γ_s (1 - √(1 + γ_s²/4)), 0 ].   (C.8)
Therefore, we present in the following Appendix D a more general motion which scans the whole domain of variation of γ_t. We define as "isochoric double shear-traction test" the motion given by the following gradient:
F = J^(1/3) [ [1/√e, γ, 0], [0, 1/√e, γ_⊥], [0, 0, e] ] ⇒ B̄ = [ [1/e + γ², γ/√e, 0], [γ/√e, 1/e + γ_⊥², γ_⊥ e], [0, γ_⊥ e, e²] ].   (D.1)
The "double shear test" mentioned in Section 3.1.4 corresponds to e = 1. We also denote as "simple tensileshear test" the case γ ⊥ = 0.
The square of the isochoric Finger strain tensor is equal to:
B̄² = [ [ (1/e + γ²)² + γ²/e,  (γ/√e)(2/e + γ² + γ_⊥²),  γ γ_⊥ √e ],
       [ (γ/√e)(2/e + γ² + γ_⊥²),  γ²/e + (γ_⊥ e)² + (1/e + γ_⊥²)²,  γ_⊥ e (1/e + γ_⊥² + e²) ],
       [ γ γ_⊥ √e,  γ_⊥ e (1/e + γ_⊥² + e²),  (γ_⊥ e)² + e⁴ ] ].   (D.2)
It's worth noticing a non-zero B 2 13 deformation which induces a purely non-linear stress. This out-of plane shear stress is relevant for identifying the dependence of the free energy on the second invariant γ t .
A simple computation of the characteristic polynomial of the isochoric Finger tensor shows that the fundamental invariants are equal to:
B̄_I = γ² + γ_⊥² + e² + 2/e   and   B̄_II = γ² (γ_⊥² + e²) + 2e + 1/e² + γ_⊥²/e.   (D.3)
The conservation of the parameter γ s is ensured if at each instant, the following equality holds:
B̄_I = γ² + γ_⊥² + e² + 2/e = γ_s² + 3,   with e(0) = 1, γ(0) = γ_s, γ_⊥(0) = 0.   (D.4)
The motion has two independent parameters: a shear γ ⊥ and a traction magnitude e. The value of γ is adjusted so as to guarantee at each instant the equality (D.4).
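The kinematics (D.1)-(D.4) are easy to verify numerically; the short check below builds the isochoric gradient for arbitrary (e, γ_⊥), adjusts γ to enforce (D.4), and confirms both the unit determinant and the conservation of γ_s. The chosen values of e and γ_⊥ are arbitrary.

import numpy as np

def F_bar(e, gamma, gamma_perp):
    # isochoric part of the double shear-traction gradient, cf. (D.1)
    return np.array([[1.0 / np.sqrt(e), gamma,             0.0],
                     [0.0,              1.0 / np.sqrt(e),  gamma_perp],
                     [0.0,              0.0,               e]])

gamma_s, e, gamma_perp = 3.0, 1.2, 0.4
gamma = np.sqrt(gamma_s**2 + 3.0 - gamma_perp**2 - e**2 - 2.0 / e)   # enforces (D.4)
F = F_bar(e, gamma, gamma_perp)
B = F @ F.T
print(np.linalg.det(F), np.trace(B) - 3.0 - gamma_s**2)   # ~1 and ~0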
We then deduce, by simple calculations, the value of the invariant γ_t:
γ_t(e, γ_⊥) = γ_s - √( (γ_⊥² + e²)(B̄_I - γ_⊥² - e² - 2/e) + 2e + 1/e² + γ_⊥²/e - 3 ).
The identification of the constitutive law with respect to the second invariant γ_t in Section 3.1.4 was built upon the double shear test. The reason is quite simple: the single tensile-shear test, although scanning the whole domain of γ_t, does not allow to simply identify the dependence on γ_t, because the non-linear stress σ_13 vanishes. Therefore, measuring either σ_12 or σ_13 will couple g^(4) to the other functions. The most efficient test, but also the hardest to perform experimentally, shall set a small shear strain γ_⊥ and then perform a tensile test by varying e. Hence σ_13 would be non-zero and one could characterize f̃_ψ for γ_t ≥ 0.
The partial derivative of the massic internal energy with respect to the volume expansion J follows from the volumetric/isochoric split:
∂e_m/∂J = -(1/ρ_0) [ p^(2)exp(T, J) - T ∂_T p^(2)exp(T, J) ].
The partial derivatives with respect to γ_s and γ_t are written as:
∂e_m/∂γ_s = (1/ρ_0) [ τ^(3)exp(T, γ_s) - T ∂_T τ^(3)exp(T, γ_s) ] + (1/ρ_0) ∫_0^{γ_t} ∂_{γ_s} [ (γ_s - τ)/√((γ_s - τ)² - γ_s²) · ( τ^(4)exp - T ∂_T τ^(4)exp )(T, γ_s, τ) ] dτ,
∂e_m/∂γ_t = (1/ρ_0) (γ_s - γ_t)/√((γ_s - γ_t)² - γ_s²) · ( τ^(4)exp - T ∂_T τ^(4)exp )(T, γ_s, γ_t).   (E.3)
Figure 1: Isochoric planar simple shear kinematics
Figure 2: Double shear kinematic
Appendix C. Evolution domain of γ_t
Some values of the shear parameter γ_t are not accessible through the double shear test (see Figure C.3b).
Figure C.3: Value of the second invariant γ_t for various γ_s
The existence of a real slip γ, i.e. γ² ≥ 0 in (D.4), requires
B̄_I - γ_⊥² - e² - 2/e ≥ 0   and   e > 0.   (D.6)
The existence conditions (D.6) define a sub-domain of R². At γ_⊥ ≤ γ_s fixed, the range of variation of e is analytic and simply given by e ∈ [e_2, e_0], with e_k given by (D.7). The values of the parameter γ_t that can be scanned through the simple tensile-shear test (γ_⊥ = 0) are:
γ_t(e, 0) = γ_s - √( e² (B̄_I - e² - 2/e) + 2e + 1/e² - 3 ) = γ_s - √( e² B̄_I - e⁴ + 1/e² - 3 ),   e ∈ [e_2(0), e_0(0)].   (D.8)
Whereas for the double-shear test (e = 1):
γ_t(1, γ_⊥) = γ_s - √( γ_s² + γ_⊥² (γ_s² - γ_⊥²) ).   (D.9)
We represent in Figure D.4 the values of γ_t covered by the double shear-traction test with γ_s = 3. The red line corresponds to the single tensile-shear test (γ_⊥ = 0), while the orange line corresponds to the double shear test (e = 1). The widest γ_t interval is scanned when γ_⊥ = 0 (see Figure D.4), i.e., during a single tensile-shear test. We further represent the γ_t that can be scanned by both the tensile-shear (Figure D.5b) and double shear (Figure D.5a) tests for various γ_s. We note that the double-shear test only explores negative values of γ_t while the tensile-shear test scans the whole domain (see Figure D.6, or more simply the equality between (C.7) and (D.8) as e = λ_1).
Figure D.4: Value of γ_t, γ_s = 3 fixed, for different values of e and γ_⊥
Figure D.6: Upper and lower bound of γ_t
Acknowledgement
The author would like to thank his colleagues from the CEA for their fruitful remarks, but also Pr. Jean Garrigues, whose rigorous work gave rise to this article.
Appendix A. Justification of the finite strain isotropic elastic law
The definition of the Finger tensor is B = F • F ⊤ , hence its time derivative reads:
where D (resp W ) denotes the symmetric (resp skew-symmetric) part of the spatial velocity gradient.
Using the combined product
, we get successively, ∀n ∈ Z:
Orthogonality of the antisymmetric and symmetric tensors for the doubly contracted product has been used to simplify (A.2).
The time derivatives of the coefficients of the characteristic polynomials of B are (see also [START_REF] Garrigues | Algèbre et analyse tensorielles pour l'étude des milieux continus[END_REF][START_REF] Garrigues | Cinématique des milieux continus[END_REF]):
The nullity of intrinsic dissipation (4), is written using the coincident isotropic function ( 5):
Injecting both (A.3), (A.2) in (A.4) and using the fact that σ and f B ψ are state functions so that they do not depend on D, the nullity of (A.4) ∀T ∀D yields the elastic law [START_REF] Cassels | Nonlinear elasticity: theory and applications[END_REF].
Appendix B. Thermo-elastic law with respect to isochoric invariants
The partial derivatives of the new function f B ψ defined by [START_REF] Dalemat | Une experimentation reussie pour l'identification de la reponse mecanique sans loi de comportement: Approche data-driven appliquee aux membranes elastomeres[END_REF] are simple algebraic calculations: |
04109659 | en | [
"info",
"info.info-cv",
"info.info-lg"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04109659/file/CTSVG2023.pdf | Sandeep Manandhar
email: [email protected]
Auguste Genovesio
email: [email protected]
An Invertible Video GAN in a Conditional Setting
Keywords: conditional video generation, temporal style, dynamics transfer
Our proposed method allows for independent manipulation of content and temporal styles in videos. Top row: our method can effectively transfer learnt motion to target actors who were not seen performing the act during the training. Bottom row: the facial expression can be gradually changed along the time direction by interpolating the label embedding, while preserving the lip motion.
Introduction
Image synthesis has seen significant advancements with the development of generative models. However, generative models of videos have not been as successful, and controlling the dynamic generation process has been a major challenge. This is largely due to the complex spatio-temporal relationships between content/actors and dynamic/actions, which makes it difficult to synthesize and control the dynamics independently. Several methods have been proposed to address this challenge, each with their own design principles. Broadly speaking, there are two primary classes of video generative models: 3D models that learn from 2D+time volumetric data by employing 3D convolutional neural networks (CNNs), and 2D models that generate a sequence of 2D frames while disentangling the spatio-temporal components of a given video distribution. Many of the earlier methods took the former approach treating each video clip as a point in latent space, thus making the manipulation in such space hardly possible. The latter approach is not only more resource-efficient, but also allows for greater control over the generation process, as demonstrated by [START_REF] Xu | From continuity to editability: Inverting gans with consecutive images[END_REF][START_REF] Zablotskaia | Dwnet: Dense warp-based network for poseguided human video generation[END_REF][START_REF] Zhao | Thin-plate spline motion model for image animation[END_REF]. However, these methods require some pre processing (optical flow, pose information) to manipulate the generated videos.
In their work, [START_REF] Grathwohl | Disentangling space and time in video with hierarchical variational auto-encoders[END_REF] introduced a variational encoder for visual learning, which assumes that higher-level semantic information within a short video clip can be decomposed into two independent sets: static and dynamic. With similar notion, [START_REF] Emily | Unsupervised learning of disentangled representations from video[END_REF] employed two separate encoders to produce content and pose feature representations. Pose features are processed by an LSTM to predict future pose information which is then used along with the current content information to generate the next frame. The idea of treating content and motion information independently has laid a foundation for many works in video generation.
Instead of considering a video as a rigid 3D volume, one can model it as a sequence of 2D video frames x(t) ∈ R 3×H×W , where t is the temporal point, (H, W ) are the height and the width of the video frame. An image generator G(z) can be trained to produce an image x ′ ∼ x(t) from a vector z coming from a latent space Z ∈ R d , where d < H × W . However, the problem at hand is to come up with a sequence of z(t) that can be fed into G(z) to produce a realistic video frame sequence. And, if such z(t) can be obtained, how can we manipulate the video generation process?
The authors of [START_REF] Saito | Temporal generative adversarial nets with singular value clipping[END_REF] proposed to first map a latent vector to a series of latent codes using a temporal generator. An image generator would then use the set of codes to output video frames. MOCOGAN, [START_REF] Tulyakov | Mocogan: Decomposing motion and content for video generation[END_REF], on the other hand proposed to decompose the latent space Z into two independent subspaces of content Z c and motion Z m . Z c is modeled by the standard Gaussian distribution, whereas Z m is modeled by a recurrent neural network (RNN). The content code remains the same for a generated video, while motion codes varies for each generated frames. MOCOGAN-HD [START_REF] Tian | A good image generator is what you need for high-resolution video synthesis[END_REF] and StyleVideoGAN [START_REF] Fox | Stylevideogan: A temporal generative model using a pretrained stylegan[END_REF] took advantage of a pretrained Style-GAN2 [START_REF] Karras | Analyzing and improving the image quality of StyleGAN[END_REF] image latent space and proposed to traverse in the latent space using RNNs to produce video frames.
Interestingly, in the context of a pretrained StyleGAN2 network, one can perform GAN inversion [START_REF] Xu | From continuity to editability: Inverting gans with consecutive images[END_REF] on an image sequence to obtain its latent representation. StyleGAN2 produces a continuous and consistent latent space, where nearby latent vectors map to similar realistic images. Taking advantage of this property, the latent vector obtained by optimization from the previous frame can be used as the starting point to search for the latent vector of the next frame, thus optimizing for minor changes. Upon simple linear projection (such as PCA) of the latent trajectory of a movie optimized in such a manner, we can observe that the higher components are similar to cosine waves (see supplementary material). [START_REF] Hess | Convergence of sampling in protein simulations[END_REF] also made this observation in the context of protein trajectory simulation, where he finds that the cosine content of the principal components is negatively related to the randomness of the simulation. In the case of optimized vectors corresponding to the inverted images, they are correlated. Hence, the waves are obvious and visible. This hints that sinusoidal bases could naturally facilitate training of a StyleGAN generator to produce image sequences.
To this end, we propose a temporal style generator in order to generate videos using StyleGAN2's sythesis network. Alongside the StyleGAN2's style space, which we treat as the content space, we use a time2vec [START_REF] Seyed Mehran Kazemi | Time2vec: Learning a vector representation of time[END_REF] network to introduce a temporal embedding from where the temporal styles will be generated. time2vec network provides a learnable Fourier bases and carries an additional linear term which prevents the encoding from being cyclic. Main contributions of our work are as follow:
• We integrate a novel temporal latent space in Style-GAN's generator network using a sinusoid-based temporal embedding.
• We evaluate our method against prevalent methods in an unconditional setting, demonstrating a significant enhancement of video quality.
• We propose several approaches to rigorously evaluate conditional video generation through contexts such as talking faces and human activities.
• We demonstrate the benefits of style-based temporal encoding to independently transfer dynamics or content, editing motion mid-sequence or reuse the dynamic extracted from a real video by GAN-inversion of temporal codes.
We trained our model on videos of human facial expression (MEAD [START_REF] Wang | Mead: A large-scale audio-visual dataset for emotional talking-face generation[END_REF]) and human activities (UTD-MHAD [2]). Besides the Fréchet video distance (FVD) [32] metric, we conducted human evaluation focused on the realism of the generated videos using the MEAD dataset. Additionally we proposeed LiA (Lips Area) metric to evaluate the talking face videos from the MEAD dataset. We also benchmarked our results using publicly available method for human action recognition with UTD-MHAD dataset.
Related work
The domain of video synthesis consists of tasks such as future frame prediction [START_REF] Finn | Unsupervised learning for physical interaction through video prediction[END_REF][START_REF] Mathieu | Deep multi-scale video prediction beyond mean square error[END_REF]34,[START_REF] Emily | Unsupervised learning of disentangled representations from video[END_REF], frame interpolation [START_REF] Niklaus | Video frame interpolation via adaptive convolution[END_REF]14,[START_REF] Xiang | Zooming slow-mo: Fast and accurate one-stage space-time video super-resolution[END_REF] and in our context, video generation from scratch [33]. Video generation follows the success of image generative adversarial models which can produce highly controllable images of remarkable quality [START_REF] Goodfellow | Generative adversarial nets[END_REF]. Much focus has been given to temporal extension of such GANs. [START_REF] Tulyakov | Mocogan: Decomposing motion and content for video generation[END_REF][START_REF] Saito | Temporal generative adversarial nets with singular value clipping[END_REF][START_REF] Saito | Train sparsely, generate densely: Memoryefficient unsupervised training of high-resolution temporal gan[END_REF][START_REF] Munoz | Temporal shift gan for large scale video generation[END_REF] have adopted the strategy to use content and motion codes by leveraging on 2D image generator. MOCOGAN-HD [START_REF] Tulyakov | Mocogan: Decomposing motion and content for video generation[END_REF] used a pretrained StyleGAN2's network [START_REF] Karras | Analyzing and improving the image quality of StyleGAN[END_REF] and trained a RNN model to simply explore along the principal components of the latent space. Recently, [START_REF] Brooks | Generating long videos of dynamic scenes[END_REF] also proposed a style-based temporal encoding for a 3D version of StyleGAN3's synthesis network [START_REF] Karras | Alias-free generative adversarial networks[END_REF] where temporal codes are generated by a noise vector filtered by a fixed set of temporal low pass-filters. [START_REF] Yu | Generating videos with dynamics-aware implicit generative adversarial networks[END_REF] used implicit neural representation (INR) [4,[START_REF] Chen | Learning continuous image representation with local implicit image function[END_REF] to model videos as continuous signal. Concurrently, StyleGAN-V [START_REF] Skorokhodov | Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2[END_REF], relied on training a modified StyleGAN2 generator where they propose a INR-inspired positional embedding for the time-points of the video frames. Both of these methods produce videos with arbitrary frame rates. Our method is related to StyleGAN-V as it uses StyleGAN2 synthesis network. However, their approach differs from ours as they concatenate their temporal codes with the constant input tensor of the synthesis network, whereas we treat our temporal codes as style vectors.
Conditional generative models are another exciting field of research. Besides explicit vector based labels, text, audio and images have been used in conditioning for frame generation. [START_REF] Yaohui | Imaginator: Conditional spatio-temporal gan for video generation[END_REF] proposes a simple and efficient 3D CNN based generator that takes a single image and a conditioning label as an input to generate videos. [START_REF] Wiles | X2face: A network for controlling face generation using images, audio, and pose codes[END_REF] takes a source frame with one human face and generates video that has pose and expression of another face in a driving video. [START_REF] Ting-Chun | Video-tovideo synthesis[END_REF] conditioned their video generation on semantic maps where objects present in the frame are labelled with colors. The network can also take information like optical flow and pose information during the training. [START_REF] Songsri-In | Face video generation from a single image and landmarks[END_REF] generated videos of talking face using sequence of facial landmarks of target face. [START_REF] Zhao | Thin-plate spline motion model for image animation[END_REF] is yet another image-conditioned video generation model, which has dedicated networks for motion prediction and keypoint detection. However, it is not straightforward to generated videos with arbitrary frame rates with image-conditioned models. [START_REF] Ho | Video diffusion models[END_REF] proposed a 3D U-Net based diffusion model for text to video generation. Following this [START_REF] Singer | Make-a-video: Text-to-video generation without text-video data[END_REF] proposed another text-to-video generation method that makes use of efficient 3D convolutions and temporal attention modules. They also added an embedding for specifying frame rates.
Method
Our method contains two main components: (1) a temporal style generator that drives StyleGAN2's synthesis network to produce frames in time-conditioned manner, (2) two discriminators to impose content consistency and temporal consistency. Our generator is further conditioned on actor identity and action classes, though it can be used in unconditional setting.
Generator
Our generator G is a conditional generator conditioned on actor-id and action-id. We equip the generator with appropriate embeddings for both conditions. The two embeddings are summed together to obtain a content style vector w c , which defines the general appearance of the actor along with the nature of action. Now, we define our novel temporal generator F t . It maps a random vector z m , using a 4-layered MLP to a k dimensional motion style vector m t . The vector scales the waves provided by an independant time embedding. The time embedding time2vec maps a time value t to a k dimensional vector v(t). The resulting product is w t m , which we refer to as temporal style vector. To generate a frame at time t, we concatenate both styles as [w c , w t m ] before injecting them to the synthesis blocks. During the training, we generate three consecutive frames for each video element per batch. The triplets share the same vector m t while their time2vec embedding are generated from their respective time points. More formally,
m t = F t (z m ), w t m = m t * v(t), w t+1 m = m t * v(t + 1). (1)
A basic structure of the generator network is shown in Figure 2. To ensure the smooth integration of action-id embeddings, we employ a ramp function [START_REF] Shahbazi | Collapse by conditioning: Training classconditional GANs with limited data[END_REF] that linearly scales the vectors derived from the action-id embedding with a factor ranging from 0 to 1, in a scheduled manner. Time2vec: We used k -1 sinusoidal bases accompanied by a linear term to create our temporal embedding as seen in Eq. 2, where the parameters w j and ϕ j are trainable. By restricting the dynamics to a fixed set of sinusoidal functions, we can avoid overfitting to the training data, since the model has a limited capacity to represent complex dynamics. This makes the model more robust and generalizable to unseen data. Moreover, since sinusoidal functions are periodic, they can naturally capture cyclic patterns in the data (e.g. lip movement, hand waving).
v_j(t) = ω_j t + ϕ_j,  if j = 0;    v_j(t) = sin(ω_j t + ϕ_j),  if 1 ≤ j ≤ k - 1.   (2)
The linear term represents the time direction. The time t does not need to be discrete as the time2vec embedding is a continuous domain. This allows us to generate videos with arbitrary frame rates. However, during the training we use integer valued time-points. We note that StyleGAN-V's time representation lacks the linear term, which might hint to why its generation is plagued by unnatural repetitive motion despite its elaborate interpolation scheme. Unlike StyleGAN-V, we have chosen to stay closer to StyleGAN's original principal, which is to allow variations in input only through the style vectors rather than the traditional input tensor. Furthermore, our time embedding fundamentally differs from StyleGAN-V's in its design. StyleGAN-V requires interpolation of multiple noise vectors to compute a single trajectory. Additionally, the wave parameters involved are dependent on the noise samples. In contrast, our wave parameters are independently learned and are fixed during inference. The latent vector m t interacts with the waves only as an amplitude scaling factor. This makes our time representation more compact and simpler. We leverage this representation to perform GAN-inversion for the motion style using off-the-self methods, which is not possible with StyleGAN-V representation. (see )
Discriminators
Shuffle discriminator: Consistency in content over time is a crucial aspect of video generation. Although the time2vec module in G provides temporal bases to guide motion learning, it does not ensure consistency in content across the sequence. In order to address this, we design a 2D-CNN based discriminator D s (see Figure 2) that evaluates whether the frame features are consistent or not. During the training of D s , each batch element consists of two frames. For the fake adversarial example, pairs of frames are shuffled among the batch to contain two different contents. In contrast, for the real example, the pairs are consecutive frames drawn from real videos. The feature maps of the pairs undergo a series of 2D convolutions, are flattened, and then concatenated into a single vector before passing through a fully connected layer. During the training of G, a batch of unshuffled fake pairs is input to D s .
Conditional discriminator: To ensure temporal consistency in the generated videos, we adopt a time-conditioned discriminator, inspired by prior works such as [START_REF] Miyato | cGANs with projection discriminator[END_REF][START_REF] Yu | Generating videos with dynamics-aware implicit generative adversarial networks[END_REF][START_REF] Skorokhodov | Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2[END_REF]. This discriminator, denoted as D t , takes in a batch of video triplets along with their respective time information, and learns to distinguish real videos from fake ones based on their temporal coherence. Then the video frames are processed by a set of 2D CNNs and a linear layer d t (.) to produce frame features. These features are then concatenated following the temporal order. D t is equipped with another time2vec module which enforces learning of a time representation. The temporal encoding for the three input time points are also concatenated. The dot product of these concatenated vectors is then computed, generating a final score [START_REF] Miyato | cGANs with projection discriminator[END_REF].
In conditional setting, actor-id and action-id embeddings are introduced in D t as well. As shown in Figure 2, two additional linear layers (d action (.), d actor (.)) are present at the level of d t (.), which produce actor and action representations. Dot products are computed between the corresponding embedded vector and the feature vector. The final output of the discriminator is the weighted sum of the three dot products. We use the same ramp-up function to scale d action (.) as in the generator [START_REF] Shahbazi | Collapse by conditioning: Training classconditional GANs with limited data[END_REF].
Experimental settings
We perform experiments with our video generator in both conditional and unconditional settings. Though our unconditional video generation performs competitively against the baseline methods, we focus on the conditional generation as it permits for better disentanglement between actions and actors, as well as better control over the generation process.
Datasets
We have used three publicly available video datasets with their labels: MEAD [START_REF] Wang | Mead: A large-scale audio-visual dataset for emotional talking-face generation[END_REF], RAVDESS and UTD-MHAD [2]. Our MEAD training set contains 30 individuals talking while expressing 8 different emotions (18883 videos). We train our network only with the sequences where generic sentences are being recited. We set aside the emotion specific dialogues as unseen test sequences. For the training, we chose 128 × 128 image dimension and between 60 -170 frames as the dataset contains videos of variable length.
The RAVDESS dataset contains 24 talking faces also with 8 different emotions (not same categories as MEAD). To create a test set, we exclude sequences of 7 different emotions for four individuals. Though the dataset set contains only two dialogues, compared to over 20 dialogues in MEAD, RAVDESS contains more variation in head movements of the actors.
We used all the RGB videos provided with UTD-MHAD. It contains 754 videos of 8 individuals performing 27 different actions. The video frame size is 128 × 128 with variable video length (33 -81) as provided in the dataset. We created a test set by excluding videos of each action sequence performed by few selected target actor from the training set. Thus, we train the network to learn motion and content independently.
Baseline Methods
To demonstrate that our generator does not falter in video quality, we choose MOCOGAN-HD [START_REF] Tulyakov | Mocogan: Decomposing motion and content for video generation[END_REF] and StyleGAN-V [START_REF] Skorokhodov | Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2[END_REF] as our baselines in the unconditional setting, as they both use StyleGAN2's image synthesizer. For the conditional video generation, we choose ImaGINator [START_REF] Yaohui | Imaginator: Conditional spatio-temporal gan for video generation[END_REF]. Though it requires an input image to generate videos, it is free of any additional representation like pose or motion maps. For MOCOGAN-HD, we first trained a StyleGAN2 network on the MEAD dataset with 256 × 256 image size and a batch size of 16 for up to 150K iterations. Then the MOCOGAN-HD network was trained with the hyper-parameters set as suggested in the authors' implementation. For StyleGAN-V, we trained on both datasets with images of dimension 256 × 256, with a batch size of 64 and with up to 25000K images according to the authors' implementation. We adapted ImaGINator's network to output images of size 128 × 128 × 32 (as it was originally 64 × 64 × 32). We trained it on both datasets in a conditional manner for up to 5K epochs.
Figure 2: Description of the proposed model: a temporal style generator F_t equipped with a time2vec module generates the motion code. In a conditional setting the content style generator F_u is replaced by F_c, which consists of two independent actor and action embeddings. The corresponding embeddings are activated in D_t's final layer. A ramp function [START_REF] Shahbazi | Collapse by conditioning: Training classconditional GANs with limited data[END_REF], which gradually increases from 0 to 1, is used to scale the vectors coming from the action embedding in both F_c and D_t.
Training
We trained our method on a single Nvidia's A100 GPU with 80GB VRAM. The training image size was 128 2 with a batch size of 16 triplet frames. The hyperparameters for the generator, discriminators and the optimizers were kept the same as suggested in [START_REF] Karras | Analyzing and improving the image quality of StyleGAN[END_REF]. The transition factor λ of action-id vectors in both generator and discriminator started at 4000 iterations and ended at 6000 iterations, which was set empirically. We trained our model for both datasets for up to 150k iterations which took about 2 weeks.
Results
Video quality is improved
Table 1 reports the FVD scores of the generated videos by all the methods. Our conditional method (Ours(C)) scores the best which is in agreement with the videos provided in the supplementary data. Few frames of the generated video samples are depicted in Figure 3. Motion artifacts are strongly present in MOCOGAN-HD and ImaGINator's output. StyleGAN-V has relatively higher quality videos but suffers from erratic, repeated motion. However, our methods (both conditional Ours(C) and unconditional Ours(UC)) produce far better results. To assess the preservation of the actor's identity, we computed the ArcFace [START_REF] Deng | Arcface: Additive angular margin loss for deep face recognition[END_REF] similarity between the frames of the generated videos. ArcFace computes the cosine similarity between the feature vector of the first frame and the successive ones obtained from a network trained for face recognition. As seen in Table 1, our methods preserve the appearance of the actor throughout the sequence while MOCOGAN-HD is not consistent generating the same face over the sequence. The authors of [START_REF] Skorokhodov | Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2[END_REF] also made this observation.
The FVD score is widely used to evaluate video quality. However, as it is a comparison of distributions of representations in a high dimensional space, it may not accurately characterize the true quality of the video. The same can be said about the ArcFace score. Furthermore, these metrics can be influenced by factors such as spatial resolution, video length, etc. To complement these metrics, a human evaluation was conducted to assess the realism of the generated videos. To conduct the human evaluation, we generated 10 sets of videos, each consisting of 6 videos (1 real video and 5 generated videos using the proposed methods and the baselines). We asked 25 university students and researchers to watch 3 randomly selected sets and rank the 6 videos based on their perceived realism. The ranking distributions of the survey is presented in Figure 4. Notably, videos generated with Ours(C) and Ours(UC) models consistently ranked higher than those generated using the baseline methods. This demonstrates that our method produces more realistic videos compared to existing approaches.
Temporal style encodes temporal semantic
While we demonstrated that the video quality is improved, the aforementioned metric cannot assess the preservation of temporal semantics across different sequences. We then propose a new metric named LiA (for Lips Area) to evaluate our ability to reproduce the semantic of talking-face videos while changing the content such as the actor-id or actionid (emotion). LiA value computes the polygonal area of the lips detected using face landmark detectors [START_REF] King | Dlib-ml: A machine learning toolkit[END_REF]. A LiA signal is then obtained by computing LiA value sequentially for each frames of a generated or a real video. Though there are other factors such as eye brows and head orientation that contribute to the overall dynamics of a talking face, we focus on lip motion as it appears to be the most dynamic part of the face on this dataset. We generated 100 different sequences using different content styles and the same temporal style for the baseline methods. The average correlation coefficient rt of the LiA signals of the generated videos by all the methods are reported in Table 1. We observed that even for the same temporal style, the ImaG-INator produced different motion pattern depending on the
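Concretely, a LiA value reduces to the area of the polygon formed by the detected lip landmarks. A small self-contained sketch is given below; the landmark extraction itself is left abstract since it relies on an external face-landmark detector, and the function names are ours.

import numpy as np

def polygon_area(points):
    # shoelace formula; points: (n, 2) array of lip-contour landmark coordinates
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def lia_signal(landmark_sequence):
    # one LiA value per frame; landmark_sequence: list of (n, 2) lip-contour arrays
    return np.array([polygon_area(p) for p in landmark_sequence])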
Generation of unseen coupled conditions
We generate videos of unseen actor-action combination only present in the test set. Figure ?? shows a few selected frames of real videos, and generated videos by ImaGINator, and our conditional method. Our method is able to successfully transfer a learnt action to an actor who was never seen performing this action in the training set. To evaluate our method in this dataset, we additionally train an action recognition model using the implementation of [START_REF] Duan | Revisiting skeleton-based action recognition[END_REF]. We train the model on skeletal key points extracted from the video frames of our training set which contains 27 different actions. The trained model was able to achieve (77%, 100%) top-1 and top-3 accuracies on the real test cases. In our generated case, it was able to achieve (68.5%, 93.5%) top-1 and top-3 accuracies. We present the confusion matrices for 27 different classes in the supplementary data. Our model not only generated high-quality videos, as shown in Table 2, but also accurately captured many actions. On the other hand, ImaGINator performed poorly on this dataset, with evidence of mode collapse in the type of motion despite the conditioning during inference. We have included the generated videos in the supplementary data. Table 3: FVD and classification accuracy for UTD-MHAD with three different versions of our model.
Interpolating conditions over time
Because our content and motion space is highly disentangled, it is possible to edit the attributes of the videos over time. We choreograph a sequence where actors change their expression over time by a linear interpolation in the actionid embedding space. The interpolation does not interfere with the general motion of the face. (See supplementary videos)
Motion recovery with GAN inversion
A talking face video can be generated with random temporal style. However, the apparent motion of the mouth may not recite any plausible sentence. We show that with our generator, it is possible to obtain a temporal style, free of any motion computation or landmark point detection for reciting a given sentence by simply using GAN-inversion. In the following experiments, we invert unseen videos from test cases of MEAD and RAVDESS dataset. For the MEAD dataset, we recover the motion from the real video of actor reciting the sentences which were excluded from the training set. We assume that the excluded dialogues carry unseen lip motions and show that our GAN-inversion is capable of recovering them. After fixing the learned actor and emotion labels of the real input sequence, we optimize only for m t (we keep the sinusoidal bases fixed as well). We minimize the LPIPS [START_REF] Zhang | The unreasonable effectiveness of deep features as a perceptual metric[END_REF] and mean squared error losses over the batch of frames. Figure 6a shows an example of LiA signals for real and inverted videos using network trained with k = 63, 127, 255. The more the number of sinusoidal bases is used in the generator, the more faithful the recovered motion is to the real video. We performed the inversion and LiA signal analysis for 39 different emotion specific sentences excluded from the training set (see Appendix for the complete list), and report the average correlation to be 0.6, 0.79 and 0.91 for k = 63, 127, 255 simultaneously, which supports our observation. Furthermore, in Figure 6b, the inversion is able to recover the large movement of head in RAVDESS dataset. The facial structure is further improved using pivotal tuning [START_REF] Roich | Pivotal tuning for latent-based editing of real images[END_REF] where we adjust generator's weight by fixing the previously optimized m t vector. Thus recovered motion in the form of m t can then be transferred to other learned actors of choice. We believe this is a novel way for re-enactment or lip syncing between different individuals and different emotion.
Ablation
In our ablation studies, we investigate the impact of different components on the performance of our model. Using more sinusoidal bases improves the recovered motion with GAN-inversion as discussed in the section before. However, higher number of k leads to minute intermittent motion artefacts of eyes in MEAD dataset. For k = 127, most of the artefacts are unnoticeable. We trained D t using only one time point instead of three time points, which resulted in a decrease in action recognition accuracy from 68% to 57% for unseen conditions. Secondly, we removed D s in the training on the MEAD dataset, which led to an FVD score of 600. We report the affect of tweaking of D t on UTD-MHAD dataset in Table 3. We also examined the effect of using a ramp function to schedule scaling of the action-id vectors. We found that without the ramp function, introducing the action-id at the beginning of the training caused the generator to favor one class over the other, while using the ramp function stabilized the quality of the videos for all classes.
Conclusion
In this study, we proposed a video generation model which produces high quality videos in both conditional and unconditional settings. Through various experiments, we show that the temporal style can independently encode the dynamics of the training data and can be transferred to unseen targets. We demonstrated that it is possible to generate different types of action with high accuracy as seen in UTD-MHAD videos. Our generator produces videos with bet-ter fidelity than the prevalent style-based video generation methods as shown by various metrics as well as human preference score. Though we did not generate high resolution videos, the StyleGAN2's synthesis network allows us to do so as shown by [START_REF] Skorokhodov | Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2[END_REF]. We also provide an example of GANinversion for temporal styles which opens possibilities for GAN-based editing for videos.
Figure 3 :
3 Figure 3: Top panel shows videos generated by our conditional model for various emotions and actors using the same temporal style. The bottom row depicts few sampled frames from videos generated using StyleGAN-V, MOCOGAN-HD, and ImaGINator.
5 )
5 Table2: FVD score for the UTD-MHAD dataset. Though we train StyleGAN-V unconditionally, it serves as a good baseline for assessing the video quality. We also report top-(1,3) action recognition accuracies among 27 action classes.
Figure 4 :
4 Figure 4: Human preference ranking for different videos. Videos generated by our conditional model (Ours(C)) tops the preference over other methods.
Ablation FVD 16
16 accuracy (top-1/top-3)% D t (1 time-point) 179.73 59.2/79.6 D t (3 time-points) 184.55 68.5/93.5 w/o D t 526.21 42.6/80.5
(a) Sit-to-Stand sequence (b) Pick-and-throw sequence
Figure 5 :
5 Figure 5: Dynamics transfer in unseen conditions. Both panels display few frames sampled from videos of real sequence (top), generated sequence by ImaGINator (mid), and our method (bottom). The actors were never seen performing these actions (top) during the training.
Figure 6: (a) The correlation coefficient (ρ) between LiA signals of real and inverted videos suggest that the network with higher number of sinusoidal bases generate more faithful videos. (b) Pivotal tuning further improves the facial structure even though most of the motion is already recovered in the first step.
Table 1 :
1 All the scores pertain to the training with MEAD dataset. FVD 16/64 is computed with 16 and 64 (only for StylgeGANV and ours(C)) frames. rt is the average correlation coefficient of the LiA signals. ArcFace is the average of the cosine similarity between the features of the first frame and the successive frames.
Method FVD 16/64 ↓ rt ↑ ArcFace ↑
ImaGINator 319 0.041 0.93±0.03
MOCOGAN-HD 272 0.52 0.80±0.13
StyleGANV 191/920 0.77 0.92±0.05
Ours(UC) 140 0.79 0.97±0.018
Ours(C) 115/655 0.7 0.96±0.02 |
03594772 | en | [
"info.info-ts",
"math.math-pr"
] | 2024/03/04 16:41:24 | 2017 | https://hal.science/hal-03594772/file/gsi_paper.pdf | Elsa Cazelles
email: [email protected]
Jérémie Bigot
Nicolas Papadakis
Regularized Barycenters in the Wasserstein Space
Keywords: Wasserstein space, Fréchet mean, Barycenter of probability measures, Convex regularization, Bregman divergence
This paper focuses on the convex regularization of Wasserstein barycenters for random measures supported on R^d. We discuss the existence and uniqueness of such barycenters for a large class of regularizing functions. A stability result for regularized barycenters in terms of the Bregman distance associated with the convex regularization term is also given. Additionally, we discuss the convergence of the regularized empirical barycenter of a set of n iid random probability measures towards its population counterpart in the real-line case, together with its rate of convergence. This approach is shown to be appropriate for the statistical analysis of discrete or absolutely continuous random measures. In this setting, we propose an efficient minimization algorithm based on accelerated gradient descent for the computation of regularized Wasserstein barycenters.
Introduction
This paper is concerned with the statistical analysis of data sets whose elements may be modeled as random probability measures supported on R^d. It is an overview of results that have been obtained in [START_REF] Bigot | Penalized barycenters in the Wasserstein space[END_REF]. In the special case of one dimension (d = 1), we are able to provide refined results on the study of a sequence of discrete measures or probability density functions (e.g. histograms) that can be viewed as random probability measures. Such data sets appear in various research fields. Examples can be found in neuroscience [START_REF] Wu | An information-geometric framework for statistical inferences in the neural spike train space[END_REF], biodemographic and genomics studies [START_REF] Zhang | Functional density synchronization[END_REF], economics [START_REF] Kneip | Inference for density families using functional principal component analysis[END_REF], as well as in biomedical imaging [START_REF] Petersen | Functional data analysis for density functions by transformation to a Hilbert space[END_REF]. In this paper, we focus on first-order statistics methods for the purpose of estimating, from such data, a population mean measure or density function.
The notion of averaging depends on the metric that is chosen to compare elements in a given data set. In this work, we consider the Wasserstein distance W 2 associated to the quadratic cost for the comparison of probability measures.
Let Ω be a subset of R^d and P_2(Ω) be the set of probability measures supported on Ω with finite second-order moment. Definition 1. As introduced in [START_REF] Agueh | Barycenters in the Wasserstein space[END_REF], an empirical Wasserstein barycenter νn of a set of n probability measures ν_1, . . . , ν_n (not necessarily random) in P_2(Ω) is defined as a minimizer of
$$\mu \mapsto \frac{1}{n}\sum_{i=1}^{n} W_2^2(\mu, \nu_i), \quad \text{over } \mu \in \mathcal{P}_2(\Omega). \qquad (1)$$
The Wasserstein barycenter corresponds to the notion of empirical Fréchet mean [START_REF] Fréchet | Les éléments aléatoires de nature quelconque dans un espace distancié[END_REF] that is an extension of the usual Euclidean barycenter to nonlinear metric spaces.
However, depending on the data at hand, such a barycenter may be irregular. As an example let us consider a real data set of neural spike trains which is publicly available from the MBI website 1 . During a squared-path task, the spiking activity of a movement-encoded neuron of a monkey has been recorded during 5 seconds over n = 60 repeated trials. Each spike train is then smoothed using a Gaussian kernel (further details on the data collection can be found in [START_REF] Wu | An information-geometric framework for statistical inferences in the neural spike train space[END_REF]). For each trial 1 ≤ i ≤ n, we let ν i be the measure with probability density function (pdf) proportional to the sum of these Gaussian kernels centered at the times of spikes. The resulting data are displayed in Fig. 1(a). For probability measures supported on the real line, computing a Wasserstein barycenter simply amounts to averaging the quantile functions of the ν i 's (see e.g. Section 6.1 in [START_REF] Agueh | Barycenters in the Wasserstein space[END_REF]). The pdf of the Wasserstein barycenter νn is displayed in Fig. 1(b). This approach clearly leads to the estimation of a very irregular mean template density of spiking activity.
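For readers who want to reproduce this step, a minimal sketch of the quantile-averaging computation used here (added purely for illustration; it is not the authors' code) could look as follows:

```python
import numpy as np

def barycenter_quantiles_1d(samples_per_measure, n_levels=200):
    # Quantile function of the W2 barycenter of 1-D empirical measures,
    # obtained by averaging the empirical quantile functions.
    probs = (np.arange(n_levels) + 0.5) / n_levels
    quantiles = np.stack([np.quantile(np.asarray(s), probs) for s in samples_per_measure])
    return probs, quantiles.mean(axis=0)

# Toy usage with three small samples (illustrative data only).
rng = np.random.default_rng(0)
data = [rng.normal(m, 1.0, size=50) for m in (-1.0, 0.0, 1.0)]
probs, bar_quantiles = barycenter_quantiles_1d(data)
```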
In this paper, we thus introduce a convex regularization of the optimization problem (1) for the purpose of obtaining a regularized Wasserstein barycenter. In this way, by choosing an appropriate regularizing function (e.g. the negative entropy in Subsection 2.1), it is possible to enforce this barycenter to be absolutely continuous with respect to the Lebesgue measure on R^d.
Regularization of barycenters
We choose to add a penalty directly into the computation of the Wasserstein barycenter in order to smooth the Fréchet mean and to remove the influence of noise in the data. Definition 2. Let $P^\nu_n = \frac{1}{n}\sum_{i=1}^{n}\delta_{\nu_i}$ where $\delta_{\nu_i}$ is the Dirac distribution at ν_i. We define a regularized empirical barycenter $\mu^\gamma_{P^\nu_n}$ of the discrete measure $P^\nu_n$ as a minimizer of
$$\mu \mapsto \frac{1}{n}\sum_{i=1}^{n} W_2^2(\mu, \nu_i) + \gamma E(\mu) \quad \text{over } \mu \in \mathcal{P}_2(\Omega), \qquad (2)$$
where P 2 (Ω) is the space of probability measures on Ω with finite second order moment, E : P 2 (Ω) → R + is a smooth convex penalty function, and γ > 0 is a regularization parameter.
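To make the definition concrete in one dimension, here is a rough sketch (added for illustration; it is not the authors' implementation, and all function names are ours) that only evaluates the penalised objective (2). The quantile function and the density below must describe the same candidate µ, and the minimisation itself is left to the algorithm discussed in the numerical section:

```python
import numpy as np

def w2_squared_1d(q_mu, q_nu):
    # Squared W2 distance between two 1-D measures, given their quantile
    # functions sampled on the same uniform grid of (0, 1).
    return np.mean((q_mu - q_nu) ** 2)

def neg_entropy(density, dx):
    # Negative entropy E(mu) = integral of f log f, for a density on a regular grid.
    d = np.clip(density, 1e-12, None)
    return np.sum(d * np.log(d)) * dx

def penalised_objective(q_mu, density_mu, dx, data_quantiles, gamma):
    # Objective (2): average squared W2 to the data measures plus gamma * E(mu).
    data_fit = np.mean([w2_squared_1d(q_mu, q) for q in data_quantiles])
    return data_fit + gamma * neg_entropy(density_mu, dx)
```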
In what follows, we present the main properties of the regularized empirical Wasserstein barycenter $\mu^\gamma_{P^\nu_n}$.
1 http://mbi.osu.edu/2012/stwdescription.html
Existence and uniqueness
We consider the wider problem of
$$\min_{\mu \in \mathcal{P}_2(\Omega)} J^\gamma_P(\mu) = \int W_2^2(\mu, \nu)\, dP(\nu) + \gamma E(\mu). \qquad (3)$$
Hence, (2) corresponds to the minimization problem (3) where P is discrete, i.e. $P = P_n = \frac{1}{n}\sum_{i=1}^{n}\delta_{\nu_i}$.
Theorem 1 (Theorem 3.2 in [START_REF] Bigot | Penalized barycenters in the Wasserstein space[END_REF]) Let E : P 2 (Ω) → R + be a proper, lower semicontinuous and differentiable function that is strictly convex on its domain
D(E) = {µ ∈ P 2 (Ω) such that E(µ) < +∞}.
Then, the functional $J^\gamma_P$ defined by (3) admits a unique minimizer.
Such assumptions on E are assumed to hold throughout the paper. A typical example of a regularization function satisfying such assumptions is the negative entropy defined as
$$E(\mu) = \begin{cases} \int_{\mathbb{R}^d} f(x)\log(f(x))\,dx, & \text{if } \mu \text{ admits a density } f \text{ with respect to the Lebesgue measure } dx \text{ on } \Omega,\\ +\infty, & \text{otherwise.} \end{cases}$$
Stability
We study the stability of the minimizer of (3) with respect to the discrete distribution $P^\nu_n = \frac{1}{n}\sum_{i=1}^{n}\delta_{\nu_i}$ on P_2(Ω). This result is obtained for the symmetric Bregman distance d_E(µ, ζ) between two measures µ and ζ. Bregman distances associated to a convex penalty E are known to be appropriate error measures for various regularization methods in inverse problems (see e.g. [START_REF] Burger | Convergence rates of convex variational regularization[END_REF]). This Bregman distance between two probability measures µ and ζ is defined as
$$d_E(\mu, \zeta) := \langle \nabla E(\mu) - \nabla E(\zeta),\, \mu - \zeta \rangle = \int_\Omega \big(\nabla E(\mu)(x) - \nabla E(\zeta)(x)\big)\,(d\mu - d\zeta)(x),$$
where ∇E : Ω → R denotes the gradient of E. In the setting where E is the negative entropy and µ = µ f (resp. ζ = ζ g ) admits a density f (resp. g) with respect to the Lebesgue measure, then d E is the symmetrised Kullback-Leibler divergence
$$d_E(\mu_f, \zeta_g) = \int \big(f(x) - g(x)\big)\,\log\frac{f(x)}{g(x)}\,dx.$$
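As a concrete illustration (a sketch added here, not taken from the original paper), this symmetrised Kullback-Leibler divergence can be approximated for densities sampled on a common grid:

```python
import numpy as np

def symmetrised_kl(f, g, dx):
    # Symmetrised Kullback-Leibler divergence between two densities
    # sampled on a common regular grid with spacing dx.
    f = np.clip(np.asarray(f, dtype=float), 1e-12, None)
    g = np.clip(np.asarray(g, dtype=float), 1e-12, None)
    return np.sum((f - g) * np.log(f / g)) * dx

# Example with two Gaussian densities on a grid.
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
f = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
g = np.exp(-0.5 * (x - 1)**2) / np.sqrt(2 * np.pi)
print(symmetrised_kl(f, g, dx))  # close to 1 for N(0,1) vs N(1,1)
```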
The stability result of the regularized empirical barycenter can then be stated as follows.
Theorem 2 (Theorem 3.3 in [START_REF] Bigot | Penalized barycenters in the Wasserstein space[END_REF]) Let ν 1 , . . . , ν n and η 1 , . . . , η n be two sequences of probability measures in P 2 (Ω). If we denote by µ γ P ν n and µ γ P η n the regularized empirical barycenter associated to the discrete measures P ν n and P η n , then the symmetric Bregman distance (associated to E) between these two barycenters is bounded as follows
$$d_E\big(\mu^\gamma_{P^\nu_n}, \mu^\gamma_{P^\eta_n}\big) \le \frac{2}{\gamma n}\,\inf_{\sigma \in S_n}\,\sum_{i=1}^{n} W_2(\nu_i, \eta_{\sigma(i)}), \qquad (4)$$
where S n denotes the permutation group of the set of indices {1, . . . , n}.
In particular, inequality (4) allows us to compare the case of data made of n absolutely continuous probability measures ν_1, . . . , ν_n with the more realistic setting where we only have access to a dataset of random variables $X = (X_{i,j})_{1\le i\le n;\, 1\le j\le p_i}$ organized in the form of n experimental units, such that $X_{i,1}, \ldots, X_{i,p_i}$ are iid observations in R^d sampled from the measure ν_i for each 1 ≤ i ≤ n. If we denote by $\nu_{p_i} = \frac{1}{p_i}\sum_{j=1}^{p_i}\delta_{X_{i,j}}$ the usual empirical measure associated to ν_i, it follows from inequality (4) that
$$\mathbb{E}\Big[d_E^2\big(\mu^\gamma_{P^\nu_n}, \mu^\gamma_{X}\big)\Big] \le \frac{4}{\gamma^2 n}\sum_{i=1}^{n}\mathbb{E}\big[W_2^2(\nu_i, \nu_{p_i})\big],$$
where $\mu^\gamma_X$ is given by
$$\mu^\gamma_X = \operatorname*{argmin}_{\mu \in \mathcal{P}_2(\Omega)}\; \frac{1}{n}\sum_{i=1}^{n} W_2^2\Big(\mu, \frac{1}{p_i}\sum_{j=1}^{p_i}\delta_{X_{i,j}}\Big) + \gamma E(\mu).$$
This result allows us to discuss the rate of convergence (for the symmetric squared Bregman distance) of µ γ X to µ γ P ν n as a function of the rate of convergence (for the squared Wasserstein distance) of the empirical measure ν pi to ν i for each 1 ≤ i ≤ n (in the asymptotic setting where p = min 1≤i≤n p i is let going to infinity). As an illustrative example, in the one-dimensional case (that is d = 1), one may use the work in [START_REF] Bobkov | One-dimensional empirical measures, order statistics and Kantorovich transport distances[END_REF] on a detailed study of the variety of rates of convergence of an empirical measure on the real line toward its population counterpart for the expected squared Wasserstein distance. For example, by Theorem 5.1 in [START_REF] Bobkov | One-dimensional empirical measures, order statistics and Kantorovich transport distances[END_REF], it follows that
$$\mathbb{E}\big[W_2^2(\nu_i, \nu_{p_i})\big] \le \frac{2}{p_i + 1}\, J_2(\nu_i), \quad \text{with } J_2(\nu_i) = \int_\Omega \frac{F_i(x)\,(1 - F_i(x))}{f_i(x)}\, dx,$$
where f_i is the pdf of ν_i, and F_i denotes its cumulative distribution function. Therefore, provided that J_2(ν_i) is finite for each 1 ≤ i ≤ n, one obtains the following rate of convergence of µ γ X to µ γ P ν n (in the case of measures ν_i supported on an interval Ω of R):
$$\mathbb{E}\Big[d_E^2\big(\mu^\gamma_{P^\nu_n}, \mu^\gamma_{X}\big)\Big] \le \frac{8}{\gamma^2 n}\sum_{i=1}^{n}\frac{J_2(\nu_i)}{p_i + 1} \le \frac{8}{\gamma^2}\Big(\frac{1}{n}\sum_{i=1}^{n} J_2(\nu_i)\Big)\, p^{-1}. \qquad (5)$$
Note that by the results in Appendix A in [START_REF] Bobkov | One-dimensional empirical measures, order statistics and Kantorovich transport distances[END_REF], a necessary condition for J 2 (ν i ) to be finite is to assume that f i is almost everywhere positive on the interval Ω.
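As a quick worked example of the bound (ours, not from the paper), take ν_i uniform on [0, 1], so that f_i ≡ 1 and F_i(x) = x:
$$J_2(\nu_i) = \int_0^1 x(1-x)\,dx = \frac{1}{6}, \qquad \text{hence} \qquad \mathbb{E}\big[W_2^2(\nu_i, \nu_{p_i})\big] \le \frac{2}{p_i+1}\cdot\frac{1}{6} = \frac{1}{3(p_i+1)}.$$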
Convergence to a population Wasserstein barycenter
Introducing this symmetric Bregman distance also allows us to analyze the consistency of the regularized barycenter µ γ Pn as the number of observations n tends to infinity and the parameter γ is let going to zero. When ν 1 , . . . , ν n are supposed to be independent and identically distributed (iid) random measures in P 2 (Ω) sampled from a distribution P, we analyze the convergence of µ γ P ν n with respect to the population Wasserstein barycenter defined as
$$\mu^0_P \in \operatorname*{argmin}_{\mu \in \mathcal{P}_2(\Omega)} \int W_2^2(\mu, \nu)\, dP(\nu),$$
and its regularized version
$$\mu^\gamma_P = \operatorname*{argmin}_{\mu \in \mathcal{P}_2(\Omega)} \int W_2^2(\mu, \nu)\, dP(\nu) + \gamma E(\mu).$$
In the case where Ω is a compact of R d and ∇E(µ 0 P ) is bounded, we prove that µ γ P converges to µ 0 P as γ → 0 for the Bregman divergence associated to E. This result corresponds to showing that the bias term (as classically referred to in nonparametric statistics) converges to zero when γ → 0. We also analyze the rate of convergence of the variance term when Ω is a compact of R: Theorem 3 (Theorem 4.5 in [START_REF] Bigot | Penalized barycenters in the Wasserstein space[END_REF]) For Ω compact included in R, there exists a constant C > 0 (not depending on n and γ) such that
$$\mathbb{E}\Big[d_E^2\big(\mu^\gamma_{P^\nu_n}, \mu^\gamma_P\big)\Big] \le \frac{C}{\gamma^2 n}.$$
Therefore, when ν 1 , . . . , ν n are iid random measures with support included in a compact interval Ω, it follows that if γ = γ_n is such that $\lim_{n\to\infty} \gamma_n^2\, n = +\infty$, then
$$\lim_{n\to\infty} \mathbb{E}\Big[d_E^2\big(\mu^{\gamma_n}_{P^\nu_n}, \mu^0_P\big)\Big] = 0.$$
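For instance (an illustration of an admissible calibration, not taken from the paper), any polynomial decay of the form $\gamma_n = n^{-\alpha}$ with $0 < \alpha < 1/2$ satisfies both requirements:
$$\gamma_n^2\, n = n^{1-2\alpha} \longrightarrow +\infty \quad \text{and} \quad \gamma_n \longrightarrow 0 \quad \text{as } n \to \infty.$$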
Numerical experiments
We consider a simulated example where the measures ν_i are discrete and supported on a small number p_i of data points (5 ≤ p_i ≤ 10). To this end, for each i = 1, . . . , n, we simulate a sequence $(X_{ij})_{1\le j\le p_i}$ of iid random variables sampled from a Gaussian distribution $N(\mu_i, \sigma_i^2)$, and the µ_i's (resp. σ_i's) are iid random variables such that -2 ≤ µ_i ≤ 2 and 0 ≤ σ_i ≤ 1 with E(µ_i) = 0 and E(σ_i) = 1/2. The target measure that we wish to estimate in these simulations is the population (or true) Wasserstein barycenter of the random distribution $N(\mu_1, \sigma_1^2)$, which is N(0, 1/4) thanks to the assumptions E(µ_1) = 0 and E(σ_1) = 1/2. Then, let $\nu_i = \frac{1}{p_i}\sum_{j=1}^{p_i}\delta_{X_{ij}}$, where δ_x is the Dirac measure at x. In order to compute the regularized barycenter, we solve (3) with an efficient minimization algorithm based on accelerated gradient descent (see [START_REF] Cuturi | A smoothed dual approach for variational wasserstein problems[END_REF]) for the computation of regularized barycenters in 1-D (see Appendix C in [START_REF] Bigot | Penalized barycenters in the Wasserstein space[END_REF]).
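The simulation scheme can be sketched as follows (this code is ours; the uniform laws for µ_i and σ_i are one possible choice satisfying the stated constraints, and n = 100 is the value used in the Monte-Carlo study below):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_units(n=100):
    # Each unit i consists of p_i iid draws from N(mu_i, sigma_i^2),
    # with 5 <= p_i <= 10, mu_i in [-2, 2] (mean 0) and sigma_i in [0, 1] (mean 1/2).
    units = []
    for _ in range(n):
        p_i = rng.integers(5, 11)
        mu_i = rng.uniform(-2.0, 2.0)
        sigma_i = rng.uniform(0.0, 1.0)
        units.append(rng.normal(mu_i, sigma_i, size=p_i))
    return units

units = simulate_units()  # unit i defines the discrete measure nu_i
```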
To illustrate the benefits of regularizing the Wasserstein barycenter of the ν i 's, we compare our estimator with the one obtained by the following procedure which we refer to as the kernel method. In a preliminary step, each measure ν i is smoothed using a standard kernel density estimator to obtain
$$\hat{f}_{i,h_i}(x) = \frac{1}{p_i h_i}\sum_{j=1}^{p_i} K\Big(\frac{x - X_{ij}}{h_i}\Big), \quad x \in \Omega,$$
where K is a Gaussian kernel. The bandwidth h_i is chosen by cross-validation. An alternative estimator is then defined as the Wasserstein barycenter of the smoothed measures with densities $\hat{f}_{1,h_1}, \ldots, \hat{f}_{n,h_n}$. Thanks to the well-known quantile averaging formula, the quantile function $\bar{F}^{-1}_n$ of this smoothed Wasserstein barycenter is given by
$$\bar{F}^{-1}_n = \frac{1}{n}\sum_{i=1}^{n} F^{-1}_{\hat{f}_{i,h_i}},$$
where $F^{-1}_g$ denotes the quantile function of a given pdf g. The estimator $\bar{F}^{-1}_n$ corresponds to the notion of smoothed Wasserstein barycenter of multiple point processes as considered in [START_REF] Panaretos | Amplitude and phase variation of point processes[END_REF]. The density of this smoothed barycenter is denoted by $\bar{f}_n$, and it is displayed in Fig. 2. Hence, it seems that a preliminary smoothing of the ν_i followed by quantile averaging is not sufficient to recover a satisfactory Gaussian shape when the number p_i of observations per unit is small.
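For comparison, a rough sketch of this kernel method (again ours, not the paper's code; the bandwidth below follows scipy's rule of thumb rather than the cross-validation used in the paper) is:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kernel_method_quantiles(units, n_levels=200, grid=np.linspace(-6.0, 6.0, 2001)):
    # Smooth each unit with a Gaussian KDE, then average the quantile functions.
    probs = (np.arange(n_levels) + 0.5) / n_levels
    curves = []
    for sample in units:
        pdf = gaussian_kde(sample)(grid)
        cdf = np.cumsum(pdf)
        cdf /= cdf[-1]
        curves.append(np.interp(probs, cdf, grid))  # numerical pseudo-inverse of the cdf
    return probs, np.mean(curves, axis=0)           # quantile function of the barycenter
```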
Alternatively, we have applied our algorithm directly on the (non-smoothed) discrete measures ν i to obtain the regularized barycenter f γ Pn defined as the minimizer of (2). For the penalty function E, we took either the negative entropy or a Dirichlet regularization. The densities of the penalized Wasserstein barycenters associated to these two choices for E and for different values of γ are displayed as solid curves in warm colors in Fig. 2. For both penalty functions and despite a small number of observations per experimental units, the shape of these densities better reflects the fact that the population Wasserstein barycenter is a Gaussian distribution.
Finally, we provide Monte-Carlo simulations to illustrate the influence of the number n = 100 of observed measures on the convergence of these estimators. For a given 10 ≤ n_0 ≤ n, we randomly draw n_0 measures ν_i from the whole sample, and we compute a smoothed barycenter via the kernel method and a regularized barycenter for a chosen γ. For a given value of n_0, this procedure is repeated 200 times, which allows us to obtain an approximation of the expected error E(d(μ, µ_P)) of each estimator μ, where d is either d_E or W_2. The penalty used is a linear combination of the Dirichlet and negative entropy functions. The results are displayed in Figure 3. It can be observed that our approach yields better results than the kernel method for both types of error (using either the Bregman or the Wasserstein distance).
Fig. 1. (a) A subset of 3 smoothed neural spike trains out of n = 60. Each row represents one trial and the pdf obtained by smoothing each spike train with a Gaussian kernel of width 50 milliseconds. (b) Probability density function of the empirical Wasserstein barycenter νn for this data set.
Fig. 2. Simulated data from Gaussian distributions with random means and variances. In all the figures, the black curve is the density of the true Wasserstein barycenter. The blue and dotted curve represents the pdf of the smoothed Wasserstein barycenter obtained by a preliminary kernel smoothing step. Pdf of the regularized Wasserstein barycenter µ γ Pn (a) for 20 ≤ γ ≤ 50 with E(f) = ||f||^2 (Dirichlet), and (b) for 0.08 ≤ γ ≤ 14 with E(f) = ∫ f log(f) (negative entropy).
Fig. 3. Errors in terms of expected Bregman and Wasserstein distances between the population barycenter and the estimated barycenters (kernel method in dashed blue, regularized barycenter in red) for a sample of size n0 = 10, 25, 50 and 75.
Acknowledgment
This work has been carried out with financial support from the French State, managed by the French National Research Agency (ANR) in the frame of the GOTMI project (ANR-16-CE33-0010-01). |
03511250 | en | [
"shs.eco"
] | 2024/03/04 16:41:24 | 2018 | https://hal.science/hal-03511250/file/BRANDS_USING_HISTORICAL_REFERENCES.pdf | Pecot Fabien
BRANDS USING HISTORICAL REFERENCES: A CONSUMERS' PERSPECTIVE
Keywords: Corporate Brand heritage, historical references, Fast-moving consumer goods, Positioning, Past, Brand Management
While existing literature on brand heritage focuses on corporate perspectives, this paper investigates the gap between intended and perceived heritage. Two sequential qualitative studies were performed: the preliminary study is based on observation and enables the selection of 27 fast-moving consumer brands using historical references that are explicit for consumers; the main study is composed of 25 semi-structured interviews of consumers in order to analyse their interpretations. Results show that consumers know little about the parent companies behind brands. However, they imagine that companies are seeking a compromise between an ideal tradition and a necessary modernity. Finally, they also distinguish different strategies in the management of temporality. The results outline the critical role of the consumers, make it possible to distinguish two types of brands (familiar and aristocratic ones), and lead to two distinct sets of recommendations based on the use of historical references.
Introduction
Lindt, "maître chocolatier suisse depuis 1845" ("Swiss chocolate master since 1845"), is a chocolate manufacturer using historical references in its marketing mix as an expression of its corporate heritage identity. But what do consumers perceive and remember from this message? Do they relate historical references to corporate identity? At a conceptual level, this question first resonates with recent interrogations about the dynamics between corporate brand heritage and heritage branding orientation, defined as an organisational trait upon which managers build a positioning strategy [START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF].
The second interrogation deals with consumers' interpretation of heritage branding [START_REF] Balmer | Corporate heritage brands in China. Consumer engagement with China's most celebrated corporate heritage brand -Tong Ren Tang: 同仁堂[END_REF][START_REF] Rindell | Two sides of a coin: Connecting corporate brand heritage to consumers' corporate image heritage[END_REF][START_REF] Rose | Emphasizing brand heritage: Does it work? And how[END_REF], and particularly with the use of historical references. Heritage branding orientation aims to facilitate consumers' interpretation of the corporate heritage identity [START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF]. Existing research focuses on detailed case study and only investigate consumers' interpretation of companies for which the implementation of corporate heritage identity has been assessed [START_REF] Balmer | Corporate heritage brands in China. Consumer engagement with China's most celebrated corporate heritage brand -Tong Ren Tang: 同仁堂[END_REF][START_REF] Rindell | Two sides of a coin: Connecting corporate brand heritage to consumers' corporate image heritage[END_REF]. To the best of our knowledge, no research takes what is accessible to consumers as a starting point to investigate their interpretation of the corporate heritage brand.
Building on existing literature on corporate heritage brands [START_REF] Balmer | Corporate heritage identities, corporate heritage brands and the multiple heritage identities of the British Monarchy[END_REF][START_REF] Balmer | Introducing organisational heritage : Linking corporate heritage , organisational identity and organisational memory[END_REF][START_REF] Urde | Corporate brands with a heritage[END_REF], on heritage branding [START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF], and on consumers' interpretation of brand heritage [START_REF] Rindell | Two sides of a coin: Connecting corporate brand heritage to consumers' corporate image heritage[END_REF][START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF], this paper looks at consumers' interpretation of Fast Moving Consumer Goods (FMCG) brands making use of historical references in the marketing mix. A first qualitative study based on observation enables the identification of 27 FMCG brands using explicit historical references. A second qualitative study made of 25 semi-structured interviews shows that consumers 1) have little to no knowledge about the organisations behind the product brands, 2) imagine that these companies are managed in a sort of compromise articulating traditional and mainstream production system and ownership structure, 3) perceive different nuances in the management of temporality with implications for the brand's positioning.
Results contribute to the research on corporate heritage brand and on heritage branding orientation by outlining the gap between the corporate and the consumer perspectives on brand heritage. On a theoretical level, they remind that in the absence of a formal corporate brand communication, consumers rely on alternative cues to interpret corporate heritage. They also suggest a dual interpretation of the past which in marketing terms shows that using historical references can be articulated with different strategies combining a brand's adaptability and its stability. Based on these results, we provide brand managers with a guide for strategic decision-making based on the use of brand heritage. We also encourage them to increase corporate communications so as to address consumers' scepticism about the genuineness of their heritage orientation. Consumers' memory and brand relics can serve in this communication strategy.
Related Literature
This paper looks at brand managers' use of the past from a consumer perspective. As such it engages with and builds on two streams of literature. First, it addresses the dynamics between the corporate and the product brand perspectives. Then, it considers the consumer perspective over this marketing phenomenon.
Corporate and product brand heritage
Marketing scholarship distinguishes the foundational concepts of past, history or heritage, and their instrumental counterparts designating the way the past is used as a resource in marketing [START_REF] Burghausen | Repertoires of the corporate past: explanation and framework. Introducing an integrated and dynamic perspective[END_REF]. More specifically, they differentiate seven instrumental modes to referring to the past at a corporate level: past, memory, history, tradition, nostalgia, provenance, and heritage [START_REF] Burghausen | Repertoires of the corporate past: explanation and framework. Introducing an integrated and dynamic perspective[END_REF]. Corporate memory operates as a bridge between the corporate past and the three primary instrumental modes of representing the past (history, heritage, and traditions). Corporate nostalgia and provenance are secondary modes. As for the representations, the concept of heritage is distinct from history [START_REF] Urde | Corporate brands with a heritage[END_REF], and retro [START_REF] Wiedmann | Drivers and Outcomes of Brand Heritage: Consumers' Perception of Heritage Brands in the Automotive Industry[END_REF]. Marketing scholars refer to heritage in a variety of meanings: as the temporality of a construct, as mental associations based on historical references, as a cultural or institutional legacy, as collective memory, as a company's provenance or roots, to denote the longevity, or as a synonym of the past [START_REF] Balmer | Introducing organisational heritage : Linking corporate heritage , organisational identity and organisational memory[END_REF]. In a strict corporate perspective, corporate heritage is defined as "all the traits and aspects of an organisation that link its past, present, and future in a meaningful and relevant way" (Burghausen and Balmer, 2014b, p.394). This link between the different time strata, called omni-temporality, is a cornerstone of corporate heritage [START_REF] Balmer | Corporate heritage identities, corporate heritage brands and the multiple heritage identities of the British Monarchy[END_REF].
There are two internal perspectives on this marketing phenomenon: one looks at the corporate brand level while the other focuses on the product brand level. The former implies a holistic approach of the company. It considers the whole organisation as a brand as well as its multiple stakeholders [START_REF] Balmer | Corporate brands: what are they? What of them?[END_REF]. The latter focuses on the marketing function and mostly consider the interactions between the product brand managers and its consumers.
Most of existing work on heritage and brands falls into the corporate perspective. Scholars define a category of brands called corporate Heritage Brands. They use their corporate heritage as a central aspect of their proposition value [START_REF] Balmer | Corporate heritage identities, corporate heritage brands and the multiple heritage identities of the British Monarchy[END_REF][START_REF] Urde | Corporate brands with a heritage[END_REF]. These corporations share five characteristics: longevity, track records, core values, the use of symbols and an organisational belief that history is important [START_REF] Urde | Corporate brands with a heritage[END_REF]. These traits remain relevant over time to internal and external stakeholders. These corporate brands are different from brands with a heritage which may have a rich past but do not implement it as heritage at an organisational level [START_REF] Urde | Corporate brands with a heritage[END_REF].
As for the management at a corporate level, Urde and colleagues' pioneer article defines the common principles of corporate heritage brands' management [START_REF] Urde | Corporate brands with a heritage[END_REF]. This is later extended by the definition of a corporate heritage identity as the institutional traits remaining meaningful and invariant over time [START_REF] Balmer | Corporate heritage identities, corporate heritage brands and the multiple heritage identities of the British Monarchy[END_REF]. Further work looks at how corporate heritage is constructed and managed over time [START_REF] Burghausen | Corporate heritage identity stewardship: a corporate marketing perspective[END_REF][START_REF] Miller | Brand-building and the elements of success: Discoveries using historical analyses[END_REF], and how top managers communicate it to different internal stakeholders [START_REF] Blombäck | Corporate identity manifested through historical references[END_REF]Burghausen and Balmer, 2014a). For example, the literature clarifies the foundational concepts [START_REF] Balmer | Corporate heritage, corporate heritage marketing, and total corporate heritage communications: What are they? What of them? Corporate Communications[END_REF][START_REF] Burghausen | Repertoires of the corporate past: explanation and framework. Introducing an integrated and dynamic perspective[END_REF], and also examines the specificities of corporate heritage management [START_REF] Burghausen | Corporate heritage identity stewardship: a corporate marketing perspective[END_REF] and the construction of corporate heritage. Recent research looks at the phenomenon from a product brand perspective. Unlike prior work at a corporate level, these studies focus on middle management activities such as the marketing mix [START_REF] Balmer | Explicating corporate brands and their management: Reflections and directions from 1995[END_REF]. Some focus on the role of brand heritage in international expansions [START_REF] Hakala | Operationalising brand heritage and cultural heritage[END_REF][START_REF] Hudson | Brand heritage and the renaissance of Cunard[END_REF], or brand revival [START_REF] Dion | Reviving sleeping beauty brands by rearticulating brand heritage[END_REF][START_REF] Hudson | Brand heritage and the renaissance of Cunard[END_REF][START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF]. Others consider brand heritage as mental associations based on historical references, i.e. as the way managers operationalise brand heritage in the distinct facets of the marketing mix: in retail [START_REF] Dion | Retail Luxury Strategy: Assembling Charisma through Art and Magic[END_REF][START_REF] Dion | Managing heritage brands: A study of the sacralization of heritage stores in the luxury industry[END_REF][START_REF] Joy | M(Art)worlds: Consumer perceptions of how luxury brand stores become art institutions[END_REF], or in public relations [START_REF] Martino | When the past makes news: Cultivating media relations through brand heritage[END_REF].
While most scholars focus on a single level of analysis, [START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF] look at the dynamics between the corporate and the product brand levels. They define heritage branding orientation as the organisational disposition upon which managers build their marketing strategies in order to bridge the gap between the corporate and the product brand emphasis.
The present research follows this direction and adds the perspective of the consumer.
Nascent research on consumers' interpretation of heritage cues
So far, little research engages in a consumer perspective although it appears as a critical point, mostly at a product brand level, but also from an organisational perspective as consumers are important stakeholders. There are two perspectives on this phenomenon. First, researchers look at corporate associations defined as perceptual cognitive constructs [START_REF] Brown | Identity, intended image, construed image, and reputation: An interdisciplinary framework and suggested terminology[END_REF].They have three antecedents: consumers hold a specific understanding about corporate brands based on the corporate communication or their reflection through third parties, the product characteristics, or their beliefs about businesses [START_REF] Brown | Corporate associations in marketing: Antecedents and consequences[END_REF]. Indeed, at a product brand level brand heritage can strengthen the relationship with the consumers, its attachment and trust to the brand [START_REF] Rose | Emphasizing brand heritage: Does it work? And how[END_REF]. Another perspective look at historical references defined as representational use of the past in discursive form that consumers interpret and make sense of [START_REF] Balmer | Introducing organisational heritage : Linking corporate heritage , organisational identity and organisational memory[END_REF][START_REF] Blombäck | Corporate identity manifested through historical references[END_REF]. Building on this view, further research shows that consumers also have their own image of the corporate heritage which potentially differs from the company's perspective [START_REF] Rindell | Two sides of a coin: Connecting corporate brand heritage to consumers' corporate image heritage[END_REF]. Also, given that not all organisations communicate their corporate brand [START_REF] Balmer | Corporate brands: what are they? What of them?[END_REF], symbols are considered the elements that are the most accessible to consumers [START_REF] Hakala | Operationalising brand heritage and cultural heritage[END_REF]. Conceptually, the symbols can be defined as historical references used at a product brand level to induce the existence of a corporate heritage [START_REF] Balmer | Introducing organisational heritage : Linking corporate heritage , organisational identity and organisational memory[END_REF].
Actually, prior experimental research operationalise brand heritage through symbols communicating the other dimensions of the corporate heritage: longevity, values, track records and the importance of history [START_REF] Rose | Emphasizing brand heritage: Does it work? And how[END_REF]. Theoretically, historical references (or symbols) engage with the dynamics between the corporate brand they aim to make accessible, the product brand making them accessible, and the consumers interpreting them. Symbols are therefore the focus of this research.
Focus of the present research
The present research focuses on corporate heritage elements that are made accessible to the consumers through the product brand managers' actions. We acknowledge the fact that some product brands have no direct relation with a corporate brand. While every company has a corporate identity, not all organisations have a corporate brand [START_REF] Balmer | Corporate brands: what are they? What of them?[END_REF].
However, existing research finds that the implementation of a corporate heritage identity almost always induces the presence of historical references in the marketing mix. But is the presence of historical references on a product always to be interpreted as a clue for corporate heritage identity? [START_REF] Blombäck | Corporate identity manifested through historical references[END_REF] suggest that the use of historical references in the communication also affects internal audiences. Nevertheless, the literature also acknowledges the use of "faux heritage" in communication [START_REF] Hudson | Corporate Heritage Brands: Mead's Theory of the Past[END_REF], and of retro associations [START_REF] Brown | Teaching Old Brands New Tricks: Retro Branding and the Revival of Brand Meaning[END_REF] which do not require a particular implementation of heritage identity at a corporate level. There could be differences between the consumers' perceived heritage and the company's deliberated heritage, their qualitative exploration can inform the management of the corporate heritage identity.
Figure 1 is a visual representation of our original approach. Most of the existing literature summarised above focuses on a single level approach (represented by the circles, either at a corporate or at a brand level). Some look at cross-level perspectives (represented by the arrows): from the corporate brand to the product brand level and vice versa (arrow number 1, eg. [START_REF] Hudson | Brand heritage and the renaissance of Cunard[END_REF][START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF], from the product brand to the consumer level (arrow number 2, eg. [START_REF] Rose | Emphasizing brand heritage: Does it work? And how[END_REF][START_REF] Wiedmann | Drivers and Outcomes of Brand Heritage: Consumers' Perception of Heritage Brands in the Automotive Industry[END_REF], or from the corporate brand to consumer level (arrow number 3, eg. [START_REF] Balmer | Corporate heritage brands in China. Consumer engagement with China's most celebrated corporate heritage brand -Tong Ren Tang: 同仁堂[END_REF]. The fourth arrow represents our approach, it starts from the product brand level (what marketing managers do) regardless of the organisation they belong to, then it looks at consumers' interpretations of the organisation.
Figure 1 -Focus of the research
As such, it takes historical references as a starting point for consumers to comment on. These historical references are material, textual or visual cues that managers deliberately make available for the consumers to interpret and to make sense of. These cues can be, in some cases, a partial reflection of the corporate heritage identity that marketing managers communicate to the consumers via the marketing mix. In other cases, they can be mere marketing instruments reflecting a product manager's own strategy with little to no relation to the corporate brand strategy [START_REF] Balmer | Corporate brands: what are they? What of them?[END_REF]. We acknowledge this possibility and we come back to it in the discussion section, but we do not want to impose our views on the consumers a priori as we seek to study their interpretation.
Methodology
In order to gain a detailed understanding of the interpretation of brands using historical references from a consumers' perspective, a qualitative approach in two steps was chosen. As it is consumer centric, it focuses on what consumers see and know. The selection of brands is based on cues consumers have access to on the pack or on the internet. Contrary to existing corporate centric approaches, we do not use internal data consumers would not have access to.
This research has two sequential qualitative studies. The first study aims to identify FMCG brands using historical references explicitly on their brands' packs or site. The second study is a series of semi-structured interviews aiming to collect consumers' interpretation of these brands and the organisations they belong to, and particularly of their management (Figure 2).
Figure 2 -Design of the research
Preliminary study: Selection of Brands using historical references
Existing research provides methods to identify corporate heritage brands based on in-depth case study (Burghausen and Balmer, 2014a) which is not adapted to our objective and focus.
In a manner consistent with [START_REF] Hakala | Operationalising brand heritage and cultural heritage[END_REF] and with experiments inducing heritage for fake brands [START_REF] Rose | Emphasizing brand heritage: Does it work? And how[END_REF], we focus on symbols and more particularly on historical references. The objective of the preliminary study is to select brands using historical references that will be later used as stimuli for the main study. Brand heritage is intuitively associated with luxury, spirits, and with product categories with a rather high implication [START_REF] Hudson | Brand heritage and the renaissance of Cunard[END_REF][START_REF] Hudson | Managing temporality to enhance luxury: Brand heritage at Dom Perignon[END_REF][START_REF] Wiedmann | Drivers and Outcomes of Brand Heritage: Consumers' Perception of Heritage Brands in the Automotive Industry[END_REF]. However, this research takes a slightly different approach and looks across a wide range of FMCG. This approach, focusing on FMCG that seek to make the most of their heritage [START_REF] Alexander | Brand authentication: creating and maintaining brand auras[END_REF], reduces the biases related to a particular product category. Observations aim to identify brands that clearly use historical references in their marketing mix. We used two cues for the marketing mix: the packaging and the websites, both available to consumers. These two cues are not exhaustive but are suitable for our focus. All FMCG use packaging, which expresses the brand identity [START_REF] Underwood | The communicative power of product packaging: creating brand identity via lived and mediated experience[END_REF] and serves to mythologise the brand [START_REF] Kniazeva | Packaging as Vehicle for Mythologizing the Brand[END_REF]. We wanted to add a second cue to only select brands with a consistent use of historical references.
An interesting alternative cue would have been advertising, particularly because it tends to have a strong impact on consumers. However, many FMCG brands do not have recent adverts if any. It would have been very restrictive. In contrast, all brands have a website accessible to all consumers and where adverts can also be found. We therefore selected packaging and website as the two cues.
Packaging observations took place in two supermarkets of approximately 1500m², belonging to different corporate groups (Monoprix and Carrefour) located in the centre of a major French city.
The first step of the preliminary study consisted of the observation of all packaged goods sold in these supermarkets by the two experts. Only spirits were excluded from the study as almost all spirit brands use historical references unlike all other categories. Experts adapted Urde et al.'s five criteria for corporate brand heritage (2007) (mention of track records, longevity, core values, symbols and importance of the firm's history) to code historical references on the packaging. Table 1 and Figure 3 show an example of how these criteria were adapted to code De Cecco's pasta brand. Historical references do not directly mention the current corporate brand which is not surprising for FMCG. However, they sometimes reflect the history of the company beyond the product (see Table 1 for De Cecco) and this is even stronger for smaller companies (e.g. La Mère Poulard, De Cecco, Gillot, Briochin…) when the product brand represents most of the company's activity. This observation led to the selection of fifty-seven brands.
Figure 3 -Visual Examples From De Cecco Packaging
The second step of the preliminary study consisted of the reduction of the selected brands to those with a consistent strategy on the packaging and online on the brand's site. The authors analysed all 57 brands' websites. As an example of how the websites were analysed, the Monsavon website (a Unilever-owned beauty care brand) repeats the founding date throughout the site in different sections, has a company history or background section and a "values and commitments" section ("Histoire" and "Valeurs et engagements" in French), and tells the history of the brand with emphasis on the track records and the use of old black-and-white posters. As for the packaging, the websites showcase historical references in relation with the product brands and, to a smaller extent, with the corporate brands. Again, this is not the case for product brands belonging to large conglomerates such as Unilever (e.g. Alsa, Monsavon…).
In total, the preliminary study led to the selection of 27 brands with consistent use of historical references on both the packaging and the website (15 in food, 5 in beverages, 5 in health & beauty, and 2 in home care). For each of [START_REF] Urde | Corporate brands with a heritage[END_REF] criteria, we propose an adaptation of what could be used by an external consumer to assess the existence of brand heritage at a corporate level. We acknowledge all of these 27 brands do not qualify as corporate heritage brands defined by [START_REF] Balmer | Introducing organisational heritage : Linking corporate heritage , organisational identity and organisational memory[END_REF] but they all use historical references which is consistent with the objectives of the research.
The main study: Semi-structured interviews
One of the authors conducted semi-structured interviews with individuals in charge of grocery shopping in their household. Nine male and sixteen female with diverse occupations were interviewed, ranging from 24 to 73 years old, thirteen living in a major French city, nine in suburbs, and three in rural areas so as to capture a wide range of views (Table 2). Theoretical saturation was achieved after 25 interviews [START_REF] Glaser | The discovery of grounded theory: Strategies for qualitative research[END_REF]. On average, interviews lasted for 44 minutes; they were recorded and fully transcribed. The interview guide aims to generate detailed descriptions of the heritage brands each informant knew the best, as well as perceptual elements about the brands' temporality. In practice, it has four phases.
1) Selection. Informants were first introduced with the 27 brands and sequentially asked to eliminate all unknown brands from the set, to eliminate those they never buy, and finally to pick, within the remaining brands, the one, two or three brands they consider to be the most familiar with and loyal to.
2) Description. Consumers were asked to describe the first brand they had selected.
Follow-up questions included memories, places, people, and images associated with the brand, with particular attention to any unexpected information brought by the informants.
3) Personification. Consumers were asked to imagine the brand as a person and describe his or her direct environment.
If the informants had selected a second or a third brand, the second and third phases were repeated.
4) When informants had not already raised and detailed their interpretation of the brands' temporality in the second phase of the interview, we introduced it with the following question: "Would you say this brand emphasises its past?". Follow-up questions included their justification for this strategy, the cues supporting their interpretation, and their position towards this strategy, and served to expand the interpretation of heritage branding to other product categories.
The analysis of the transcript focuses on the inferences consumers make about the management of the brands they perceive to use historical references. After a first impressionistic reading of the transcripts, a second reading focused on the descriptions of the brand management. As a result, a series of 21 keywords were identified so as to carry out a more systematic analysis based on the unity of meaning [START_REF] Bardin | L'analyse de contenu[END_REF]. These 21 keywords cover the management, the production, and the marketing of the product.
Findings
The analysis of the transcripts provides three main findings. First, consumers have limited knowledge of the corporations behind the FMCG they buy. Then, results show that consumers imagine brands have to build a compromise between tradition and modernity. Finally, it reveals that consumers perceive different degrees in the management of temporality.
A corporate heritage implementation is hard to perceive
Four main themes emerge from the interviews: uncertainty, little interest, personal experience, and brand relics. The first important finding is that consumers have little knowledge and interest about FMCG brands and their parent companies. When they describe the companies behind the brands, they use expressions such as "I don't know," "perhaps," or "I have no idea." For instance, they are not sure about the production system, or the ownership. They assume more than they know and some justify this lack of knowledge by the little interest they have in finding out.
Two of the informants describe different tactics to assess their interpretation: one is based on personal memories, and the other on the existence of relics related to the brand. "First, there are tons of adverts from the 60s, people even buy them to decorate their home. And all the goodies you find in the bars, the parasols, these kinds of things, the marketing in France! Like Coke's
Christmas special, all these old stuff! [Question: These old stuff?]
They sort of build the brand. We have been around for a long time: look!" (Eric)
A perceived compromise between tradition and modernity
Despite their lack of knowledge and interest, they still imagine how these brands are managed while remaining suspicious. Four main themes emerge: sense of memory, transmission, modern-day production systems, and critical role of supermarkets. These themes reveal a tension between two extremes: the ideal of a family business based on artisanal productions, and the likeliness of a large industrial multinational able to supply large retailers. Most of them imagine that these brands belong to companies seeking a compromise: companies building on a sense of memory and a willingness to perpetuate quality, but still adapting to modern-day production systems and supplying huge quantities to supermarkets. Some of their descriptions resemble criteria listed in the literature such as the unremitting managerial tenacity and the institutional trait consistency. "Now, do they still make it in the same way? I would be surprised ! But either way, they want to project this image of traditional fabrication.
Question: You would be surprised?
It is not possible! The costs are so… there is such a fierce competition that they modified their processes to be profitable. Nobody makes ketchup like they used to 50 years ago, it is not possible". (Eric)
The critical role of supermarkets "For me it still is a big company, even if it reminds me of small farmers and all that, and it makes me smile but actually I also think it must be a production process looking like any other industrial cheese. So…
At worst it was a little farmer who once sold its recipe to a big corporation, which kept the name but did whatever they wanted to do. But I have a hard time thinking it remains… The fact that it is sold in a supermarket (she shakes her head to say no)". (Estelle)
"But there is something wrong with the mass retail and the respect of a manufacture tradition… I mean, in 2015, it is no secrecy that every item sold in supermarkets are made in factories producing tons of things and that… the hand made thing is only for TV adverts! There is something wrong, it's like this date would be a label of legitimacy. It says OK we are now in an era of mass production but we have been here for a long time so there is still an exceptional recipe, something that makes us special." (Lisa)
The use of historical references leads to two distinct perceived positioning strategies
However, consumers perceive different degrees in the management of temporality, with implications for brands' perceived positioning. Indeed, some brands are perceived to be truly omni-temporal and consumer-oriented, while others are perceived as past-oriented and more product-oriented.
For instance, Olivier (29) describes differently Lindt (chocolate) and La Mère Poulard (biscuits). Lindt ( 1845 Badoit seems to be rooted in the 19 th century, although it might not be true."
This difference in terms of interpretation does not mean that temporality is actually managed differently at a corporate level. It may outline the difference between a genuine corporate heritage brand and a brand with a heritage. It could also illustrate a degree of heritage implementation in the marketing mix.
One perceived management of temporality is characterised by more flexibility and adaptation.
When consumers understand a brand as such, they describe a brand belonging to the present or future as much as to the past, committed to meet customers' new needs, sometimes creating new trends. Informants relate their longevity to the ability to take risks by launching new products; they also acknowledge their adaptation to modern days' production systems which does not seem to be a problem (Table 6).
These brands are considered as familiar brands. Three main themes characterise these familiar brands: 1) Consumer orientation: they are aware of and adapt to consumers' new needs; 2) Pioneer spirit: longevity allows them to take risks and launch new trends; 3) Flexibility: adaptation to modern production systems. "Every two or three years we can see the little "new recipe" label, so there's always a renewed recipe […] I think they must have cooks, or they must adapt, I mean the raw material they use for their powders must change so they need to switch to new providers" (Ludovic, 24)
Here, the historical references mainly bring values to the brand through the track records.
Consumers have a strong relationship with the brand, based on its ability to answer their changing needs continuously. This echoes to the phenomenon of relative invariance: identities seem to remain the same but the meanings attached may change [START_REF] Balmer | Corporate heritage identities, corporate heritage brands and the multiple heritage identities of the British Monarchy[END_REF]; and also to previous results found in the luxury sector [START_REF] Veg-Sala | A semiotic analysis of the extendibility of luxury brands[END_REF]. As such, it represents a case of perceived corporate heritage. This articulation of innovation and tradition seems particularly suitable for brands competing on relatively highly innovative product categories.
In this sample, for shampoo or stain remover brands that must cope with brands launching many innovative products, brand heritage is a distinctive asset [START_REF] Urde | Corporate brands with a heritage[END_REF] as long as an updated version of the past is proposed.
In contrast with the familiar brands, consumers perceive the use of historical references as a sign of worshipping the past. Five characteristics build on this interpretation: safeguards, product orientation, passion, year on year improvement and special occasion. They describe brands being strongly rooted in the past and sometimes opposed to present times. Informants report little to no customer orientation: the brand pushes the same product they have always produced on the market but is not making any effort to understand if consumers' needs have changed. They are said to let passion rather than profit drive the business, and they are associated with special occasions (Table 7). We call them aristocratic brands. Here, the historical references add value to the brand through the core values associated with it. One value is authenticity, influenced by perception of passion driving the business rather than profit [START_REF] Cova | Les particules expérientielles de la quête d'authenticité du consommateur[END_REF]. Another value is the maintenance in a changing world, the promise that no matter what the brand will remain the same. To a certain extent, these brands also appear as supermarkets' luxury brands (see the comparison with Ferrari): prestigious in their rather utilitarian product categories (biscuits, cheese, olive oil, pasta, sardine cans…). As opposed to the first register generating interpretations of familiarity, that one makes the brand appear rather aristocratic: desirable, prestigious and distant at the same time. This positioning appears relevant for brands in categories in which stability and traditional know-how is associated with added-value such as food.
Discussion
These findings interrogate existing knowledge about corporate heritage brands and heritage branding orientation. More specifically, they inform consumers' interpretation of heritage organisations, of the specificities of corporate heritage management, and of the management of temporality.
Perceived corporate heritage orientation
Our results address the dynamic between the organisational level, the brand management level, and the consumer level. We find that consumers have little knowledge of the companies standing behind the product brands. They cannot go beyond speculations and even claim to have very little interest in finding out. It is of particular interest as all interviews were conducted with brands consumers declare to buy on a regular basis, so potentially those they are the most likely to be interested in. This lack of interest could be a particularity of FMCG, or it could be related to consumers' involvement with these categories. As stated above, existing research tends to focus on products with a rather high implication, for which consumers arguably have more interest in having information about the company. More importantly, these results outline the relative weakness or even absence of corporate brands in FMCG. The organisations remain discreet although prior research shows that corporate associations influence the beliefs and attitudes towards product brands [START_REF] Brown | The company and the product: Corporate associations and consumer product responses[END_REF]. Consumers generally seek to reduce the informational asymmetry with the producers [START_REF] Erdem | Brand Equity as a Signaling Phenomenon[END_REF]. Their stated lack of interest could be interpreted as fatalism or habit on a market where they are used to receiving very scarce information about the organisations. For those product brands belonging to genuine corporate heritage brands, our findings support the claim of [START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF] that companies should make use of their heritage, as it could be an answer to consumers' scepticism. In the absence of corporate communication, suspicious consumers report assessing brand longevity by referring to alternative sources. One possibility is their own memory. It relates to and confirms the role of track records [START_REF] Urde | Corporate brands with a heritage[END_REF], ceaseless multigenerational utility [START_REF] Balmer | Corporate heritage identities, corporate heritage brands and the multiple heritage identities of the British Monarchy[END_REF], and image heritage [START_REF] Rindell | Time in corporate images: introducing image heritage and image-in-use[END_REF]. One informant also raises the role of brand relics: branded artefacts from prior times which have remained on the market (e.g. a poster from the 1960s). Prior research suggests the importance of materiality to embody the corporate heritage in a product [START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF]. For our respondent, the relics objectify the heritage as they make it more tangible. They also provide evidence for the company's claims of longevity. The use of such relics is complex as it usually involves various stakeholders such as former suppliers (e.g. an advertising agency); in addition, it cannot prove the organisation is currently concerned with its heritage. However, it brings tangible elements in a world of intangible values and can be of particular efficiency for managers engaging in a rebranding based on the corporate heritage [START_REF] Hudson | Brand heritage and the renaissance of Cunard[END_REF] to emphasise their new intentions.
The management of tradition and modernity
Despite their little knowledge of the companies behind the products, consumers still hold corporate associations. As suggested by the literature, we find that consumers deduce corporate associations from the characteristics of the products or from beliefs they hold about businesses in general [START_REF] Brown | Corporate associations in marketing: Antecedents and consequences[END_REF]. Our results contribute to the existing work reporting how corporate heritage brands are managed internally. Burghausen and Balmer (2014a) show that corporate heritage identities are implemented internally through three interrelated patterns: "the conflation of past and present, the conflation of old and new, and the conflation of traditional and modern" (p.2318). One year later, the same authors find that the management of corporate heritage brands is related to a particular mindset whose characteristics are: continuance, belongingness, self, heritage, responsibility and potency [START_REF] Burghausen | Corporate heritage identity stewardship: a corporate marketing perspective[END_REF]. Our results offer an external stakeholders' perspective on the matter. In the interviews, consumers report how they imagine the companies to be managed, and this clearly goes beyond the product. Indeed, they share their views on the culture of the organisations, the ownership, their production system and their values or covenant. The result of their imagination is very much aligned with the triple conflation found by Burghausen and Balmer inside corporate heritage brands. In other words, consumers fail to differentiate the product brands which could be related to a genuine corporate heritage brand from the others.
The lack of information proactively shared by the organisations themselves reinforces the importance of alternative cues such as the presence of historical references. This marketing "trick" can easily be disconnected from a corporate heritage identity. The limited engagement in corporate branding on the part of FMCG companies certainly plays in favour of those who do not have any corporate heritage, but it also maintains consumers' suspicion over whole categories. From our perspective, genuine corporate heritage brands operating on FMCG markets have underexploited assets they could use to differentiate themselves from the others. Their corporate heritage and their specific management style should help them to be more efficient and to increase their credibility or legitimacy in using historical references [START_REF] Burghausen | Corporate heritage identity stewardship: a corporate marketing perspective[END_REF]. This raises the question of corporate communication towards consumers, which is discussed later in the managerial recommendations.
Research on heritage should also consider the possibility that some individuals are reluctant to change and will be attracted to brands that look past-oriented.
Conclusion
Theoretical implications
This research contributes to the existing work on the management of the corporate brand identity [START_REF] Balmer | Corporate brands: what are they? What of them?[END_REF][START_REF] Balmer | Introducing organisational heritage : Linking corporate heritage , organisational identity and organisational memory[END_REF]. In the context of FMCG, our results show that consumers imagine how the company is managed in the absence of formal corporate communication. To feed their imagination, they rely on alternative cues such as corporate image heritage [START_REF] Rindell | Two sides of a coin: Connecting corporate brand heritage to consumers' corporate image heritage[END_REF]. In addition, this original approach based on consumers' interpretation of product brands outlines the role of historical references in shaping the corporate identity in the absence of corporate communication. It refines existing knowledge on the interstice between product and corporate brands. While [START_REF] Santos | Heritage branding orientation: The case of Ach. Brito and the dynamics between corporate and product heritage brands[END_REF] show how corporate brand heritage can shape a product brand's positioning, our research shows that consumers understand how a company manages its heritage through their product brands' positioning. Altogether, our results encourage corporate heritage brands to engage in corporate communication towards their consumers to actively take part in the management of their corporate image.
We contribute to the research on the status of the omni-temporal trait by introducing different degrees of perception. Our results show that external stakeholders perceive a brand's omni-temporality in gradual terms. From a conceptual perspective, this implies thinking about this trait as a continuum ranging from a strong anchoring in one time period (past, present or future) to a strong conflation of the three periods. From a methodological perspective, studies surveying stakeholders could operationalise the omni-temporality trait as a continuous variable rather than a dichotomous one to capture this gradual aspect of the perception.
Managerial recommendations
We can formulate three recommendations to brand managers: on the importance of corporate communications, on the use of consumers' memory and of brand relics, and on the distinct positioning they can achieve while using historical references.
Managers in charge of corporate heritage brands operating on FMCG markets should not assume that consumers are aware of their originality. As many brand managers use similar historical references, consumers become suspicious. There is a risk for genuine corporate heritage brands to be undervalued when considered like any other brand claiming to have a heritage. Given the little spontaneous interest consumers have in getting to know the companies, managers should increase their corporate heritage communication to reach their consumers. [START_REF] Balmer | Corporate heritage, corporate heritage marketing, and total corporate heritage communications: What are they? What of them? Corporate Communications[END_REF] introduces and details the concept of total corporate heritage communication and provides examples of actions that managers could use as starting points.
In their corporate communication strategies, managers can use consumers' memory and brand relics to strengthen their case. Asking consumers about their memories with the brand, as we did in the interviews, helps connect the brand with the self-narrative and could strengthen the emotional bond between the brand and its consumers [START_REF] Ardelet | Self-referencing narratives to predict consumers' preferences in the luxury industry: A longitudinal study[END_REF]. Social media can be an interesting tool to generate brand content based on consumers' memory and to refresh track records. Working on a temporary exhibition or the opening of a brand museum is an opportunity to identify, collect, and promote brand relics while involving former and current stakeholders, including consumers. For instance, one can imagine asking consumers to lend branded artefacts for a collaborative exhibition.
Finally, in addition to the management of corporate heritage, this paper also engages with the mere use of historical references. Based on our results, we can distinguish two positioning strategies. One strategy uses historical references to construct a familiar brand while the other leads to the construction of an aristocratic brand.
In the first positioning, historical references are associated with an updated version of the past and cue familiarity based on the brand's track records. In the second positioning, historical references are associated with a worshipped version of the past: consumers perceive the brand as aristocratic through values of authenticity, exclusivity, permanence and distance. Table 8 presents a guide for brand management and decision-making. Both strategies are compatible with the use of historical references; however, they differ in how these references are used.
Further research
We see three avenues for further research building on this work. First, in addition to our focus on discursive materials (historical references), it would be interesting to look at the phenomenon from a psychological perspective. Indeed, research on corporate associations engages with CSR and corporate ability (Brown and Dacin, 2007) but overlooks corporate heritage. Further empirical research could replicate our approach within this framework.
Another promising avenue could investigate consumers' perception of historical references using a quantitative approach. Comparing corporate heritage brands to corporate brands with a heritage, both using historical references, could extend our results with a larger sample of brands and product categories. The informational asymmetry tends to increase the importance of signals [START_REF] Erdem | Brand Equity as a Signaling Phenomenon[END_REF], such as the multiple historical references, and as a consequence to reduce the effect of corporate heritage.
Finally, marketing academics could focus on managers' ability to stimulate consumers' memory in a way that is favourable to the brand. There is potential here to bridge the gap between the research on brand heritage and the research on nostalgia.
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Table 1 - Coding: Operationalisation of existing criteria (Criteria | Operationalisation taking De Cecco as an example)
Use of symbols | Use of a symbol or image of times gone by (e.g. De Cecco is written in a vintage font; below De Cecco's logo is a picture of a peasant woman carrying two wheat bundles in a wheat field).
Track records | Mention of achievements in order to recall track records to the consumer (e.g. "we have been…", "based on a 130-year-old recipe").
Longevity | Mention of the "established" date on the packaging, either in the logo or in an explicative text (e.g. De Cecco's logo is associated with the mention "Dal 1886": "Since 1886"; the same date is recalled in a short text).
Core values | Mention of one or more values the brand has stood for historically, or is committed to standing for in the future (e.g. a short text entitled "From father to son" uses the expressions "important responsibility", "making pasta without compromise").
Firm | Summary of the company's history (e.g. "From father to son. Since 1886 we have been carrying on an important responsibility: our tradition of making pasta, without compromise, that we refine each single day and that allows us to bring the authentic pleasure of Italian cuisine all over the world").
Table 2 -Structure of the sample
Name Gender Age Occupation Selected brands
Laurent Male 26 Urban planner Lindt
Estelle Female 30 Librarian Lepetit
Justine Female 28 Journalist Badoit, Saint-Michel
Bob Male 27 Student La Mère Poulard
Grace Female 33 Body-Designer Twinnings
Nadia Female 34 Social worker Carapelli
Elodie Female 38 Translator De Cecco, Jordans, Lindt
Olivier Male 29 Translator Lindt, Connétable, La Mère Poulard
Table 3 -Key words used in the systematic analysis
Advertising, Artisanal, Company, Employee, Entrepreneur, Fabrication, Factory, Family, Founder, Goodies, Group, Hand-made, Industrial, Job, Know-how, Manager, Offices, Price, Production, Retailer, Supermarket
Table 4 - Quotes illustrating limited corporate knowledge (Themes | Quotes)
Uncertainty | "I think this is a brand which may have existed for a long time, this is a product we imagine to be more traditional, based on traditional recipes, at least in my own perception… A product which may use fewer chemicals, which may be better for our health in the long run. Maybe it is more artisanal, or at least it gives this impression, that it's less industrial. By the way, maybe it's all fake, but that's the impression I have" (Elodie). "It is a family business so I guess the boss must be a little bit like IKEA's, he thinks about its own image, maybe more than others… Now, maybe they were bought, it used to be a family business back in the days but maybe now it's not anymore, I don't know" (Eric). [Question: so you don't know if Eau Ecarlate has remained in the same company?] "No, I have not been interested in the matter" (Sylvie)
Little interest | "I do not think this is a question we ask ourselves when we are in a supermarket [if a brand has a heritage], however, this is a question we ask ourselves if we go to a smaller shop" (Lisa). "Maybe all these products belong to the same company… I don't know, this is just my view but… maybe the same company has many products… there are not a lot of companies on the detergent sector you know."
Personal experience | [Question: do you think some brands cheat about their past?] "I don't know, I don't really watch adverts, I have no idea, it's possible… See, I'm kind of an old lady now so if they exaggerate too much about the past, I do remember. If a brand claims to have existed for a long time and I have never seen it, it raises questions in my mind. Maybe a younger person could be fooled but with me it's more difficult. There are brands my mother used, my grandmother used, so I know it's old" (Annie)
Brand relics | [Question: how do you know they are older brands?]
Table 5 - Quotes illustrating the compromise (Themes | Quotes)
Sense of memory | "First do what you know and then export it, adapt it to marketing, to globalisation, no problem, we need to live in our times but… still have a sense of memory. […] Even if it were a new company opening today, they would have investigated about what we used to know, and how we used to do" (Alexandre)
Transmission | "I think it makes sense to know that the company had been founded in whatever, because there's a real tradition having been perpetuated, know-how… even if it's not from father to son, but from one worker to another, we imagine that the founder did not transmit the company to whoever." (Alexandre)
Modern days production system | "Before, I would have imagined a sort of rustic producer; I mean in a rustic artisanal way. But now, this producer would have adapted to its epoch, he would produce in a more automated way but still keeping this spirit of authenticity." (Guillaume)
Table 6 - Characteristics of familiar heritage brands (Characteristics | Quotes)
Consumer orientation: they are aware of and adapt to consumers'… | "I bought it because it was an organic muesli […] they had many different varieties and it was organic. Now I don't know if everything is organic but I keep buying it and I still have the feeling it's a rather better quality" (Elodie, 38). Briochin: "they have updated themselves, now you have the bathroom special, you have… originally there was only one product! I know because my cousin told me all about Briochin" (Veronique, 49). "… someone who has automated the production but still sticking to its authentic spirit […] you can see this in the different sorts of products, the normal mustard, the mustard with granules…" (Guillaume, 28)
Table 7 - Characteristics of aristocratic heritage brands (Characteristics | Quotes)
Safeguard: opposed to current times | "the problem with novelties is that you always need 5 or 10 years to acknowledge all negative impacts of a product, so right now you don't know. It's a kind of crash-test, when they release a new product, we feel like we're kind of doing the crash tests" (Ludovic, 24). "It has to be related to my grandparents' generation […] that epoch before the generalised industrialisation of food, back when we were still cooking at home…" (Amelie, 39). "… passion of their job" (Bob, 27). "These brands do less special offers, buy 3 get 4 kind of thing. It might be silly but this sort of thing doesn't help on the brand image […] Ferrari would never do this, say, we buy your old car €1000, no they would never do this because if you're buying a Ferrari you're not supposed to wonder how you're going to pay" (Alexandre, 26)
Passion over profit | "I imagine someone in a non… I'm going to say something stupid, but someone who's not looking for profit, someone who does that for his passion for olives, for love" (Nadia, 34)
Product orientation | "we propose something: if you like it, good because we won't change if you don't. Because our objective is not to make money, it's to propose a certain know-how […] Yeah, they're selfish! It relates to the fact they are not really interested in what people may expect, they're focused, when I think about this brand, I have the feeling they're concentrated on what they can get from the…"
Year on year improvement | "This is typically the product that is very simple and has been improved in Italy for generations where they have developed this very traditional and simple thing" (Elodie, 38)
For special occasions | "I think it's related to rather festive occasions, exceptional occasions. If you bring Badoit… It isn't daily water, it's a water bottle for special events" (Sandrine, 39)
Table 8 - Guide for strategic decision-making based upon brand heritage (Target positioning: Familiar Brand | Aristocratic Brand)
Orientation: Consumer orientation | Product orientation
Personality: Brand as a partner | Brand as a landmark
Advert creative themes: Pioneer spirit; Adaptability of the brand; Founder's creativity and inventions | Vicarious and collective nostalgia; Continuity of the brand; Tribute to the founder; Passion over profit; Special occasions
Innovations: Breakthrough; Launch new trends and usages | Incremental improvement of the original products
Degrees of omni-temporality
The literature on corporate heritage shows that corporate heritage brands are not stuck in the past but that they carefully articulate the past, the present and the future [START_REF] Balmer | Corporate heritage identities, corporate heritage brands and the multiple heritage identities of the British Monarchy[END_REF].
Assessing this careful articulation requires access to internal data and often observation of the managerial practices (Burghausen and Balmer, 2014a). Most consumers are not aware of this, and our results show they deduce a certain degree of omni-temporality based on the information they have access to. We find that using historical references does not always lead to an interpretation of omni-temporality. Our informants describe two distinct positions: familiar brands with an emphasis on adaptability (i.e. the present and future), and aristocratic brands with an emphasis on longevity (i.e. the past). Both strategies are implemented at a product brand level and lead to different outcomes, although they all relate to existing research on the concept of brand authenticity, and particularly to the distinction between indexicality and iconicity [START_REF] Grayson | Consumer perceptions of iconicity and indexicality and their influence on assessments of authentic market offerings[END_REF][START_REF] Napoli | Measuring consumer-based brand authenticity[END_REF]. This duality is not surprising given that western societies have two competing visions of a reference to the past. [START_REF] Lowenthal | The Past is a Foreign Country -Revisited[END_REF] summarises different philosophical developments on time and suggests a dialectic between the ancient and the modern: between those who think the past is a source of unbeatable models, and others who see it as a source of inspiration only gaining value if updated. In a marketing context, this echoes the distinction between "repro" and "retro" [START_REF] Brown | Retro-marketing : yesterday's tomorrows[END_REF]. "Repro" stands for reproduction and designates products representing the past more or less as it was, while "Retro" (retrospection) designates products combining an old-style form and updated content. A dominant idea in marketing is that consumers always seek updated versions [START_REF] Brown | Teaching Old Brands New Tricks: Retro Branding and the Revival of Brand Meaning[END_REF]Weindruch, 2016), however, our results tend to nuance this idea.
Consistently with the literature in history [START_REF] Lowenthal | The Past is a Foreign Country -Revisited[END_REF] and in politics about the conservative ideology [START_REF] Femia | The Antinomies of Conservative Thought[END_REF][START_REF] Hawley | Against Capitalism, Christianity, and America: The European New Right[END_REF][START_REF] Huntington | Conservatism as an Ideology[END_REF], the marketing |
04109721 | en | [
"info.info-im",
"info.info-ai",
"info.info-cv"
] | 2024/03/04 16:41:24 | 2023 | https://inria.hal.science/hal-04109721/file/Unsupervised_Polyaffine_Motion_Model_for_Echocardiography_Analysis__final_-2.pdf | Yingyu Yang
Maxime Sermesant
Unsupervised Polyaffine Transformation Learning for Echocardiography Motion Estimation
Keywords: Motion Estimation, Echocardiography, Polyaffine Transformation
Echocardiography plays an important role in the diagnosis of cardiac dysfunction. In particular, motion estimation in echocardiography is a challenging task since ultrasound images suffer from a low signal-to-noise ratio and out-of-view problems. Current deep learning-based models for cardiac motion estimation in the literature estimate the dense motion field with spatial regularization. However, the underlying spatial regularization can only cover a very small region in the neighborhood, which is not enough for a smooth and realistic motion field for the myocardium in echocardiography. In order to improve the performance and quality with deep learning networks, we propose applying polyaffine transformation for motion estimation, which intrinsically regularizes the myocardial motion to be polyaffine. Our thorough experiments demonstrate that the proposed method not only presents better evaluation metrics on the registration of cardiac structures but also shows great potential in abnormal wall motion detection.
Introduction
Echocardiography is one of the most widely used modalities for cardiac dysfunction diagnosis. It's radiation-free, non-invasive, and real-time, making echocardiography very suitable for portable analysis, such as myocardial motion evaluation. However, ultrasound images generally have poorer quality than other modalities, such as MRI and CT, due to their low signal-to-noise ratio. Additionally, limitations in acquisitions and patient variability may cause the myocardium to occasionally be outside the imaging field-of-view. These burdens make motion estimation of the myocardium in echocardiography a very challenging task.
Traditional methods for motion estimation in echocardiography, such as block matching [START_REF] Alessandrini | Detailed evaluation of five 3d speckle tracking algorithms using synthetic echocardiographic recordings[END_REF][START_REF] Azarmehr | An optimisation-based iterative approach for speckle tracking echocardiography[END_REF] or optical flow [START_REF] Alessandrini | Detailed evaluation of five 3d speckle tracking algorithms using synthetic echocardiographic recordings[END_REF][START_REF] Barbosa | Fast Left Ventricle Tracking in 3D Echocardiographic Data Using Anatomical Affine Optical Flow[END_REF][START_REF] Farneb | Two-Frame Motion Estimation Based on Polynomial Expansion[END_REF], are typically time-consuming and not suitable for real-time or portable analysis. However, recent advances in deep learning (DL) have improved both the time efficiency and tracking performance of motion estimation algorithms. DL-based models can generally be classified into two types: optical flow-based models and registration-based models. Optical flowbased DL models estimate dense displacement fields between consecutive image frames and typically require ground truth displacement for supervised training [START_REF] Evain | Motion estimation by deep learning in 2d echocardiography: synthetic dataset and validation[END_REF][START_REF] Østvik | Myocardial function imaging in echocardiography using deep learning[END_REF]. Since obtaining ground truth displacement from real-world echocardiography images can be difficult, synthetic echocardiography datasets are often used for supervised training of these models [START_REF] Alessandrini | Realistic vendor-specific synthetic ultrasound data for quality assurance of 2-d speckle tracking echocardiography: simulation pipeline and open access database[END_REF][START_REF] Evain | Motion estimation by deep learning in 2d echocardiography: synthetic dataset and validation[END_REF]. Registration-based DL models typically register all other frames to end-diastole either in an unsupervised manner or with weak supervision using segmentation masks, enabling them to work on real-world datasets [START_REF] Ahn | Unsupervised motion tracking of left ventricle in echocardiography[END_REF][START_REF] Ta | A semisupervised joint learning approach to left ventricular segmentation and motion tracking in echocardiography[END_REF]. U-net-like architectures are often used as the core design of such methods [START_REF] Ahn | Unsupervised motion tracking of left ventricle in echocardiography[END_REF][START_REF] Ta | A semisupervised joint learning approach to left ventricular segmentation and motion tracking in echocardiography[END_REF]. Many other studies on cardiac MRI motion estimation have also utilized the registration approach with different temporal smoothness strategies [START_REF] Krebs | Probabilistic motion modeling from medical image sequences: application to cardiac cine-mri[END_REF][START_REF] Qin | Joint learning of motion estimation and segmentation for cardiac mr image sequences[END_REF]. Recently, researchers have also incorporated biomechanical modeling knowledge into deep learning networks with the aim of improving the generalizability of motion estimation performance [START_REF] Qin | Biomechanics-informed neural networks for myocardial motion tracking in mri[END_REF][START_REF] Zhang | Learning correspondences of cardiac motion from images using biomechanics-informed modeling[END_REF].
The motion of the myocardium is not arbitrary, and various spatial regularization techniques have been explored in the literature, such as the divergence penalty [START_REF] Mansi | ilogdemons: A demons-based registration algorithm for tracking incompressible elastic biological tissues[END_REF] for enforcing incompressibility, the rigidity penalty for smoothness [START_REF] Staring | A rigidity penalty term for nonrigid registration[END_REF], and the elastic strain energy for mechanical correctness [START_REF] Papademetris | Estimation of 3d left ventricular deformation from echocardiography[END_REF], among others. Many deep learning models have incorporated these regularization techniques into their work [START_REF] Ahn | Unsupervised motion tracking of left ventricle in echocardiography[END_REF][START_REF] De Vos | A deep learning framework for unsupervised affine and deformable image registration[END_REF][START_REF] Zhang | Learning correspondences of cardiac motion from images using biomechanics-informed modeling[END_REF]. However, these regularization techniques are usually based on first- or second-order derivatives of the displacement vector field, which are applied at the pixel level in discrete implementations and are insufficient for studying the myocardium in echocardiography. The high noise-to-signal ratio in echocardiography degrades the quality of the estimated motion field, despite the pixel-wise constraints in deep learning methods. Other methods tackle the smoothness issue at the transformation level, for instance using regionally affine deformations [START_REF] Mcleod | Spatio-temporal tensor decomposition of a polyaffine motion model for a better analysis of pathological left ventricular dynamics[END_REF].
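To make this limitation concrete, the pixel-level regularizers mentioned above typically penalise finite differences of the displacement field. The following is only an illustrative sketch of such a first-order penalty; it is not taken from any of the cited implementations:

```python
import numpy as np

def first_order_smoothness(disp):
    """Pixel-wise smoothness penalty on a dense 2D displacement field.

    disp: array of shape (2, H, W) holding the x- and y-displacement maps.
    Returns the mean squared finite difference, a discrete version of the
    diffusion-like regularisers used by many registration networks.
    """
    dy = disp[:, 1:, :] - disp[:, :-1, :]   # vertical neighbour differences
    dx = disp[:, :, 1:] - disp[:, :, :-1]   # horizontal neighbour differences
    return (dy ** 2).mean() + (dx ** 2).mean()
```

Each finite difference only couples adjacent pixels, which is why such penalties act on a very small neighbourhood; the polyaffine formulation described below constrains the motion globally instead.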
In this work, we introduce a deep learning-based motion model leveraging such an approach and tailored for 2D echocardiography motion estimation, called the PolyAffine Motion model (PAM). It incorporates global-level smoothness regularization for the myocardium through polyaffine motion fusion. Our contributions include:
- Proposing a comprehensive pipeline for using PAM in a weakly-supervised manner on a large public echocardiography dataset, demonstrating the effectiveness of our approach for unsupervised learning of myocardial motion.
- Conducting extensive experiments on various real-world datasets to show that PAM outperforms other popular deep learning-based motion estimation methods, thereby highlighting its potential for clinical applications.
Methodology
Polyaffine motion model
Given a source image $I_S$ and a target image $I_T$, we would like to estimate a dense motion field $\mathcal{T}_{S\leftarrow T}: \mathbb{R}^2 \to \mathbb{R}^2$ from the target image $I_T$ to the source image $I_S$ such that
$$I_S(\mathcal{T}_{S\leftarrow T}(\cdot)) = I_T(\cdot). \quad (1)$$
Inspired by the polyaffine motion fusion framework proposed by Arsigny et al. [START_REF] Arsigny | A fast and log-euclidean polyaffine framework for locally linear registration[END_REF], we adapted the motion estimation module from [START_REF] Siarohin | First order motion model for image animation[END_REF] to develop our proposed method for cardiac motion estimation in echocardiography.
The Polyaffine motion model consists of two steps. Firstly, the motion of the left ventricle is approximated through the sparse motion of several key points. An encoder-decoder network is used to output the location of key points and their local affine mapping for both I S and I T separately. Secondly, from the sparse motion, we obtain the final dense motion field through polyaffine motion fusion. The proposed method is illustrated in Fig. 1.
Key point and affine transformation estimation
We adopt the encoder-decoder architecture for key-point extraction as presented in [START_REF] Siarohin | First order motion model for image animation[END_REF] and provide a brief review of the method from the point of view of affine transformation. In order to process each image separately, we assume there exists an abstract reference frame $R$. Given an image $X$, the encoder-decoder network outputs the estimated key points $p^k_X$, $k = 1, 2, \dots, K$, as well as the corresponding linear mappings $A^k_{X\leftarrow R} \in \mathbb{R}^{2\times 2}$, $k = 1, 2, \dots, K$.
The local affine transformation from the target image $I_T$ to the source image $I_S$ is then computed using the following equation:
$$\mathcal{T}^k_{S\leftarrow T}(z) = \underbrace{\bar{A}^k_{S\leftarrow T}\, z}_{\text{Linear mapping}} + \underbrace{\big(p^k_S - \bar{A}^k_{S\leftarrow T}\, p^k_T\big)}_{\text{Translation}}, \quad (2)$$
where $z \in \mathbb{R}^2$ represents the coordinate in the target image, and $\bar{A}^k_{S\leftarrow T} = A^k_{S\leftarrow R}\,(A^k_{T\leftarrow R})^{-1}$.
In order to capture motion around the myocardium, we introduce a myocardium-related key-point prior (Section 3.2) and associated loss functions (Section 2.2). This encourages the network to output key points close to the myocardium, in contrast to [START_REF] Siarohin | First order motion model for image animation[END_REF], which employed a self-supervised approach for learning key-point positions.
Polyaffine motion fusion. Once we obtain the local affine transformations, the dense motion field is computed through direct polyaffine motion fusion. For each local affine motion, a spatial weight $W_k(p^k_T, \sigma^2)$ controls its influenced region. It is a 2D isotropic Gaussian distribution centered at key point $p^k_T$ with variance $\sigma^2$. $W_0$ represents the weight for the background region and the left-ventricle cavity area. It is computed as follows:
$$W_0 = \mathrm{ReLU}\Big(1 - \sum_{k=1}^{K} W_k(p^k_T, \sigma^2)\Big). \quad (3)$$
We normalise all weights into $\tilde{W}_k = \frac{W_k}{\sum_{k=0}^{K} W_k}$, where $k = 0, 1, 2, \dots, K$. An example is shown in Fig. 1. We apply them to compute the polyaffine dense motion
$$\mathcal{T}_{S\leftarrow T}(z) = \tilde{W}_0\, z + \sum_{k=1}^{K} \tilde{W}_k(p^k_T)\, \mathcal{T}^k_{S\leftarrow T}(z). \quad (4)$$
The original first-order motion model (FOMM) [START_REF] Siarohin | First order motion model for image animation[END_REF] utilised a second encoder-decoder network to estimate the normalised weights for each local affine transformation, which cannot guarantee that the center of the weights stays close to the corresponding key point, and is thereby unstable for myocardial motion estimation.
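For illustration, Equations (3) and (4) can be implemented directly once the key points and local affine parameters are available. The sketch below uses NumPy and our own variable names; it is not the authors' implementation:

```python
import numpy as np

def polyaffine_fusion(grid, keypoints_T, affine_S_T, trans_S_T, sigma2):
    """Fuse K local affine transformations into a dense motion field.

    grid:        (H, W, 2) coordinates z of the target image
    keypoints_T: (K, 2) key-point locations p_T^k in the target image
    affine_S_T:  (K, 2, 2) linear parts A_bar^k of the local affines
    trans_S_T:   (K, 2) translation parts p_S^k - A_bar^k p_T^k
    sigma2:      variance of the isotropic Gaussian weights
    """
    H, W, _ = grid.shape
    K = keypoints_T.shape[0]
    weights = np.empty((K, H, W))
    local = np.empty((K, H, W, 2))
    for k in range(K):
        d2 = ((grid - keypoints_T[k]) ** 2).sum(-1)
        weights[k] = np.exp(-0.5 * d2 / sigma2)              # W_k
        local[k] = grid @ affine_S_T[k].T + trans_S_T[k]     # T^k_{S<-T}(z)
    w0 = np.maximum(1.0 - weights.sum(0), 0.0)               # Eq. (3), ReLU
    norm = w0 + weights.sum(0)                               # normalisation
    dense = (w0[..., None] * grid +
             (weights[..., None] * local).sum(0)) / norm[..., None]  # Eq. (4)
    return dense
```

The explicit Gaussian weights replace the second encoder-decoder that FOMM uses to predict fusion masks, which keeps every weight map centred on its key point by construction.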
Sequence motion estimation. Considering a sequence of image frames $I_1, I_2, \dots, I_N$, the assumption of an abstract reference frame enables a fast way to obtain the dense motion field between any arbitrary pair of frames in the sequence. Without loss of generality, we assume $I_{ED}$ is the frame at end-diastole, $I_{ES}$ is the frame at end-systole, and $I_{ED}$ is ahead of $I_{ES}$ in time. The local affine transformation from $I_{ED}$ to $I_{ES}$ can be calculated by composition:
$$\mathcal{T}^k_{ES\leftarrow ED}(z) = \mathcal{T}^k_{ED+1\leftarrow ED} \circ \mathcal{T}^k_{ED+2\leftarrow ED+1} \circ \cdots \circ \mathcal{T}^k_{ES\leftarrow ES-1} = \bar{A}^k_{ES\leftarrow ED}\, z + \big(p^k_{ES} - \bar{A}^k_{ES\leftarrow ED}\, p^k_{ED}\big), \quad (5)$$
where $\bar{A}^k_{ES\leftarrow ED} = A^k_{ES\leftarrow R}\,(A^k_{ED\leftarrow R})^{-1}$.
The final dense motion is computed by combining the local motion, as described in Equation 4.
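As a small illustration of Equation (5), the local affine between any two frames can be recovered directly from their mappings to the abstract reference frame R; the variable names below are ours:

```python
import numpy as np

def affine_between_frames(A_ES_R, p_ES, A_ED_R, p_ED):
    """Local affine from frame ED to frame ES for one key point (Eq. 5).

    A_*_R: (2, 2) linear mapping estimated with respect to the reference frame R
    p_*:   (2,)  key-point location in the corresponding frame
    Returns A_bar and t such that T(z) = A_bar @ z + t maps ED coordinates
    to ES coordinates, without chaining pairwise registrations explicitly.
    """
    A_bar = A_ES_R @ np.linalg.inv(A_ED_R)
    t = p_ES - A_bar @ p_ED
    return A_bar, t
```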
Loss functions
The model is trained end-to-end using a combination of loss functions, which can be grouped into the following subsets for sequence motion estimation.
Keypoint myocardium losses
We propose two loss functions to enforce the position of key points near the myocardium region. The first loss, denoted by L kp_prior , penalizes the L 2 norm between the estimated key-point position and a prior position. We obtain the prior key-point position from the available training masks, as described in Section 3.2. The second loss, denoted by L kp_myo_ED , constrains key points at end-diastole (ED) to reside within the myocardium region by penalizing the distance between each key point and the myocardium.
$$\mathcal{L}_{kp\_myo\_ED} = -\sum_{k=1}^{K} H(p^k, \sigma^2_H) * (\mathrm{Mask}_{myo\_ED} - 0.5), \quad (6)$$
where $H(p^k, \sigma^2_H)$ is the isotropic Gaussian heatmap centered at $p^k$ with variance $\sigma^2_H$, and $\mathrm{Mask}_{myo\_ED}$ is the binary mask of the myocardium at ED.
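For clarity, Equation (6) can be read as rewarding Gaussian heat-maps that overlap the ED myocardium (mask values shifted to plus or minus 0.5). A minimal sketch with our own helper names, not the authors' code:

```python
import numpy as np

def kp_myo_ed_loss(keypoints, mask_myo_ed, grid, sigma2_h):
    """L_kp_myo_ED: pull key points towards the myocardium at end-diastole.

    keypoints:   (K, 2) key-point positions at ED
    mask_myo_ed: (H, W) binary myocardium mask at ED
    grid:        (H, W, 2) pixel coordinates
    """
    loss = 0.0
    signed_mask = mask_myo_ed - 0.5          # +0.5 inside, -0.5 outside
    for p in keypoints:
        d2 = ((grid - p) ** 2).sum(-1)
        heat = np.exp(-0.5 * d2 / sigma2_h)  # H(p, sigma_H^2)
        loss -= (heat * signed_mask).sum()   # Eq. (6)
    return loss
```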
Keypoint equivalence losses
Another two losses L equi_kp and L equi_affine impose an equivalence constraint on the detected key points [START_REF] Siarohin | First order motion model for image animation[END_REF]. They force the model to predict consistent key points and local linear mappings when applying a known transformation to the input image.
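Schematically, the key-point part of this constraint can be written as follows; the exact formulation follows the reference above, so this is only an illustrative sketch with hypothetical names:

```python
import numpy as np

def equivariance_kp_loss(kp_original, kp_deformed, known_transform):
    """Key-point equivariance: key points detected on a deformed image,
    mapped back with the known transformation, should match the originals.

    kp_original: (K, 2) key points predicted on image X
    kp_deformed: (K, 2) key points predicted on the deformed image
    known_transform: callable mapping deformed-image coordinates back to X
    """
    recovered = np.stack([known_transform(p) for p in kp_deformed])
    return np.abs(kp_original - recovered).mean()
```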
Registration losses
The last four losses regularize the final dense motion field through image similarity and key-point similarity. First, for each pair of consecutive frames in the sequence, we constrain
$$\mathcal{L}_{seq\_im} = \sum_{seq} NCC\big(I_T, I_S(\mathcal{T}_{I_S\leftarrow I_T})\big), \quad (7)$$
$$\mathcal{L}_{seq\_kp} = \sum_{seq} \frac{1}{K}\sum_{k=1}^{K} \big|H(p^k_T, \sigma^2_H) - H(p^k_S, \sigma^2_H)(\mathcal{T}_{I_S\leftarrow I_T})\big|, \quad (8)$$
where NCC represents the normalised cross-correlation. Second, in order to force the model to learn temporal motion, we apply a second pair of similarity losses between the image at end-diastole and all frames after the end-diastole frame, denoted as L ED2any_im and L ED2any_kp for image similarity and key-point similarity respectively. The total loss for the PAM model becomes
$$\mathcal{L}_{total} = \lambda_1 \mathcal{L}_{kp\_prior} + \lambda_2 \mathcal{L}_{kp\_myo\_ED} + \lambda_3 \mathcal{L}_{equi\_kp} + \lambda_4 \mathcal{L}_{equi\_affine} + \lambda_5 \mathcal{L}_{seq\_im} + \lambda_6 \mathcal{L}_{seq\_kp} + \lambda_7 \mathcal{L}_{ED2any\_im} + \lambda_8 \mathcal{L}_{ED2any\_kp}. \quad (9)$$
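As an illustration of the image term used in Equations (7) and (9), a global normalised cross-correlation can be written as below; the authors may use a windowed variant, so this is only a sketch:

```python
import numpy as np

def ncc(img_a, img_b, eps=1e-8):
    """Normalised cross-correlation between two images of identical shape."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    return (a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps)

def seq_image_term(target, warped_source):
    """Similarity term of Eq. (7) for one frame pair; summed over the sequence
    in practice. NCC is a similarity, so it would be negated when minimising."""
    return ncc(target, warped_source)
```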
3 Experiments
Datasets
Three public datasets of echocardiography are included in our study: EchoNet, CAMUS, and HMC-QU.
Dataset preprocessing
Pseudo myocardium mask. There is no ground-truth myocardium mask in the EchoNet dataset. To provide guidance for key points, we generate a pseudo myocardium mask for ED and ES by applying a dilation operation using 13x13 and 17x17 structuring elements to the left ventricle mask at the ED and ES frames respectively. The difference between the dilated mask and the original one is regarded as the pseudo myocardium mask (see Fig. 2(b)). Key-point prior at end-diastole. A mean myocardium mask is computed using all the pseudo myocardium masks at ED from the training set. Then, all the pixels of the mean mask are clustered into 10 groups using the KMeans function from the scikit-learn package [START_REF] Pedregosa | Scikit-learn: Machine learning in Python[END_REF], where the center of each group is considered as one key-point prior (see Fig. 2(c)).
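A possible sketch of this preprocessing is given below, assuming binary left-ventricle masks stored as NumPy arrays and a binarised mean mask; the helper names are ours, not those of the released pipeline:

```python
import numpy as np
from scipy.ndimage import binary_dilation
from sklearn.cluster import KMeans

def pseudo_myocardium_mask(lv_mask, size):
    """Ring between the dilated and the original LV mask (size x size element)."""
    structure = np.ones((size, size), dtype=bool)
    dilated = binary_dilation(lv_mask.astype(bool), structure=structure)
    return dilated & ~lv_mask.astype(bool)

def keypoint_priors(mean_myo_mask, n_keypoints=10):
    """Cluster the pixels of the (binarised) mean ED myocardium mask into priors."""
    ys, xs = np.nonzero(mean_myo_mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    kmeans = KMeans(n_clusters=n_keypoints, n_init=10).fit(coords)
    return kmeans.cluster_centers_            # (10, 2) prior key-point locations

# Example usage following the sizes reported above:
# myo_ed = pseudo_myocardium_mask(lv_mask_ed, 13)
# myo_es = pseudo_myocardium_mask(lv_mask_es, 17)
```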
Implementation
We compare our proposed model with the conditional variational autoencoder (CVAE) method proposed in [START_REF] Krebs | Probabilistic motion modeling from medical image sequences: application to cardiac cine-mri[END_REF], which has demonstrated effectiveness in sequential motion modelling of cardiac images. Its former registration version [START_REF] Krebs | Learning a probabilistic model for diffeomorphic registration[END_REF] has shown more regular motion field than other deep learning methods, including VoxelMorph [START_REF] Dalca | Unsupervised learning for fast probabilistic diffeomorphic registration[END_REF]. We implemented the CVAE method following its description in [START_REF] Krebs | Probabilistic motion modeling from medical image sequences: application to cardiac cine-mri[END_REF] and used the same training hyper-parameters. However, we added a Dice loss between the transformed end-systolic (ES) mask and end-diastolic (ED) mask during CVAE training to keep it consistent with our PAM model, which used segmentation mask information during training (see Section 2.2).
We trained the PAM model using an Adam optimizer with a learning rate of 1e-4. To determine the optimal hyperparameters, we conducted experiments on a randomly selected subset of 1000 training examples from the EchoNet dataset. The hyperparameters for the loss function were set to $\lambda_1 = 20$, $\lambda_2 = 0.1$, $\lambda_3 = 50$, $\lambda_4 = 50$, $\lambda_5 = \lambda_6 = \lambda_7 = \lambda_8 = 100$. The variance of the Gaussian heatmap was set to $\sigma^2 = 0.05$ for the affine transformation weights (see Equation 3), and $\sigma^2_H = 0.005$ for all other scenarios. We trained the model for a maximum of 100 epochs, and applied early stopping if the validation loss did not improve over 10 epochs.

We first evaluated the motion estimation accuracy by assessing the registration performance using the available segmentation masks from all the test datasets. For the EchoNet test split, we compared the ground truth left ventricle mask of end-diastole (ED) with that transformed from end-systole (ES). For the CAMUS dataset, the same evaluation was applied to the myocardium mask for ED/ES. For the HMC-QU dataset, the myocardium masks of one cardiac cycle were all transformed to ED using the estimated motion field. To enable group-level statistical analysis along the cardiac cycle, the temporal metric was interpolated to the same length.
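As a simple illustration of this mask-based evaluation, the Dice score between the ground-truth ED mask and the warped ES mask can be computed as below (nearest-neighbour warping, hypothetical variable names):

```python
import numpy as np

def warp_mask_to_ed(mask_es, sample_coords):
    """Warp the ES mask onto the ED grid by nearest-neighbour look-up.

    mask_es:       (H, W) binary mask at end-systole
    sample_coords: (H, W, 2) array giving, for every ED pixel, the (x, y)
                   location in the ES frame predicted by the motion model
    """
    H, W = mask_es.shape
    x = np.clip(np.rint(sample_coords[..., 0]).astype(int), 0, W - 1)
    y = np.clip(np.rint(sample_coords[..., 1]).astype(int), 0, H - 1)
    return mask_es[y, x]

def dice(mask_a, mask_b, eps=1e-8):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + eps)

# score = dice(mask_ed_gt, warp_mask_to_ed(mask_es_gt, predicted_coords))
```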
Results
In addition to the good performance from various evaluation metrics, the proposed PAM model has the potential for abnormal wall motion detection. We provide examples of myocardial infarction (MI) from the HMC-QU dataset (see Supplementary video material) and show the abnormal strain values in different segments, highlighting the consistency of our findings with the ground truth diagnosis.
Discussion and conclusion
In this paper, we proposed a polyaffine motion model (PAM) for echocardiography motion estimation. The PAM model demonstrated excellent motion estimation performance on real-world echocardiography datasets and showed good generalization to unseen datasets from other centers. Our explicit design of fusion weights enabled efficient learning of local affine transformation, and the intrinsic polyaffine structure improved the smoothness of the motion field, showing potential for abnormal wall motion detection. In the future, we will focus on integrating temporal regularization for the PAM model and conducting evaluations on synthetic datasets with known ground-truth displacement.
Fig. 1: Method overview.
Fig. 2: Dataset overview. (a) EchoNet example and the given annotations of left ventricle tracing. (b) Left ventricle cropping with the generated pseudo myocardium contour (blue). (c) The mean mask from the EchoNet training set and the 10 prior key points. (d) HMC-QU example and the given annotation of the myocardium. (e) CAMUS example and the given annotation of different cardiac structures. (MYO: myocardium, LV: left ventricle, LA: left atrium)
Image sequences are cropped around the left ventricle center according to the provided segmentation/tracings (see Fig. 2(b)). In this study, we follow the given data division of the EchoNet dataset, with 7465 samples for training, 1288 samples for validation and 1277 samples for testing. The CAMUS and HMC-QU datasets are used for evaluation during the test phase.
Fig. 3: Evaluation results on test datasets. (a-c) On the EchoNet test split using available left ventricle masks of ED/ES. (d-f) On the HMC-QU dataset using frame-wise myocardium masks. The original line represents the comparison between the ground truth ED frame and the ground truth mask of each frame. (g-i) On the CAMUS 4-chamber view dataset using myocardium masks of ED/ES. (HD: Hausdorff distance, MSD: mean surface distance)
Table 1: Performance comparison of different methods on the EchoNet and CAMUS datasets. FOMM: the original model in [START_REF] Siarohin | First order motion model for image animation[END_REF], without prior of any key points (Prior), without explicit design of polyaffine weights (Polyaffine), without registration penalty between ED and any other frames (Sequence). All = FOMM+Prior+Polyaffine+Sequence.
Method | EchoNet-LV: Dice, HD (pixels), MSD (pixels) | CAMUS-MYO: Dice, HD (mm), MSD (mm)
All = PAM (ours) 0.92 7.33 2.23 0.81 7.43 1.99
FOMM+Prior+Polyaffine 0.91 7.53 2.36 0.80 7.95 2.08
FOMM+Prior 0.77 18.36 5.46 0.57 13.85 3.89
FOMM [23] 0.75 22.99 6.07 0.48 17.27 4.41
CVAE [13] 0.91 7.97 2.30 0.77 9.91 2.67
https://echonet.github.io/dynamic/
https://www.creatis.insa-lyon.fr/Challenge/camus/
https://www.kaggle.com/datasets/aysendegerli/hmcqu-dataset
Acknowledgements This work has been supported by the French government through the National Research Agency (ANR) Investments in the Future with 3IA Côte d'Azur (ANR-19-P3IA-0002) and by Inria PhD funding. The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support. |
00410993 | en | [
"info.info-ni"
] | 2024/03/04 16:41:24 | 2007 | https://inria.hal.science/inria-00410993/file/article.pdf | Alexandre Denis
email: [email protected]
Meta-communications in Component-base Communication Frameworks for Grids
Applications are faced with several network-related problems on current grids: heterogeneous networks, firewalls, NAT, private IP addresses, non-routed networks, performance problems on WAN. Moreover, the requirements concerning communications are varied and the acceptable tradeoffs highly depends on the applications. A solution to reach the flexibility regarding communication on grids is the use of a component-based communication framework. The users then compose their own protocol stacks by assembling building blocks in the way they want. However, a truly flexible and dynamic component-based communication framework needs a meta-communication channel for its out-of-band communications required by dynamic component assembly in a consistent way on multiple nodes. The meta-communication channel is useful for some "brokered" communication methods, too, and in particular those designed to cross firewalls. The meta-communication channel has often been the "weakest link " of component-based communication frameworks: bottleneck for the performance, back-door from the security point of view, and limited connectivity.
In this article, we present an architecture for a meta-communication channel that suffers from none of the aforementioned limitations. It exhibits good properties regarding connectivity, security and performance. Thus, the gain in flexibility brought by software components may be fully exploited without trading anything against flexibility.
Introduction
The goal of grid computing is to aggregate the computing power of multiple clusters of PCs and parallel machines scattered throughout multiple sites. Undoubtly, network communications play a critical role to reach this purpose.
Communication management on grids is different from a lot of other applications involving networking. The main characteristic of networks in grids is heterogeneity. The networking technologies are various, ranging from highperformance networks between nodes of clusters (SAN) through wide area networks (WAN) with a latency of multiple tenths or hundreds of milliseconds and a random bandwidth. These multiple levels bring each their own issues, thus an application for grids is faced with all of them. We consider in particular the following problems on modern grids:
Connectivity -To protect their machines from intruder attacks, many site administrators have drastically restricted the connectivity to the Internet. Many sites are using firewall routers, non-routed private networks [START_REF] Rekhter | Address Allocation for Private Internets[END_REF], or hide their machines via Network Address Translation (NAT) [START_REF] Egevang | The IP Network Address Translator (NAT). Request for comments 1631[END_REF]. As a consequence, plain TCP/IP is not sufficient to get a full connectivity, from every node to every other on the grid. NAT and firewalls introduce nonsymmetry in the topology. Some nodes are hidden and not visible from the Internet. This is quite unusual for people used to parallel computing where it is traditional to have an all-to-all communication channel with no restriction.
Security -As WAN connections between sites cross the Internet, they are vulnerable to attackers. Thus, many application require authentication of communication peers and privacy based on encryption. The widespread solution to authentication and encryption is the use of the Transport Security Layer (TLS) [START_REF] Dierks | The TLS Protocol Version 1.0. Request for comments 2246[END_REF], a successor of the Secure Sockets Layer (SSL).
Performance -Since most applications on grids expect high performance, performance is a critical aspect of network communication. Different network have different performance properties. Even a given network may exhibit different performance results depending on the protocol used. For example, plain TCP can hardly exploit the bandwidth capacity of WAN connections. One solution to improve TCP performance in WANs is to use multiple TCP streams in parallel. The Globus implementation of GridFTP [START_REF] Allcock | GridFTP Protocol Specification[END_REF] is probably the best-known tool implementing this approach. Alternatively, WAN performance can be improved using data compression, as implemented, e.g., in the AdOC library [START_REF] Jeannot | Adaptive Online Data Compression[END_REF].
In this paper, we will use two different metrics for evaluating performance; we will consider separately the link utilization performance (characterized by the bandwidth and latency), and the connection establishment performance (characterized by the connection establishment delay).
The problems to overcome are very different and influence each other, e.g. usually improving security degrades performance, thus tradeoffs have to be made. However, the applications that may benefit from a deployment on grids are varied with very different requirements regarding security and performance from one application to another. There won't be any best tradeoff suitable for any application.
A communication framework for grids has to be able to utilize a very large spectrum of networking technologies, must be flexible enough to be adapted to the requirements of various applications, and must overcome the main problems of communication on grids, namely connectivity, security and performance. One solution to reach such a flexibility in a communication framework is the use of a component-based approach. The user is offered the ability to assemble itself the building blocks he/she wants to get a custom service. For a good flexibility and adaptability, we will see that it is welcome that the communication framework implements an overlay network for out-of-band communications, that we call a meta-communication channel. The meta-communication channel is often the weakest link of a component-based communication framework. It may introduce security holes, performance bottlenecks, or connectivity restrictions.
This paper presents on-going work on a component-based approach for the meta-communication channel itself, in order to solve all the aforementioned limitations at the same time.
The remaining of this paper is divided as follows: the second section analyzes component-based communication frameworks for grids and their needs and requirements for a meta-communication channel. Section 3 explains our proposal for managing such a meta-communication channel. Section 4 describes and evaluates our implementation of our proposal in the PadicoTM communication framework. Section 5 discusses related work, and section 6 draws conclusions and directions for further work.
Component-based Communication Frameworks
In this section, we study the principles and operation of component-based communication frameworks, and exhibit their need for a meta-communication channel.
Motivation and Principles
The most challenging aspect of managing communication on grids is the heterogeneity of the resources and the variety of applications -and thus of their requirements for a communication sub-system. The networks range from high-performance networks within clusters to wide-area networks between sites. Not only are their properties different, but so are their protocols, communication methods and programming interfaces. Moreover, the requirements for the communication sub-system depend on the application; the performance vs. security tradeoff largely depends on the nature of the application and cannot be hard-coded in a communication framework. Such flexibility can hardly be reached with the usual two-layer portability model based on an abstraction layer and drivers for each supported resource. Considering the variety of cases to deal with, on grids it would be highly desirable to have communication methods assembled by the users depending on the application and the kind of networks and protocols involved. For example, a user may want to add compression or encryption on the fly to any communication method; another user may want no encryption at all to get the best achievable performance with non-critical data.
To reach such a flexibility, it has been proposed [START_REF] Bresnahan | The eXtensible Input Output library for the Globus Toolkit[END_REF][START_REF] Hutchinson | The x-kernel: An architecture for implementing network protocols[END_REF] to manage communications with a freely and dynamically assembled protocol stack made of several simple building blocks. Such a technique is nowadays commonly used in all fields of software development and is known under the name of software component.
In the remainder of this paper, we will call "component-based" a communication framework based on freely assembled building blocks.
Such a flexible assembled protocol stack based on "building blocks" has been implemented in particular in x-kernel [START_REF] Hutchinson | The x-kernel: An architecture for implementing network protocols[END_REF], Globus XIO [START_REF] Bresnahan | The eXtensible Input Output library for the Globus Toolkit[END_REF], NetIbis [START_REF] Olivier Aumage | Netibis: An efficient and dynamic communication system for heterogeneous grids[END_REF] and PadicoTM [START_REF] Denis | PadicoTM: An open integration framework for communication middleware and runtimes[END_REF].
Need for a Meta-communication Channel
In this section we introduce the concept of meta-communication and its motivations. Formally, we distinguish two classes of network communications: meta-communications and data communications.
• Data communications are communications carrying data from the upper levels -middleware or application-which are using the communication framework. These communications are controlled through the API of the communication framework.
• Meta-communications are communications used internally by the communication framework or one of its components. They are sometimes called service links, control channel or out-of-band communications in some other communication frameworks.
In the next paragraphs we explain why the meta-communications are welcome in a communication framework for grids.
Controlling the assembly. As a result of the component-based nature of a communication framework, various assembly schemes of building blocks may be selected to adapt to the requirements and networking resources. However, this introduces the problem of choosing the appropriate assembly and ensuring that all peers (i.e. client and server) use the same assembly. For example, if a server uses zip compression over plain TCP and a client uses plain TCP directly, they are not likely to understand each other. At least two approaches are possible to solve this problem:
Static component stack -This is the approach used in Globus XIO [START_REF] Bresnahan | The eXtensible Input Output library for the Globus Toolkit[END_REF].
Client and server know in advance the protocol stack to use. Once the server is bound to a protocol stack, clients must use the same protocol stack. This approach is simple but suffers from a lack of flexibility; the servers must know in advance where the requests will come from. As a consequence, for a given server, all clients have to use the same protocol stack. However, the user may want to use different protocol stacks, for example for connections coming from nodes on the same cluster reachable through a high-performance network and for connections that cross an insecure and slow WAN.
Dynamically assembled component stack -This is the approach used in PadicoTM [START_REF] Denis | PadicoTM: An open integration framework for communication middleware and runtimes[END_REF] and NetIbis [START_REF] Olivier Aumage | Netibis: An efficient and dynamic communication system for heterogeneous grids[END_REF]. Both parties agree on the fly on the protocol stack to use. Therefore a server is not required to know in advance where the requests will come from, and different clients to the same server may even use different protocol stacks. The dynamically assembled component stack strategy uses the following algorithm: when a client requests a new connection establishment, the communication framework first selects the assembly scheme to use according to configuration rules and depending on the nodes involved. Then the framework sends an assembly request to the framework of the server node; this request asks the server node to create an instance of the selected protocol stack (on the server side).
In the meantime, the client creates its own instance of the selected protocol stack. Finally, the client uses the usual connection mechanisms on its stack, and is sure that the server is already listening with the same protocol stack.
The main obstacle to dynamic assembly is that there must be a way of sending the assembly request to the framework on the server node even though the connection is not established yet. Thus we need a pre-existing framework-to-framework communication channel to send meta-data. This is precisely the role of the meta-communication channel. Dynamic assembly using a control channel (meta-communication channel) is depicted in Figure 1: in step 1, node B does not know the assembly that will be used (actually, it does not even know that a connection will be established from node A); node A sends a connection request with the assembly description embedded in the request. In step 2, node B locally builds the requested assembly and then sends an acknowledgement with connection information (the port number) to node A. In step 3, node A establishes the data connection through the selected component stack.
We should notice that we have restricted our study to the case of client-server connection establishment. However, some other connection schemes are possible. For example, PadicoTM has the notion of circuit, which is composed of a set of nodes (roughly similar to an MPI communicator). It is possible to apply the same algorithm to a larger set of nodes than the two of the client-server case, but we will only consider the case of client-server in the remainder of this paper to avoid needless complexity.
Brokered communication. Some communication methods need to exchange information prior to establishing connections. Plain TCP is the best-known example of this. To establish a TCP connection on an ephemeral port, the port number first has to be transmitted from the server to the client before the client can connect to this port. There are various methods to solve this problem:
• listen on a well-known (fixed) port instead of an ephemeral port;
• use a third party that plays the role of directory (or "name service");
• send the port number through a meta-communication channel, preexisting before the data connection is attempted.
The first solution may not work in case the chosen port is busy, and it does not support multiple instances. The second solution supposes that all nodes are able to communicate with a third party; this means that the third party actually establishes an indirect route for meta-communications between nodes.
Communication methods other than plain TCP can benefit from a meta-communication channel. For example, in case a server is behind a firewall that drops incoming packets but not outgoing packets (a common case), or behind a NAT [START_REF] Egevang | The IP Network Address Translator (NAT). Request for comments 1631[END_REF] gateway, we establish connections in the outgoing direction; this is the so-called reverse connection method. Clients send a request to the server so that it connects to them. Another technique for crossing firewalls is TCP splicing (also called "simultaneous SYN" or "simultaneous initiation" in [START_REF] Postel | Transmission Control Protocol[END_REF]): both endpoints need to exchange port numbers, and need to synchronize themselves to succeed in the simultaneous connection. Both reverse connection and TCP splicing need to exchange meta-data. The availability of a meta-communication channel which allows component-to-component communications facilitates the use of plain TCP over dynamic ports and enables connection methods that would not be available without it. Thus the meta-communication channel is a must, especially for communication methods designed to overcome the connectivity issues typically encountered in a grid environment.
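As an illustration of the reverse connection method only (TCP splicing depends on precise timing and is harder to demonstrate portably), the following self-contained Python sketch shows the firewalled side opening the TCP connection in the outgoing direction; the address and port are arbitrary and both roles run in one process for simplicity.

```python
# Toy demonstration of the reverse connection method (illustrative only).
import socket, threading

def firewalled_server():
    # Outgoing connections pass the firewall while incoming ones are dropped,
    # so the server initiates the TCP connection towards the client.
    conn = socket.create_connection(("127.0.0.1", 46000))
    conn.sendall(b"hello through the reverse connection")
    conn.close()

# Client side: listen for the connection that the server opens towards us.
listener = socket.create_server(("127.0.0.1", 46000))
threading.Thread(target=firewalled_server).start()
conn, _ = listener.accept()
print("client received:", conn.recv(64).decode())
conn.close(); listener.close()
```

Once established, the connection is used exactly as if the client had opened it; only the direction of the TCP handshake changes.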
To dynamically assemble component stacks and to use brokered communication methods, we need a meta-communication channel which allows framework-to-framework and component-to-component communications. We define the meta-communication channel as a communication channel that:
• allows communication from every node to every other node;
• exists before any data connection is attempted;
• exists implicitly, i.e. it is created without any explicit action from the user, as soon as the processes are started.
Requirements of a Meta-communication Channel
In this section, we define and analyze the requirements that a meta-communication channel must fulfill to be used in a communication framework for grids. Actually, the meta-communication channel is often designed with little care and is usually the "weakest link" of a communication framework. However, the whole communication framework cannot have better connectivity, security, and performance than its meta-communication channel. Therefore, the meta-communication channel has the same requirements as the data communication channels, namely: connectivity, security, and performance.
Connectivity.
Nodes not reachable through the meta-communication channel are not able to use dynamically assembled component stacks since there is no way to send an assembly request. Moreover, without a meta-communication channel, no brokered communication method can be used. As a consequence, if a node is not reachable through plain TCP (because of firewalls, NAT, etc.) and is not reachable through a meta-communication channel, then it is not reachable for any data connection. Therefore, for a communication framework based on dynamically assembled component stacks -the most flexible model-, the set of nodes reachable for data connection is a subset of the nodes reachable through the meta-communication channel.
Security. Since the protocol stack is decided by clients, any intruder able to send meta-communication messages to a node may send forged assembly requests. Therefore, such an intruder may request unauthenticated and/or unencrypted connections to a server. A world-accessible meta-communication channel is undoubtedly a back-door through which an intruder may change the security policy used for its own connection attempts. As a consequence, the security level of the whole communication framework cannot be higher than the security level of the meta-communication channel.
Performance. The meta-communication channel is used for assembly requests and brokered communication methods when a data connection is attempted. It means that the meta-communication channel is on the critical path for data connection establishment. In other words, the data connection establishment performance is impacted by the meta-communication channel performance. Depending on the application, the data connection establishment delay may or may not be critical for overall performance. On the other hand, the meta-communication channel connection establishment only affects the process initialization time.
An Approach for a Flexible Meta-communication Channel
In this section, we describe our approach for a meta-communication channel suitable for a communication framework for grids.
As seen in the previous section, the meta-communication channel itself has roughly the same requirements as data communications: connectivity, security, and performance. We propose thus to use a similar solution to a similar problem; indeed, following the study of section 2.1, we propose the idea that the meta-communication channel might be implemented with dynamically assembled protocol stacks of software components. The remainder of this section explains such an approach, where the meta-communication channel itself reaches a good flexibility and fulfills its requirements through a component-based architecture.
Overall architecture: two-step bootstrap
The main difficulty raised by the idea of a meta-communication channel that itself follows a component-based architecture is that it needs its own meta-communication channel, or rather a meta²-communication channel. However, the requirements for such a meta²-communication channel are not as high as for the meta-communication channel since it is used only at bootstrap time to build only the (primary) meta-communication channel. From now on, we will call this meta²-communication channel the bootstrap channel.
Undoubtedly, the bootstrap channel has the same connectivity and security requirements as the meta-communication and data channels. However, the constraints on performance may be relaxed. The performance of the bootstrap channel only impacts the performance of the meta-communication connection establishment that takes place at process start-up. We choose to neglect this one-time initialization delay. As a consequence, the requirements and constraints for the bootstrap channel are:
• full connectivity (every node to every node);
• secure communications;
• uses no meta-communication channel (no meta 3 -communication channel);
• performance requirements are low.
With these hypotheses, we conclude that for the bootstrap channel, a static component stack is mandatory since no meta-communication is possible for a dynamically assembled stack. This is no problem since a "one size fits all" approach is possible at the bootstrap channel level: we can guarantee security with an authenticated/encrypted communication method; we can bring the full connectivity through routing done by the communication framework on top of the encrypted transport. The performance of such a systematically routed and encrypted communication system is likely to be suboptimal, but it fulfills our requirements for a bootstrap channel.
Following this scheme, the sequence of initialization and data communication establishment is as follows:
1. start processes;
2. each process opens its bootstrap channel;
Each node has an initial basic connectivity to other nodes.
3. processes open meta-communication channels towards other nodes, using the bootstrap channel for meta²-communications;
Each node has a meta-communication channel to other nodes.
4. upon a data connection establishment attempt, an assembly request is sent to the other node through the meta-communication channel.
The internals of the bootstrap channel, the meta-communication channel, and various optimizations are detailed in the following sections.
Bootstrap channel
The goal of the bootstrap channel is to reach a basic initial full connectivity. This implies resource discovery, and basic messaging towards every known node. For scalability reasons, we use a two-level hierarchical approach based on clusters of nodes.
Bootstrap channel architecture
The overall architecture of the bootstrap channel is depicted in Figure 2.
We define a node as a process involved in the considered application; there may be several nodes per host. We define a cluster as a set of nodes which are implicitly connected through an underlying native communication subsystem. A typical cluster is for example a set of nodes connected with a vendor-MPI on a parallel machine, or nodes connected through the Madeleine [START_REF] Olivier Aumage | A portable and efficient communication library for high-performance cluster computing[END_REF] communication library. Usually, the native intra-cluster communication subsystem is high-performance, non-TCP, and insecure but isolated from the outside.
In each cluster, we distinguish a particular node that we call the leader. It should be able to connect to the internet with plain TCP, and be able to communicate with every node of the cluster with the native communication subsystem of the cluster. A typical example of cluster leader choice is the frontend of the cluster.
A particular node is dedicated to the directory management. We call this node the rendez-vous node. The rendez-vous node should be visible from the Internet, or at least from all the cluster leaders in case of a private grid. Typically, the rendez-vous node will be located on a gateway, outside of any firewall, and with a public IP address. The rendez-vous node manages a directory of nodes comprised in the current session. More precisely, it manages a table of node entries; each entry is composed of a node ID (actually a UUID [START_REF] Leach | A UUID URN Namespace[END_REF]), and the ID of the leader or a reference to the connection if the node is a leader. The rendez-vous node listens for incoming connections from the Internet on a fixed port number, using a secure (e.g. SSL/TLS [START_REF] Dierks | The TLS Protocol Version 1.0. Request for comments 2246[END_REF]) communication method.
Discovery phase
The initial reference of the rendez-vous node is supplied to every node. When a process starts, it initializes its bootstrap connections. A standard node (non-leader) sends its ID to its leader. A leader node connects to the rendez-vous node with the secure communication method, using the supplied bootstrap initial reference; it sends its ID and the list of IDs of the nodes in its cluster. The rendez-vous node registers the IDs and the route to reach every known node. Then, it broadcasts the ID of new nodes to every already known leader, so that every node knows the list of currently running nodes. In case of a broken connection between a leader and the rendez-vous node, the rendez-vous node unregisters the given leader and all the nodes of its cluster, and broadcasts the information to the other leaders.
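The discovery phase can be pictured with the following small Python sketch of the rendez-vous node's directory. The class and method names are invented for illustration; in particular, the real directory also stores connection references and uses a secure transport.

```python
# Illustrative model of the rendez-vous node's directory (invented names).

class RendezVousNode:
    def __init__(self):
        self.directory = {}      # node ID -> ID of its leader (None for a leader)
        self.leaders = set()     # leaders currently connected

    def register_cluster(self, leader_id, member_ids):
        # A leader connects (over a secure channel in the real system) and
        # registers itself and the nodes of its cluster.
        self.leaders.add(leader_id)
        self.directory[leader_id] = None
        for node_id in member_ids:
            self.directory[node_id] = leader_id
        self._broadcast(f"joined: {[leader_id, *member_ids]}")

    def unregister_cluster(self, leader_id):
        # Called when the connection to a leader breaks.
        gone = [n for n, l in self.directory.items() if n == leader_id or l == leader_id]
        for node_id in gone:
            del self.directory[node_id]
        self.leaders.discard(leader_id)
        self._broadcast(f"left: {gone}")

    def _broadcast(self, info):
        for leader in self.leaders:
            print(f"notify {leader}: {info}")

rv = RendezVousNode()
rv.register_cluster("A1", ["A2", "A3"])
rv.register_cluster("B1", ["B2"])
```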
The communication method used between the rendez-vous node and the leaders may be configured. For example, as an optimization one may want not to use authentication at all on a private grid. However, all leaders and the rendez-vous node must use the same configuration for a given session. The initial reference of the rendez-vous node is given similarly as a configuration parameter. It is not expected to change very often.
Messaging on bootstrap channel
Once the bootstrap channel is connected (i.e. the nodes are connected to their leader, and the leaders connected to the rendez-vous node), the messages on the bootstrap channel are routed, as depicted in the example shown in Figure 2. Since the topology of the bootstrap channel is a tree rooted in the rendez-vous node, the routing algorithm is straightforward. To send a message, a node sends it to its cluster leader. If the final recipient is in the same cluster, then the leader sends the message directly, else it forwards it to the rendez-vous node. Following its routing table, the rendez-vous node sends the message to the appropriate cluster leader, which finally forwards the message to its final recipient (a small sketch of this routing rule is given after the property list below). The properties of such a bootstrap channel are:
• full connectivity, from every node to every other node;
• as secure as the chosen underlying transport layer;
• low performance, due to routing and the bottleneck in the rendez-vous node. However, every route is no longer than 4 hops;
• static protocol stack, does not require a meta-communication channel.
These properties fulfill the requirements for a bootstrap communication channel.
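For illustration, the routing rule can be written in a few lines of Python. The node names and the leader table are invented, and the rendez-vous node is represented by a plain string; this is only a simplified model of the behaviour described above.

```python
# Simplified model of bootstrap-channel routing:
# node -> its leader -> rendez-vous node -> destination leader -> destination node.

def route(src, dst, leader_of):
    """Return the list of hops a bootstrap message follows from src to dst."""
    hops = [src]
    if leader_of[src] != src:            # a plain node first sends to its leader
        hops.append(leader_of[src])
    if leader_of[dst] != hops[-1]:       # different cluster: relay through the rendez-vous node
        hops.append("rendez-vous")
        if leader_of[dst] != dst:
            hops.append(leader_of[dst])
    if hops[-1] != dst:
        hops.append(dst)
    return hops

leader_of = {"A1": "A1", "A2": "A1", "B1": "B1", "B2": "B1"}   # A1 and B1 are leaders
print(route("A2", "B2", leader_of))   # ['A2', 'A1', 'rendez-vous', 'B1', 'B2']: 4 hops
print(route("A2", "A1", leader_of))   # ['A2', 'A1']: intra-cluster, no relay
```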
Meta-communication channel
The goal of the meta-communication channel is to provide the framework and the components with fast and secure connections from every node to every other node. The meta-communication channel is based on dynamically assembled protocol stacks. It has at its disposal the bootstrap channel.
The meta-communication channel may use the straightforward approach introduced in section 3.1: just after bootstrap, open all-to-all connections for the meta-communication channel. However, optimizations are highly welcome to overcome two main drawbacks: opening n² connections at the same time (n being the number of nodes) is likely to be a superfluous overload on the bootstrap channel (the rendez-vous node being a bottleneck), and describing all the protocol stacks for every node to every other node is a tedious job.
Lazy connections. To solve the problem of the bootstrap channel flood, reduce startup time, and save on resources wasted by unneeded connections, the meta-communication channel uses lazy connection establishment. All nodes of the session are known as a result of the resource discovery phase, but it is not necessary to immediately open meta-communication connections to every known node. Therefore, it is lighter to open meta-communication connections on demand, upon the first message sent to a given node on the meta-communication channel.
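The idea can be summed up in a few lines of Python; the class is illustrative only and the "link" is a simple in-memory object rather than a real protocol stack.

```python
# Illustrative sketch of lazy (on-demand) meta-communication connections.

class LazyMetaChannel:
    def __init__(self, open_link):
        self._open_link = open_link   # builds the protocol stack towards a peer
        self._links = {}              # peer -> established link

    def send(self, peer, message):
        if peer not in self._links:                     # first message to this peer
            self._links[peer] = self._open_link(peer)   # open the link on demand
        self._links[peer].append(message)               # later messages reuse the link

def open_link(peer):
    print(f"opening meta-communication link to {peer}")
    return []                                           # toy "link": a message list

chan = LazyMetaChannel(open_link)
chan.send("nodeB", "assembly request")   # triggers connection establishment
chan.send("nodeB", "another request")    # reuses the existing link
```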
Default configuration schemes. The assembly patterns used for protocol stacks are configured by the user as a set of rules defining which assembly pattern to use to reach which node. This is very powerful and may be used to describe the protocol stacks for any topology supported by the communication framework. However, the targeted topologies are not random, thus one may want to optimize the configuration process for commonly encountered network topologies. It also saves the user's time by reducing the configuration complexity.
Basically, a configuration can be described as a default configuration strategy and a list of exceptions. The default configuration is a sensible default scheme, for example: open direct TCP connections from every node to every other node (typically for small single-site, firewall-less, multi-cluster configurations); use the native intra-cluster communication method within clusters, establish direct connections between leaders, and route messages (max.: 3 hops), which can save a long-distance round-trip if the rendez-vous node is far from both leaders; or use the bootstrap channel as the meta-communication channel, a last resort option that works everywhere. These default configurations are a basis upon which more advanced configurations are built by adding rules describing only the exceptions.
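A possible encoding of such a configuration, reduced to its simplest expression, is sketched below in Python. The rule predicates, node naming scheme, and stack names are all invented; the real framework uses its own rule syntax.

```python
# Illustrative "default scheme + exceptions" configuration (invented rule syntax).

def cluster(node): return node.split(".")[1]
def site(node):    return node.split(".")[2]

DEFAULT_STACK = ["tcp"]                                              # direct TCP everywhere
EXCEPTIONS = [
    (lambda a, b: cluster(a) == cluster(b), ["madeleine"]),          # intra-cluster network
    (lambda a, b: site(a) != site(b),       ["zip", "tls", "tcp"]),  # insecure/slow WAN
]

def stack_for(a, b):
    for matches, stack in EXCEPTIONS:
        if matches(a, b):
            return stack
    return DEFAULT_STACK

print(stack_for("n0.c1.rennes", "n1.c1.rennes"))   # ['madeleine']
print(stack_for("n0.c1.rennes", "n0.c7.nice"))     # ['zip', 'tls', 'tcp']
print(stack_for("n0.c1.rennes", "n0.c2.rennes"))   # ['tcp'] (default)
```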
Implementation and Evaluation
In this section, we describe our implementation of our meta-communication channel model in the PadicoTM [START_REF] Denis | PadicoTM: An open integration framework for communication middleware and runtimes[END_REF] communication framework for grids.
The PadicoTM communication framework
PadicoTM [START_REF] Denis | PadicoTM: An open integration framework for communication middleware and runtimes[END_REF] is a component-based communication framework for grids. PadicoTM is designed to be as flexible as possible. It supports a wide range of networks, from high-performance networks to wide area networks. Moreover, several middleware systems (MPI, various CORBA implementations, Java RMI, SOAP implementations, HLA, ICE, DSM systems, JXTA) have been ported on top of PadicoTM thanks to its flexible personality layer that enables a seamless integration of existing code.
PadicoTM is based on a three-layer approach [START_REF] Denis | Network communications in grid computing: At a crossroads between parallel and distributed worlds[END_REF]: the lowest layer does multiplexing and arbitration between concurrent accesses to a given network, and between accesses to different networks (e.g. TCP/Ethernet and Myrinet) on the same machine; the middle layer is the abstraction layer, based on dynamically assembled components; the higher layer, or personality layer, adapts the API to the expectations of applications. The meta-communication channel is needed only for the abstraction layer, where the dynamic component assembly takes place.
Communication methods implemented
Various communication methods have been implemented in PadicoTM. Each communication method is provided in its own component and may be freely used in any assembly for supplying communication to any middleware system (MPI, CORBA, etc.). The supported communication methods are:
Plain TCP -This is the usual vanilla TCP connection, with access to some configuration parameters such as window size.
Madeleine -We use the Madeleine [START_REF] Olivier Aumage | A portable and efficient communication library for high-performance cluster computing[END_REF] communication library for access to high-performance networks in clusters. Supported networks are: Myrinet (through MX, GM or BIP), SCI, Quadrics QsNet, VIA.
Shmem -A shared memory communication component offers low-latency high-bandwidth inter-process communication on SMP hosts.
TCP derivatives for WAN -A large set of communication methods derived from TCP are implemented to overcome connectivity and performance problems specific to WAN. These methods are: TCP splicing (aka simultaneous connect) for crossing firewalls with no performance drop; one-way connection to always establish connections in the same direction, to cross firewalls when only one side is firewalled; SOCKS [START_REF] Leech | SOCKS Protocol Version 5. Request for comments 1928[END_REF] proxy; connection through SSH tunnels; parallel streams to improve TCP performance, as implemented in GridFTP [START_REF] Allcock | GridFTP Protocol Specification[END_REF].
Data filters -Some data filters are proposed. These filters may be composed atop any other communication method. The implemented filters in PadicoTM are: compression (LZO, BZIP2, and AdOC [START_REF] Jeannot | Adaptive Online Data Compression[END_REF], an adaptive ZIP), and GnuTLS for authentication/encryption. A toy illustration of composing such a filter on top of another transport is given after this list.
Last resort -A last resort communication method is proposed. It performs tunneling through the meta-communication channel. The performance is likely to be low, but this solution works in any case where a meta-communication channel is established.
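The following self-contained Python sketch illustrates the general idea of stacking a compression filter on top of another transport component. The send/recv interface and the in-memory "transport" are invented for the example and do not correspond to the framework's actual component interfaces.

```python
# Toy composition of a compression filter atop another transport component.
import zlib

class LoopbackTransport:
    """Stand-in for a transport component; it simply stores the bytes it sends."""
    def __init__(self):
        self.wire = b""
    def send(self, data):
        self.wire += data
    def recv(self):
        data, self.wire = self.wire, b""
        return data

class ZipFilter:
    """Compression component stacked on top of any component with send/recv."""
    def __init__(self, below):
        self.below = below
    def send(self, data):
        self.below.send(zlib.compress(data))
    def recv(self):
        return zlib.decompress(self.below.recv())

stack = ZipFilter(LoopbackTransport())     # assembly: zip over the transport
stack.send(b"payload" * 1000)
print(len(stack.recv()), "bytes recovered after the round trip")   # 7000
```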
for the bootstrap channel. This is made possible by the fact that our bootstrap channel uses a configurable component assembly (even though it is static for the bootstrap channel). We should notice that getting even basic connectivity on such a topology is not possible for most communication frameworks, even component-based ones such as NetIbis [START_REF] Olivier Aumage | Netibis: An efficient and dynamic communication system for heterogeneous grids[END_REF].
Security analysis. Both the bootstrap and the meta-communication channels are built as component stacks for which the default is either TLS or a private intra-cluster network. Our approach introduces no world-accessible unsecured TCP server, unless explicitly asked by a user willing to trade security against performance in a controlled environment.
Performance analysis.
We have measured the quantitative impact of our approach for a component-based meta-communication channel. The performance of the meta-communication channel impacts the data connection establishment. Table 1 shows typical connection establishment performance.
The first column shows transmission latency, which is the latency of a meta-communication channel using the given network. The second column shows connection establishment delay with a three-way handshake (native for TCP/Ethernet and TCP/WAN, at application level for Myrinet) and a static protocol stack (no meta-communication channel). The third column shows connection establishment delay with dynamic protocol stacks and meta-communication routed through a relay located in a site 100 ms apart from the hosts (not uncommon on large-scale grids). Finally, the fourth column shows connection establishment delay with dynamic protocol stacks and a component-based meta-communication channel (our proposed architecture).
The connection establishment delay of dynamic component-based software with a basic meta-communication channel is bounded by the performance of the meta-communication channel. If the performance of the meta-communication channel is poor, e.g. caused by relaying through a WAN, then connection establishment is slow even if the remote machine is theoretically reachable through Myrinet. In contrast, our proposed architecture (rightmost column) gets performance results close to direct connection (delay ∼ +60%). This is made possible by the use of an appropriate communication method by the meta-communication channel itself. We can see that our approach greatly reduces the connection establishment delay and makes the overhead of using a dynamically assembled protocol stack acceptable.
Regarding scalability, it should be noted that the rendez-vous node may look like a bottleneck. However, only cluster leaders are connected to the rendez-vous node and very little communication goes through the rendez-vous node (actually, only bootstrap communications).
In conclusion, our proposed architecture for a meta-communication channel enables connectivity in cases where most communication frameworks cannot even get basic connectivity, and gets better performance than other meta-communication-channel based approaches where it can compare, without compromising security.
Related Work
Many researchers are working on communication management for grids. Most of the works rely on the difference between intra-cluster high-performance communication and inter-cluster TCP communication, but only a few actually use a component-based architecture for a flexibility pushed further than the binary intra-/inter-cluster approach.
Globus XIO [START_REF] Bresnahan | The eXtensible Input Output library for the Globus Toolkit[END_REF] is becoming a de facto standard for communication on grids. Its main concept is the driver stack which is an assembly of building blocks very similar to software components. However, its static driver stack approach, with no meta-communication channel, defeats most of the purpose of software components in communication frameworks. In particular, a server must know in advance the driver stack that clients will use, which limits the flexibility of the communication framework.
A widely used grid programming model is MPI. The most popular implementation for grids is MPICH-G2 [START_REF] Karonis | MPICH-G2: A grid-enabled implementation of the message passing interface[END_REF], an MPI implementation over Globus. However, WAN communication methods in MPICH-G2 are rudimentary; it does not cross firewalls nor NAT. The only communication methods that MPICH-G2 is able to utilize are vendor-MPI for intra-cluster communication and plain TCP for inter-cluster communication. PACX-MPI [START_REF] Gabriel | Implementing MPI with Optimized Algorithms for Metacomputing[END_REF] is an implementation of MPI that has been designed from scratch for grids. For each site, PACX-MPI uses a dedicated gateway node for relaying messages across the WAN. This static configuration solves some of the connectivity problems. However, it does not solve all problems caused by firewalls and introduces a performance penalty because of relaying. GridMPI [START_REF] Gridmpi | [END_REF] is another implementation of MPI designed from scratch for grids. It solves some connectivity problems but supports only vendor-MPI communications, plain TCP, and routing on top of these. Open-MPI [START_REF] Squyres | The component architecture of open MPI: Enabling third-party collective algorithms[END_REF] is becoming a major MPI implementation and is built with software components. However, software components are used as an engineering tool to ease development by independent people and not as a tool to reach flexibility and dynamicity. Components in Open-MPI are statically assembled by the end-user. None of these MPI implementations is as flexible as PadicoTM with dynamic protocol stacks and brokered communication methods (splicing, reverse connections, etc.).
NetIbis [START_REF] Olivier Aumage | Netibis: An efficient and dynamic communication system for heterogeneous grids[END_REF] is another component-based communication framework for grids. It features dynamically assembled protocol stacks and brokered communication methods. Actually, these advances in NetIbis are our own work [START_REF] Denis | Wide-area communication for grids: An integrated solution to connectivity, performance and security problems[END_REF]. Our present work transposes these concepts to PadicoTM and goes further with a two-step bootstrap for better performance of the meta-communication channel and a hierarchical bootstrap channel.
Finally, Project JXTA [START_REF] Gong | Project JXTA: A technology overview[END_REF] is an alternative for solving connectivity problems in WANs with application-level relaying, building an overlay network. This is very similar to our bootstrap channel. However, JXTA is targeted towards peer-to-peer and very volatile nodes rather than grid computing. It will presumably not be suitable for high-performance communication [START_REF] Halepovic | JXTA performance study[END_REF].
Conclusion and Future Work
Applications are faced with connectivity and security problems in current grids. Moreover, the requirements concerning communications and the acceptable tradeoffs highly depend on the applications. A solution to reach flexibility regarding communication on grids is the use of a component-based communication framework. The users are then completely free to configure and assemble the building blocks in the way they want. However, we have seen that a truly flexible and dynamic component-based communication framework needs a meta-communication channel for the out-of-band communications required by consistency and dynamic adaptability. The meta-communication channel is useful for some "brokered" communication methods, in particular those designed to cross firewalls. The meta-communication channel has often been the "weakest link" of component-based communication frameworks: a bottleneck for performance, a back-door from the security point of view, and limiting connectivity to nodes reachable by plain TCP.
We proposed in this article an architecture for a meta-communication channel that suffers from none of the aforementioned limitations. It exhibits good properties regarding connectivity, security and performance. Thus, the gain in flexibility brought by software components may be fully exploited without trading anything for it. The proposed architecture has been successfully implemented in the PadicoTM communication framework, which is available [START_REF]The PadicoTM web site[END_REF] as open source software.
The following steps in our work are in multiple directions. The first direction is quite short term and consists in adding support for more communication methods, in particular for the ubiquitous Globus Security Infrastructure (GSI) [START_REF] Foster | A security architecture for computational grids[END_REF]. The second direction consists in investigating precisely the scalability of our approach for thousands of nodes, and our envisaged solution with a federation of rendez-vous nodes. Finally, fault tolerance, which was not taken very much into account in our present study, will be investigated for very large scale experiments.
Figure 1: Dynamically assembled communication stack from node A through node B, using ZIP compression over TLS over TCP/IP; node B does not know in advance the component assembly.
Figure 2: The bootstrap channel uses a relayed protocol through a rendez-vous node. The route from node A2 to B2 goes through A1, the rendez-vous node (R), and B1.
Acknowledgements
This work has been funded by the project "LEGO" [3] (contract number ANR-CICG05-11) from the French National Agency for Research (ANR).
Experiments were performed on the Grid'5000 [1] platform funded by the ACI GRID from the French Ministry of Research.
Meta-communication channels in PadicoTM
In PadicoTM, the concept of cluster is guided by Madeleine. The rendez-vous node is a dedicated process that can be started on any accessible host. Three schemes are available for the bootstrap channel:
• rendez-vous node on an internet-visible host, connections from leaders to rendez-vous node through TLS over TCP. This closely follows the model described in section 3.2.
• rendez-vous node on the machine of the user who launches processes, connections from leaders to rendez-vous node through SSH tunnels. The advantage is that it does not require an Internet-visible host and works even if some leaders have no access to the public Internet.
• rendez-vous node on some random machine, connections through plain TCP. This avoids unnecessary TLS certificates mangling when deploying on a private network.
Bootstrap connections from cluster nodes to cluster leaders are done through Madeleine. The implementation of the meta-communication channel is quite straightforward following the model described in section 3.3.
Evaluation
We have evaluated our component-based approach of meta-communication channel on various grid configurations.
Connectivity analysis.
We deployed PadicoTM on multiple sites of Grid'5000 [START_REF]The grid'5000 project[END_REF] and some sites outside Grid'5000. Grid'5000 as a whole is a private network without routing towards the outside Internet (private IP address without NAT) except one gateway per site allowed to connect to the outside. Most sites outside Grid'5000 are themselves protected by stateful firewalls.
In all cases, we were able to establish a bootstrap channel from every node to every other node and thus reach a full connectivity for the meta-communication channel and data links. When there are nodes inside a private network without NAT and nodes outside, there is no choice but to use proxies or SSH tunnels |
01840880 | en | [
"spi"
] | 2024/03/04 16:41:24 | 2017 | https://hal.science/hal-01840880/file/Towards%20the%20definition%20of%20an%20indoor%20air%20quality%20index%20for%20residential%20buildings.pdf | Louis Cony
Olivier Ramalho
Abadie
Towards the definition of an indoor air quality index for residential buildings based on long-and short-term exposure limit values
Keywords: IAQ, Indicator, Indices, Guideline value, Health assessment, Good IAQ, Bad IAQ
In the framework of the IEA EBC Annex 68 Subtask 1 work, we aimed at defining an indoor air quality index for residential buildings based on long- and short-term exposure limit values. This paper compares 8 indoor air quality indices (IEI, LHVP, CLIM2000, BILGA, GAPI, IEI Taiwan, QUAD-BBC and DALY) by using the French IAQ Observatory database that includes pollutant concentration measurements performed in 567 dwellings between 2003 and 2005. This comparison allows a relevant analysis of each index and determines their pros and cons, i.e. the calculation method, selected pollutants, threshold concentrations, sub-indices and their aggregation. From this analysis, a new index is proposed in order to be as consistent as possible with regard to health impacts by taking both long- and short-term exposure limit values into account.
INTRODUCTION
Due to the huge number of various sources of emissions, pollutants, health impacts and toxicity levels, assessment of Indoor Air Quality (IAQ) is a complex task [START_REF] Hulin | Indoor air pollution and childhood asthma: variations between urban and rural areas[END_REF], Wolkoff, 2013;[START_REF] Haverinen-Shaughnessyu | An assessment of indoor environmental quality in schools and its association with health and performance[END_REF]. One of the required tools to achieve that goal is a single Indoor Air Quality Index that would describe air quality with regard to health impacts. During the last decades, such indices were defined, yet none was accepted as sufficiently relevant by the international scientific community. In this paper, we analyze 8 of them to evaluate their main pros and cons, expanding a first comparison study performed by [START_REF] Wei | Applicability and relevance of six indoor air quality indexes[END_REF]. The selected Indoor Air Quality Indices are IAPI [START_REF] Sofuoglu | The link between symptoms of office building occupants and in-office air pollution: the Indoor Air Pollution Index[END_REF], LHVP [START_REF] Castanet | Contribution à l'étude de la ventilation et de la qualité de l'air intérieur des locaux[END_REF], CLIM 2000 [START_REF] Castanet | Contribution à l'étude de la ventilation et de la qualité de l'air intérieur des locaux[END_REF], BILGA [START_REF] Castanet | Contribution à l'étude de la ventilation et de la qualité de l'air intérieur des locaux[END_REF], GAPI [START_REF] Cariou | Utilisation d'un indice global de la qualité de l'air intérieur pour suivre l'ensemble des COV présents et leur impact sur la santé humaine[END_REF], IEI Taiwan [START_REF] Chiang | A study on the comprehensive indicator of indoor environment assessment for occupants' health in Taiwan[END_REF], QUAD-BBC (Quad-BBC, 2012) and DALY (Logue et al., 2011). In a first part, the methodology used to compare the different indices is presented along with the description of the 8 IAQ indices, the pollutants of concern and their associated Exposure Limit Values (ELVs). Comparison and discussion are given in a second part, which results in the definition of a new IAQ index.
METHOD
IAQ Indices
A total of 8 indices are studied in this paper: IEI, LHVP, CLIM 2000, BILGA, GAPI, IEI Taiwan, Quad-BBC, and DALY. Table 1 gathers all calculation equations of the different indices along with the reference studies that defined them. (1
)
where I is the number of level-3 groups, J the number of level-2 groups in each level-3 group, K the number of level-1 pollutant variables in each level-2 group, and max and min are the measured maximum and minimum concentrations of the BASE study (Girman et al., 1995), respectively. Most indices from the literature are based upon the same principle: an average indoor concentration is divided by an ELV, quite similar to the hazard quotients used for health risk assessment. ELVs are usually health-based but may differ from toxicological reference values and vary according to the index. Many indices use sub-indices dedicated to one pollutant that are aggregated to obtain a unique index. Nevertheless, the presence of sub-indices and their aggregation are important questions that induce debate. From Table 1, 4 main types of aggregation emerged:
- Sum-average: Most indices are based on a sum of pollutants' concentrations compared to a reference value. Sometimes the sum is divided by the number of studied pollutants. The aim is to quantify the average level of quality in a room by considering each pollutant as equally important.
- Maximum: The BILGA index is calculated by taking the maximum value of all its sub-indices. The pollutant with the highest toxicity exposure is the only one taken into account to assess IAQ.
- Specific formula: Some indices are not based on a concentration divided by an ELV but on a specific formula that returns a value per pollutant, which can be summed over all pollutants to obtain a global health risk assessment level.
- Score using breakpoint concentrations: the IEI Taiwan is based on the attribution of a score for each pollutant using 4 ranges of concentrations between breakpoints. Scores are summed within each category, except if one score of the category is below 60, in which case the minimum score of the category is chosen. Scores of the respective categories are then summed and weighted by a category coefficient. A small numerical sketch contrasting the sum-average and maximum aggregations is given below.
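The contrast between the first two aggregation families can be illustrated with a short Python example; the sub-index values below are invented and only serve to show how a single exceedance can be hidden by averaging (eclipsing) while a maximum-based index keeps it visible.

```python
# Toy comparison of two aggregation schemes on the same (invented) sub-indices,
# each sub-index being a concentration-to-ELV ratio.

sub_indices = {"formaldehyde": 1.3, "benzene": 0.2, "CO": 0.1, "PM2.5": 0.4}

sum_average = sum(sub_indices.values()) / len(sub_indices)   # "sum-average" family
maximum     = max(sub_indices.values())                      # "maximum" family (BILGA-like)

print(f"sum-average index: {sum_average:.2f}")   # 0.50 -> looks acceptable
print(f"maximum index:     {maximum:.2f}")       # 1.30 -> one pollutant exceeds its ELV
```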
Table 2 summarizes the main pros and cons of the published indices.
Target pollutants
The pollutants used in the calculation of each index are listed in Table 3. According to the IAGV definition, there is no known health impact for the selected period below the threshold concentration.
Note that, among the 4 main ELV used, TRV are not accurate and may differ from one study to another; LRV and IRV are too old and too lax. IAGV seems to be the most relevant one, because it is current, accurate, and based on known health impacts.
Comparison procedure
In order to proceed with the comparison of IAQ indices, a common set of inputs is necessary. Most studied IAQ indices require both pollutant concentration levels and Exposure Limit Values (ELVs). Regarding the first kind of inputs, the French dwellings survey conducted from 2003 to 2005 by OQAI (French Indoor Air Quality Observatory) on 567 housings [START_REF] Kirchner | Etat de la qualité de l'air dans les logements français[END_REF] was chosen as a reference for IAQ assessment in residential buildings.
In this survey, only long-term effects were taken into account. NO2, SO2, O3, mold and bacteria were not measured. Whenever a pollutant concentration is not available but is needed in the calculation, the index formula was readapted so that it does not bias the results. As much as possible, all indices were calculated the same way as first described in the literature. Most indices need ELVs for the considered pollutants. If the ELV was not clearly defined, the French Indoor Air Guideline Value (IAGV) was chosen (ANSES, 2011). If not available, international reference values were used instead, e.g. WHO (WHO, 2010), OEHHA (2016)...
The following procedure is applied to compare the indices. A first graph presents the distribution of the studied index according to its original scale, considering the 567 dwellings (Figure 1, left). A second one is produced to transform the results onto a common 3-level scale with the following interpretation: good, intermediate or bad IAQ (Figure 1, right). All indices have, in their original definition, an interpretation to determine whether IAQ is good or bad in a dwelling, except DALY and GAPI. The first approach quantifies DALYs lost per year per 100,000 persons due to exposure to indoor air pollutants, but there is no indication on how many DALYs should be lost per year to consider IAQ as good or bad. The GAPI index returns a value that relies on the weight of the selected criteria without any scientific signification.
A last graph (Figure 2) intends to detect eclipsing, as defined by [START_REF] Sharma | National Air Quality Index[END_REF], and inconsistencies. Most IAQ indices employ aggregation (maximum, sum, weighted averages, root-mean-square formulation…) of sub-indices at some point, so that information for a particular pollutant can be hidden by the weight of the other pollutants. In this analysis, the studied indices are plotted against the maximum, over pollutants, of the concentration-to-ELV ratio (MAX) in order to identify whether the indices are hiding critical cases or not. In Figure 2, some points are encircled in red; they are associated with dwellings that have a bad IAQ according to the IAPI value whereas they have an excellent IAQ according to the MAX index (<0.5). On the contrary, points encircled in blue correspond to dwellings with an intermediate IAQ whereas MAX (>1) characterizes a very bad IAQ (at least one pollutant is above its short-term IAGV). In this example the dispersion is so high that IAPI is not able to distinguish the level of IAQ, as shown by the green region of equal MAX with IAPI ranging from 4 to 10 and the orange region where IAPI predicts the same level of IAQ with MAX ranging from 0.2 to 3.5. The short-term IAGV was used as the ELV to detect bad IAQ with certainty.
RESULTS AND DISCUSSION
IAQ indices comparison
All results obtained by calculating the 8 IAQ indices are compiled in Figures 3 to 5. Since PM2.5 accounts for about 90% of the DALYs lost and was not measured in every dwelling, we decided to consider only the dwellings where PM2.5 was measured (noted as "DALY (with PM2.5)" hereafter). Figure 3 shows that some indices do not distinguish well the differences among the buildings (LHVP, GAPI, IEI Taiwan) whereas the others do. Figure 4 strengthens this observation. In particular, only two indices clearly classify the building population according to the interpretation scale: IAPI and DALY. However, they interpret the IAQ in opposite ways, with 70% and 20% of bad IAQ for IAPI and DALY, respectively. Figure 5 highlights the lack of correlation between the indices and MAX, except for DALY and, to a lower extent, BILGA and IAPI.
Proposal of a new index
Since there is no current consensus about the definition of good or bad IAQ, we propose a few statements that, in our opinion, an IAQ index should reflect:
- IAQ is good if there is no known health impact in a long-term perspective. The long-term (usually 1-year period) IAGV can be seen as the minimum threshold to be considered.
- IAQ is bad when the long-term (annual) average concentration is above the short-term exposure maximum threshold. The short-term IAGV represents the maximal threshold for a long-term (annual) average concentration.
- Since the comparison is made with a critical threshold, if only one pollutant reaches this threshold, it is sufficient to affirm that IAQ is bad with certainty, no matter how low the concentrations of the other pollutants are. The most unfavourable situation is relevant to define an IAQ index.
- There is no point in letting IAQ index values range over ]-∞;+∞[. If the long-term (annual) average concentration of one pollutant is above the critical threshold, no matter how high the concentration, IAQ remains bad; a maximum value for the index can then be defined. In the same way, a concentration below the minimum ELV threshold refers to good IAQ, so that an index minimum value can be proposed.
Based on the previous points, the proposed formula for the new index, called ULR-IAQ, is the following:
$I_{\text{ULR-IAQ}} = \max_{i}\left( 10 \, \frac{C_{\text{ind},i} - \text{IAGV}_{\text{LT},i}}{\text{IAGV}_{\text{ST},i} - \text{IAGV}_{\text{LT},i}} \right)$
IAGV_LT,i is the indoor air guideline value for long-term exposure (usually 1 year) to a pollutant i, IAGV_ST,i is the indoor air guideline value for short-term exposure (shortest available) and C_ind,i is the indoor concentration of pollutant i. If C_ind,i > IAGV_ST,i then C_ind,i = IAGV_ST,i, and if C_ind,i < IAGV_LT,i then C_ind,i = IAGV_LT,i.
This index varies from 0 to 10. A value of 0 means that IAQ is good; there is no known health impact due to the target indoor air pollutants. An index equal to 10 means a very bad IAQ; it is dangerous for human health even on short-term exposure and something must be done to improve IAQ. Between those two boundaries, a linear trend is used for the sake of simplicity, as we cannot currently define intermediate situations between good and bad IAQ.
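An illustrative implementation of this computation is given below in Python. The guideline values in the example are placeholders chosen for the demonstration and are not the official ANSES IAGVs.

```python
# Sketch of the proposed ULR-IAQ computation (placeholder guideline values).

def ulr_iaq(concentrations, iagv_lt, iagv_st):
    scores = []
    for p, c in concentrations.items():
        c = min(max(c, iagv_lt[p]), iagv_st[p])   # clip to [IAGV_LT, IAGV_ST]
        scores.append(10 * (c - iagv_lt[p]) / (iagv_st[p] - iagv_lt[p]))
    return max(scores)                            # most unfavourable pollutant drives the index

conc    = {"formaldehyde": 25.0, "benzene": 1.5}  # annual averages in ug/m3 (invented)
iagv_lt = {"formaldehyde": 10.0, "benzene": 0.2}  # placeholder long-term guideline values
iagv_st = {"formaldehyde": 50.0, "benzene": 30.0} # placeholder short-term guideline values
print(f"ULR-IAQ = {ulr_iaq(conc, iagv_lt, iagv_st):.1f}")  # 0 = good IAQ, 10 = bad IAQ
```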
The pollutants accounted for in this new index have been selected according to the existence of both long- and short-term exposure IAGVs and to concentration level availability in the OQAI campaign (OQAI, 2007).
Evaluation of the proposed index
As for the previous indices, ULR-IAQ was calculated over the whole French dwellings measurement campaign of 567 housings. Results are presented in Figure 6. The new index shows another picture of the IAQ in French buildings: about 28% bad, 10% good and the remaining 62% with intermediate IAQ. This picture is close to the finding of [START_REF] Wei | Applicability and relevance of six indoor air quality indexes[END_REF], using a more complex combination of index classification: 34% bad, 6% good and 60% intermediate IAQ. The third graph confirms the capability of detecting bad IAQ; this result is obvious as it is part of the definition of ULR-IAQ. One key element to evaluate the IAQ is the list of target pollutants to be considered. A total of 12 pollutants have been used to evaluate the new index. However, not all pollutants have the same importance for the ULR-IAQ final value. Figure 7 reports how frequently each pollutant has the first, second and third highest sub-index, respectively noted "pollutant 1, 2 and 3". The result clearly points to formaldehyde, acrolein, benzene, PM10, PM2.5 and carbon monoxide as unavoidable when evaluating IAQ in dwellings. However, from the literature, at least three more pollutants of interest should be added to the list, i.e. radon, nitrogen dioxide and mould. Their harmful effects are known but there was no available data in the French survey to take them into account.
CONCLUSION
This work is based on the comparison of IAQ indices. Eight indices found in the literature were calculated and compared using the data of the French dwelling measurement campaign (567 housings) as inputs. By analysing the outputs and the indices' original definitions, the advantages and drawbacks have been listed and the definition of a new index called ULR-IAQ has been proposed. The new index seems to give a better representation of the IAQ of the studied dwellings. In particular, the index allows the detection of bad IAQ caused by one (or more) pollutants, an ability not included in the existing indices. The new index permits to limit the list of pollutants of interest to a minimum, a list in agreement with previous prioritization studies [START_REF] Index | The INDEX project: Critical Appraisal of the Setting and Implementation of Indoor exposure Limits in the EU[END_REF][START_REF] Kirchner | Etat de la qualité de l'air dans les logements français[END_REF].
ACKNOWLEDGEMENTS
The authors want to thank the Nouvelle Aquitaine Region (CPER 2015-2020 "Bâtiment Durable") and Health Agency (former ARS-Poitou-Charentes) for funding this research project.
Figure 1: Representation of indices (left: original scale; right: common scale) - Example for IAPI.
Figure 2: Detection of hidden information - Example for IAPI.
Figure 3: Representation of indices according to their original scale.
Figure 4: Representation of indices according to their interpretation scale.
Figure 5: Comparison of indices with the MAX index.
Figure 6: Results of ULR-IAQ: original scale (left), interpretation scale (middle) and comparison with MAX (right).
Figure 7: Frequency of pollutants corresponding to the 3 most unfavourable on the whole dwelling measurement campaign.
Table 1: Previously proposed approaches to define IAQ indices (columns: Index, Equation).
Table 2: Main pros and cons for IAQ indices and types of formula.
- Sum-average (IEI, CLIM2000, LHVP, Quad-BBC). Main pros: IEI values are limited between 0 and 10; Quad-BBC has an adaptive formula depending on the type of room. Main cons: loss of information by ambiguity or eclipsing (Ministry of Environment, Forests and Climate Change, 2014): the importance of a high value can be reduced in a mass of low values even if it exceeds the hazardous threshold level.
- Maximum (BILGA). Main pros: based on the most unfavourable pollutant level; takes into account both limited risks and important risks. Main cons: ELVs used are old and need to be updated.
- Specific formula (GAPI, DALY). Main pros: GAPI has a flexible formula that can be readapted to any pollutants and many studies criteria; DALY is based exclusively on health impacts. Main cons: the GAPI value has no real signification; the DALY approach is very approximate and many pollutants lack available data to be used efficiently.
- Score by breakpoint concentration (IEI Taiwan). Main pros: IEI Taiwan gathers both sum-average and maximum type advantages. Main cons: in practice, it is almost equal to the maximum type; breakpoint concentrations and categories weights are defined subjectively without a related health correlation.
Table 3: Pollutants used in the calculation of each index.
- IAPI: Formaldehyde, Benzene, Acrolein, Carbon monoxide, Carbon dioxide, PM10, PM2.5
- LHVP: Carbon monoxide, Carbon dioxide
- CLIM 2000: Carbon monoxide, Carbon dioxide, Formaldehyde
- BILGA: Carbon monoxide (1h), Carbon monoxide (8h), Carbon dioxide, Formaldehyde, Radon
- GAPI: Formaldehyde, Acetaldehyde, Acrolein, Hexaldehyde, Benzene, 1-Methoxy-2-propanol, Trichloroethylene, Toluene, Tetrachloroethylene, 1-Methoxy-2-propyl acetate, Ethylbenzene, m,p-xylenes, Styrene, o-xylene, 2-Butoxyethanol, 1,2,4-Trimethylbenzene, 1,4-Dichlorobenzene, n-decane, 2-Butoxyethyl acetate, n-undecane
- IEI Taiwan: Carbon monoxide, Formaldehyde, Carbon dioxide, PM2.5, Total volatile organic compounds (TVOC)
- Quad-BBC: Carbon monoxide, Formaldehyde, PM2.5, Radon, Toluene, o-xylene, Acetone, PM10
- DALY: PM2.5, Carbon monoxide, Acrolein, Formaldehyde, Benzene

2.3 Exposure Limit Values (ELV)
Almost every index uses an Exposure Limit Value (ELV) to quantify the exposure level to a pollutant. Among the studied indices, 4 different types of ELV are used:
- Limited Risk Value (LRV): for exposure below the LRV, health impacts are limited, null, or unknown.
- Important Risk Value (IRV): if exposure is above the IRV, health impacts are proven, corresponding to irreversible lesions, chronic diseases, or even death.
- Toxicological Reference Value (TRV): based on animal toxicological studies by applying a conversion factor, or sometimes based on human epidemiologic studies.
- Indoor Air Guideline Values (IAGV): threshold values defined by national or international organizations, e.g. the French Agency of Health and Environment Security (ANSES).
All pollutants are presented as follows: name [long-term ELV; short-term ELV]. All units are in µg.m-3 except for carbon monoxide, which is in mg.m-3.
00411008 | en | [
"info.info-ni"
] | 2024/03/04 16:41:24 | 2006 | https://inria.hal.science/inria-00411008/file/component.pdf | Alexandre Denis
email: [email protected]
Sébastien Lacour
email: [email protected]
Christian Pérez
email: [email protected]
Thierry Priol
email: [email protected]
André Ribes
email: [email protected]
Programming the grid with components: models and runtime issues
Keywords: component model, middleware, runtime, component deployment
AMS subject classifications
1. Introduction. Programming distributed systems has always been seen as a tedious activity for a programmer. A Grid infrastructure, as the latest incarnation of distributed systems, is no exception to this reality. In addition to the associated coding activity, a programmer often has to deal with low-level programming and runtime issues such as communications between different modules of the application or deployment of modules among a set of available resources. To cope with this problem, several approaches have been pursued to make the programming task easier. Some well known approaches such as Remote Procedure Call or Distributed Objects allowed usual programming paradigms (function call or objects) to be applied to transparently invoke a function of a remote program or a method of a remote object. Despite some success stories, these two approaches have not been able to reduce the complexity of distributed programming to an acceptable level. Moreover, the automatic deployment of distributed applications is still an important issue.
A third approach based on the composition of software modules has gained acceptance in the last few years. Instead of following an object-oriented approach, and its associated inheritance mechanism, a component approach enforces composition as the main paradigm to develop distributed applications. It offers the advantage of decreasing the design complexity and improving productivity by facilitating software re-use. Such an approach can be applied to programs running on Grid infrastructures. Moreover, a Grid infrastructure, and its associated services to manage distributed resources, is well suited to deploy software components by placing them on available resources taking into account various constraints. It is worth noting that deployment of distributed software components is the missing feature of most of the available distributed component models. Therefore, we think that associating component programming with the Grid is of mutual benefit: making Grid programming easier and deploying software components in a transparent way. Thus, this will ensure a larger success of these two promising technologies.
The focus of this paper is to apply component programming to scientific computing, especially multi-physics applications. Such applications aim to simulate various physics, each of them being implemented by a dedicated code, to increase the accuracy of simulation. It is becoming clear that a radical shift in software development should occur to handle the increasing complexity of such applications. Moreover, the computing infrastructure should provide the level of performance needed to run such applications within a reasonable time frame. A computational grid is without doubt a computing infrastructure that could deliver this level of performance by combining together high-performance computing resources connected to the Internet.
However, modern software development approaches are often suspected of not providing the level of performance which high-performance computing systems would offer. If we consider the use of a component programming methodology for the design and the implementation of multi-physics applications, and the use of a Grid infrastructure for their execution, several obstacles can be foreseen. The first one is the suitability of existing component models for the encapsulation of scientific simulation codes within software components. It may often be the case that such codes are parallel (mostly SPMD) whereas component models are not designed to encapsulate SPMD parallel codes in an efficient way. The second obstacle is communications between software components. Within a grid infrastructure, there may be several networking technologies from System-Area Networks (SAN) to Wide-Area Networks (WAN). The use of the Internet's lingua franca TCP/IP jointly with a SAN is probably not the best way to exploit all this networking technology. Moreover, several communication middleware or runtime environments will have to work together in a seamless way to ensure communication within a component (parallelism) or between components (distribution). A third obstacle is the deployment of components within a grid infrastructure. Such a deployment should be made transparent to the users taking into account end-user constraints. An end-user should not have to map components onto available grid resources by himself. A grid middleware should manage this operation in an automatic way on behalf of the end-user. We think that these three obstacles represent the major ones and should be addressed by computer scientists.
The objective of this paper is to present some solutions to overcome these three obstacles. Within the framework of the Padico project, we carried out three research activities, each of them addressing an obstacle. Section 2 addresses the first one: the design of a component model for the grid (GridCCM) based on an existing one (Corba Component Model or Ccm). Section 3 presents a communication framework, called PadicoTM, allowing several communication middleware and runtime environments to work together by isolating them and allowing for various networking technologies to be shared. Section 4 explains the process of deploying components within a Grid and the required extension that should be made to existing Grid middleware such as the Globus Toolkit [START_REF] Foster | Globus: A metacomputing infrastructure toolkit[END_REF]. Finally, section 5 draws general conclusions and mentions perspectives of this work.
2. A Grid Component Model. The component model we describe in this section is based on the Corba Component Model instead of designing a new one. We think that such a decision offers more advantages than drawbacks. Using an existing component model allows us to take benefit of all the work that has been done both on the design of the model itself and its realization through several open source implementations. We propose here some extensions to the Corba Component Model that do not require the modification of the Omg specification. Before introducing such extensions, we give a brief overview of Ccm in the following sections.
2.1. Overview of the CORBA Component Model. The Corba Component Model [START_REF] Omg | CORBA Component Model V3.0[END_REF] (Ccm) is part of the latest Corba [START_REF] Omg | The Common Object Request Broker: Architecture and Specification V3.0[END_REF] specifications (version 3). The Ccm specifications allow the deployment of components into a distributed environment, that is to say that an application can deploy interconnected components on different heterogeneous servers in one operation. Figure 1 presents the general picture of Ccm. The component life-cycle is divided into two parts. First, the creation of the component requires defining the component interface, implementing it and then packaging it so as to obtain a component package, i.e. a component. The second part consists of (optionally) linking together several components into a component assembly and deploying it. Ccm provides a model for all these phases. For example, the Ccm abstract model deals with the external view of a component, while the Component Implementation Framework (CIF) provides a model to implement a component. There are also models for packaging and deploying a component, as well as for the local runtime environment of a component. In this section, we briefly introduce the abstract model, the execution model and the deployment model.
2.2. CCM Abstract Model. A Corba component is represented by a set of ports described in the Interface Definition Language (Idl) of Corba 3 defined by the Omg. The Idl of Corba 3 is an extension of the Idl of Corba version 2. There are five kinds of ports, as shown in Figure 2. Facets are named connection points that provide services available as interfaces, while receptacles are named connection points to be connected to a facet. They describe the component's ability to use a reference supplied by some external agent. Event sources are named connection points that emit typed events to one or more interested event consumers, or to an event channel. Event sinks are named connection points into which events of a specified type may be pushed. Attributes are named values exposed through accessor (read) and mutator (write) operations. Attributes are primarily intended to be used for component configuration, although they may be used in a variety of other ways. Figure 3 shows an example of component definition using Idl3.
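To fix ideas, the following self-contained C++ sketch mimics the five kinds of ports with plain classes and callbacks. All type and port names (PriceFeed, QuoterComponent, etc.) are invented for illustration; they are not produced by any actual Ccm code generator.

#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct PriceFeed {                       // interface provided through a facet
    virtual double quote(const std::string& symbol) = 0;
    virtual ~PriceFeed() = default;
};

struct AlertEvent { std::string symbol; double price; };   // a typed event

class QuoterComponent {
public:
    // Facet: named connection point providing a service as an interface.
    PriceFeed* provide_feed() { return &impl_; }
    // Receptacle: named connection point to be connected to a facet of another component.
    void connect_logger(PriceFeed* remote) { logger_ = remote; }
    // Event source: delivers typed events to interested consumers.
    void subscribe_alerts(std::function<void(const AlertEvent&)> consumer) {
        consumers_.push_back(std::move(consumer));
    }
    // Event sink: point into which events of a specified type may be pushed.
    void push_alert(const AlertEvent& e) {
        for (auto& c : consumers_) c(e);
    }
    // Attribute: named value exposed through accessor/mutator, used for configuration.
    double threshold() const { return threshold_; }
    void threshold(double t) { threshold_ = t; }
private:
    struct Impl : PriceFeed {
        double quote(const std::string&) override { return 42.0; }
    } impl_;
    PriceFeed* logger_ = nullptr;
    std::vector<std::function<void(const AlertEvent&)>> consumers_;
    double threshold_ = 0.0;
};

int main() {
    QuoterComponent quoter;
    quoter.threshold(10.0);                       // configure via attribute
    quoter.subscribe_alerts([](const AlertEvent& e) {
        std::cout << e.symbol << " -> " << e.price << "\n";
    });
    quoter.push_alert({"ACME", 12.5});
    std::cout << quoter.provide_feed()->quote("ACME") << "\n";
    return 0;
}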
Facets and receptacles allow a synchronous communication model based on the remote method invocation paradigm. An asynchronous communication model based on data transfer is implemented by the event sources and sinks.
A component is managed by an entity named home. A home provides factory and finder operations to create and/or find a component instance. For example, a home exposes a create operation which locally creates a component instance.
2.3. CCM Execution Model. Ccm uses a programming model based on containers. Containers provide the run-time environment for Corba components. A container is a framework for integrating transactions, security, events, and persistence into a component's behavior at runtime. Containers provide a standard set of services to a component, enabling the same component to be hosted by different container implementations. All component instances are created and managed at runtime by their container.
2.4. CCM Deployment Model. The deployment model of Ccm is fully dynamic: a component can be dynamically connected to and disconnected from another component. For example, Figure 4 illustrates how a component ServerComp can be connected to a component ClientComp through the facet FacetExample: a reference is obtained from the facet and then it is given to a receptacle. Moreover, the model supports the deployment of a static application. In this case, the assembly phase has produced a description of the initial state of the application. Thus, a deployment tool can deploy the components of the application according to the description. It is worthwhile to remark that this is just the initial state of the application: the application can change it by modifying its connections and/or by adding/removing components.
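The following minimal, self-contained C++ sketch illustrates the flavour of such a deployment step: a toy assembly description listing instances and initial connections is replayed by a loop that creates the instances and wires facets to receptacles. The descriptor layout and all names are invented for the example; a real Ccm assembly is an XML-based package handled by a standard deployment tool.

#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Connection { std::string client, receptacle, server, facet; };
struct Assembly {
    std::vector<std::string> instances;       // component instances to create
    std::vector<Connection> connections;      // initial connections to establish
};

struct Component {                            // toy stand-in for a component instance
    std::map<std::string, Component*> receptacles;
    Component* provide(const std::string& /*facet*/) { return this; }
    void connect(const std::string& recep, Component* ref) { receptacles[recep] = ref; }
};

int main() {
    Assembly app{{"ClientComp", "ServerComp"},
                 {{"ClientComp", "avgClientPort", "ServerComp", "avgPort"}}};

    std::map<std::string, Component> live;
    for (const auto& name : app.instances) live[name];            // "create" instances
    for (const auto& c : app.connections) {                        // replay the initial state
        Component* ref = live[c.server].provide(c.facet);          // get a facet reference
        live[c.client].connect(c.receptacle, ref);                 // give it to the receptacle
        std::cout << c.client << "." << c.receptacle << " -> "
                  << c.server << "." << c.facet << "\n";
    }
    return 0;
}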
The deployment model relies on the functionality of some fabrics to create component servers which are hosting environments for component instances. The issues of determining the machines where to create the component servers and how to actually create them are out of the scope of the Ccm specifications.

2.5. Adapting CCM to the Grid. The main problem of Ccm is that it does not provide any support to encapsulate parallel codes. Modifying the parallel code to a master-slave approach so as to restrict Corba communications to one node (the master) does not appear to be the right solution: it may require non-trivial modifications to the parallel code, and the master node may become a communication bottleneck in parallel-to-parallel component communications. This problem is addressed by GridCCM.
2.5.1. Introducing Parallelism into CCM. This section presents GridCCM, an extension of the Corba Component Model. It adds the concept of parallel components to Ccm. Its objective is to allow an efficient encapsulation of parallel codes into GridCCM components. We currently restrict ourselves to embedding Spmd (Single Program Multiple Data) codes1. Another goal of GridCCM is to encapsulate parallel codes with as few modifications to the parallel codes as possible. Similarly, we aim to extend Ccm without introducing deep modifications to the model. That is why we do not allow ourselves to make any change to the Corba Interface Definition Language (Idl). In the same way, a parallel component has to be interoperable with a standard sequential component.
Figure 5 presents a parallel component in the Corba framework. The Spmd code may use Mpi for its inter-process communications; it uses Corba to communicate with other components. In order to avoid bottlenecks, all processes of a parallel component participate in inter-component communications. The nodes of a parallel component are not directly exposed to other components. We introduced proxies to hide the nodes. More details about parallel Corba are given in [START_REF] Pérez | A parallel CORBA component model for numerical code coupling[END_REF].
2.5.2. Managing the Parallelism. To introduce parallelism support, like data redistribution, without requiring any change to the Orb, we choose to introduce a software layer between the user code (client and server) and the stub, as illustrated in Figure 6.
A call to a parallel operation of a parallel component is intercepted by this new layer. The layer sends the data from the client nodes to the server nodes. It can perform a redistribution of the data on the client side, on the server side or during the communication between the client and the server. The decision depends on several constraints like feasibility (mainly memory requirements) and efficiency (client network performance versus server network performance).
The parallel management layer is generated by a compiler specific to GridCCM. In the new Idl interface, the user arguments described as distributed have been replaced by their equivalent distributed data types. Because of this transformation, there are some constraints on the types that can be distributed. The current implementation requires the user type to be an Idl sequence type, that is to say a 1D array. So, a one-dimensional distribution can be applied automatically. This scheme can easily be extended to multidimensional arrays: a 2D array can be mapped to a sequence of sequences.
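As an illustration of the bookkeeping such a layer has to perform, the following self-contained C++ sketch computes, for a block-distributed 1D sequence, which index ranges each client node must send to each server node when the two sides have different numbers of nodes. The block distribution and all names are assumptions made for the example only.

#include <algorithm>
#include <iostream>
#include <utility>

// Index range [begin, end) owned by rank r in a block distribution over p parts.
static std::pair<long, long> block(long n, int p, int r) {
    long base = n / p, rest = n % p;
    long begin = r * base + std::min<long>(r, rest);
    return {begin, begin + base + (r < rest ? 1 : 0)};
}

int main() {
    const long n = 20;          // global sequence length
    const int clients = 3;      // nodes of the client-side parallel component
    const int servers = 4;      // nodes of the server-side parallel component

    // For each (client, server) pair, compute the chunk the client must send.
    for (int c = 0; c < clients; ++c) {
        auto [cb, ce] = block(n, clients, c);
        for (int s = 0; s < servers; ++s) {
            auto [sb, se] = block(n, servers, s);
            long lo = std::max(cb, sb), hi = std::min(ce, se);
            if (lo < hi)
                std::cout << "client " << c << " sends [" << lo << "," << hi
                          << ") to server " << s << "\n";
        }
    }
    return 0;
}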
2.5.3. Preliminary Implementation of GridCCM. We have implemented a preliminary prototype of GridCCM on top of two existing Ccm implementations: OpenCCM [START_REF] Vadet | The OpenCCM platform[END_REF] and MicoCCM [START_REF] Pilhofer | The MICO CORBA component project[END_REF]. Our first prototype has been derived from OpenCCM [START_REF] Vadet | The OpenCCM platform[END_REF]. OpenCCM is developed at the research laboratory LIFL (Laboratoire d'Informatique Fondamentale de Lille) and is written in Java. The second prototype has been derived from MicoCCM [START_REF] Pilhofer | The MICO CORBA component project[END_REF]. MicoCCM is an OpenSource implementation based on the Mico Orb and is written in C++. Considering the second prototype, we have shown that GridCCM is able to efficiently aggregate the bandwidth, allowing parallel components to best use the underlying network. Some experiments have been done with a WAN, called VTHD [START_REF]The VTHD project[END_REF], a French high-bandwidth Wan that connects several clusters located in several INRIA research units. A bandwidth of 103 MB/s (820 Mb/s) was obtained (using a 1 Gbit/s link) between two clusters, each of them running a parallel component encapsulating a parallel code running on 11 cluster nodes.

3. A Communication Framework for Software Components. GridCCM requires several middleware systems at the same time, typically Corba and Mpi. They should be able to efficiently share the resources (network, processor, etc.) without conflicts and without competing with each other. Moreover, we want every middleware system to be able to use every available resource with the most appropriate method so as to achieve the highest performance. Unfortunately, existing Corba implementations are able neither to use a wide range of networks nor to be used beside Mpi. Therefore we designed a communication framework to cope with these issues. The important features which should be supported by grid-enabled middleware systems are:
Transparency - The middleware systems used by an application should be able to transparently and efficiently use the available resources. For example, a Mpi, Pvm, Java or Corba communication should be able to utilize high speed networks (San) as well as local area networks (Lan) and wide area networks (Wan). Moreover, they should adapt their security requirements to the characteristics of the underlying network, e.g. if the network is secure, it is useless to cipher data.
Flexibility - There is a diversity of middleware systems, and we can assume there will always be. It seems important not to tie grid applications to a specific grid framework but instead to ease the "gridification" of middleware systems.
Interoperability - Grids are not a closed world. Grid applications will need to be accessible using standard protocols. So, there is a high need to keep protocol interoperability.
Support Multiple Communication Paradigms - GridCCM requires several middleware systems, e.g. Mpi and Corba. Thus, it is important to allow different middleware systems to be used simultaneously.
We introduce the PadicoTM [START_REF] Denis | Padicotm: An open integration framework for communication middleware and runtimes[END_REF][START_REF] Denis | Network communications in grid computing: At a crossroads between parallel and distributed worlds[END_REF] communication framework which is able to deal with these problems.
PadicoTM is based on a three-level runtime layer model which decouples the interface seen by the middleware systems from the interface actually used at low-level: an arbitration layer plays the role of resources multiplexer; an abstraction layer virtualizes resources and provides the appropriate communication abstractions; a personality layer implements various APIs on top of the abstract interfaces. The originality of this model is to propose both parallel and distributed communication paradigms at every level, even in the abstraction layer. There is therefore no "bottleneck of features" as depicted in Figure 8. The following sections give a presentation of the different layers of PadicoTM as shown in Figure 9.
3.1. Arbitration Issues. Supporting Corba and Mpi, both running simultaneously in the same process using the same network, is not straightforward. Access to high-performance networks is the most conflict-prone task when using multiple middleware systems at the same time. We propose that arbitration should be dealt with at the lowest possible level, so as to build more advanced abstractions atop a fully reentrant system. Arbitration is performed by a layer which provides a consistent, reentrant and multiplexed access to every networking resource; each resource is utilized with the most appropriate driver and method. The arbitrated interfaces are designed for efficiency and reentrance. Thus, we propose these Api to be callback-based (à la Active Message). For true arbitration, this layer is the only client of the system-level resources: all accesses to the network should be performed through the arbitration layer. It also provides arbitration between different networks (e.g. Myrinet against Ethernet) so that they do not interfere with each other, and between different middleware systems even if the communication library does not provide multiplexing. More details about cooperative rather than competitive access are given in [START_REF] Denis | Towards high performance CORBA and MPI middlewares for grid computing[END_REF].
The arbitration layer in PadicoTM is called NetAccess, which contains two subsystems: SysIO for access to system I/O (sockets, files), and MadIO for multiplexed access to high-performance networks. The core of NetAccess manages the threads with the polling loops and enforces fairness between SysIO and MadIO. The interleaving policy between SysIO and MadIO is dynamically user-tunable through a configuration Api to give more priority to system sockets or high performance network depending on the application. NetAccess is open enough so as to allow the integration of other subsystems beside MadIO and SysIO for other paradigms such as Shmem on Smp for example.
3.1.1. NetAccess MadIO: API for Accessing Parallel-oriented Hardware. For good I/O reactivity and portability over high-performance networks, we have chosen the high-performance network library Madeleine [START_REF] Aumage | A portable and efficient communication library for high-performance cluster computing[END_REF] as a foundation. Madeleine is used for high-performance networks such as Myrinet, Sci, Via. Madeleine provides no more multiplexing channels than what is allowed by the hardware (e.g. 2 over Myrinet, 1 over Sci). MadIO adds a logical multiplexing/demultiplexing facility which allows an arbitrary number of communication channels. Multiplexing on top of Madeleine adds a header to all messages. This can significantly increase the latency if not done properly. We implement header combining to aggregate headers from several layers into a single packet. Thus, multiplexing on top of Madeleine adds virtually no overhead to middleware systems which send headers anyway. We actually measure that the overhead of MadIO over plain Madeleine is less than 0.1 µs, which is imperceptible on most current networks.

3.1.2. NetAccess SysIO: API for Accessing Distributed-oriented Hardware. Contrary to a widespread belief, using directly the socket Api from the Os does not bring full reentrance, multiplexing and cooperation. Several middleware systems not designed to work together may get into trouble when used simultaneously, even with only plain Tcp/Ip. There are reentrance issues for signal-driven I/O (used by middleware systems designed to deal with heavy load), which result in an incorrect behavior, or worse, in a crash. If a middleware system uses blocking I/O and another uses active polling, the one which does active polling holds nearly 100 % of the Cpu time; this will result in inequity or even deadlock. To solve these conflicts, SysIO manages a unique receipt loop that scans the opened sockets and calls user-registered callback functions when a socket is ready. The callback-based design guarantees that there are no reentrance issues and no signals to deal with.
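The following self-contained C++ toy illustrates the principle of callback-based multiplexing used in NetAccess: each incoming message carries a small logical channel header and is dispatched to the callback registered for that channel. The two-byte header format and all names are invented for the example and do not reflect the actual MadIO wire format.

#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Callback = std::function<void(const std::string&)>;

class Demultiplexer {
public:
    void register_channel(std::uint16_t id, Callback cb) { channels_[id] = std::move(cb); }
    // A message = 2-byte channel id header combined with the payload.
    void deliver(const std::vector<char>& msg) {
        std::uint16_t id = static_cast<std::uint8_t>(msg[0]) |
                           (static_cast<std::uint8_t>(msg[1]) << 8);
        auto it = channels_.find(id);
        if (it != channels_.end()) it->second(std::string(msg.begin() + 2, msg.end()));
    }
private:
    std::map<std::uint16_t, Callback> channels_;
};

int main() {
    Demultiplexer net;
    net.register_channel(1, [](const std::string& m) { std::cout << "CORBA: " << m << "\n"; });
    net.register_channel(2, [](const std::string& m) { std::cout << "MPI:   " << m << "\n"; });
    net.deliver({1, 0, 'r', 'e', 'q'});   // dispatched to the CORBA channel
    net.deliver({2, 0, 'm', 's', 'g'});   // dispatched to the MPI channel
    return 0;
}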
3.2. Abstraction Layer. On top of the arbitration layer, the abstraction layer provides higher level services, independent of the hardware. Its goal is to provide various abstract interfaces well suited for their use by various middleware systems.
A dual-abstraction model. A wide-spread design for communication frameworks consists in providing a unique abstraction on which several middleware systems may be built (see Figure 8 a and b). However, if this unique abstract interface is parallel-oriented (à la Mpi: message-based, Spmd, logical numbering of processes), dynamicity and link-per-link management are not easy. On the other hand, if this unique abstract interface is distributed-oriented (à la sockets: streams, fully dynamic), the performance is likely to be poor. Thus we propose an abstraction layer with both parallel- and distributed-oriented interfaces; these abstract interfaces are provided on top of every method provided by the arbitration layer (Figure 8 c). The abstraction layer should be fully transparent: a middleware system built on top of it should not have to know whether it uses Myrinet, a Lan or a Wan; it always uses the same Api and does not even choose which hardware it uses. The abstraction layer is responsible for automatically and dynamically choosing the best available service from the low-level arbitration layer according to the available hardware; then it should map it onto the right abstraction. This mapping could be straight (same paradigm at low and abstract levels, e.g. a parallel abstract interface on parallel hardware) or cross-paradigm, e.g. a distributed abstract interface on parallel hardware.
The abstract interfaces in PadicoTM are called VLink for distributed computing, and Circuit for parallelism.
Distributed abstract interface: VLink. The VLink interface is designed for distributed computing. It is client/server-oriented, and supports dynamic connections and streaming. In order to easily allow several personalities, both synchronous and asynchronous, VLink is based on a flexible asynchronous Api. This Api consists of five primitive operations: read, write, connect, accept, close. These functions are asynchronous: when they are invoked, they initiate (post) the operation and may return before completion. Their completion may be tested at any time by polling the VLink descriptor; a handler may be set which will be called upon operation completion. Such a set of functions is called a VLink driver. VLink drivers have been implemented on top of: MadIO, SysIO, Parallel Streams for Wan, AdOC [START_REF] Jeannot | Adaptive online data compression[END_REF], loopback.
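A minimal, single-threaded C++ sketch of such an asynchronous style is given below: an operation is posted, completion can be tested by polling, and an optional handler is called when the operation completes. The interface shown is only meant to convey the idea; the real VLink driver API differs.

#include <functional>
#include <iostream>
#include <queue>
#include <string>

struct Request {
    bool done = false;
    std::string data;
    std::function<void(Request&)> on_complete;   // optional completion handler
};

class VLinkLike {
public:
    void post_read(Request& r) { pending_.push(&r); }          // initiate, return immediately
    bool test(const Request& r) const { return r.done; }       // poll for completion
    void progress(const std::string& incoming) {               // driven by the runtime
        if (pending_.empty()) return;
        Request* r = pending_.front(); pending_.pop();
        r->data = incoming; r->done = true;
        if (r->on_complete) r->on_complete(*r);
    }
private:
    std::queue<Request*> pending_;
};

int main() {
    VLinkLike link;
    Request req;
    req.on_complete = [](Request& r) { std::cout << "completed: " << r.data << "\n"; };
    link.post_read(req);                       // post the operation
    std::cout << "posted, done=" << link.test(req) << "\n";
    link.progress("hello");                    // network progress triggers completion
    std::cout << "after progress, done=" << link.test(req) << "\n";
    return 0;
}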
Abstract interface for parallelism: Circuit. The Circuit interface is designed for parallelism. It manages communications on a definite set of nodes called a group. A group may be an arbitrary set of nodes, e.g. a cluster or a subset of a cluster, and may span multiple clusters or even multiple sites. Circuit allows communications from every node to every other node through an interface optimized for parallel runtimes: it uses incremental packing with explicit semantics to allow on-the-fly packet reordering, like in Madeleine [START_REF] Aumage | A portable and efficient communication library for high-performance cluster computing[END_REF]. Collective operations in Circuit still need to be investigated. Circuit adapters have been implemented on top of MadIO, SysIO, loopback and VLink (to use the alternate VLink adapters); a given instance of Circuit can use different adapters for different links.
3.3. Personality Layer and Middleware Systems. The middleware systems likely to be used by grid-enabled applications are various: Mpi, Corba, Soap, Hla, Jvm, Pvm, etc. Moreover, for each kind of middleware, there are several implementations which have their own specific properties. Developing a middleware system is a heavy task (for example, Mpich contains 200,000 lines of C) and requires very specific skills. Moreover, the standards, and thus the middleware systems themselves, are ever-changing. It does not seem reasonable to re-develop an implementation of each one of these middleware systems specifically for a given communication framework. Instead of adapting the middleware systems to our communication framework, we adapt our communication framework to the expectations of the existing middleware systems. Thus it is easy to follow the new versions and to use specific features of a given implementation.
To seamlessly re-use existing implementations of middleware systems, we choose to virtualize networking resources. It consists in giving the middleware system the illusion that it is using the usual resource it knows, even if the real underlying resource is completely different. For example, we show a "socket" Api to a Corba implementation so as to make it believe it is using Tcp/Ip, even if it is actually using another protocol/network behind the scene. This is performed through the use of thin wrappers on top of the appropriate abstract interface to make it look like the required Api. We call these small wrappers personalities. It is possible to give several personalities to an abstract interface.
PadicoTM provides several well-known Api through simple "cosmetic" adapters over the VLink and Circuit abstract interfaces. These thin Api wrappers are called personalities. The personalities for VLink are: Vio, for explicit use through a socket-like Api, and SysWrap, which supplies a 100 % socket-compliant Api through wrapping at runtime, binary-compatible with C, C++ or Fortran legacy codes without even recompiling. Thus, legacy applications are able to transparently use all PadicoTM communication methods without losing interoperability with PadicoTM-unaware applications on plain sockets. We implement an Aio personality on top of VLink which provides a plain Posix.2 Asynchronous I/O (Aio) Api. Thin adapters on top of Circuit provide a Fm 2.0 Api, and a (virtual) Madeleine Api.
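The following toy C++ sketch conveys what a personality does: it exposes the blocking, socket-like call a legacy middleware expects and implements it on top of an asynchronous abstract interface. All names are invented for illustration; SysWrap itself works by wrapping the socket Api at runtime.

#include <iostream>
#include <queue>
#include <string>

struct AbstractLink {                           // stand-in for an abstract interface
    std::queue<std::string> arrived;
    void progress() { /* in reality: poll the network and fill 'arrived' */ }
    bool ready() const { return !arrived.empty(); }
    std::string take() { std::string d = arrived.front(); arrived.pop(); return d; }
};

struct SocketPersonality {                      // the API a socket-based middleware expects
    AbstractLink& link;
    std::string recv() {                        // blocking semantics built on async primitives
        while (!link.ready()) link.progress();
        return link.take();
    }
};

int main() {
    AbstractLink link;
    link.arrived.push("GIOP request");          // simulate data arriving from the network
    SocketPersonality sock{link};
    std::cout << "recv -> " << sock.recv() << "\n";
    return 0;
}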
Thanks to SysWrap, various middleware systems have been seamlessly ported to PadicoTM with absolutely no change in their code: Corba implementations (omniORB 3, omniORB 4, ORBacus 4.0, all Mico 2.3.x including Ccm-enabled versions), an Hla implementation (Certi from the Onera), and a Soap implementation (gSOAP 2.2). A Java virtual machine (Kaffe 1.0.7) has been slightly modified for use within PadicoTM, with some changes in its multi-threading management code. Thanks to the virtual Madeleine personality, the existing Mpich/Madeleine [START_REF] Aumage | MPICH/Madeleine: a true multi-protocol MPI for high-performance networks[END_REF] implementation can run in PadicoTM. The middleware systems are dynamically loadable into PadicoTM. Arbitration guarantees that any combination of them may be used at the same time.
4. Deploying Components on a Computational Grid. One of the long run goals of computational grids is to provide computer power in the same way as the electric power grid supplies electric power [START_REF]The Grid: Blueprint for a New Computing Infrastructure[END_REF], i.e. transparently. Here, transparency means that the user does not know what particular resources provide electric or computational power. So the user should just have to submit his or her application to a computational grid and get back the result of the application without worrying about resource selection, resource location, or mapping processes on resources. In other words, application deployment should be as automatic and easy as plugging an electric device into an electric outlet.
Automatic deployment of component-based applications is crucial for better acceptance of the component-based programming model as well as for the success of computational grids which can host various types of applications (parallel, distributed, etc.). One of the advantages of the Corba Component Model (Ccm, [START_REF] Omg | CORBA Component Model V3.0[END_REF]) is that it specifies both a packaging model and deployment model. However, Ccm does not say how execution hosts may be selected, nor how processes may be launched on computers from a practical viewpoint.
To really achieve automatic deployment, we need both a description of the computational grid which we have access to plus a packaged application (Figure 10). Those two pieces of information are given to a deployment tool which selects resources and actually launches the application on the selected, distributed resources of the grid. The following sections provide further details on the different steps of automatic deployment: application and resource information description, deployment planning, actual execution and configuration of the application.

4.1.1. Application Description. The packaged application takes the form of a component package. This is a compressed archive provided by the user to the deployment tool. It includes, among other files, the assembly description which describes all the components of the assembly and their interconnections, as well as initial configuration parameters.
The assembly and component descriptors can express various requirements such as the processor architecture and the operating system required by a component implementation. A component may have environmental or other dependencies, like libraries, executables, Java classes, etc. Another possible requirement is component collocation: components may be free or partitioned to a single process or a single host, meaning that a group of component instances will have to be deployed in the same process or on the same compute node.
The component deployment tool must make sure that those constraints and dependencies will be satisfied at execution time.
4.1.2. Grid Resource Information Description. Before automatically deploying the processes of a distributed application on a computational grid, the compute nodes on which the application will be run must be selected automatically. In order for the deployment tool to make wise decisions in selecting computers, grid resources must be described precisely.
Information about grid resources includes not only compute and storage resource information, but also network description. Network information is important for high-performance applications in particular: resource selection may be constrained by such computer-level and network-level requirements as "I want 32 computers connected by a network of at least 2 Gb/s, like Myrinet".
Compute and storage resource description is rather well mastered (computer architecture, number and speed of CPUs, operating system, memory size, storage capacity, etc.), as exemplified by MDS2 (Monitoring and Discovery Service, [START_REF] Czajkowski | Grid information services for distributed resource sharing[END_REF]), the Grid Information Service of the Globus Toolkit version 2 [12]. However, network description received less attention. Simple networks should be described in a simple way, but the description model should allow for the description of complex networks including firewalls, NAT (Network Address Translation), asymmetric links (like asymmetric bandwidths), non-hierarchical topologies, connection to multiple networking technologies (Myrinet, Ethernet, etc.). We have proposed [START_REF] Lacour | A network topology description model for grid application deployment[END_REF] a scalable description model of grid network topology (as shown in Figure 11) and have implemented it on top of MDS2. The main idea is to group compute nodes together within network groups where network characteristics are roughly similar (bandwidth, latency, jitter, loss rate, etc.). This results in a synthetic description of grid resources, including both compute nodes and network.
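To make the notion of network groups concrete, here is a small self-contained C++ sketch of such a synthetic description, together with a query constrained by a minimum bandwidth. The fields and values are illustrative only and do not follow the MDS2 schema.

#include <iostream>
#include <string>
#include <vector>

struct ComputeNode { std::string name, arch, os; int cpus; };
struct NetworkGroup {
    std::string name, technology;     // e.g. "Myrinet", "Ethernet", "WAN"
    double bandwidth_gbps, latency_us;
    std::vector<ComputeNode> nodes;
};

int main() {
    std::vector<NetworkGroup> grid = {
        {"clusterA", "Myrinet", 2.0, 10.0,
         {{"a01", "x86", "Linux", 2}, {"a02", "x86", "Linux", 2}}},
        {"clusterB", "Ethernet", 1.0, 50.0,
         {{"b01", "x86", "Linux", 4}}},
    };

    // Example of a constrained query: "nodes connected by at least 2 Gb/s".
    for (const auto& g : grid)
        if (g.bandwidth_gbps >= 2.0)
            for (const auto& n : g.nodes)
                std::cout << n.name << " in " << g.name
                          << " (" << g.technology << ")\n";
    return 0;
}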
4.2. Deployment Planning. Once application and resource information has been retrieved, the deployment planner [START_REF] Lacour | A software architecture for automatic deployment of CORBA components using grid technologies[END_REF] is responsible for 1) selecting resources to run the application, 2) selecting the network links (or network technology) to interconnect the application components, and 3) mapping the application processes (or Corba component servers) onto the selected resources. The input of the deployment planning algorithm is made of the application description and the resource description. The application description represents a set of constraints which must be satisfied by the selected resources.
The output of the deployment planner is a deployment plan which describes the mapping of the components onto component servers and the mapping of the component servers onto the selected compute nodes of the computational grid. The deployment plan should also specify 1) in what order processes must be launched by the deployment tool, 2) how data must flow from the output of certain processes to the input of other processes, 3) what network connections must be established between every pair of processes. For instance, items 1) and 2) are necessary for Corba applications, where a Naming Service needs to be launched, and its reference needs to be passed to the component servers launched afterwards.
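The following deliberately naive C++ sketch conveys the flavour of this planning step: each component is mapped onto a node satisfying its architecture and operating-system constraints, and components of the same collocation group are kept on the same host. All structures and names are invented for the example; a real planner handles many more constraints.

#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Node      { std::string name, arch, os; };
struct Component { std::string name, arch, os, collocation_group; };

int main() {
    std::vector<Node> nodes = {{"a01", "x86", "Linux"}, {"b01", "ppc", "AIX"}};
    std::vector<Component> comps = {{"solver", "x86", "Linux", "g1"},
                                    {"coupler", "x86", "Linux", "g1"},
                                    {"viz", "ppc", "AIX", ""}};

    std::map<std::string, std::string> group_host;   // collocation group -> chosen host
    for (const auto& c : comps) {
        std::string chosen;
        if (!c.collocation_group.empty() && group_host.count(c.collocation_group))
            chosen = group_host[c.collocation_group];            // reuse the group's host
        else
            for (const auto& n : nodes)
                if (n.arch == c.arch && n.os == c.os) { chosen = n.name; break; }
        if (!c.collocation_group.empty()) group_host[c.collocation_group] = chosen;
        std::cout << c.name << " -> " << (chosen.empty() ? "UNSATISFIED" : chosen) << "\n";
    }
    return 0;
}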
4.3. Actual Deployment of Components Using the Globus Toolkit. Once a deployment plan has been obtained from the previous step, the component-based application is launched and configured according to the Corba component model. The technical point is that the selected computers are assumed not to run any component activator or component server. That is the reason why a job submission method is needed to launch an initial process on the selected compute nodes.
This step is fully compatible with the Ccm deployment model as explained in [START_REF] Lacour | Deploying CORBA components on a computational grid: General principles and early experiments using the Globus Toolkit[END_REF]. For example, we have developed a prototype called ADAGE: Automatic Deployment of Applications in a Grid Environment. It is able to deploy standard Corba components using the Globus Toolkit version 2 [12,[START_REF] Foster | The Globus Project: a status report[END_REF]. The Globus Toolkit is an open source software toolkit used for building grids. It includes software for security enforcement, resource, information, and data management. This middleware is wide-spread and well-established, as exemplified by many projects relying on the Globus Toolkit, such as GriPhyN (the Grid Physics Network, [START_REF]The Grid Physics Network (GriPhyN) web site[END_REF]), the American DOE Science Grid [START_REF]The DOE Science Grid web site[END_REF], the European DataGrid project [START_REF]The DataGrid Project web site[END_REF], and TeraGrid [START_REF]The TeraGrid web site[END_REF].
As shown on Figure 10, the deployment tool manages two sorts of handles: Corba references and handles returned by the grid access middleware (the Globus Toolkit). Both are useful to control application processes, like cancel, suspend, or restart their execution.
5. Conclusion. The deployment of high-bandwidth wide-area networks has led computational grids to offer a very powerful computing resource. In particular, this inherently distributed resource is well-suited for multi-physics applications. To face the complexity of such applications as well as the heterogeneity and volatility of grids, the software component technology appears to be a very adequate programming model. We chose to work with the Corba component model because its deployment model is very complete: it specifies the deployment of a set of components on a set of distributed (component) servers. However, it does not handle very well some aspects that are inherent to Grid infrastructures: managing parallelism within a component, heterogeneity of networks and automatic deployment of components onto available compute nodes. It specifies neither how to select resources, nor how to initiate component servers on the selected resources. On the other hand, a grid access middleware, such as the Globus Toolkit, deals with security enforcement, resource, information, data management, and portability.
This paper presents some solutions to those problems. We have shown that managing parallelism within a Corba component, while maintaining scalable connections between components, can be done without modifying the Omg specification. Then, grid programmers can rely on an existing model, from the Omg, which is widely accepted in some application fields, but not yet within the grid user community, to be honest. Concerning network resources, we propose a framework that is capable of virtualizing various networking technologies and their associated protocols. It allows components to communicate with each other without worrying about the underlying networks. Thus a distributed application based on software components can be deployed independently of network resources. It can be executed anywhere while fully taking advantage of the performance characteristics of the underlying network. Moreover, several communication middleware or runtime systems can be used simultaneously within a component without suffering from any side effect or unexpected behavior. Our framework is able to share network resources even if they were not designed to be shared. This gives much flexibility for the deployment of components on Grid infrastructures. For instance, two components exchanging a large amount of data can be mapped onto a set of compute nodes interconnected by a very high-performance network such as Myrinet. To do so, however, the Grid middleware managing the Grid infrastructure must be aware of the presence of such networks as well as of their topology. We proposed to extend existing information services to store information related to network resources (topologies, network technologies, etc.). Using such extensions, it is possible to allocate resources and to propose a mapping of components to those resources in an automatic way depending on the user's constraints and requirements. Most of our efforts are now devoted to the integration of the results presented in this paper with the Globus Toolkit.
Figure 1. Overview of the Corba Component Model.
Figure 3. A component Idl definition.
Figure 4. Example of code to connect two components: aComponent ref = ServerComp->provide_avgPort(); ClientComp->connect_avgClientPort(ref);
Figure 5. Parallel component concept.
Figure 6. GridCCM intercepts and translates remote method invocations.
Figure 8. Several abstraction models may be envisaged: (a) everything expressed through a single (distributed) abstraction, where two cross-paradigm translations are needed for a parallel middleware atop a parallel network; (b) a unified abstraction that makes compromises in all cases, giving up most possible optimizations and imposing compromises on everything; (c) the dual-abstraction model, which uses different abstract interfaces for different paradigms so that only the required compromises are made.
Figure 9. The PadicoTM communication framework.
Figure 10. Overview of the deployment architecture.
Figure 11. Network graph describing the topology of a sample Grid.
This choice stems from two considerations. First, many parallel codes are indeed Spmd. Second, Spmd codes bring an easily manageable execution model.
Acknowledgments. This work was supported by the Incentive Concerted Action "GRID" (ACI GRID) of the French Ministry of Research.
This work was supported by the French ACI GRID initiative 1 |
01163963 | en | [
"phys.meca.mefl",
"spi.gciv"
] | 2024/03/04 16:41:24 | 2015 | https://hal.science/hal-01163963/file/Wind%20Tunnel%20study%20of%20the%20flow%20around%20a%20wall-mounted%20square%20prism.pdf | F Perret
I Demouge
S Calmet
F Courtine
De
Risheng Sheng
email: [email protected]
Laurent Perret
François Demouge
Isabelle Calmet
Sébastien Courtine
Fabrice De Oliveira
Wind Tunnel study of the flow around a wall-mounted square prism immersed in an atmospheric boundary-layer
INTRODUCTION
Wind load on structures is one of the main wind effects of importance for building construction. Most structures are bluff bodies, hence the aerodynamics of bluff bodies is commonly studied in the wind engineering literature. [START_REF] Irwin | Bluff body aerodynamics in wind engineering[END_REF] has listed several examples of wind tunnel tests for bluff bodies and shown how important these tests can be to the safety and economics of large buildings and bridges. Evidently, full-scale measurements have no scale mismatch for any characteristic. However, they are costly and can be time-consuming. Additionally, the measurement conditions can hardly be controlled: every change of the environment can lead to measurement errors. Hence, well-controlled wind-tunnel tests were carried out for wind engineering. Niemann (1998) has compared the results of wind tunnel tests with those of full-scale measurements; good agreement was found, which supports the quality of wind tunnel measurements. [START_REF] Melbourne | Comparison of measurements on the caarc standard tall building model in simulated model with flows[END_REF] compared the results of wind tunnel tests from different laboratories for the same model, and those laboratories obtained almost the same results. Hence we can conclude that conditions can be easily controlled in wind tunnel tests and that wind tunnel experiments are an important method for building construction. The wind tunnel experiment with a tower building as a bluff body presented in this paper is part of a broader research project, the target of which is to validate the ability of Large Eddy Simulation (LES) to predict extreme wall-pressure events against experimental measurements. In order to be able to prescribe the correct inlet condition, detailed wind tunnel measurements were performed to measure the mean velocity gradient and all the length scales of the turbulence. Different wind tunnel measurement techniques such as particle image velocimetry (PIV), high-frequency force measurement and unsteady wall-pressure measurements were also performed in this study. In the present paper, the focus is on the high-frequency force measurement and unsteady wall-pressure measurements to investigate the spectral content of the wind load in the lateral and longitudinal directions.
EXPERIMENTAL MODELS AND PROCEDURES
All the wind tunnel tests were performed in the NSA wind tunnel at CSTB, Nantes, France, which is a closed-loop wind tunnel with a test section 20 m long, 2 m high and 4 m wide. These large dimensions allow for the reproduction of natural wind at scales ranging between 1:100 and 1:400 for the usual local and global measurements. The velocity of the flow in the wind tunnel is adjustable from 0 to 30 m/s. The floor of the wind tunnel is equipped with a turntable to position models from 0 to 360° ± 1°. The wind is simulated by placing different sets of roughness and vortex generators on the floor of the wind tunnel.
Model
The tower building was modeled with a wall-mounted prism of square cross-section (dimensions: 10 cm × 10 cm × 49 cm) made of Plexiglas to allow for optical access. A second model equipped with 265 pressure taps was used to measure the unsteady pressure distribution on the model walls as shown in Figure 1.
The wind flow characteristics
The wind tunnel tests were carried out in turbulent flow, simulating the vertical velocity and turbulence profile expected at the construction site using roughness elements which include a rough carpet and turbulence generators at the entrance of the section (cf. Figure 2a). The targeted code roughness is z0 = 0.02 m according to Eurocode 1 (2005). At the reference height Href = 0.67H (where H is the building height), the reference mean wind speed is Uref = 10 m/s. Figure 2b and Figure 2c show the profiles of mean velocity and turbulent intensity of the generated boundary layer. Comparison between measurements and results from the literature shows a good agreement, which validates the inlet condition simulated in the NSA wind tunnel.
In addition to the mean velocity and the turbulent intensity profiles, power spectra of the wind speed were also checked. Here we compare the power spectrum of the longitudinal wind speed u at the reference height with the result given by the Eurocode 1 (2005). Figure 3 shows again a very good agreement between the experimental data and the model profile from the literature. We can conclude that in the NSA wind tunnel we can simulate a turbulent wind which can reproduce the fluid characteristics as proposed in the literature.
Experimental set-up
The tower building has been equipped with 265 pressure taps. Measurements were performed synchronously at a rate of 512 Hz for the pressure taps and 200 Hz for the force balance over 4 min. The pressure scanner, designed by CSTB, is capable of acquiring 1024 channels at 1024 Hz. This accurate scanner is composed of a high-precision power supply and 32 channels. Each channel is connected to an ESP pressure scanner with 32 pressure ports multiplexed at 70 kHz. A tubing system was used to connect each pressure tap to a scanner port. Helmholtz resonance effects due to the tube cavity were corrected by using traditional restrictors.
RESULTS AND DISCUSSIONS
In this section, a detailed analysis of the characteristics of the wind load is presented, focusing on its dynamics. Indeed, thanks to the pressure tap measurements, all local information such as the mean pressure, the variance of the pressure signal and the power spectrum of the signal is available. Global force measurements are compared with the integration of the local wind loads obtained from the pressure measurements.
Global wind loads and force
With the pressure tap measurements, the instantaneous value of the local pressure can be obtained simultaneously at the 265 pressure taps. After integration over the building surface, the global force signal can be compared with the result of the force balance. The original power spectrum of the force is altered by the mechanical resonance of the equipment. To facilitate the analysis and comparison, a filter is applied to remove the resonance according to Caracoglia (2015). Figure 4 shows the good agreement found between the two types of measurements. A strong peak appears in the lateral force power spectrum which gives a Strouhal number of about 0.11. The Strouhal number is defined as St = fD/Uref, with D the tower width and Uref the reference velocity. This value is very close to the value of 0.12 found in the literature, e.g. Choi (2000) and [START_REF] Okajima | Numerical simulation of flow around rectangular cylinders[END_REF], for this type of structure. The difference is due to the different definitions of Uref, which is the reference velocity at 0.67H here and the free-stream velocity in those papers.
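As an order-of-magnitude check, with the model width D = 0.1 m and Uref = 10 m/s quoted in Section 2, St = 0.11 corresponds to a vortex-shedding frequency f = St·Uref/D = 0.11 × 10 / 0.1 ≈ 11 Hz, which is comfortably resolved by the 512 Hz pressure sampling.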
Local wind loads
A clear peak at St = 0.11 appeared only in the spectrum of the lateral force, as shown in Figure 4. We focus here on the local wind loads at the reference height to understand this more clearly. We can number each pressure tap as shown in Figure 5b, in which the wind impinges on face AB. Because of the symmetry of our tower building, identical results were found for sides BC and AD. Therefore, only the information at the three faces AB, BC and CD was needed.
Figure 5 shows the power spectrum for taps 1-15 around the tower at reference height. All the spectra on the BC side have a marked peak. This is expected as the lateral sides of the building are where shear layers at the origin of the vortex shedding cycle form and develop.
The spectral peak corresponding to the Strouhal number of 0.11 also exists on the AB and CD sides for some pressure taps (taps 1, 5, 11, 12, 14 and 15, symmetrically). This was less expected, especially on the side AB facing the oncoming wind where the flow is upstream of the separation points formed by the upstream building corners. These tap locations are therefore under the influence of the vortex shedding. To better understand this phenomenon, the spectral coherence between different taps has been computed. At the frequency corresponding to the Strouhal number identified above, we found that there exists strong coherence (about 0.9) between two points on the same side of the face AB (taps 4 and 5) as shown in Figure 6a and also between taps located on the same side of the building (taps 5 and 6 to 10) (Figure 6b), confirming that these tap locations are under the influence of the same shear layer. The coherence level drops to 0.65 for points at opposite corners of face AB (taps 1 and 5) as shown in Figure 6a. Even though vortex shedding from opposite sides of a bluff body is known to be well-synchronized (being roughly 180° out of phase), the marked loss of phase-coherence (also observed between two points from opposite sides) might be the result of the structure of the oncoming wind flow: the bluff body being immersed in a thick atmospheric boundary layer, the presence of strong turbulent structures might disturb the flow separation at the corners of the building side facing the wind near tap 1 and, symmetrically, tap 5. This loss of coherence of pressure signals measured on surfaces normal to the oncoming wind can therefore reduce, or even remove through the integration process, the spectral peak in the global load in the longitudinal direction. Finally, we can notice the presence of the same noise in the power spectrum figures and in the coherency figures. It is attributed to acoustic noise that exists in the wind tunnel and depends only on the wind tunnel configuration.
Figure 1: a) distribution of pressure taps, b) tower building for PIV measurements, c) tower building with pressure taps for pressure distribution measurements on the model walls
Figure 2: a) roughness elements to simulate the turbulent flow, b) mean velocity profile, c) turbulent intensity profile
Figure 3: power spectrum of the longitudinal wind speed at the reference height compared with Eurocode 1 (2005)
Figure 4: Power spectrum for three components of force: a) lateral direction, b) longitudinal direction and c) vertical direction
Figure 5: a) power spectrum of pressure on AB, b) configuration of pressure taps for the present local spectral analysis, c) power spectrum of pressure on BC, d) power spectrum of pressure on CD
00389288 | en | [
"phys.qphy",
"phys.mphy",
"math.math-mp"
] | 2024/03/04 16:41:24 | 2010 | https://hal.science/hal-00389288v2/file/moebius.pdf | Hans Havlicek
Boris Odehnal
Metod Saniga
Möbius Pairs of Simplices and Commuting Pauli Operators
Keywords: Mathematics Subject Classification (2000): 51A50, 81R05, 20F99. PACS Numbers: 02.10.Ox, 02.40.Dr, 03.65.Ca. Key-words: Möbius Pairs of Simplices, Factor Groups, Symplectic Polarity, Generalised Pauli Groups
There exists a large class of groups of operators acting on Hilbert spaces, where commutativity of group elements can be expressed in the geometric language of symplectic polar spaces embedded in the projective spaces PG(n, p), n being odd and p a prime. Here, we present a result about commuting and non-commuting group elements based on the existence of so-called Möbius pairs of n-simplices, i.e., pairs of n-simplices which are mutually inscribed and circumscribed to each other. For group elements representing an n-simplex there is no element outside the centre which commutes with all of them. This allows us to express the dimension n of the associated polar space in group theoretic terms. Any Möbius pair of n-simplices according to our construction corresponds to two disjoint families of group elements (operators) with the following properties: (i) any two distinct elements of the same family do not commute; (ii) each element of one family commutes with all but one of the elements from the other family. A three-qubit generalised Pauli group serves as a non-trivial example to illustrate the theory for p = 2 and n = 5.
Introduction
The last two decades have witnessed a surge of interest in the exploration of the properties of certain groups relevant for physics in terms of finite geometries. The main outcome of this initiative was a discovery of a large family of groups -Dirac and Pauli groups -where commutativity of two distinct elements admits a geometrical interpretation in terms of the corresponding points being joined by an isotropic line (symplectic polar spaces, see [START_REF] Huppert | I. Die Grundlehren der Mathematischen Wissenschaften[END_REF], [START_REF] Shaw | Finite geometry, Dirac groups and the table of real Clifford algebras[END_REF], [START_REF] Saniga | Multiple qubits as symplectic polar spaces of order two[END_REF], [START_REF] Planat | On the Pauli graphs of N-qudits[END_REF], [START_REF] Sengupta | Finite geometries with qubit operators[END_REF], [START_REF] Rau | Mapping two-qubit operators onto projective geometries[END_REF], [START_REF] Thas | Pauli operators of N-qubit Hilbert spaces and the Saniga-Planat conjecture[END_REF], [START_REF] Thas | The geometry of generalized Pauli operators of N-qudit Hilbert space, and an application to MUBs[END_REF], and [START_REF] Havlicek | Factor-group-generated polar spaces and (multi-) qudits[END_REF] for a comprehensive list of references) or the corresponding unimodular vectors lying on the same free cyclic submodule (projective lines over modular rings, e. g., [START_REF] Havlicek | Projective ring line of a specific qudit[END_REF], [START_REF] Havlicek | Projective ring line of an arbitrary single qudit[END_REF]). This effort resulted in our recent paper [START_REF] Havlicek | Factor-group-generated polar spaces and (multi-) qudits[END_REF], where the theory related to polar spaces was given the most general formal setting.
Finite geometries in general, and polar spaces in particular, are endowed with a number of remarkable properties which, in light of the above-mentioned relations, can be directly translated into group theoretical language. In this paper, our focus will be on one of them. Namely, we shall consider pairs of n-simplices of an n-dimensional projective space (n odd) which are mutually inscribed and circumscribed to each other. First, the existence of these so-called Möbius pairs of n-simplices will be derived over an arbitrary ground field. Then, it will be shown which group theoretical features these objects entail if restricting to finite fields of prime order p. Finally, the case of three-qubit Pauli group is worked out in detail, in view of also depicting some distinguished features of the case p = 2.
Möbius pairs of simplices
We consider the n-dimensional projective space PG(n, F) over any field F, where n ≥ 1 is an odd number. Our first aim is to show an n-dimensional analogue of a classical result by Möbius [START_REF] Möbius | Kann von zwei dreiseitigen Pyramiden eine jede in Bezug auf die andere umund eingeschrieben zugleich heissen?[END_REF]. Following his terminology we say that two n-simplices of PG(n, F) are mutually inscribed and circumscribed if each point of the first simplex is in a hyperplane of the second simplex, and vice versa for the points of the second simplex. Two such n-simplices will be called a Möbius pair of simplices in PG(n, F) or shortly a Möbius pair. There is a wealth of newer and older results about Möbius pairs in PG(3, F). See, among others, [START_REF] Guinand | Graves triads, Möbius pairs, and related matrices[END_REF], [4, p. 258], [START_REF] Witczyński | Some remarks on the theorem of Möbius[END_REF], [START_REF] Witczyński | Möbius' theorem and commutativity[END_REF]. The possibility to find Möbius pairs of simplices in any odd dimension n ≥ 3 is a straightforward task [2, p. 188]: Given any n-simplex in PG(n, F) take the image of its hyperplanes under any null polarity π as second simplex. By this approach, it remains open, though, whether or not the simplices have common vertices. For example, if one hyperplane of the first simplex is mapped under π to one of the vertices of the first simplex, then the two simplices share a common point. However, a systematic account of the n-dimensional case seems to be missing. A few results can be found in [START_REF] Berzolari | Sull' estensione del concetto di tetraedri di Möbius agli iperspazî[END_REF] and [START_REF] Herrmann | Matrizen als projektive Figuren[END_REF]. There is also the possibility to find Möbius pairs which are not linked by a null polarity. See [1, p. 137] for an example over the real numbers and [3, p. 290ff.] for an example over the field with three elements. Other examples arise from the points of the Klein quadric representing a double six of lines in PG(3, F). See [10, p. 31]. We focus our attention on non-degenerate Möbius pairs. These are pairs of n-simplices such that each point of either simplex is incident with one and only one hyperplane of the other simplex. This property implies that each point of either simplex does not belong to any subspace which is spanned by fewer than n points of the other simplex, for then it would belong to at least two distinct hyperplanar faces. We present a construction of non-degenerate Möbius pairs which works over any field F. The problem of finding all Möbius pairs in PG(n, F) is not within the scope of this article.
In what follows we shall be concerned with matrices over F which are composed of the matrices
\[
K := \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad
J := \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad
L := \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}, \tag{1}
\]
and the 2 × 2 unit matrix I. We define a null polarity π of PG(n, F) in terms of the alternating (n + 1) × (n + 1) matrix1
\[
A := \begin{pmatrix}
K & -J & \cdots & -J \\
J & K & \cdots & -J \\
\vdots & \vdots & \ddots & \vdots \\
J & J & \cdots & K
\end{pmatrix}. \tag{2}
\]
Thus all entries of A above the diagonal are -1, whereas those below the diagonal are 1. Using the identities -K^2 = I, JK - KL = 0, and JL = 0 it is easily verified that A is indeed an invertible matrix, because
\[
A^{-1} = \begin{pmatrix}
-K & -L & \cdots & -L \\
L & -K & \cdots & -L \\
\vdots & \vdots & \ddots & \vdots \\
L & L & \cdots & -K
\end{pmatrix}. \tag{3}
\]
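For completeness, here is a sketch of the block computation behind this claim (indices refer to the 2 × 2 blocks of A and A^{-1}; only the three identities quoted above are used):
\[
(AA^{-1})_{ii} = \sum_{j<i} J(-L) + K(-K) + \sum_{j>i} (-J)L = -K^2 = I,
\]
\[
(AA^{-1})_{ik} = K(-L) + (-J)(-K) + (\text{terms containing } JL) = JK - KL = 0 \qquad (i < k),
\]
and symmetrically (AA^{-1})_{ik} = KL - JK = 0 for i > k.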
Let P := {P_0, P_1, \ldots, P_n} be the n-simplex which is determined by the vectors e_0, e_1, \ldots, e_n of the standard basis of F^{n+1}, i.e.,
\[
P_j = F e_j \quad \text{for all } j \in \{0, 1, \ldots, n\}. \tag{4}
\]
The elements of F^{n+1} are understood as column vectors. We first exhibit the image of P under the null polarity π.
Lemma 1. Let S be a subspace of PG(n, F) which is generated by k + 1 ≥ 2 distinct points of the simplex P. Then the following assertions hold:
(a) S ∩ π(S) = ∅ if k is odd.
(b) S ∩ π(S) is a single point, which is in general position to the chosen points of P, if k is even.
Proof. Suppose that S is the span of the points P_{j_0}, P_{j_1}, \ldots, P_{j_k}, where 0 ≤ j_0 < j_1 < ⋯ < j_k ≤ n. A point Y is in S if, and only if, Y = Fy for some non-zero vector of the form
\[
y = \sum_{i=0}^{k} y_{j_i} e_{j_i}. \tag{5}
\]
The rows of A with numbers j_0, j_1, \ldots, j_k comprise the coefficients of a system of linear equations in n + 1 unknowns x_0, x_1, \ldots, x_n whose solutions are the vectors of π(S). Substituting the vector y into this system gives the homogeneous linear system (written in matrix form)
\[
A_k \cdot (y_{j_0}, y_{j_1}, \ldots, y_{j_k})^T = (0, 0, \ldots, 0)^T, \tag{6}
\]
where A_k is the principal submatrix of A which arises from the first k + 1 rows and columns of A. Note that (6) holds, because the matrix A_k coincides with the principal submatrix of A which arises from the rows and columns with indices j_0, j_1, \ldots, j_k. The solutions of (6) are the vectors of S ∩ π(S). There are two cases: k odd: Here A_k has full rank k + 1, as follows by replacing n with k in (2) and (3). Hence the system (6) has only the zero solution and S ∩ π(S) = ∅, as asserted.
k even: Here A_k cannot be of full rank, as it is an alternating matrix with an odd number of rows. By the above, the submatrix A_{k-1} has rank k, so that the rank of A_k equals k. This implies that the solution set of the linear system (6) is the span of a single non-zero vector. For example,
\[
(-1, 1, -1, 1, -1, \ldots, -1)^T \in F^{k+1} \tag{7}
\]
is such a vector. It determines the point
\[
P_{j_0, j_1, \ldots, j_k} := F \sum_{i=0}^{k} (-1)^{i+1} e_{j_i}. \tag{8}
\]
Since the coordinates of P_{j_0, j_1, \ldots, j_k} with numbers j_0, j_1, \ldots, j_k are non-zero, the points P_{j_0}, P_{j_1}, \ldots, P_{j_k}, P_{j_0, j_1, \ldots, j_k} are in general position.
The previous lemma holds trivially for k + 1 = 0 points, since then S = ∅. It is also valid, mutatis mutandis, in the case k + 1 = 1, even though here one has to take into account that S = P_{j_0} yields again the point S ∩ π(S) = P_{j_0}. Hence the null polarity π and the simplex P give rise to the following points: P_0, P_1, . . . , P_n (the points of P), P_{012}, P_{013}, . . . , P_{n-2,n-1,n} (one point in each plane of P), . . . , P_{0,1,...,n-1}, . . . , P_{1,2,...,n} (one point in each hyperplane of P). All together these are
$$\binom{n+1}{1} + \binom{n+1}{3} + \cdots + \binom{n+1}{n} = \sum_{i=0}^{n} \binom{n}{i} = 2^n \qquad (9)$$
mutually distinct points. We introduce another notation by defining
$$P_{j_0, j_1, \ldots, j_k} =: Q_{m_0, m_1, \ldots, m_{n-k}}, \qquad (10)$$
where 0 ≤ m_0 < m_1 < · · · < m_{n-k} ≤ n
are those indices which do not appear in P j 0 , j 1 ,..., j k . We are now in a position to state our first main result:
Theorem 1. In PG(n, F), n ≥ 3, let the null-polarity π and the n-simplex P = {P 0 , P 1 , . . . , P n } be given according to (2) and (4), respectively. Then the following assertions hold:
(a) P and Q := {Q_0, Q_1, . . . , Q_n}, where the points Q_m are defined by (10), form a non-degenerate Möbius pair of n-simplices.
(b) The n-simplices P and Q are in perspective from a point if, and only if, F is a field of characteristic two.
Proof. Ad (a): Choose any index m ∈ {0, 1, . . . , n}. Then Q m is the image under π of the hyperplane S which is spanned by P 0 , . . . , P m-1 , P m+1 , . . . , P n . The proof of Lemma 1 shows how to find a system of linear equations for Q m . Furthermore, formula (8) provides a coordinate vector for Q m . However, such a vector can be found directly by extracting the m-th column of the matrix A -1 , viz.
$$\sum_{i=0}^{m-1} (-1)^{i+m+1} e_i + \sum_{k=m+1}^{n} (-1)^{k+m} e_k =: q_m. \qquad (11)$$
(The vector from ( 8) is (-1) m q m .) As the columns of A -1 form a basis of F n+1 , the point set Q is an n-simplex.
The n + 1 hyperplanes of the simplex Q are the images under π of the vertices of P. Each of these hyperplanes has a linear equation whose coefficients comprise one of the rows of the matrix A. So a point P j is incident with the hyperplane π(P i ) if, and only if, the (i, j)-entry of A is zero. Since each row of A has precisely one zero entry, we obtain that each point of P is incident with one and only one hyperplane of Q.
In order to show that each point of Q is incident with precisely one hyperplane of the simplex P, we apply a change of coordinates from the standard basis e_0, e_1, . . . , e_n to the basis b_i := (-1)^i q_i, i ∈ {0, 1, . . . , n}. The points Fb_i constitute the n-simplex Q. Let B be the matrix with columns b_0, b_1, . . . , b_n. With respect to the basis b_i the columns of B^{-1} describe the points of P, and B^T A B = A is a matrix for π. The columns of B^{-1} and A^{-1} are identical up to an irrelevant change of signs in columns with odd indices. Therefore, with respect to the basis b_i, the simplex Q plays the role of P, and vice versa. So the assertion follows from the result in the preceding paragraph.
Ad (b): For each j ∈ {0, 2, . . . , n -1} the lines P j Q j and P j+1 Q j+1 meet at that point which is given by the vector
$$-e_j + q_j = -(e_{j+1} + q_{j+1}) = (-1, 1, \ldots, -1, 1, \underbrace{-1, -1}_{j,\, j+1}, 1, -1, \ldots, 1, -1)^T. \qquad (12)$$
Comparing signs we see that -e_0 + q_0 and -e_2 + q_2 are linearly independent for Char F ≠ 2, whereas for Char F = 2 all lines P_k Q_k, k ∈ {0, 1, . . . , n}, concur at the point
$$C := F(1, 1, \ldots, 1)^T. \qquad (13)$$
Remark 1. Choose k + 1 distinct vertices of P, where 3 ≤ k ≤ n is odd. Up to a change of indices it is enough to consider the k-simplex {P 0 , P 1 , . . . , P k } and its span, say S . The null polarity π induces a null polarity π S in S which assigns to X ∈ S the (k -1)-dimensional subspace π(X) ∩ S . We get within S the settings of Theorem 1 with k rather than n points and π S instead of π. The nested nondegenerate Möbius pair in S is formed by the k-simplex {P 0 , P 1 , . . . , P k } and the k-simplex comprising the points
Q 0,k+1,...,n , Q 1,k+1,...,n , . . . , Q k,k+1,...,n . (14)
This observation illustrates the meaning of all the 2 n points which arise from P and the null polarity π. If we allow k = 1 in the previous discussion then, to within a change of indices, the nested degenerate Möbius pair {P 0 ,
P 1 } = {Q 0,2,...,n , Q 1,2,...,n } is obtained.
Remark 2. The case F = GF(2) deserves particular mention, since we can give an interpretation for all points of PG(n, 2) in terms of our Möbius pair. Recall the following notion from the theory of binary codes: the weight of an element of GF(2)^{n+1} is the number of 1s amongst its coordinates. The 2^n points addressed in Remark 1 are given by the vectors with odd weight. The 2^n vectors with even weight are, apart from the zero vector, precisely those vectors which yield the 2^n - 1 points of the hyperplane π(C): x_0 + x_1 + · · · + x_n = 0. More precisely, the vectors with even weight w ≥ 4 are the centres of perspectivity for the nested non-degenerate Möbius pairs of (w-1)-simplices, whereas the vectors with weight 2 are the points of intersection of the edges of P with π(C). The latter points may be regarded as "centres of perspectivity" for the degenerate Möbius pairs formed by the two vertices of P on such an edge. Each point of the hyperplane π(C) is the centre of perspectivity of precisely one nested Möbius pair.
Commuting and non-commuting elements
Our aim is to translate the properties of Möbius pairs into properties of commuting and non-commuting group elements. We briefly recall some results from [START_REF] Havlicek | Factor-group-generated polar spaces and (multi-) qudits[END_REF]. Let (G, •) be a group and p be a prime. Suppose that the centre Z(G) of G contains the commutator subgroup G′ = [G, G] and the set G^{(p)} of pth powers. Also, let G′ be of order p. Then V := G/Z(G) is a commutative group which, if written additively, is a vector space over GF(p) in a natural way. Furthermore, given any generator g of G′ we have a bijection ψ_g : G′ → GF(p) : g^m → m for all m ∈ {0, 1, . . . , p - 1}. The commutator function in G assigns to each pair (x, y) ∈ G × G the element [x, y] = xyx^{-1}y^{-1}. It gives rise to the non-degenerate alternating bilinear form
$$[\cdot, \cdot]_g : V \times V \to GF(p) : (xZ(G), yZ(G)) \mapsto \psi_g([x, y]). \qquad (15)$$
We assume now that V has finite dimension n + 1, and we consider the projective space PG(n, p) := P(V). Its points are the one-dimensional subspaces of V. In our group theoretic setting a non-zero vector of V is a coset xZ(G) with x ∈ G \ Z(G).
The scalar multiples of xZ(G) are the cosets of the form x k Z(G), k ∈ {0, 1, . . . , p -1}, because multiplying a vector of V by k ∈ GF(p) means taking a kth power in G/Z(G). So x, x ′ ∈ G describe the same point X of PG(n, p) if, and only if, none of them is in the centre of G, and x ′ = x k z for some k ∈ {1, 2, . . . , p -1} and some z ∈ Z(G). Under these circumstances x (and likewise x ′ ) is said to represent the point X. Conversely, the point X is said to correspond to x (and likewise x ′ ). Note that the elements of Z(G) determine the zero vector of V. So they do not represent any point of PG(n, p). The non-degenerate alternating bilinear form from ( 15) determines a null polarity π in PG(n, p). We quote the following result from [6, Theorem 6]: Two elements x, y ∈ G \ Z(G) commute if, and only if, their corresponding points in PG(n, p) are conjugate with respect to π, i. e., one of the points is in the polar hyperplane of the other point. This crucial property is the key for proving Lemma 2 and Theorem 2 below. Lemma 2. Suppose that x 0 , x 1 , . . . , x r ∈ G \ Z(G) is a family of group elements. Then the following assertions are equivalent:
(a) The points corresponding to x 0 , x 1 , . . . , x r constitute an n-simplex of the projective space PG(n, p), whence r = n.
(b) There exists no element in G\Z(G) which commutes with all of x 0 , x 1 , . . . , x r , but for each proper subfamily of x 0 , x 1 , . . . , x r at least one such element exists.
Proof. The points corresponding to the family (x i ) generate PG(n, p) if, and only if, their polar hyperplanes have no point in common. This in turn is equivalent to the non-existence of an element in G \ Z(G) which commutes with all elements of the family (x i ). The proof is now immediate from the following observation: An n-simplex of PG(n, p) can be characterised as being a minimal generating family of PG(n, p).
This result shows that the dimension n + 1 of V can be easily determined by counting the cardinality of a family of group elements which satisfies condition (b). We close this section by translating Theorem 1: Theorem 2. Let G be a group which satisfies the assumptions stated in the first paragraph of this section. Also, let V = G/Z(G) be an (n + 1)-dimensional vector space over GF(p). Suppose that x 0 , x 1 , . . . , x n ∈ G \ Z(G) and y 0 , y 1 , . . . , y n ∈ G \ Z(G) are two families of group elements which represent a non-degenerate Möbius pair of PG(n, p) as in Theorem 1. Then the following assertions hold:
(a) There exists no element in G \ Z(G) which commutes with x 0 , x 1 , . . . , x n .
(b) The elements x 0 , x 1 , . . . , x n are mutually non-commuting.
(c) For each i ∈ {0, 1, . . . , n} the element x_i commutes with all y_j such that j ≠ i.
Each of these three assertions remains true when changing the role of the elements x 0 , x 1 , . . . , x n and y 0 , y 1 , . . . , y n .
Proof. The assertion in (a) follows from Lemma 2. Since all non-diagonal entries of the matrix A from (2) equal ±1, no two points which are represented by the elements x_i are conjugate with respect to π. Hence (b) is satisfied. Finally, (c) follows, as the polar hyperplane of the point represented by x_i contains all the points which are represented by the elements y_j with j ≠ i. The last statement holds due to the symmetric role of the two simplices of a Möbius pair which was established in the proof of Theorem 1 (a).
Remark 3. According to Remark 1 we may obtain nested non-degenerate Möbius pairs from appropriate subfamilies of x_0, x_1, . . . , x_n. These Möbius pairs satisfy, mutatis mutandis, properties (b) and (c).
Example 1. We consider the complex matrices
$$\sigma_0 := \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \sigma_x := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y := \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (16)$$
The matrices i^α σ_β with α ∈ {0, 1, 2, 3} and β ∈ {0, x, y, z} constitute the Pauli group P of order 16. The centre of P is Z(P) = { i^α σ_0 | α ∈ {0, 1, 2, 3} }. The commutator subgroup P′ = {±σ_0} and the set P^{(2)} = {±σ_0} of squares are contained in Z(P). By Section 3, the factor group P/Z(P), if written additively, is a vector space over GF(2). For each β ∈ {0, x, y, z} the coset Z(P)σ_β is denoted by β. In this notation, addition can be carried out according to the relations 0 + β = β, β + β = 0, and x + y + z = 0. The mapping
$$0 \mapsto (0, 0)^T, \quad x \mapsto (1, 0)^T, \quad y \mapsto (0, 1)^T, \quad z \mapsto (1, 1)^T \qquad (17)$$
is an isomorphism of P/Z(P) onto the additive group of the vector space GF(2) × GF(2).
Let G be the group of order 256 comprising the three-fold Kronecker products i^α σ_β ⊗ σ_γ ⊗ σ_δ with α ∈ {0, 1, 2, 3} and β, γ, δ ∈ {0, x, y, z}. This group acts on the eight-dimensional Hilbert space of three qubits. In our terminology from Section 3 (with p := 2) we have
$$Z(G) = \{\, i^\alpha\, \sigma_0 \otimes \sigma_0 \otimes \sigma_0 \mid \alpha \in \{0, 1, 2, 3\} \,\}, \quad G' = G^{(2)} = \{\pm\, \sigma_0 \otimes \sigma_0 \otimes \sigma_0\}, \qquad (18)$$
and g = -σ_0 ⊗ σ_0 ⊗ σ_0. Hence V = G/Z(G) is a six-dimensional vector space over GF(2) endowed with an alternating bilinear form [•, •]_g. We introduce βγδ as a shorthand for Z(G)(σ_β ⊗ σ_γ ⊗ σ_δ), where β, γ, δ ∈ {0, x, y, z}. In this notation, addition in V can be carried out componentwise according to the relations stated before. An isomorphism of V onto GF(2)^6 is obtained by replacing the three symbols of an element of V according to (17). This gives the coordinate vector of an element of V. For example, the coordinate vectors of the six elements
$$x00,\ y00,\ 0x0,\ 0y0,\ 00x,\ 00y \in V \qquad (19)$$
comprise the standard basis of GF(2)^6. These six elements therefore form a basis of V. The projective space PG(5, 2) = P(V), like any projective space over GF(2), has the particular property that each of its points is represented by one and only one non-zero vector of V. We therefore identify V \ {000} with PG(5, 2). Recall the matrices defined in (1). The matrix of the alternating bilinear form from (15) with respect to the basis (19) equals the 6 × 6 matrix diag(K, K, K) over GF(2). In order to obtain a Möbius pair
P = {P 0 , P 1 , . . . , P 5 }, Q = {Q 0 , Q 1 , . . . , Q 5 } (20)
we have to use another basis of V, e. g., the one which arises in terms of coordinates from the six columns of the matrix
$$T := \begin{pmatrix} I & J & J \\ 0 & I & J \\ 0 & 0 & I \end{pmatrix}. \qquad (21)$$
Indeed, T^T · diag(K, K, K) · T gives an alternating 6 × 6 matrix A as in (2). We thus can translate our results from Section 2 as follows: first, we multiply T with the "old" coordinate vectors from there and, second, we express these "new" coordinate vectors as triplets in terms of 0, x, y, z. The vertices P_0, P_1, . . . , P_5 and Q_0, Q_1, . . . , Q_5 can be read off, respectively, from the first and second row of the following matrix:
$$\begin{matrix} x00 & y00 & zx0 & zy0 & zzx & zzy \\ yzz & xzz & 0yz & 0xz & 00y & 00x \end{matrix} \qquad (22)$$
We note that P and Q are in perspective from a point according to Theorem 1. This point is zzz. Since each line of PG(5, 2) has only three points, the entries of the second row in (22) can be found by adding zzz to the entries from the first row. We leave it to the reader to find the $\binom{6}{4} = 15$ nested Möbius pairs of tetrahedra which are formed by four points from P and the four appropriate points from (23). By Remark 2, the 32 points from (22) and (23) are precisely the points off the polar hyperplane of zzz. This means that none of the corresponding elements of G commutes with the representatives of the distinguished point zzz.
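Since everything here is finite, Theorem 2 and the perspectivity from zzz can be verified directly on the 8 × 8 matrices. The following sketch (an illustration added here, not taken from the original text) builds the operators corresponding to the two rows of (22) with numpy and checks assertions (a)–(c) of Theorem 2, as well as the fact that each product of corresponding operators is proportional to the operator belonging to zzz.

```python
import numpy as np
from itertools import combinations, product

sigma = {'0': np.eye(2, dtype=complex),
         'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def op(word):
    """Three-fold Kronecker product, e.g. op('zx0') = sigma_z (x) sigma_x (x) sigma_0."""
    m = np.eye(1, dtype=complex)
    for ch in word:
        m = np.kron(m, sigma[ch])
    return m

def commute(a, b):
    return np.allclose(op(a) @ op(b), op(b) @ op(a))

P = ['x00', 'y00', 'zx0', 'zy0', 'zzx', 'zzy']   # first row of (22)
Q = ['yzz', 'xzz', '0yz', '0xz', '00y', '00x']   # second row of (22)

# (a): no element outside the centre commutes with all of P (resp. all of Q)
for fam in (P, Q):
    for w in (''.join(t) for t in product('0xyz', repeat=3)):
        if w != '000':
            assert not all(commute(w, a) for a in fam)

# (b): the elements of each family are mutually non-commuting
assert all(not commute(a, b) for a, b in combinations(P, 2))
assert all(not commute(a, b) for a, b in combinations(Q, 2))

# (c): P_i commutes with Q_j exactly when i differs from j
for i, a in enumerate(P):
    for j, b in enumerate(Q):
        assert commute(a, b) == (i != j)

# perspectivity from zzz: each product op(P_i) op(Q_i) is a multiple of op('zzz')
C = op('zzz')
for a, b in zip(P, Q):
    prod_ab = op(a) @ op(b)
    lam = prod_ab[0, 0] / C[0, 0]
    assert np.allclose(prod_ab, lam * C)
print("Theorem 2 (a)-(c) and the centre of perspectivity zzz verified numerically.")
```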
The results from [START_REF] Havlicek | Factor-group-generated polar spaces and (multi-) qudits[END_REF] show that Theorem 2 can be applied to a wide class of groups, including the generalised Pauli groups acting on the space of N-qudits provided that d is a prime number.
Conclusion
Following the strategy set up in our recent paper [START_REF] Havlicek | Factor-group-generated polar spaces and (multi-) qudits[END_REF], we have got a deeper insight into the geometrical nature of a large class of finite groups, including many associated with finite Hilbert spaces. This was made possible by employing the notion of a Möbius pair of n-simplices in a finite odd-dimensional projective space, PG(n, p), p being a prime. Restricting to non-degenerate Möbius pairs linked by a null polarity, we have first shown their existence for any odd n, the remarkable nested structure they form, and the perspectivity from a point of the simplices in any such pair if p = 2. Then, the commutation properties of the group elements associated with a Möbius pair have been derived. In particular, the two disjoint families of n + 1 group elements that correspond to a Möbius pair are such that any two distinct elements/operators from the same family do not commute and each element from one family commutes with all but one of the elements from the other family. As the theory also encompasses a number of finite generalised Pauli groups, that associated with three-qubits (n = 5 and p = 2) was taken as an illustrative example, also because of the envisaged relevance of Möbius pairs to entanglement properties of a system of three fermions with six single-particle states [START_REF] Lévay | Three fermions with six single-particle states can be entangled in two inequivalent ways[END_REF]. It should, however, be stressed that the above-outlined theory is based on a particular construction of Möbius pairs, and so there remains an interesting challenge to see in which way it can be generalised to incorporate arbitrary Möbius pairs.
Acknowledgements
This work was carried out in part within the "Slovak-Austrian Science and Technology Cooperation Agreement" under grants SK 07-2009 (Austrian side) and SK-AT-0001-08 (Slovak side), being also partially supported by the VEGA grant agency projects Nos. 2/0092/09 and 2/7012/27. The final version was completed within the framework of the Cooperation Group "Finite Projective Ring Geometries: An Intriguing Emerging Link Between Quantum Information Theory, Black-Hole Physics, and Chemistry of Coupling" at the Center for Interdisciplinary Research (ZiF), University of Bielefeld, Germany. |
04110552 | en | [
"sdv.bbm"
] | 2024/03/04 16:41:24 | 2022 | https://pasteur.hal.science/pasteur-04110552/file/1.FINAL_metallomics_editing_final.pdf | Frédéric Barras
email: [email protected]
Marine Lénon #
Rodrigo Arias-Cartín #
Frédéric Barras
The Fe-S proteome of Escherichia coli : prediction, function, and fate
Iron-sulfur (Fe-S) clusters are ubiquitous and ancient inorganic cofactors. Fe-S bound proteins contribute to most cellular processes, including DNA replication and integrity, genetic expression and regulation, metabolism, biosynthesis and most bioenergetics systems. Also, Fe-S proteins hold a great biotechnological potential in metabolite and chemical production, including antibiotics. From classic biophysics and spectroscopy methodologies to recent developments in bioinformatics, including structural modeling and chemoproteomics, our capacity to predict and identify Fe-S proteins has spectacularly increased over the recent years.
Here, these developments are presented and collectively used to update the composition of the Escherichia coli Fe-S proteome, for which we predict 181 occurrences, i.e. 40 more candidates than in our last catalog (Py and Barras, 2010), equivalent to 4% of the total proteome. Besides, Fe-S clusters can be targeted by redox active compounds or reactive oxygen and nitrosative species, and can even be destabilized by contaminant metals. Accordingly, we discuss how cells handle damaged Fe-S proteins, i.e. their degradation, recycling or repair.
Introduction
Fe-S clusters are assemblies of iron and sulfur atoms and stand among the most frequently used protein cofactors in biology [START_REF] Beinert | Iron-sulfur proteins: ancient structures, still full of surprises[END_REF]. Fe-S clusters arise in various forms, wherein 2, 3 or 4 iron atoms are linked to sulfide ions, yielding to the typical Fe2S2 (rhombic), Fe3S4 (cuboidal) or Fe4S4 (cubane) clusters and some atypical types such as Fe4S3, Fe4S5, Fe8S8 or Fe8S9 clusters [START_REF] Zhou | Structural Evidence for a [4Fe-5S] Intermediate in the Non-Redox Desulfuration of Thiouracil[END_REF][START_REF] Jenner | An unexpected P-cluster like intermediate en route to the nitrogenase FeMo-co[END_REF][START_REF] Jeoung | Double-Cubane [8Fe9S] Clusters: A Novel Nitrogenase-Related Cofactor in Biology[END_REF][START_REF] Jeoung | ATP-dependent substrate reduction at an [Fe8S9] double-cubane cluster[END_REF][START_REF] Fritsch | The crystal structure of an oxygen-tolerant hydrogenase uncovers a novel iron-sulphur centre[END_REF]. Fe-S clusters were discovered as devices implicated in electron transfer in enzymes participating in photosynthesis and respiration, but we now know that they contribute in indispensable functions to nearly all cellular processes (see below). Fe-S proteins are present in both prokaryotes and eukaryotes where their pleiotropic role extends their influence onto general traits such as pathogenicity, CRISPR immunity systems and antibiotic resistance in bacteria, aging, cancer or ataxia in humans, plant growth and even replication of coronaviruses such as the SARS-CoV-2, the causal agent of the CoVID-19 pandemic [START_REF] Ezraty | Fe-S cluster biosynthesis controls uptake of aminoglycosides in a ROS-less death pathway[END_REF][START_REF] Lemak | Toroidal Structure and DNA Cleavage by the CRISPR-Associated [4Fe-4S] Cluster Containing Cas4 Nuclease SSO0001 from Sulfolobus solfataricus[END_REF][START_REF] Rouault | Biogenesis of iron-sulfur clusters in mammalian cells: new insights and relevance to human disease[END_REF][START_REF] Maio | Fe-S cofactors in the SARS-CoV-2 RNA-dependent RNA polymerase are potential antiviral targets[END_REF].
Fe-S clusters undergo one-electron redox processes and can exhibit various redox states. This makes them ideal catalysts for intra- and inter-molecular electron transfer processes. Their redox potential values cover a broad range (reaching values as low as -600 mV vs. S.H.E.), depending upon the chemical nature of their coordination environment and the electronic properties of their immediate surrounding within the polypeptide. Fe-S clusters with low redox potentials can be used as catalysts for thermodynamically unfavorable reactions. This is best illustrated by radical-S-adenosylmethionine (SAM) enzymes, which use a Fe4S4 cluster to inject one electron at a low potential, generating reactive free radicals exploited in a myriad of biosynthetic and metabolic reactions [START_REF] Cheek | Adenosylmethionine-dependent iron-sulfur enzymes: versatile clusters in a radical new role[END_REF][START_REF] Jarrett | The generation of 5′-deoxyadenosyl radicals by adenosylmethionine-dependent radical enzymes[END_REF][START_REF] Choudens | Reductive Cleavage of S-Adenosylmethionine by Biotin Synthase from Escherichia coli*[END_REF][START_REF] Padovani | Activation of Class III Ribonucleotide Reductase from E. coli. The Electron Transfer from the Iron-Sulfur Center to S-Adenosylmethionine[END_REF][START_REF] Pelosi | Ubiquinone Biosynthesis over the Entire O2 Range: Characterization of a Conserved O2-Independent Pathway[END_REF]. Fe-S clusters can also allow access of small compounds to ferric ions with strong Lewis acidity properties. Such clusters can be used in non-redox catalysis, as in dehydratases [START_REF] Beinert | Aconitase as Iron-Sulfur Protein, Enzyme, and Iron-Regulatory Protein[END_REF]. Last, reversible interconversions between cluster forms, with different redox states or different nuclearity, are used in Fe-S bound transcriptional regulators that function as sensors of oxygen, superoxide or nitric oxide, or even in proteins controlling DNA integrity [START_REF] Singh | Mycobacterium tuberculosis WhiB3 responds to O2 and nitric oxide via its [4Fe-4S] cluster and is essential for nutrient starvation survival[END_REF][START_REF] Gaskell | RsmA Is an Anti-sigma Factor That Modulates Its Activity through a [2Fe-2S] Cluster Cofactor *[END_REF][START_REF] Kiley | The role of Fe-S proteins in sensing and regulation in bacteria[END_REF][START_REF] Demple | 35] Escherichia coli SoxR protein: Sensor/transducer of oxidative stress and nitric oxide[END_REF][START_REF] Bouton | Nitrosative and oxidative modulation of iron regulatory proteins[END_REF][START_REF] Grodick | DNA-Mediated Signaling by Proteins with 4Fe-4S Clusters Is Necessary for Genomic Integrity[END_REF].
As a general trend, Fe-S cluster ligation sites are composed of cysteine residues whose thiol side chains provide bonding to Fe atoms or form sulfide bridges. Hence, a four Cys-containing motif, although admitting uncertainties in the spacing between individual Cys residues, is often seen as a predictor of an Fe-S cluster binding site. However, this can be misleading for two reasons. First, these sites may bind other metals, such as zinc. Second, Fe-S cluster coordination by oxygen-based (aspartate, tyrosine or glutamate) and nitrogen-based (histidine or arginine) residues has also been described [START_REF] Bak | Alternative FeS cluster ligands: tuning redox potentials and chemistry[END_REF]. Hence, while the richness and diversity of cluster environments is an asset for biology, as it allows versatility, it prevents a simple and straightforward Fe-S cluster signature from being derived from primary sequence analysis. In this article, we review the latest advances in the search, identification, and assessment tools applied to Fe-S proteins, with an emphasis on the Escherichia coli Fe-S proteome, for which we present an updated version of the catalog of Fe-S proteins. We also address the question of the relationship between O2 and Fe-S-based biology and provide original data that led us to envision different fates for damaged Fe-S proteins.
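To make the point about sequence signatures concrete, the kind of naive four-cysteine scan alluded to above can be sketched in a few lines; the spacer lengths used here are arbitrary illustrative choices, not a validated motif, and, as stressed above, such a pattern both over- and under-predicts genuine Fe-S sites.

```python
import re

# illustrative spacing only: C-x(2,5)-C-x(2,10)-C-x(10,60)-C
CYS_MOTIF = re.compile(r"C.{2,5}C.{2,10}C.{10,60}C")

def naive_fes_candidate(sequence: str) -> bool:
    """Flag a sequence that carries four cysteines with 'plausible' spacing."""
    return bool(CYS_MOTIF.search(sequence.upper()))

# toy fragment with four suitably spaced cysteines -> flagged as a candidate
print(naive_fes_candidate("MKAICGDCPLACKLHGDCAIVPLTAGKVRC" + "A" * 20 + "QK"))
```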
Experimental assessment of the presence of an Fe-S cluster in a protein
Experimental methods are required to assess whether a given protein hosts an Fe-S cluster or not (for review see [START_REF] Santos | Fe-S Proteins: Methods and Protocols[END_REF][START_REF] Ollagnier De Choudens | Chapter One -Genetic, Biochemical, and Biophysical Methods for Studying FeS Proteins and Their Assembly[END_REF]). An arsenal of in vitro strategies is available, but they constitute a long and tedious path before a conclusion can be reached. In most cases, the biochemistry requires manipulation in the absence of O2 inside glove boxes and high amounts of either the as-isolated holo-form protein or an apo-form in which an Fe-S cluster is chemically or enzymatically reconstituted. Once the purified holo-protein is obtained, methods for quantifying iron and sulfur are the first steps to suggest the presence of a cluster. Next, complementary biophysical analyses, such as spectroscopy (UV-visible, Electron Paramagnetic Resonance -EPR, Resonance Raman, Mössbauer, etc.), mass spectrometry, and crystallography must be used to establish the presence of a cluster and to characterize its properties and chemical environment, in particular the nature of its ligands [START_REF] Ollagnier De Choudens | Chapter One -Genetic, Biochemical, and Biophysical Methods for Studying FeS Proteins and Their Assembly[END_REF]. Importantly, those spectroscopic techniques usually require biochemical and/or genetic manipulations and are complemented by functional studies. For in-depth study of Fe-S clusters, EPR and Mössbauer spectroscopy are widely used since they provide a solid characterization of the Fe-S cluster type, its environment, and redox state. Mössbauer spectroscopy is the more suited method since it allows characterization of iron complexes in any oxidation or spin state.
However, this type of spectroscopy requires 57 Fe isotope incorporation into the sample, demanding careful manipulation during cell culture or protein reconstitution. EPR studies can bypass this step using the natural Fe isotope, however it cannot detect silent Fe-S clusters (Spin = 0). For organisms amenable to genetic manipulation, use of strains lacking specific factors required for building and inserting clusters into proteins can be used to predict whether a given protein would hold a cluster and to assess its functional importance for activity in vivo [START_REF] Ollagnier De Choudens | Chapter One -Genetic, Biochemical, and Biophysical Methods for Studying FeS Proteins and Their Assembly[END_REF].
Bioinformatic prediction of Fe-S cluster containing proteins
In silico identification of Fe-S binding sites from primary sequences or structures of proteins remains challenging. Yet, several bioinformatic analyses [START_REF] Estellon | An integrative computational model for large-scale identification of metalloproteins in microbial genomes: a focus on iron-sulfur cluster proteins[END_REF][START_REF] Valasatava | MetalPredator: a web server to predict iron-sulfur cluster binding proteomes[END_REF][START_REF] Wehrspan | Identification of Iron-Sulfur (Fe-S) Cluster and Zinc (Zn) Binding Sites Within Proteomes Predicted by DeepMind's AlphaFold2 Program Dramatically Expands the Metalloproteome[END_REF] have proven decisive in allowing identification of new Fe-S binding proteins further demonstrated by experimental methods, giving significant insights into the detection of metabolic pathways on poorly characterized species [START_REF] Johnson | Pathways of Iron and Sulfur Acquisition, Cofactor Assembly, Destination, and Storage in Diverse Archaeal Methanogens and Alkanotrophs[END_REF][START_REF] Payne | Examining Pathways of Iron and Sulfur Acquisition, Trafficking, Deployment, and Storage in Mineral-Grown Methanogen Cells[END_REF][START_REF] Feller | Substrate Inhibition of 5β-Δ4-3-Ketosteroid Dehydrogenase in Sphingobium sp. Strain Chol11 Acts as Circuit Breaker During Growth With Toxic Bile Salts[END_REF][START_REF] Li | Integrated Metabolomics and Targeted Gene Transcription Analysis Reveal Global Bacterial Antimonite Resistance Mechanisms[END_REF] .
Estellon et al. developed a sophisticated machine learning tool to design and assess a penalized linear model to predict Fe-S proteins using primary amino acid sequences [START_REF] Estellon | An integrative computational model for large-scale identification of metalloproteins in microbial genomes: a focus on iron-sulfur cluster proteins[END_REF]. Their approach was based on an ensemble of descriptors from a tailored non-redundant database of Fe-S-specific HMM (Hidden Markov Model) profiles and a curated selection of Fe-S coordinating domains and signatures, which were used to guide their machine learning algorithm on a large training dataset of protein sequences (PDB70, a data set of sequence alignments with identity <70%, using PDB structures as query and excluding E. coli K12 sequences). This Fe-S predictive model of 67 descriptors reached a precision of 87.9% and a recall (or sensitivity) of 80.1% on the E. coli K12 proteome, values that outperform other motif-based analyses (Prosite, Pfam, InterPRO, etc.). Despite the inability of their model to detect 27 known Fe-S proteins (at the time of publication), the capacity of their software was demonstrated by the prediction and experimental validation of YhcC and YdiJ. Furthermore, the model was tested on 556 proteomes from bacterial and archaeal species and predicted an average content of Fe-S proteins at 2.37 ±1.31% of those prokaryotic total proteomes. A related method was created by Valasatava et al. [START_REF] Valasatava | MetalPredator: a web server to predict iron-sulfur cluster binding proteomes[END_REF] (available in the MetalPredator webserver). In this case, the search for metal-binding motifs by HMM-profiles combines domain-based predictions [START_REF] Andreini | A Simple Protocol for the Comparative Analysis of the Structure and Occurrence of Biochemical Pathways Across Superkingdoms[END_REF] and the local nature of Minimal Function Sites [START_REF] Andreini | MetalPDB: a database of metal sites in biological macromolecular structures[END_REF].
This tool has a similar precision (85.2%) and slightly higher recall (86.5%) when compared to the results of the IronSulfurProteHome [START_REF] Estellon | An integrative computational model for large-scale identification of metalloproteins in microbial genomes: a focus on iron-sulfur cluster proteins[END_REF] on the E. coli proteome. Validation of their work was based on 3D homology on seven of their predictions.
The search for Fe-S binding sites using 3D protein structures has been limited by the number of structures or models available. However, a very recent study from Wehrspan et al. [START_REF] Wehrspan | Identification of Iron-Sulfur (Fe-S) Cluster and Zinc (Zn) Binding Sites Within Proteomes Predicted by DeepMind's AlphaFold2 Program Dramatically Expands the Metalloproteome[END_REF] elaborated a new ligand-search algorithm to identify Fe-S cluster or Zn binding sites exploiting the novel AlphaFold2 structure database [START_REF] Jumper | Highly accurate protein structure prediction with AlphaFold[END_REF][START_REF] Bryant | Improved Prediction of Protein-Protein Interactions Using AlphaFold2[END_REF]. Their method is based, first, on the compilation of a repertoire of all sidechain or backbone atoms that could potentially coordinate Fe-S cofactors or Zn ions in the AlphaFold2 database; these atoms are then grouped into potential binding regions using a standard single-linkage clustering algorithm (within 8 Å), and each ligand type (Fe2S2, Fe3S4 or Fe4S4) is superimposed onto the coordinating atoms in all possible combinations. Next, those solutions are evaluated using the root-mean-squared deviation (RMSD) of the ligands and checked for steric clashes. Results with poor RMSD scores or steric clashes are removed, the remaining combinations are filtered, the one with the lowest RMSD is retained, and the ligand is placed into the structure. This study identified thousands of potential Fe-S clusters in the proteomes of the 21 organisms listed in the AlphaFold2 database. When this analysis was applied to the E. coli proteome, it obtained a recall rate of 74%, which is lower than the sensitivity of Valasatava et al. (86.5%) or Estellon et al. (80.1%); however, it found a good overlap with the Fe-S proteins predicted, and the false positives flagged, by those two previous studies. Moreover, this tool was able to calculate whether some proteins are more likely to accommodate Fe2S2, Fe3S4 or Fe4S4 clusters. Interestingly, the 3D scan made by this work predicted that YjiM, YcbX, CyuA (YhaM) and PreT (YeiT) are indeed Fe-S containing proteins, which had been classified as false positives by Estellon et al. Likewise, it agreed with Valasatava et al. that seven of their false positives could coordinate Fe-S clusters.
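As a rough illustration of the grouping step only (the template superposition and RMSD filtering are not reproduced here), a single-linkage clustering of candidate coordinating atoms within 8 Å can be sketched as follows; the coordinates below are invented toy values, not taken from AlphaFold2 models or from the published pipeline.

```python
import numpy as np

def single_linkage_clusters(coords, cutoff=8.0):
    """Group points (N x 3 array) into clusters; two points are linked if closer than cutoff."""
    n = len(coords)
    unvisited = set(range(n))
    clusters = []
    while unvisited:
        stack = [unvisited.pop()]
        cluster = []
        while stack:
            i = stack.pop()
            cluster.append(i)
            near = [j for j in list(unvisited)
                    if np.linalg.norm(coords[i] - coords[j]) < cutoff]
            for j in near:
                unvisited.remove(j)
                stack.append(j)
        clusters.append(sorted(cluster))
    return clusters

# toy candidate sulfur/nitrogen positions (Å); a real run would take them from a model
atoms = np.array([[0.0, 0.0, 0.0], [3.5, 1.0, 0.2], [2.0, 4.0, 1.0], [25.0, 0.0, 0.0]])
print(single_linkage_clusters(atoms))   # two groups: {0, 1, 2} and {3} at the 8 Å cutoff
```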
Interestingly, none of these computational tools predicted MnmA as a Fe-S binding protein, an issue that remains controversial [START_REF] Jumper | Highly accurate protein structure prediction with AlphaFold[END_REF][START_REF] Bryant | Improved Prediction of Protein-Protein Interactions Using AlphaFold2[END_REF]. Noteworthy, no bias or specific enrichment in a functional class of Fe-S protein or type of Fe-S binding motif, was found to be associated with one particular algorithm. It is clear that precision and sensitivity of any of those in silico approaches are still limited, due to the strength of their descriptors and conditioned to the curation and size of the datasets used for training and testing of the algorithms. Thus, those predictive methods will only improve as long as the factors involved in the Fe-S cluster binding and assembly -such as plasticity, abundance and nature-are elucidated by experimental and functional studies.
The E. coli Fe-S proteome: an update
With the specific aim of establishing the E. coli Fe-S proteome, we used two validated bioinformatic tools to predict a list of Fe-S bound candidate proteins: the MetalPredator and IronSulfurProteHome websites [START_REF] Estellon | An integrative computational model for large-scale identification of metalloproteins in microbial genomes: a focus on iron-sulfur cluster proteins[END_REF][START_REF] Valasatava | MetalPredator: a web server to predict iron-sulfur cluster binding proteomes[END_REF]. For E. coli, those tools predicted 141 iron-sulfur cluster containing proteins, and we added 21 proteins for which evidence of Fe-S binding was recently published. Furthermore, 19 additional proteins were predicted by 3D modelling using the
Alphafold database [START_REF] Wehrspan | Identification of Iron-Sulfur (Fe-S) Cluster and Zinc (Zn) Binding Sites Within Proteomes Predicted by DeepMind's AlphaFold2 Program Dramatically Expands the Metalloproteome[END_REF]. This yielded 181 entries, represented in Figures 1 and 2. For each protein, the type of evidence, either computational (Figure 1A) or biophysical (Figure 1B), supporting the notion that it is an Fe-S bound protein is given in Supp Table 1. Roughly half of the 181 E. coli Fe-S proteins have been validated as such by biophysical methods (Figure 1B).
E. coli makes use of Fe-S proteins in most, if not all, processes such as DNA replication, repair and transcription, RNA modification and translation, amino acid, vitamin and metabolite biosynthesis, and bioenergetics (Figure 2). Respiration is the most represented cellular process, as E. coli allocates 28% of its Fe-S proteome to respiratory pathways (Figure 2). Biosynthesis of metabolites and cofactors, including Fe-S clusters themselves, constitutes the second most populated group. As expected, Fe-S bound proteins allow stress sensing and adaptation. Interestingly, a few Fe-S proteins are predicted to be involved in transport. Notably, there are 15 Fe-S proteins for which no functional prediction, whether from homology searches or genetic screens, was found in the literature. Attribution of function for these proteins might unearth activities of interest.
Fe-S & Oxygen
Besides their intrinsic redox chemical properties, the reason why Fe-S clusters were retained in such a wide range of proteins might be related to the geological conditions prevailing at the onset of life, namely abundance of Fe and S and anoxic atmosphere. The Earth's Great Oxygenation Event, probably due to the activity of Fe-S containing photosynthesis apparatus, caused shortage in bioavailable iron as it precipitated from the soluble ferrous form into insoluble ferric form. Besides, Fe 2+ can act as a catalyst for production of reactive oxygen species (ROS), via Fenton reaction, which destabilize Fe-S cluster. Therefore, the longadmitted view is that sophisticated Fe-S cluster biogenesis machineries emerged to build and insert Fe-S clusters into proteins [START_REF] Imlay | Iron-sulphur clusters and the problem with oxygen[END_REF][START_REF] Garcia | The SUF system: an ABC ATPase-dependent protein complex with a role in Fe-S cluster biogenesis[END_REF][START_REF] Boyd | Interplay between oxygen and Fe-S cluster biogenesis: insights from the Suf pathway[END_REF]. Indeed, Fe-S cluster-based biology and aerobic life might be seen as mutually exclusive, unless a great investment was put to control Fe-S biogenesis and its use in the cell. Support for this view was put forward by Andreini and collaborators in a bioinformatic analysis of Fe-S proteins predicted content of prokaryotic genomes [START_REF] Andreini | The Relationship between Environmental Dioxygen and Iron-Sulfur Proteins Explored at the Genome Level[END_REF]. A set of 434 prokaryotes genomes, including 18 genomes from obligate aerobes, 29 from obligate anaerobes, 214 from aerobes, 130 from aerotolerant anaerobes and 43 from facultative anaerobes was analyzed for occurrence of Fe-S proteins as predicted by bioinformatic protocols. Remarkably, there are more predicted Fe-S proteins per genome from anaerobes than aerobes, even when corrected with genome size. In aerobes the number of predicted Fe-S proteins correspond to less than 3% of the total predicted proteome whereas in genomes of anaerobes (obligate, facultative, aerotolerant) predicted Fe-S proteins amount to more than 3% of the total predicted proteins. Accordingly, we observe that E. coli Fe-S proteome is 4% and some recent in silico studies in archaea suggest that in methanogens and alkanotrophs 5-10 % of the proteome corresponds to Fe-S proteins [START_REF] Johnson | Pathways of Iron and Sulfur Acquisition, Cofactor Assembly, Destination, and Storage in Diverse Archaeal Methanogens and Alkanotrophs[END_REF]. Both aerobes and anaerobes exhibit shared family of Fe-S proteins involved in energy production and conversion, amino acid metabolism, nucleotide and coenzyme metabolism, and Fe-S biogenesis.
Interestingly, the number of paralogs involved in energy production and conversion is much higher in anaerobes than in aerobes, presumably reflecting the multiplicity of potential electron-acceptor chemicals recruited for anaerobic respiratory chains, instead of the unique O2 in aerobic ones. Another important observation was that the number of proteins using Fe2S2 clusters, which are more resistant to O2 and less demanding in iron, appeared to increase in aerobes vs anaerobes. Hence, this study stands as a validation of the expected negative interaction between O2 and Fe-S-based biology. It also provides a rationale for why organisms developed strategies to mitigate the deleterious consequences of the enhanced O2 level in the atmosphere after the emergence of photosynthesis, either by reducing the use of Fe-S cluster containing proteins, or by evolving dedicated assisting biogenesis machineries.
What to do with damaged Fe-S proteins: repair, recycle or degrade them?
The capacity of Fe-S clusters to detect O2, ROS or reactive nitrogen species (RNS) has its drawbacks, as Fe-S clusters can be altered or degraded by such compounds, which might lead to inactivation and destabilization of the hosting polypeptide. One question is whether all Fe-S proteins are similarly destabilized in the face of ROS or RNS. The answer is evidently negative, since alteration of an Fe-S cluster will depend upon its location within the structure of the hosting polypeptide. Well-buried, solvent-inaccessible clusters are likely to be more stable. On the contrary, well-exposed clusters might be more prone to targeting by toxic compounds and ensuing alteration.
This was well studied with members of the dehydratase family, such as aconitase or fumarase.
The nature of the ligands holding the cluster can also be a determining factor for its stability.
The case of fumarase has been investigated in detail [START_REF] Flint | The inactivation of Fe-S cluster containing hydrolyases by superoxide[END_REF]. Fumarases contain an Fe4S4 cluster bound via a CXnCXXC motif. Like all dehydratases, the cluster in the E. coli fumarase exhibits a labile, exposed, catalytically active Fe atom, which is freed upon oxidation of the cluster, yielding an inactive Fe3S4-bound enzyme [44]. In contrast, the Bacteroides thetaiotaomicron fumarase harbors a CXnC motif and cannot be reactivated once oxidized by H2O2. This is due to the generation of radical species that carbonylate the peptide chain [START_REF] Lu | A conserved motif liganding the [4Fe-4S] cluster in [4Fe-4S] fumarases prevents irreversible inactivation of the enzyme during hydrogen peroxide stress[END_REF]. Thus, the composition of the ligating motif itself could be essential for the resistance of Fe-S proteins to oxidative environments. Besides, the number of ligand cysteines could also be important for tolerance to oxygen attack, as reported in the case of the six-cysteine-liganded Fe4S3 cluster of hydrogenase from Ralstonia eutropha [START_REF] Fritsch | The crystal structure of an oxygen-tolerant hydrogenase uncovers a novel iron-sulphur centre[END_REF].
Another example of an Fe-S cluster targeted by environmental redox conditions is given by the transcriptional regulator Fnr, which controls the anaerobiosis/aerobiosis switch [START_REF] Crack | Influence of the Environment on the [4Fe-4S]2+ to[END_REF][START_REF] Zhang | Reversible cycling between cysteine persulfideligated [2Fe-2S] and cysteine-ligated [4Fe-4S] clusters in the FNR regulatory protein[END_REF][START_REF] Crack | Mass spectrometric identification of intermediates in the O2-driven [4Fe-4S] to [2Fe-2S] cluster conversion in FNR[END_REF]. The integrity of the Fe4S4 cluster of Fnr controls the monomer-dimer equilibrium and eventually the expression of hundreds of genes. Briefly, its Fe4S4-bound homodimeric form binds to operator regions of genes, repressing those involved in aerobic metabolism and activating those required for anaerobic metabolism. Exposure to O2 leads to the oxidation of the Fe4S4 cluster and its subsequent conversion to Fe3S4 and Fe2S2 forms, causing the dissociation of the dimer into inactive apo-monomers.
Another question concerns the fate of the "damaged" (or oxidized) Fe-S proteins. Once again, studies with Fnr as a model provided important insights. Indeed, it was found that under aerobic conditions apo-Fnr was degraded by the chaperone/protease ClpXP [START_REF] Crack | Influence of the Environment on the [4Fe-4S]2+ to[END_REF][START_REF] Mettert | The impact of O2 on Fe-S cluster biogenesis requirements of Escherichia coli FNR[END_REF][START_REF] Mettert | ClpXP-dependent Proteolysis of FNR upon Loss of its O2-sensing [4Fe-4S] Cluster[END_REF], but it was also shown that reconversion of apo-Fnr to Fe4S4-Fnr is possible upon switch to anaerobic growth [START_REF] Mettert | ClpXP-dependent Proteolysis of FNR upon Loss of its O2-sensing [4Fe-4S] Cluster[END_REF]. This perfectly illustrates how cells handle "damaged" Fe-S proteins, either degrading them or reactivating them. This last possibility suggests that apo-forms of some proteins are stable long enough to engage in a new cycle of maturation via their recognition by the ISC or SUF machineries. This view is consistent with an early study aiming at investigating whether SUF was a repairing system [START_REF] Djaman | Repair of Oxidized Iron-Sulfur Clusters in Escherichia coli[END_REF]. Interestingly, besides Fnr, other Fe-S proteins (AcnB, IscU, IscR, LipA and MoaA) have been identified as substrates of ClpXP in E. coli [START_REF] Flynn | Proteomic Discovery of Cellular Substrates of the ClpXP Protease Reveals Five Classes of ClpX-Recognition Signals[END_REF][START_REF] Pan | A region at the C-terminus of the Escherichia coli global transcription factor FNR negatively mediates its degradation by the ClpXP protease[END_REF], which may suggest that this protease could be more broadly involved in the homeostasis of the Fe-S proteome. Whether ClpXP acts on these Fe-S proteins specifically when they are damaged remains to be established.
A distinct possibility, put forward long ago, is the existence of dedicated factors that would "repair" damaged clusters. Interestingly, Clp might play a role in the repair of damaged Fe-S proteins by helping iron release from Dps sequestration [START_REF] Sen | During Oxidative Stress the Clp Proteins of Escherichia coli Ensure that Iron Pools Remain Sufficient To Reactivate Oxidized Metalloenzymes[END_REF]. YtfE might also be such a repair factor, acting by re-metalating clusters having lost Fe atoms [START_REF] Justino | Escherichia coli di-iron YtfE protein is necessary for the repair of stress-damaged iron-sulfur clusters[END_REF], like those invoked in the progressive dismantlement of Fnr or in the case of dehydratases. For instance, the oxidatively damaged Fe3S4 cluster of the E. coli fumarase can be reactivated if an iron source is provided [START_REF] Keyer | Inactivation of Dehydratase [4Fe-4S] Clusters and Disruption of Iron Homeostasis upon Cell Exposure to Peroxynitrite[END_REF].
Furthermore, studies on members of the radical-SAM family have enlarged our view of the protein factors needed for the repair of their cannibalized Fe-S clusters. The NfuA carrier protein has been endowed with the capacity to regenerate the Fe4S4 cluster in lipoyl synthase (LipA), whose sulfur atom has been sacrificed during catalysis [START_REF] Mccarthy | The A-type domain in Escherichia coli NfuA is required for regenerating the auxiliary [4Fe-4S] cluster in Escherichia coli lipoyl synthase[END_REF].
Perspectives
Since their discovery [START_REF] Beinert | Iron-sulfur proteins: ancient structures, still full of surprises[END_REF], Fe-S proteins have been studied by relying on a wide array of approaches, from chemistry to genetics via biochemistry, structural biology and spectroscopy.
This has provided us with a deep insight into (i) the mechanisms underlying the reactivity of Fe-S proteins, (ii) the role played by the cluster, and (iii) the contribution of Fe-S proteins to cellular homeostasis. The next step will be to apply high-throughput techniques aiming at an integrated description of Fe-S proteome dynamics in vivo. Thermal proteome profiling methodology has already contributed to a whole-cell view by reporting the effect of genetic alteration of the ISC system [START_REF] Mateus | Thermal proteome profiling in bacteria: probing protein state in vivo[END_REF]. Another methodology is chemoproteomics [START_REF] Bak | Monitoring Iron-Sulfur Cluster Occupancy across the E. Coli Proteome Using Chemoproteomics[END_REF]. This technique is based on tracking Fe-S cysteine ligands by LC-MS/MS coupled to labeling with an iodoacetamide-alkyne derivative (termed isoTOP-ABPP and ReDiME). Its recent use to follow changes in the Fe-S proteome in response to iron depletion or mutations in the ISC system opens a highly promising possibility for an in vivo integrated description of cell response and adaptation to environmental challenges and genetic disorders. We recently showed that heterologous expression of some active Fe-S proteins requires the heterologous co-expression of the native Fe-S cluster assembly machinery [START_REF] Angelo | Cellular assays identify barriers impeding iron-sulfur enzyme activity in a non-native prokaryotic host[END_REF]. Deciphering the recognition mode between machineries and targets might help in optimizing and broadening heterologous expression of Fe-S containing proteins, a goal of considerable interest in biotechnology.
Bioinformatic approaches have provided new vision regarding the distribution of Fe-S proteins and their evolution. Yet, predicting whether a given polypeptide will host a cluster is still limited by the wide diversity of motifs allowing Fe-S binding. Another structural feature that might be used is the intrinsic information of Fe-S polypeptides to be recognized by Fe-S maturation factors. Indeed, studies in eukaryotes have unearthed the so-called LYR motif, which endows a subset of Fe-S proteins with the capacity to interact with the Hsc20 ISC machinery component. Future studies should aim at investigating whether a similar type of "recognition sequence" occurs and permits interaction between apo-targets and machineries in prokaryotes.
Figure 1. Characterization of the E. coli Fe-S proteome. A. Left panel: list of Fe-S proteins predicted by bioinformatic tools only, i.e. not experimentally assessed yet. Motif/Pattern stands for primary sequence motif and pattern search [START_REF] Estellon | An integrative computational model for large-scale identification of metalloproteins in microbial genomes: a focus on iron-sulfur cluster proteins[END_REF][START_REF] Valasatava | MetalPredator: a web server to predict iron-sulfur cluster binding proteomes[END_REF] or dedicated sequence alignment (see Supp Table 1); Alphafold [START_REF] Wehrspan | Identification of Iron-Sulfur (Fe-S) Cluster and Zinc (Zn) Binding Sites Within Proteomes Predicted by DeepMind's AlphaFold2 Program Dramatically Expands the Metalloproteome[END_REF] and 3D model stand for models obtained from structural analysis of homologous holo-proteins (see Supp Table 1). B. Right panel: list of Fe-S proteins characterized by biophysical approaches such as UV-visible (UV-vis), Electron Paramagnetic Resonance (EPR), Mössbauer and Resonance Raman spectroscopy and X-Ray crystallography (Xray) (see Supp Table 1).
Figure 2. The Fe-S proteome of E. coli. Distribution of the E. coli Fe-S proteins according to their association with a specific biological process. Fe-S proteins characterized by biophysical approaches are in bold. Fe, iron; S, sulfur; TCA, Tricarboxylic Acid Cycle. Details about all listed Fe-S proteins are in Supp Table 1.
Acknowledgements
We thank the SAMe Unit members for discussions and suggestions. This work was supported by the French State Program 'Investissements d'Avenir' (Grant "IBEID" ANR-10-LABX-62), by CNRS and by Institut Pasteur.
Data Availability Statement
The data underlying this article are available in the article and in its online supplementary material. |
04097900 | en | [
"math.math-ap",
"math.math-fa"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04097900v2/file/interpolation-2023.pdf | Eduard Curcȃ
Some Bourgain-Brézis type solutions via complex interpolation
Keywords: Bourgain-Brézis solutions, Fourier multipliers, Divergence equation. MSC 2020 classification: 42B15, 42B35, 46B70 1
In 2002 Bourgain and Brézis proved that given a vector field
We prove several results of a similar nature in which we take into consideration the Fourier support of the solutions. For instance, in the case d ≥ 3 we prove the following: for any vector field
and u L ∞ ∩ Ḃd/p,p 2 v Ḃd/p,p q .
Our arguments rely on a version of the complex interpolation method combined with some ideas of Bourgain and Brézis.
∈ S (R d ) ∩ Ẇ 1,d R d such that div u = f, (1)
in the sense of distributions on R^d. Indeed, it suffices to set
$$u := \nabla |\nabla|^{-2} f, \qquad (2)$$
and to use the fact that the components of ∇u are of the form R_i R_j f, where R_1, ..., R_d are the Riesz transforms on R^d ($\widehat{R_j \varphi}(\xi) = (\xi_j / |\xi|)\, \widehat{\varphi}(\xi)$ for any Schwartz ϕ). Since each R_j is a Calderón-Zygmund operator, we easily get that each component of ∇u belongs to L^d(R^d).
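As a purely illustrative numerical aside (not part of the original argument): on a periodic box the multiplier formula behind (2) is easy to test. The sketch below fixes the sign of the multiplier so that div u = f under numpy's FFT conventions (the sign attached to |∇|^{-2} depends on the chosen Fourier convention) and checks that the divergence of the reconstructed field reproduces a mean-zero source.

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.cos(3 * X) * np.sin(2 * Y)                  # a smooth mean-zero source

k = np.fft.fftfreq(n, d=1.0 / n)                   # integer frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                     # avoid division by zero at the zero mode

fh = np.fft.fft2(f)
u1 = np.fft.ifft2(-1j * KX / K2 * fh).real         # components of u = grad |grad|^{-2} f
u2 = np.fft.ifft2(-1j * KY / K2 * fh).real

div_u = (np.fft.ifft2(1j * KX * np.fft.fft2(u1)) +
         np.fft.ifft2(1j * KY * np.fft.fft2(u2))).real
print(np.max(np.abs(div_u - f)))                   # close to machine precision: div u recovers f
```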
Note that the space Ẇ^{1,d} does not embed in L^∞ and hence, the solution in Ẇ^{1,d} provided by the expression (2) may fall outside L^∞ (see for instance the example given by L. Nirenberg in [START_REF] Bourgain | On the equation div Y = f and application to control of phases[END_REF], Remark 7, p. 400]). However, as was shown by Bourgain and Brézis (2002), the fact that (1) admits a (possibly another) solution u ∈ L^∞(R^d) is a direct consequence of the Gagliardo embedding (W^{1,1}(R^d) ↪ L^{d′}(R^d), where d′ := d/(d - 1)) (see [5, Proposition 1]).
Even more, Bourgain and Brézis have proved in [5, Theorem 1] the following striking fact: there exists a solution u to (1) that is simultaneously bounded and in the "right" Sobolev space Ẇ^{1,d}(R^d). In other words, there exists a vector field u ∈ L^∞(R^d) ∩ Ẇ^{1,d}(R^d) which is a solution to (1). In the general case where d ≥ 2, this result was proved by an involved approximation argument using the Littlewood-Paley square function. We mention that the complicated construction used in [START_REF] Bourgain | On the equation div Y = f and application to control of phases[END_REF] can also be used in more general situations. By similar constructive methods, Bourgain and Brézis proved an analogous existence result for more general underdetermined Hodge systems. Following the ideas in [START_REF] Bourgain | On the equation div Y = f and application to control of phases[END_REF] and [START_REF] Bourgain | New estimates for eliptic equations and Hodge type systems[END_REF], Bousquet, Mironescu and Russ ([START_REF] Bousquet | A limiting case for the divergence equation[END_REF], 2014) and later Bousquet, Russ, Wang and Yung ([START_REF] Bousquet | Approximation in fractional Sobolev spaces and Hodge systems[END_REF], 2017) provided generalizations of the Bourgain-Brézis results in the scale of Triebel-Lizorkin spaces. For instance, adapted to the case of the divergence equation, Theorem 2 in [START_REF] Bousquet | Approximation in fractional Sobolev spaces and Hodge systems[END_REF] gives us:
Theorem 1. Suppose that 1 < p, q < ∞ and consider some vector field v ∈ S(R^d) ∩ Ḟ^{d/p,p}_q(R^d). Then, there exists a vector field u ∈ L^∞(R^d) ∩ Ḟ^{d/p,p}_q(R^d) such that div u = div v, and $\|u\|_{L^\infty \cap \dot{F}^{d/p,p}_q} \lesssim \|v\|_{\dot{F}^{d/p,p}_q}$.
Remark 2. Note that by Calderón-Zygmund theory any (compactly supported) f ∈ Ḟ d/p-1,p q (R d ) can be written as the divergence of the vector field v = ∇ |∇| -2 f ∈ Ḟ d/p,p q (R d ). Hence, since Ḟ 1,d 2 = Ẇ 1,d , when p = d and q = 2, from Theorem 1 above we recover the result of Bourgain and Brézis.
A similar existence result holds for the scale of Besov spaces, the proof being technically the same as for Theorem 1. Since in this paper we are concerned more with the Besov version, we explicitely state it below: Theorem 3. Suppose that 1 < p, q < ∞ and consider some vector field v ∈ S (R d ) ∩ Ḃd/p,p q (R d ). Then, there exists a vector field u ∈ L ∞ (R d ) ∩ Ḃd/p,p q (R d ) that satisfies
div u = div v,
and such that u L ∞ ∩ Ḃd/p,p q v Ḃd/p,p q .
Remark 4. Throughout the paper we will call the given vector v the source and u will be called solution. A similar convention will also be applied to more general equations.
It is worth noticing that in the special case where d = 2 (and hence, p = 2) Bourgain and Brézis have found a much simpler proof of their existence result (see [5, Section 4, p. 403]). In this case the proof is by duality and it is nonconstructive. Also, by similar methods, a proof was found by Mazya for the case p = q = 2 of Theorem 1 (or equivalently of Theorem 3) (see the paper of Mazya on Bourgain–Brezis type inequalities with explicit constants). Again, the proof is by duality and strikingly simple. (See also [17] for some related discussions.) However, both approaches, namely that of Bourgain and Brézis in the case d = 2 and that of Mazya, are based on L^2-Fourier analysis arguments that are unlikely to be extended to the case where p ≠ 2.
There is yet another situation of a different nature: Proposition 5. Let d ≥ 2 be an integer and consider some r ∈ (1, ∞). Then, for any vector field v ∈ Ḃ^{d/r,r}_1(R^d) there exists a vector field u ∈ L^∞(R^d) ∩ Ḃ^{d/r,r}_1(R^d) such that div u = div v, and $\|u\|_{L^\infty\cap\dot B^{d/r,r}_1} \lesssim \|v\|_{\dot B^{d/r,r}_1}$.
Indeed, since Ḃ^{d/r,r}_1(R^d) → L^∞(R^d) ∩ Ḃ^{d/r,r}_1(R^d), it suffices to set u := v (see subsection 2.1 for the definition of Ḃ^{d/r,r}_1(R^d) we use in this paper). Since Proposition 5 and Theorem 3 have much easier proofs than the constructive proof used in [5] and in [7], it would be interesting to find a way to "interpolate" between Proposition 5 and Theorem 3.
A naive interpolation strategy
One may try to interpolate in the following way. By the closed range theorem and the lifting property of the Besov spaces we can reformulate Proposition 5 and Theorem 3 (the latter in the case p = q = 2) as the estimates
$\|g\|_{\dot B^{-d/r+1,r}_\infty} \sim \|\nabla g\|_{\dot B^{-d/r,r}_\infty} \lesssim \|\nabla g\|_{L^1 + \dot B^{-d/r,r}_\infty}$, (3)
and respectively,
$\|g\|_{\dot W^{-d/2+1,2}} \sim \|\nabla g\|_{\dot W^{-d/2,2}} \lesssim \|\nabla g\|_{L^1 + \dot W^{-d/2,2}}$, (4)
for any Schwartz function g with ĝ vanishing in a neighborhood of 0.
For each Banach function space Y on R^d denote by G(Y) the space of all vector fields in Y that are gradients, i.e., G(Y) := {g ∈ Y | curl g = 0}.
With this notation one can view ( 3) and ( 4) as the embeddings
G(L 1 + Ḃ-d/r,r ∞ ) → Ḃ-d/r+1,r ∞ ,
and respectively,
G(L 1 + Ẇ -d/2,2 ) → Ẇ -d/2+1,2 .
By complex interpolation (we may consider the real interpolation as well), we conclude that, for any θ ∈ (0, 1),
(G(L 1 + Ḃ-d/r,r ∞ ), G(L 1 + Ẇ -d/2,2 )) θ → ( Ḃ-d/r+1,r ∞ , Ẇ -d/2+1,2 ) θ . (5)
The right hand side of (5) can be easily computed explicitly (see for instance [START_REF] Bergh | Interpolation spaces. An introduction[END_REF]Chapter 6]):
( Ḃ-d/r+1,r ∞ , Ẇ -d/2+1,2 ) θ = Ḃ-d/p+1,p q ,
where 1/p = (1 -θ)/r + θ/2 and 1/q = (1 -θ)/1 + θ/2. Now we would like to have
G(L 1 + Ḃ-d/p,p q ) → (G(L 1 + Ḃ-d/r,r ∞ ), G(L 1 + Ẇ -d/2,2 )) θ , (6)
and combining this with (5), we would get via the closed range theorem the fact that for any vector field v ∈ Ḃd/p,p q (R d ) there exists another vector field u ∈ L ∞ (R d ) ∩ Ḃd/p,p q (R d ) of the same divergence as v. However, computing explicitly the left hand side of (5) or proving (6) only by interpolation theory is quite a difficult task. Naively we may have the following strategy for proving (6). We can observe that, since we have the embeddings
L 1 , Ḃ-d/r,r ∞ → L 1 + Ḃ-d/r,r ∞ and L 1 , Ẇ -d/2,2 → L 1 + Ẇ -d/2,2 , we can conclude by interpolation that L 1 , Ḃ-d/p,p q → (L 1 + Ḃ-d/r,r ∞ , L 1 + Ẇ -d/2,2 ) θ or equivalently L 1 + Ḃ-d/p,p q → (L 1 + Ḃ-d/r,r ∞ , L 1 + Ẇ -d/2,2 ) θ .
Consequently,
G(L 1 + Ḃ-d/p,p q ) → G(L 1 + Ḃ-d/r,r ∞ , L 1 + Ẇ -d/2,2 θ ).
Hence, in order to obtain (6) it would be sufficient to have
G((L 1 + Ḃ-d/r,r ∞ , L 1 + Ẇ -d/2,2 ) θ ) → (G(L 1 + Ḃ-d/r,r ∞ ), G(L 1 + Ẇ -d/2,2 )) θ ,
or, equivalently, since the other embedding is trivial,
G((L 1 + Ḃ-d/r,r ∞ , L 1 + Ẇ -d/2,2 ) θ ) = (G(L 1 + Ḃ-d/r,r ∞ ), G(L 1 + Ẇ -d/2,2 )) θ . (7)
We can further reformulate this fact as
N ∩ (Y 0 , Y 1 ) θ = (N ∩ Y 0 , N ∩ Y 1 ) θ , (8)
where
Y 0 = L 1 + Ḃ-d/r,r ∞ , Y 1 = L 1 + Ẇ -d/2,2
and N is the space of the fields in Y 0 + Y 1 that are gradients, i.e., N := G(Y 0 + Y 1 ). The difficulty of proving (7) consists in the fact that, for the general situation when Y 0 , Y 1 , N are Banach spaces, the question whether or not (8) holds does not yet have a satisfactory answer. If we replace the complex interpolation method in (8) with the real K-method of interpolation, then (8) may be false for some particular choice of the spaces Y 0 , Y 1 , N (see for instance the paper of Krugliak, Maligranda and Persson on the failure of the Hardy inequality and interpolation of intersections). In the case of the complex interpolation method it seems that even less is known about when (8) is valid. This naive interpolation strategy seems inappropriate to prove (6) or even a weaker statement like
G(L 1 + Ḃ-d/p,p 1 ) → (G(L 1 + Ḃ-d/r,r ∞ ), G(L 1 + Ẇ -d/2,2 )) θ ,
which corresponds to the following existence result: Proposition 6. Let d ≥ 2 be an integer and consider some parameters p ∈ (2, ∞) and q ∈ (1, 2). Then, for any vector field
v ∈ S (R d )∩ Ḃd/p,p q (R d ) there exists a vector field u ∈ L ∞ (R d )∩ Ḃd/p,p ∞ (R d ) such that div u = div v, and
$\|u\|_{L^\infty \cap \dot B^{d/p,p}_\infty} \lesssim \|v\|_{\dot B^{d/p,p}_q}$.
The main results
In this paper we take into consideration the spectrum of solutions. In what follows, the spectrum of a tempered distribution v is its Fourier support, i.e., spec(v) := supp v̂ (see Remark 14 for a more general definition). Adapted to the case of the divergence equation, our main result reads:
Theorem 7. Let d ≥ 3 be an integer and consider the set
∆ := R d \(-∞, 0) d . Consider some parameters p ∈ [2, ∞) and q ∈ (1, 2). Then, for any vector field v ∈ S (R d ) ∩ Ḃd/p,p q (R d ), with spec(v) ⊆ ∆ there exists a vector field u ∈ L ∞ (R d ) ∩ Ḃd/p,p 2 (R d ), with spec(u) ⊆ ∆ such that div u = div v, (9)
and
$\|u\|_{L^\infty \cap \dot B^{d/p,p}_2} \lesssim \|v\|_{\dot B^{d/p,p}_q}$.
In the case where d = 2 our method does not provide solutions with the spectrum in ∆. Nevertheless, one can obtain solutions with the spectrum in a different type of sets. For each δ ∈ (0, π/4) let C δ be the symmetric cone
C δ := {(ξ 1 , ξ 2 ) ∈ R 2 | |ξ 1 | ≤ (tan δ) |ξ 2 |}.
With this notation we have: Theorem 8. Consider the numbers δ ∈ (0, π/8), ε ∈ (0, 1] and some parameters p ∈ [2, ∞) and q ∈ (1, 2). Then, for any vector field
v ∈ S (R 2 ) ∩ Ḃ2/p,p q (R 2 ), with spec(v) ⊆ C δ , there exists a vector field u ∈ L ∞ (R 2 ) ∩ Ḃ2/p,p 2 (R 2 ), with spec(u) ⊆ C (1+ε)δ , such that div u = div v, and $\|u\|_{L^\infty \cap \dot B^{2/p,p}_2} \lesssim \|v\|_{\dot B^{2/p,p}_q}$.
When compared with Theorem 3 one can observe that Theorem 7 (or Theorem 8) has two major drawbacks. First, we are not allowed to take p < 2 or q ≥ 2 as parameters for the space Ḃd/p,p q on the source side. Secondly, for the space in which we obtain the solution we lose some control of the "third parameter". In other words, we would prefer to obtain L ∞ ∩ Ḃd/p,p q for the solution space instead of L ∞ ∩ Ḃd/p,p 2 which is a slightly larger space. (See, however, Lemma 35 for a "perfect" version of our results in the case p = q = 2.) On the other hand it is unlikely that one can deduce Theorem 7 directly from Theorem 3. Indeed, given a vector field v ∈ Ḃd/p,p q , with spec(v) ⊆ ∆, by Theorem 3 one can find some vector field u ∈ L ∞ ∩ Ḃd/p,p q such that div u = div v, however, not necessarily with spec(u) ⊆ ∆. It is not obvious that one can obtain a solution u with spec(u) ⊆ ∆ by direct methods: suppose P ∆ is the Fourier projection on ∆, i.e., P ∆ = I -P + , where P + is the Riesz projection and I is the identity operator. We have P ∆ v = v and we can write div P ∆ u = div v.
However, since P ∆ is not bounded on L ∞ , we may not have P ∆ u ∈ L ∞ , i.e., P ∆ u is not in general a candidate for a solution. The same observation applies to Theorem 8. To our knowledge, except for the method we give in this paper, there is no other method in the literature able to prove results like Theorem 7 or Theorem 8.
In fact, when d ≥ 3, we will prove a more general result than Theorem 7. Our methods allow us to work with more general Fourier multipliers than the usual derivatives. In order to formulate our result we first need some preparations.
Let σ ∈ C 2 (R d , R) be a function. We consider the following properties (that may or not be satisfied by σ):
(P1) The function σ satisfies the estimate
$|\nabla^\alpha \sigma(\xi)| \lesssim |\xi|^{1-|\alpha|}$ on R^d, for any multi-index α ∈ N^d with |α| ≤ 2;
(P2) The function σ is odd in the variable ξ 1 and even in any other variable, i.e.,
$\sigma(\epsilon_1\xi_1, \epsilon_2\xi_2, \dots, \epsilon_d\xi_d) = \epsilon_1\,\sigma(\xi_1, \xi_2, \dots, \xi_d)$ on R^d, for any signs $\epsilon_1, \dots, \epsilon_d \in \{-1, 1\}$.
Introduce the new functions σ 1 , ..., σ d defined by
σ j (ξ 1 , ξ 2 , ..., ξ d ) := σ(ξ j , ξ 2 , ...ξ j-1 , ξ 1 , ξ j+1 ..., ξ d ),
on R d , for any index j ∈ {1, 2, ..., d}.
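As a simple illustration (not needed in the sequel), the function σ(ξ) := ξ_1 satisfies both (P1) and (P2), and in this case
\[
\sigma_{j}(\xi_{1},\dots,\xi_{d})=\xi_{j},\qquad j=1,\dots,d,
\]
so that the associated Fourier multipliers σ_j(∇) are, up to a constant factor, the partial derivatives ∂_j.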
Consider some half-spaces D 1 , ..., D d ⊂ R d and a family of functions G 1 , ..., G d : R d → R d-1 . We say that the function G j is adapted to the half-space D j if there exists a rotation R on R d (depending on j) and a function σ : R d → R (depending on j) satisfying (P1), (P2) such that D j = R(U ), where U := R d-1 × (0, ∞), and
G j = (σ 1 • R, ..., σ d-1 • R).
We say that the family of functions G 1 , ..., G d : R d → R d-1 is adapted to the family of half-spaces D 1 , ..., D d ⊂ R d , if for each j ∈ {1, ..., d} the function G j is adapted to D j and
$\sum_{j=1}^{d} |G_j(\xi)|\,\mathbf{1}_{D_j}(\xi) \sim |\xi|\,\mathbf{1}_{D}(\xi)$. (10)
Let us denote by S c, the space of all Schwartz functions f whose Fourier transform f̂ is compactly supported and vanishing in a neighborhood of 0. Given a function m : R d → C of polynomial growth, m(∇) denotes the Fourier multiplier with symbol m, i.e., $\widehat{m(\nabla)f} := m\,\widehat{f}$ for f ∈ S c, . Suppose E and F are some Banach function spaces on R d such that S c, is dense in E and
$\|m(\nabla)f\|_{F} \lesssim \|f\|_{E}$,
for any f ∈ S c, . Then, by linearity and density m(∇) can be uniquely extended to a bounded operator m(∇) : E → F (see also Remark 13). We will often say that m is the symbol of the Fourier multiplier m(∇).
To a vector valued function
G : R d → R d-1 , with G = (G 1 , ..., G d-1
), where G 1 , ..., G d-1 : R d → R are scalar functions of polynomial growth, we associate the vector-valued Fourier multiplier
G(∇) := (G 1 (∇), ..., G d-1 (∇)).
In other words, if f ∈ S c, , by G(∇)f we mean
G(∇)f := (G 1 (∇)f, ..., G d-1 (∇)f ).
Suppose u 1 , ..., u d-1 ∈ S c, and let u be the (d - 1)-vector field u := (u 1 , ..., u d-1 ). By G(∇) • u we mean G(∇) • u := G 1 (∇)u 1 + ... + G d-1 (∇)u d-1 .
Now we can formulate our generalisation of Theorem 7.
Theorem 9. Let d ≥ 3 be an integer and consider some parameters p ∈ [2, ∞) and q ∈ (1, 2). Suppose that the family of functions G 1 , ..., G d : R d → R d-1 is adapted to the family of half-spaces
D 1 , ..., D d ⊂ R d .
Then, for any system of (d -1)-vector fields (v j ) j=1,..,d with v j ∈ S (R d ) ∩ Ḃd/p,p q (R d ) and spec(v j ) ⊆ D j , there exists a system of (d -1)-vector fields (u j ) j=1,..,d , with
u j ∈ L ∞ (R d ) ∩ Ḃd/p,p 2 (R d ) and spec(u j ) ⊆ D j , such that $\sum_{j=1}^{d} G_j(\nabla)\cdot u_j = \sum_{j=1}^{d} G_j(\nabla)\cdot v_j$, (11)
and
$\sum_{j=1}^{d} \|u_j\|_{L^\infty \cap \dot B^{d/p,p}_2} \lesssim \sum_{j=1}^{d} \|v_j\|_{\dot B^{d/p,p}_q}$.
In this paper equations such as (9) or (11) will be called divergence-like equations.
In subsection 5.1 we will see that Theorem 9 easily implies Theorem 7.
About the proofs
Our proofs of Theorem 9, Theorem 7 and Theorem 8 are based on two ingredients:
1. The W-method of interpolation. This is the key method that we are using throughout the paper. Let (A 0 , A 1 ) and (B 0 , B 1 ) be Banach couples and T : A 0 + A 1 → B 0 + B 1 be a linear operator such that T (B j ) → T (A j ), for any j = 0, 1. Suppose we want to see under which conditions on the spaces involved and the operator T we have
T F 1 θ (B 0 , B 1 ) → T F 2 θ (A 0 , A 1 ) , (12)
for some θ-interpolation functors F 1 θ , F 2 θ . One can say that, in some sense, we "interpolate" linear equations or that we preserve some form of surjectivity of the operator T . In order to give reasonable sufficient conditions for (12) to hold for some convenient interpolation functors we introduce a variant of the complex interpolation method which will be called the W-method. In our case F 1 θ will be given by the usual complex method of interpolation and F 2 θ will be given by our W-method. Roughly speaking this method consists in the following. Suppose (A 0 , A 1 ) is a compatible couple of Banach spaces. In order to define the interpolation space of the couple (A 0 , A 1 ) via the W-method we use the three lines lemma on the strip as in the standard complex method of interpolation. However, instead of quantifying the endpoint regularity of the analytic functions involved via the norms L ∞ (R, A j ) (j = 0, 1) we use slightly more complicated quantities that depend on some prescribed pair of Banach spaces (X 0 , X 1 ). In this way, for each θ ∈ (0, 1), we obtain an interpolation space that will be denoted by (A 0 , X 0 | A 1 , X 1 ) θ . The efficiency of the W-method relies (among other facts) on properly choosing the spaces X 0 , X 1 .
When the spaces X j have the U M D property, under some additional embedding assumptions concerning the Banach spaces involved, the W-method of interpolation preserves the "surjectivity" of operators (i.e., (12) holds). The main requirements for applying the W-method are twofold:
(i) On one hand one needs to verify some embedding conditions for the domains and the co-domains of the operator. We give some simple necessary conditions that are easy to formulate, however not sharp. We also mention that, in the absence of any such conditions, it is not possible to preserve surjectivity (see the examples in the second part of subsection 3.3).
(ii) On the other hand explicitly computing the space (A 0 , X 0 | A 1 , X 1 ) θ seems to be difficult in practice. However, there are particular situations in which we can embed (A 0 , X 0 | A 1 , X 1 ) θ in some convenient space. More precisely, when A j are of the form A j = A ∩ X j , for some Banach space A, we have
(A 0 , X 0 | A 1 , X 1 ) θ = (A ∩ X 0 , X 0 | A ∩ X 1 , X 1 ) θ → A ∩ (X 0 , X 1 ) θ .
For instance, when we have
X 0 = Ḃd/r,r 2 , X 1 = Ḃd/2,2 2
and A = L ∞ , the above embedding becomes
$(L^\infty \cap \dot B^{d/r,r}_2,\ \dot B^{d/r,r}_2\ |\ L^\infty \cap \dot B^{d/2,2}_2,\ \dot B^{d/2,2}_2)_\theta \hookrightarrow L^\infty \cap (\dot B^{d/r,r}_2, \dot B^{d/2,2}_2)_\theta$.
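By the standard formula for the complex interpolation of Besov spaces (see [2, Chapter 6]), the couple appearing in the last factor can be computed explicitly: for θ ∈ (0, 1),
\[
(\dot B^{d/r,r}_{2},\dot B^{d/2,2}_{2})_{\theta}=\dot B^{d/p,p}_{2},
\qquad\text{where }\ \frac1p=\frac{1-\theta}{r}+\frac{\theta}{2},
\]
which explains the solution space L^∞ ∩ Ḃ^{d/p,p}_2 appearing in Theorem 10 below.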
By using only the result of Mazya (Theorem 3 in the case p = q = 2) and the W-method, together with the embedding Ḃ^{d/p,p}_1(R^d) → L^∞(R^d), we easily obtain the following:
Theorem 10. Let d ≥ 2 be an integer and consider some parameters p ∈ [2, ∞) and q ∈ (1, 2). Then, for any vector field
v ∈ S (R d )∩ Ḃd/p,p q (R d ) there exists a vector field u ∈ L ∞ (R d )∩ Ḃd/p,p 2 (R d ) such that div u = div v, and
$\|u\|_{L^\infty \cap \dot B^{d/p,p}_2} \lesssim \|v\|_{\dot B^{d/p,p}_q}$.
One can even obtain an analogue of Theorem 10 for a class of Lorentz-Sobolev spaces (for definitions see subsection 2.2). Namely, by using only the result of Mazya and the W-method, together with some standard facts in the theory of Lorentz spaces, we easily obtain the following: Theorem 11. Let d ≥ 2 be an integer and consider some parameters p ∈ [2, ∞) and q ∈ (1, 2). Then, for any vector field
v ∈ S (R d ) ∩ Ẇ d/p L p,3/2 (R d ) there exists a vector field u ∈ L ∞ (R d ) ∩ Ẇ d/p L p,2 (R d ) such that div u = div v, and $\|u\|_{L^\infty \cap \dot W^{d/p}L^{p,2}} \lesssim \|v\|_{\dot W^{d/p}L^{p,q}}$.
The conditions p ≥ 2 and q < 2 in Theorem 10, Theorem 11, as well as in Theorem 7 and Theorem 9, are induced by some technical limitations of the W-method (see subsection 5.2).
The Bourgain-Brézis technique.
In [5, Section 4, p. 403] Bourgain and Brézis proved the torus analogue of Theorem 3 in the case where p = q = d = 2. They concluded the existence of solutions for the divergence equation by duality. Namely, they proved that (see [5, Lemma 2, p. 403])
$\|u\|_{L^2(\mathbb T^2)} \lesssim \|\nabla u\|_{L^1(\mathbb T^2) + W^{-1,2}(\mathbb T^2)}$, (13)
for any u ∈ L 1 (T 2 ), with u(0) = 0. In order to obtain this, they used the following key estimate (see [5, (4.20), p. 405]):
$\Big|\sum_{n\in\mathbb Z^2\setminus\{0\}} \frac{n_1 n_2}{|n|^4}\,\sin(n_1\theta_1)\,\sin(n_2\theta_2)\Big| \le C$,
uniformly in θ 1 , θ 2 ∈ T, for some numerical constant C > 0. By convexity this allows us to write
$\Big|\sum_{n\in\mathbb Z^2\setminus\{0\}} \frac{n_1 n_2}{|n|^4}\,\widehat F_1(n)\,\widehat F_2(n)\Big| \le C\,\|F_1\|_{L^1(\mathbb T^2)}\,\|F_2\|_{L^1(\mathbb T^2)}$,
for any F 1 , F 2 ∈ L 1 (T 2 ). Thanks to this bilinear estimate, after decomposing ∇u in the space L 1 (T 2 ) + W -1,2 (T 2 ), we can deal with the space L 1 (T 2 ) in (13).
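Let us indicate, for the reader's convenience, one way in which the bilinear estimate follows from the previous uniform bound (at least when F_1 and F_2 are trigonometric polynomials, the general case following by approximation): writing $\widehat F_i(n) = \int_{\mathbb T^2} F_i(\theta)e^{-i\langle n,\theta\rangle}\,d\theta$ and using Fubini's theorem,
\[
\sum_{n\in\mathbb Z^{2}\setminus\{0\}}\frac{n_{1}n_{2}}{|n|^{4}}\,\widehat F_{1}(n)\,\widehat F_{2}(n)
=-\int_{\mathbb T^{2}}\int_{\mathbb T^{2}}F_{1}(\theta)F_{2}(\varphi)\Big(\sum_{n\in\mathbb Z^{2}\setminus\{0\}}\frac{n_{1}n_{2}}{|n|^{4}}\sin\big(n_{1}(\theta_{1}+\varphi_{1})\big)\sin\big(n_{2}(\theta_{2}+\varphi_{2})\big)\Big)\,d\theta\,d\varphi ,
\]
since the weight n_1 n_2/|n|^4 is odd in n_1 and in n_2, so only the product of the two sines survives in the expansion of $e^{-i\langle n,\theta+\varphi\rangle}$; the inner sum is bounded by C uniformly, and the claim follows.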
We use the technique introduced by Bourgain and Brézis in [5, Section 4, p. 403] and we prove a version of Theorem 9 (and Theorem 8) in the case where the source space is Ẇ d/2,2 . As we will see, thanks to this technique we are able to work with more general Fourier multipliers than derivatives. Also, it is this technique that allows us to gain some control on the Fourier spectrum of solutions. The results obtained by this method are "perfect" in the sense that the source space is Ẇ d/2,2 and the solution space is L ∞ ∩ Ẇ d/2,2 ; there is no loss of regularity in the third parameter. The drawback of this technique is the fact that it does not apply to the case where p ≠ 2.
As in the case of Theorem 10, we can easily obtain Theorem 9 using the W-method. This time, however, instead of using Mazya's result we use the more general results that we obtain via the Bourgain-Brézis technique. Using the properties of Lorentz spaces we can give a Lorentz-Sobolev version of Theorem 9. In fact, our methods will provide more general results. On one hand the function spaces we work with can be more general than those in the statements of our final results (see for instance Theorem 37). On the other hand, the conditions imposed on Fourier multipliers and the Fourier spectrum of the solutions can be more general. Also, by using the technique of Mazya, one can easily obtain a version of Theorem 3 in the case p = q = 2 that concerns general Hodge systems. Combining this result with the W-method one can obtain an analogue of Theorem 10 for Hodge systems. We will not, however, consider such issues here. In this paper, we limit ourselves to some model situations that are easier to describe.
Notation. Throughout the paper we use mainly standard notation. For instance, we often use the symbols ≲ and ∼. For two nonnegative variable quantities a and b we write a ≲ b if there exists a constant C > 0 such that a ≤ Cb. If a ≲ b and b ≲ a, then we write a ∼ b. For simplicity we denote by spec(f) the Fourier spectrum of a tempered distribution f; in other words, spec(f) = supp f̂. Everywhere in this paper S (R d ) is the space of tempered distributions. When X is a function space on R d and u = (u 1 , ..., u d ) is a vector field on R d where each u j belongs to X, we write u ∈ X instead of u ∈ X d . A similar convention will be made for the (d - 1)-vector fields. Other notation will be introduced when needed.
Function spaces
In this section we quickly recall the definition and some properties of some standard function spaces.
Sobolev and Besov spaces
Let S be the space of all Schwartz functions f on R d such that f̂ vanishes in a neighborhood of 0. When 1 < p < ∞ and α ∈ R the homogeneous space Ẇ α,p (R d ) is obtained by completion of S under the norm
$\|f\|_{\dot W^{\alpha,p}} := \| |\nabla|^\alpha f \|_{L^p}$.
We can see that we can also define the above homogeneous spaces Ẇ α,p by completion of the normed function spaces Ẇ α,p c (R d ). Here, Ẇ α,p c (R d ) is the space of all the compactly supported functions whose Ẇ α,p -norm is finite. The spaces Ẇ α,p as defined here are complete.
We continue by briefly recalling the definition of the Besov spaces (we do not define here the Triebel-Lizorkin spaces; see the book of Triebel, Theory of function spaces II, for details). Consider a radial function Φ ∈ C ∞ c (R d ) such that supp Φ ⊂ B(0, 2) and Φ ≡ 1 on B(0, 1). For k ∈ Z we define the operators P k , acting on the space of tempered distributions on R d , by the relation
$\widehat{P_k f}(\xi) := \Big(\Phi\big(\tfrac{\xi}{2^k}\big) - \Phi\big(\tfrac{\xi}{2^{k-1}}\big)\Big)\,\widehat f(\xi)$, (14)
for any Schwartz function f on R d . The operators P k will be called Littlewood-Paley "projections" adapted to R d . For any Schwartz function f we have that
$f = \sum_{k\in\mathbb Z} P_k f$,
in the sense of tempered distributions. The homogeneous Besov space Ḃα,p q (R d ) (with 1 ≤ p, q ≤ ∞ and α a real number) is obtained by completion of S under the norm
$\|f\|_{\dot B^{\alpha,p}_q} := \Big(\sum_{k\in\mathbb Z} 2^{\alpha k q}\,\|P_k f\|_{L^p}^{q}\Big)^{1/q}.$
We have Ḃ^{α,2}_2(R^d) = Ẇ^{α,2}(R^d) with equivalent norms. The main advantage of our definition of the homogeneous Besov spaces is the fact that, whenever α_0 - d/p_0 = α_1 - d/p_1 and α_1 > α_0, we have the embedding
$\dot B^{\alpha_1,p_1}_{q_1}(\mathbb R^d) \hookrightarrow \dot B^{\alpha_0,p_0}_{q_0}(\mathbb R^d)$, (15)
for any q_0, q_1 ∈ [1, ∞) with q_0 ≤ q_1.
Note that we have the following dilation properties:
$\|f(\lambda\,\cdot)\|_{\dot B^{\alpha,p}_q} \sim \lambda^{\alpha - d/p}\,\|f\|_{\dot B^{\alpha,p}_q}$, (16)
for any f ∈ Ḃα,p q (R d ) and any λ > 0. In particular, when α = d/p the spaces Ḃα,p q have the same scaling property as L ∞ . In what follows the spaces of the form Ḃd/p,p q will be called critical. It is worth recalling here that, by a direct application of the Bernstein inequalities, we get the embedding Ḃ^{d/p,p}_1(R^d) → L^∞(R^d). When q > 1 the critical spaces Ḃd/p,p q do not embed in L ∞ .
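For instance, the last embedding can be seen as follows: by the Bernstein inequalities, for every k ∈ Z,
\[
\|P_{k}f\|_{L^{\infty}}\lesssim 2^{kd/p}\,\|P_{k}f\|_{L^{p}},
\qquad\text{so that}\qquad
\|f\|_{L^{\infty}}\le\sum_{k\in\mathbb Z}\|P_{k}f\|_{L^{\infty}}\lesssim\sum_{k\in\mathbb Z}2^{kd/p}\,\|P_{k}f\|_{L^{p}}=\|f\|_{\dot B^{d/p,p}_{1}}.
\]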
Remark 12. Note that the spaces Ḃd/p,p q (with q > 1) as defined here contain elements that are not tempered distributions. However, when α < d/p the elements of the space Ḃα,p q are all tempered distributions (see for instance [1, Remark 2.26, p. 68] or the paper of Bourdaud on realisations of homogeneous Besov spaces).
Remark 13. Since the operator div : S ∩ Ḃd/p,p q → Ḃd/p-1,p q is linear and bounded (here, S ∩ Ḃd/p,p q is endowed with the norm induced by Ḃd/p,p q ), by density of S ∩ Ḃd/p,p q in Ḃd/p,p q it extends uniquely to an operator div : Ḃd/p,p q → Ḃd/p-1,p q . Similar facts hold for other spaces and other operators. In this way we can remove from the hypotheses of Theorem 7, Theorem 8 and Theorem 9 the fact that the source v belongs to S . For instance, if v ∈ Ḃd/p-2,p q (R d ), then v is a tempered distribution and we can define the spectrum of v as spec(v) := supp v̂. This observation will be applied for other function spaces as well.
It is easy to see that any space of the form Ḃα,p q with p, q ∈ (1, ∞) is embedded in l α q (L p ) and hence it has the U M D property (see for instance the book of Pisier on martingales in Banach spaces).
Lorentz-Sobolev spaces
Consider some parameters p ∈ (1, ∞), q ∈ [1, ∞] and α ≥ 0. The homogeneous Lorentz-Sobolev space Ẇ α L p,q (R d ) is the completion of the normed space of Schwartz functions f on R d under the norm
$\|f\|_{\dot W^{\alpha}L^{p,q}} := \| |\nabla|^{\alpha} f \|_{L^{p,q}}$,
where L p,q is the usual Lorentz space of parameters p and q.
Remark 15. One can easily adapt the Remarks 12, 13 and 14 to the case of the Lorentz-Sobolev spaces.
Many of the embedding properties of the Besov and Triebel-Lizorkin spaces hold for the Lorentz-Sobolev spaces (see for instance [23] for details). We mention below some properties of Lorentz-Sobolev spaces that will be needed in the proof of Theorem 11. All of them are direct consequences of well-known facts from the theory of Lorentz spaces.
Lemma 16. For any r ∈ [2, ∞), we have that
Ẇ d/2,2 (R d ) → Ẇ d/r L r,2 (R d ).
Proof. It is well known that
$\|f\|_{L^{r,2}} \lesssim \| |\nabla|^{d(1/2-1/r)} f \|_{L^2}$,
for any Schwartz function f on R^d. This can be rewritten as
$\|f\|_{\dot W^{d/r}L^{r,2}} = \| |\nabla|^{d/r} f \|_{L^{r,2}} \lesssim \| |\nabla|^{d/2} f \|_{L^2} = \|f\|_{\dot W^{d/2,2}}$,
obtaining that Ẇ^{d/2,2} → Ẇ^{d/r}L^{r,2}.
Lemma 17. For any r ∈ (1, ∞), we have that
Ẇ d/r L r,1 (R d ) → L ∞ (R d ).
Proof. For any Schwartz function f on R d we have that
$\|I_{d/r} * f\|_{L^\infty} \lesssim \|f\|_{L^{r,1}}$, (17)
where $I_{d/r}(x) := |x|^{d/r-d} = |x|^{-d/r'}$, for any x ∈ R^d. Indeed, using [13, Theorem 1.4.17 (v), p. 52], we have
$\Big|\int_{\mathbb R^d} \frac{f(y)}{|x-y|^{d-d/r}}\,dy\Big| \le \|I_{d/r}\|_{(L^{r,1})^*}\,\|f\|_{L^{r,1}} = \|I_{d/r}\|_{L^{r',\infty}}\,\|f\|_{L^{r,1}}$,
and we can easily see that
$\|I_{d/r}\|_{L^{r',\infty}} = \sup_{\lambda>0} \lambda\,\big|\{x \in \mathbb R^d \mid |x| < (1/\lambda)^{r'/d}\}\big|^{1/r'} \sim 1.$
Hence, (17) holds. We can reformulate (17) as
$\|f\|_{L^\infty} \lesssim \| |\nabla|^{d/r} f \|_{L^{r,1}} = \|f\|_{\dot W^{d/r}L^{r,1}}$,
and this shows that Ẇ^{d/r}L^{r,1} → L^∞.
Lemma 18. Suppose p 0 , p 1 , q 0 , q 1 ∈ [1, ∞) and α 0 , α 1 ≥ 0. Then, for any θ ∈ (0, 1) we have
( Ẇ α 0 L p 0 ,q 0 (R d ), Ẇ α 1 L p 1 ,q 1 (R d )) θ = Ẇ α L p,q (R d ), (18)
where α = (1 -θ)α 0 + θα 1 , 1/p = (1 -θ)/p 0 + θ/p 1 and 1/q = (1 -θ)/q 0 + θ/q 1 .
Proof. This can be proved by Stein's method of interpolation (see for instance [13, Theorem 1.3.7, p. 37]) as follows. Note that the function ξ → |ξ| it defined on R d \ {0} satisfies
$|\nabla^k |\xi|^{it}| \le C(1+|t|)^{d+2}\,|\xi|^{-k}$, a.e. in ξ ∈ R^d \ {0}, and hence, by the Mikhlin–Hörmander multiplier theorem and real interpolation,
$\| |\nabla|^{it} \|_{L^{a,b}\to L^{a,b}} \lesssim_{a,b} (1+|t|)^{d+2}$. (19)
Let us consider the analytic family of operators (T z ) z∈S with
T z := |∇| (1-z)α 0 +zα 1 ,
for all z ∈ S. Thanks to (19), the analytic family (T z ) z∈S satisfies the hypothesis of [13, Theorem 1.3.7, p. 37]. Hence, we get T θ ( Ẇ α 0 L p 0 ,q 0 , Ẇ α 1 L p 1 ,q 1 ) θ → (L p 0 ,q 0 , L p 1 ,q 1 ) θ = L p,q , and, in a similar way (applying Stein's method for the family (T -z ) z∈S ),
T -θ (L p,q ) = T -θ (L p 0 ,q 0 , L p 1 ,q 1 ) θ → ( Ẇ α 0 L p 0 ,q 0 , Ẇ α 1 L p 1 ,q 1 ) θ . Hence, T θ ( Ẇ α 0 L p 0 ,q 0 , Ẇ α 1 L p 1 ,q 1 ) θ = L p,q ,
and ( 18) is proven.
The spaces Ẇ α L p,q have scaling properties that are similar to those of the Besov spaces (see ( 16)). In particular, the spaces Ẇ d/p L p,q have the same scaling as L ∞ . As we have seen in Lemma 17 we have Ẇ d/p L p,1 → L ∞ . However, when q > 1 the critical spaces Ẇ d/p L p,q do not embed in L ∞ .
Let us see that the spaces Ẇ α L p,q have the U M D property when p, q ∈ (1, ∞). For this it is sufficient to see that L p,q has the U M D property. Consider some p 0 , p 1 ∈ (1, ∞) such that p 0 < p < p 1 . Since L p 0 and L p 1 are U M D spaces, by Burkholder's theorem (see [9]) the Hilbert transform is bounded on L 2 (T,L p 0 ) and L 2 (T,L p 1 ) respectively. Hence, the Hilbert transform is bounded on the space
(L 2 (T,L p 0 ), L 2 (T,L p 1 )) η,q = L 2 (T, (L p 0 , L p 1 ) η,q ) = L 2 (T,L p,q ),
where η ∈ (0, 1) is such that 1/p = (1 - η)/p 0 + η/p 1 (see for instance [2, Theorem 5.6.2, p. 123]). By Bourgain's theorem (see his paper on Banach spaces in which martingale difference sequences are unconditional), we get that L p,q has the U M D property.
Some quotient spaces
Let D ⊆ R^d be a measurable set and let Y be a function space on R^d. Denoting by Y_{D^c} the set of the elements of Y whose spectrum is contained in D^c, we consider on Y the quotient seminorm
$\|f\|_{Y/D} := \inf_{f_- \in Y_{D^c}} \|f + f_-\|_{Y}$.
In this paper we will work with quotient spaces of the form Ẇ α,2 /D and (L 1 + Ẇ α,2 )/D. One can easily see that for Y = Ẇ α,2 or Y = L 1 + Ẇ α,2 we have the following norming property:
$\sup_{\|f\|_{Y/D}\le 1} \langle f, g\rangle = \|g\|_{Y^*}$,
for any g ∈ Y^*_D, where the supremum is taken over all Schwartz functions f with ‖f‖_{Y/D} ≤ 1. In the case where Y is the Sobolev space Ẇ α,2 it is easy to compute the seminorm induced by Y/D. Namely, let us see that for any u ∈ S and any measurable set D ⊆ R d we have
$\|u\|_{\dot W^{\alpha,2}/D} = \Big(\int_{\mathbb R^d} |\xi|^{2\alpha}\,\mathbf 1_D(\xi)\,|\widehat u(\xi)|^2\,d\xi\Big)^{1/2}$. (20)
Indeed, we have
u 2 Ẇ α,2 /D = inf v∈ Ẇ α,2 D c R d |ξ| 2α | u(ξ) + v(ξ)| 2 dξ = D |ξ| 2α | u(ξ)| 2 dξ + inf v∈ Ẇ α,2 D c D c |ξ| 2α | u(ξ) + v(ξ)| 2 dξ = D |ξ| 2α | u(ξ)| 2 dξ.
We recall that, by P D we denote the Fourier projection on the set D, i.e., we have
$\widehat{P_D f}(\xi) = \mathbf 1_D(\xi)\,\widehat f(\xi)$,
for any ξ ∈ R d and any Schwartz function f . Note that, in the case where D = (0, ∞) d the operator P (0,∞) d is the Riesz projection. In this case we will write P + in the place of P (0,∞) d .
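For instance, in dimension d = 1 and for D = (0, ∞), assuming the usual convention $\widehat{Hf}(\xi) = -i\,\operatorname{sgn}(\xi)\,\widehat f(\xi)$ for the Hilbert transform on the real line, one may write the Riesz projection as
\[
P_{+}f=\tfrac12\,(f+iHf),
\]
since the Fourier multiplier of the right-hand side equals (1 + sgn ξ)/2 = 1_{(0,∞)}(ξ) almost everywhere; a similar algebra appears in the operators H_j introduced in subsection 3.4 below.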
3 The W-method of complex interpolation
In this section we introduce a variant of the complex interpolation and we prove several of its properties. We call this new method of interpolation the W-method and, as stated in the introduction (see subsection 1.4), this will be used in the proof of Theorem 9, Theorem 7, Theorem 8, Theorem 11. We mainly study here only the properties of the W-method that are used in the proof of our main results. In subsection 3.1 we show that the W-method is indeed an interpolation method. However, we ignore some issues specific to the interpolation methods in general such as computing dual interpolation spaces or reiteration theorems. These problems do not concern us here.
An important aspect is the relation of the W-method with the classical complex method. We do not know in general how to compute efficiently the interpolation spaces obtained via the Wmethod. However, as we will see in subsection 3.2 the space obtained via the W-method is, in many "convenient" cases, the same as the space obtained via the classical complex method.
The main feature of the W-method is that one can use it to "interpolate" linear equations. It is one of the main ingredients that enter in the proof of our main results and it is the final goal of this section.
Construction of the interpolation space
We describe here the W-method and prove some basic properties. The proofs we give are straightforward adaptations of those that correspond to the classical complex interpolation as found in [2, Chapter 4]. Following the general presentation in [2, Chapter 4] let us introduce now the W-method.
For the beginning, fix two Banach spaces X 0 and X 1 and suppose (A 0 , A 1 ) is a Banach couple. Let F 2 = F 2 (A 0 , X 0 | A 1 , X 1 ) be the linear space of all bounded continuous functions f with values in A 0 + A 1 , defined on the strip
S := {z ∈ C | 0 ≤ z ≤ 1} , that are analytic in the open strip S 0 := {z ∈ C | 0 < z < 1} ,
and moreover, such that f (j + it) ∈ A j for any j = 0, 1 and any t ∈ R, and
$\|f\|_{F_2} := \max_{j=0,1}\ \sup_{\|\Lambda_j\|\le 1}\Big(\int_{\mathbb R}\big\|\Lambda_j f(j+it)\big\|_{X_j}^2\,dt\Big)^{1/2} < \infty$, (21)
where, for each j = 0, 1, Λ j : A j → X j are linear bounded operators. One can easily verify that
‖·‖_{F_2} defines a norm on F_2. Fix 0 < θ < 1. Consider the linear space C_θ(A_0, X_0 | A_1, X_1) defined by $C_\theta(A_0, X_0\,|\,A_1, X_1) := \{a \in A_0 + A_1 \mid a = f(\theta), \text{ for some } f \in F_2(A_0, X_0\,|\,A_1, X_1)\}$.
and define, for each a ∈ C θ (A 0 , X 0 | A 1 , X 1 ), the quantity
$\|a\|_\theta := \inf\big\{\|f\|_{F_2} \mid a = f(\theta),\ f \in F_2(A_0, X_0\,|\,A_1, X_1)\big\}$. Lemma 19. The mapping a ↦ ‖a‖_θ is a norm on C_θ(A_0, X_0 | A_1, X_1).
In order to prove Lemma 19 we rely on the following basic fact (and at least implicitely well-known): Lemma 20. Fix some 1 ≤ p < ∞ and let Z be a Banach space. Suppose F : S → Z is a bounded continuous function which is analytic in S 0 such that the functions t → F (j + it) belong to the space L p (R,Z).
Then, for any z ∈ S 0 , we have
$F(z) = -\frac{1}{2\pi i}\int_{\mathbb R} \frac{F(it)}{it - z}\,dt + \frac{1}{2\pi i}\int_{\mathbb R} \frac{F(1+it)}{1+it-z}\,dt$. (22)
In particular, for any θ ∈ (0, 1),
$\|F(\theta)\|_Z \lesssim_{\theta,p} \max_{j=0,1}\Big(\int_{\mathbb R}\|F(j+it)\|_Z^{p}\,dt\Big)^{1/p}$. (23)
Proof of Lemma 20. Fix some z ∈ S 0 . Consider some arbitrary R > 0 and the curve γ R given by the boundary of the rectangle
[0, 1] × [-R, R], oriented anti-clockwise. For R sufficiently large we have z ∈ [0, 1] × [-R, R]
. By Cauchy's formula we get
F (z) = 1 2πi γ R F (ζ) ζ -z dζ,
and we can rewrite this as
F (z) = - 1 2πi R -R F (it) it -z dt + 1 2πi R -R F (1 + it) 1 + it -z dt + 1 2πi 1 0 F (x + iR) x + iR -z dx - 1 2πi 1 0 F (x -iR) x -iR -z dx. (24)
Note that, the functions t → F (j + it)/ (j + it -θ) belong to the space L 1 (R, Z). Indeed, by Hölder's inequality (since we always have p > 1) we can write
R F (j + it) j + it -z Z dt ≤ R 1 |j + it -z| p dt 1/p R F (j + it) p Z dt 1/p θ,z R F (j + it) p Z dt 1/p < ∞, (25)
with the natural modification in the case where p = ∞.
Also, 1 2πi
1 0 F (x ± iR) x ± iR -z dx Z ≤ 1 |x ± iR -z| F L ∞ (S,Z) → 0, (26)
when R → ∞.
Using (24), (26), letting R → ∞ and using the dominated convergence theorem, we get the representation formula (22). Using (22) and (25), for z = θ, we obtain (23).
Proof of Lemma 19. Clearly, • θ is a seminorm on C θ (A 0 , X 0 | A 1 , X 1 ). It remains to see that, if a θ = 0, for some a ∈ C θ (A 0 , X 0 | A 1 , X 1
), then a = 0. We prove this by showing that
a A 0 +A 1 θ a θ , (27)
for all a ∈ C θ (A 0 , X 0 | A 1 , X 1 ). For this purpose fix a ∈ C θ (A 0 , X 0 | A 1 , X 1 ) and consider a functional λ ∈ (A 0 + A 1 ) * , with λ = 1, such that a A 0 +A 1 ≤ 2λ (a). Consider also a function
f ∈ F 2 (A 0 , X 0 | A 1 , X 1 ), such that f (θ) = a and f F 2 ≤ 2 a θ .
Let us define, for each j = 0, 1, the linear operators Λ j : A j → X j by Λ j (a j ) = λ(a j )e j , for any a j ∈ A j , where e j ∈ X j are some fixed vectors with e j X j = 1. Clearly, for any j = 0, 1,
Λ j (a j ) X j = |λ(a j )| ≤ a j A 0 +A 1 ≤ a j A j ,
for any a j ∈ A j , and we get Λ j ≤ 1.
Using this observation and introducing the function
F : S → C defined by F (z) := λ (f (z)), one can write, max j=0,1 R |F (j + it)| 2 dt 1/2 = max j=0,1 R |λ (f (j + it))| 2 dt 1/2 = max j=0,1 R Λ j (f (j + it)) 2 X j dt 1/2 ≤ f F 2 ≤ 2 a θ < ∞. ( 28
)
This shows, in particular, that the functions t → F (j + it) belong to the space L 2 (R, C). We also see immediately that F is bounded, continuous on S and analytic in S 0 . Hence, by applying Lemma 20 for Z = C and p = 2 (more precisely (23)), using (28), we get
a A 0 +A 1 ≤ 2λ (a) = 2F (θ) max j=0,1 R |F (j + it)| 2 dt 1/2 a θ ,
which proves (27). Now, thanks to Lemma 19, we can define the interpolation space (A 0 , X 0 | A 1 , X 1 ) θ as being the completion of the normed space (C θ (A 0 , X 0 | A 1 , X 1 ) , • θ ).
One can easily see that (A 0 , X 0 | A 1 , X 1 ) θ is an intermediate space:
A 0 ∩ A 1 → (A 0 , X 0 | A 1 , X 1 ) θ → A 0 + A 1 . ( 29
)
The second embedding in (29), follows directly from the inequality (27). In order to see the first embedding, pick a ∈ A 0 ∩ A 1 and consider the function f (z) := exp (z 2 -θ 2 ) a. One can easily check that f (θ) = a, f ∈ F 2 and
a θ ≤ max j=0,1 sup Λ j ≤1 R Λ j f (j + it) 2 X j dt 1/2 ≤ max j=0,1 R f (j + it) 2 A j dt 1/2 ∼ θ a A 0 ∩A 1 .
This gives us that
A 0 ∩ A 1 → C θ (A 0 , X 0 | A 1 , X 1 ) → (A 0 , X 0 | A 1 , X 1 ) θ .
Let us see now that the W-method provides an exact interpolation functor:
Proposition 21. Consider some Banach spaces X 0 , X 1 . Let (A 0 , A 1 ) , (B 0 , B 1 ) be two Banach couples and T : A 0 + A 1 → B 0 + B 1 be a linear operator such that T : A j → B j is bounded for any j = 0, 1, of norm T j→j .
Then, the operator
T : (A 0 , X 0 | A 1 , X 1 ) θ → (B 0 , X 0 | B 1 , X 1 ) θ ,
is bounded and of norm T θ→θ satisfying
T θ→θ ≤ T 1-θ 0→0 T θ 1→1 .
Proof. Without loss of generality we suppose that T j→j > 0, for any j = 0, 1. For the brevity of notation we denote by
• θ , • F 2 , • θ and • F 2 the norms on the spaces (A 0 , X 0 | A 1 , X 1 ) θ , F 2 (A 0 , X 0 | A 1 , X 1 ), (B 0 , X 0 | B 1 , X 1 ) θ and F 2 (B 0 , X 0 | B 1 , X 1 ) respectively. Pick some a ∈ C θ (A 0 , X 0 | A 1 , X 1 ) and fix some ε > 0. Consider a function f ∈ F 2 (A 0 , X 0 | A 1 , X 1 ) such that f (θ) = a and f F 2 ≤ (1 + ε) a θ . The function F (z) := T z-1 0→0 T -z 1→1 T f (z) belongs to F 2 (B 0 , X 0 | B 1 , X 1 )
. Indeed, F is bounded and continuous on S with values in B 0 +B 1 , analytic in S 0 and, for any linear operators Λ j : B j → X j of norm at most 1, we have
R Λ j F (j + it) 2 X j dt 1/2 ≤ R T -1 j→j Λ j • T f (j + it) 2 X j dt 1/2 ≤ sup Λ j ≤1 R Λ j f (j + it) 2 X j dt 1/2 , ( 30
)
for any j = 0, 1, where the supremum is taken over all linear bounded operators Λ j : A j → X j with Λ j ≤ 1. Here, we have used the fact that T -1 j→j Λ j • T : A j → X j is a linear operator of norm at most 1, for any j = 0, 1. Now, by (30), we get
T θ-1 0→0 T -θ 1→1 T f (a) θ = F (θ) θ ≤ F F 2 ≤ f F 2 ≤ (1 + ε) a θ , and letting ε → 0 one obtains, T a θ ≤ T 1-θ 0→0 T θ 1→1 a θ , for any a ∈ C θ (A 0 , X 0 | A 1 , X 1 ). Since, by definition, C θ (A 0 , X 0 | A 1 , X 1 ) is dense in (A 0 , X 0 | A 1 , X 1 ) θ ,
we get the conclusion.
A particular case
In general, computing the interpolation space (A 0 , X 0 | A 1 , X 1 ) θ seems to be a nontrivial task. However, there are some particular cases where an explicit computation is easy.
Let us restrict to the case, where A 0 = X 0 and A 1 = X 1 and let us denote, for simplicity, the space (X 0 , X 0 | X 1 , X 1 ) θ by (X 0 |X 1 ) θ . Also, instead of F 2 (X 0 , X 0 | X 1 , X 1 ) we write F 2 (X 0 |X 1 ) and instead of C θ (X 0 , X 0 | X 1 , X 1 ) we write C θ (X 0 |X 1 ). In this case, formula [START_REF] Runst | Sobolev spaces of fractional order, Nemytskij operators, and nonlinear partial differential equations[END_REF] becomes
f F 2 = max j=0,1 R f (j + it) 2 X j dt 1/2 . ( 31
)
Indeed, for any j = 0, 1, we have sup
Λ j ≤1 R Λ j f (j + it) 2 X j dt 1/2 ≤ R f (j + it) 2 X j dt 1/2
, and sup
Λ j ≤1 R Λ j f (j + it) 2 X j dt 1/2 ≥ R id X j f (j + it) 2 X j dt 1/2 = R f (j + it) 2 X j dt 1/2
, where id X j : X j → X j is the identity mapping on X j .
It turns out that, the space (X 0 |X 1 ) θ coincides with the space (X 0 , X 1 ) θ obtained via the classical complex interpolation method. The proof of this fact is easy, however we give it below for the sake of completeness. Proposition 22. Suppose (X 0 , X 1 ) is a compatible couple of Banach spaces. Then, for any θ ∈ (0, 1), (X 0 |X 1 ) θ = (X 0 , X 1 ) θ , with equivalence of norms.
Remark 23. Implicitly, the embedding (X_0|X_1)_θ → (X_0, X_1)_θ. Proof. Consider some a ∈ (X 0 , X 1 ) θ ∩ C θ (X 0 |X 1 ) and let some f ∈ F 2 (X 0 |X 1 ) be such that
f (θ) = a and f F 2 ≤ 2 a (X 0 |X 1 ) θ .
By [2, Lemma 4.3.2 (ii), p. 93] (or [10, Section 9.4, (ii)]) we have
a (X 0 ,X 1 ) θ ≤ 1 1 -θ R f (iτ ) X 0 P 0 (θ, τ )dτ 1-θ 1 θ R f (1 + iτ ) X 1 P 1 (θ, τ )dτ θ , (32)
where P j (j = 0, 1) are the real Poisson kernels defined by P j (s + it, τ ) := e -π(τ -t) sin πs sin 2 πs + (cos πs -e ijπ-π(τ -t) ) 2 ,
for s ∈ (0, 1), t, τ ∈ R. Note that P j (θ, •) ∈ L 2 (R, R) and by the Cauchy-Schwarz inequality, R f (iτ )
X 0 P 0 (θ, τ )dτ ≤ R f (iτ ) 2 X 0 dτ 1/2 R P 2 0 (θ, τ )dτ 1/2 f F 2 a (X 0 |X 1 ) θ .
In a similar way we get
R f (iτ ) X 0 P 0 (θ, τ )dτ a (X 0 |X 1 ) θ ,
and combining with (32) one obtains
a (X 0 ,X 1 ) θ a (X 0 |X 1 ) θ . (33)
By taking the closure we get (X 0 |X 1 ) θ → (X 0 , X 1 ) θ .
Conversely, if a ∈ (X 0 , X 1 ) θ , then there exists g ∈ F (X 0 , X 1 ) (see [2, Chapter 4] for the standard notation F (X 0 , X 1 )) such that g(θ) = a and max j=0,1
sup t∈R g(j + it) X j ≤ 2 a (X 0 ,X 1 ) θ . ( 34
)
Introduce the function g : S → X 0 + X 1 defined by g(z) := exp(z 2 -θ 2 )g(z), for z ∈ S. We observe that, for any j = 0, 1,
R g(j + it) 2 X j dt 1/2 R e -2t 2 g(j + it) 2 X j dt 1/2 ≤ R e -2t 2 dt 1/2 sup t∈R g(j + it) X j ∼ sup t∈R g(j + it) X j . (35)
Hence, g ∈ F 2 (X 0 |X 1 ), a = g(θ) ∈ C θ (X 0 |X 1 ) and by (34), (35),
a (X 0 |X 1 ) θ a (X 0 ,X 1 ) θ , (36)
We have now (X 0 , X 1 ) θ → (X 0 |X 1 ) θ and Proposition 22 is proven.
An immediate consequence of Proposition 22 is the following useful embedding result:
Corollary 24. Suppose (X 0 , X 1 ) is a compatible couple of Banach spaces. Then, for any Banach space A, we have the embedding
(A ∩ X 0 , X 0 | A ∩ X 1 , X 1 ) θ → A ∩ (X 0 , X 1 ) θ .
Proof. Consider the canonical inclusion ι : A ∩ X 0 + A ∩ X 1 → X 0 + X 1 as a linear bounded operator ι : A ∩ X j → X j and apply Proposition 21. We get
(A ∩ X 0 , X 0 | A ∩ X 1 , X 1 ) θ → (X 0 , X 0 | X 1 , X 1 ) θ = (X 0 |X 1 ) θ .
Since by Proposition 22 we have (X 0 |X 1 ) θ = (X 0 , X 1 ) θ , we now obtain the embedding
(A ∩ X 0 , X 0 | A ∩ X 1 , X 1 ) θ → (X 0 , X 1 ) θ . ( 37
)
Also, using the fact that
(A ∩ X 0 , X 0 | A ∩ X 1 , X 1 ) θ is an intermediate space (see (29)), we have (A ∩ X 0 , X 0 | A ∩ X 1 , X 1 ) θ → A ∩ X 0 + A ∩ X 1 → A,
which together with (37) proves Corollary 24.
Solutions of linear equations
In this subsection we highlight the main strength of the W-method. Namely, we show here how the W-method can be used in order to "interpolate " underdetermined equations. Here we make an essential use of the fact that the Hilbert transform is bounded on spaces of the form L 2 (R, Z), where Z is an U M D space. The U M D property plays here a key role.
Before stating the results in this subsection let us make some (common) notational conventions. The space C l b (R, Z), where l ∈ N, is the space of all the functions f : R →Z for which the k-th derivative f (k) is a continuous and bounded Z-valued function on R for all k ∈ N with k ≤ l. We endow C l b (R, Z) with the norm
f C l b (R,Z) := l k=0 f (k) L ∞ (R,Z) .
Given a measurable function ω : R →(0, ∞) we denote by L 2 (ω, Z) the space of the strongly (Bochner-Lebesgue) measurable functions f : R →Z for which the norm
f L 2 (ω,Z) := R f (t) 2 Z ω(t)dt 1/2 , is finite. When ω(t) = exp (t 2 ) or ω(t) = exp (-t 2 ) the space L 2 (ω, Z) is denoted by L 2 (exp (t 2 ) , Z)
or by L 2 (exp (-t 2 ) , Z) respectively. However, when ω ≡ 1 we prefer to write L 2 (R, Z) instead of L 2 (1, Z).
Boundary values of functions on the strip
Let us recall now some (at least implicitely) well-known facts related to the Hilbert transforms of vector-valued functions. Let Z be a Banach space and consider a function
f ∈ C 4 b (R, Z) ∩ L 2 (exp (t 2 ) , Z).
The Hilbert transform of f is defined by
Hf (t) := 1 π lim ε→0 ε<|t-s|<1/ε f (s) t -s ds = 1 π lim ε→0 ε<|s|<1/ε f (t -s) s dy,
for t ∈ R. As one can immediately check, for such f the above limit exists, for every x ∈ R (the convergence being in the norm of Z). Also, we get that
Hf ∈ C b (R, Z). (38)
Indeed, for every t ∈ R,
Hf (t) Z ≤ 1 π lim ε→0 ε<|t-s|<1 f (s) t -s ds Z + 1 π lim ε→0 1<|t-s|<1/ε f (s) t -s ds Z = 1 π lim ε→0 ε<|t-s|<1 f (s) -f (t) t -s ds Z + 1 π lim ε→0 1<|t-s|<1/ε f (s) t -s ds Z ≤ 2 π f Lip(R,Z) + 1 π f L 2 (exp(t 2 ),Z) f C 3 b (R,Z) + f L 2 (exp(t 2 )
,Z) and hence, Hf (t) is uniformly bounded in Z. The continuity of Hf can be proved in a similar way, by estimating the expression Hf (t 1 ) -Hf (t 2 ), when t 1 , t 2 ∈ R are close to each other.
In what follows we need a vector-valued version of the Plemelj formula. The proof we give is completely similar to the one in the scalar valued case, however, we include it here for completeness (see for instance [19]). Lemma 25. Suppose Z is a Banach space and consider a function f ∈ C 3 b (R, Z) ∩ L 2 (exp (t 2 ), Z). Then, when ε ↘ 0 we have
1 2πi R f (s) s -(t ± iε) ds → ±f (t) + iHf (t) 2 ,
in the norm of Z, uniformly in t ∈ R.
Proof. We only consider the case of the sign "+", the other one being similar. We first show that
1 2πi R f (s) s -iε dy - f (0) 2 - i 2π |s|>ε f (s) -s ds Z f C 1 b (R,Z) ε 1/2 , ( 39
)
for any ε ∈ (0, 1), where the implicit constant does not depend on f or ε. Changing the variables (s = ετ ), this is equivalent to
R f (ετ ) 1 τ -i -1 |τ |>1 (τ ) 1 τ dτ -πif (0) Z f C 1 b (R,Z) ε 1/2 . ( 40
) Notice that R 1 τ -i -1 |τ |>1 (τ ) 1 τ dτ < ∞ and R 1 τ -i -1 |τ |>1 (τ ) 1 τ dτ = πi,
and hence, it remains to show that
R (f (ετ ) -f (0)) 1 τ -i -1 |τ |>1 (τ ) 1 τ dτ Z f C 1 b (R,Z) ε 1/2 .
One can see this by a direct computation. Indeed, the quantity
R (f (ετ ) -f (0)) 1 τ -i -1 |τ |>1 (τ ) 1 τ dτ Z is bounded by R f (ετ ) -f (0) Z 1 τ -i -1 |τ |>1 (τ ) 1 τ dτ [-R,R] f (ετ ) -f (0) Z 1 1 + |τ | dτ + [-R,R] c f (ετ ) -f (0) Z 1 τ 2 dτ , (41)
for any R > 1. Since,
f (ετ ) -f (0) Z ≤ ε f C 1 b (R,Z) |τ |, we get [-R,R] f (ετ ) -f (0) Z 1 1 + |τ | dτ f C 1 b (R,Z) εR. Also, since f (ετ ) -f (0) Z ≤ 2 f C 1 b (R,Z) , we have [-R,R] c f (ετ ) -f (0) Z 1 τ 2 dτ f C 1 b (R,Z) /R.
From (41) we get now,
R f (ετ ) -f (0) Z 1 τ -i -1 |τ |>1 (τ ) 1 τ dτ f C 1 b (R,Z) (εR + 1/R),
where the implicit constant does not depend on ε or R. Setting R = ε -1/2 we obtain (40) and hence (39). Now consider some function f ∈ C 1 b (R, Z) and for each t ∈ R, define f t : R → Z by f t (x) := f (x + t) for all x ∈ R. We can see that
f t ∈ C 1 b (R, Z) and f t C 1 b (R,Z) = f C 1 b (R,Z)
, for any t ∈ R. By a simple change of variables we can also observe that
1 2πi R f t (s) s -iε ds - f t (0) 2 - i 2π |s|>ε f t (s) -s ds Z equals 1 2πi R f (s) s -(t + iε) ds - f (t) 2 - i 2π |t-s|>ε f (s) t -s ds Z .
Hence, by applying (39) to f t and letting ε → 0 we obtain the conclusion.
Let us introduce the some operators that quantify the boundary behaviour of analytic functions on the strip. For each j = 0, 1 let
H j : L 2 exp t 2 , Z ∩ C 3 b (R, Z) → C b (R, Z) ,
be defined by
H j f (t) := -if (t) -(-1) j Hf (t) 2 ,
and
R j : L 2 (exp (t 2 ) , Z) ∩ C 3 b (R, Z) → C b (R, Z), be defined by R j f (t) := ρ j * f (t),
where ρ j : R → C are the bounded functions
ρ j (t) := 1 2πi 1 1 -(-1) j it , for all t ∈ R. It is easy to see that, for f ∈ L 2 (exp (t 2 ) , Z), the quantity R j f is indeed well-defined and R j f ∈ C b (R, Z).
An easy consequence of (38) and Lemma 25 is the following fact:
Lemma 26. Suppose (B 0 , B 1 ) is a compatible couple of Banach spaces. Consider some functions
u j ∈ C 3 b (R, B j ) ∩ L 2 (exp (t 2
) , B j ), j = 0, 1, and define u :
S → B 0 + B 1 by u(z) := - 1 2πi R u 0 (t) it -z dt + 1 2πi R u 1 (t) 1 + it -z dt,
for all z ∈ S 0 , and u (j + it) := H j u j (t) + R j u 1-j (t),
for all t ∈ R. Then, u ∈ C b (S, B 0 + B 1 ).
Proof. Clearly, by (38) (for
Z = B 0 + B 1 ) we have H j u j ∈ C b (R, B 0 + B 1 )
. Also, by Lemma 25 (for Z = B 0 + B 1 ) we have
(-1) j+1 2πi R u j (t) j + it -(j + iτ + (-1) j ε) dt → H j u j (τ ), in B 0 +B 1 uniformly in τ ∈ R, when ε 0. On the other hand, we have R j u 1-j ∈ C b (R, B 0 + B 1
) and, as one can easily see,
(-1) j+1 2πi R u 1-j (t) j + it -(1 -j + iτ + (-1) j ε) dt → R j u 1-j (τ ), in B 0 + B 1 uniformly in τ ∈ R, when ε 0.
Hence, u is approaching its boundary values uniformly. Since u is analytic on S 0 we obtain the conclusion.
Interpolation of equations
Let us illustrate by some examples the fact that, in general, the surjectivity of operators is not preserved by interpolation: Example 1. Consider the operator T s : L 2 (T) × L 2 (T) → L 2 (T), defined by the formula T s (f, g) := f + g, for any (f, g) ∈ L 2 (T). One can easily see that T s : L 2 (T) × L 4 (T) → L 2 (T) and T s : L 4 (T) × L 2 (T) → L 2 (T) are surjective operators however, the operator
T s : (L 2 (T) × L 4 (T), L 4 (T) × L 2 (T)) 1/2 → (L 2 (T), L 2 (T)) 1/2 , (42)
is not surjective. Indeed, if T s in (42) is surjective, then
T s : L 3 (T) × L 3 (T) → L 2 (T)
is surjective, and we get the false embedding L 2 (T) → L 3 (T).
Example 2. Let us consider now another example which is more closely related to the equations we treat in this paper. For any p ≥ 1 let W -1,p (T 2 ) be the spaces of those distributions f that are divergences of L p -vector fields on T 2 and with f (0) = 0 (in general, if Z is a function space on T 2 , we denote by Z the space of those f ∈ Z with f (0) = 0). The norm on
W -1,p (T 2 ) is given by f W -1,p (T 2 ) = inf f 1 L p (T 2 ) + f 2 L p (T 2 ) | f = ∂ 1 f 1 + ∂ 2 f 2 .
We have that div :
L 1 (T 2 ) → W -1,1 (T 2 ) and div : L 3 (T 2 ) → W -1,3 (T 2
) are surjective operators. However, the operator div :
L 2 (T 2 ) → (W -1,1 (T 2 ), W -1,3 (T 2 )) 1/2 , (43)
cannot be surjective. Indeed, since div :
L 2 (T 2 ) → W -1,2 (T 2 ),
the surjectivity of the operator in (43) would imply that
(W -1,1 (T 2 ), W -1,3 (T 2 )) 1/2 → W -1,2 (T 2 ),
which, by duality is equivalent to
W 1,2 (T 2 ) → (W 1,∞ (T 2 ), W 1,3/2 (T 2 )) 1/2 . (44)
Note that (44) is false (see [11, Section 4]).
In what follows we will work in a slightly different setting. Given two pairs of Banach spaces A j → B j , j = 0, 1 and an operator T defined on B 0 + B 1 such that T : A j → T (B j ) is surjective, we study the surjectivity of T : (A 0 , A 1 ) θ → T ((B 0 , B 1 ) θ ). Lemma 28 below gives some sufficient additional conditions under which surjectivity is preserved by the complex interpolation. To state and prove Lemma 28 we need the technical Lemma 27.
We introduce first some notation needed in the statement of Lemma 27. Let ϕ ∈ C ∞ c ([-1, 1] , R) be a function of integral 1 and ε > 0. Define the function ϕ ε by ϕ ε (t) := ε -1 ϕ(ε -1 t), for any t ∈ R. For any ε > 0, and any (other) function g : R → Z taking values in some Banach space Z, we define the function g ε := g * ϕ ε . With this notation we state the following: Lemma 27. Let A, X 0 , X 1 , B 0 , B 1 , E, F be Banach spaces such that A, B 0 , B 1 → E and consider a bounded linear operator T : E → F . Denote A 0 := A ∩ X 0 and A 1 := A ∩ X 1 . Suppose moreover that the following conditions are satisfied:
(i) B 1 → X 1 → X 0 and B 0 → A 0 ;
(ii) A and X 1 have a separable preduals;
(iii) T : A j → F and T : B j → F are bounded for each j = 0, 1 and T (B 1 ) → T (A 1 ).
Then, for each j = 0, 1 we have the following:
For any v j ∈ L 2 (exp(t 2 ), B j ) and any ε > 0 there exists a function
u ε j ∈ L 2 (exp(t 2 ), A j ) ∩ C 3 b (R,A j ), such that T u ε j (t) = T v j,ε (t), (45)
for any in t ∈ R, and satisfying the estimates:
u ε j L 2 (A j ) v j L 2 (B j ) , (46)
and
R 1-j u ε j L 2 (exp(-t 2 ),A 1-j ) v j L 2 (exp(t 2 ),B j ) + δ j0 R 1-j v j L 2 (exp(-t 2 ),B 1-j ) . ( 47
)
where all the implicit constants do not depend on v j and ε.
(Here, v j,ε (t) = v j * ϕ ε (t) and δ j0 is the Kronecker symbol, i.e., we have δ j0 = 1 if j = 0 and δ j0 = 0 if j = 0.) Roughly speaking the conditions (45)-( 47) are describing the fact that the equation T u = T v can be solved efficiently on the boundary of the strip S. The role of Lemma 27 is to transform the easy to state conditions (i)-(iii) into the more technical conditions (45)-(47).
In order to prove Lemma 27 we need some simple facts that are easy consequences of classical inequalities.
Fact 1. Suppose Z is a Banach space and consider some function g ∈ L 2 loc (R, Z). Then, we have:
(i) g ε L 2 (exp(-t 2 ),Z) g L 2 (exp(-t 2 ),Z) , uniformly in ε > 0; (ii) g ε L 2 (exp(t 2 ),Z) g L 2 (exp(t 2 ),Z) , uniformly in ε > 0.
Proof of Fact 1. We prove only item (i), item (ii) being similar. By Minkowski's inequality we have
g ε L 2 (exp(-t 2 ),Z) = R e -t 2 B(0,1) g(t -εs)ϕ(s)ds Z dt 1/2 ≤ B R (0,1) R e -t 2 g(t -εs) 2 Z ϕ(s)dt 1/2 ds = B R (0,1) R e -(t+εs) 2 g(t) 2 Z dt 1/2 ϕ(s)ds R e -(t+εs) 2 g(t) 2 Z dt 1/2 g ε L 2 (exp(-t 2 ),Z) ,
where we have used the fact that e -(t+εs) 2 ∼ e -t 2 , when s ∈ B R (0, 1).
Fact 2. Suppose Z is a Banach space and consider some function g ∈ L 2 (exp(-t 2 ), Z). Then, we have R j g L 2 (exp(-t 2 ),Z) g L 2 (exp(t 2 ),Z) , for any j = 0, 1.
Proof of Fact 2. Fix j ∈ {0, 1}. Note first that if g ∈ L 2 (exp(-t 2 ), Z), then g ∈ L 1 (R, Z).
Using the boundedness of the function ρ j on R, one writes
R j g L 2 (exp(-t 2 ),Z) = R e -t 2 R ρ j (t -s)g(s)ds 2 Z dt 1/2 R e -t 2 R g(s) Z ds 2 dt 1/2 R g(s) Z ds.
By the Cauchy-Schwarz inequality we get
R g(s) Z ds ≤ g L 2 (exp(t 2 ),Z) ,
and Fact 2 is proved.
Let us introduce some more notation. Let Z be a Banach space and fix some N ∈ N * . For any function g ∈ L 1 loc (R, Z) we denote by E N g the conditional expectation of g with respect to the σ-algebra generated by the intervals I k N := [k/N, (k + 1)/N ), where k ∈ Z. In other words, if (g) I is the mean of g on one of these intervals I, i.e., $(g)_I := \frac{1}{|I|}\int_I g(t)\,dt$,
we define the corresponding conditional expectation of g by
$E_N g := \sum_{k\in\mathbb Z} (g)_{I^N_k}\,\mathbf 1_{I^N_k}$.
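For instance, assuming N = 1 and taking the scalar-valued function g = 1_{[0,1/2)}, one gets
\[
E_{1}g=\tfrac12\,\mathbf 1_{[0,1)},
\]
since the mean of g over I_0^1 = [0, 1) equals 1/2, while g vanishes on all the other intervals I_k^1.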
See [12, Chapter 5] for some fundamental properties of the conditional expectation operator E N . Now we can pass to the proof of Lemma 27.
Proof of Lemma 27. For each j = 0, 1 consider some functions v j ∈ L 2 (exp (t 2 ) , B j ). In the case where j = 0 one can simply set
u ε 0 := v 0,ε . Clearly, u ε 0 ∈ L 2 (exp(t 2 ), A 0 ) ∩ C 3 b (R,A 0 ).
It is also clear that the conditions (45), (46) are satisfied thanks to the fact that B 0 → A 0 . Let us verify (47). We have
R 1 u ε 0 L 2 (exp(-t 2 ),A 1 ) ∼ R 1 v 0,ε L 2 (exp(-t 2 ),A) + R 1 v 0,ε L 2 (exp(-t 2 ),X 1 ) , (48)
and it remains to bound each term in the right hand side of (48). Since B 0 → A ∩ X 0 , we have in particular that B 0 → A and hence,
R 1 v 0,ε L 2 (exp(-t 2 ),A) R 1 v 0,ε L 2 (exp(-t 2 ),B 0 ) v 0,ε L 2 (exp(t 2 ),B 0 ) v 0 L 2 (exp(t 2 ),B 0 ) , (49)
where for the second " " we have used Fact 2 and for the third " " we have used Fact 1 (ii).
Since
B 1 → X 1 , R 1 v 0,ε L 2 (exp(-t 2 ),X 1 ) R 1 v 0,ε L 2 (exp(-t 2 ),B 1 ) = (R 1 v 0 ) ε L 2 (exp(-t 2 ),B 1 ) R 1 v 0 L 2 (exp(-t 2 ),B 1 ) , (50)
where for "=" we have used the relation ρ * ϕ ε = ϕ ε * ρ and for the last " " we have used Fact 1 (i). From (48), ( 49) and (50) we obtain (47) in the case j = 0. Now, we deal with the case j = 1. By using the open mapping theorem one gets that, if b ∈ B 1 , then there exists a ∈ A 1 such that T a = T b and a A 1 ≤ C b B 1 for some constant C > 0. As a consequence, for each k ∈ Z we can find some elements
a k N ∈ A 1 with T a k N = T (v 1 ) I k N and such that a k N A 1 ≤ C (v 1 ) I k N B 1 .
Hence, defining u N : R → A 1 by
u 1,N := k∈Z a k N 1 I k N ,
we have
T u 1,N (t) = T E N v 1 (t), (51)
for any t ∈ R, and
u 1,N (t) A 1 E N v 1 (t) B 1 , (52)
uniformly in t ∈ R.
Define now the function u ε 1,N := u 1,N * ϕ ε = (u 1,N ) ε , the convolution being in the t variable. Thanks to (51) we have
T u ε 1,N (t) = T (E N v 1, ) ε (t), (53)
for any t ∈ R.
Let us observe that, when N → ∞,
(E N v 1 ) ε (t) -v 1,ε (t) B 1 → 0, (54)
uniformly in t ∈ R.
Indeed, by Jensen's inequality and [12, Corollary 2, p. 126] we have
(E N v 1 ) ε (t) -v 1,ε (t) B 1 ≤ B R (t,1/ε) (E N v 1 ) ε (s) -v 1,ε (s) B 1 ϕ ε (t -s)ds ≤ B R (t,1/ε) (E N v 1 ) ε (s) -v 1,ε (s) 2 B 1 ϕ ε (t -s)ds 1/2 ε E N v 1 -v 1 L 2 (R,B 1 ) → 0.
Also, one easily observes that the sequence of functions (u ε 1,N ) N ≥1 is equi-continuous and uniformly bounded. Indeed, using the Cauchy–Schwarz inequality,
u ε 1,N (t 1 ) -u ε 1,N (t 2 ) A 1 ≤ R u 1,N (s) A 1 |ϕ ε (t 1 -s) -ϕ ε (t 2 -s)| ds ≤ u 1,N L 2 (R,A 1 ) R |ϕ ε (t 1 -s) -ϕ ε (t 2 -s)| 2 ds 2 , ( 55
)
4 Spectral analysis
In this section we study the solutions of divergence-like equations via L 2 -based Fourier analysis methods. This is done by a slight modification of the ideas of Bourgain and Brézis used in the proof of [START_REF] Bourgain | On the equation div Y = f and application to control of phases[END_REF]Lemma 2]. While in this case the techniques we use are essentially those of Bourgain and Brézis, we give more general existence results that take into account the shape of the Fourier spectrum of solutions. The final results of this section will represent the "1-endpoint" when we apply the W-method (see Lemma 35 and Lemma 36 below).
Symbols with bounded Fourier transform
Let 1 ≤ ℓ ≤ d be some integers and let m : R^d → C be a function. We say that m is an ℓ-BB symbol if the following conditions are satisfied:
(i) there exists a constant C > 0 such that, in the case ℓ < d,
$\int_{\mathbb R^{d-\ell}} \big|\partial_1^{\alpha_1}\cdots\partial_\ell^{\alpha_\ell} m(\xi',\xi'')\big|\,d\xi'' \le \frac{C}{|\xi'|^{\,\ell+|\alpha|}}$, (71)
for all α = (α_1, ..., α_ℓ) ∈ {0,1}^ℓ and all ξ' ∈ (0,∞)^ℓ, and, in the case ℓ = d,
$\big|\partial_1^{\alpha_1}\cdots\partial_d^{\alpha_d} m(\xi)\big| \le \frac{C}{|\xi|^{\,d+|\alpha|}}$, (72)
for all α = (α_1, ..., α_d) ∈ {0,1}^d and all ξ ∈ (0,∞)^d;
(ii) m is an odd function in each of the components ξ_1, ξ_2, ..., ξ_ℓ, i.e., m(ξ_1, ..., ξ_{j-1}, -ξ_j, ξ_{j+1}, ..., ξ_d) = -m(ξ_1, ..., ξ_{j-1}, ξ_j, ξ_{j+1}, ..., ξ_d),
for all 1 ≤ j ≤ , and all ξ 1 , ξ 2 , ...., ξ d ∈ R.
For any integer ν we denote by I ν the interval [2 ν-1 , 2 ν ]. For every k = (k 1 , ..., k ) ∈ Z we consider the positive dyadic box I k := I k 1 × ... × I k and we associate to it the -symmetric set:
s (I k ) := α 1 ,...,α ∈{0,1} ((-1) α 1 I k 1 ) × ... × ((-1) α I k ) ⊂ R .
The next technical Lemma is the basis for all of our results in this section. Its proof consists in slightly adapting some arguments of Bourgain and Brézis (see [5, Lemma 3, p. 404]). Proof. We first prove Lemma 30 in the case where m is a d-BB symbol. In this case we have to prove that
| Σ_{k∈Z^d} ∫_{s_d(I_k)} m(ξ) e^{i⟨ξ,x⟩} dξ | ≲ C, uniformly in x ∈ R^d. (74)
Now, let us notice that whenever a : R d → C, b 1 , ..., b d : R → C are sufficiently smooth functions and J j = [q j , r j ], (q j < r j ), j = 1, ..., d are d intervals, we have
J 1 ×...×J d a (ξ) d j=1 b j (ξ j ) dξ = α∈{0,1} d J 1 ×...×J d (-∇) α a (ξ) d j=1 [b j (ξ j )] α j J j dξ, (76)
where, for each 1 ≤ j ≤ d, the quantity [b j (ξ j )]
α j J j is defined 6 as follows [b j (ξ j )] 0 J j := r j q j b j (t) dt δ r j (ξ j ) and [b j (ξ j )] 1 J j := ξ j q j b j (t) dt,
where δ r j is the Dirac measure on R concentrated in r j . The formula (76) easily follows by induction on d and integration by parts.
Fix some integers k 1 , ..., k d and consider the intervals I k j = 2 k j -1 , 2 k j , j = 1, ..., d. By applying (76) to the functions a = m and b j (ξ j ) = sin (ξ j x j ), we obtain
I k 1 ×...×I k d m(ξ) d j=1 sin (ξ j x j ) dξ = α∈{0,1} d I k 1 ×...×I k d (-∇) α m (ξ) d j=1
[sin (ξ j x j )]
α j I k j dξ. ( 77
)
By a direct computation,
J sin (t • x j ) dt min 4 k j |x j | , 1 |x j | ,
for any subinterval J ⊂ I k j , and consequently, [sin (ξ j x j )]
α j I k j min 4 k j |x j | , 1 |x j | , ( 78
)
for any j = 1, ..., d, any α j ∈ {0, 1} and any ξ j ∈ I k j .
Let us fix some integer 0 ≤ l ≤ d and consider the case of α = (1, ..., 1, 0, ..., 0) ∈ {0, 1} d (l values equal to 1). By (78) we get that the quantity is bounded by
I k 1 ×...×I k d (-∇) α m (ξ) d j=1 [sin (ξ j x j )]
I k 1 ×...×I k l ∂ 1 ...∂ l m ξ 1 , ..., ξ l , 2 k l+1 , ..., 2 k d dξ 1 ...dξ l d j=1 min 4 k j |x j | , 1 |x j | . ( 79
)
Fix some index µ ∈ {1, ..., d}. Using the fact that m is a d-BB symbol one can write (see (72))
I k 1 ×...×I k l ∂ 1 ...∂ l m ξ 1 , ..., ξ l , 2 k l+1 , ..., 2 k d dξ 1 ...dξ l ≤ C I k 1 ×...×I k l 1 2 kµ(d+l) dξ 1 ...dξ l ∼ C 2 k 1 +...+k l 2 kµ(d+l) .
Combining this with (79) we get that
I k 1 ×...×I k d (-∇) α m (ξ) d j=1
[sin (ξ j x j )]
α j I k j dξ C 2 k 1 +...+k l 2 kµ(d+l) d j=1 min 4 k j |x j | , 1 |x j | ,
and hence, for
Γ µ := k = (k 1 , ..., k d ) ∈ Z d | k µ = max 1≤j≤d k j , k∈Γµ I k 1 ×...×I k d (-∇) α m (ξ) d j=1 [sin (ξ j x j )] α j I k j dξ is bounded by C k∈Γµ 2 k 1 +...+k l 2 kµ(d+l) d j=1 min 4 k j |x j | , 1 |x j | = C k∈Γµ 2 k 1 +...+k l 2 kµ(d+l) 2 k 1 +...+k d d j=1 min 2 k j |x j | , 1 2 k j |x j | . (80)
For any k ∈ Γ µ we have k µ = max 1≤j≤d k j . Hence, we can write
2 k 1 +...+k l 2 kµ(d+l) 2 k 1 +...+k d = l j=1 2 k j -kµ d j=1 2 k j -kµ ≤ 1,
and therefore, the right hand side of (80) is at most
C k∈Γµ d j=1 min 2 k j |x j | , 1 2 k j |x j | ≤ C k 1 ,k 2 ,...,k d ∈Z d j=1 min 2 k j |x j | , 1 2 k j |x j | = C d j=1 k j ∈Z min 2 k j |x j | , 1 2 k j |x j | C.
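For completeness, the elementary estimate behind the last "≲" can be written out explicitly (a standard geometric-series computation, added here only as an illustration): for any fixed a > 0,

```latex
\sum_{k\in\mathbb{Z}} \min\!\Big(2^{k}a,\ \frac{1}{2^{k}a}\Big)
 \;=\; \sum_{k\,:\,2^{k}a\le 1} 2^{k}a \;+\; \sum_{k\,:\,2^{k}a>1} \frac{1}{2^{k}a}
 \;\le\; \sum_{m\ge 0} 2^{-m} \;+\; \sum_{m\ge 0} 2^{-m} \;=\; 4,
```

uniformly in a > 0; applying this with a = |x_j| for each j = 1, ..., d gives the uniform bound stated above.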
In other words, we have seen that for any µ ∈ {1, ..., d} and any multiindex α ∈ {0, 1} d of the form α = (1, ...1, 0, ..., 0),
k∈Γµ I k 1 ×...×I k d (-∇) α m (ξ) d j=1
[sin (ξ j x j )]
α j I k j dξ C. (81)
Clearly, (81) remains true for any α ∈ {0, 1} d . Now, observing that Z d is covered by the union of the sets Γ 1 , ...Γ d , one can write
k∈Z d I k 1 ×...×I k d (-∇) α m (ξ) d j=1 [sin (ξ j x j )] α j I k j dξ ≤ d µ=1 k∈Γµ ... C.
This, together with (75) and (77) proves Lemma 30 in the case = d (i.e., (74)). The case where 1 ≤ < d, can be obtained from the case = d as follows.
Suppose that 1 ≤ < d and m is an -BB symbol. Let M be a measurable subset of R d-of finite measure. For each x ∈ R d-define the function m x : R → C by m x (ξ ) := M m(ξ , ξ )e i ξ ,x dξ , for all ξ ∈ R . Thanks to the fact that m satisfies the condition (71), m x is well-defined and it satisfies uniformly in θ the condition (72) as a symbol on R . Indeed,
|∂ α 1 1 ...∂ α m x (ξ )| = M ∂ α 1 1 ...∂ α m(ξ , ξ )e i ξ ,x dξ ≤ M |∂ α 1 1 ...∂ α m(ξ , ξ )| dξ ≤ R d- |∂ α 1 1 ...∂ α m(ξ , ξ )| dξ ≤ C |ξ | +|α| , (82)
for all α = (α 1 , ..., α ) ∈ {0, 1} and all ξ ∈ (0, ∞) . (Note that the final estimate in (82) does not depend on the set M .) Also, m θ is odd in each variable and hence, we have (73). Now, using (72) for m θ , we have Proof. For each n ∈ N, we consider the functions K n defined by
k ∈Z s (I k )×M m(ξ)e i ξ,x dξ = k ∈N s (I k ) M m(ξ , ξ )e i ξ ,x dξ e i ξ ,x dξ = k ∈N s (I k ) m x (ξ ) e i ξ ,x dξ C, uniformly in x ∈ R d ,
K n (ξ) = m(ξ)1 J d n (ξ) for all ξ ∈ R d , where J n := [-2 n , -2 -n ] ∪ [2 -n , 2 n ] and J d n = J n × ... × J n (d times).
It is easy to see that K n are well defined continuous functions. One can also see that K n are uniformly bounded. Indeed, applying Lemma 30 for M = J d- n (we suppose that < d; the case = d is similar) we can write (using the triangle inequality),
|K n (x)| ∼ J n ×J d- n m(ξ)e i ξ,x dξ = k ∈Z s (I k )⊆J n s (I k )×J d- n m(ξ)e i ξ,x dξ ≤ k ∈Z s (I k )×J d- n m(ξ)e i ξ,x dξ C,
uniformly in x ∈ R d , and in n ∈ N. (Here, for "=", we have used the fact that the sets s (I k ) with s (I k ) ⊆ J n are pairwise almost disjoint and they cover J n .) Hence, K n L ∞ C uniformly in n ∈ N. Using the sequential Banach-Alaoglu theorem, we can find some K ∈ L ∞ , with K L ∞ C and such that K n → K in the w * -topology on L ∞ , up to a subsequence.
Consider now some ψ ∈ S . Clearly, for any n,
K n , ψ = m1 J d n , ψ . (83)
Since ψ ∈ L 1 we have K n , ψ → K, ψ up to a subsequence when n → ∞. Also, we have
m1 J d n , ψ → m, ψ ,
and by (83) we get K, ψ = m, ψ .
The uniqueness of K immediately follows from (84). Lemma 31 is proved.
Divergence-like equations in
Ẇ d/2,2
We can now prove our existence result for divergence-like equations in a particular type of critical spectral spaces. We start by an analogue of [5, Lemma 2]. The proof we give below rests on some elaborations of the main ideas used by Bourgain and Brézis in the proof of [5, Lemma 2]. Lemma 31 from the previous subsection will play here an important role.
In what follows let us denote by ∇ σ 2 u the first two components of the "σ-gradient" of u, namely,
∇ σ 2 u := (∂ σ 1 u, ∂ σ 2 u),
where ∂ σ j is the Fourier multiplier ∂ σ j := σ j (∇).
Lemma 33. Let d ≥ 3 be an integer. Consider the set U :
= R d-1 × (0, ∞). If σ : R d → R satisfies (P1),(P2) then, |∇| -1 |∇ σ 2 | 2 u Y * /U ∇ σ 2 u (L 1 +Y * )/U + ∇ σ 2 u 1/2 Y * /U ∇ σ 2 u 1/2 (L 1 +Y * )/U , (85)
for any u ∈ S , where Y := Ẇ d/2,2 .
(For the meaning of the notation Y * /U and (L 1 + Y * )/U see subsection 2.3.)
Proof. Clearly, Y * = Ẇ -d/2,2 . First we show that
R d σ 2 1 (ξ)σ 2 2 (ξ) |ξ| d+2 1 U (ξ) | u(ξ)| 2 dξ ∇ σ 2 u 2 (L 1 +Y * )/U + ∇ σ 2 u Y * /U ∇ σ 2 u (L 1 +Y * )/U , (86)
for any function u ∈ S .
Consider some functions
F 1 , F 2 , h 1 , h 2 , F - 1 , F - 2 ∈ S with spec(F - 1 ), spec(F - 2 ) ⊆ U c such that ∇ σ 2 u = (F 1 , F 2 ) + (h 1 , h 2 ) + (F - 1 , F - 2 ), (87)
and
F 1 L 1 + F 2 L 1 + h 1 Y * + h 2 Y * ≤ 2 ∇ σ 2 u (L 1 +Y * )/U . (88)
We have
R d σ 2 1 (ξ)σ 2 2 (ξ) |ξ| d+2 1 U (ξ) | u(ξ)| 2 dξ = c R d σ 1 (ξ)σ 2 (ξ) |ξ| d+2 1 U (ξ) ∂ σ 1 u(ξ) ∂ σ 2 u(ξ)dξ,
Using this, (87) and the fact that spec(F - j ) ⊆ U c (and hence 1 U (ξ) F - j (ξ) = 0, for j = 1, 2 and for all ξ ∈ R d ), we can write
R d σ 2 1 (ξ)σ 2 2 (ξ) |ξ| d+2 1 U (ξ) | u(ξ)| 2 dξ = I + II, where
I := c R d σ 1 (ξ)σ 2 (ξ) |ξ| d+2 1 U (ξ) F 1 (ξ) F 2 (ξ)dξ,
and II is the sum of a finite number of terms of the form
c R d σ 1 (ξ)σ 2 (ξ) |ξ| d+2 1 U (ξ)g 1 (ξ)g 2 (ξ)dξ, (89)
where each g k : R d → C is one of the functions
h 1 , h 2 , h 1 , h 2 , F 1 , F 2 , F 1 , F 2
and at least one g k is h j or h j for some j ∈ {1, 2}.
One can verify immediately that the symbol m defined by
m(ξ) := σ 1 (ξ)σ 2 (ξ) |ξ| d+2 1 U (ξ), ξ ∈ R d \ {0} ,
satisfies the conditions in Lemma 31. Hence, by a applying Lemma 31 and (88),
|I| ∼ | K * F 1 , F 2 | ≤ K * F 1 L ∞ F 2 L 1 ≤ K L ∞ F 1 L 1 F 2 L 1 ∇ σ 2 u 2 (L 1 +Y * )/U . (90)
(Here, we have used the notation from Lemma 31: K = m.)
In order to estimate II we estimate each of its terms of the form (89). By the Cauchy-Schwarz inequality, we get
R d σ 1 (ξ)σ 2 (ξ) |ξ| d+2 1 U (ξ)g 1 (ξ)g 2 (ξ)dξ ≤ 2 k=1 R d 1 U (ξ) |g k (ξ)| 2 |ξ| d dξ 1/2 , ( 91
)
where we have used the inequality σ 1 (ξ)σ 2 (ξ) |ξ| 2 1, which follows directly from (P1).
Note that, if |g k | = | h j |, for some j ∈ {1, 2}, then, by (88)
R d 1 U (ξ) |g k (ξ)| 2 |ξ| d dξ 1/2 ≤ h j Y * ≤ 2 ∇ σ 2 u (L 1 +Y * )/U . (92)
If |g k | = | F j |, for some j ∈ {1, 2}, then, since F j 1 U = ( ∂ σ j u -h j )1 U , the triangle inequality together with (88) gives One can immediately check that the function
σ 1 σ 2 := (σ 1 • R) 2 -(σ 2 • R) 2
is odd in each of the variables ξ 1 , ξ 2 . Using this, we easily observe that the symbol m defined by m (ξ) := σ 1 (ξ)σ 2 (ξ)
|ξ| d+2 1 U (ξ), ξ ∈ R d \ {0} ,
satisfies the conditions in Lemma 31. Hence, as in (86), we obtain that
R d σ 1 (ξ) 2 σ 2 (ξ) 2 |ξ| d+2 1 U (ξ) | v(ξ)| 2 dξ ∇ σ 2 v 2 (L 1 +Y * )/U + ∇ σ 2 v Y * /U ∇ σ 2 v (L 1 +Y * )/U , (95)
for any function v ∈ S , where ∇ σ 2 v := (∂ σ 1 v, ∂ σ 2 v), and ∂ σ 1 is the Fourier multiplier of symbol σ j , for any j = 0, 1. Since the spaces Y * /U and (L 1 + Y * )/U are invariant under the rotation R, by applying (95) to the function v = u • R t we obtain (by changing the variables) that
R d (σ 1 (ξ) -σ 2 (ξ)) 2 (σ 1 (ξ) + σ 2 (ξ)) 2 |ξ| d+2 1 U (ξ) | u(ξ)| 2 dξ is bounded by ∇ σ 2 u 2 (L 1 +Y * )/U + ∇ σ 2 u Y * /U ∇ σ 2 u (L 1 +Y * )/U , (96)
for any function u ∈ S . By adding up, we get from ( 86) and (96) that With the same methods one can prove an analogue of Theorem 8 when d = 2 and the source space is Ẇ 1,2 (R 2 ): Lemma 36. Consider the numbers δ ∈ (0, π/8) and ε ∈ (0, 2]. Then, for any vector field v ∈ Ẇ 1,2 (R 2 ), with spec(v) ⊆ C δ , there exist a vector field u ∈ L ∞ (R 2 ) ∩ Ẇ 1,2 (R 2 ), with spec(u j ) ⊆ C (1+ε)δ , such that div u = div v,
R d σ(ξ) |ξ| d+2 1 U (ξ) | u(ξ)| 2 dξ ∇ σ 2 u 2 (L 1 +Y * )/U + ∇ σ 2 u Y * /U ∇ σ 2 u (L 1 +Y * )/U , (97)
and u L ∞ ∩ Ẇ 1,2 v Ẇ 1,2 .
Contents
1 Introduction
1.2 A naive interpolation strategy
1.3 The main results
1.4 About the proofs
2 Function spaces
2.1 Sobolev and Besov spaces
2.2 Lorentz-Sobolev spaces
2.3 Some quotient spaces
3 The W-method of complex interpolation
3.1 Construction of the interpolation space
3.2 A particular case
3.3 Solutions of linear equations
3.3.1 Boundary values of functions on the strip
3.3.2 Interpolation of equations
4 Spectral analysis
4.1 Symbols with bounded Fourier transform
4.2 Divergence-like equations in Ẇ^{d/2,2}
5 Solutions in interpolation spaces
5.1 Proof of the main results
5.2 Remark concerning the "third" parameter
Suppose d ≥ 2 is an integer and consider some compactly supported function f ∈ L^d(R^d). Standard Calderón-Zygmund theory shows that there exists a vector field u
for any vector field v ∈ Ḃ^{d/r,r}_1(R^d) there exists a vector field u ∈ L^∞(R^d) ∩ Ḃ^{d/r,r}_1(R^d) such that div u = div v, and ‖u‖_{L^∞ ∩ Ḃ^{d/r,r}_1} ≲ ‖v‖_{Ḃ^{d/r,r}_1}.
on R d , where D := ∪ d j=1 D j . Let us recall now some standard notation concerning the Fourier multipliers. To a scalar valued function m ∈ L 1 loc (R d \{0}, R) we associate the Fourier multiplier m(∇) defined by the relation m(∇)f (ξ) := m(ξ) f (ξ), on R d , for any Schwartz function f whose Fourier transform f is compactly supported and vanishing in a neighborhood of 0. In most of the cases one can extend the meaning of m(∇) as follows.
Let
Y be a Banach function space on R d and let D ⊆ R d be a measurable set. Relative to the set D we define the closed subspace Y D of Y by Y D := {f ∈ Y | spec(f ) ⊆ D} , the norm being the one induced by Y . For simplicity we will denote the quotient space Y /Y D c by Y /D. In the case where f ∈ Y is a Schwartz function, we define its Y /D-seminorm by
5 ,
5 Lemma 3, p. 404]): Lemma 30. Let d ≥ 1 be an integer and let m : R d → C be an -BB symbol for some 1 ≤ ≤ d and some constant C. Then, k ∈Z s (I k )×M m(ξ)e i ξ,x dξ C, for any measurable subset M ⊆ R d-of finite measure, uniformly in x ∈ R d and in M . (By convention, if = d, then s (I k ) × M is replaced by s d (I k ).)
e
uniformly in x = (x 1 , ..., x d ) ∈ R d .Since m is odd in the variables ξ 1 , ..., ξ d , for any k ∈ Z d one can write,s d (I k ) m(ξ)e i ξ,x dξ = s d (I k ) iξ j x j dξ = (2i) j x j ) dξ.
α j I k j dξ 6
6 Strictly speaking we should write [b j (•)] αj Jj (ξ j ) instead of [b j (ξ j )] αj Jj . However, for simplicity we prefer here to use the last notation.
which proves Lemma 30. By applying Lemma 30 we can deduce the following useful fact: Lemma 31. Let d ≥ 1 and 1 ≤ ≤ d be some integers, and let m : R d → C be an -BB symbol satisfying condition (71) or (71) (when = d) for some constant C. Then, there exists some kernel K ∈ L ∞ R d such that K(ξ) = m(ξ) on R d and K L ∞ C, the implicit constant not depending on C. Remark 32. The meaning of Lemma 31 is that there exists a unique function K ∈ L ∞ , such that K L ∞ C and K, ψ = m, ψ , for any function ψ ∈ S .
R d 1 U 2 |ξ| d dξ 1 / 2 = R d 1 U 2 dξ 1 / 2 ≤
12121212 (ξ) |g k (ξ)| (ξ) |ξ| -d ∂ σ j u(ξ) -h j (ξ) ∂ σ j u Y * /U + h j Y * ≤ ∇ σ 2 u Y * /U + 2 ∇ σ 2 u (L 1 +Y * )/U ≤ 3 ∇ σ 2 u Y * /U .(93)Since |g k | = | h j | (for some j) for at least one k, we get from (91), (92) and (93) thatR d σ 1 (ξ)σ 2 (ξ) |ξ| d+2 1 U (ξ)g 1 (ξ)g 2 (ξ)dξ ∇ σ 2 u Y * /U ∇ σ 2 u (L 1 +Y * )/U . Hence, |II| ∇ σ 2 u Y * /U ∇ σ 2 u (L 1 +Y * )/U .(94)By (93) and (94) we get (86).Consider the rotationR (ξ) = (ξ 1 -ξ 2 , ξ 1 + ξ 2 , ξ 3 , ..., ξ d ),for any ξ ∈ R d . Consider now the functionsσ 1 := σ 1 • R -σ 2 • R, and σ 2 := σ 1 • R + σ 2 • R.
1 Uσ 2 u 2 (L 1 1 U 2 Y 2 (L 1 2 Y 2 Y 2 (L 1 |∇| - 1 |∇| - 1 2 L 2 ξ∼ |ξ| - 1 d 2 L 2 ξ∼ |ξ| -1 |ξ| 2 1 D 2 L 2 ξ= |ξ| 1 D 2 L 2 ξ∼ |ξ| 1 D|∇| - 1
12112212221112212212212211 where σ(ξ) := σ 2 1 (ξ)σ 2 2 (ξ) + (σ 1 (ξ) -σ 2 (ξ))2 (σ 1 (ξ) + σ 2 (ξ))2 , for any ξ ∈ R d . Since for any real numbers a, b we havea 2 b 2 + (a -b) 2 (a + b) 2 ∼ a 4 + b 4 ,we obtain σ(ξ) ∼ σ 4 1 (ξ) + σ 4 2 (ξ), for all ξ ∈ R d , and now (97) gives us (ξ) | u(ξ)| 2 dξ ∇ +Y * )/U + ∇ σ 2 u Y * /U ∇ σ 2 u (L 1 +Y * )/U .(98)Note that by[START_REF] Pisier | Martingales in Banach Spaces[END_REF] (with D = U ), we have|∇| -1 |∇ σ 2 (ξ) | u(ξ)| 2 dξ,and together with (98) this concludes the proof of Lemma 33.By composition with rotations and by adding up inequalities of the form (85), Lemma 33 easily implies the following: Lemma 34. Let d ≥ 3 be an integer. Suppose that the family of functions G 1 , ..., G d : R d → R d-1 is adapted to the family of half-spaces D 1 , ..., D d ⊂ R d . For any j ∈ {1, ..., d} we have|∇| -1 |G j (∇)| 2 u Y * /U G j (∇)u (L 1 +Y * )/D j + G j (∇)u 1/* /D j G j (∇)u 1/+Y * )/D j ,for any u ∈ S , whereY := Ẇ d/2,2 .(For the properties of the functions G 1 , ..., G d and their relation with the half-spaces D 1 , ..., D d see the subsection 1.3 in the introduction of this paper.)We can now state and prove the existence results of this subsection:Lemma 35. Let d ≥ 3 be an integer. Suppose that the family of functions G 1 , ..., G d : R d → R d-1is adapted to the family of half-spaces D 1 , ..., D d ⊂ R d . Then, for any system of (d -1)-vector fields (v j ) j=1,..,d with v j ∈ Ẇ d/2,2 (R d ) and spec(v j ) ⊆ D j , there exists a system of (d -1)-vector fields (u j ) j=1,..,d , withu j ∈ L ∞ (R d ) ∩ Ẇ d/2,2 (R d ) and spec(u j ) ⊆ D j , such that d j=1 G j (∇) • u j = d j=1 G j (∇) • v j , and d j=1 u j L ∞ ∩ Ẇ d/2,2 d j=1 v j Ẇ d/2,2 .Proof. As before, let Y be the space Ẇ d/2,2 . According to Lemma 34, for any ϕ ∈ S , we have|∇| -1 |G j (∇) |G j (∇) ϕ Y * /D j G j (∇) ϕ (L 1 +Y * )/D j + G j (∇) ϕ 1/* /D j G j (∇) ϕ 1/2 (L 1 +Y * )/D j = G j (∇) ϕ (L 1 +Y * )/D j +(ε 1/2 G j (∇) ϕ 1/* /D j )(ε -1/2 G j (∇) ϕ 1/+Y * )/D j ) ≤ G j (∇) ϕ (L 1 +Y * )/D j +ε G j (∇) ϕ Y * /D j + ε -1 G j (∇) ϕ (L 1 +Y * )/D j ,for any ε ∈ (0, 1) and any j ∈ {1, ..., d}. By adding up these inequalities we getd j=1 |G j (∇) |G j (∇) ϕ Y * /D j ε d j=1 G j (∇) ϕ Y * /D j +ε 1-N d j=1 G j (∇) ϕ (L 1 +Y * )/D j .(99)Since the family G 1 ,...,G d is adapted to D 1 ,...,D d , (see[START_REF] Calderón | Intermediate spaces and interpolation, the complex method[END_REF]) we easily getd j=1 |G j (ξ)| β 1 D j (ξ) ∼ β |ξ| β 1 D (ξ),(100)on R d , for any β > 0. Using now (100), with β = 2, we can writed j=1 |G j (∇) |G j (∇) ϕ Y * /D j ∼ d j=1 |ξ| -1 |G j (ξ) | 2 1 D j (ξ) | ϕ (ξ) | |ξ| -d/j=1 |G j (ξ) | 2 1 D j (ξ) | ϕ (ξ) | |ξ| -d/(ξ) | ϕ (ξ) | |ξ| -d/(ξ) | ϕ (ξ) | |ξ| -d/2way (by (10)) we can writed j=1 G j (∇) ϕ Y * /D j ∼ d j=1 |G j (ξ) |1 D j (ξ) ϕ (ξ) | |ξ| -d/2 L 2 ξ ∼ d j=1 |G j (ξ) |1 D j (ξ) ϕ (ξ) | |ξ| -d/(ξ) | ϕ (ξ) | |ξ| -d/2 |G j (∇) |G j (∇) ϕ Y * /D j ∼ d j=1 G j (∇) ϕ Y * /D j ,and together with (99) yields d j=1 G j (∇) ϕ Y * /D j ε d j=1 G j (∇) ϕ Y * /D j + ε -1 d j=1 G j (∇) ϕ (L 1 +Y * )/D j . Choosing ε sufficiently small one can write d j=1 G j (∇) ϕ Y * /D j d j=1 G j (∇) ϕ (L 1 +Y * )/D j .By duality (using the closed range theorem) we get Lemma 35.
for any t ∈ R and any nonnegative integer k ≤ d + 2, where C is a positive constant depending only on d. It follows that (see[START_REF] Muscalu | Classical and Multilinear Harmonic Analysis[END_REF] Theorem 8.2 , p. 197]) for any a ∈ (1, ∞), the norm of the operator |∇| it : L a → L a satisfies
|∇| it L a →L a
a C(1 + |t|) d+2 .
This implies, via the real method of interpolation (see for instance [2, Theorem 5.2.1 (2), p. 109 ]), that for any a ∈ (1, ∞) and any b ∈ [1, ∞], we have
1 ) θ was already proved and used by Peetre in a different context (see [22, Lemme 1.1]). Both proofs, the one that we give below and Peetre's, are easy consequences of the ideas of Calderón from [10, Section 9.4].
In this paper the distributions with d -1 components will be called (d -1)-vector fields.
Here, "W" stands for "weak".
The results of this subsection will be used only in the proof of Theorem 11.
Here, BB stands for "Bourgain-Brézis".
Here we use the fact that if the dual X * of a Banach space X is separable, then, X is separable (see for instance [14, Theorem 4.6-8, 245]).
for any t 1 , t 2 ∈ R and it remains to notice that, by (52), we have
Since (u ε 1,N ) N ≥1 is equi-continuous and uniformly bounded sequence, and A, X 1 have a separable preduals, there exists some u ε 1 ∈ L 2 (R, A 1 ) such that u ε 1,N (t) → u ε 1 (t) in the w * -topology on A and in the w * -topology on X 1 , up to a subsequence, for all t ∈ R. By an argument similar to the one used in (55), it is easy to see that one can choose u ε 1 ∈ C 3 b (R, A 1 ). Thanks to (53) and (54) one can write T u ε 1 (t) = T v 1,ε (t), for all t ∈ R, which proves (45). In order to verify (46) one uses the Young inequality and (52):
lim inf
Observe now that, as in (56), we get
where for the second " " we have used Fact 1 (ii). In particular, we have that
Let us verify now that u ε 1 also satisfies (47). We can write
where for the first " " we have used Fact 2 and for the second " " we have used the embedding X 1 → X 0 (that implies A 1 → A 0 ). Combining (57) with (58) we get
and (47) is proved in the case j = 1. Lemma 27 is proved.
We are now able to state and prove the main result of subsection 3.3:
Lemma 28. Fix some number θ ∈ (0, 1). Let the Banch spaces A, X 0 , X 1 , A 0 , A 1 , B 0 , B 1 , E, F and the operator T be as in Lemma 27. Moreover, we assume that X 0 , X 1 and B 1 are U M D spaces and that (X 0 , X 1 ) θ has a separable predual. Then, for any b ∈ (B 0 , B 1 ) θ there exists some a ∈ A ∩ (X 0 , X 1 ) θ such that T a = T b,
Since we can replace (if necessary) v by exp(z 2 -θ 2 )v, we can assume without loss of generality that v j ∈ L 2 (exp (t 2 ) , B j ), where v j (t) := v(j + it), for all t ∈ R. Define, for each ε ∈ (0, 1), the function v ε on S by v ε := v * ϕ ε , as in the statement of Lemma 27.
Note that, thanks to Lemma 26,
for all t ∈ R. From this identity, since B 1 is an U M D space, we can write
where for the second " " we have used Young's inequality and for the last " " we have used (60). In particular, we get
By Lemma 27 there exist some functions 45), ( 46), (47). Define ũε :
for all z ∈ S 0 , and ũε
We show that, for any j = 0, 1, we have the estimate:
for any bounded linear operator Λ j : A j → X j with Λ j ≤ 1, the implicit constant not depending on Λ j .
Using (63) we write:
Since the spaces X j have the U M D property, we get
where for the third "≤" we have used (46) and for the last " " we have used (60). It remains to estimate the second term in the right hand side of (65):
where for the first " " we have used (47) and for the last " " we have used (60) and (61). By (65), (66), (67) we have proved (64). Hence, we have obtained
This implies that for a ε := ũε (θ) we have
Note that, by Proposition 22, (B 0 |B 1 ) θ = (B 0 , B 1 ) θ , and by Corollary 24, (A 0 , X 0 | A 1 , X 1 ) θ → A ∩ (X 0 , X 1 ) θ . From this and (68) we get
We observe that for b ε := v ε (θ) we have
Indeed, by applying Lemma 20, (62), the continuity of T : E → F and (45), one gets
We let ε → 0. Since v j,ε → v j in L 2 (R, B j ), for each j = 0, 1 we get that b ε → b in (B 0 |B 1 ) θ = (B 0 , B 1 ) θ . Also, thanks to (69), since A and(X 0 , X 1 ) θ have separable preduals, by the sequential Banach-Aloglu theorem, there exists some a ∈ A ∩ (X 0 , X 1 ) θ such that a 1/n → a (n ∈ N * ) in the w * -topology on A and in the w * -topology on (X 0 , X 1 ) θ , up to a subsequence. Also, by (69) we get
It follows that T b 1/n → T b and T a 1/n → T a in the w * -topology of F , up to a subsequence. Consequently, by (70) we have T a = T b.
θ we can use the above compactness argument in order to obtain a solution for any b ∈ (B 0 |B 1 ) θ . Lemma 28 is proved.
Remark 29. One can easily adapt Lemma 27 and Lemma 28 to the more general case of the equations T u = Lv, where T, L : E → F are possibly different operators. For this we have to change the conditions (i) and (iii) in Lemma 27 by (i') B 1 → X 1 → X 0 and there exists an operator L T : E → F such that L T : B j → A j is bounded for each j = 0, 1 and T • L T = L on B 0 ;
(iii') T : A j → F and L : B j → F are bounded for each j = 0, 1 and L(B 1 ) → T (A 1 ).
The modifications needed for the corresponding proofs are minor. However, for the sake of simplicity, we preferred to present the proofs only in the case T = L.
(For the meaning of C δ see subsection 1.3.) Sketch of the proof. First we establish the result for ε = 2. Let us denote by D(C δ ) the set defined by
the union being taken after all the dyadic boxes
) that are included in C δ and are maximal (with respect to the inclusion relation) with this property. One can find a finite number of rotations R 1 , ..., R n : R 2 → R 2 such that
for some sufficiently large r > 0. As in the proof of (86) we get
for any u ∈ S , where Y = Ẇ d/2,2 . As in the proof of Lemma 31, by Lemma 30 (applied in the case = d = 2) one can see that there exists K ∈ L ∞ such that
with the same meaning as in Lemma 31. We use then the same method as in (90). The rest of the argument remains essentially the same as the one used in the proof of (86).
Note that, by (104) we get (by composition with rotations) that
for all j ∈ {1, ..., n}, where R l j (ξ) is the l-th coordinate of the vector R j (ξ). Using the (103
for all j ∈ {1, ..., n}, and one can write
for all j ∈ {1, ..., n}. One can easily check that
for all ξ ∈ R 2 . Hence, by adding up the inequalities (105) we get
which can be rewritten as
By duality (as in the proof of Lemma 35) we obtain that for any vector field
Thanks to (103) this immediately implies Lemma 36 in the case ε = 2. To obtain the result for any ε ∈ (0, 1) simply cover the symmetric cone C δ with a small union of rotated copies of the symmetric cones C δ/n for some large integer n > 0. It suffices now to apply the result corresponding to the case ε = 2 to each rotated copy of C δ/n and then add the obtained solutions.
Solutions in interpolation spaces
Proof of the main results
We now discuss some immediate applications of the W-method to the divergence-like equation. First we formulate a general result: Theorem 37. Let X, X, Y , Ỹ , F be Banach function spaces on R d satisfying the embeddings X → X, Y → Ỹ → X and consider a bounded linear operator T : X → F . Suppose moreover that the following conditions are satisfied:
(ii) T is bounded from X to F and from Y to F , and
Fix some θ ∈ (0, 1). Then, for any vector field v ∈ (L ∞ ∩ X, Y ) θ , there exists a vector field
Proof. We apply Lemma 28 for the Banach spaces X 0 = X,
Y and the operator T . One can easily observe that in this setting the conditions of Lemma 28 (part of them are explicitly stated in Lemma 27) are satisfied. Indeed, in order to verify the condition (i) in Lemma 27 it suffices to see that X, Y → X and hence,
The space Ỹ is reflexive (since it has the U M D property) and hence, it has a separable predual 7 . Also, A = (L 1 ) * has a separable predual. Thus, the condition (ii) in Lemma 27 is verified. Condition (iii) in Lemma 27 is ensured by condition (ii) in Theorem 37.
Notice that X ∩ Ỹ is a separable space that is dense in ( X, Ỹ ) θ (see [2, Theorem 4.2.2 (a), p. 91]). It follows that ( X, Ỹ ) θ is a reflexive and separable space and consequently it has a separable predual. We also have by (i) that the spaces X, Y , Ỹ have the U M D property. We can apply now Lemma 28 and we get Theorem 37.
Let us see now that Theorem 37 above implies Theorem 10, Theorem 11, Theorem 9 and Theorem 8. In what follows we will ignore the space F since it is easy for the operators T we use to find a space F sufficiently large such that T : X → F (one can simply set F of the form F = B -a,b ∞ for some a, b ∈ (1, ∞), with a sufficiently large).
Proof of Theorem 10. Let us consider some parameter r ∈ [2, ∞) such that 1/p = (1 -θ)/r + θ/2. We apply Theorem 37 for the Banach spaces X = Ḃd/r,r . Hence, X → X and Y → Ỹ → X. Also, by Mazya's theorem (Theorem 3 in the case p = q = 2) we have T (Y ) → T (L ∞ ∩ Ỹ ), for the operator T = div. Now the hypotheses of Theorem 37 are satisfyed.
Observe that, since X = Ḃd/r,r
and we can write
2
Since we also have ( Ḃd/r,r 2 , Ḃd/2,2
2
it remains to apply Theorem 37 and Theorem 10 is proved.
Proof of Theorem 11. As in the proof of Theorem 10 let us consider r ∈ [2, ∞) such that 1/p = (1 -θ)/r + θ/2. We apply Theorem 37 for the Banach spaces
and Y = Ỹ = Ẇ d/2,2 . It remains to verify that the hypotheses of Theorem 37 are satisfyed. Indeed, by the monotonicity properties of the Lorentz spaces we also have Ẇ d/r L r,1 → Ẇ d/r L r,2 , i.e., X → X. By Lemma 16 we get Ẇ d/2,2 → X, i.e., Y = Ỹ → X. Also, by Mazya's theorem (Theorem 3 in the case p = q = 2) we have T (Y ) → T (L ∞ ∩ Ỹ ) and now the hypotheses of Theorem 37 are satisfyed.
By Lemma 17 we have Ẇ d/2,2 → X and X = Ẇ d/r L r,1 → L ∞ and hence L ∞ ∩ X = Ẇ d/r L r,1 . From this and Lemma 18 we get
Lemma 18 also gives
and now one can easily conclude the proof of Theorem 11 by a direct application of Theorem 37.
Proof of Theorem 9. The proof is very similar to the one of Theorem 10. Suppose p, r, θ are as in the proof of Theorem 10. We put
Now, the operator T is the operator formaly defined for the systems of (d -1)-vector fields by the formula
where v 1 , ..., v d ∈ S are (d -1)-vector fields. In order to verify the item (ii) in Theorem 37 we use Lemma 35 instead of Mazya's theorem. It remains to apply Theorem 37 and to observe that, by the retraction method, (( Ḃd/r,r
for any j ∈ {1, ..., d}.
Proof of Theorem 8. Again, suppose p, r, θ are as in the proof of Theorem 10. We put
) C (1+ε)δ . The operator T is the usual divergence operator T = div. It remains to apply Theorem 37 and to observe that, by the retraction method, (( Ḃd/r,r
Both of these equalities rest on the fact that the Fourier projections P C δ on the sets C δ are a sum of two rotated and dilated Riesz projections. Hence, P C δ is bounded on each of the spaces Ḃd/r,r Let us see now that Theorem 9 implies Theorem 7. For this we need only some elementary geometry. Suppose d ≥ 3 and consider the unit vectors ν j := (1, ..., 1, 2, 1, ..., 1)/ √ d + 1 in R d (with value 2 on the j-th position), j ∈ {1, ..., d}. For each j ∈ {1, ..., d} define the half-spaces
and let D be the set D := D 1 ∪ ... ∪ D d . By p D j (ξ) we denote the orthogonal projection of the point ξ on the support hyperplane Π j of D j :
for any j ∈ {1, ..., d}.
Consider the function σ : R d → R defined by σ(ξ) = ξ 1 . We can immediately see that this σ satisfies the conditions (P1), (P2). We have now σ j (ξ) = ξ j , for all j ∈ {1, ..., d}. Consider G 0 := (σ 1 , ..., σ d-1 ) and let G j be obtained by composing G 0 with a rotation that transforms
for all j ∈ {1, ..., d}. In order to see that the family of functions G 1 , ..., G d is adapted to the family D 1 , ..., D d of half-spaces it remains to prove the following equivalence that corresponds to (10): Lemma 38. With the above notation we have
Proof. Observe that |ν i -ν j | < 1 and | ν i , ν j | < 1 for any i, j ∈ {1, ..., d}, i = j. It follows from this that we can find some sufficiently small number α ∈ (0, 1) such that
for any i, j ∈ {1, ..., d}, and
Let c ∈ (0, 1) such that √ 1 -c equals the left hand side of (110). In order to prove Lemma 38 it suffices to see that, for any ξ ∈ R d ,
with
then the left hand side of (111) is at least
and we are done. Else, we have | ξ, ν 1 | > √ 1 -α |ξ| and decomposing ξ as ξ = βν 1 + w, for some β ∈ R and w ∈ R d with w⊥ν 1 , we can rewrite this inequality as
or, equivalently,
Now, note that, using (110)
As above we get Remark 39. It would be interesting if one could replace the set ∆ = R d \(-∞, 0) d in Theorem 7 with the set (-∞, 0) d . This will give a stronger version of Theorem 7. It is not known whether this stronger version is true or not. The methods used in this paper seem to not apply in the case of the set (-∞, 0) d .
Remark concerning the "third" parameter
Let us consider here the problem related to the nonoptimality of the third parameter. For the sake of simplicity we are concerned here only with the divergence equation. Similar observations can be made for the case of the divergence-like equations.
Recall that, in Theorem 10 in contrast to Theorem 3, we lose some control of the parameter q of the Besov spaces involved: we start with a source term in Ḃd/p,p q and we end up with a solution in Ḃd/p,p 2 which, despite the fact that it has the "right" differential regularity (the exponents p and s = d/p are the right ones), it is a space strictly larger than Ḃd/p,p q . This is due to the fact that in order to easily compute the source space we have chosen X such that X → L ∞ . Consequently, we have to take X strictly larger than X. Indeed, choosing X = X the hypotheses of Theorem 37 imply that Ẇ d/2,2 → X → L ∞ , however, Ẇ d/2,2 is not embedded in L ∞ . By the method we used to prove Theorem 10 it is unlikely to improve the solution space to L ∞ ∩ Ḃd/p,p q . A similar remark can be made for Theorem 11.
When we use Theorem 37, in order to not lose any regularity, we would like to have that X
Since Ẇ d/2,2 → X we cannot impose the condition X → L ∞ . Apart from this situation, there are other natural candidates for the space X that one may expect to satisfy (113). However, this condition (113) is too restrictive. For instance we cannot pick X = Ḃd/r,r r for some r ∈ (2, ∞). Indeed, in this case, we have the following negative result: Proposition 40. Let r ∈ (2, ∞) and θ ∈ (0, 1) be some fixed parameters. Then,
(The corresponding norms on the two interpolation spaces are not equivalent.)
Proof. Suppose by contradiction that we have
where 1/p = (1 -θ) /r + θ/2. On the other hand, since p > 2, there exists some η ∈ (0, 1) such that ( Ḃd/p,p p , Ḃd,1
This, together with (114) and T. Wolff's interpolation theorem (see [START_REF] Wolff | A note on interpolation spaces[END_REF]Theorem 2]) implies that, there exists some θ 1 ∈ (0, 1) such that
Consider some function ψ ∈ C ∞ c (B(0, 1)) such that ψ ≡ 1 on B(0, 1/2) and define the operator T ψ by T ψ f = f -f * ψ, for any Schwartz function f . We extend T ψ by continuity to the spaces L ∞ ∩ Ḃd/r,r r , Ḃd,1 ) must be embedded in L ∞ . In other words,
for any Schwartz function f . Young's inequality and the fact that ψ is Schwartz gives us that
which together with (115) yields
In other words we have obtained the embedding W d/2,2 → L ∞ , which is false.
Open problem. Suppose D = R d-1 × (0, ∞) and X is a function space on R d such that ) θ vector field)?
If the answer to this question is yes, then, by using Theorem 37 we would be able to provide a version of Theorem 7 with no loss of regularity in the third parameter: (And a similar statement with Ḃd/p,p q in place of Ḟ d/p,p q .)
One can formulate similar conjectures corresponding to the statements of Theorem 9 and Theorem 8. |
04110682 | en | [
"phys.phys.phys-data-an",
"math.math-oc"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04110682/file/2021-06-10_Dreo__EAF.pdf | Keywords:
Algorithm to compute quantiles of 2D (and 3D) distributions.
• Well-founded statistics on top of them.
• Useful for multi-objective optimization problems.
Figure: Expected RunTime ECDF [HAB+16], Expected Quality ECDF, and Quality-Time Empirical Attainment Function levelsets [LIS14], illustrated in the (quality q, time t, probability P) space.
• EQ- and ERT-ECDF are trivially computed (see the sketch after this list):
• fix a target quality (resp. time),
• traverse all runs across time (resp. quality),
• compute the ratio of better-than-target.
• QT-EAF requires a more complex algorithm [GdFF02].
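As an illustration of the ECDF computation described in the list above, here is a minimal Python sketch (our own names and data layout, not those of the reference implementation; each run is assumed to be stored as its best-so-far quality per time step, for a minimization problem):

```python
import numpy as np

def ert_ecdf(runs, target_quality, budget):
    """ERT-ECDF: for each time step, the fraction of runs that have already
    reached `target_quality` (minimization).
    `runs` is a list of 1D arrays, runs[i][t] = best quality found by run i up to step t."""
    hits = np.zeros(budget)
    for best_so_far in runs:
        reached = np.flatnonzero(np.asarray(best_so_far)[:budget] <= target_quality)
        if reached.size:
            hits[reached[0]:] += 1        # attained from the first hitting time onward
    return np.arange(budget), hits / len(runs)

def eq_ecdf(runs, time_budget, quality_grid):
    """EQ-ECDF: for each quality level, the fraction of runs whose best value
    after `time_budget` steps is at least as good (i.e. lower or equal)."""
    best = np.array([np.asarray(r)[:time_budget].min() for r in runs])
    quality_grid = np.asarray(quality_grid)
    return quality_grid, np.array([(best <= q).mean() for q in quality_grid])
```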
Part 2
Quantiles on joint distributions
Closeness
• A set of optimization trajectories forms a closed-set.
• Bounded by: • Which are essentially equivalent to quantiles.
• optimal solution, q ∈ [0, bound[, • time budget, t ∈ [1, budget[, • P ∈ [0,
• Because the sample is finite, there is a finite number of levelsets.
• At most r level sets for r input sets.
• (Assuming minimization on both axes).
• "Peel" level sets.
• Sweep one axis in increasing values,
• sweep the other in decreasing values.
• Essentially computes incremental Pareto-optimal archives (see the sketch after this list).
• O(m log m + nm), m points in n runs (asymptotically optimal). • Output surfaces instead of level sets.
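The following Python sketch illustrates the 2D sweep described above in its simplest, non-optimal form; it does not reproduce the O(m log m + nm) algorithm or the IOHexperimenter code, and it assumes each run is given as a non-dominated list of (time, quality) points with both axes minimized:

```python
def eaf_levelset(runs, k):
    """k-th levelset of the empirical attainment function, 2D case, both axes minimized.
    `runs` is a list of non-dominated lists of (time, quality) points.
    Returns the staircase of minimal (time, quality) points attained by at least k runs."""
    times = sorted({t for run in runs for (t, _) in run})

    def best_at(run, t):
        # Best (lowest) quality reached by `run` within time t.
        vals = [q for (tt, q) in run if tt <= t]
        return min(vals) if vals else float("inf")

    level, last_q = [], float("inf")
    for t in times:                        # sweep time in increasing order
        qs = sorted(best_at(run, t) for run in runs)
        q_k = qs[k - 1]                    # lowest quality attained by at least k runs at time t
        if q_k < last_q:                   # keep only the non-dominated corners
            level.append((t, q_k))
            last_q = q_k
    return level
```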
• C++ implementation.
• Within IOHexperimenter: https://iohprofiler.github.io/IOHexp/.
• May be ported if needed.
Part 4
Statistics
Examples
• Orthogonal partial section statistics:
• Area Under curves (EQ-ECDF and ERT-ECDF).
• Attainment surface (EAF).
• Global statistics:
• Volume under the EAF ≈ sum/mean-like.
• Volume under a levelset ≈ quantile-like.
• Volume under a subset of levelsets (scaling approximation).
• We want to optimize both quality and time.
• Covariance [GdFFH01].
• Because we have no clue about budget or target in advance.
• Maximize an average aggregate?
No.
• Maximize volume under the EAF. • Well-founded statistics on top of them.
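As a rough illustration of the "volume under the EAF" objective mentioned above (a discretized approximation on arbitrary grids, in the spirit of the EAH; not the actual implementation):

```python
import numpy as np

def eaf_volume(runs, time_grid, quality_grid):
    """Normalized volume under the empirical attainment function on a discretized
    (time, quality) grid, minimization on both axes.
    `runs` is a list of best-so-far quality arrays indexed by time step;
    `time_grid` holds integer step indices, `quality_grid` quality thresholds."""
    quality_grid = np.asarray(quality_grid, dtype=float)
    attained = np.zeros((len(time_grid), len(quality_grid)))
    for run in runs:
        run = np.asarray(run, dtype=float)
        for i, t in enumerate(time_grid):
            best = run[: int(t) + 1].min()               # best quality reached by step t
            attained[i, :] += (quality_grid >= best)     # cell (t, q) attained if best <= q
    attained /= len(runs)
    return attained.mean()   # scalar objective: larger is better when tuning a solver
```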
• Useful for multi-objective optimization problems.
• Example: automated tuning of stochastic optimization solvers.
•
Computes level sets of the distribution [GdFFH01].
Algorithm, 3 dimensions:
• O(n² m log m), m points in n runs,
• a logarithmic factor worse than the upper bound.
Example of use: automated design
• Consider the automated design of an optimization solver as a (meta) bi-objective optimization problem [Dre03].
Conclusion (recall):
• Algorithm to compute quantiles of 2D (and 3D) distributions.
Conclusion (perspective):
• Quantify the time/memory/loss compromises.
• Impact of the statistic choice on the optimization sub-problem.
• Use in multi-objective problems.
Introduction
• Optimization = find the optimum x* minimizing an objective function f
Pros and cons
• If all level sets are computed, EAF is the true function (given the sample).
• EAH is an approximation (which converges with the discretization). . . • . . . but can be computed on log-log scales, for better resolution.
• EAH scales better regarding the number of points. . . • . . . but requires more memory.
Statistics for bi-objective problems Why?
• Multi-objective problems:
• either Pareto-optimal approaches (heavy on user),
• either objectives aggregation (not good math properties).
• How to aggregate randomized observations?
• EA[FH] is a way to aggregate Pareto-optimal fronts.
• One can compute statistics on it.
Part 6
Supplementary material
Bibliography
Johann Dreo.
Adaptation de la métaheuristique des colonies de fourmis pour l'optimisation difficile en variables continues. Application en génie biologique et médical.
PhD thesis, Université Paris XII Val de Marne, December 2003.
Johann Dreo.
Using performance fronts for parameter setting of stochastic metaheuristics.
In Franz Rothlauf, editor, Genetic and Evolutionary Computation Conference, GECCO 2009, Proceedings, Montreal, Québec, Canada, July 8-12, 2009, Companion Material, pages 2197-2200. ACM, 2009. Viviane Grunert da Fonseca and Carlos M. Fonseca.
A link between the multivariate cumulative distribution function and the hitting function for random closed sets. Statistics & Probability Letters, 57(2):179-182, April 2002.
Terminal distribution
• The output distribution of a randomized search heuristic is the probability to reach a given quality target.
Temporal convergence
• Because any solver (should) be quasi-ergodic:
• then it (should) converge in a finite time:
• Thus the probability of attaining the target should not decrease over time:
• Hence the trajectory in objective space of a run is monotonic.
Formal concepts
Pareto Optimality
• The trajectory in objective space being monotonic, all its points are non-dominated, and the set is Pareto-optimal.
Formal concepts
QT-EAF
• Given a set of non-dominated sets: |
04102307 | en | [
"phys",
"spi"
] | 2024/03/04 16:41:24 | 2022 | https://hal.science/hal-04102307/file/Proc_HPLSA2022_PierreBourdon_v0.pdf | Pierre Bourdon
Rodwane Chtouki
Laurent Lombard
Anasthase Liméry
Julien Le Goüet
François Gustave
Hermance Jacqmin
Didier Goular
Christophe Planchat
Anne Durécu
Coherent Beam Combining of Lasers: Toward Wavelength Versatility and Long-Range Operation Compliance
Keywords: Fiber laser, Coherent Beam Combining, nonlinear optics, Optical Parametric Oscillator, phase control, turbulence mitigation
Coherent beam combining (CBC) by active phase control is an efficient technique to power scale fiber laser sources emitting in the near-infrared, between 1 and 2 µm, up to the multi-kilowatt level. Interestingly, it has been demonstrated by our team that CBC could also be used to power scale mid-infrared sources, frequency converters, generating a wavelength between 3 and 5 µm. We present our latest results on coherent combining of continuous-wave highefficiency mid-infrared sources: optical parametric oscillators (OPOs) and detail the difficulties encountered to achieve this combining, as well as the main limitations to efficient operation of CBC in this case.
In a second part of this talk, we also present recent results on coherent combining of seven 1.5-µm fiber lasers through active phase control, using frequency-tagging, and operating efficiently on a remote target. A testbed has been designed to combine these 7 lasers on a remote surface, with phase-locking operating through analysis of the optical signal backscattered by the target, in a so-called target-in-the-loop (TIL) experiment. In such TIL configuration, CBC mitigates both laser-amplification-induced and atmospheric turbulence-induced phase fluctuations simultaneously. CBC demonstrated proper operation outdoors, on a target located up to 1 km from the laser and the results from this experimental campaign will be described.
INTRODUCTION
Coherent beam combining (CBC) techniques involving active phase control of the laser emitters have demonstrated their potential to power scale continuous-wave [START_REF] Shekel | 16kW single mode CW laser with dynamic beam for material processing[END_REF] and pulsed [START_REF] Lombard | Coherent beam combination of narrow-linewidth 1.5 μm fiber amplifiers in a long-pulse regime[END_REF] fiber lasers through coherent addition of the power emitted by multiple separate amplifiers. Using all-fiber components, very compact configurations of coherently combined fiber amplifiers can be designed, with demonstrated overall emitted power exceeding the kilowatt level [START_REF] Shekel | 16kW single mode CW laser with dynamic beam for material processing[END_REF].
However, high power fiber lasers are only available within a finite set of wavelengths, mainly amongst 1 µm, 1.5 µm and 2 µm. Hopefully, nonlinear crystals offer the capability to convert these laser lines to access higher wavelengths in the midinfrared bands, for instance around 4 µm. But as with lasers, power scaling of frequency converters is limited, essentially by the damage threshold of the nonlinear medium. Coherent combining techniques could be useful to power scale frequency converters beyond this damage threshold limit. Unfortunately, combination techniques are difficult to implement in this case, as active phase control requires fast phase modulators at the midinfrared wavelength emitted by the converter. Such components are not exactly "off-the-shelf" and are not as practical nor as fast as the all-fiber modulators available at more standard wavelengths.
Taking advantage of the phase-matching relations involved in any frequency converting process, one can indirectly control the converted wave phase through control of the pump wave phase. When this pump wave is delivered by fiber amplifiers, such a frequency-converter coherent combining configuration can be achieved using standard off-the-shelf all-fiber phase modulators. * [email protected]; phone (+33) 1 80 38 63 82; fax (+33) 1 80 38 63 45; www.onera.fr Our team successfully applied this approach to perform efficient coherent combination of second harmonic generators and difference frequency generators [START_REF] Odier | Coherent combining of second-harmonic generators by active phase control of the fundamental waves[END_REF], [START_REF] Odier | Coherent combining of fiber-laser-pumped 3.4-µm frequency converters[END_REF], [START_REF] Odier | Coherent combining of mid-infrared difference frequency generators[END_REF]. In a first part of this paper, we present the challenges of extending this indirect phase control technique to midinfrared OPO coherent combining, as well as the main limitations to efficient operation of CBC in this case. An experimental test of OPO CBC is also detailed. CBC has been demonstrated for a large number of lasers [START_REF] Bourderionnet | Collective coherent phase combining of 64 fibers[END_REF], [START_REF] Chang | First experimental demonstration of coherent beam combining of more than 100 beams[END_REF], but many realizations use the fact that the laser beams are spatially separated at the output of the lasers, before overlapping after propagation and becoming undistinguishable. While the laser beams are still spatially distinct, it's feasible to measure the phase of each laser and to control this phase in real time to achieve phase-locking. However, for some applications, such process cannot be applied as phase measurement has to be done once the laser beams have already begun to overlap and to interfere.
That's the case of target-in-the-loop CBC where one wants to phase-lock the lasers on a remote target, maximizing the power density deposited at long range. In this case, phase measurement at the output of the lasers is useless, as beam propagation induces additional phase shifts that are not accounted for in this measurement. It's necessary to find some way of driving the laser phases, benefiting from the information available in the optical signal backscattered by the remote target.
One approach is to try and maximize the intensity of this backscattered signal, as maximum power density deposited on the target corresponds to a maximum of this backscattered signal too. It's been done, for instance, very early on, during the first experiments of TIL-CBC on a glint target [START_REF] Pearson | Coherent optical adaptive techniques: design and performance of an 18-element visible multidither COAT system[END_REF], and, later on, up to 7 km using a stochastic parallel gradient descent algorithm [START_REF] Weyrauch | Experimental demonstration of coherent beam combining over a 7 km propagation path[END_REF], a classical approach to maximize optical intensity by step-by-step optical wavefront correction.
In 2009, our team demonstrated a more practical and promising approach that can be performed using frequency-tagging for CBC phase control [START_REF] Jolivet | Beam shaping of single-mode and multimode fiber amplifier arrays for propagation through atmospheric turbulence[END_REF]. Frequency-tagging each optical channel at a specific frequency to assess the phase fluctuations to be compensated for is an efficient technique for CBC, also known as LOCSET [START_REF] Shay | First experimental demonstration of self-synchronous phase locking of an optical array[END_REF]. Frequency-tagging has the advantage of "engraving" specific information on the signal emitted by each optical channel through low-depth modulation. This tagging process enables to easily retrieve the phase information from each channel, within the complex interference signal generated once the laser beams have overlapped. Moreover, this phase information retrieval can also be achieved using the backscattered signal from a remote target. We also demonstrated that a simple evolution of the configuration of detection used for direct CBC could transform a standard CBC system into a fully operational TIL-CBC device [START_REF] Jolivet | Beam shaping of single-mode and multimode fiber amplifier arrays for propagation through atmospheric turbulence[END_REF].
The tests we performed in 2009 were in the laboratory with artificially generated atmospheric turbulence. In a second part of this paper, we present the work done recently to build a testbed coherently combining seven 1.5-µm fiber lasers through active phase control, using frequency-tagging. We also present the first results obtained on TIL-CBC operating this testbed. These results are obtained propagating the laser beams through the atmosphere, hence through real atmospheric turbulence aver a distance ranging up to 1 km.
COHERENT COMBINING OF OPTICAL PARAMETRIC OSCILLATORS
Principle and experimental setup for CBC of parametric amplifiers
Coherent beam combining of non-degenerate three wave mixing nonlinear processes was first demonstrated on devices that didn't require a cavity to operate: difference frequency generators. But the theoretical bases are identical to those of OPO CBC, as both experiments combine parametric amplifiers. To achieve CBC of DFGs or OPOs, phase control of one of the converted waves, the signal or the idler waves, has to be performed. Indirect phase control benefits from the phase-matching condition.
The electric field corresponding to each nonlinearly coupled wave can be written as:
E_j(z, t) = A_j(z) exp[i(k_j z − ω_j t)] + c.c. (1)
with j = p, s or i and A_j the complex amplitude of the wave at pulsation ω_j. If we assume that the pump wave is not depleted when passing through the crystal, which is true on the first millimetres of the nonlinear crystal where parametric amplification is still weak, the coupled-amplitude equations describing difference frequency generation are:
dA_s/dz = i κ_s A_p A_i* exp(iΔk z), dA_i/dz = i κ_i A_p A_s* exp(iΔk z) (2)
where Δk = k_p − k_s − k_i and κ_j = ω_j d_eff/(n_j c). The propagation equation of the idler wave is:
d²A_i/dz² = γ² A_i, with γ² = κ_s κ_i |A_p|² (3)
at perfect phase matching. A similar equation can be written for the signal wave.
Maximum power conversion is achieved for the perfect phase-matching condition, i.e. Δk = 0.
This system of equations can be solved, in the case of an unseeded idler wave A_i(0) = 0, into:
A_s(z) = A_s(0) cosh(γz), A_i(z) = i (κ_i/γ) A_p A_s*(0) sinh(γz) (4)
These solutions correspond to a signal field that keeps its initial phase-offset, and is simply amplified by the interaction, while the idler wave phase-offset depends on both the pump wave's and the signal wave's:
φ_i = φ_p0 − φ_s0 + π/2 (5-a)
Similarly, in the QPM case, high conversion efficiency can be achieved in a periodically poled crystal. The relation between the phases of the pump, signal and idler waves can be obtained propagating the waves in the crystal, domain by domain and step by step:
(5-b)
Due to this phase relation, controlling the pump wave phase gives complete control of the idler wave phase. For instance, in the QPM case, a ϕp0 phase shift induced on the pump wave will compensate for an idler wave phase variation of ϕp0 -ϕs0, where ϕs0 is the initial phase of the signal wave at the entrance of the nonlinear crystal.
Consequently, phase control of DFG and OPO converters is achievable, using all-fiber electro-optic phase modulators to control the pump wave.
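The following toy simulation illustrates the point (it is not the experimental servo; the phase relation used, the noise level and the loop gain are illustrative assumptions): a correction applied on the pump phase modulator is transferred one-to-one to the idler phase, so the idler can be stabilized without any mid-infrared modulator.

```python
import numpy as np

rng = np.random.default_rng(0)

phi_s = 0.3                                      # signal phase fixed by the common seed laser (arbitrary)
target = np.pi / 2                               # chosen idler phase setpoint (arbitrary)
drift = np.cumsum(rng.normal(0.0, 0.05, 2000))   # slow phase drift accumulated on the pump path (rad)

correction, locked = 0.0, []
for d in drift:
    phi_p = d + correction                       # pump phase = drift + correction from the fiber modulator
    phi_i = phi_p - phi_s + np.pi / 2            # idler phase given by a (5-a)-type relation
    error = np.sin(phi_i - target)               # error signal built from the idler interference
    correction -= 0.5 * error                    # feedback applied on the pump phase only
    locked.append(phi_i)

print("residual idler phase std (rad):", float(np.std(locked[500:])))
```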
We demonstrated experimentally the feasibility of such indirect phase control combining two DFG modules pumped by 1.064-µm Yb:fiber lasers delivering up to 15 W power each (see Fig. 1).
To fix and maintain the signal wavelength and phase, it is necessary to seed the DFGs with a signal seed at 1553 nm from a laser diode amplified by a commercial erbium-doped fiber amplifier split in two. Using this 1553-nm seeder, we are able to keep both DFG channels at a common and fixed signal wavelength.
The DFG nonlinear crystals are PPLN crystal from Covesion and their length is L = 20 mm. With a 1064-nm pump wavelength and a seeded signal at 1553 nm, quasi-phase matching in the PPLN crystal generates an idler wave at 3.4 µm. The pump beams are focused down to 100 µm waist in the PPLN crystals.
Coherent combining of these DFG modules is achieved using the idler waves interference signal to close the feedback loop (see Fig. 2). Time-averaged combining efficiency is excellent: the residual phase error is λ/28 rms for the idler wave. This experiment is the proof that CBC of continuous wave DFG using indirect phase control is feasible, hence proving that OPO CBC is also feasible as the nonlinear process involved in both cases is the same: parametric amplification.
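For context, a residual phase error can be translated into a combining-efficiency penalty with the usual small-error approximation η ≈ exp(−σ_φ²); this back-of-the-envelope estimate is ours, not a figure quoted in the experiment:

```python
import numpy as np

sigma_phi = 2 * np.pi / 28        # lambda/28 rms residual phase error, in radians
eta = np.exp(-sigma_phi ** 2)     # small-error approximation of the combining efficiency
print(f"rms phase error {sigma_phi:.3f} rad -> efficiency penalty of about {1 - eta:.1%}")
```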
The main difference between SHG CBC and DFG CBC is that simultaneous phase locking that has been proved feasible for SHG combination [START_REF] Odier | Coherent combining of second-harmonic generators by active phase control of the fundamental waves[END_REF], cannot be achieved so easily when combining non-degenerate three-wave mixing frequency
converters [START_REF] Odier | Coherent combining of mid-infrared difference frequency generators[END_REF]. Even if seeded, the optical path difference between the two signal channels induces a phase difference that is not compensated for, and the signal waves are not phase-locked simultaneously when the idler waves are.
Experimental tests of OPO CBC, challenges and limitations
First step of experimentally combining OPOs was to design an efficient continuous-wave OPO cavity. Different cavities were tested, ring cavities and linear ones, and the best results were obtained with the linear cavity presented in Fig. 3 that offered the lowest emission threshold. The cavity is 20-cm long and is formed between two concave mirrors with 50-mm and 130-mm curvature radius respectively. Both are high reflection coated for the signal wavelength at 1.5 µm (Rs > 97 %) and the output coupler is high transmission at the idler midinfrared wavelength. Both mirrors offer a low reflection at the pump wavelength to minimize the pump losses.
With this linear cavity, the OPO threshold was around 7 - 8 W and the OPO delivered a few hundreds of milliwatts of idler power at 3.8-µm wavelength as can be seen in Fig. 4. Using a wavemeter and finely tuning the temperature of the second NL crystal, we were able to bring the second signal wavelength very close to the first one, sufficiently close to allow for fine adjustment of the second signal wavelength to the other signal wavelength value using cavity length tuning with a piezo-electric micro-positioner.
Unfortunately, even with a singly-resonant OPO that should provide smooth tuning of the signal wavelength with the cavity length, we observed thermally induced longitudinal mode-hops that made it impossible to implement proper feedback on the signal wavelength through cavity length control.
Parasitic Fabry-Perot effect due to residual reflectivity of the cavity mirrors at the pump wavelength were also observed and induced detrimental power fluctuations even after limiting their impact as much as possible.
Coherent combining of cw OPOs couldn't be achieved and would require a higher level of stabilization of the OPO cavities than the one available on the experimental setup, as well as a choice of mirrors specifically designed for the OPO cavity with higher transmittance for the pump wavelength.
TARGET-IN-THE-LOOP COHERENT COMBINING OF 7 FIBER LASERS
The laser testbed for CBC and TIL-CBC
The laser configuration is a standard 7-channel master oscillator power amplifier (MOPA) configuration at 1.5 µm wavelength. A single-frequency low power (40 mW) master oscillator (MO) is split and amplified into 7 channels. Each of the 7 Er-Yb doped fiber amplifiers delivers up to 3 W at 1543 nm. One of the fiber amplifiers is not phase controlled and is free to fluctuate in phase while in-between the MO and 6 of the amplifiers, we use fast electro-optic phase modulators with a 150-MHz bandwidth to control the phase of each channel and simultaneously tag each of these 6 channels at a specific frequency around 20 MHz with a low-depth phase modulation.
In this self-referenced LOCSET configuration, the 6 phase-controlled channels follow the phase fluctuations of the seventh channel that's not modulated (see Fig. 6).
Figure 6. On the left side, schematics of the experimental setup for frequency-tagging CBC (also known as LOCSET). On the right side, interference pattern in the far-field, captured on a 1.5-µm camera located at the same distance as the direct CBC detector.
The 7-channel CBC setup delivers a nice interference pattern in the far-field. We confirmed that in direct CBC, i.e. using a fast photodiode to capture part of the central lobe of interference in the far-field, it was easy to lock the phases of the 7 laser channels and stabilize the position of the interference pattern. In LOCSET, frequency tagging of the i th channel at a specific frequency νi results in an interference signal that generates an error signal after demodulation at the same frequency given by equation:
$$ i_{\mathrm{error},i}(t) \propto E_i\, J_1(\beta_i)\Big[ E_u \sin(\phi_u - \phi_i) + \sum_{j=1,\ j\neq i}^{N-1} E_j\, J_0(\beta_j)\, \sin(\phi_j - \phi_i)\Big] $$

where φi is the i-th amplifier chain output phase in the photodetector plane, and φu the unmodulated reference beam phase in the photodetector plane. Ei and Eu are the respective electric field amplitudes at the output of the i-th amplifier and of the unmodulated amplifier. βi and βu are the respective modulation depths of the i-th amplifier channel and of the unmodulated channel. Driving the 6 error signals simultaneously to zero in real time results in maintaining equal phases for the 7 channels. Any sudden phase shift is immediately compensated for, thanks to the very fast electro-optic phase modulators and fast detection.
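To make the frequency-tagging principle more tangible, here is a minimal, illustrative numerical sketch of a LOCSET-like loop. It is not the actual control code of the testbed, and all parameter values (tag frequencies, dither depth, gain, noise level) are arbitrary assumptions chosen only so that the toy model locks.

```python
import numpy as np

# Toy model of self-referenced LOCSET phase locking (illustrative only; all
# values below are assumptions, not the parameters of the actual testbed).
N = 7                                  # number of laser channels
fs = 200e6                             # detector sampling rate (Hz)
block = 2000                           # samples per demodulation block (10 us)
f_tag = 18e6 + 1e6 * np.arange(N - 1)  # dither (tag) frequencies of the 6 channels
beta = 0.1                             # phase-dither depth (rad)
gain = 1.0                             # feedback gain
n_steps = 400                          # feedback iterations

rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, N)  # unknown piston phases; channel N-1 is the free reference
corr = np.zeros(N - 1)                 # phase corrections applied by the modulators
t = np.arange(block) / fs

for _ in range(n_steps):
    phase += 0.02 * rng.standard_normal(N)               # slow phase noise on every channel
    dither = beta * np.sin(2 * np.pi * np.outer(f_tag, t))
    tagged = np.exp(1j * (phase[:-1, None] + corr[:, None] + dither))
    field = tagged.sum(axis=0) + np.exp(1j * phase[-1])  # total field on the photodiode
    intensity = np.abs(field) ** 2
    # synchronous demodulation at each tag frequency yields the LOCSET error signals
    err = np.array([np.mean(intensity * np.sin(2 * np.pi * f * t)) for f in f_tag])
    corr += gain * err                                    # drive every error signal to zero

residual = (phase[:-1] + corr - phase[-1] + np.pi) % (2 * np.pi) - np.pi
print("residual phase errors (rad):", np.round(residual, 3))
```

As in the experiment, only N-1 channels are tagged and corrected while the remaining channel acts as the floating phase reference that the others follow.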
CBC efficiency, measured as the fraction of the total power located in the central lobe of the interference pattern, was higher than 53 % (see Fig. 6). It is very close to the experimental maximum that can be achieved with Gaussian laser beams (the theoretical maximum is 63 %), but a little lower due to the opto-mechanical constraints limiting the effective fill-factor in the near-field.
Residual phase error has been measured and is lower than λ/40, confirming again the high efficiency of the CBC process and the high speed of the feedback loop. The opto-mechanical head where the outputs of the 7 fiber amplifiers are collimated and sent through the atmosphere is completely adjustable (see Fig. 7). Piezo-electric motors are used to set the transverse position of the output fiber tip with respect to the collimating lens, in order to align the 7 collimated beams in the same direction. The focus of the collimating lenses is manually tuned beforehand. Figure 7 presents pictures of this first part of the testbed, dedicated to laser emission.
For Target-In-the-Loop Coherent Beam Combining (TIL-CBC), a second stage of the testbed is dedicated to long range focusing of the laser beams and to collection and detection of the backscattered signal from the remote target. We chose to use a bi-static configuration (see Fig. 8) where the long range focusing and the reception apertures are separated, to limit the risk of narcissus effect and to decrease the optical complexity of the detection line with respect to a monostatic configuration.
Experimental demonstration of TIL-CBC
After some first demonstrations of CBC and TIL-CBC in the laboratory and then outdoor at shorter range (less than 100 m), the testbed that has been designed to be mobile was transferred to our laser site where we are able to shoot lasers safely up to 1-km range (see Fig. 9). The site is equipped with proper apparatus to monitor the weather and atmospheric conditions, and also to measure turbulence strength through the average value of the Cn² over 1 km.
The experiments took place during summer in the south west of France. As the laser beams propagated horizontally at 1.5 m from the ground, the level of turbulence was quite high and the average value of Cn² over 1 km was often close to, or even much higher than, 10^-13 m^-2/3. Despite these detrimental turbulence conditions, when the Cn² value was lower than 10^-13 m^-2/3, we were able to perform efficient TIL-CBC up to 1 km.
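To put these Cn² values in perspective, the following back-of-the-envelope sketch (not part of the original campaign analysis) estimates the Fried parameter r0 for a horizontal path with constant Cn², using the standard plane-wave expression r0 = (0.423 k² Cn² L)^(-3/5); the wavelength and path length are the nominal values quoted above.

```python
import numpy as np

# Rough Fried-parameter estimate for a 1-km horizontal path with uniform Cn2
# (plane-wave approximation). Purely illustrative orders of magnitude.
lam = 1.55e-6                      # wavelength (m)
L = 1000.0                         # propagation distance (m)
k = 2 * np.pi / lam

for cn2 in (1e-14, 1e-13, 5e-13):  # weak to strong turbulence levels
    r0 = (0.423 * k**2 * cn2 * L) ** (-3.0 / 5.0)
    print(f"Cn2 = {cn2:.0e} m^-2/3  ->  r0 ~ {100 * r0:.1f} cm")
```

With Cn² around 10^-13 m^-2/3 this gives a coherence diameter of only a couple of centimetres, which illustrates why such conditions are challenging for beam combining over 1 km.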
A transparent wedge plate was placed before the target, to reflect a small fraction of the combined beam power towards a black screen located at the same distance as the target. Proceeding like this, we could record video sequences of the interference pattern displayed on the black screen and observed with an InGaAs Raptor camera, without interfering with the TIL-CBC process in the target plane. Fig. 9 presents instantaneous images of the interference pattern with and without closing the CBC feedback loop. Even if turbulence-induced perturbations clearly appear to decrease the efficiency of the CBC locking process, TIL-CBC still operates successfully and results in an increased power density deposited on the remote target at 1 km.
CONCLUSION
We developed an experimental setup for coherent combining of continuous-wave OPOs with active phase control of their pump waves. To prevent fluctuations of the intracavity pump power, parasitic Fabry-Perot effects on the pump wave have to be mitigated. Temperature tuning of the NL crystal is enough to bring both signal wavelengths close enough for fine tuning through cavity length adjustment. However, this experimental test of OPO coherent combining was unsuccessful due to detrimental signal cavity mode-hops that prevented smooth enough tuning of the signal wavelength.
Figure 1. Schematics of the experimental set-up for demonstrating coherent combining of two 3.4-µm idler beams generated through DFG in PPLN crystals. Coherent combining is achieved by active phase-control of one of the pump waves.
Figure 2. Time evolution of the interference signals of the idler beams when the phase control loop is open and then closed.
Figure 3. Optimal continuous-wave OPO linear cavity. The output coupler is mounted on a piezo-electric translation stage to be able to finely tune the position of the spectrum emitted.
Figure 4. Idler power vs. pump power for the continuous-wave OPO.

After duplicating the cw OPO cavity, we began testing CBC of OPOs. The experimental setup is arranged as in Fig. 5.
Figure 5. OPO CBC experimental setup. For the sake of making the figure easier to read, ring cavities are drawn for the OPOs but we used the optimized linear cavity for both OPOs in the experiment.
Figure 7. Pictures of the experimental setup parts.
Figure 8. View of the complete testbed with both emission and reception lines.
Figure 9. Instantaneous images of the interference pattern generated at 310 m when the TIL-CBC feedback loop is active and inactive.
Fig. 10 presents averaged images from the same video sequence. The size of the interference lobes is not different in the instantaneous images and in the averaged image, demonstrating that the interference pattern was locked on a fixed position in the target plane.
Fig. 11 presents the averaged image obtained when the target distance was 1 km.
Figure 10. Averaged images of the interference pattern generated at 310 m when the TIL-CBC feedback loop is active and inactive.
Figure 11. Averaged images of the interference pattern generated at 1 km when the TIL-CBC feedback loop is active and inactive.
We also developed a mobile testbed for coherent beam combining of 7 fiber lasers emitting at 1.5 µm. This testbed operates through frequency-tagging phase difference monitoring. It can concentrate more than 53 % of the overall power in the central lobe of the interference pattern generated in the far-field, after overlap of the laser beams. The testbed has been designed to use the backscattered signal coming from a remote target as a reference for phase difference monitoring and phase locking. It was operated in a target-in-the-loop configuration, first at short distance in the laboratory, and then at longer distance outdoors, propagating the beams through real, weak to strong atmospheric turbulence. The longer range outdoor experimental campaign demonstrated that TIL-CBC efficiency is preserved up to more than 310 m. Perturbations induced by strong atmospheric turbulence significantly affect TIL-CBC, but TIL-CBC still operates successfully at 1-km range, albeit with lower efficiency. Future work is dedicated to modifying the testbed to improve its performances, especially when dealing with strong atmospheric turbulence conditions.
ACKNOWLEDGMENTS
This work was partially funded by French Delegation Generale pour l'Armement. |
03461241 | en | [
"math.math-ap"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-03461241v3/file/article_generalized_newtonian.pdf | Laurent Chupin
Nicolae Cîndea
Geoffrey Lacour
email: [email protected]
Variational inequality solutions and finite stopping time for a class of shear-thinning flows
Keywords: Mathematical Subject Classification (2020): 35K55, 76D03, 35Q35, 76A05 non Newtonian, generalized Newtonian, shear thinning, variational inequalities
Introduction
The aim of this paper is to study the existence and some properties of solutions of the system
$$\left\{\begin{array}{ll}
\partial_t u + u\cdot\nabla u + \nabla p - \Delta u - \operatorname{div}\big(F(|D(u)|)\, D(u)\big) = f & \text{in } (0,T)\times\Omega\\
\operatorname{div}(u) = 0 & \text{in } (0,T)\times\Omega\\
u = 0 & \text{on } [0,T)\times\partial\Omega\\
u = u_0 & \text{on } \{0\}\times\Omega
\end{array}\right. \tag{1.1}$$
in the form of nonlinear parabolic variational inequalities, where Ω is an open bounded subset of R N , for N ∈ {2, 3} with a regular enough boundary ∂Ω. Such nonlinear systems describe the flow of so-called generalized Newtonian fluids and give rise to several relevant models. Let us give some examples. First, if F (t) = C, the system (1.1) is nothing else than the Navier-Stokes equations for a viscous incompressible fluid. Also, by choosing
F(t) = (1 + t²)^{(p-2)/2}, system (1.1) describes a Carreau flow. Another relevant case is obtained by choosing, for p ∈ (1, 2), F(t) = t^{p-2} for t > 0, which leads to an Ostwald-De Waele (power-law) flow. In the particular case of Bingham, arising for p = 1 and describing a viscoplastic behavior, we get F(0) ∈ [0, 1], where τ* = 1 > 0 is the so-called plasticity threshold, which is scaled here. In the latter case, the function is multivalued at the origin (note that a physical consequence of this phenomenon is the nonexistence of a reference viscosity for threshold fluids, see for example [START_REF] Becker | Simple non-newtonian fluid flows[END_REF]). It is now established that this problem can be circumvented by considering the function outside the origin through a regularization process and by giving a meaning to its limit, in the sense of the subdifferential. This approach has been successfully carried out in the case of a two-dimensional Bingham flow, for example (see [START_REF] Duvaut | Les inéquations en mécanique et en physique[END_REF]). In the present paper, we focus on the mathematical analysis of shear-thinning flows: a flow is said to be shear-thinning when its viscosity decreases as a function of the stresses applied to it, which is to say, in the flows we consider, that the function F decreases as the shear rate increases. We mainly refer to [START_REF] Coussot | Rhéophysique: la matière dans tous ses états[END_REF][START_REF] Bird | Dynamics of polymeric liquids[END_REF][START_REF] Galdi | Hemodynamical flows[END_REF] for the physical motivations of such models. Throughout the present article, we will consider simple fluid flows, that is, we will make the assumption that the shear rate is the second invariant of the strain-rate tensor, and moreover that it is a scalar quantity given by |D(u)|.
Since the 1960s, the study of such systems has been the subject of numerous articles (see, for instance, [START_REF] Frehse | Non-homogeneous generalized Newtonian fluids[END_REF] and references therein). On one hand, the existence of weak solutions of (1.1) is a difficult problem which has been studied in many particular cases. When the shear tensor S has an p-structure (see [START_REF] Diening | Existence of weak solutions for unsteady motions of generalized Newtonian fluids[END_REF] for a definition), the existence of weak solutions has been established under certain hypotheses for p > 2N N +2 (see [START_REF] Eberlein | Existence of weak solutions for unsteady motions of Herschel-Bulkley fluids[END_REF], and also [START_REF] Málek | Global analysis of the flows of fluids with pressuredependent viscosities[END_REF]). The study of the existence of solutions under other assumptions has been done in [START_REF] Frehse | Non-homogeneous generalized Newtonian fluids[END_REF] in the case p > 2N N +2 , in [START_REF] Málek | On weak solutions to a class of non-Newtonian incompressible fluids in bounded three-dimensional domains: the case p ≥ 2[END_REF] for the case N = 3 and p ≥ 9 4 , and in [START_REF] Wolf | Existence of weak solutions to the equations of non-stationary motion of non-Newtonian fluids with shear rate dependent viscosity[END_REF] for p > 2N +2 N +2 , and in [START_REF] Berselli | Existence of strong solutions for incompressible fluids with shear dependent viscosities[END_REF] for the three-dimensional periodic case in space with p > 7 5 . One can avoid such hypotheses on p by using dissipative solutions. The existence of dissipative solutions has been proved in [START_REF] Abbatiello | On a class of generalized solutions to equations describing incompressible viscous fluids[END_REF] in the three-dimensional setting. Nevertheless, variational inequality solutions are particularly interesting in view of numerical simulations perspective (see for example [START_REF] Glowinski | Numerical analysis of variational inequalities[END_REF]Chapter 4] or [START_REF] Saramito | Complex fluids[END_REF]) as for controllability (see for example [START_REF] Friedman | Optimal control for variational inequalities[END_REF] or [START_REF] Ito | Optimal control of parabolic variational inequalities[END_REF])
We recall that in the three-dimensional setting, it is known that we can prove the existence of a unique weak solution to the incompressible Navier-Stokes equations, as long as the norm of the initial velocity field is small enough and the force term is sufficiently regular. This theorem has been generalized for a large class of non-singular stress tensors in [START_REF] Amann | Stability of the rest state of a viscous incompressible fluid[END_REF], as long as the initial velocity field u 0 and the force term f are regular enough. The study of the regularity of solutions in a more general framework is a difficult problem, the case of the flow of an incompressible Newtonian fluid governed by the Navier-Stokes equations remaining open in the three dimensional case. However, the existence of regular solutions, sometimes giving rise to the uniqueness of solutions or even to the existence of strong solutions, has been established in the case of shear tensors having an (p, µ)-structure in [START_REF] Diening | Strong solutions for generalized Newtonian fluids[END_REF] and in [START_REF] Berselli | Existence of strong solutions for incompressible fluids with shear dependent viscosities[END_REF] in a three-dimensional periodic in space case. In the steady case, somes results have been obtained as in [START_REF] Frehse | On analysis of steady flows of fluids with sheardependent viscosity based on the Lipschitz truncation method[END_REF] (existence for p > 2N N +2 ), in [START_REF] Zhou | Regularity of weak solutions to a class of nonlinear problem with non-standard growth conditions[END_REF] (regularity), in [START_REF] Berselli | Global regularity properties of steady shear thinning flows[END_REF] (regularity), or in [START_REF] Crispo | On the existence, uniqueness and C 1,γ (Ω) ∩ W 2,2 (Ω) regularity for a class of shear-thinning fluids[END_REF] (existence and regularity).
Finally, a property of some non-Newtonian shear-thinning flows is the existence of a finite stopping time, that is, roughly speaking, a time from which the fluid is at the rest.This property has been established, for example, in the case of a two-dimensional Bingham flow in [START_REF] Jesús | Qualitative properties and approximation of solutions of Bingham flows: on the stabilization for large time and the geometry of the support[END_REF], and in the case of some electrorheological fluids in [START_REF] Abbatiello | Existence of regular time-periodic solutions to shear-thinning fluids[END_REF].
In this paper, we firstly establish the existence of weak solutions by a parabolic variational inequality (see Theorem 3.1 and Definition 2.1) as used in [START_REF] Duvaut | Les inéquations en mécanique et en physique[END_REF] for shear tensors τ of the form τ (D(u)) = F (|D(u)|)D(u) + D(u), by setting conditions directly on the viscosity coefficient F , restricting ourselves to the case of shearthinning flows. More precisely, we will make the following assumptions: (C1) F : (0, +∞) → (0, +∞);
(C2) F ∈ W^{1,∞}_{loc}((0, +∞));
(C3) t ↦ tF(t) is non-decreasing on (0, +∞);
(C4) there exist p ∈ [1, 2], t_0 > 0 and K > 0 such that for every t ≥ t_0, F(t) ≤ K t^{p-2}.
Some examples of functions verifying the above assumptions are given in Appendix A. We emphasize in particular that this takes into account many physical models, such as the Carreau, Bingham, Herschel-Bulkley, Cross, or power law flows.
Remark 1. Assumption (C3) is equivalent to the fact that for all ε ≥ 0, the function t ↦ t F(√(ε + t²)) is non-decreasing. Indeed, we can write:

$$\forall t \in (0, +\infty), \qquad t\, F\big(\sqrt{\varepsilon + t^2}\big) = \frac{t}{\sqrt{\varepsilon + t^2}}\; \sqrt{\varepsilon + t^2}\; F\big(\sqrt{\varepsilon + t^2}\big).$$

Hence, t ↦ t F(√(ε + t²)) is the product of two non-negative and non-decreasing functions, so it is a non-decreasing function. The opposite implication is obvious, by setting ε = 0.
Then, in Section 5 we establish the existence of a finite stopping time for solutions of (1.1) in the case of a function F verifying (C1)-(C4) and such that
$$F(t) \ge C\, t^{p-2}. \tag{1.2}$$
This shows in particular the existence of a finite time from which the fluid is at rest, when the flow is comparable to that of a power-law or threshold fluid, thus for a shear-thinning flow. One of the main objectives of this paper, in addition to completing some existing results, is to provide simple hypotheses to verify the existence of solutions for shear-thinning flows. Emphasis is also placed on considering solutions whose regularization is that generally used in numerical simulations, in order to make sense of the finite stopping time observed numerically and experimentally. Let us conclude by observing that many thixotropic flows (such as blood) are represented by usual shear-thinning models, depending on the circumstances of the flow studied (see [START_REF] Robertson | Rheological models for blood[END_REF]).
We will note in a generic way the constants by the letter C throughout this article, and will omit their dependence on the parameters in the notations.
Weak characterization of solutions by a parabolic variational inequality
In this section we introduce a weak formulation of system (1.1) using a parabolic variational inequality (see Definition 2.1). Firstly, we point out that in the system (1.1), we do not consider any frictional force on ∂Ω. Recalling that
H 1 0 (Ω) is the closure of C ∞ 0 (Ω) into H 1 (Ω)
, it is thus natural to assume that the initial velocity field u 0 is of null trace on ∂Ω, that is u 0 belongs to H 1 0,σ (Ω), the space of functions v ∈ H 1 0 (Ω) such that div(v) = 0, where H 1 0 (Ω) is endowed with the norm u → ∇u L 2 . We denote H -1 σ (Ω) its dual and •, • is the duality product between H -1 σ (Ω) and H 1 0,σ (Ω). Following the ideas employed for showing the existence of solution to Bingham equations in [START_REF] Duvaut | Les inéquations en mécanique et en physique[END_REF][START_REF] Kinderlehrer | An introduction to variational inequalities and their applications[END_REF], we define a functional j making appear the viscous non-linear term in (1.1) in its derivative. We fix for the moment 0 ≤ ε ≤ δ and we define a function G ǫ : (0, +∞) → (0, +∞) and a functional
j_ε : H^1_{0,σ}(Ω) → R by

$$G_\varepsilon(t) = \int_0^t s\, F\big(\sqrt{\varepsilon + s^2}\big)\, ds \quad \text{for every } t \in (0, +\infty) \tag{2.1}$$
and
$$j_\varepsilon(v) = \int_\Omega G_\varepsilon(|D(v)|)\, dx, \qquad (v \in H^1_{0,\sigma}(\Omega)), \tag{2.2}$$
respectively. We also denote j = j_0 and G = G_0. One can check that G_ε is a convex functional for ε small enough. Indeed, G'_ε(t) = t F(√(ε + t²)) for every t ∈ (0, +∞), and applying the hypothesis (C3) the convexity of G_ε follows immediately.
Lemma 2.1. For every ε > 0 the functional j ε defined by (2.2) is convex and verifies
$$\langle j'_\varepsilon(v), w\rangle_{-1,1} = \int_\Omega F\big(\sqrt{\varepsilon + |D(v)|^2}\big)\, (D(v) : D(w))\, dx \qquad (v, w \in H^1_{0,\sigma}(\Omega)). \tag{2.3}$$
Proof. The convexity of j ε is immediately obtained from the hypothesis (C3) and (2.2). For every t ∈ R we have
$$\frac{d}{dt}\big(G_\varepsilon(|D(v + tw)|)\big) = G'_\varepsilon(|D(v + tw)|)\,\frac{d}{dt}\big(|D(v + tw)|\big) = F\big(\sqrt{\varepsilon + |D(v + tw)|^2}\big)\, |D(v + tw)|\, \frac{D(v + tw) : D(w)}{|D(v + tw)|} = F\big(\sqrt{\varepsilon + |D(v + tw)|^2}\big)\, D(v + tw) : D(w).$$

Hence

$$\langle j'_\varepsilon(v + tw), w\rangle = \frac{d}{dt}\, j_\varepsilon(v + tw) = \int_\Omega \frac{d}{dt}\big(G_\varepsilon(|D(v + tw)|)\big)\, dx = \int_\Omega F\big(\sqrt{\varepsilon + |D(v + tw)|^2}\big)\, D(v + tw) : D(w)\, dx.$$

Letting t go to 0, we obtain (2.3).
Remark 2. We point out that j' is well defined. Firstly, by our assumptions (C2) and (C3), we can deduce that for all β ∈ (0, 1/2), there exists δ_0 such that:

$$F(t) \le t^{-(1+\beta)} \quad \text{for every } t \in (0, \delta_0).$$

Indeed, assume that this last inequality does not hold; then for every δ_0 > 0, there exists t_0 ∈ (0, δ_0) such that:

$$F(t_0) > t_0^{-(1+\beta)}.$$

We can consider without loss of generality that δ_0 < min(1, F(1)^{-1/β}), which implies, using our assumption (C3):

$$\delta_0^{-\beta} < t_0^{-\beta} < t_0\, F(t_0) \le F(1).$$

This contradiction shows the result. We recall Korn's L² equality for divergence-free vector fields:

$$\int_\Omega |D(\varphi)|^2\, dx = \frac{1}{2}\, \|\varphi\|^2_{H^1_0}, \qquad (\varphi \in H^1_{0,\sigma}(\Omega)).$$
Using these last results and applying Cauchy Schwarz's and Hölder's inequalities, we get:
$$\begin{aligned}
|\langle j'(u), \varphi\rangle_{-1,1}| &= \left|\int_\Omega F(|D(u)|)\, D(u) : D(\varphi)\, dx\right| \le \frac{1}{\sqrt{2}}\left(\int_\Omega F(|D(u)|)^2\, |D(u)|^2\, dx\right)^{\frac{1}{2}} \|\varphi\|_{H^1_0}\\
&= \frac{1}{\sqrt{2}}\left(\int_{\{|D(u)|\le \delta_0\}} F(|D(u)|)^2\, |D(u)|^2\, dx + \int_{\{|D(u)|> \delta_0\}} F(|D(u)|)^2\, |D(u)|^2\, dx\right)^{\frac{1}{2}} \|\varphi\|_{H^1_0}\\
&\le \frac{1}{\sqrt{2}}\left(\int_{\{|D(u)|\le \delta_0\}} |D(u)|^{-2\beta}\, dx + \int_{\{|D(u)|> \delta_0\}} F(|D(u)|)^2\, |D(u)|^2\, dx\right)^{\frac{1}{2}} \|\varphi\|_{H^1_0}\\
&= \frac{1}{\sqrt{2}}\left(\frac{1}{1-2\beta}\int_{\{|D(u)|\le \delta_0\}} \int_0^{|D(u)|} s^{1-2\beta}\, ds\, dx + \int_{\{|D(u)|> \delta_0\}} F(|D(u)|)^2\, |D(u)|^2\, dx\right)^{\frac{1}{2}} \|\varphi\|_{H^1_0}.
\end{aligned}$$
And so j ′ is well-defined.
Definition 2.1 (Weak solution of (1.1)). We say that a function u ∈ L²((0,T), H¹_{0,σ}(Ω)) ∩ C_w((0,T), L²_σ(Ω)) such that u' ∈ L^{4/N}((0,T), H^{-1}_σ(Ω)) is a weak solution of (1.1) if and only if u verifies u|_{t=0} = u_0 ∈ H¹_{0,σ}(Ω), and for all φ ∈ C^∞((0,T) × Ω):
$$\int_0^T \langle u'(t), \varphi(t)\rangle\, dt + \frac{1}{2}\Big(\|u_0\|^2_{L^2(\Omega)} - \|u(T)\|^2_{L^2(\Omega)}\Big) + \int_0^T \int_\Omega D(u(t)) : D(\varphi(t) - u(t))\, dx\, dt - \int_0^T \int_\Omega (u(t)\cdot\nabla u(t))\cdot \varphi(t)\, dx\, dt + \int_0^T \int_\Omega \Big(G\big(|D(\varphi(t))|\big) - G\big(|D(u(t))|\big)\Big)\, dx\, dt \ge \int_0^T \langle f(t), \varphi(t) - u(t)\rangle\, dt. \tag{2.4}$$
Let's quickly motivate this definition. First, we point out that since u belongs to C w ((0, T ), L 2 σ (Ω)), Definition 2.1 makes sense. Then, if we consider that the Lebesgue measure of the set
{(t, x) ∈ (0, T ) × Ω | |D(u)(t, x)| ≤ δ}
is equal to zero for a small δ > 0, we have, from an argument similar to the one in Lemma 2.1 that:
ˆT 0 j ′ (u), ϕ dt = ˆT 0 ˆΩ F (|D(u)|) (D(u) : D(ϕ)) dx dt.
Now, if we replace ϕ by u + sϕ, with s > 0, in the variational inequality (2.4), we obtain after dividing by s:
ˆT 0 ˆΩ D(u) : D(ϕ) dx dt + ˆT 0 ˆΩ G (|D(u + sϕ)|) -G (|D(u)|) s dx dt ≥ ˆT 0 ˆΩ f -u ′ , ϕ dt - ˆT 0 ˆΩ (u • ∇u) • ϕ dx dt.
Since j admits a Fréchet-derivative, it also admits a Gâteaux-derivative and both are the same. Hence, taking the limit as s → 0:
ˆT 0 ˆΩ D(u) : D(ϕ) dx dt + ˆT 0 ˆΩ F (|D(u)|) (D(u) : D(ϕ)) dx dt ≥ ˆT 0 ˆΩ f -u ′ , ϕ dt - ˆT 0 ˆΩ (u • ∇u) • ϕ dx dt.
Repeating once again the previous reasoning but writing u -sϕ instead of u + sϕ, we get the following equality:
ˆT 0 ˆΩ D(u) : D(ϕ) dx dt + ˆT 0 ˆΩ F (|D(u)|) (D(u) : D(ϕ)) dx dt = ˆT 0 ˆΩ f -u ′ , ϕ dt - ˆT 0 ˆΩ (u • ∇u) • ϕ dx dt.
Therefore, assuming that u is regular enough, we obtain
- 1 2 ˆT 0 ˆΩ ∆u • ϕ dx dt - ˆT 0 ˆΩ div (F (|D(u)|) D(u)) ϕ dx dt = ˆT 0 ˆΩ f -u ′ -u • ∇u • ϕ dx dt.
Furthermore De Rham's theorem for a domain with Lipschitz boundary states that there exists a pressure term p such that f = ∇p into some well chosen Sobolev space (see [17, section 2] for details). Considering such a function and also the two previous observations, we can write:
ˆT 0 ˆΩ u ′ + u.∇u - 1 2 ∆u + ∇p -div (F (|D(u)|) D(u)) -f ϕ dx dt = 0, (ϕ ∈ C ∞ ((0, T ) × Ω)) ,
which is almost everywhere equivalent to the equation (1.1) up to the multiplicative dynamic viscosity constant 1 2 . We have omitted this constant in Definition 2.1 for convenience, and note that it is enough to add the constant 2 in front of the term ´T 0 ´Ω D(u) : D(u -ϕ) dx dt in order to find exactly (1.1).
Finding a solution to the parabolic variational inequality thus amounts to giving meaning to the integral of the nonlinear viscosity coefficient term inherent in the problem, which can be a singular integral in the case of a power-law or a Bingham fluid.
Main results
We present in this section the main results of this article. The study of the existence of solutions by variational inequality has been developed following the classical Stampacchia's theorem and was further developed in [START_REF] Kinderlehrer | An introduction to variational inequalities and their applications[END_REF]. Then, this method was successfully applied for some nonlinear parabolic problems, as the two dimensional Bingham equations in [START_REF] Duvaut | Les inéquations en mécanique et en physique[END_REF], or some power law systems in [START_REF] Málek | Weak and measure-valued solutions to evolutionary PDEs[END_REF]. Following the same approach, we get the following existence theorem. Theorem 3.1. Assume that the function F satisfies the hypotheses (C1)-(C4) and that Ω ⊂ R N , N ∈ {2, 3}, is a bounded domain with a Lipschitz boundary, T > 0 and consider an initial datum u 0 ∈ H 1 0,σ (Ω) and a force term f ∈ L 2 ((0, T ), H -1 σ (Ω)). Then, there exists a weak solution u of (1.1) having the following regularity
u ∈ C_w((0,T), L²_σ(Ω)) ∩ L²((0,T), H¹_{0,σ}(Ω)) and u' ∈ L^{4/N}((0,T), H^{-1}_σ(Ω)). This theorem thus ensures the existence of suitable solutions in the two-dimensional and three-dimensional cases. It follows from classical arguments that the solutions are Hölder continuous in time, for a well-chosen Hölder coefficient. Moreover, we show in the appendix, in Corollary ?? of Proposition B.1, that in some interesting cases, as for power-law flows, the solutions satisfy an energy equality.
Unlike the Navier-Stokes case, the nonlinear term in the Bingham equations allows us to obtain the rest of the fluid in finite time in the two-dimensional case. This has been demonstrated in [START_REF] Jesús | Qualitative properties and approximation of solutions of Bingham flows: on the stabilization for large time and the geometry of the support[END_REF], using the following approach: it is assumed that the force term will compensate the initial kinetic energy of the fluid, which amounts to establishing a relation between the norm u 0 L 2 and an integral of f (t) L 2 . This argument is based on the use of the following Nirenberg-Strauss inequality:
$$\exists \gamma > 0,\ \forall u \in H^1_0(\Omega), \qquad \|u\|_{L^2} \le \gamma \int_\Omega |D(u)|\, dx.$$
We note that such an inequality cannot be true in dimension greater than two, because it would contradict the optimality of Sobolev embbeddings. We therefore propose to slightly adapt this approach to show the existence of a stopping finite time in both the two and the three-dimensional cases. Firstly, let us formalize the definition.
Definition 3.1 (Finite stopping time). Let u be a weak solution in the sense of Definition 2.1 of the system (1.1). We say that T 0 ∈ (0, T ) is a finite stopping time for u if:
‖u(T_0)‖_{L²(Ω)} = 0.
In order to prove the existence of a finite stopping time for the solution u provided by Theorem 3.1, we do not make any assumption on the initial velocity field, but we assume that after a certain time the fluid is no longer subjected to any external force. More exactly we make some more assumption on F as stated by the following theorem.
Theorem 3.2 (Existence of a finite stopping time). Assume that the hypotheses of Theorem 3.1 are verified, that T > 0 is chosen large enough, and let p ∈ [1, 2). Moreover, we assume that there exist two positive constants κ and T_1 < T such that

$$F(t) \ge \kappa\, t^{p-2} \ \text{ for every } t \in (0, +\infty) \quad \text{and} \quad f = 0 \ \text{ almost everywhere on } (T_1, T). \tag{3.1}$$
Then, there exists a finite stopping time T 0 ∈ (0, T ) for u in the sense of Definition 3.1.
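Before the proof, it may help to recall, as a purely illustrative side computation (not part of the original argument), the scalar mechanism behind such extinction results: the kinetic energy ends up satisfying a differential inequality of the type below, which forces it to vanish in finite time when the exponent is strictly smaller than one.

$$y'(t) + C\, y(t)^{q} \le 0, \quad 0 < q < 1,\ y \ge 0,\ y(T_1) > 0 \ \Longrightarrow\ y(t)^{1-q} \le y(T_1)^{1-q} - C(1-q)(t - T_1) \ \text{ for } t \ge T_1,$$

so that y vanishes at the latest at t = T_1 + y(T_1)^{1-q} / (C(1-q)).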
4 Proof of Theorem 3.1
In this section, we establish the proof of Theorem 3.1 in the bi-dimensional and three-dimensional settings.
In order to prove this result, we begin by establishing an energy estimate for solutions obtained by the Galerkin method in order to obtain uniform bounds with respect to the parameters. We note here that we will have two parameters: a first parameter due to Galerkin's approximation, and a second one due to the regularization proper to the viscosity coefficient F .
Existence of a Galerkin weak solution
We apply here the usual Galerkin method using the Stokes operator in homogeneous Dirichlet setting, and we use its eigenfunctions (w i ) i∈N as an orthogonal basis of H 1 0,σ (Ω) and orthonormal basis of L 2 σ (Ω) (see [START_REF] Evans | Partial differential equations[END_REF] for details about this property, and [34, Section 2.3] for details concerning the Stokes operator).
For every positive integer m, we denote by P m the projection of L 2 σ (Ω) onto Span ((w i ) 1≤i≤m ). We would like to formally define our Galerkin system as follows.
∂ t u m + P m (u m • ∇u m ) + ∇P m (p) -∆u m -P m (div (F (|D(u m )|) D(u m ))) = P m f div(u m ) = 0 on (0, T ) × Ω u m = 0 on [0, T ) × ∂Ω u m = P m (u 0 ) on {0} × Ω. (4.1)
In order to avoid the issue posed by the nonlinear term in domains for which the fluid is not deformed we consider the following regularized Galerkin system:
∂ t u m,ε + P m (u m,ε • ∇u m,ε )+ ∇P m (p) -∆u m,ε -P m div F ε + |D(u m,ε )| 2 D(u m,ε ) = P m f div(u m,ε ) = 0 on (0, T ) × Ω u m,ε = 0 on [0, T ) × ∂Ω u m,ε = P m (u 0 ) on {0} × Ω, (4.2
) with 0 < ε < 1. Applying a Galerkin method, we can see that, writing u m,ε (t) = m i=1 d i m (t)w i , we obtain the ordinary differential system for all 1 ≤ i ≤ m:
d i m ′ (t) = f, w i - ˆΩ 1 2 w i 2 H 1 0 d i m (t) dx -ˆΩ D(u 0 ) : D(w i ) dx - ˆΩ 1 2 w i 2 H 1 0 F ε + m j=1 1 2 w j 2 H 1 0 (d j m (t)) 2 + 2(D(w j ) : D(u 0 ))d j m (t) + 1 2 u 0 2 H 1 0 d i m (t) dx -ˆΩ F ε + m j=1 1 2 w j 2 H 1 0 (d j m (t)) 2 + 2(D(w j ) : D(u 0 ))d j m (t) + 1 2 u 0 2 H 1 0 (D(u 0 ) : D(w i )) dx - m j=1 ˆΩ w j • ∇w i d i m (t)d j m (t) dx, (4.3)
completed with initial condition d i m (0) = (u 0 , w i ) H 1 0 . This system is described by a locally Lipschitz continuous function with respect to d m . Indeed, applying the hypothesis (C2), the function ψ : R m → R defined by
ψ(x) = F ε 2 + m j=1 1 2 w j 2 H 1 0 x 2 j + 2(D(w j ) : D(u 0 ))x j + 1 2 u 0 2 H 1 0 ∀x ∈ R m
is locally Lipschitz. The Picard-Lindelöf theorem shows the existence of a solution for system (4.2).
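As an aside, the following minimal sketch (a crude one-dimensional toy problem, not the scheme analysed in this paper) illustrates how projecting a regularized equation on finitely many basis functions turns it into an ODE system for the coefficients d_i(t), which can then be integrated in time; the model equation, Carreau-type parameters, basis size, time step and initial datum are all assumptions of this illustration.

```python
import numpy as np

# Toy 1D analogue of a regularized Galerkin scheme: u_t = ((1 + F(|u_x|)) u_x)_x
# on (0,1) with homogeneous Dirichlet conditions, projected on m sine modes.
p, mu = 1.5, 0.1                       # assumed Carreau-type parameters
m = 8                                  # number of Galerkin modes
nx, dt, nt = 400, 2e-4, 500            # quadrature grid, time step, number of steps

x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
k = np.arange(1, m + 1)
w = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, x))            # orthonormal basis w_i
dw = np.sqrt(2.0) * np.pi * k[:, None] * np.cos(np.pi * np.outer(k, x))

def F(t):                              # regularized shear-thinning coefficient
    return (mu + t ** 2) ** ((p - 2) / 2)

d = np.zeros(m)
d[0] = 1.0                             # initial datum u_0 = w_1
for _ in range(nt):
    ux = d @ dw                        # u_x on the quadrature grid
    flux = (1.0 + F(np.abs(ux))) * ux  # Newtonian part + nonlinear part
    rhs = -(flux * dw).sum(axis=1) * dx    # d_i'(t) = -∫ flux * w_i' dx (weak form)
    d = d + dt * rhs                   # explicit Euler step

print("Galerkin coefficients at t = %.2f:" % (nt * dt), np.round(d, 4))
```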
Energy estimate and consequences
We recall that the solution u m,ε of (4.2) belongs to Span ((w i ) 1≤i≤m ), for (w i ) i∈N the basis of H 1 0,σ (Ω) which are the eigenfunctions of the Stokes operator in the homogeneous Dirichlet setting.
In order to clarify our presentation, we specify that we consider the following notion of solution.
Definition 4.1 (Solution of (4.2)). We say that u_{m,ε} ∈ L²((0,T), H¹_{0,σ}(Ω)), with u'_{m,ε} ∈ L²((0,T), H^{-1}(Ω)), is a weak solution of (4.2) if for every φ ∈ C^∞((0,T) × Ω) and for a.e. t ∈ (0,T) it satisfies
$$\langle u'_{m,\varepsilon}, \varphi\rangle + \int_\Omega D(u_{m,\varepsilon}) : D(\varphi)\, dx + \langle j'_\varepsilon(u_{m,\varepsilon}), \varphi\rangle - \int_\Omega (u_{m,\varepsilon}\cdot\nabla u_{m,\varepsilon})\cdot \varphi\, dx = \langle f, \varphi\rangle. \tag{4.4}$$
We also say that (4.4) is the formulation in space of the solution of (4.2) when the time is fixed. We point out that this definition makes sense since we are studying smooth finite dimensional Galerkin solutions. Then, in order to obtain weak limits into the Galerkin formulation, we establish some estimates.
Proposition 4.1. Assume that u m,ε is a solution of (4.2) in the sense of Definition 4.1. Then, there exists a positive constant C depending on p, Ω, N , T , u 0 L 2 (Ω) and f L 2 ((0,T ),H -1 (Ω)) such that the following estimates hold:
1. ‖u_{m,ε}‖²_{L^∞((0,T),L²_σ)} + (1/2) ‖u_{m,ε}‖²_{L²((0,T),H¹_{0,σ})} ≤ C ( ‖f‖²_{L²((0,T),H^{-1})} + ‖u_0‖²_{L²} );
2. ‖j'_ε(u_{m,ε})‖_{L^{4/N}((0,T),H^{-1})} ≤ C ( 1 + ‖f‖_{L²((0,T),H^{-1})} + ‖u_0‖_{L²} )^{p-1};
3. ‖u'_{m,ε}‖_{L^{4/N}((0,T),H^{-1})} ≤ C ( ‖f‖²_{L²((0,T),H^{-1})} + ‖u_0‖²_{L²} ) + C ( ‖f‖²_{L²((0,T),H^{-1})} + ‖u_0‖²_{L²} )² + C ( 1 + ‖f‖_{L²((0,T),H^{-1})} + ‖u_0‖_{L²} )^{p-1}.
Before the proof of Proposition 4.1, we state some useful results. We start by recalling a well known Gagliardo-Nirenberg inequality (for the proof, see, for instance, [START_REF] Nirenberg | On elliptic partial differential equations[END_REF] or [START_REF] Friedman | Partial differential equations of parabolic type[END_REF]).
Theorem 4.1 (Gagliardo-Nirenberg inequality on bounded Lipschitz domain). Assume that Ω is a bounded domain in R N with Lipschitz boundary. Moreover, assume that there exists a couple (q, r)
∈ [1, +∞]², θ ∈ [0, 1] and (l, k) ∈ N² such that:

$$\frac{1}{p} = \frac{k}{N} + \left(\frac{1}{r} - \frac{l}{N}\right)\theta + \frac{1-\theta}{q}, \qquad \frac{k}{l} \le \theta \le 1.$$
Then, there exists C := C(k, l, N, r, q, θ, Ω) > 0 such that the following inequality holds:
$$\|\nabla^k u\|_{L^p(\Omega)} \le C\, \|u\|^{\theta}_{W^{l,r}(\Omega)}\, \|u\|^{1-\theta}_{L^q(\Omega)}.$$
The following result formalizes some other properties. Lemma 4.1. Let X be a Banach space, and γ ≥ 1 2 . Then, the following inequality holds:
$$\forall (u, v) \in X^2, \qquad \|u + v\|^{\gamma}_X \le 2^{\gamma - \frac{1}{2}}\left(\|u\|^{\gamma}_X + \|v\|^{\gamma}_X\right).$$
Proof. Using the convexity of t ↦ t^{2γ} and the triangle inequality for the norm, we get:

$$\|u + v\|^{2\gamma}_X = 2^{2\gamma}\left\|\frac{u + v}{2}\right\|^{2\gamma}_X \le 2^{2\gamma - 1}\left(\|u\|^{2\gamma}_X + \|v\|^{2\gamma}_X\right).$$

Applying now the well-known inequality

$$\forall (a, b) \in [0, +\infty)^2, \qquad \sqrt{a + b} \le \sqrt{a} + \sqrt{b},$$

we get the result.
Proof of Proposition 4.1.
1. Setting ϕ = u m,ε in the weak formulation, we get:
1 2 d dt u m,ε 2
L 2 + ˆΩ|D(u m,ε )| 2 dx + j ′ ε (u m,ε ), u m,ε ≥0 -ˆΩ(u m,ε • ∇u m,ε ) • u m,ε dx =0 = f, u m,ε .
Using the well-known Korn's L 2 equality for divergence free vectors fields, we get
d dt u m,ε (t) 2 L 2 + u m,ε (t) 2 H 1 0 ≤ 2 f (t), u m,ε (t) .
Moreover, we have:
2 f (t), u m,ε (t) ≤ 2 f (t) 2 H -1 + 1 2 u m,ε (t) 2 H 1 0 .
Then, using the above inequality and integrating on (0, t) we get
u m,ε (t) 2 L 2 + 1 2 ˆt 0 u m,ε 2 H 1 0 dt ≤ 2 ˆt 0 f 2 H -1 dt + u 0 2 L 2 . (4.5)
Indeed, we recall that (P m (u 0 ), w i ) L 2 = (u 0 , P m w i ) L 2 = (u 0 , w i ) L 2 , and the conclusion follows. From now on, we will omit to detail this last part which is usual.
2. We have, using Cauchy-Schwarz's inequality and Korn's equality in the divergence free L 2 setting:
j ′ ε (u m,ε ), ϕ = ˆΩ F ε + |D(u m,ε )| 2 D(u m,ε ) : D(ϕ) dx ≤ 1 √ 2 ˆΩ F ε + |D(u m,ε )| 2 2 |D(u m,ε )| 2 dx 1 2 ϕ H 1 0 . (4.6)
From hypothesis (C4), setting A = Ω ∩ {|D(u m,ε )| ≤ t 0 } and B its complement in Ω, we obtain
ˆΩ F ε + |D(u m,ε )| 2 2 |D(u m,ε )| 2 dx = ˆA F ε + |D(u m,ε )| 2 2 |D(u m,ε )| 2 dx + ˆB F ε + |D(u m,ε )| 2 2 |D(u m,ε )| 2 dx.
Let's estimate these two integrals independently. By assumption (C3), we have that the application
t → t 2 F √ ε + t 2 2
is non-decreasing, and we obtain directly:
ˆA F ε + |D(u m,ε )| 2 2 |D(u m,ε )| 2 dx ≤ F ε + t 0 2 2 t 0 2 |A| ≤ F ε + t 0 2 2 t 0 2 |Ω| ≤ F 1 + t 0 2 2 1 + t 0 2 |Ω| ≤ C.
Then we have, using again (C4):
ˆB F ε + |D(u m,ε )| 2 2 |D(u m,ε )| 2 dx ≤ K ˆB |D(u m,ε )| 2 (ε + |D(u m,ε )| 2 ) 2-p dx ≤ K ˆB|D(u m,ε )| 2(p-1) dx ≤ K ˆB|∇u m,ε | 2(p-1) dx ≤ C u m,ε 2(p-1) H 1 0 ,
where we used Jensen's inequality in the concave setting with t → t p-1 in the last line. So, we obtain:
ˆΩ F ε + |D(u m,ε )| 2 2 |D(u m,ε )| 2 dx 1 2 ≤ C + C u m,ε 2(p-1) H 1 0 1 2 . (4.7)
Thus, combining the inequality (4.6)-(4.7) and using Lemma 4.1 with γ = 2 N , we get:
j ′ ε (u m,ε ) 4 N H -1 ≤ C + C u m,ε 4(p-1) N H 1 0 .
Therefore, integrating in time over (0, T ):
j ′ ε (u m,ε ) 4 N L 4 N ((0,T ),H -1 ) ≤ C + C u m,ε 4(p-1) N L 4(p-1) N ((0,T ),H 1 0 )
.
Then, since 0 < 4(p-1) N ≤ 2, we get, using the embbedding L 2 ֒→ L
4(p-1) N
and Lemma 4.1 with X := H 1 0 , q = 4(p-1)
N and p = 2 on u m,ε L 4(p-1) N ((0,T ),H 1 0 )
:
j ′ ε (u m,ε ) 4 N L 4 N ((0,T ),H -1 ) ≤ C + C u m,ε 4(p-1) N L 2 ((0,T ),H 1 0 ) .
Using the first point of the proposition for t = T , and since 4(p-1) N ≥ 0, we get:
j ′ ε (u m,ε ) 4 N L 4 N ((0,T ),H -1 ) ≤ C + C( f L 2 ((0,T ),H -1 ) + u 0 L 2 ) 4(p-1) N .
Then, using the exponent N 4 on both sides and applying once again Lemma 4.1 with γ = N 4 on the right-hand side in the ineuality above leads us to:
j ′ ε (u m,ε ) L 4 N ((0,T ),H -1 ) ≤ C + C( f L 2 ((0,T ),H -1 ) + u 0 L 2 ) p-1 .
This is the wished result.
3. From the weak formulation (4.4) we get
u ′ m,ε , ϕ = -ˆΩ D(u m,ε ) : D(ϕ) dx -j ′ ε (u m,ε ), ϕ + ˆΩ(u m,ε • ∇u m,ε ) • ϕ dx + f, ϕ . (4.8)
Let us point out that
ˆΩ D(u m,ε ) : D(ϕ) dx = 1 2 ˆΩ ∇u m,ε • ∇ϕ dx ≤ 1 2 u m,ε H 1 0 ϕ H 1 0 . (4.9)
Also, setting p = 4, k = 0, l = 1, r = q = 2, and s = 2 into Theorem 4.1, we get the existence of a positive constant C which only depends on N and Ω such that:
u L 4 ≤ C ∇u N 4 L 2 u 4-N 4 L 2
and, renoting C, we get:
u 2 L 4 ≤ C ∇u N 2 L 2 u 4-N 2 L 2 . ( 4
ˆΩ(u m,ε .∇u m,ε ).ϕ dx ≤ 1 2 ˆΩ|u m,ε | 2 .|∇ϕ| dx ≤ u m,ε 2
L 4 ∇ϕ L 2 ≤ C u m,ε 4-N 2 L 2 u m,ε N 2 H 1 0 ϕ H 1 0 . (4.11)
So, putting (4.9)-(4.11) and the second estimate of the Proposition 4.1 in (4.8), we obtain
u ′ m,ε , ϕ ≤ 1 2 u m,ε H 1 0 ϕ H 1 0 + j ′ ε (u m,ε ) H -1 ϕ H 1 0 + C u m,ε 4-N 2 L 2 u m,ε N 2 H 1 0 ϕ H 1 0 + f H -1 ϕ H 1 0
, and, therefore,
u ′ m,ε (t) H -1 ≤ 1 2 u m,ε H 1 0 + j ′ ε (u m,ε ) H -1 + C u m,ε 4-N 2 L 2 u m,ε N 2 H 1 0 + f H -1 .
Now, using the following convexity inequality
∀k ∈ N, ∀(x i ) 1≤i≤k ∈ (0, +∞) k , ∃C > 0, k i=1 x i 4 N ≤ C k i=1 x 4 N i
we get:
u ′ m,ε (t) 4 N H -1 ≤ C u m,ε 4 N H 1 0 + j ′ ε (u m,ε ) 4 N H -1 + u m,ε 8-2N N L 2 u m,ε 2 H 1 0 + f 4 N H -1 .
Since N ∈ {2, 3}, we have 4 N ≤ 2. Hence, integrating in time over (0, T ) and using the embedding
L 2 (Ω) ֒→ L 4 N (Ω): u ′ m,ε 4 N L 4 N ((0,T ),H -1 ) ≤ C u m,ε 4 N L 2 ((0,T ),H 1 0 ) + j ′ ε (u m,ε ) 4 N L 4 N ((0,T ),H -1 ) + C u m,ε 8-2N N L ∞ ((0,T ),L 2 ) u m,ε 4 N L 2 ((0,T ),H 1 0 ) + C f 4 N L 2 ((0,T ),H -1 ) .
Using the previously given convexity inequality and the first and second points of the proposition we obtain the desired result.
Weak convergence
We are now interested in the weak convergence with respect to the estimates proven in Section 4.2. Here, we prove such convergences by passing to the limit with respect to the parameter ε in a first time, then by passing to the limit with respect to the Galerkin parameter m.
Before proving Theorem 3.1, we establish several useful lemmas.
Lemma 4.2. Consider that ϕ ∈ L 2 ((0, T ), H 1 0 (Ω)), then there exists a constant C(ε, ϕ) > 0 which goes to zero as ε does, such that the following inequality holds:
j ε (ϕ) + C(ε, ϕ) ≥ j(ϕ), (4.12)
where j ε and j are defined by (2.2).
Proof. Recalling that the assumption (C3) states that t → tF (t) is increasing, we get:
j(ϕ) := ˆΩ ˆ|D(ϕ)| 0 sF (s) ds dx ≤ ˆΩ ˆ√ε 0 sF (s) ds dx + ˆΩ ˆ√ε+|D(ϕ)| √ ε sF (s) ds dx ≤ ε √ εF (ε)|Ω| + ˆΩ ˆ√2|D(ϕ)| √ ε+|D(ϕ)| 2 0 sF ( ε + s 2 ) ds dx ≤ ε √ εF (ε)|Ω| + ˆΩ ˆ2 1 2 ε 1 4 |D(ϕ)| 1 2 +|D(ϕ)| |D(ϕ)| sF ( ε + s 2 ) ds dx :=C(ε,ϕ) +j ε (ϕ),
which is the wished result.
Lemma 4.3.
Consider Ω an open bounded subset of R N with Lipschitz boundary, and a sequence
(w n ) n∈N such that w n ⇀ n→+∞ w in L 2 ((0, T ), H 1 0,σ (Ω)).
Then, for almost all (t, x) ∈ (0, T ) × Ω, the following inequality holds:
|D(w n )(t, x)| ≥ |D(w)(t, x)|.
Proof. Firstly, let us recall that since w n ⇀ w in L 2 ((0, T ), H 1 0 (Ω)) then, for all Lebesgue points t 0 ∈ (0, T ) and x 0 ∈ Ω, for all δ > and R > 0 small enough, we have w n ⇀ w in L 2 ((t 0 -δ, t 0 + δ), H 1 (B(x 0 , R)). Indeed, we have for all test function ϕ :
ˆT 0 ˆΩ ∇w n • ∇ϕ dt dx -→ n→+∞ ˆT 0 ˆΩ ∇w • ∇ϕ dt dx.
Hence, we can take ϕ, which belongs to C ∞ 0 ((t 0 -δ, t 0 + δ) × B(x 0 , R)) (up to arguing by density thereafter), satisfying:
∇ϕ = ∇ψ on (t 0 -δ, t 0 + δ) × B(x 0 , R) 0 on (0, T ) × Ω\(t 0 -δ, t 0 + δ) × B(x 0 , R)
and so this leads to:
ˆt0 +δ t 0 -δ ˆB(x 0 ,R) ∇w n • ∇ψ dt dx -→ n→+∞ ˆt0 +δ t 0 -δ ˆB(x 0 ,R) ∇w • ∇ψ dt dx.
That is w n ⇀ w in L 2 ((t 0 -δ, t 0 + δ), H 1 (B(x 0 , R))). Now, applying Korn's L 2 equality and using that fact we get for all Lebesgue point t 0 of (0, T ) and x 0 ∈ Ω that:
ˆt0 +δ t 0 -δ ˆB(x 0 ,R) |D(w n )| 2 dx dt ≥ ˆt0 +δ t 0 -δ ˆB(x 0 ,R) |D(w)| 2 dx dt.
Dividing each side by 2δ|B(x 0 , R)|, we get:
t 0 +δ t 0 -δ B(x 0 ,R) |D(w n )| 2 dx dt ≥ t 0 +δ t 0 -δ B(x 0 ,R) |D(w)| 2 dx dt
then letting (δ, R) → (0, 0) leads to the result, applying Lebesgue's differentiation theorem.
The following lemma gives the convergence of u m,ε when ε goes to zero.
v m ∈ L 2 (0, T ), H 1 0,σ (Ω) ∩L ∞ (0, T ), L 2 σ (Ω) with v ′ m ∈ L 4 N
(0, T ), H -1 σ (Ω) such that, up to subsequences:
1. u ′ m,ε ⇀ v ′ m in L 4 N (0, T ), H -1 σ (Ω) ; 2. u m,ε ⇀ v m in L 2 (0, T ), H 1 0,σ (Ω) ; 3. u m,ε → v m in L 2 ((0, T ), L 2 σ (Ω)); 4. u m,ε * ⇀ v m in L ∞ (0, T ), L 2 σ (Ω) . Moreover, v m satisfies, for all ψ ∈ C ∞ ((0, T ) × Ω): 1 2 v m (T ) 2 L 2 - 1 2 u 0 2 L 2 - ˆT 0 v ′ m , ψ dt + ˆT 0 ˆΩ D(v m ) : D(v m -ψ) dx dt + ˆT 0 j(v m ) -j(ψ) dt - ˆT 0 ˆΩ(v m • ∇v m ) • ψ dx dt ≤ ˆT 0 f, v m -ψ dt. (4.13)
Proof. The first and second points follow from the reflexivity of L 4 N (0, T ), H -1 σ (Ω) and L 2 ((0, T ), H 1 0,σ (Ω)) respectively, the third one from Aubin-Lions' Lemma, and the last one by Banach-Alaoglu-Bourbaki's theorem.
Then, since u m,ε is a solution of (4.2), it satisfies (4.4). Testing against ϕ = u m,ε -ψ in (4.4) for a test function ψ, we have:
u ′ m,ε , u m,ε -ψ + ˆΩ D(u m,ε ) : D(u m,ε -ψ) dx + j ′ ε (u m,ε ), u m,ε -ψ -ˆΩ(u m,ε • ∇u m,ε ) • ψ dx = f, u m,ε -ψ . (4.14)
Applying Lemma 2.1 leads to the well-known convexity inequality:
j ε (u m,ε ) -j ε (ψ) ≤ j ′ ε (u m,ε ), u m,ε -ψ . (4.15)
Using now Lemma 4.2 for u m,ε in (4.15), we get:
j(u m,ε ) -C(ε, u m,ε ) -j ε (ψ) ≤ j ′ ε (u m,ε
), u m,ε -ψ and then, by (C3) and Lemma 4.3 applied to u m,ε for the convergence toward v m , we get:
j(v m ) -C(ε, u m,ε ) -j ε (ψ) ≤ j ′ ε (u m,ε ), u m,ε -ψ .
Then, we can write (see [START_REF] Evans | Partial differential equations[END_REF] part 5.9. for details):
∀ϕ ∈ H 1 0,σ (Ω), ˆΩ u m,ε (T )ϕ dx = u m,ε (T ), ϕ = ˆT 0 u ′ m,ε (t), ϕ dt + u 0 , ϕ . (4.16)
Now, we also have, using Proposition 4.1:
ˆT 0 u ′ m,ε (t), ϕ dt + u 0 , ϕ ≤ u ′ m,ε L 4 N ((0,T ),H -1 ) ˆT 0 ϕ 4 4-N H 1 0 dt 4-N 4 + C u 0 L 2 ϕ H 1 0 ≤ C T 4-N 4 + u 0 L 2 ϕ H 1 0 .
In the above inequality we considered ϕ as a function in L ∞ ((0, T ), H 1 0 (Ω)), so it belongs to L 4 4-N ((0, T ), H 1 0 (Ω)) and its left-hand side defines a linear form over L 4 N ((0, T ), H -1 (Ω)). Also, the weak convergence leads to:
ˆT 0 u ′ m,ε (t), ϕ dt -→ ε→0 ˆT 0 v ′ m (t), ϕ dt. (4.17)
Finally, (4.16) and (4.17) imply, up to apply a dominated convergence theorem, to:
u m,ε (T ) ⇀ ε→0 v m (T ) in L 2 (Ω). (4.18)
Then, (4.18) implies:
lim ε→0 1 2 u m,ε (T ) 2 L 2 -P m (u 0 ) 2 L 2 ≥ 1 2 v m (T ) 2 L 2 -P m (u 0 ) 2 L 2 (4.19)
Also, from usual estimates (see [START_REF] Robinson | The three-dimensional Navier-Stokes equations. Number 157 in Cambridge studies in advanced mathematics[END_REF]Chapter 4]), since u m,ε → ε v m in L 2 ((0, T ), H 1 0,σ (Ω)), we have:
ˆT 0 ˆΩ|D(u m,ε )| 2 dx dt -→ ε→0 ˆT 0 ˆΩ|D(v m )| 2 dx dt (4.20)
and
ˆT 0 ˆΩ(u m,ε • ∇u m,ε ) • ψ dx dt -→ ε→0 ˆT 0 ˆΩ(v m • ∇v m ) • ψ dx dt. (4.21)
Integrating in time (4.14), and passing to the limit over ε, combining with (4.20), (4.21), and (4.19) leads to (4.13).
Arguing in the same way, we obtain the following result.
Lemma 4.5. Under the assumptions of Proposition 4.1, there exists u ∈ L 2 (0, T ),
H 1 0,σ (Ω) ∩L ∞ (0, T ), L 2 σ (Ω) with u ′ ∈ L 4 N (0, T ), H -1
σ (Ω) such that the function v m given by Lemma 4.4 verifies.
1. v ′ m ⇀ u ′ in L 4 N (0, T ), H -1 σ (Ω) ; 2. v m → u in L 2 (0, T ), L 2 σ (Ω) ; 3. v m ⇀ u in L 2 ((0, T ), H 1 0,σ (Ω)); 4. v m * ⇀ u in L ∞ (0, T ), L 2 σ (Ω)
. Moreover, we point out that u ∈ C w ((0, T ), L 2 σ (Ω)) from the above estimates (see [10, Proposition V.1.7. p.363] for details).
Proof of Theorem 3.1. We point out that the coefficients of v m given by Lemma 4.4 satisfy an ODE as (4.3) with ε = 0, then v m is still smooth in space and time. Moreover, we can take up again the method previously used, that is we can write :
∀ϕ ∈ H 1 0,σ (Ω), ˆΩ v m (T )ϕ dx = v m (T ), ϕ = ˆT 0 v ′ m (t), ϕ dt + P m (u 0 ), ϕ . (4.22)
Using Proposition 4.1 then leads to:
ˆT 0 v ′ m (t), ϕ dt + u 0 , ϕ ≤ v ′ m L 4 N ((0,T ),H -1 ) ˆT 0 ϕ 4 4-N H 1 0 dt 4-N 4 + C u 0 L 2 ϕ H 1 0 ≤ C T 4-N 4 + u 0 L 2 ϕ H 1 0 . (4.23)
Then, the weak convergence leads to: Then, (4.18) implies:
ˆT 0 v ′ m (t), ϕ dt -→ ε→0 ˆT 0 v ′ m (t), ϕ dt. ( 4
lim m→+∞ 1 2 v m (T ) 2 L 2 -P m (u 0 ) 2 L 2 ≥ 1 2 u(T ) 2 L 2 -u 0 2 L 2 (4.26)
Using once again usual estimates for Navier-Stokes equation, since v m -→ m→+∞ u in L 2 ((0, T ), H 1 0,σ (Ω)), we have:
ˆT 0 ˆΩ|D(v m )| 2 dx dt -→ m→+∞ ˆT 0 ˆΩ|D(u)| 2 dx dt (4.27)
and
ˆT 0 ˆΩ(v m • ∇v m ) • ψ dx dt -→ m→+∞ ˆT 0 ˆΩ(u • ∇u) • ψ dx dt. (4.28)
Applying lemma 4.3 with our assumption (C3) and passing to the limit over m, we get:
lim m→+∞ ˆT 0 j(v m ) dt ≥ j(u). ( 4
1 2 u(T ) 2 L 2 (Ω) -u 0 2 L 2 (Ω) - ˆT 0 u ′ , ψ dt + ˆT 0 ˆΩ D(u) : D(u -ψ) dx dt + ˆT 0 j(u) -j(ψ) dt - ˆT 0 ˆΩ(u • ∇u) • ψ dx dt ≤ ˆT 0 f, u -ψ dt (4.30)
which is the desired result, that is u is a weak solution of (1.1).
Existence of a finite stopping time for shear-thinning flows
In this part, we assume that hypotheses of the Theorem 3.2 are fulfilled. We are interested to show the existence of a finite stopping time of weak solutions of (1.1) for a viscosity coefficient F which behaves at least as a power-law model, following classical methods for proving such an extinction profile. In fact, one can observe that the nonlinearity proper to Ostald-De Waele or Bingham flows in some special cases implies the existence of such a finite stopping time, as it has already been proved for the two-dimensional Bingham equation under some assumptions in [START_REF] Jesús | Qualitative properties and approximation of solutions of Bingham flows: on the stabilization for large time and the geometry of the support[END_REF]. Moreover, the study of such a profile has been proved in the case of the parabolic p-Laplacian, see [15, section VII.2] for a bounded initial datum or [START_REF] Barbu | Controllability and stabilization of parabolic equations[END_REF]Theorem 4.6] for the case p = 1 and with initial datum belonging to L 2 (Ω). In this section, we will moreover assume for convenience that the force term belongs to L 2 ((0, T ), L 2 (Ω)) or, if necessary, we will identify the duality bracket •, • with the L 2 inner product. Note that this assumption is not necessary, the results remain valid for f ∈ L 2 ((0, T ), H -1 σ (Ω)).
Before proving the Theorem 3.2, we need to prove the following useful lemma.
Lemma 5.1. Assume that u ∈ L 6 (Ω). Then, for all r ∈ (0, 3), the following inequality holds:
$$\|u\|^r_{L^2(\Omega)} \le \frac{3 - r}{3}\, \|u\|^{\frac{4r}{3-r}}_{L^{\frac{3}{2}}(\Omega)} + \frac{r}{3}\, \|u\|^2_{L^6(\Omega)}. \tag{5.1}$$
Proof of Lemma 5.1. First, we write for s ∈ (0, 2):
u 2 L 2 (Ω) := ˆΩ|u| s |u| 2-s dx. (5.2)
Now, passing to the power r 2 and using Hölder's inequality in (5.2) leads to:
u r L 2 (Ω) ≤ ˆΩ|u| sp dx r 2p ˆΩ|u| (2-s)q dx r 2q . (5.3)
Finally, we apply Young's inequality into (5.3) to obtain:
u r L 2 (Ω) ≤ 1 a ˆΩ|u| sp dx ar p + 1 b ˆΩ|u| (2-s)q dx br q . ( 5.4)
Fixing successively p = 3/(2s), s = 4/3, and b = 3/r leads to (5.1), that is, the result is proved. We point out that it is necessary to have r ∈ (0, 3) in order to satisfy the necessary conditions in the inequalities used above: s ∈ (0, 3/2), r > 0, 1/p + 1/q = 1, 1/a + 1/b = 1.
Moreover, we recall the Nirenberg-Strauss inequality:
$$\|u\|_{L^{\frac{N}{N-1}}(\Omega)} \le C\, \|D(u)\|_{L^1(\Omega)}. \tag{5.5}$$
We are now able to prove Theorem 3.2. We point out that the proof being well-known in the two-dimensional case (see [START_REF] Jesús | Qualitative properties and approximation of solutions of Bingham flows: on the stabilization for large time and the geometry of the support[END_REF]) and can be in that last case a direct application of the Korn's inequality and Sobolev's embbedding theorem. For this reason, we only give a proof in the three-dimensional setting.
Proof of Theorem 3.2. Let u m,ǫ be the solution of (4.2). Choosing ϕ = u m,ε in (4.4) we get:
$$\langle u'_{m,\varepsilon}, u_{m,\varepsilon}\rangle + \int_\Omega |D(u_{m,\varepsilon})|^2\, dx + \langle j'_\varepsilon(u_{m,\varepsilon}), u_{m,\varepsilon}\rangle - \underbrace{\int_\Omega (u_{m,\varepsilon}\cdot\nabla u_{m,\varepsilon})\cdot u_{m,\varepsilon}\, dx}_{=0} = \langle f, u_{m,\varepsilon}\rangle. \tag{5.6}$$
Combining (2.2) and (3.1), we obtain
$$\langle j'_\varepsilon(u_{m,\varepsilon}), u_{m,\varepsilon}\rangle \ge \int_\Omega |D(u_{m,\varepsilon})|^2\, \big(\varepsilon + |D(u_{m,\varepsilon})|^2\big)^{\frac{p-2}{2}}\, dx \tag{5.7}$$
Now, observing that:
|D(u m,ε )| 2 = ε + |D(u m,ε )| 2 -ε and ε + |D(u m,ε )| 2 p-2 2 ≤ ε p-2 2 ,
and since:
|D(u m,ε )| p ≤ (ε + |D(u m,ε )| 2 ) p 2 ,
we get from (5.6) and (5.7):

$$\frac{1}{2}\frac{d}{dt}\|u_{m,\varepsilon}(t)\|^2_{L^2(\Omega)} + \int_\Omega |D(u_{m,\varepsilon})|^2\, dx + C\, \|D(u_{m,\varepsilon})\|^p_{L^p(\Omega)} \le \langle f, u_{m,\varepsilon}\rangle + |\Omega|\, \varepsilon^{\frac{p}{2}}. \tag{5.8}$$
Then, using successively the embedding L^p(Ω) ↪ L^1(Ω), assumption (3.1) and Theorem 5.2, we get from (5.8), for t ∈ (T_1, T):
$$\frac{1}{2}\frac{d}{dt}\|u_{m,\varepsilon}(t)\|^2_{L^2(\Omega)} + \int_\Omega |D(u_{m,\varepsilon})|^2\, dx + C\, \|u_{m,\varepsilon}\|^p_{L^{\frac{3}{2}}(\Omega)} \le |\Omega|\, \varepsilon^{\frac{p}{2}}. \tag{5.9}$$
Now, from the embedding {u ∈ H¹₀(Ω) : ‖D(u)‖_{L²(Ω)} < +∞} ↪ L⁶(Ω), which can be obtained using Korn's L² equality and the Sobolev embedding H¹₀(Ω) ↪ L⁶(Ω), we get from (5.9):
1 2 d dt u m,ε (t) 2 L 2 (Ω) + C u m,ε 2
L 6 (Ω) + u m,ε p L 3 2 (Ω)
(5.10)
Conclusions and problems remaining open
In this paper, we have been able to establish the existence of variational inequality solutions for a large class of generalized Newtonian flows in both two-and three-dimensional settings, including threshold fluid flows. This result completes that of [START_REF] Duvaut | Les inéquations en mécanique et en physique[END_REF] and should be related to that obtained in [START_REF] Abbatiello | On a class of generalized solutions to equations describing incompressible viscous fluids[END_REF] in the context of dissipative solutions. The issue of global regularity in the three-dimensional setting remains open, the question being still widely open for the Leray-Hopf (and Leray) solutions of the incompressible Navier-Stokes equations.
An important question would be to know if it is possible to control this extinction profile, the question of the control of solutions of quasilinear parabolic equations of p-Laplacian type remaining currently open.
A Some examples of functions F
In this section, we give some examples of functions F satisfying the conditions (C1)-(C4), most of which correspond to models of non-Newtonian coherent flows in the physical sense. This is the case for quasi-Newtonian fluids such as blood, threshold fluids such as mayonnaise, or more generally in the case of polymeric liquids.
1. Firstly, in order to describe power-law fluids (also known as Ostwald-De Waele or Norton-Hoff flows), we can consider functions (F_p)_{1<p<2} given by F_p : (0, +∞) → (0, +∞), t ↦ t^{p-2}.
2. Considering functions (F_{µ,p})_{µ>0, p∈[1,2]} of the form F_{µ,p} : (0, +∞) → (0, +∞), t ↦ (µ + t²)^{(p-2)/2} leads to Carreau flows.
3. Cross fluids are obtained by choosing functions (F_{γ,p})_{γ>0, p∈[1,2]} given by F_{γ,p} : (0, +∞) → (0, +∞), t ↦ (γ + t)^{p-2}.
4. Another possible choice is to take functions (F_{p,β,γ}) given by F_{p,β,γ} : (0, +∞) → (0, +∞) with

F_{p,β,γ}(t) = t^{p-2} (log(1 + t))^{-β} if t ∈ (0, γ],  and  F_{p,β,γ}(t) = (log(1 + γ))^{-β} t^{p-2} if t ∈ (γ, +∞),

for 1 < p < 2 and some β, γ > 0 with γ small enough.
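As a quick sanity check (an illustration added here, not part of the original appendix), the following sketch evaluates some of these viscosity coefficients numerically and verifies on a grid that t ↦ tF(t) is non-decreasing, i.e. condition (C3); all parameter values are arbitrary choices satisfying the stated constraints.

```python
import numpy as np

# Example viscosity coefficients from this appendix, with arbitrary parameters.
p, mu, gamma, beta = 1.5, 0.1, 0.5, 0.3

def F_power(t):        # Ostwald-De Waele (power law)
    return t ** (p - 2)

def F_carreau(t):      # Carreau
    return (mu + t ** 2) ** ((p - 2) / 2)

def F_cross(t):        # Cross-type
    return (gamma + t) ** (p - 2)

def F_log(t):          # fourth example, piecewise in t
    return np.where(t <= gamma,
                    t ** (p - 2) * np.log(1 + t) ** (-beta),
                    np.log(1 + gamma) ** (-beta) * t ** (p - 2))

t = np.linspace(1e-6, 10.0, 200_000)
for name, F in [("power law", F_power), ("Carreau", F_carreau),
                ("Cross", F_cross), ("log-corrected", F_log)]:
    g = t * F(t)       # the quantity appearing in condition (C3)
    print(name, "-> t*F(t) non-decreasing on the grid:",
          bool(np.all(np.diff(g) >= -1e-12)))
```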
B An energy equality
The purpose of this appendix is to establish, for some F and in the two-dimensional case, a energy equality (non-constructive) for the weak solutions of (1.1), which are continuous in time from Aubin-Lions's lemma.
More exactly, we have the following result.
Note that it is sometimes hard to establsih such equalities in the non-Newtonian setting, we quote as example the work of [START_REF] Beirão | On the energy equality for solutions to Newtonian and non-Newtonian fluids[END_REF] on the subject.
Proposition B.1. Assume that N = 2 and that u is a weak solution to (1.1) satisfying assumptions of Theorem 3.1. Moreover, assume that there exists θ > 0 such that for all t ∈ (0, T ), we have:
G(t) = θt 2 F (θt). (B.1)
Then we have for almost all t ∈ (0, T ) that there exists η(t) ∈ (0, 1] such that:
$$\frac{1}{2}\|u(t)\|^2_{L^2} + \frac{1}{2}\|u\|^2_{L^2((0,T),H^1_0)} + \int_0^t \frac{1}{\eta(s)}\,\langle j'(\eta(s)u), u\rangle\, ds = \frac{1}{2}\|u_0\|^2_{L^2} + \int_0^t \langle f, u\rangle\, ds.$$
Proof. Testing against χ (0,t) (ϕ -u m,ε ) in the weak formulation (4.4), for a well-chosen t and passing to the weak limit over m and ǫ, by Lemma 4.5, since v m (t) L 2 (Ω) ≥ u(t) L 2 (Ω) , we have for almost all time t ∈ (0, T ): Using now ϕ = (1 + δ)v m in Lemma 4.5 (for every δ > 0) as a test function, then dividing each side of the obtained inequality by δ > 0 and passing to the limit with respect to the parameter δ → 0 we obtain, after passing to the limit over m recalling that G admits a Gâteaux-derivative, that: Remark 3. Clearly, the case of threshold flows is contained into the previous proposition, since we can write, for t > 0: ˆt 0 s p-1 ds = 1 p t p = θt 2 (θt) p-2 with θ = p -1 p-1 if 1 < p < 2, θ = 1 if p = 1.
ˆt 0 u ′ , ϕ ds + 1 2 u 0 2 L 2 (Ω) -u(t
. 10 )
10 Then, we have: Study of generalized Newtonian fluid flows
Lemma 4 . 4 .
44 With the hypotheses of Proposition 4.1 there exists
. 24 )
24 Finally, (4.22) and (4.24) imply: v m (T ) ⇀ ε→0 u(T ) in L 2 (Ω). (4.25)
Lemma 5 . 2 ([ 36 ,
5236 Theorem 1]). Let Ω be an open bounded subset of R N with Lipschitz boundary, then there exists a constant C > 0 which depends of N and Ω such that for all u ∈ W
) 2 L 2 2 L 2 ( 2 L 2 (
222222 (Ω) + ˆt 0 ˆΩ D(u) : D(ϕ -u) dx ds -ˆt 0 ˆΩ(u • ∇u) • ϕ dx ds + ˆt 0 j(ϕ) -j(u) ds ≥ ˆt 0 f, ϕ -u ds. (B.2)Testing against ϕ = 0 in (B.2), we get:Ω) -u(t) 2 L 2 (Ω) + ˆt 0 ˆΩ|D(u)| 2 dx ds + ˆt 0 ˆΩ G(|D(u)|) dx ds ≤ ˆt 0 f, u ds. (B.3)Now, we write:∀r ∈ (0, +∞), G(r) = ˆr 0 sF (s) ds = r 2 ˆ1 0 yF (ry) dy.We point out that the result still holds for r = 0. Combining the above equality and (B.1), we deduce that:ˆΩ G(|D(u)|) dx = ˆΩ θ|D(u)| 2 F (θ|D(u)|) dx = 1 θ j ′ (θu), θu . So, (B.3) leads to: Ω) -u(t) 2 L 2 (Ω) + ˆt 0 ˆΩ|D(u)| 2 dxds + 1 θ ˆt 0 j ′ (θu), θu ds ≤ ˆt 0 f, u ds. (B.4)
1 2 u 0 2 L 2 ( 2 u 0 2 L 2
22222 Ω) -u(t) 2 L 2 (Ω) + ˆt 0 ˆΩ|D(u)| 2 dx ds + ˆt 0 j ′ (u), u ds ≥ ˆt 0 f, u ds. (B.5)Finally, using once again the assumption (C2), we obtain from the inequalities (B.4) and (B.5), that there exists η ∈ [θ, 1] such that:1 (Ω) -u(t) 2 L 2 (Ω) + ˆt 0 ˆΩ|D(u)| 2 dx ds + ˆt 0 1 η j ′ (ηu), ηu ds -ˆt 0 f, u ds = 0,which is the wished result.
Now, we can apply Lemma 5.1 with r = 3p p+4 to get:
(5.11)
Then, (5.11) combined with (5.10) leads to, for t ∈ (T 1 , T ):
(5.12)
Assume that for all t ∈ (T 1 , T ), we get that u m,ε L 2 (Ω) ≥ 1 µ |Ω|ε p 2 , for some µ > 0 small enough. Then dividing by u m,ε (t) L 2 (Ω) the both sides of (5.12), we obtain for almost all t ∈ (T 1 , T ):
which is equivalent to:
Up to take µ < C, integrating over (T 1 , t), we have for almost all t ∈ (T 1 , T ):
which leads to ‖u_{m,ε}(t)‖^{(8-p)/(p+4)}_{L²(Ω)} < 0 for t large enough, up to taking T large enough. This is a contradiction, so there exists a time
, and so letting ε → 0 leads to v m (t) L 2 (Ω) = 0 for all t ∈ [T 0 , T ). Indeed, assume that t s is the time for which we have u m,ε (t s ) L 2 (Ω) ≤ 1 µ |Ω|ε p 2 . Then one can verify that t → u m,ε (t) 2 L 2 (Ω) is non-increasing on [max(t s , T 1 ), T ), and we get the result setting T 0 = max(t s , T 1 ).
So we get that there exists t ∈ [T 0 , T ) such that u( t) L 2 (Ω) = 0. If it was not the case, we would have:
This would be a contradiction with the uniform bound of (v m ) m∈N in L 2 ((T 0 , T ), L 2 σ (Ω)) and the convergence v m ⇀ u in L 2 ((T 0 , T ), L 2 σ (Ω)). Also, there exists t in [T 0 , T ) such that u( t) L 2 (Ω) = 0, and, arguing as we have already done for (v m ) m∈N , we have that t → u(t) 2 L 2 (Ω) is non-increasing on [ t, T ). Finally, since u ∈ C w ((0, T ), L 2 σ (Ω)), we get: ∀t ∈ [ t, T ), u(t) L 2 (Ω) = 0, which is the desired result and concludes the proof. |
04110772 | en | [
"spi.gciv"
] | 2024/03/04 16:41:24 | 2017 | https://hal.science/hal-04110772/file/Houshmandrad-Article%202-HAL.pdf | Mohammad Hosein Houshmandrad
email: [email protected]
Keywords: developing countries, project Delivery system, contracting companies, infrastructure plans, technical cooperation
à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Identifying the necessary indicators to confirm the qualification of the executive contract of infrastructure plans
Mohammad Hosein Houshmandrad, Mohammad Beizapour, Rahim Aminzadeh
To cite this version:
Mohammad Hosein Houshmandrad, Mohammad Beizapour, Rahim Aminzadeh. Identifying the necessary indicators to confirm the qualification of the executive contract of infrastructure plans. 5th national conference on applied research in civil engineering, architecture and urban management, Dec 2017, Tehran, Iran. hal-04110772
Identifying the necessary indicators to confirm the qualification of the executive contract of infrastructure plans. Mohammad Hosein Houshmandrad, Mohammad Beizapour, Rahim Aminzadeh
1.Introduction
The advancements resulting from the sustainable economic and social development of societies have caused vast changes in project management. The construction industry, as well as engineering projects in the oil, gas and petrochemical industries, hydroelectric power plants, dams, etc., needs continuous development because of the increasing needs of communities and the growing weight of time, cost and quality in evaluations and decisions. Achieving the goals of a project therefore requires a clear definition of procedures, actions, the sequence of events, the contractual relationships, and the limits of the responsibilities and obligations of each of the parties involved. One of the main indicators of sustainable development in today's world is that countries possess reliable infrastructures, and relying on infrastructure facilities and parent industries is one of the main conditions for achieving this. Technical knowledge and financing are the two basic resources, and the main bottlenecks, for the implementation of infrastructure projects and parent industries, and they constantly create challenges for the employer: 1- Financing the construction of infrastructure projects sometimes puts the employer under such pressure that the employer is forced to borrow from outside the organization; if borrowing is not possible, the infrastructure project is not implemented or remains incomplete.
2-
The next influential source is the necessary technical knowledge to carry out and develop infrastructure facilities, which is transferred to the contractor if the necessary knowledge exists in the employer's system. However, in case of lack of knowledge of technical knowledge, he is forced to acquire or transfer it from outside the organization, and if this transfer or acquisition of knowledge is not realized, the employer is bound to terminate the project. Therefore, in order to get rid of this challenge in infrastructure projects, the employer entrusts the provision of these two items to the contracting company implementing infrastructure projects by using new contract methods, therefore the role of the contractor in these projects is of particular importance. that the ability or weakness of the contractor implementing these projects has a significant impact on achieving goals in the development path. Therefore, according to the aforementioned propositions and the importance of the role of the contracting company or group of contracting companies (consortium), we are looking for indicators to determine the ability of the contracting company (domestic or foreign) before referring the work [START_REF] Fuomshi | Drafting and writing of all types of contracts[END_REF][START_REF] Arjmand | Meters and estimates and the basic principles of contracting[END_REF][START_REF] Parisi | Research method in science[END_REF][START_REF] Azar | Applied decision making: MADM approach[END_REF][START_REF] Azar | Fuzzy management science[END_REF][START_REF] Maleki | Management of contractual affairs of construction projects[END_REF][START_REF] Houshmandrad | Choosing the most optimal type of contract in infrastructure projects from the employer's point of view in the field of technical participation and getting rid of the pressures of liquidity provision[END_REF][START_REF] Houshmand | employer in the field of technical assistance and relief from the pressures of funding[END_REF].
PROMETHEE technique
In this part, PROMETHEE technique is introduced based on J.P. Burns and B. Marshall's article .PROMETHEE-1 (partial ranking) and PROMETHEE-2 (full ranking) by J. P. Burns presented for the first time in the conference organized by R. Nadeo and M. Landry was presented at the University of Lowell, Quebec, Canada. In the same year, many applications of this method in the field of health care by J. Davignon was used. A few years later, J. P. Burns and B. Marshall presented PROMETHEE-3 (ranking based on distances) and PROMETHEE-4 (continuous mode). In 1988, the same authors proposed the interactive visual module Gaia, which is a stunning graphical representation to support the PROMETHEE methodology. In 1992 and 1994, J. P. Burns and B. In addition to the previous methods, Marshall proposed two interesting appendices called PROMETHEE-5 (MCDA including partial constraints) and PROMETHEE-6 (representation of the human mind). So far, a significant number of successful applications of PROMETHEE methodology in various fields such as banking, industrial location, human resource planning, water resources management, investment, medicine, chemistry, health care [START_REF] Mobasseri | Impact of driving style on fuel consumption[END_REF][START_REF] Mobasseri | Traffic noise and it's measurement methods[END_REF][START_REF] Mobasseri | Intelligent Circuit Application for Detecting the amount of Air in Automobile Tires[END_REF][START_REF] Mobasseri | A Comparative Study Between ABS and Disc Brake System Using Finite Element Method[END_REF], tourism, dynamic management, etc. have been discussed. The success of this method is mainly due to its mathematical properties and especially the ease of working with them. Consider the following multi-criteria problem: max So that A is a set of possible options and It is a set of evaluation criteria. Here, the maximum (Max) and minimization (Min) of some criteria are not considered, what is expected from the decision maker is to identify an option that optimizes all the criteria. Below is the basic information of the multi-criteria problem, including 1 evaluation [START_REF] Maleki | Management of contractual affairs of construction projects[END_REF][START_REF] Bordogna | A fuzzy linguistic approach generalizing boolean information retrieval: A model and its evaluation[END_REF][START_REF] Collier K | Managing construction contract[END_REF][START_REF] Bucklet | fuzzy hierarchical analysis. fuzzy sets and systems[END_REF].
Table 1. Evaluation table
Providing a solution for a multi-criteria problem does not only depend on the basic information such as the evaluation table, but also depends on the decision maker himself. There is no absolute optimal solution! The best consensus solution (middle) depends on the personal preferences of each decision maker, which are present in the "mind" of each decision maker.
Consequently, peripheral information is needed to represent these subjective preferences and to guide the decision maker toward a useful decision. The usual dominance relation associated with multi-criteria problems of type (1) is defined as follows.
For every pair of options (a, b): aPb (a is preferred to b) if a is at least as good as b on every criterion and strictly better on at least one; aIb (a and b are indifferent) if a and b have equal evaluations on every criterion; and aRb (a and b are incomparable) otherwise. These definitions are very natural: an option dominates another if it is better or at least equal on all the criteria. When one option is better on some criterion s and the other option is better on another criterion r, it is not possible to choose the best option without additional information; in this case, the two options are incomparable.
Options that are not dominated by any other option are called efficient solutions. In a given evaluation table for a specific multi-criteria problem, usually most of the options (and often all of them) are efficient. The dominance relation built from P and I is therefore usually rather poor: when one option is better from the point of view of one criterion, another option is often better on other criteria. As a result, incomparability holds for most pairwise comparisons, in which case it is not possible to make a decision without additional information. For example, this information can include:
1-interactions between criteria; 2-The value function that combines all the criteria in a single function to obtain a single criteria problem for which there is an optimal solution; 3-Weights that show the relative importance of the criterion; 4-the preferences related to each pairwise comparison within each criterion; 5-Thresholds that establish the preferred limits.
So far, a large number of multi-criteria decision support methods have been proposed, all of these methods are derived from the same evaluation table, but they are different in terms of additional information needed. PROMETHEE methods require clear supplementary information and this information is easily obtained and understandable by decision makers and analysts.
The goal of all multi-criteria methods is to enrich the dominance relation, for example by reducing the number of incomparabilities (R). When a utility function is built, the multi-criteria problem becomes a single-criterion problem for which there is an optimal solution. This seems like an overstatement because it is based on very strong assumptions (are all our decisions really based on a utility function defined somewhere in our brains?) and it completely changes the structure of the decision problem. For this reason, B. Roy suggested building outranking relations that include only realistic enrichments of the dominance relation. PROMETHEE methods belong to this family of outranking methods.
In order to make a suitable multi-criteria method, some necessary items are listed below: Prerequisite 1: The range of deviations (differences) between the evaluations of the options for each criterion should be considered: This information can be easily calculated, but it is not used in efficiency theory. When these deviations are imperceptible, the communication of mastery can be enriched.
Necessary condition 2: Since the evaluations of gj(a) of each criterion are expressed according to its own unit, scale effects should be completely eliminated. Obtaining results based on the scales provided by the evaluations is not acceptable. Unfortunately, not all multi-criteria guidelines meet this condition.
Necessary condition 3: In the case of pairwise comparisons, a suitable multi-criteria method should be able to report, for each pair of options, whether one is preferable to the other, whether they are indifferent, or whether they are incomparable. Of course, the goal is to reduce the number of incomparabilities, but not when they are not realistic; a procedure respecting this may be considered fair. When all the incomparabilities of a particular procedure are systematically removed by the information provided, that procedure can be questioned.
Prerequisite 4: Different multi-criteria methods require different additional information and perform different calculation instructions, so that their proposed solutions are different. Therefore, it is important to develop methods that are understandable for decision makers. Prerequisite 5: An appropriate guideline should not include technical parameters that are not important to the decision maker. Requirement 6: An appropriate method should provide information about the conflicting nature of criteria.
Requirement 7: Most multi-criteria methods allocate relative importance weights for each criterion. These weights represent an important part of the decision maker's mentality. Assigning these weights to criteria is not an easy task and usually decision makers strongly dislike this practice. A suitable method should include sensitivity tools (sensitivity analysis) to easily test different sets of weights. PROMETHEE methods and the visual interactive module related to Gaia take into account all these conditions. On the other hand, some mathematical properties that multi-criteria problems may have can also be considered. PROMETHEE methods are designed to handle multi-criteria problems as well as related evaluation tables. Additional information required to implement PROMETHEE is clear and understandable for both analysts and decision makers, this information includes information between criteria and information within each criterion: According to the understanding of the set {wj, j=1,2,...,k}, the weights of the relative importance of different criteria should be completed. These weights are non-negative numbers and are independent of the measurement unit of each criterion. There is nothing wrong with considering the weights as normal because [START_REF] Azar | Fuzzy management science[END_REF][START_REF] Maleki | Management of contractual affairs of construction projects[END_REF][START_REF] Bordogna | A fuzzy linguistic approach generalizing boolean information retrieval: A model and its evaluation[END_REF][START_REF] Collier K | Managing construction contract[END_REF][START_REF] Bucklet | fuzzy hierarchical analysis. fuzzy sets and systems[END_REF][START_REF] Dalkey | An experimental application of the Delphi method to the use of experts[END_REF]:
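To make the outranking computation concrete, the sketch below implements the PROMETHEE II net-flow calculation for a small, made-up decision matrix. The alternatives, criterion values, weights and the linear (type III) preference thresholds are illustrative assumptions only and are not taken from this study.

```python
import numpy as np

# Minimal PROMETHEE II sketch: net outranking flows for an illustrative
# decision matrix (4 contractors x 3 criteria, all criteria to be maximized).
X = np.array([[7.0, 6.0, 8.0],
              [9.0, 5.0, 6.0],
              [6.0, 8.0, 7.0],
              [8.0, 7.0, 9.0]])
w = np.array([0.5, 0.3, 0.2])      # normalized criterion weights (sum to 1)
p = np.array([2.0, 2.0, 2.0])      # preference thresholds (linear preference function)

n = X.shape[0]
pi = np.zeros((n, n))              # aggregated preference indices pi(a, b)
for a in range(n):
    for b in range(n):
        d = X[a] - X[b]
        pref = np.clip(d / p, 0.0, 1.0)   # linear (type III) preference function
        pi[a, b] = np.dot(w, pref)

phi_plus = pi.sum(axis=1) / (n - 1)     # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)    # negative outranking flow
phi = phi_plus - phi_minus              # net flow: higher means better ranked
print(np.argsort(-phi) + 1)             # ranking of the alternatives
```

Alternatives with a larger net flow are ranked higher, which is the ordering rule used for the indicators in the Results section.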
Results
Data analysis in this research is done by the following three techniques, which are:
• Receiving feedback and statistical analysis of answers to questions in a group and finalizing the effective criteria in evaluating and ranking contractors implementing infrastructure projects, using the fuzzy Delphi technique [START_REF] Dalkey | An experimental application of the Delphi method to the use of experts[END_REF];
• Carrying out paired comparisons and weighting the selected criteria using the fuzzy hierarchical analysis technique;
• Final ranking of the indicators with the help of the PROMETHEE technique.
Using library studies and experts' opinions, about 50 related indicators were identified for assessing the competence of contracting companies implementing infrastructure projects, and finally 32 of them were approved, as described in the following table. The results of the indicator ranking with the PROMETHEE method show that the indicators with a larger net outranking flow have higher priority. Therefore, the definition of the acceptable limit for confirming the indicators is as described in the table below, and the rejection or acceptance status of the criteria is reported in the final table.
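As an illustration of the screening step carried out with the fuzzy Delphi technique, the following minimal sketch aggregates triangular fuzzy ratings given by several experts and retains an indicator when the defuzzified score reaches a threshold. The ratings, the aggregation rule (min, geometric mean, max) and the threshold of 7 are assumptions made for the example only; they are not the values used in this study.

```python
import numpy as np

# Minimal fuzzy Delphi screening sketch. Each expert rates a candidate indicator
# with a triangular fuzzy number (l, m, u) on a 0-10 scale; the ratings are
# aggregated as (min l, geometric mean of m, max u) and defuzzified by the centroid.
ratings = np.array([[7, 8, 9],
                    [6, 8, 10],
                    [8, 9, 10],
                    [5, 7, 9]], dtype=float)   # one row per expert

l = ratings[:, 0].min()
m = np.exp(np.log(ratings[:, 1]).mean())       # geometric mean of the modal values
u = ratings[:, 2].max()
crisp = (l + m + u) / 3.0                       # simple centroid defuzzification

threshold = 7.0                                 # illustrative acceptance threshold
print(f"aggregated TFN = ({l:.1f}, {m:.2f}, {u:.1f}), crisp = {crisp:.2f}")
print("indicator retained" if crisp >= threshold else "indicator rejected")
```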
Conclusion
As a result, after collecting the opinions of the elites and summarizing the indicators and performing the analysis, 12 indicators were finally accepted. Therefore, the criterion for identifying and distinguishing the contracting company qualified to carry out the country's infrastructure plans (projects) is as follows:
1- Technical dimension
• Having a comprehensive project planning and control system
• How to comply with standards and technical specifications in previous projects
• How to implement the previous projects in terms of quality, cost and expected schedule
• Condition and usability of equipment and machines
2- Experimental dimension
• Continuous communication and coordination with the contractor and the monitoring system
3- Organization and management dimension
• Level of education, field of study and executive experience of expert staff and key elements
4- Good record and credibility
• Acquiring various qualifications from official and reliable organizations and bodies
• Credit and goodwill of the company and key personnel
5- Financial qualification dimension
• Financial ability and support
6- Proposed price dimension
• Price
Table 2. Final relative importance weights of the criteria. The highest weight corresponds to the most important criterion.
Table No. 3 - Contractor rating criteria

Row | Aspect | Index
1 | Technical | Having a comprehensive project planning and control system
2 | Technical | How to comply with standards and technical specifications in previous projects
3 | Technical | How to implement the previous projects in terms of quality, cost and schedule
4 | Technical | Compliance with relevant current laws such as environmental, labor and social security laws
5 | Technical | Compliance with the safety and protection instructions of the workshop
6 | Technical | Condition and usability of equipment and machines
7 | Technical | Complete and timely equipment of the workshop
8 | Experimental | Executive experience in the desired field and field of work
9 | Experimental | Being a local contractor or having experience at the project site
10 | Experimental | Creativity and innovation in previous projects
11 | Experimental | Application of appropriate executive methods and organization and order in workshop affairs
12 | Experimental | Classification of workshop documents and documentation of previous works
13 | Experimental | Continuous communication and coordination with the contractor and the monitoring system [17]
14 | Experimental | The scale of completed projects
15 | Experimental | Volume and type of works referred to the subcontractor
16 | Experimental | Use of new technologies
17 | Organization and management | Efficient management and suitable management system for work execution
18 | Organization and management | Stability of board members and specialized staff
19 | Organization and management | The level of education, field of study and executive experience of the expert staff and key elements
20 | Organization and management | Presentation of articles in conventions, conferences and specialized magazines
21 | Organization and management | Works, compositions and scientific and technical research
22 | Organization and management | Continuous training of employees
23 | Organization and management | Organization size
24 | Organization and management | Organizational behavior of the contractor
25 | Good history and credibility | Acquiring various qualifications from official and reliable organizations and bodies
26 | Good history and credibility | Receiving official awards and recognitions
27 | Good history and credibility | Good record in previous jobs
28 | Good history and credibility | Credit and reputation of the company and key personnel
29 | Financial qualification | Financial ability and support
30 | Financial qualification | Timely payment of wages for employees, workshop workers and subcontractors
31 | Financial qualification | Insuring all facilities, equipment and personnel against possible accidents
32 | Financial qualification | How to analyze the price
33 | Price | Proposed price
Table No. 4 - Definition of the acceptable limit
Columns: determining the acceptable score limit | criterion selection factor | scoring range of each criterion | score of the lowest criterion value | score of the highest criterion value
Table No. 5 - The final status of the criteria

Row | Index | Final average | Condition
8 | Executive experience in the desired field and field of work | 9.25 | confirmation
9 | Being a local contractor or having experience at the project site | 9.25 | confirmation
10 | Creativity and innovation in previous projects | 5.5 | rejection
11 | Application of appropriate executive methods and organization and order in workshop affairs | 5.5 | rejection
12 | Classification of workshop documents and documentation of previous works | 5.5 | rejection
13 | Continuous communication and coordination with the contractor and the monitoring system | 9.25 | confirmation
14 | The scale of completed projects | 5.5 | rejection
15 | Volume and type of works referred to the subcontractor | 5.5 | rejection
16 | Use of new technologies | 5.5 | rejection
17 | Efficient management and suitable management system for work execution | 9.25 | confirmation
18 | Stability of board members and specialized staff | 3.66 | rejection
19 | The level of education, field of study and executive experience of the expert staff and key elements | 9.25 | confirmation
20 | Presentation of articles in conventions, conferences and specialized magazines | 4.66 | rejection
21 | Works, compositions and scientific and technical research | 3.25 | rejection
22 | Continuous training of employees | 5.5 | rejection
23 | Organization size | 5.5 | rejection
(The entries for rows 1-7 and 24-33 are interleaved and garbled in the source; their individual final averages and statuses cannot be reliably recovered here.)
04110858 | en | [
"math"
] | 2024/03/04 16:41:24 | 2023 | https://hal.science/hal-04110858/file/bm-submission.pdf | Térence Bayen
email: [email protected]
F Mairet
Approximation of chattering arcs in optimal control problems governed by mono-input affine control systems
Keywords: optimal control, singular arcs, chattering, numerical methods
In this paper, we consider a general Mayer optimal control problem governed by a mono-input affine control system whose optimal solution involves a second-order singular arc (leading to chattering). The objective of the paper is to present a numerical scheme to approach the chattering control by controls with a simpler structure (concatenation of bang-bang controls with a finite number of switching times and first-order singular arcs). Doing so, we consider a sequence of vector fields converging to the drift such that the associated optimal control problems involve only first-order singular arcs (and thus, optimal controls necessarily have a finite number of bang arcs). Up to a subsequence, we prove convergence of the sequence of extremals to an extremal of the original optimal control problem as well as convergence of the value functions. Next, we consider several examples of problems involving chattering. For each of them, we give an explicit family of approximated optimal control problems whose solutions involve bang arcs and first-order singular arcs. This allows us to approximate numerically solutions (with chattering) to these original optimal control problems.
Introduction
In numerous optimal control problems arising in various areas such as aerospace, biology, mechanics, the optimal solution involves what is called Fuller's phenomenon. In that case, we also say that chattering occurs. This property means that the optimal control possesses an infinite number of extremal bang arcs over a finite time interval. This particularly interesting behavior of optimal controls was originally discovered by A.T. Fuller in the seminal works [START_REF] Fuller | Study of an optimum non-linear control system[END_REF][START_REF] Fuller | Further study of an optimum non-linear control system[END_REF][START_REF] Fuller | Relay control systems optimized for various performance criteria[END_REF]. Since then, many properties related to chattering were studied by several authors (see, e.g., [START_REF] Ryan | Singular optimal controls for second-order saturating systems[END_REF] and the monograph [START_REF] Zelikin | Theory of chattering control, Systems & Control: Foundations & Applications[END_REF] giving a large panel on the chattering phenomenon).
Fuller's phenomenon arises whenever an optimal control problem presents a second order singular arc. Recall that in geometric optimal control theory, a singular singular arc is a time interval on which the optimal control takes values within the interior of the admissible control set. The occurrence of such arcs is of utmost importance to compute an optimal solution since it completely determines the optimal synthesis. This situation is encountered typically when computing minimum time syntheses in the two-dimensional setting in presence of a turnpike singular arc (see [START_REF] Boscain | Optimal syntheses for control systems on 2-D manifolds[END_REF] or [START_REF] Bayen | Optimal synthesis for the minimum time control problems of fedbatch bioprocesses for growth functions with two maxima[END_REF]). In the setting of Mayer optimal control problems governed by an affine control system, the application of the Pontryagin maximum Principle (in short, PMP) shows that several cases may occur (see, e.g., [START_REF] Chitour | Singular trajectories of control-affine systems[END_REF]). First, geometric control theory predicts that the control input is present explicitly in the expression defining the derivative of order 2n, n ≥ 1, of the switching function. So, if the control input is present in the second-order derivative of the switching function (allowing to compute the singular control as a function of the state and co-state), then, we say that a singular arc of first order occurs. But, it may happen that the control is present explicitly only in the fourth-order derivative of the switching function (and not in the second-order derivative) leading that way to a second-order singular arc and to chattering, see, e.g., [START_REF] Zelikin | Theory of chattering control, Systems & Control: Foundations & Applications[END_REF][START_REF] Schättler | Geometric optimal control[END_REF]. More generally, it is possible to encounter optimal control problems involving singular arcs of order greater than three. But, we shall restrict our attention here to the case of first and second order singular arcs that are the most common ones in the application models.
A striking point related to second order singular arcs is that they arise in simple examples from mathematical modeling such as in aerospace [START_REF] Zhu | Minimum time control of the rocket attitude reorientation associated with orbit dynamics[END_REF][START_REF] Zhu | Geometric optimal control and applications to aerospace[END_REF], in biomedicine [START_REF] Ledzewicz | Singular controls and chattering arcs in optimal control problems arising in biomedicine[END_REF][START_REF] Grigorieva | Chattering and its approximation in control of psoriasis treatment[END_REF][START_REF] Schättler | Optimal control for mathematical models of cancer therapies. An application of geometric methods[END_REF], in biology [START_REF] Mairet | The promise of dawn: microalgae photoacclimation as an optimal control problem of resource allocation[END_REF][START_REF] Mairet | Parameter estimation for dynamic resource allocation in microorganisms: a bi-level optimization problem[END_REF], or in physics [START_REF] Robin | Chattering phenomenon in quantum optimal control[END_REF]. The simplest problem involving chattering is known as Fuller's problem for which the underlying system is the two-dimensional double integrator associated with a quadratic cost. It is worth noticing that chattering renders the analysis of optimal solutions more delicate than for instance in the case of a first-order singular arc for which the optimal synthesis involves in general a low number of switching times. In particular, the occurrence of chattering raises the question of implementation of an optimal solution from a practical viewpoint. The question is how to bypass non-implementable solutions and how synthesizing an adequate sub-optimal control (typically based on a finite number of bang-bang arcs). This question was studied by several authors for instance in [START_REF] Caponigro | Regularization of chattering phenomena via bounded variation controls[END_REF] by modifying the objective function via bounded variation controls.
In the present paper, we consider another approach to tackle chattering based on a perturbation technique. More precisely, we consider a general optimal control problem governed by a mono-input affine control system whose optimal solution involves a second-order singular arc. We suppose that there is a sequence of dynamics converging to the original one such that the associated optimal solution involves only first-order singular arcs. This sequence is constructed by approximating the drift term. From a practical viewpoint, we thus approach the original optimal control by a perturbed one for which the optimal solution does not involve chattering. We first prove that, up to a sub-sequence, an extremal of the perturbed optimal control problems converges to an extremal to the original optimal control problem. As well, we prove the convergence of the value function associated to the perturbed optimal control problem to the value function associated with the original optimal control problem. Convergence of optimal controls (up to a sub-sequence) is also guaranteed on time intervals for which the limit trajectory is non-singular. Interestingly, the approximated optimal control is a concatenation of a finite number of bang and singular arcs. Next, we show how to construct such approximating sequence on three examples with chattering : a variation on Fuller's problem, a resource-allocation model [START_REF] Giordano | Dynamical allocation of cellular resources as an optimal control problem: novel insights into microbial growth strategies[END_REF], and a quantum control system [START_REF] Robin | Chattering phenomenon in quantum optimal control[END_REF].
The paper is organized as follows. In Section 2, we introduce our methodology and we prove convergence results between the sequence of approximated optimal control problems and the original optimal control problem. Next, in Section 3, we address three examples for which we provide an explicit sequence of approximated optimal control problems. For each of them, we verify that the limit problem possesses chattering whereas perturbed optimal control problems involve only first-order singular arcs. In addition, for each example, we provide numerical solutions (with an increasing number of arcs as the sequence approaches the original dynamics) to the approximated optimal control problems via a direct method [START_REF]BOCOP: an open source toolbox for optimal control[END_REF].
Approximation and convergence result 2.1 Second order singular arcs
Hereafter, we denote by A ⊤ the transpose of a matrix A and the inner product in the Euclidean space is written a • b where a, b are two vectors. Let n ∈ N * and consider two vector fields f, g : R n → R n of class C ∞ , a terminal pay-off ψ : R n → R of class C 1 , and let C be a non-empty closed convex subset of R n . Given T > 0 and x 0 ∈ R n , we consider the Mayer optimal control problem inf
u∈U ψ(x(T )) s.t. ẋ(t) = f (x(t)) + u(t)g(x(t)) a.e. t ∈ [0, T ], x(0) = x 0 , x(T ) ∈ C, (OCP)
where U denotes the set of admissible controls u : [0, T ] → [-1, 1] that are measurable. The aim of this paper is to provide alternative strategies in order to study (OCP) whenever f and g satisfy the condition:
[g, [f, g]] = 0 over R n , (2.1)
or whenever the covector is orthogonal to $[g,[f,g]]$ along a singular arc, where $[f,g](x) := D_x g(x)\,f(x) - D_x f(x)\,g(x)$
denotes the Lie bracket of f and g and where D x f (x), D x g(x) stand respectively for the Jacobian matrix of f and g. Under one of these two conditions, we know that optimal controls may involve an infinite number of switching times over [0, T ]. This phenomenon renders the analysis of optimal trajectories quite involved (e.g., the determination of the switching locus in this context). Thus, from a numerical point of view, it is desirable to deal with controls having a finite number of switching times.
First, we would like to remind why condition (2.1) implies the so-called Fuller's phenomenon (or chattering). From Pontryagin's Principle [START_REF] Pontryagin | Mathematical theory of optimal processes[END_REF][START_REF] Vinter | [END_REF], if u is an optimal control and x denotes the associated solution of the system defined over [0, T ], there exist p 0 ≤ 0 and an absolutely continuous function p : [0, T ] → R n (the covector) satisfying the adjoint equation
ṗ(t) = -D x f (x(t)) ⊤ p(t) -u(t)D x g(x(t)) ⊤ p(t) a.e. t ∈ [0, T ], (2.2)
together with the transversality condition
p(T ) + p 0 ∇ψ(x(T )) ∈ -N C (x(T )), (2.3)
where N C (x) denotes the (convex) normal cone to the set C at some point x ∈ C. The pair (p 0 , p(•)) also fulfills the non-triviality condition (p 0 , p(•)) ̸ = 0. Finally, the Hamiltonian maximization condition gives us the following control law:
u(t) = sign(ϕ(t)) a.e. t ∈ [0, T ] s.t. ϕ(t) ≠ 0, (2.4)
where ϕ(t) := p(t) • g(x(t)) is the switching function. Recall that
ϕ̇(t) = p(t) • [f, g](x(t)), t ∈ [0, T ],
and that
ϕ̈(t) = p(t) • [f, [f, g]](x(t)) + u(t) p(t) • [g, [f, g]](x(t)) a.e. t ∈ [0, T ]. (2.5)
Thus, under (2.1), if a singular arc occurs, the (corresponding) singular control u cannot be expressed thanks to (2.5), but further derivations of φ are needed to compute the singular control (at least two according to the geometric control theory). This leads to the singular arcs of order at least two. In this direction, let us remind the following result about the occurrence of the chattering phenomenon for second-order singular arcs (see [START_REF] Schättler | Geometric optimal control[END_REF]). This means that the only possibility for an optimal control to connect the trajectory to the singular arc is an infinite sequence of bang arcs since the proposition excludes the concatenation of the singular arc to the optimal trajectory by a finite number of bang arcs. Also, singular controls may exceed the largest admissible value leading to saturation [START_REF] Bayen | Tangency property and prior-saturation points in minimal time problems in the plane[END_REF], that is why, the singular control is supposed to be admissible in Proposition 2.1. In addition to this proposition, let us also recall Legendre-Clebsch's condition [START_REF] Schättler | Geometric optimal control[END_REF] in the above context (with a single input). If a singular arc of order2 q ∈ N * is optimal over some time interval [t 1 , t 2 ], then it must satisfy the condition
$$(-1)^q\,\frac{\partial}{\partial u}\,\frac{d^{2q}}{dt^{2q}}\,\phi \;\le\; 0.$$
If this inequality is strict, we then speak of the strict Legendre-Clebsch condition. Typically, if q = 1, then ϕ must satisfy ϕ̈|u > 0 along a singular arc, whereas if q = 2, it must satisfy ϕ̈|u = 0 and ϕ⁽⁴⁾|u < 0.
Approximation of the optimal control problem
In order to avoid Fuller's phenomenon, we propose the following approach. We suppose that there is a sequence
f k : R n → R n (of class C ∞ for each k ∈ N) such that ∥f k -f ∥ L ∞ → 0, (2.7)
together with
∥D x f k -D x f ∥ L ∞ → 0, (2.8)
as k → +∞ (where
L ∞ := L ∞ (R n ) is the set of bounded functions) and such that [g, [f k , g]] ̸ = 0 over R n , (2.9)
for every k ∈ N. This hypothesis will allow us to prevent chattering (see the discussion in Section 2.4). Consider now the approximated Mayer optimal control problem:
inf u∈U ψ(x(T )) s.t. ẋ(t) = f k (x(t)) + u(t)g(x(t)) a.e. t ∈ [0, T ], x(0) = x 0 , x(T ) ∈ C, (OCP k )
and let us introduce additional hypotheses in order to ensure existence of solutions to (OCP) and (OCP k ).
Hereafter | • | denotes the Euclidean norm in R n .
Assumption 2.1. We suppose that for every k ∈ N, the target set C is reachable from x 0 for the controlled dynamics
ẋ(t) = f k (x(t)) + u(t)g(x(t)) a.e. t ∈ [0, T ], (2.10)
where u ∈ U. Similarly, we assume that C is also reachable from x 0 for the controlled dynamics
ẋ(t) = f (x(t)) + u(t)g(x(t)) a.e. t ∈ [0, T ], (2.11)
where u ∈ U. In addition, we suppose that there is c ≥ 0 such that
∀k ∈ N, |f k (x)| ≤ c(|x| + 1) ; |f (x)| ≤ c(|x| + 1) ; |g(x)| ≤ c(|x| + 1) for all x ∈ R n .
(2.12)
Note that for simplicity, we supposed that the sequence (f k ) satisfies a linear growth condition, but this hypothesis could be removed in view of (2.7).
Lemma 2.1. If Assumption 2.1 is satisfied, then for every k ∈ N, (OCP k ) has a solution as well as (OCP).
Proof. Since the velocity sets associated to (2.10) and (2.11) are convex w.r.t. u, we can apply Filipov's existence Theorem (see [START_REF] Cesari | Optimization-Theory and applications. Problems with ordinary differential equations[END_REF]), whence the result.
Convergence results
For every k ∈ N, let u k be an optimal solution of (OCP k ). According to the PMP, there exist p 0 k ≤ 0 and an absolutely continuous function
p k : [0, T ] → R n such that (p 0 k , p k (•)) ̸ = 0 and such that ṗk (t) = -D x f k (x k (t)) ⊤ p k (t) -u k (t)D x g(x k (t)) ⊤ p k (t) a.e. t ∈ [0, T ], (2.13)
together with the transversality condition
p k (T ) + p 0 k ∇ψ(x k (T )) ∈ -N C (x k (T )). (2.14)
In addition, the Hamiltonian maximization condition implies that
u k (t) = sign(ϕ k (t)) a.e. t ∈ [0, T ] s.t. ϕ k (t) ̸ = 0, (2.15)
where ϕ k (t) := p k (t) • g(x k (t)). Finally, since the problem is autonomous the Hamiltonian of the problem is constant almost everywhere, that is, for all k ∈ N, there is
H k ∈ R such that H k := p k (t) • f k (x k (t)) + u k (t)p k (t) • g(x k (t)) a.e. t ∈ [0, T ].
The transversality condition (2.14) implies that for every k ∈ N, there exists
ξ k ∈ N C (x k (T )) such that p k (T ) + p 0 k ∇ψ(x k (T )) = -ξ k , so, the scalar r k := (p 0 k ) 2 + |ξ k | 2 is with positive value for all k ∈ N since the pair (p 0 k , p k (•)) is non-trivial. Hence, we can set q k := p k /r k . (2.16)
Hereafter, we say that a sequence (x k ) of absolutely continuous functions over [0, T ] strongly-weakly converges to a function
x ⋆ : [0, T ] → R n if and only if ∥x k -x ⋆ ∥ L ∞ ([0,T ] ; R n ) → 0 and ẋk ⇀ ẋ⋆ (weak convergence in L 2 ([0, T ] ; R n )).
Proposition 2.2. Suppose that Assumption 2.1 is satisfied. For every k ∈ N, denote by u k an optimal solution of (OCP k ) and let q k be defined by (2.13)-(2.14), and (2.16). Then, there exists an extremal (x ⋆ , p, , u ⋆ ) associated with (OCP) such that, up to a sub-sequence, (x k , q k ) strongly-weakly converges to (x ⋆ , p) over [0, T ]. In addition, one has the pointwise convergence
u k (t) → u ⋆ (t) on every time interval [t 1 , t 2 ] such that ϕ(t) ̸ = 0 where ϕ(t) := p(t) • g(x ⋆ (t)), t ∈ [0, T ].
In addition, u ⋆ is an optimal solution to (OCP).
Proof. From the linear growth assumption on the sequence of dynamics (f k ) and f (Assumption 2.1), we easily obtain that the sequence (x k ) is uniformly bounded over [0, T ] since the set of admissible controls is bounded. So, the sequence (x k (T )) is also bounded, hence, up to a subsequence, we may assume that it converges to some value w ∈ R n . Now, let us set α k := p 0 k /r k and β k := -ξ k /r k so that the transversality condition reads as follows:
q k (T ) + α k ∇ψ(x k (T )) = β k .
By compacteness of the unit ball in R 1+n , we may assume (extracting a sub-sequence if necessary) that the sequence (α k , β k ) k converges to a limit (α, β) ∈ R 1+n which is non-zero and such that α ≤ 0. By closedness property of the normal cone, one also obtains that -β ∈ N C (w). Next, observe that the Hamiltonian of the problem and that the adjoint equation are linear w.r.t. the covector, so the triple (x k , q k , u k ) satisfies:
Hk := H k r k = q k (t) • f k (x k (t)) + u k (t)q k (t) • g(x k (t)) a.e. t ∈ [0, T ], (2.17)
together with the state-adjoint system
ẋk (t) = f (x k (t)) + u k (t)g(x k (t)) + ηk (t), qk (t) = -D x f (x k (t)) ⊤ q k (t) -u k (t)D x g(x k (t)) ⊤ q k (t) + ηk (t), (2.18)
where for a.e. t ∈ [0, T ], ηk (t) and ηk (t) are defined as:
ηk (t) := f k (x k (t)) -f (x k (t)) ; ηk (t) := D x f (x k (t)) ⊤ q k (t) -D x f k (x k (t)) ⊤ q k (t).
Setting z k := (x k , q k ) and using that (x k ) is uniformly bounded over [0, T ], we obtain that there exists c ′ ≥ 0 such that for every k ∈ N one has
|z k (t)| ≤ c ′ (|z k (t)| + 1) a.e. t ∈ [0, T ], (2.19)
where | • | denotes the Euclidean norm in R 2n . From the preceding inequality and the fact that (x k (T ), q k (T )) converges, we obtain after application of Gronwall's Lemma that (
x k (•), q k (•)) is uniformly bounded over [0, T ]. Now the multi-function F (x, q) := {(f (x) + ug(x), -D x f (x) ⊤ q -uD x g k (x) ⊤ q) ; u ∈ [-1, 1]},
is with compact convex values for every (x, q) ∈ R 2n . Remind that z k (T ) converges and note also (using the uniform boundedness of (x k , q k ) in L ∞ ([0, T ] ; R n )) that the sequences ηk (•) and ηk (•) uniformly converge to zero. Hence, we can apply [START_REF] Clarke | Nonsmooth analysis and control theory[END_REF]Theorem 1.11] which shows the existence of an absolutely continuous pair (x ⋆ , p) as well as a control function u ⋆ ∈ U such that
ẋ⋆ (t) = f (x ⋆ (t)) + u ⋆ (t)g(x ⋆ (t)), ṗ(t) = -D x f (x ⋆ (t)) ⊤ p(t) -u ⋆ (t)D x g(x ⋆ (t)) ⊤ p(t), (2.20)
for a.e. t ∈ [0, T ]. In addition, (x k , q k ) strongly-weakly converges to (x ⋆ , p) over [0, T ]. Also, by passing to the limit as k → +∞, we obtain that x ⋆ (0) = x 0 and that p(T ) + α∇ψ(x ⋆ (T )) = β ∈ -N C (x ⋆ (T )) (using the closedness property of the normal cone) where (α, β) is non-zero and α ≤ 0. This proves the first property of Proposition 2.2.
Let us now address convergence of the sequence of controls (u k ) to u ⋆ . Suppose that there is t 0 ∈ [0, T ] such that ϕ(t 0 ) > 0. By uniform convergence of ϕ k to ϕ, there exist k 0 ∈ N, ε > 0, and κ > 0 such that for every k ≥ k 0 and every t ∈ [t 0 -ε, t 0 ], one has ϕ k (t) ≥ κ. Thus, by the Hamiltonian maximization condition, we obtain that u k (t) = +1 for every t ∈ [t 0 -ε, t 0 ] and every k ≥ k 0 . Hence, for all η ∈ (0, ε) and for all k ≥ k 0 , one has 1
$$\frac{1}{\eta}\int_{t_0-\eta}^{t_0} \dot x_k(t)\,\mathrm{d}t = \frac{1}{\eta}\int_{t_0-\eta}^{t_0} \big[f_k(x_k(t)) + g(x_k(t))\big]\,\mathrm{d}t.$$
Thus, if we let k → +∞ (utilizing uniform convergence of the sequences (f k ) and (x k ) and also weak convergence of ( ẋk )) and next η ↓ 0, one obtains using that almost every
t 0 ∈ [0, T ] is a Lebesgue point of u ⋆ (since u ⋆ is bounded): ẋ⋆ (t 0 ) = f (x ⋆ (t 0 )) + g(x ⋆ (t 0 )).
It follows that u ⋆ (t 0 ) = +1 = sign(ϕ(x ⋆ (t 0 ))). By repeating this argumentation in a neighborhood of every point t 0 ∈ [0, T ] such that ϕ(x ⋆ (t 0 )) ̸ = 0, we obtain that u ⋆ satisfies the Hamiltonian maximization condition (2.4) as desired as well as the pointwise convergence of (u k ) to u ⋆ at every point t 0 ∈ [0, T ] such that ϕ(t 0 ) ̸ = 0. This concludes the proof of the convergence of the sequence of controls (u k ) to u ⋆ . Finally, let us check that u ⋆ is an optimal solution to (OCP). Doing so, observe that
ψ(x k (T )) ≤ ψ(x k (T )),
for every admissible pair (x k , ũ) of (OCP). Note also that ũ is a fixed control in U. So, again using Gronwall's Lemma, we can check that (x k ) strongly-weakly converges to x over [0, T ] where x is the unique solution to ẋ(t) = f (x(t)) + u(t)g(x(t)), a.e. t ∈ [0, T ] together with x(0) = x 0 . Hence, passing to the limit as k → +∞ (ψ being continuous) implies that
ψ(x ⋆ (T )) ≤ ψ(x(T )),
for every admissible pair (x, ũ) to (OCP). This shows that u ⋆ is an optimal solution to (OCP) which ends the proof of the proposition.
Remark 2.1. As a byproduct of the previous proposition, we obtain that the value function associated to (OCP k ) converges to the value function associated to (OCP). In addition, the control u k uniformly converges to u ⋆ on every segment [t 1 , t 2 ] such that ϕ(t) ̸ = 0 over [t 1 , t 2 ].
Discussion about the employed methodology
Recall that in the present setting, chattering may occur whenever (2.1) is verified or whenever the covector is orthogonal to [g, [f, g]] along a singular arc. So, under one of these two conditions, we propose to introduce a sequence of functions (f k ) approaching f (see (2.7) and (2.8)) and such that (2.9) is verified for every k ∈ N.
The main advantage of considering (OCP k ) in place of (OCP) is that the behavior of an optimal solution to (OCP k ) might be simpler to handle. Indeed, as shows Proposition 2.1, one can expect Fuller's phenomenon for an optimal solution to (OCP) involving a singular arc. In contrast, if (x k , p k , u k ) is an extremal of (OCP k ), then, according to the PMP, the corresponding switching function ϕ k fulfills:
φk (t) = p k (t) • [f k , [f k , g]](x k (t)) + u k (t)p k (t) • [g, [f k , g]](x k (t)).
We can thus expect that along a singular arc
p k (t) • [g, [f k , g]](x k (t)
) ̸ = 0, so Fuller's phenomenon may not arise. In addition, if Legendre-Clebsch's condition is verified for every k ∈ N, then, we can also expect the occurrence of a singular arc of turnpike type (thus a first-order singular arc without involving an infinite number of switching times, see, e.g., [START_REF] Bayen | Optimal synthesis for the minimum time control problems of fedbatch bioprocesses for growth functions with two maxima[END_REF]). Hence, for the approximated problem, Fuller's phenomenon may not arise even if Legendre-Clebsch's condition is not verified (in that case, we speak of anti-turnpike [START_REF] Bonnard | Singular trajectories and their role in control theory[END_REF][START_REF] Boscain | Optimal syntheses for control systems on 2-D manifolds[END_REF]).
Note that we supposed the terminal time T > 0 fixed, but it can also be free (in that case, we can adapt the previous properties by using classical techniques considering an augmented state, see, e.g., [START_REF] Bressan | Introduction to the mathematical theory of control[END_REF]). In the next section, we provide three examples for which an optimal solution contains a second-order singular arc. Doing so, we introduce an explicit sequence of functions f k approaching the dynamics such that the associated optimal control problem does not possesses chattering.
Application to three examples involving chattering
Throughout this section, we are given a sequence (ε k ) of positive numbers such that ε k ↓ 0 as k → +∞. By limit problem, we mean the optimal control problem for which ε k = 0 (i.e. without perturbation).
3.1 A variant of Fuller's problem
$\dot x_1 = x_2, \quad \dot x_2 = \rho(x_1) + u(t), \quad \dot x_3 = \ell(x_1) + \varepsilon_k x_2^2,$ with $x_1(0) = x_1^0 \in \mathbb{R}$, $x_2(0) = x_2^0 \in \mathbb{R}$, $x_3(0) = 0$, and $x(T) \in C$, (3.1)
where C ⊂ R 3 is a non-empty closed convex subset. Suppose that ℓ : R → R has a unique minimum x m 1 over R such that ℓ ′ (x m 1 ) = 0 together with ℓ ′′ (x m 1 ) > 0. Without any loss of generality, we may also assume that ℓ(x m 1 ) = 0 (by translation). Furthermore, assume that -ρ(x m 1 ) ∈ [-1, 1] (this is required for admissibility of singular arcs).
Remark 3.1. Fuller's Problem corresponds to the "limit" case ε k = 0 together with ρ(x 1 ) = 0, ℓ(x 1 ) = x 2 1 , and C = {(0, 0)} × R.
The previous framework encompasses this example setting
$f(x) := (x_2,\ \rho(x_1),\ \ell(x_1))^\top$ ; $g(x) := (0,\ 1,\ 0)^\top$ ; $f_k(x) := (x_2,\ \rho(x_1),\ \ell(x_1) + \varepsilon_k x_2^2)^\top$, since we can easily check that $[g,[f,g]] = 0$ and $[g,[f_k,g]] = (0, 0, -2\varepsilon_k)^\top \neq 0$ (for $\varepsilon_k > 0$)
. We now provide more details on optimal solutions of (3.1) for ε k > 0 and in the limit case.
1. Study of the optimal control problem in the case where ε k > 0. By application of the PMP considering only normal lifts, the Hamiltonian of the problem becomes:
H k := p 1 x 2 + p 2 ρ(x 1 ) + p 2 u -ℓ(x 1 ) -ε k x 2 2 .
Indeed, using the transversality condition, we obtain that p 3 (T ) = -1 and because H k does not depend on x 3 , the function p 3 must be constant. It follows that the adjoint equation can be written as:
ṗ1 = -p 2 ρ ′ (x 1 ) + ℓ ′ (x 1 ), ṗ2 = -p 1 + 2ε k x 2 .
The switching function is $p_2$, which satisfies $\ddot p_2 = p_2\,\rho'(x_1) - \ell'(x_1) + 2\varepsilon_k\,\rho(x_1) + 2\varepsilon_k\,u$.
We are now in a position to characterize singular arcs for every ε k > 0. Using that the Hamiltonian is zero along any extremal (since the terminal time is free), every singular arc is such that:
$\varepsilon_k x_2^2 = \ell(x_1)\ ;\quad u_k^\star = \frac{\ell'(x_1)}{2\varepsilon_k} - \rho(x_1)\ ;\quad \ddot x_1 = \frac{\ell'(x_1)}{2\varepsilon_k},$
where u ⋆ k denotes the singular control. The set of points (x 1 , x 2 ) ∈ R 2 such that ε k x 2 2 = ℓ(x 1 ) is the singular locus. In addition, for every k ∈ N, we find that p2 |u = 2ε k > 0 thus Legendre-Clebsch's condition is verified along a singular arc. In particular, since ε k ̸ = 0, every singular arc is of first order.
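The singular-arc relations above can be checked symbolically. The following sketch uses SymPy, which is not used in the paper and serves here only as an independent verification: it recovers the singular locus ε_k x_2² = ℓ(x_1) and the singular control u_k⋆ = ℓ'(x_1)/(2ε_k) − ρ(x_1) from the conditions p_2 = ṗ_2 = p̈_2 = 0 and H_k = 0, for generic functions ρ and ℓ.

```python
import sympy as sp

x1, x2, p1, p2, eps, u = sp.symbols('x1 x2 p1 p2 epsilon u', real=True)
rho = sp.Function('rho')(x1)
ell = sp.Function('ell')(x1)

# Hamiltonian of the perturbed problem (normal lift, p3 = -1)
H = p1*x2 + p2*(rho + u) - ell - eps*x2**2

# dynamics and adjoint equations
x1dot, x2dot = x2, rho + u
p1dot = -sp.diff(H, x1)            # = -p2*rho'(x1) + ell'(x1)
p2dot = -sp.diff(H, x2)            # = -p1 + 2*eps*x2

# singular arc: p2 = 0 and dp2/dt = 0  =>  p1 = 2*eps*x2
p1_sing = sp.solve(sp.Eq(p2dot, 0), p1)[0]

# H = 0 (free final time) then yields the singular locus eps*x2**2 - ell(x1) = 0
locus = sp.simplify(H.subs({p2: 0, p1: p1_sing}))

# second time derivative of p2 along the dynamics, evaluated on p2 = 0
p2ddot = (sp.diff(p2dot, x1)*x1dot + sp.diff(p2dot, x2)*x2dot
          + sp.diff(p2dot, p1)*p1dot + sp.diff(p2dot, p2)*p2dot)
u_sing = sp.solve(sp.Eq(p2ddot.subs({p2: 0}), 0), u)[0]

print(locus)    # expected: eps*x2**2 - ell(x1)
print(u_sing)   # expected: Derivative(ell(x1), x1)/(2*epsilon) - rho(x1)
```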
2. Study of the optimal control problem in the limit case. Using similar notation as when ε k > 0, we find that the switching function associated with the optimal control problem with ε k = 0 now satisfies p2 = p 2 ρ ′ (x 1 ) -ℓ ′ (x 1 ), thus, we need to differentiate p 2 at least four times to check the order of the singular control. Note also that we must have -ℓ ′ (x 1 ) = 0 along a singular arc, hence, x 1 = x m 1 . By differentiating p2 w.r.t. t, we find that ... p 2 = -ℓ ′′ (x 1 )x 2 , hence x 2 = 0 along the singular arc (because ℓ ′′ (x m 1 ) > 0), and finally .... p 2 = -ℓ ′′ (x m 1 )u. In that case, a singular arc is then characterized by
x ⋆ 1 = x m 1 ; x ⋆ 2 = 0 ; u ⋆ = -ρ(x m 1 )
, where u ⋆ is the singular control. We conclude that the limit case has a second-order singular arc which is such that .... p 2 |u = -ℓ ′′ (x m 1 ) < 0, thus Legendre-Clebsch's condition is also verified. In addition, |u ⋆ | = | -ρ(x m 1 )| ≤ 1, thus the singular arc is admissible. Remark 3.2. (i) When ε k > 0, the singular arc is of turnpike type, however, a saturation phenomenon may occur since for t > 0, one has |u ⋆ k (t)| ≤ 1 if and only if
$\left|\frac{\ell'(x_1(t))}{2\varepsilon_k} - \rho(x_1(t))\right| \le 1.$
Since ε k is small, the singular arc is admissible only if x 1 is close to x m 1 . This defines a "small" subset of the singular locus in the plane (x 1 , x 2 ) for the admissibility of the singular arc. We also see that the singular locus as ε k ↓ 0 is a small deformation of the origin (the singular locus for the problem with ε k = 0). (ii) Uniform convergence of the sequence (f k ) and its derivative (D x f k ) to f and D x f respectively is straightforward over compact subsets of R 3 . But, since for a given initial condition (x 0 1 , x 0 2 , 0) ∈ R 3 , optimal trajectories of (3.1) are uniformly bounded, the uniform convergence of f k to f on compact subsets is enough to apply the results of Proposition 2.2.
In the next section, we depict numerically optimal solutions when ε k > 0 and also in the limit case.
Numerical simulations on the perturbed optimal control problem
For all the examples presented in this article, simulations were carried out with the direct method, using the Bocop solver [START_REF]BOCOP: an open source toolbox for optimal control[END_REF]. For sake of simplicity, the final time was fixed. The Gauss II method 3 was used for discretization, with 1000 time steps. Simulations of the perturbed Fuller's problem (i.e., Problem (3.1) with ρ(x 1 ) = 0 and ℓ(x 1 ) = x 2 1 ) are depicted on Fig. 1 and2 for different values of ε k . In line with our previous results, the trajectories are of type Bang-Singular. In particular, Fig. 2 highlights the fact that optimal paths (for ε k > 0) lie in the singular locus ε k x 2 2 = x 2 1 before approaching the target point (see Remark 3.2). As ε k decreases, we can see that the number of bang arcs increase and the trajectories approximate that of the original problem. Optimal controls are a succession of bang arcs together with a "small" terminal first-order singular arc such that the optimal path lies in the singular locus and then reaches the target point.
Also, we have performed a numerical simulation of optimal paths when ε k < 0, see Fig. 3. In that case, Legendre-Clesbsch's condition has the opposite sign leading that way to an anti-turnpike behavior. It follows that approximated optimal controls are a succession of bang arcs (see, e.g., [START_REF] Bonnard | Singular trajectories and their role in control theory[END_REF]). In that case, no singular arcs occurs.
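For readers who do not have Bocop at hand, a crude direct transcription of the perturbed Fuller problem (ρ = 0, ℓ(x_1) = x_1²) can already reproduce the bang-bang plus terminal singular structure. The sketch below is only an illustrative stand-in for the solver used in the paper: the explicit Euler discretization, the penalty replacing the terminal constraint, and the values of T, N, ε_k and the initial condition are all arbitrary choices made for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Crude direct method for the perturbed Fuller problem (rho = 0, l(x1) = x1^2):
# minimize x3(T) = int_0^T (x1^2 + eps*x2^2) dt over piecewise-constant u in [-1, 1].
T, N, eps = 5.0, 200, 1e-2
dt = T / N
x0 = np.array([1.0, 0.0])          # illustrative initial state

def cost(u):
    x1, x2, J = x0[0], x0[1], 0.0
    for k in range(N):              # explicit Euler integration of the dynamics
        J += (x1**2 + eps * x2**2) * dt
        x1, x2 = x1 + x2 * dt, x2 + u[k] * dt
    return J + 1e3 * (x1**2 + x2**2)   # penalty enforcing x1(T) = x2(T) = 0

res = minimize(cost, np.zeros(N), bounds=[(-1.0, 1.0)] * N, method="L-BFGS-B")
print("approximate optimal cost:", res.fun)
```

Plotting the resulting control against time shows a finite succession of bang arcs followed by an interior (singular-like) terminal phase, in line with the structure reported for ε_k > 0.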
Application to a resource allocation model
Presentation of the model
We consider a control system describing a self-replicator model of bacterial growth where x 1 is the concentration of precursors within a cell and x 2 denotes the concentration of gene expression machinery. The control u with values in [0, 1] will denote the resource allocation parameter defining the proportion of precursors used for making gene expression machinery. The dynamical resource allocation model amounts then to maximize biomass, in order to understand cellular regulations acquired through evolution. For more details on the modeling, we refer to [START_REF] Giordano | Dynamical allocation of cellular resources as an optimal control problem: novel insights into microbial growth strategies[END_REF]. This yields the three-dimensional Mayer optimal control problem inf
u(•)∈[0,1] x 3 (T ) s.t. ẋ1 = e(1 -x 2 ) -(1 + x 1 )h(x 1 , x 2 ), ẋ2 = h(x 1 , x 2 )(u(t) -x 2 ), ẋ3 = -h(x 1 , x 2 ), ; x 1 (0) = x 0 1 , x 2 (0) = x 0 2 , x 3 (0) = 0, (3.2)
3 It is an implicit method of fourth order in two stages to solve numerically an ordinary differential equation, [START_REF]BOCOP: an open source toolbox for optimal control[END_REF]. where h(x 1 , x 2 ) := x1x2 K+x1 , e, K are positive constants, (x 0 1 , x 0 2 ) ∈ R + × [0, 1], and T > 0 is fixed. The objective function in (3.2) represents the biomass to be maximized. We can observe that x 2 is always bounded between 0 and 1 provided that x 0 2 ∈ [0, 1], but, this property does not hold for x 1 (for more details, see [START_REF] Giordano | Dynamical allocation of cellular resources as an optimal control problem: novel insights into microbial growth strategies[END_REF]). The expression of h will lead to chattering (see below), making the use of this model and its variants (see Note that in that case, Legendre-Clebsch's condition has the opposite sign (i.e., p2 |u = 2ε k < 0), so, we have an anti-turnpike behavior for the solution to the approximated optimal control problem leading to an increasing number of bang arcs. e.g. [START_REF] Yegorov | Optimal feedback strategies for bacterial growth with degradation, recycling, and effect of temperature[END_REF][START_REF] Yabo | Dynamical analysis and optimization of a generalized resource allocation model of microbial growth[END_REF]) more complex. A way to avoid this behavior is to make a slight perturbation of the system replacing h(x 1 , x 2 ) by
h ε (x 1 , x 2 ) := x1 K+x1 x2
1+εx2 (actually, we replace x 2 → x 2 by x 2 → x2 1+εx2 ). Hereafter, we keep ε in place of ε k for convenience. Setting f (x 1 ) := x1 K+x1 and g ε (x 2 ) := x2 1+εx2 , we can rewrite the above control system as follows:
ẋ1 = e(1 -x 2 ) -(1 + x 1 ) f (x 1 )g ε (x 2 ), ẋ2 = f (x 1 )g ε (x 2 )(u(t) -x 2 ), ẋ3 = -f (x 1 )g ε (x 2 ). (3.3)
So, for ε = 0, we are in the limit case with chattering (see below) whereas for every ε > 0, the optimal synthesis may exhibit singular arcs of order at most 1 (see below). In addition, it is easily seen that
|g ε (x 2 ) -x 2 | = εx 2 2 1 + εx 2 ≤ ε ; |g ′ ε (x 2 ) -1| ≤ 3ε,
for every ε ∈ (0, 1] and every x 2 ∈ [0, 1]. Hence, the convergence results of the previous section can be applied.
We shall now verify the properties related to the occurrence of a singular arc by application of the PMP.
1. Study of the optimal control problem in the cases where ε > 0. We apply the PMP with the maximum convention. Observe that the cost to be minimized is Φ(x(T )) := x 3 (T ), so the terminal adjoint vector is p(T ) := -∇Φ(x(T )) = (0, 0, -1). It follows that H can be written
H = p1e(1 -x2) -p1(1 + x1) f (x1)gε(x2) + p2 f (x1)gε(x2)(u -x2) + f (x1)gε(x2) → max u∈[0,1]
so u(t) = sign(p 2 (t)) except along (possible) singular arcs. Note that the switching function ϕ is ϕ := p 2 f (x 1 )g ε (x 2 ), but since f (x 1 )g ε (x 2 ) > 0, we will compute the derivatives w.r.t. t of p 2 in place of ϕ for simplification. Now, the covector fulfills over [0, T ] the adjoint equation:
ṗ1 = p 1 g ε (x 2 )( f (x 1 ) + (1 + x 1 ) f ′ (x 1 )) -p 2 f ′ (x 1 )g ε (x 2 )(u -x 2 ) -f ′ (x 1 )g ε (x 2 ), ṗ2 = p 1 e + p 1 (1 + x 1 ) f (x 1 )g ′ ε (x 2 ) -p 2 f (x 1 )(g ′ ε (x 2 )(u -x 2 ) -g ε (x 2 )) -f (x 1 )g ′ ε (x 2 ).
Let us now address the occurrence of a singular arc. Doing so, suppose that p 2 ≡ 0 over [t 1 , t 2 ] where [t 1 , t 2 ] is a sub-interval of [0, T ]. We get ṗ2 = 0 thus
p 1 (e + (1 + x 1 ) f (x 1 )g ′ ε (x 2 )) = f (x 1 )g ′ ε (x 2 ).
We now turn to the computation of p2 (along p 2 = ṗ2 = 0 over [t 1 , t 2 ]). We get:
p2 = ṗ1[e + (1 + x1) f (x1)g ′ ε (x2)] + p1 d dt (e + (1 + x1) f (x1)g ′ ε (x2)) - d dt ( f (x1)g ′ ε (x2)) = [p1gε(x2)( f (x1) + (1 + x1) f ′ (x1)) -f ′ (x1)gε(x2)][e + (1 + x1) f (x1)g ′ ε (x2)] + p1[ ẋ1( f (x1)g ′ ε (x2) + (1 + x1) f ′ (x1)g ′ ε (x2)) + (1 + x1) f (x1)g ′′ ε (x2) ẋ2] -f ′ (x1)g ′ ε (x2) ẋ1 -f (x1)g ′′ ε (x2) ẋ2
To compute p2 |u there are two contributing terms (from ẋ2 ). We get replacing p 1 by its value in the second equality:
p2 |u = (1 + x1) f 2 (x1)g ′′ ε (x2)gε(x2)p1 -f 2 (x1)g ′′ ε (x2)gε(x2) = (1 + x1) f 2 (x1)g ′′ ε (x2)gε(x2) f (x1)g ′ ε (x2) e + (1 + x1) f (x1)g ′ ε (x2) -f 2 (x1)g ′′ ε (x2)gε(x2) = -e f 2 (x1)gε(x2)g ′′ (x2) e + (1 + x1) f (x1)g ′ ε (x2)
.
Proposition 3.1. Let ε > 0 and consider an extremal of the optimal control problem. If a singular arc occurs, then, the Legendre-Clebsch condition is fulfilled.
Proof. From the preceding expression of p2 |u , we deduce that along a singular arc, the switching function
satisfies
$$\ddot\varphi|_u = \frac{-e\, f^3(x_1)\, g_\varepsilon(x_2)^2\, g''_\varepsilon(x_2)}{e + (1+x_1)\, f(x_1)\, g'_\varepsilon(x_2)} > 0,$$
since $g_\varepsilon$ is strictly concave, whence the result.
2. Study of the optimal control problem in the limit case. We can rewrite the system as follows
ẋ1 = e(1 - x2) - (1 + x1) f(x1) x2,   ẋ2 = f(x1) x2 (u(t) - x2),   ẋ3 = -f(x1) x2.
Again, we apply the PMP with the maximum convention and similarly, the terminal adjoint vector is p(T ) = (0, 0, -1). It follows that the Hamiltonian H can be written
H = p1 e(1 - x2) - p1 (1 + x1) f(x1) x2 + p2 f(x1) x2 (u - x2) + f(x1) x2 → max_{u∈[0,1]}
so u(t) = sign(p 2 (t)) except along (possible) singular arcs. As previously, we shall compute the derivatives of p 2 w.r.t. t in place of the switching function ϕ := p 2 f (x 1 )x 2 (since f (x 1 )x 2 > 0). The covector fulfills the adjoint equation over [0, T ]:
ṗ1 = p1 x2 (f(x1) + (1 + x1) f′(x1)) - p2 f′(x1) x2 (u - x2) - f′(x1) x2,
ṗ2 = p1 (e + (1 + x1) f(x1)) + p2 f(x1)(2x2 - u) - f(x1).
Let us now address the occurrence of a singular arc. Doing so, suppose that p 2 ≡ 0 over [t 1 , t 2 ] where [t 1 , t 2 ] is a sub-interval of [0, T ]. We get ṗ2 = 0 thus p 1 (e + (1 + x 1 ) f (x 1 )) = f (x 1 ).
We now turn to the computation of p2^(k), k = 2, 3, 4 (along p2 = ṗ2 = 0 over [t1, t2]). Replacing p1, ṗ1, and ẋ1 by their respective values, we get:
p̈2 = ṗ1 [e + (1 + x1) f(x1)] + p1 [f(x1) + (1 + x1) f′(x1)] ẋ1 - f′(x1) ẋ1
   = [p1 x2 (f(x1) + (1 + x1) f′(x1)) - f′(x1) x2][e + (1 + x1) f(x1)] + (p1 [f(x1) + (1 + x1) f′(x1)] - f′(x1)) [e(1 - x2) - (1 + x1) f(x1) x2]
   = e (p1 [f(x1) + (1 + x1) f′(x1)] - f′(x1)) = Ψ(x1),
where Ψ(x1) := e [f²(x1) - e f′(x1)] / (e + (1 + x1) f(x1)).
We deduce that if a singular arc occurs, then it is at least of second order since ϕ̈|u = f(x1) x2 p̈2|u = 0, i.e., p(t) · [g, [f, g]](x(t)) = 0 along the singular arc. In addition, we obtain that if a singular arc occurs, then we have p2 = ṗ2 = p̈2 = 0, so Ψ(x1) = 0, which gives
x1 = x1⋆ := √(eK).
We also obtain p2^(3) = Ψ′(x1) ẋ1 = 0. A computation shows that
Ψ′(x1⋆) = 2e√(Ke) / [(2Ke + e√(Ke) + √(Ke))(K + √(Ke))],
which is obviously non-zero; hence ẋ1 = 0, so x1 remains constant along the singular arc and the singular arc is at least of second order. Finally, differentiating a last time w.r.t. t yields
p2^(4) = Ψ′(x1) ẍ1 = Ψ′(x1)(-ẋ2 (e + (1 + x1) f(x1))).
This gives ẋ2 = 0, hence x 2 = 0 or x 2 = u. The case x 2 = 0 implies that ẋ1 = e > 0 in contradiction with the fact that ẋ1 = 0. So, we can conclude that u = x 2 . Hence, the singular arc can be characterized as:
x1⋆ = √(eK) ;   x2⋆ := e / (e + (1 + x1⋆) f(x1⋆)) = √e(√e + √K) / (1 + e + 2√e√K) ;   u⋆ = x2⋆.
Finally, we get that
p2^(4)|u = -Ψ′(x1⋆)(e + (1 + x1⋆) f(x1⋆)) f(x1⋆) x2⋆ < 0,
hence Legendre-Clebsch's condition is satisfied.
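The characterization above is easy to check numerically; the sketch below (ours, with arbitrary values of e and K) computes the singular steady state, verifies that Ψ vanishes at x1⋆ = √(eK), and compares the closed-form value of Ψ′(x1⋆) with a central finite difference.

import numpy as np

e, K = 0.5, 0.1  # illustrative constants

def f(x1):
    return x1 / (K + x1)

def fp(x1):
    return K / (K + x1) ** 2

def Psi(x1):
    return e * (f(x1) ** 2 - e * fp(x1)) / (e + (1.0 + x1) * f(x1))

x1_star = np.sqrt(e * K)
x2_star = e / (e + (1.0 + x1_star) * f(x1_star))
u_star = x2_star

# closed-form Psi'(x1*) as written above, versus a central finite difference
sqKe = np.sqrt(K * e)
Psi_prime_closed = 2.0 * e * sqKe / ((2.0 * K * e + e * sqKe + sqKe) * (K + sqKe))
h = 1e-6
Psi_prime_fd = (Psi(x1_star + h) - Psi(x1_star - h)) / (2.0 * h)

print("x1* =", x1_star, " x2* = u* =", x2_star)
assert abs(Psi(x1_star)) < 1e-12
assert abs(Psi_prime_closed - Psi_prime_fd) < 1e-6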
Numerical simulations on the perturbed optimal control problem
Fig. 4 depicts trajectories obtained for different values of ε k > 0. Following [START_REF] Giordano | Dynamical allocation of cellular resources as an optimal control problem: novel insights into microbial growth strategies[END_REF], simulations are carried out with a fixed final time, and only the transients until reaching the optimal steady-state are depicted (bang arcs also appear at the end of the simulations, but they are not relevant for the biological problem). The results are similar to those of the Fuller's problem. For ε k > 0, optimal controls are a concatenation of bang arcs together with a terminal singular arc approaching the singular arc in the limit case. A value of ε k = 0.01 is enough to obtain an accurate approximation of the optimal trajectory to the original problem.
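To give an idea of how such simulations can be reproduced without an optimal control solver, the following Python sketch (our illustration; the constants, the horizon, the initial condition and the crude switching rule are assumptions, and the control is a heuristic approximation rather than the optimal bang-singular synthesis) integrates (3.3) under a state feedback that applies u = 1 until x1 reaches the turnpike value x1⋆ and then holds the limit-case steady-state control u⋆ = x2⋆.

import numpy as np
from scipy.integrate import solve_ivp

e, K, eps = 0.5, 0.1, 0.01  # illustrative constants
x1_star = np.sqrt(e * K)
# steady-state values of the limit problem, used here as an approximation for eps > 0
x2_star = np.sqrt(e) * (np.sqrt(e) + np.sqrt(K)) / (1.0 + e + 2.0 * np.sqrt(e * K))

def f(x1):
    return x1 / (K + x1)

def g(x2):
    return x2 / (1.0 + eps * x2)

def u_feedback(x1, x2):
    # crude rule: bang control while x1 is above the turnpike value, then hold u* = x2*
    return 1.0 if x1 > x1_star else x2_star

def rhs(t, x):
    x1, x2, _ = x
    u = u_feedback(x1, x2)
    return [e * (1.0 - x2) - (1.0 + x1) * f(x1) * g(x2),
            f(x1) * g(x2) * (u - x2),
            -f(x1) * g(x2)]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.1, 0.0], max_step=0.01)
print("final state:", sol.y[:, -1], " accumulated biomass:", -sol.y[2, -1])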
Application to a quantum control system
Presentation of the problem
Here, we consider an optimal control problem introduced in [START_REF] Robin | Chattering phenomenon in quantum optimal control[END_REF] in which chattering occurs. The model describes the control of a three-level quantum system to be steered from a given initial condition on the unit sphere of R 3 (denoted by S 1 ) to the north pole (0, 0, 1). This yields the following optimal control problem (written in Mayer form):
min_{T≥0, u∈U} x4(T)   s.t.   ẋ1 = -x2,   ẋ2 = x1 - u(t) x3,   ẋ3 = u(t) x2,   ẋ4 = x1²/2,
and
(x 1 (0), x 2 (0), x 3 (0), x 4 (0)) ∈ S 1 × {0}, (x 1 (T ), x 2 (T ), x 3 (T ), x 4 (T )) ∈ {(0, 0, 1)} × R. As it can be verified, the dynamics of (x 1 , x 2 , x 3 ) is with values in S 1 . We assume for sake of simplicity that x 3 (t) ≥ 0, ∀t ∈ [0, T ] (by choosing initial conditions) so that one can reduce the dimension of the system. Thus, we can introduce the following control system:
ẋ1 = -x2,   ẋ2 = x1 - u(t) √(1 - x1² - x2²),   ẋ3 = x1²/2 + (ε/2) x2²,
proceeding as previously, that is, by adding a small perturbation term in the state x 3 . The optimal control problem then amounts to minimize x 3 (T ) w.r.t. the pair (T, u) such that x 1 (T ) = x 2 (T ) = 0 starting from some initial condition (x 0 1 , x 0 2 , 0) ∈ R 3 such that (x 0 1 ) 2 + (x 0 2 ) 2 < 1. 1. Study of the optimal control problem in the case where ε > 0. In that case, the Hamiltonian associated with the problem can be written (taking into account that p 3 = -1 is constant):
H = -p1 x2 + p2 x1 - u p2 √(1 - x1² - x2²) - x1²/2 - ε x2²/2 → max_{u∈[-1,1]}
and it is conserved over [0, T ] (since the problem is autonomous) and also equal to zero (because the terminal time is free). The adjoint equation reads as follows:
ṗ1 = -p2 - u x1 p2/√(1 - x1² - x2²) + x1,   ṗ2 = p1 - u x2 p2/√(1 - x1² - x2²) + ε x2.
Now, the switching function is ϕ := -p2 √(1 - x1² - x2²), so it is enough to study the behavior of p2. Along a singular arc, we have p2 = ṗ2 = 0, thus p1 = -ε x2. Replacing into the Hamiltonian yields the singular locus ε x2² = x1² in the state space. By differentiating this expression, we obtain the value of the singular control
u⋆_ε := ±(1 + ε)|x1| / (ε √(1 - x1² - x2²)).
We note that the value of |u ⋆ ε | may exceed 1, so in that case, we have saturation (see [START_REF] Bayen | Tangency property and prior-saturation points in minimal time problems in the plane[END_REF]). However, numerical simulations indicate that it is admissible when the trajectory is close to the origin (what can be expected).
Finally, the second order derivative of p2 satisfies: p̈2|u = -ε √(1 - x1² - x2²) < 0. It follows that Legendre-Clebsch's condition is satisfied (since ϕ = -p2 √(1 - x1² - x2²)).
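To illustrate the saturation remark above numerically (our own sketch; the value of ε and the sampling grid are arbitrary choices), one can evaluate the candidate singular control on the locus ε x2² = x1² and locate the region where it remains admissible, i.e. |u⋆_ε| ≤ 1.

import numpy as np

eps = 0.05  # illustrative value

def u_singular(x1, x2):
    # candidate singular control on the locus eps * x2**2 == x1**2
    return (1.0 + eps) * np.abs(x1) / (eps * np.sqrt(1.0 - x1 ** 2 - x2 ** 2))

x1 = np.linspace(1e-4, 0.2, 2000)      # parametrize the locus by x1 >= 0
x2 = x1 / np.sqrt(eps)                 # so that eps * x2**2 = x1**2
inside = x1 ** 2 + x2 ** 2 < 1.0       # keep points strictly inside the unit disc
admissible = u_singular(x1[inside], x2[inside]) <= 1.0
print("singular control admissible for |x1| up to about", x1[inside][admissible].max())

Consistent with the remark, the printed threshold shrinks as ε decreases, so the candidate control is admissible only close to the origin.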
2. Study of the optimal control problem in the limit case. In this case, the switching function is also ϕ = -p2. By differentiating w.r.t. t, we find that ϕ̇ = -p1 + u x2 p2/√(1 - x1² - x2²).
Let us examine singular arcs in this case. Supposing that ϕ = 0 over some time interval [t1, t2] gives us -p1 x2 - x1²/2 = 0. The equation ϕ̇ = 0 also implies that p1 = 0, hence x1 = 0. Along the singular arc, we can write ϕ̈ = -ṗ1 = -x1, thus ϕ^(3) = -ẋ1 = x2, and finally ϕ^(4) = ẋ2 = x1 - u √(1 - x1² - x2²). To conclude this case, we obtain that the singular arc satisfies:
x1⋆ = x2⋆ = u⋆ = 0 ;   ϕ̈|u = 0 ;   ϕ^(4)|u = -1 < 0.
We can thus conclude that the limit case is indeed a second-order singular arc for which Legendre-Clebsch's condition is verified.
Numerical simulations on the perturbed optimal control problem
The results obtained for the perturbed quantum control system are presented on Fig. 5. Here again, we can see that the number of bang arcs increases as ε k decreases. In addition, the trajectory ends up with a singular arc approaching the singular arc of the limit problem (corresponding to u = 0). This example, like the previous ones, confirms the applicability of the method to various control systems.
Conclusion and perspectives
In this work, we have studied how to slightly modify optimal control problems involving a second order singular arc in order to obtain a "simpler" optimal control problem whose solutions do not possess chattering. The advantage of these modified optimal control problems is that they are more tractable from a numerical viewpoint since optimal controls involve only a finite number of switching times. The examples we have studied show that we obtain that way a deformation of the singular locus (corresponding to the second-order singular arc) into a "small" singular locus (corresponding to a first-order singular arc). Since approximated optimal controls may involve both singular and bang arcs, our approach somehow differs from other approaches using only a sequence of bang-bang arcs to approach the chattering control. In particular, our methodology allows to apply the PMP on the approximated problem and to obtain that way qualitative properties of the approximated control (like the value of the singular control for the approximated problem). Other applications such as in [START_REF] Ledzewicz | Singular controls and chattering arcs in optimal control problems arising in biomedicine[END_REF] could be investigated in future works. As well, an interesting question could be to find out if every control system involving a second order singular arc (typically, in our setting, when [g, [f, g]] is identically zero) can be approximated by control systems with a first-order singular arc (for which this Lie bracket is nonzero), and how to determine an approximated system. Additionally, convergence of the singular control (for the approximated problem) to the singular control (for the second order singular arc) could also be addressed in future works.
3.1.1 Presentation of the problem
Let ρ, ℓ : R → R be two smooth functions. Consider the three-dimensional optimal control problem
inf_{u∈U, T∈R+} x3(T)   s.t.
Figure 1: Simulations of the perturbed Fuller's problem with different values of ε_k.
Figure 3: Simulations of the perturbed Fuller's problem with ε_k < 0. Note that in that case, Legendre-Clebsch's condition has the opposite sign (i.e., p̈2|u = 2ε_k < 0), so we have an anti-turnpike behavior for the solution to the approximated optimal control problem, leading to an increasing number of bang arcs.
Figure 4: Simulations of the perturbed resource allocation problem (3.2) with different values of ε_k.
Figure 5: Simulations of the perturbed quantum control system with different values of ε_k.
If this situation occurs, then we say that the extremal 1 (x, p, u) is singular over [t1, t2] (equivalently, we say that the trajectory has a singular arc over [t1, t2]). The study of singular arcs relies on the expressions defining the derivatives of ϕ w.r.t. time. A simple computation shows that one has
u(t) = sign(ϕ(t)) a.e. t ∈ [0, T] s.t. ϕ(t) ≠ 0,   (2.4)
where ϕ(t) := p(t) · g(x(t)) is the switching function. On a sub-interval [t1, t2] ⊂ [0, T] on which ϕ vanishes, a different analysis should be carried out in order to find the expression of an optimal control (in particular (2.4) is useless).
1 An extremal is a triple (x, p, u) satisfying the state-adjoint system, the transversality condition, and the Hamiltonian maximization condition.
2 We mean that along a singular arc, the input is present explicitly in the expression of ϕ^(2q) and not in ϕ^(2r), r = 1, ..., q - 1 (see [START_REF] Bayen | Optimal synthesis for the minimum time control problems of fedbatch bioprocesses for growth functions with two maxima[END_REF]).
Acknowledgments
We thank Anas Bouali and the Inria Biocore team for helpful discussions on the subject.
Declarations
Ethical Approval: not applicable.
Competing interests: not applicable.
Authors' contributions: All authors contributed equally in this manuscript.
Funding: not applicable.
Availability of data and materials: not applicable. |
00411108 | en | [
"shs.anthro-se"
] | 2024/03/04 16:41:24 | 2009 | https://hal.science/hal-00411108/file/Coupaye_-_TAJA_-_What_s_the_Matter_with_Technology_-_Shortened_080606.pdf | Ludovic Coupaye
What's the Matter with Technology?
"What's the Matter with Technology? Long (and Short) Yams Materialisation and Technology in Nyamikum village, Maprik district, East Papua New Guinea." (9074 words)
Within the recent re-emergence of material culture and materiality within the field of anthropology, the very process of making things ("that make people" as rightly pointed out by Miller 2005: 5) -in other words, "technology" -occupies a revealing position. As rightly outlined by the authors of the introduction to this volume, Hocart's statement on canoe-making [START_REF] Hocart | The Canoe and the Bonito in Eddystone Island[END_REF] indeed seems to echo functionalist and reductionist views on the place material culture has occupied in subsequent anthropological fields of enquiry. Even more, I would argue, it also directly evokes the reluctance that anthropologists have since developed regarding the study of technology, notably in Anglo-Saxon literature.
In the collective attempt from participants to this volume to address materialisation as a process, it is interesting to note how 'technology' and techniques are pervasive to most examples: Guo on Solomon Islands shell-money making, Veys in the ways in which barkcloth and mats are imbued with feminine potency, Geismar in the very process of making photographs or Bonshek in the importance given to the loss of technical knowledge in Santa-Cruz, all mention at some stage of their discussions how the making of things imparts them with sets of values and properties. To complement their demonstrations, my own contribution
will be focussing less on artefacts themselves than on technical systems that materialise them. 1 "Technology", "labour", and "modes of production" are familiar terms in anthropology. However, as a starting claim, I would advocate that both vocabulary and disciplinary boundaries have contributed to give specific connotations to these terms, and seem, indeed, to have cast them within specific forms of determinism: technology with matter, labour with value, and production with social conditions. In the current discussion on the materiality of things (Miller 2005, Ingold 2007), I would situate the present argument on the ground that objects do not stem fully-armed and fully-clad with their properties before being injected into the different sets of transactions they are integrated into. Be it in exchange, consumption or phenomenal engagements with them, things are also perceived as the material -or at least perceptual -results of processes and agencies whose origins might not be known or only imagined, but which are nevertheless, consciously or not, presumed. Among all these engagements and processes, which the contributors to this volume agreed to define as materialisation, I would stress technology as being, in a way, a foundational one.
In this paper, I will first explore this idea of materialisation, through a brief review of the elements from the study of technologies in anthropology, and then turn towards an example taken from the Abelam area, in Papua New Guinea. 2 1 This paper is based on a fieldwork done between January 2002 and September 2003, leading to PhD submitted at the Sainsbury Research Unit for the Arts of Africa, Oceania and the Americas [START_REF] Coupaye | Growing Artefacts, Displaying Relationships: Outlining the technical system of Long Yam cultivation and display among the Abelam of Nyamikum village (East Sepik Province, Papua New Guinea)[END_REF]. Fieldwork has been funded by both the AHRC, and the Robert Sainsbury Scholarship. Subsequent research has been conducted at the SRU, and is currently the subject of postdoctoral fellowship at the Research Department of the Musée du Quai Branly, Paris, leading to the publication of a book. I am deeply grateful for the help and support from Steven Hooper, Pierre Lemonnier, Philippe Peltier as well as Josh Bell, for our discussions, his friendship and encouragements. I am also thankful to the members of the 2007 ASAO session for their remarks and comments, notably Wonu Veys, Liz Bonsheck and Haidy Geismar. 2 I do not address here the question of historical and dynamic aspects of Nyamikum yam cultivation. Far from considering the yam cultivation system as a static elements, I have chosen to focus here on the current picture of this phenomenon. Although early ethnography [START_REF] Kaberry | The Abelam Tribe, Sepik District, New Guinea: A Preliminary report[END_REF] pays some attention to the relationships between yam ceremonies and now absent social featuresnotably initiations cycles, war, and ceremonial exchanges with neighbouring groups -my aim here is not to try to come back to a 'mere technical' approach of yam, synchronic and a-historical, but to illuminate some aspects of the contemporary relationality of yam growing as a sociotechnical system.
From body techniques to technical systems: Materialisation as technology. 3
The place given to techniques in anthropology is sometimes coined as one of the main distinctions between English and French anthropology (Chevalier 1998, Faure-Rouesnel 2001). Indeed, while Mauss's essay Body Techniques (1950a[1935]), and his Manuel d'ethnographie (1947) both pointed directly towards material culture and physical interactions between people and their surrounding world, Lemonnier notices (1992: 3, 5) that it is rather surprising that this aspect of his legacy in the English-speaking literature remained underexamined especially when compared to the treatment given to The Gift (Mauss 1950b(Mauss [1923(Mauss -1924]]; see also Bray 1997: 12-13). Most of developments in this trend come from the works of one of Mauss's student, the prehistorian [START_REF] Leroi-Gourhan | Le Geste et la Parole I: Techniques et Langage ; II La Mémoire et les Rythmes[END_REF][START_REF] Leroi-Gourhan | Le Geste et la Parole I: Techniques et Langage ; II La Mémoire et les Rythmes[END_REF][1943[START_REF] Leroi-Gourhan | Le Geste et la Parole I: Techniques et Langage ; II La Mémoire et les Rythmes[END_REF][START_REF] Leroi-Gourhan | Le Geste et la Parole I: Techniques et Langage ; II La Mémoire et les Rythmes[END_REF]; Ingold 1999), and of the agronomist and anthropologist André-George [START_REF] Haudricourt | Vergleichende Studien zu Kulthaüsern im Sepik-Gebiet und an der Nordküste[END_REF]. While the former, up until recently, was best known in the Anglo-Saxon world for writing on prehistoric art (Leroi-Gourhan 1967), the latter might have been read for his works on ethnobotany [START_REF] Haudricourt | Vergleichende Studien zu Kulthaüsern im Sepik-Gebiet und an der Nordküste[END_REF][START_REF] Miller | Material Culture and Mass-Consumption[END_REF]. However, the position of what Haudricourt called la technologie culturelle (1968, 'cultural technology') had some difficulties to find its proper place, even in France, in spite of the creation of the CNRS laboratory "Technique & Cultures", where it has known its most interesting developments, through the work of, notably, Robert Cresswell (see [START_REF] Cresswell | Prométhée ou Pandore? Propose de Technologie Culturelle[END_REF] and Pierre Lemonnier (1982Lemonnier ( , 1983Lemonnier ( , 1986Lemonnier ( , 1992)). A more complete history of the reasons for this distinction between French and English-speaking regions, as well as the resistance to integrate the study of technical systems within anthropology, still remains to be discussed, but elements can be found in writings on ethnography of techniques (Lemonnier 1992[START_REF] Bray | Technology and Gender: Fabrics of Power in Late Imperial China[END_REF][START_REF] Dobrès | Technology and Social Agency[END_REF], Schiller 2001).
In the English literature, the focus on technical systems has retained some attention through studies in archaeology and its ethnoarchaeological developments, such as in the study of pottery or iron working (e.g. Van der Leeuw 1976, 1991; [START_REF] Edmonds | Description, Understanding and the Chaîne Opératoire[END_REF][START_REF] Rowlands | The magical production of iron in the Cameroon Grassfield[END_REF][START_REF] Gosselain | In Pots We Trust: The Processing of Clay and Symbols in Sub-Saharan Africa[END_REF][START_REF] Dobrès | Technology and Social Agency[END_REF][START_REF] Sillar | The Challenge of 'Technological Choice' For Materials Science Approaches in Archaeology[END_REF]). In contrast, anthropological studies come mostly from the French-speaking area (Creswell 1996, Lemonnier 1992, [START_REF] Jamard | La Technologie Culturelle peut-elle se tromper? Entretien avec Robert Cresswell[END_REF]), along with Warnier's recent focus on praxeology, in relation to the research group "Matière à Penser" -'matter to think' (Warnier 2002). However, Anglo-Saxon literature was not completely reluctant to analyse technology from an ethnographic point of view (e.g. Pfaffenberger 1988, 1992, Sillitoe 1998, [START_REF] Mackenzie | Androgynous Objects: String Bags and Gender in Central New Guinea[END_REF]), especially in the U.S. where the inherently cultural aspects were perhaps easier to integrate in a discipline that still had connections with archaeology (see notably Lechtman and Merill 1977).
3 Regarding the definitions of the term "technology", Sigaut (1985) has rightly pointed out how the term can be confusing. Lemonnier considers it to embrace "all aspects of the process of action upon matter, whether it is scratching one's nose, planting sweet potatoes, or making jumbo jets" (Lemonnier 1992: 4). Lemonnier suggests calling 'technological' an action which must "involve at least some physical intervention which leads to a real transformation of matter, in terms of current scientific laws of the physical world" (Lemonnier 1992: 5). Pfaffenberger's definition (1992: 497): "Technique […] refers to the system of material resources, tools, operational sequences and skills, verbal and nonverbal knowledge, and specific modes of work coordination that come into play in the fabrication of material artefacts." Sociotechnical systems, in contrast, refers to the "distinctive technological activity that stems from the linkage of techniques and material culture to the social coordination of labour. The proper and indispensable subjects of a social anthropology of technology, therefore, include all three: techniques, sociotechnical systems, and material culture." Sigaut (2002) restricts the use of technology to the science studying technical activities.
Concerning processes of materialisation, and for the purpose of this paper, three major interrelated aspects outlined by these studies have been selected. First, technology as being a black box in anthropological analysis; second its systemic nature; and third its sociogenic properties.
Just as material culture is understood to be humble and invisible in its contribution to social life (Miller 1987: 85-108), the social component of technology is inherently invisible (Pfaffenberger 1992: 500-502), a characteristic that could be held responsible for its exclusion from anthropological enquiries. Not only does this property explain how technology can be qualified as a "black box" (Lemonnier 1996), but it also outlines what Sillar, borrowing Karl Polanyi's statement about economy, calls the embeddedness of technology 4. This intimate relationship between technology and society has been notably highlighted by sociologists of scientific knowledge and STS (Science and Technology Studies; see Pfaffenberger 1992: 495-502, Latour 1993, [START_REF] Cresswell | Prométhée ou Pandore? Propose de Technologie Culturelle[END_REF]), who pointed out the "sociality of human technological activity" (Pfaffenberger 1992: 492, 493). This sociality is such that the relationship between technology and society has sometimes been qualified as a seamless web where "the social is indissolubly linked with the technical and the economic" (Hughes 1990: 112; but see Cresswell in Jamard 1999: 551). "Sociotechnical systems are heterogeneous constructs that stem from the successful modification of social and non-social actors so that they work together harmoniously -that is, so that they resist dissociation" (Pfaffenberger 1992: 498; 500). This "seamless web" also constitutes both one of the major characteristics and one of the major difficulties for those who study these "technical systems". Their systemic nature not only gears any technical activity to cultural and social phenomena, but also associates several domains of activities together (Lemonnier 1992: 7). Observing that planting yams amongst the Abelam is related to social organisation is no surprise; that it implies also other technical activities such as digging, rope-making, fence and house building can be more easily perceived; but that actual body techniques and technical operations on the material involved can be geared to local -emic -conceptions of labour, matters and substances, which in turn appear as embodied elements of the habitus, can be difficult to perceive unless one records the operational sequences, as some of the clues cannot be found in verbal explanations of the actors themselves.
This systemic aspect has two main consequences. First, interrelations between different operations, through similar materials or techniques that intervene at different moments of the process, associate several domains and artefacts together. Second, the obvious non-linearity of this system defies evolutionist or "technicist" determinism (Pfaffenberger 1992: 510-513, Coupaye 2004: 135-160). For instance, as recorded in the case of Maori [START_REF] Schaniel | New Technology and Cultural Change in Traditional Societies[END_REF], in spite of narratives about "primitive tools", and supposed technical improvement, the shovel has not replaced the digging-stick in Abelam context. In this example, not only do theoretical elements on consumption intervene -the symbolic identity
value that the digging stick materialises -but so do phenomena that belong to the sociotechnical system of gardening: the organisation of a work party corresponds also to the embodied ways in which one does this type of activity (Coupaye 2004: 168-169; 178-181; Appendix 1; see also [START_REF] Mcguigan | Wosera Abelam Digging Sticks: An Example of Art in Action[END_REF]).
The third aspect follows logically from the two previous ones. What stems from studies of techniques and technology is their profound sociogenic properties (Pfaffenberger 1992: 500). What Mauss's premises offer is the possibility to perceive that sociality is also physically embodied through the performance of physical action in relation to the physical world, in a non-verbal form. While military drilling can be considered as the most obvious example (Mauss 1950a: 367, 384), daily engagements with the material world -itself created by sociotechnical interactions with the environment -also form physical occasions of re-enactment of social values. Rules of conduct, proper (or improper) ways of doing things, be it how one forms a queue in a supermarket, how one makes a pot, or how one uses a digging stick to plant a garden in the foothills of the Prince Alexander Range, all can be evaluated and formalised in terms of appropriateness, either from the angle of efficacy or even of an aesthetics of action (see [START_REF] Hardin | The Aesthetics of Action: Continuity and Change in a West African Town[END_REF]). These embodied, nonverbal body practices outlined by Mauss, and underlying Bourdieu's examination of "generative schemes of practical logic" (Bourdieu 1977: 114-124), are socialization processes, not only of the body of the actor, but of his/her person, as well as of the artefact manipulated and created. Materialisation is per se socialization.
Exploring these properties requires two observations which must be recognised as related in our interpretations. Firstly, things can be considered indeed as representatives of "congealed labour" (Damon 1980: 284-286), but as long as labour is understood not as an ontological reality, as the marxist vulgate tended to do (Bonte 1999: 16), but as a concretion or a materialisation process of cultural values. Secondly, things, through their silent presence, are always assumed to be the phenomenal indexes of the efficacy of such invisible processes, even when the process is unknown [START_REF] Gell | The Technology of Enchantment and the Enchantment of Technology[END_REF]. In this perspective, the sociogenic potential of things, released through engagements with them, comes from the properties -one could say the 'materiality' -they acquired through the processes which has led to their materialisation. "Things are parts of persons because they are creation of them" recalls Damon (1980: 284) in his discussion of kula valuables.
In the example of Nyamikum, an Abelam 5 village of Maprik district of the East Sepik Province, Papua New Guinea, I am not concerned with aspects of 'cultural technology' such as technical choices, technological styles, or relationships between tradition and innovation (see [START_REF] Van Der Leeuw | Tradition and Innovation[END_REF], Papousek 1992, Lemonnier 1993; [START_REF] Sillar | The Challenge of 'Technological Choice' For Materials Science Approaches in Archaeology[END_REF]). I will rather focus on the production of yam tubers, and on how the properties of these artefacts stem from the ways in which they are constructed through cultivation. I will describe some components of the technical system, as well as selected parts of the operational sequence that leads from the opening of the garden to the consumption of yam. This will allow me to suggest how yams are composite objects, whose materiality is made of the intertwining of several layers of relationships, wrought together by a sociotechnical system, informing how this materiality affects their 'consumption'.
5 Inasmuch as the very term "Abelam" is a construct coming from the encounters between actual people of the Maprik area, colonial administrators and ethnographers, people from Nyamikum do define their own language as "Abulës", and acknowledge the entire area as part of the same group, with further divisions such as Samukundi, within which they identify a disappearing local language called Arenyëm, distinct from the Maaje-Kundi (or Manje Kundi) quoted in the literature (cf. [START_REF] Mcguigan | Wosera Abelam Digging Sticks: An Example of Art in Action[END_REF], Losche 1999: 215). Similarly, the description of such an entity in the following section cannot be taken much further than an imperfect reduction of several variations and blurring distinctions between neighbouring groups.
Setting
Located between the villages of Nyelikum [START_REF] Scaglion | Seasonal Patterns In Western Abelam Conflicts Managements Practices: the Ethnography of Law in the Maprik Sub-District Province, East-Sepik Province, Papua New Guinea[END_REF]) and Kimbangwa (Huber-Greub 1988), Nyamikum village's borders (disputed) follow roughly the course of one of the tributaries of the Mitpëm river ('Midpum' in older maps) on the west and the course of the Wutpam ('Odpum') river on the east. At the time of my stay, and based on the 2000 census, the population was slightly above 1100 people. The village is composed of about 25 to 30 hamlets, three of them regularly used as centres for fortnightly meetings organised by the Councillor and ceremonies such as the annual long yam display. Playing an important role in the cultivation process, social organisation can be broadly described as a patrilineal-clan based organisation with exogamic and virilocal rules of marriage. Lineages are components of some twenty-five këm, a term alternatively used as 'clan' and 'place'6 , who co-operate in operations such as the cleaning of the footpaths, the planting of gardens, and the preparation of ceremonies.
Each of these këm has a special relationship with a whole list of totemic species -such as birds, called jaabë, but also trees, insects and leaves (Forge 1966: 29). These ties also relate këm with specific spirits, notably the n Gwaal n du, the clan's mythical ancestor, as well as with potentially dangerous spirits dwelling in specific places in the surrounding bush and forests, such as the waalë living in water holes. Both ancestors and spirits, especially because of their material anchoring in the land belonging to their clan, are said to actively participate in the growth of food, tying together landownership, personhood, spiritual powers and cosmology [START_REF] Huber-Greub | Kokospalmenmenschen: Boden und Alltag und ihre Bedeutung im Selbstverständnis der Abelam von Kimbangwa (East Sepik Province, Papua New Guinea)[END_REF], 1990). Also fuelling the dynamics of food production and exchange, the entire village is divided into ceremonial moieties that cross-cut through the organisation in këm. These two moieties, (ara), are officially engaged in competitive exchanges involving mainly yams of the long variety, and confront each man with his ceremonial partner (saabëra) in the other moiety (cf. notably Losche 1982: 80-85). This dualist system, present in every village of the Abelam area, is also organised to form a web that ties Nyamikum to villages as far as Apangai, in the West and Kalabu in the East (Forge 1970: 273-274). This partnership, which used
to be the underlying principle organising the initiations, is often compared locally to two football teams engaged in a game, and reflects the nature of political alliances. Intra-village ceremonial exchanges are based more on friendly competition, saabëra actually having a joking and motivating relationship with one another. Inter-village ceremonial exchanges were considered to be more aggressive, with the possible outcome of brawls or even feuds (Forge 1990: 162). This dualistic system is presented by Nyamikum people themselves as an integrated part of what growing food is about, and an individual without ceremonial partners would present himself as being "alone" -without challenge or support, both said to be necessary to the production of food in particular, and excellence in general.
Nyamikum is thus included within wider networks of relationships that connect it to other villages, both in and beyond the Abelam-speaking area, depending on the geopolitical map of allies and enemies. These networks relate ceremonial practices such as initiations, but also secret networks of connections between cultivators for material and non-material support, notably through the relation with sacred stones, owned and controlled by each këm. Linking together all villages, through kinship, friendship and moiety affiliations, they serve as the conduits for the circulation of cultivars, techniques, things, knowledge and 'magical' substances [START_REF] Forge | Paint: A Magical Substance[END_REF]) on which the success of yam cultivation is said to depend [START_REF] Lea | Abelam land and sustenance horticulture in an area of high population density, Maprik[END_REF], Forge 1966[START_REF] Losche | Male and female in Abelam Society[END_REF][START_REF] Huber-Greub | Kokospalmenmenschen: Boden und Alltag und ihre Bedeutung im Selbstverständnis der Abelam von Kimbangwa (East Sepik Province, Papua New Guinea)[END_REF][START_REF] Coupaye | Growing Artefacts, Displaying Relationships: Outlining the technical system of Long Yam cultivation and display among the Abelam of Nyamikum village (East Sepik Province, Papua New Guinea)[END_REF]). Other 'components', which I will briefly outline in the following description, include cultivators' bodily substances and magical support from land through the co-operation and support of the series of totemic spirits, who are able to recognise the legitimate owner of the land, as well as the quality of social relationships between genders, kin, moieties and other villagers.
Long Yam Cultivation as a materialisation process
In spite of the increasing presence of a capitalism economy in the area, through a combination of cash earning activities such as cash-crop growing, 7 access to store food such as (like elsewhere in Papua New Guinea) rice, cannedmeat and canned-fish, and chinese noodles, and a better road system, food production in Nyamikum is mostly based on shifting cultivation, 8 notably of yams. 9 Two main species of yams are cultivated: short ones, Dioscorea esculenta ('mami' in Tok Pisin; ka in Abelam) form the main diet; and long ones, Dioscorea alata ('yam' in Tok Pisin; waapi in Abelam) usually grown by men for what is generally described as ceremonial purpose. In particular, long yams waapi have been the most documented, due the spectacular annual ceremonies 10 , where gigantic tubers are decorated, and displayed, before being exchanged between ceremonial partners [START_REF] Kaberry | The Abelam Tribe, Sepik District, New Guinea: A Preliminary report[END_REF]: 355-356, Tuzin 1972, 1995;Huber-Greub 1988: 347;90: 274;[START_REF] Coupaye | Growing Artefacts, Displaying Relationships: Outlining the technical system of Long Yam cultivation and display among the Abelam of Nyamikum village (East Sepik Province, Papua New Guinea)[END_REF]Coupaye , 2007a).
An inventory in the village of Nyamikum gives a list of approximately 40 cultivars of D. esculenta, and 20 of D. alata (Coupaye 2004: 94-97). Both types of yams gardening are perceived as both cosmologically and technically linked. Long yams, are said to be the sine qua non condition for the success of short yam gardening: the harvest of long yams is first to come and the activity of waapi cultivation is what 'opens the road to all food'. This causal association not only forms the underlying justification of the cultivation of both species, in parallel, but also invites us to understand the materiality of ka and waapi in relation to one another, and to approach their cultivation as a whole.
Turning to 'technographic' aspects (Sigaut 2002: 425), the basic sequence of operations of shifting cultivation can be summarized as follow.
(1) Opening of the garden → (2) Clearing → (3) Planting → (4) Tending → (5) Cropping → (6) Fallowing 11
This simple succession of operations can be decomposed into several techniques, each of which combines matters (earth, wood, water, bodies, etc.), energies (the forces which move objects and transform matter), objects (tools, artefacts, 'means of work'), gestures (prodding, splitting, hitting, flattening, etc.) and knowledge (Lemonnier 1992: 5-9). 12 Regarding short yam gardens, steps 2 to 5 of the sequence are in fact repeated between two and four times before the land is left to go back to fallow for twenty years ([START_REF] Lea | Abelam land and sustenance horticulture in an area of high population density, Maprik[END_REF], Allen 1982, 1985, [START_REF] Lory | Les Jardins Baruya[END_REF]). In contrast, waapi gardens are in general used only once as such and usually left to fallow.
7 These include notably coffee, cocoa and recently vanilla. In March-April 2000, the hurricane Hudah destroyed the vanilla gardens of Madagascar, the world's primary producer. Vanilla quickly spread over Papua New Guinea, notably in the Sepik. At the time of my departure, in September 2003, one kilogram of dried vanilla beans was sold between Kina 600 and K800 (then roughly equivalent to ₤120 to ₤160).
8 For other studies of shifting cultivation, as a case of indigenous techniques, and their relations with magic, ritual, environment, cosmology, food production, or time, cf. inter alia Malinowski, Concklin 1961, Lea 1964; Rappaport 1968; Sigaut 1982; Juillérat 1986, 1999; Bonnemaison 1991; Sillitoe 1999; Gross 1998.
9 Taro (Colocasia esculenta), Aibika (Hibiscus (Abelmoschus) manihot) along with sago (Metroxylon spp.) are also an important part of the diet; however, Nyamikum people describe yams as being the most important crop (see [START_REF] Lea | Abelam land and sustenance horticulture in an area of high population density, Maprik[END_REF][START_REF] Coupaye | Growing Artefacts, Displaying Relationships: Outlining the technical system of Long Yam cultivation and display among the Abelam of Nyamikum village (East Sepik Province, Papua New Guinea)[END_REF]).
10 According to Nyamikum gardeners, short yams were also the subject of a display ceremony set after the long yams one. However, no ceremony of the sort was performed during the time of my own stay.
11 Compare with Concklin 1961: 29, figure 1.
12 See Coupaye 2004: 143-153, for a detail of these components within the yam production system, and a discussion of the systemic aspects. From a methodological point of view, let me emphasize that such a simple definition of the components of a given technique should not conceal the complexity of the practical reality it encompasses, and that it should not be restricted to the elements relevant only from the angle of Eurocentric conceptions of "technology". This precision not only seems necessary to temper the etic nature of such a descriptive method, but also to make visible what the constructing principles of the material result are, in other words the materiality fabricated.
The systemic nature of technical activities allows us to deploy the sequence and to show several layers of material activities (see also Kaberry 1941: 354;[START_REF] Lea | Abelam land and sustenance horticulture in an area of high population density, Maprik[END_REF]. The opening of the garden involves tools (axes and bush-knives), the use and mastering of fire, networks of social cooperation (the landowner with a party made of his kin and the people of his hamlet). Clearing will be made by men (heavy remains of trees) and women (cleaning smaller parts). Planting requires techniques such as carrying the yam setts (the full yams or cuts that will be used as "seeds"), digging the soil or placing and covering the sett in its mound. Regarding tending, in order to profit from both sun and water, yams vines (kutë) are usually staked or put on a trellis that elevates them above the level of the ground [START_REF] Johnston | Productivity of Lesser Yam (Dioscorea esculenta) in Papua New Guinea as influenced by sett weight and staking[END_REF]. Along with yams, several species are planted (and thus harvested) at different moments of the year, such as taro, bananas, tobacco, edible cane, beans or peanuts. Finally, gardening itself requires many other operations that might not be directly linked to the garden or the growing of crops. House-making, such as the storage house (ka n diga), or the garden house ( m baarë), and in the past, fence-building, are integral parts of the operations that people present as related to gardening. These in turn call for woodcutting (for timber), rope making (to tie the timber together, in combination with nails), sago-tending (to get the leaves for thatching), and so forth.
My description would however be incomplete if I did not include other elements usually considered as peripheral to most agronomic concerns (Coupaye 2004: 51-53), but which are viewed as essential to the entire process. Technical activities, defines Sigaut (2002: 424), are characterised from other activities, by the fact that "they are not simply material, they are intentionally material". This brings in the notion of efficacy, that was part of Mauss's definition of a technical gesture (1950a: 371), but also calls for the inclusion of elements generally dismissed from 'pure' technological or agronomical concerns, such as rites and magic, which, from an anthropological angle, have always been part of technical activities (see [START_REF] Hocart | The Canoe and the Bonito in Eddystone Island[END_REF][START_REF] Malinowski | Coral Gardens and their Magic: A Study of the Methods of Tilling the Soil and of Agricultural Rites in the Trobriand Islands[END_REF][START_REF] Malinowski | Coral Gardens and their Magic: A Study of the Methods of Tilling the Soil and of Agricultural Rites in the Trobriand Islands[END_REF]; [START_REF] Forge | Paint: A Magical Substance[END_REF]Gell 1988[START_REF] Gell | The Technology of Enchantment and the Enchantment of Technology[END_REF][START_REF] Rowlands | The magical production of iron in the Cameroon Grassfield[END_REF]. Not only are these elements locally perceived to be materially effective, but they also inform the type of causalities mobilised to constitute the final artefact, and the properties attributed to materials. Authors such as [START_REF] Forge | Paint: A Magical Substance[END_REF] or [START_REF] Malinowski | Coral Gardens and their Magic: A Study of the Methods of Tilling the Soil and of Agricultural Rites in the Trobriand Islands[END_REF][START_REF] Malinowski | Coral Gardens and their Magic: A Study of the Methods of Tilling the Soil and of Agricultural Rites in the Trobriand Islands[END_REF]) have outlined the role of substances or chants, but I wish to focus here on two main elements that are considered part of the growing process and integrated in the materiality of yams:
(1) gardeners and their body substances and qualities; (2) social behaviour requirements.
The yam growing process involves the circulation of substances, seen to be essential to its success. First, cultivators must submit themselves to specific prescriptions and proscriptions, in Nyamikum called yakët (TP: 'bilip', English: "belief" or, in anthropological terms, taboos), which include behavioural, alimentary, and physical requirements. Restriction from sexual intercourse is pointed out as the main proscription, to avoid the diffusion of the menstrual blood of the gardener's partner within his body. This blood is considered dangerous and inimical not only to the growth of things, but to many other activities such as war -before the Pax Australiana 13 -, painting, building, football, and such, that is, any activity whose success must be secured. While less performed today, penis bloodletting was considered as the only way to get rid of the nefarious substance contained in the menstrual blood. During my stay, some younger people, between 20 and 30, told me that this practice could be replaced by going to the Maprik medical centre to give blood.
Yakët is more a process intervening in than a factor of cultivation. Its contents depends on the këm and the individual, and is often seen as a sort of family recipe, combining recurrent elements. These are aimed to make the body 'light' (yëpwi, Coupaye 2004: 113-121), a quality said to make gestures and the spirit 'sharp' and efficient. In fact, yakët is aimed both at avoiding nefarious consequences, and at increasing the chances of success. The quality and nature of the yakët is said to have direct consequences on the cultivation of long yams, notably on the material result itself, especially the length and shape of the waapi. This was the result of the role, within the growing process, of tutelary spirits who would 'smell' the menstrual substance and withdraw their support from the gardener, rendering him unable to obtain a proper result, whatever his other qualities and skills were. In addition, the menstrual substance itself could directly affect the tubers, through contagion resulting from the gardener's touching the earth, the sett or the vines, or through the sweat resulting from labour falling on the ground, or on the yam mound. This would make the tubers shrivel and die, or at least remain small, and without taste. Finally, through the 'heaviness' ( n gumëk) of the body of the human actor, resulting from both the natural exhaustion following a sexual activity and from the effect of menstrual blood itself, the gardener would be weakened and feeble, making him prone to spiritual dangers, vulnerable to sorcery attacks, and fumbling in each technical action.
A second component fuelling the technical system is the body substance jëwaai -which must be comprehended in relation with the Yakët. Alternatively 'blood', 'scent' and 'flesh', the jëwaai is a substance that forms the basis of an ability that can be compared to the English notion of a 'green thumb', but also influences the success of activities other than gardening, notably the performance of magic (kus). Three types of jëwaai, listed here from the best to the worst in terms of effects on crops, are inherited from either the mother's or the father's line: Bird, Pig, and Wallaby 14. One's jëwaai could be tested by planting a banana tree and checking both the speed and the quality of its growth, but the main material result of one's jëwaai could be seen in the size of the yam tubers. Jëwaai is equally distributed in both men and women. In fact, certain gardeners considered that, depending on the type of jëwaai of the wife, and provided that she followed the same Yakët as her husband's, or was past menopause, she could assist her husband in tasks not directly related to long yams (such as weeding other crops in the long yams garden, making fire, etc.). As for the role of menstrual substance, both old women and young girls, because of the absence of menstruation, are more commonly allowed in the garden. Certain gardeners consider that young girls (even more than young boys) can administrate the magical substances on waapi tubers, as they are the only people who are definitely not sexually active.
13 During the years following WWII, the Australian administration set out to put an end to "tribal warfare" in PNG, notably in the Highlands, through a mixture of colonial power and development of economic structure.
14 This classification system was systematically considered to be equivalent to the "waitman" classification of blood. The rules of transmission of jëwaai from either the mother or the father did not seem to be governed by any specific reason other than luck. Like a fluid, mixing a good jëwaai with a bad one could either result in a medium one, or a good or a bad one. In fact, the quality of somebody's jëwaai was ascertained post hoc, ergo propter hoc, through the test described below.
The jëwaai is usually constituent of both invisible and visible body fluids such as breath, smell, sweat, blood, saliva, and sperm, which explains its contagious properties. It both forms the signature of an individual, as land and bush spirits are able to recognise one's jëwaai -another means to affirm landownership, as these spirits are in control of an important part of the land's fertility -and the means to successfully perform activities related to the spiritual domains. Specific operations such as the making of magical substances or the utterance of chants and words, requires specific jëwaai, and people mention lineages specialised in spiritual or technical activities because of the exceptional qualities of their substance (Coupaye 2004: 121-126).
These two notions yakët and jëwaai, relate to local conceptions about bodily fitness, and how one can harness the energies required to processes essential to build one's -and per extension the këm's (clan) or even the village's -fame ('having a name'). But through behavioural requirements, it is the individual's sociality which is also integrated as a component of the production system. However the association of yam cultivation with social dynamics is not solely grounded on the necessity to avoid disputes and conflicts within and between communities (and in previous times, war with neighbours), nor is it based solely on seasonal patterns [START_REF] Scaglion | Seasonal Patterns In Western Abelam Conflicts Managements Practices: the Ethnography of Law in the Maprik Sub-District Province, East-Sepik Province, Papua New Guinea[END_REF][START_REF] Scaglion | Abelam Yam Beliefs and Sociorhythmicity: A Study in Chronoanthropology[END_REF].
During fortnightly meetings, organised by the Councillor, recommendations from the local government are transmitted, and internal conflicts, either territorial or domestic, are publicly mediated, these being pointed out as endangering the village's capacity to produce long yams, and consequently food. These public occasions also see the performance of rituals of peacemaking, unmasking of sorcery and payment of compensations, all accompanied by metaphorical discourses on the necessity to keep the peace within the community to avoid troubling the growth of the yams (Coupaye 2007b). Recruiting seemingly heterogeneous elements, such as vanilla, national elections, World Cup, waapi, God, n gwaals spirits, sorcery and Yakët, these metaphoric discourses (aa n jaku n di, or "veiled speech"), publicly performed, and worked out by the audience, are also said to be essential elements meant to heat the 'place' and accelerate the growth of yams.
Embodied sociality intervenes differently, according to the type of gardens. Ka gardens entail the cooperation of the entire hamlet, with possible affines and partners (with whom the garden is shared), for the planting is usually performed in one day, which is a fundamental step in the process. In contrast, waapi gardens are planted rather secretly, by the gardener, accompanied by a few of his friends, also in a strict state of Yakët. The long yam gardens into which I was allowed presented between five and eight waapi -of different cultivars, but always including a Maa m butap -all planted during a month's period. Harvest of ka is made by the household, and sometimes affines, while waapi are harvested by the same group of men who planted it.
Finally, the material aspects of harvested yams themselves also differ in terms of size, shape and constitution. This is a result not only of the different cultivars, but also of the techniques used to make them grow. Short yams are grown in holes that do not exceed 0.4 m. The full yam is used as sett, and is placed at the bottom of the hole, and recovered by finely hand-broken soil. A tuber is harvested after six to seven months and yields, depending on the cultivars, between three to six new tubers that vary between 0.005 to 0.025 m. diameter in size. As discussed, a planter's jëwaai also influences the size of the tubers. These variations in size are a calculated effect by the garden's owners, as people need tubers of different sizes for various purposes (to feed the pigs, to be re-planted, for daily food, or for festive occasions). Thus, the involvement of a work party of up to forty people is a way to mix the jëwaai of individuals within the garden, so that a wider range of tubers can be harvested. In contrast, waapi are planted in individual mound, and only a cut is used. The hole is dug before, reaching up to two metres, filled with finely cleaned and broken soil, on top of which a rounded mound of up to one metre high is made (Lea 1966, Coupaye 2004: 166-178). Once the mound is ready, the sett is placed on the top of the mound, and the new tuber is able to grow deeper through the softened soil. An average size of 1.8 to 2 m. is usually obtained, with an average weight of 45 to 50 kg. This technique can also be used to obtain ka tubers of up to one metre, called jaa m bi, that are also used for ceremonial occasions.
The shape, size and texture even more than the quantity of yams produced are thus seen as the materialisation of a combination of bodily, social, spiritual and moral qualities of their cultivators. However, the ethnographically famous long yams waapi are perhaps the main manifestations of such qualities. In fact, while both ka and waapi gardening imply precise behaviours and rituals, the cultivation of waapi requires the gardener to follow a precise and more arduous Yakët. This is combined with the fact that while in the ka garden, both men and women operate together, only men perform most of the technical operations in the waapi garden, and take care of the entire process. While all long yams cultivated within the waapi garden are submitted to the same requirements, they are especially applied when one wants to obtaining the 'head of food', the Maa m butap long yam cultivar, around which the main ceremony, Waapi Saaki ("the Lining Up of the Long Yams") is elaborated. Growing Maa m butap, and the Maa m butap itself, materialises the necessity to behave properly because, if men were to fail growing long yams, then food could not come out of other gardens, as people not in a Yakët state will most likely act foolishly, committing adultery, engaging in sorcery business or brawling unnecessarily.
Such judgments and comments indicate how the very technical process of cultivation is perceived as a socialisation process, but also how the yam is perceived as the result of the process itself. During a Waapi Saaki in June 2003, a man in his thirties was exhibiting a long yam he had himself cultivated, having been "fined" the year before by the influential men of the village for provoking a brawl during a long yam ceremony. During the public discourse he made, the man metaphorically referred to his waapi as his 'penalty', but also as the 'road' he had used to learn how to behave. The yam he was presenting today, decorated with feathers, flowers, shell, and mask, was the material index of him being, now, a "man", and not a child anymore.
The sociality of sequences
Turning finally to the performance of the process itself, local perceptions reveal features that confirm the inherent social component of food production, reaffirming, if needed, the validity of the notion of sociotechnical system (Figure 1). Extracts from Kulang's account in Figure 1 include the following phases:
o Giving the substances to the Maabutap (on the head).
o Sënaba: when the yam vine starts to turn yellow or brown, 'fertilisers' are given to the tuber. Then, the tuber is really starting to grow.
o Kwaat Baalë ('Pit-Pig'): when leaves are drying, a hole is dug underneath the mound to check the growing point of the tuber. Depending on the gardener's evaluation, an extension of the waagu (hole) is made ('extension': sabakara). If the tuber has not reached the end of the waagu, the soil is removed (waagu jagët: 'emptying the hole'), and new soil is put in: kulë taapu wëlikwe ('new "bed" [using the name of the coconut sheath: taapu] placed'), made of top soil (makwalkëpma).
o Lëraa: one moon after the Kwaat Baalë, another hole is dug under the tëkët to check the size of the tuber. Depending on the evaluation, an extension of the hole or additional fertilisers can be made.
These two summary accounts in figure 1 illustrate the different levels of interlacements that contribute to the materialisation of yams.
Kulang's sequence of phases corresponds to the operations he himself performed during the 2001-2002 season and which allowed him to harvest a Maa m butap that was considered to be the best during the June 2002 Waapi Saaki. He focused on the different moments when one had to give the Maa m butap the two substances (cf. Forge 1962) essential to help their growth: a vegetal-based liquid called the gunyë n gi (lit. 'water-stinging') and another liquid, but sometimes only powder/mineral-based, kusbawu (lit. 'magic ash').
Gayinigi and Kitnyora's sequence is different and appears less detailed, even though coming from acknowledged Nëma n du (Big/Great Men). What is relevant here is that the different phases are less concerned with primarily material operations, and not only include different types of operations, but also cover different types of domains. Actually, the two Nëma n du's focus is on phases that deal with the negotiation aspects of the process: the account stresses interactions notably between and with the different villages' Nëma n du and or Kajatu n du of different villages -individuals whose identity is kept secret, wardens of secret stones that control the fertility of crops (Coupaye 2004: 128-133) -, and notably on secret negotiations with and between the stone-wardens and negotiations within the community to decide the type of ceremony to be held. One can also notice the stress on the importance of the 'cleaning of the place', both before the planting and after, and the importance of settling disputes and the avoidance of conflicts. When I asked, both appeared as two distinct moments, marking the two thresholds of the year, each one marked by the killing of pigs. However, during my stay, because of the scarcity of pigs, I have not been able to observe such a ceremony. Pig meat was reserved for the Waapi Saaki itself and this gift of meat seemed to act for both phases of 'cleaning'.
What emanates from these accounts is that long yam growing constitutes both a mythic-technical frame for food production, while simultaneously constituting a technical synecdoche for gardening. Growing waapi and displaying them is more than about the phallic cult it was first compared to [START_REF] Kaberry | The Abelam Tribe, Sepik District, New Guinea: A Preliminary report[END_REF][START_REF] Kaberry | The Abelam Tribe, Sepik District, New Guinea: A Preliminary report[END_REF](Kaberry -1942;;[START_REF] Tuzin | Yam symbolism in the Sepik : an Interpretation Account[END_REF]Tuzin , 1995)), while simultaneously evoking spirits and initiates (Hauser-Schäublin 1995: 41-43, Coupaye 2007a). It corresponds to the intricate perception of what yam production is about, and how it weaves together social relationships with the performance of material activities. To materialise a yam requires intertwining substances, material actions, social interactions and symbolic negotiations. Nyamikum gardeners's perceptions and interpretations of the process regarding the factors essential for the success of the process force us to consider that growing of long yams implies a wider system which calls upon and makes manifest types of relationalities in forms of interlacements and networks of what is usually considered as material and non-material aspects. It also re-adjusts our conception of "technical systems" as only functional and practical aspects of human agency intended to have a physical result on reality. However, questioning what type of reality we are dealing with here allows us in fact to extend the notion of technical systems towards domains that are actually usually considered only as symbolical, or purely social or cultural, and analyse processes of production that bring materiality at the same level as sociality. This paper dealt more with the ways in which peoples of Oceania materialise themselves, rather than with processes by which anthropological or museum discourses and practices contribute materialising Oceania. What is argued for is the need to investigate the inherent relationality of things and activities -in a similar sense that the relationality of personhood demonstrated elsewhere [START_REF] Strathern | The Gender of the Gift: Problems With Women and Problems With Society in Melanesia[END_REF](Strathern , 1999) ) -not only through their use and consumption, but through how they are made. Analysis of sociotechnical systems such as Cresswell's, Lemonnier's, Latour's or Pfaffenberger's demonstrate that technology is as much about the making of relations, as it is about the materialisation or, I suggest, the objectification of successful relations. Material and non material, social and technical, all are wrought together in the making of an artefact, which instantiates always more than what is visible on its surface, and even more than what is made of it, while being consistent with the material nature of the thing itself. Yams as artefacts are thus more than "congealed labour", they can be seen as "condensed networks… [which] work as summation or stop" (Strathern 1996: 523), that is materialised moments when properties acquired from the materialisation process as a system, can be engaged with, in consumption or use. 16 It also gives an insight about how certain categories of objects can be considered not as bounded entities, but shifting ones, and processual ones that have the ability to generate new sets of relations, to have an agency.
The very notion of "artefact" itself might be central and, I agree, goes beyond the mere idea of manufactured object (Miller 1987: 112-115) as it always implies agencies and intentionalities that are perceived as having been encapsulated within its very material form [START_REF] Gell | The Technology of Enchantment and the Enchantment of Technology[END_REF](Gell , 1998)), or better: made of materials. Take a piece of rock brought back from the moon by Apollo 17, for instance. Susan Pearce rightly insists on the selection process and the "cultural value it is given and not primarily the technology which has been used to give it form of content, although this is an important mode of value creation". I would argue that maintaining such distinction between cultural value and technology obviates the entire sociotechnical process that includes the making of a rocket, the training of astronauts, the gathering of fuel, and material resources to launch them into space, the body technique (learned through hours of drilling) of the astronaut walking on a low gravity environment, and the processes that make them land safely. Had I the piece of rock on my desk (Ingold 2007), its materiality would definitely stem from its materials, as well as from the network of relations it would materialise. I suspect Abelam people finding stones in the river to make the heap central to their ceremonial ground would not argue with me, notably because it implies carrying them back all the way up to the hamlet,
in the night so that everybody would think it appeared through the agency of ancestors.
I suggest that things' capacities to participate to social life, in other words their properties, are not only made visible through the ways in which people engage with them once they are made, but also stem from how they are made, produced, fabricated, worked out, all these terms taken in a non metaphorical sense. Things' properties stem from the material and sensual qualities they have acquired, or are thought to have acquired, through processes now invisible. These technological processes, by definition socialising ones, be they known or unknown, always intertwine several levels of reality. If "sociality" increasingly replaces the notion of 'society', 'materiality' could be the relational definition of 'material' outlining how things are as composite and fluid as those of persons. But this fluidity, this multivalence, or what makes things 'hybrids' in our eyes [START_REF] Latour | Nous n'avons jamais été modernes: Essai d'anthropologie symétrique[END_REF], comes from the systemic nature of the process, and of the multiplicity of domains to which their creation resorts. Even when the technology is unknown, or foreign, even metaphorically speaking, the origin of things is always presumed by those who encounter them, as the result of processes: money is assumed to be grown in the same manner as food (Bell, this volume), or, as I was myself told, coming from specific machines that every white man has in his home. Things are concretions of relations, and their materiality also stems on how they came into being, not only from how they are used. This brings us briefly back to "labour", "technology" or "modes of production": to approach the materialisation of artefacts from the angle of the sociotechnical system itself is not only a methodological choice to attain an emic understanding of indigenous materiality. It is also grounded on the material validity (one could be tempted of speaking of multiples validities) of representational -or ideological -components of technological phenomena. Indeed, I argue that such components not only contribute to shaping how human beings construe their relationships with each other and with the material world, but correspond at the same time to what is moulded by and how these relationships are materialised in the form of new products of these representations. Objectification as Miller's defined it is a powerful tool to understand how materialisation is close to socialisation. However, human beings' technical ability to concretise social values in artefacts, to condense their networks of relations, and to surround themselves with such materialised results of socialisation could indeed constitute one of the main reasons why things still matter: not only because of how we consume them, but also because of how we make them consumable.
Fig. 1: Three local accounts of the long yam growing process (short version). Kulang in his early forties and both Gayiningi and Kitnoyra are over sixty, and are considered to be Nëma n du. Nyamikum 2002. NB: in the transcription, for the sake of clarity, I removed the superscripted consonants in local terms.
[Figure 1 excerpt: Edward Kulang's account / Gayiningi and Kitnyora's account. Part I, from before the planting: beginning of the Yakët; clearing then burning the long yam garden; gathering the ingredients for the two main 'magical' substances/fertilisers, gunyëgi (lit. 'water-stinging') and kusbawu (lit. 'magic ash'); ends when the yam vine reaches the base of the vertical trellis. Part II: from the moment the vines climb the vertical trellis (taawu).]
Conclusion: "things are parts of persons, because they are creation of them"
Cf. the title of the 2006 workshop organised by Bill Sillar at the department of archaeology, UCL, May 2006.
The SIL transcription këm is the equivalent of the term kum found in literature (for example,[START_REF] Huber-Greub | Kokospalmenmenschen: Boden und Alltag und ihre Bedeutung im Selbstverständnis der Abelam von Kimbangwa (East Sepik Province, Papua New Guinea)[END_REF] or kim(Hauser-Schaüblin 1989). Its translation as 'place' has become part of names of villages, such as Nyami-këm, Nyeli-këm or Sara-këm. However, earlier transcriptions have often led to the use of the spelling '-kum' or even '-gum', such as the official map spelling for Neligum, Gweligum or Waigagum. As it interfered quite often with the actual names of the clans, such as Tatmëkëm, or Sarëkëm -notably while I was myself collecting genealogies, with people giving me the name of their hamlet instead of their clan and reversely -I use '-këm' when referring to clans, and '-kum' when referring to villages..8/26/2009
Damon 1980: 204.
However, although this approach of sociotechnical phenomena does evoke the holistic notion of "social total fact"(Mauss 1950) or systemic studies such as the one conducted by Rappaport (1968), it does not entirely subscribed either to the complete reliance on actor-network theory or on temptations of materialistic or functionalist explanations (for a critique of Rappaport see Hornborg 1996). If anything, I would perhaps consider an approached based on the non-linearity of complex systems (cf. Lee 1997).8/26/2009 |
00012408 | en | [
"info.info-cl"
] | 2024/03/04 16:41:24 | 2004 | https://hal.science/hal-00012408/file/moot_cg2004.pdf | Richard Moot
email: [email protected]
Signes
Graph Algorithms for Improving Type-Logical Proof Search
Keywords: Automated Deduction, Floyd-Warshall Algorithm, Lambek Calculus, Proof Net, Ranked Assignments
Proof nets are a graph theoretical representation of proofs in various fragments of type-logical grammar. In spite of this basis in graph theory, there has been relatively little attention to the use of graph theoretic algorithms for type-logical proof search.
In this paper we will look at several ways in which standard graph theoretic algorithms can be used to restrict the search space. In particular, we will provide an O(n 4 ) algorithm for selecting an optimal axiom link at any stage in the proof search as well as an O(kn 3 ) algorithm for selecting the k best proof candidates.
Introduction
Type-logical grammar [START_REF] Morrill | Type Logical Grammar: Categorial Logic of Signs[END_REF][START_REF] Moortgat | Categorial type logics[END_REF] is an attractive logical view of grammatical theory. Advantages of this theory over other formalisms include a simple, transparent theory of (λ term) semantics thanks to the Curry-Howard isomorphism and effective learning algorithms for inducing grammars from linguistic data [START_REF] Buszkowski | Categorial grammars determined from linguistic data by unification[END_REF].
Proof nets, first introduced for linear logic by [START_REF] Girard | Linear logic[END_REF], are a way of presenting type-logical proofs which circumvents the redundancies present in, for example, the sequent calculus by performing all logical rules 'in parallel'.
The only non-determinism in trying to prove a theorem consists of selecting pairs of axiom links. Each possible selection -if correct -will result in a different proof. However, many of these possible selections can never contribute to a proof net, while a naive algorithm might try these selections many times. It is the goal of this paper to provide algorithms for filtering out these possibilities at an early stage and selecting the axiom link which is most restricted, thereby improving the performance of proof search.
Given that the problem we are trying to solve is known to be NP complete, even in the non-commutative case, it would be too much to hope for a polynomial algorithm [START_REF] Kanovich | The multiplicative fragment of linear logic is NPcomplete[END_REF][START_REF] Pentus | Lambek calculus is NP-complete[END_REF]. However, we will see an algorithm for computing the best possible continuation of a partial proof net in O(n 4 ).
A second aim is to develop a polynomial algorithm by means of approximation. If we consider only the best k axiom links, then we can find these in O(kn 3 ). When best is defined as 'having axiom links with the shortest total distance' this algorithm converges with results on proof nets and processing [START_REF] Johnson | Proof nets and the complexity of processing centerembedded constructions[END_REF][START_REF] Morrill | Incremental processing and acceptability[END_REF].
Proof Nets and Essential Nets
In this section we will look at two ways of presenting proof nets for multiplicative intuitionistic linear logic (MILL) together with their correctness criteria and some basic properties.
Though the results will be focused on an associative, commutative system, it is simple to enforce non-commutativity by demanding the axiom links to be planar [START_REF] Roorda | Resource logics: A proof-theoretical study[END_REF] or by labeling, either with pairs of string positions [START_REF] Morrill | Higher order linear logic programming of categorial deduction[END_REF] or by algebraic terms [START_REF] De Groote | An algebraic correctness criterion for intuitionistic proofnets[END_REF]. In order to have more flexibility in dealing with linguistic phenomena, other constraints on the correctness of proof nets have been proposed [START_REF] Moot | Proof nets for the multimodal Lambek calculus[END_REF], but given that the associative, commutative logic is the worst case (in the sense that it allows the most axiom links) with respect to other fragments of categorial grammars there are no problems adapting the results of this paper to other systems. However, we leave the question of whether it is possible to perform better for more restricted type-logical grammars open.
The choice of presenting the logic with two implications which differ only in the order of the premisses of the links is intended to make the extensions to the non-commutative case more clearly visible.
Table 1 The sequent calculus for L/MILL with commutativity implicit
[Ax]   A ⊢ A
[Cut]  from ∆ ⊢ A and Γ, A, Γ' ⊢ C, infer Γ, ∆, Γ' ⊢ C
[L•]   from Γ, A, B, ∆ ⊢ C, infer Γ, A • B, ∆ ⊢ C
[R•]   from Γ ⊢ A and ∆ ⊢ B, infer Γ, ∆ ⊢ A • B
[L/]   from ∆ ⊢ B and Γ, A, Γ' ⊢ C, infer Γ, A / B, ∆, Γ' ⊢ C
[R/]   from Γ, B ⊢ A, infer Γ ⊢ A / B
[L\]   from ∆ ⊢ B and Γ, A, Γ' ⊢ C, infer Γ, ∆, B \ A, Γ' ⊢ C
[R\]   from B, Γ ⊢ A, infer Γ ⊢ B \ A
[Figure 1: example sequent derivation of s / (np \ s), (s / (np \ s)) \ s ⊢ s, built from the axioms np ⊢ np and s ⊢ s using the rules [L\], [R/], [L\], [R\] and [L/].]
Sequent Calculus
Table 1 shows the sequent calculus for the Lambek calculus L, first proposed by [START_REF] Lambek | The mathematics of sentence structure[END_REF]. The commutative version, the Lambek-van Benthem calculus LP, is also known as the multiplicative fragment of intuitionistic linear logic MILL. An example sequent derivation is shown in Figure 1.
Proof Nets
Proof nets are an economic way of presenting proofs for linear logic, which is particularly elegant for the multiplicative fragment. When looking at sequent proofs, there are often many ways of deriving essentially 'the same' proof.
Proof nets, on the other hand, are inherently redundancy-free.
We define proof nets as a subset of proof structures. A proof structure is a collection of the links shown in Table 2 which satisfies the conditions of Definition 1. A link has its conclusions drawn at the bottom and its premisses at the top. The axiom link, top left of the table, has no premisses and two conclusions which can appear in any order. The cut link, top right of the table, is mentioned only for completeness; we will not consider cut links in this paper. A cut link has two premisses, which can appear in any order, and no conclusions. All other links have an explicit left premiss and right premiss.
[Table 2: links for proof structures, listing the axiom and cut links together with the tensor (solid) and par (dotted) links for the negative and positive occurrences of A • B, A / B and B \ A.]
We also distinguish between negative (antecedent) and positive (succedent) formulas and between tensor (solid) and par (dotted) links.
Definition 1 A proof structure S is a collection of links such that:
(1) every formula is the conclusion of exactly one link, (2) every formula is the premiss of at most one link, formulas which are not the premiss of a link are called the conclusions of the proof structure, (3) a proof structure has exactly one positive conclusion.
Given a proof structure, we want to decide if it is a proof net, that is, if it corresponds to a sequent proof. A correctness criterion allows us to accept the proof structures which are correct and reject those which are not. In this section, we will look at the acyclicity and connectedness condition from [START_REF] Danos | The structure of multiplicatives[END_REF], which is perhaps the most well-known correctness condition for proof nets in multiplicative linear logic. We will look at another condition in the next section.
Definition 2 For a proof structure S, a switching ω for S is a choice for every par link of one of its premisses.
Definition 3 From a proof structure S and a switching ω we obtain a correction graph ωS by replacing all par links by a single edge connecting the conclusion of the link to the premiss selected by ω. A proof structure is a proof net if and only if every correction graph obtained this way is acyclic and connected. Proof search in a proof net system is a rather direct reflection of the definitions. Given a sequent Γ ⊢ C we unfold the negative formulas in Γ and the positive formula C, giving us a proof frame. Note that given a polarized formula, exactly one link will apply, making this stage trivial. An example proof frame for the sequent
s / (np \ s), (s / (np \ s)) \ s ⊢ s of Figure 1 is given in Figure 2.
We have given the atomic formulas an index as subscript only to make it easier to refer to them; the numbers are not formally part of the logic. The matrix next to the proof frame in the figure represents the possible linkings: the rows are the negative formula occurrences, whereas the columns are the positive formula occurrences. White entries represent currently impossible connections whereas colored entries represent the current possibilities.
The next stage consists of transforming the proof frame into a proof structure by linking atomic formulas of opposite polarity. It is this stage which will concern us in this paper. This is a matter of putting exactly one mark in every row and every column of the table. Finally, we need to check the correctness condition. Though there are many correction graphs for a proof structure, [START_REF] Guerrini | Correctness of multiplicative proof nets is linear[END_REF] shows we can verify the correctness of a proof structure in linear time. The proof structure in Figure 3 is indeed a proof net, which we can verify by testing all correction graphs for acyclicity and connectedness. Of the 5 alternative linkings, only one other produces a proof net.
Essential Nets
For out current purposes, we are interested in an alternative correctness criterion proposed by [START_REF] Lamarche | Proof nets for intuitionistic linear logic I: Essential nets[END_REF]. This criterion is based on a different way of decomposing a sequent, this time into a directed graph, with conditions on the paths performing the role of a correctness criterion. A net like this is called an essential net. The links for essential nets are shown in Table 3 on the next page, though we follow de Groote (1999) in reversing the arrows of [START_REF] Lamarche | Proof nets for intuitionistic linear logic I: Essential nets[END_REF].
Definition 5 Given an essential net E its positive conclusion is called the output of the essential net and the negative conclusions, as well as the negatives premisses of any positive / or \ link, are called its inputs.
Definition 6 An essential net is correct iff the following properties hold.
(1) it is acyclic, (2) every path from the negative premiss of a positive / or \ link passes through the conclusion of this link, (3) every path from the inputs of the graph reaches the output of the graph.
[Table 3: links for essential nets, the directed counterparts of the links of Table 2 (with the arrows of Lamarche reversed following de Groote (1999)), covering the axiom link and the negative and positive links for A • B, A / B and B \ A.]
[Figure 4: example essential net for the running example, with atomic formulas s1, ..., s6, np7 and np8.]
Theorem 7 (Lamarche (1994)) A sequent Γ ⊢ C is provable in MILL iff its essential net is correct.
Condition (1) reflects the acyclicity condition on proof nets, whereas conditions (2) and (3) reflect the connectedness condition. The formulation of 'every path' exists only to ensure correctness of the negative • link; in all other cases there is at most one path between two points in a correct essential net.
Figure 4 gives the essential net corresponding to the proof frame we have seen before, but this time with the np axiom link already performed. Remark that we have simply unfolded the formulas as before, just with a different set of links.
It will be our goal to eliminate as many axiom links as possible for this example.
Though the correctness criterion was originally formulated for the multiplicative intuitionistic fragment of linear logic only, [START_REF] Murawski | Dominator trees and fast verification of proof nets[END_REF] show -in addition to giving a linear time algorithm for testing the correctness of an essential net -that we can transform a classical proof net into an essential net in linear time. So our results in the following sections can be applied to the classical case as well.
Basic Properties
In order to better analyze the properties of the algorithms we propose, we will first take a look at some basic properties of proof nets.
Axiom Links
Since we will be concerned with finding an axiom linking for a partial proof structure P which will turn P into a proof net, we first given some bounds on the number of proof structures we will have to consider. Given that the problem we are trying to solve is NP complete, it is not surprising these bounds are quite high.
Proposition 8 Let P be a proof net and f an atomic formula, then the number of positive occurrences of f is equal to the number of negative occurrences of f . This proposition follows immediately from the fact that every atomic formula is the conclusion of an axiom link, where each axiom link has one positive and one negative occurrence of a formula f as its conclusion.
Proposition 9 Every proof frame F has O(a!) axiom linkings which produce a proof structure, where a is the maximum number of positive and negative occurrences of an atomic formula in F .
If we have a positive atomic formulas, we have a possibilities for the first one, since all negative formulas may be selected, followed by a -1 for the second etc. giving us a! possibilities.
Proposition 10 Every proof frame F has O(4^a) planar axiom linkings which produce a proof structure, where a is the maximum number of positive and negative occurrences of an atomic formula in F. This follows from the fact that a planar axiom linking is simply a binary bracketing of the atomic formulas and the fact that there are C_{a-1} such bracketings, where C_k, the kth Catalan number, approaches 4^k / (√π k^{3/2}).
Proposition 11 For every partial proof structure with a atomic formulas which are not the conclusion of any axiom link there are O(a 2 ) possible axiom links.
Given that every positive atomic formula can be linked to every negative atomic formula of the same atomic type this gives us a 2 pairs.
Graph Size
Proposition 12 For every proof structure S with h negative conclusions, 1 positive conclusion, p par links and t tensor links, the following equation holds.
p + h = t + 1 = a
Given Proposition 8, the number of positive and negative atomic formulas is both a. Suppose we want to construct a proof structure S with h negative conclusions and 1 positive conclusion from these atomic formulas. When we look at the links in Table 2 we see that all par links reduce the number of negative conclusions by 1 and all tensor links reduce the number of positive conclusions by 1.
Proposition 13 Every essential net
E has v = h+1+2(t+p) = O(a) vertices and 2t + p ≤ e ≤ 2(t + p) + a = O(a) edges.
This follows immediately from inspection of the links: all conclusions of the essential net (h negative and 1 positive) start out as a single vertex and every link adds two new vertices. For the edges: the minimum number is obtained when we have no axiom links and all par links are positive links for \ or / which introduce one edge, the maximum number includes a axiom links and par links which are all negative links for •.
Corollary 14 An essential net is sparse, ie. the number of edges is proportional to the number of vertices, but if we add edges for all possible axiom links it will be dense, ie. e is proportional to v 2 , Immediate from Proposition 11 and Proposition 13.
Acyclicity
We begin by investigating the acyclicity condition, condition (1) from Definition 6, which is the easiest to verify.
In order to select the axiomatic formula which is most constrained with respect to the acyclicity condition we can simply enumerate all a^2 possible axiom links and reject those which produce a cycle. We can easily verify whether a graph contains a cycle in time proportional to the representation of the graph, v + e, using either breadth-first search or depth-first search (e.g. Cormen et al., 1990, Section 23.2 and 23.3 respectively), giving us an O(a^2 (v + e)) = O(a^3) algorithm for verifying all pairs. However, this means we will visit the vertices and edges of the graph many times. It is therefore a practical improvement to compute the transitive closure of the graph in advance, after which we can perform the acyclicity queries in constant time.
In this paper we will use the Floyd-Warshall algorithm [START_REF] Cormen | Introduction to Algorithms[END_REF] for computing the transitive closure, which computes the transitive closure of a directed graph in O(v 3 ) time. Though there are algorithms which perform asymptotically better for sparse graphs, it is hard to beat this algorithm in practice even for sparse graphs because of the small constants involved, while for dense graphs, which we will consider in the next section when we take all possible axiom links into account, it is the algorithm of choice [START_REF] Sedgewick | Algorithms in C: Graph Algorithms[END_REF].
The Floyd-Warshall algorithm is based on successively eliminating the intermediate vertices c from every path from a to b. Given a vertex c and the paths a → c → b for all a and b we create a direct path a → b if it didn't exist before. That is to say, there is a path from a to b if either there is a path from a to c and from c to b or if there is a path from a to b which we already knew about (Figure 5).
path(a, b) := path(a, b) ∨ (path(a, c) ∧ path(c, b)) (1)
After eliminating c, for every path in the original graph which passed through c there is now a shortcut which bypasses c. After we have created such shortcuts for all vertices in the graph it is clear that the resulting graph has an edge a → b iff there is a path from a to b in the original graph. Figure 6 shows the essential net of Figure 4 in adjacency matrix representation (left of the figure) and its transitive closure (right of the figure). A square in the matrix is colored in iff there is a link from the row to the column in the graph.
The relevant part of the graph for the acyclicity test is marked by a square around columns 1 -3 of row 4 -6. We see here, for example, that given the existence of a path 4 → 1 an axiom link between s 1 and s 4 would produce a cycle (via node 10 in the original graph) and is therefore to be excluded. A similar observation can be made for s 5 and s 3 .
Connectedness
Verifying conditions (2) and (3) from Definition 6 is a bit harder. The question we want to ask about each link is: does this link contribute to a connected proof structure? Or, inversely, does excluding the other possibilities for the two atomic formulas we connect mean a connected proof structure is still possible.
To check the conditions we need to verify the following:
(1) for every negative input of the net we verify there exists a path to the positive conclusion, (2) for every negative • link we verify that both paths leaving from it reach their destination, (3) for every positive / or \ link we check the existence of a path from its negative premiss to its positive conclusion continuing to the positive conclusion of the essential net.
Given that we are already computing the transitive closure of the graph for verifying acyclicity, we can exploit this by adding additional information to the matrix we use for the transitive closure. There are many ways of storing this extra information, the simplest being in the form of an ordered list of pairs. Given that, for a atomic formulas, each possible connection allows (a -1) 2 other connections (ie. it is agnostic about all possibilities not contradicting this one) but excludes 2(a -1) possibilities, it is more economic to store the connections which are excluded. For example, the ordered set associated to the edge from 1 to 4 will be {1 -5, 1 -6, 2 -4, 3 -4}, meaning "there is an edge from 1 to 4 but not to anywhere else and the only edge arriving at 4 comes from 1".
Note that in the description of the Floyd-Warshall algorithm, we made use only of the logical 'and' and 'or' operators. For ordered sets, the corresponding operations are set union and set intersection. For eliminating vertex c from a path from a to b, we first take the union of the ordered set representing the links which are not in a path from a to c with that representing the links not in a path from c to b (any vertex in either set couldn't be in a path from a via c to b). Then, we take the intersection of this set with the old set associated to the path from a to b.
path(a, b) := path(a, b) ∩ (path(a, c) ∪ path(c, b)) (2)
Note that Equation 2 is simply Equation 1 with both sides negated, the negations moved inward and set union and intersection in the place of the logical 'or' and 'and' operators.
Given that we can implement the union and intersection operations in linear time with respect to the size of the input sets, the total complexity of our algorithm becomes O(v^3 · 2(a - 1)) = O(a^4). Figure 7 shows the initial graph and its transitive closure. Every entry from the previous graph is now divided into 9 subentries - one for each possible axiom link. We read the entry for 1 - 4 as follows: the first row indicates the link between 1 and 4, and thereby the absence of a link 1 - 5 and 1 - 6, the second row indicates the possibilities for linking 2, which just excludes 2 - 4, and the third row indicates that for 3 just the 3 - 4 connection is impossible. Again, we have marked the table entries relevant for our correctness condition by drawing a black border around them.
When we look at the transitive closure, we see that, should we choose to link s 2 to s 6 , this would make it impossible to reach vertex 6 from vertices 4, 5, 9 or 13. Remark also that cycles need to be excluded separately. For example, the path from 5 to 1 in Figure 7 does not mean we need to exclude the s 1 -s 5 axiom link.
Figure 8 shows the proof frame of Figure 4 with all constraints taken into account. We see that, whatever choice we make for the first axiom link, all other axiom links will be fixed immediately, giving us the two proofs s 1 -s 5 , s 2 -s 4 , s 3 -s 6 (ie. the proof shown in Figure 3) and s 1 -s 6 , s 2 -s 5 , s 3 -s 4 .
Extensions and Improvements
An interesting continuation of the themes explored in this paper would be to look at dynamic graph algorithms, where we maintain the transitive closure under additions and deletions of edges. This would avoid recomputing the transitive closure from scratch after every axiom link and would allow us to take advantage of the information we have already computed. [START_REF] King | A fully dynamic algorithm for maintaining the transitive closure[END_REF] propose an algorithm with O(n 2 ) update time based on keeping track of the number of paths between two vertices, which is easy enough to adapt to our current scenario in the case of acyclicity tests, though it remains unclear if it can be adopted to check for connectedness.
Another improvement would be to represent the ordered sets differently. Given that their structure is quite regular, it may be possible to improve upon linear time union and intersection. However, given that for each iteration the size of the sets either remains the same or decreases (the principal operation in Equation 2 being intersection), it remains to be seen if this will result in a practical improvement.
Finally, we can consider the work in the two previous sections as using a sort of 'lookahead' of one axiom link, which is to say we exclude all axioms links which, by themselves, would produce a cyclic or disconnected proof structure. This can be extended quite naturally to doing k axiom links at the same time, though each extra axiom link will multiply the required space by O(n 2 ), the required time for acyclicity by O(n 3 ) and the required time for connectedness by O(n 4 ) (this is relatively easy to see because we are in effect substituting n(n -1) for the old value of n).
Polynomial Time
If we add weights to the different connections, the situation changes. The simplest way to add weights to our graph is to use the distance between two atomic formulas as their weight and prefer the total axiom linking with the least total weight. This choice of assigning weights is closely related to work on left-toright processing of sentences, proposed independently by [START_REF] Johnson | Proof nets and the complexity of processing centerembedded constructions[END_REF] and [START_REF] Morrill | Incremental processing and acceptability[END_REF]. The claim they make is that the complexity of a phrase depends on the number of 'open' or unlinked axiom formulas a reader/listener will have to maintain in memory to produce a parse for this sentence.
Finding a minimum-weight solution to this problem is known as the assignment problem. [START_REF] Murty | An algorithm for ranking all the assignments in order of increasing cost[END_REF] was the first to give an algorithm for generating the assignments in order of increasing cost. His O(kn 4 ) algorithm for finding the k best assignments can be improved to O(kn 3 ), even though tests on randomly generated graphs have shown the observed complexity to be O(kn 2 ) [START_REF] Miller | Optimizing Murty's ranked assignment method[END_REF].
Because using the distance as weight tends to favor cyclic connections, it is preferable to make one pass of the algorithm described in the previous section and assign a weight of infinity to all edges which are either cyclic or disconnected. Figure 9 shows a weighted graph corresponding to an example from [START_REF] Morrill | Incremental processing and acceptability[END_REF], the sentence 'someone loves everyone', which has a preferred reading where the subject has wide scope. Of the four readings of this sentence (two if we enforce planarity) there is a preference for connecting s 1 -s 2 , s 3 -s 6 , s 5 -s 4 , with a total weight of 11, as compared to connecting s 1 -s 6 , s 3 -s 4 , s 5 -s 2 , with a total weight of 19. There is an important difference between using distance weights like we do here and keeping track of the open axiom links like Johnson and Morrill, which is that we select a linking which is best globally. It is therefore to be expected that we will make different predictions for some 'garden path' sentences (ie. sentences where a suboptimal local choice for the axiom links will be made).
Finding an appropriate value of k and fragments of type-logical grammar for which this k is guaranteed to find all readings remains an interesting open question. Cautious people might select k = n!, given that in any type-logical grammar there are at most n! links possible, to generate all readings in increasing order of complexity. It seems tempting to set k to n 3 , because many interesting grammar formalisms have O(n 6 ) complexity [START_REF] Joshi | The convergence of mildly contextsensitive grammar formalisms[END_REF], and find type-logical fragments for which we can show proof search using this strategy is complete.
Conclusion
We have seen how standard graph algorithms can be modified to aid in proof search for type-logical grammars by rejecting connections which can never contribute to a successful proof.
We have also seen how weighing the connections allows us to enumerate the links in increasing order and linked this with processing claims.
[Figure captions]
Fig. 1. Example sequent derivation.
Fig. 3. Example proof net.
Fig. 5. Eliminating node c from the path from a to b.
Fig. 6. The essential net of Figure 4 in adjacency matrix representation (left) and its transitive closure (right).
Fig. 7. Initial graph (left) and its transitive closure (right).
Fig. 8. Essential net with acyclicity and connectedness taken into account.
Fig. 9. Minimum weight linking for 'someone loves everyone'.
|
00411110 | en | [
"info.info-ds"
] | 2024/03/04 16:41:24 | 2009 | https://hal.science/hal-00411110/file/box_operator.pdf | Olivier Roussel
email: [email protected]
Michèle Soria
email: [email protected]
Boltzmann sampling of ordered structures
Boltzmann models from statistical physics combined with methods from analytic combinatorics give rise to efficient algorithms for the random generation of combinatorial objects. This paper proposes an efficient sampler which satisfies the Boltzmann model principle for ordered structures. This goal is achieved using a special operator, named the box operator. Under an abstract real-arithmetic computation model, our algorithm is of linear complexity for free generation; and for many classical structures, of linear complexity also provided a small tolerance is allowed on the size of the object drawn. The resulting programs make it possible to generate random objects of sizes up to 10^7 on a standard machine.
Introduction
In 2004, Duchon, Flajolet, Louchard and Schaeffer [START_REF] Duchon | Boltzmann samplers for the random generation of combinatorial structures[END_REF] proposed a new model, called Boltzmann model, which leads to systematically construct samplers for random generation of objects in combinatorial classes described by specification systems. This framework has two main features: uniformity (two objects of the same size will have equal chances of being drawn) and quasi-linear time complexity.
Boltzmann samplers depend on a real parameter x and generate an object a in a given combinatorial specifiable class A with a probability essentially proportional to x^|a| where |a| is the size of a. Hence they draw uniformly in the class A_n of all the objects of size n in A. The size of the output is a random variable, and the parameter x can be efficiently tuned for a targeted mean value. Moreover, using rejection, one can obtain exact-size or approximate-size samplers. Efficient generation of huge objects makes it possible to address problems of testing and benchmarking. This new approach differs from the "recursive method" introduced by Nijenhuis and Wilf [START_REF] Nijenhuis | Combinatorial algorithms[END_REF], bringing the possibility of relaxing the constraint of an exact size for the output. This implies a significant gain in complexity: no preprocessing phase is needed and expected time complexity is linear in the size of the output.
Boltzmann samplers have been developed for a whole set of combinatorial classes, labelled or unlabelled. Basically, these are classes defined from basic elements by means of fundamental constructions such as cartesian products, disjoint unions, sequences, sets and cycles, these operators being well known in combinatorics.
In this paper, we focus on a particular operator, introduced by Greene [START_REF] Greene | Labelled formal languages and their uses[END_REF], which allows us to construct objects with internal order constraints. We extend the Boltzmann model to efficiently generate these objects. The main idea is to slightly change the value of the x parameter along the execution of the algorithm. This simple idea leads to Boltzmann samplers for objects such as alternating permutations (which we could not handle before) and more generally any variety of increasing tree [START_REF] Bergeron | Varieties of increasing trees[END_REF]. This also leads to a new point of view for random sampling of combinatorial objects built upon Cycle, Set or Sequence, as these constructions can be rephrased using ordered structures [START_REF] Greene | Labelled formal languages and their uses[END_REF][START_REF] Flajolet | Analytic combinatorics[END_REF]. Nonetheless, there are still some points to be addressed in order to make this method fully effective: we have to evaluate the generating series of our objects at some arbitrary value, these generating functions may not exist in closed form, and we may have to solve differential equations.
As an example, we will focus on alternating permutations. All the proofs are available in the appendix.
Boltzmann model for labelled structures
Definition A combinatorial class A is a denumerable (or finite) set, with a size function | • | : A → N, and such that there are only finitely many objects of each size.
In the following, we will use these notations: if A is a class, then for any object α ∈ A, |α| is its size. Furthermore
A n = {α ∈ A | |α| = n} and a n = Card(A n ).
There exist two kinds of combinatorial objects: unlabelled and labelled ones. Basically, a labelled object of size n can be seen as an unlabelled object of the same size, in which each atomic part is numbered by a number in {1, . . . , n}. In other words, a labelled object of size n is simply an unlabelled one of the same size paired with a permutation in S_n.
In all the following, we will only consider labelled objects.
Definition Let A be a (labelled) combinatorial class. We define the exponential generating function defined by
A(z) = Σ_{α∈A} z^|α| / |α|! = Σ_{n∈N} a_n z^n / n!
Now, in order to construct some objects, we need a set of rules. Precisely, we will have some basic objects, called atoms, and a set of operators which allow us to build big objects from smaller ones. A few of these operators are presented in figure 1 or more completely in [START_REF] Flajolet | Analytic combinatorics[END_REF].
Definition A Boltzmann sampler ΓC(x) for a (labelled) combinatorial class C is a random generator such that the probability of drawing a given object γ ∈ C of size n is exactly:
P_x(γ) = (1 / C(x)) · x^|γ| / |γ|! = (1 / C(x)) · x^n / n!
where C(x) is obviously the (exponential) generating function of C.
[Figure 1: Boltzmann samplers for the basic constructions. ε (empty class, EGF 1): return ε. Z (atomic class, EGF z): return ➀. B × C (cartesian product, EGF B(z) C(z)): return (ΓB(x), ΓC(x)). B + C (disjoint union, EGF B(z) + C(z)): if Bernoulli(B(x) / (B(x) + C(x))) then return ΓB(x) else return ΓC(x). Seq(B) (sequence, EGF 1 / (1 - B(z))): return a Geometric(B(x)) number of independent calls to ΓB(x).]
Let's notice that this sampler depends on a (real) parameter x, which can be tuned to aim at a given expected size for the output of the algorithm. To be precise, if we denote by N the size of the output, one can solve the equation
E_x(N) = x C'(x) / C(x)
in order to fix the value of x. We could also compute then
V_x(N) = x^2 C''(x) / C(x) + (x C'(x) / C(x)) (1 - x C'(x) / C(x)).
Boltzmann model is a very useful tool to efficiently generate combinatorial structures [START_REF] Duchon | Boltzmann samplers for the random generation of combinatorial structures[END_REF][START_REF] Flajolet | Boltzmann sampling of unlabelled structures[END_REF]. In particular, it is possible to automatically build a sampler according to the specification of a combinatorial class, following recursively the rules described in figure 1 or in [START_REF] Duchon | Boltzmann samplers for the random generation of combinatorial structures[END_REF].
Extension to ordered structures
We introduce the central object of this paper: the box operator, for which we extend the Boltzmann formalism.
The main idea of this paper consists of changing the value of the x parameter of the sampler ΓA during the recursive calls. Indeed, one can notice that, in the classical Boltzmann sampler (see figure 1), this value is fixed at the beginning and stays constant during the execution of the algorithm. Here, we will change this value during the execution according to a given probability density.
Definition The box product of two labelled classes B and C, noted B □× C, is the subset of B × C such that the least label is in the left component of the pair, on the B part.
Furthermore, we need to ensure that B 0 = 0, i.e. the class B has no empty objects.
We have to highlight the fact that this operator is a binary one, and not a unary one! Indeed, the real operation is not the box applied to a single class, but rather the binary product • □× •.
Proposition 2.1. The generating function of A = B □× C is A(z) = ∫_0^z B'(t) C(t) dt.
Boltzmann sampler for the box operator
As said before, we want to change the value of the parameter x of the sampler according to a given probability law.
Proposition 2.2. If 0 < x < ρ_A, then δ_x^A(t) = x B'(tx) C(tx) / A(x)
is a probability density over [0, 1]. Now, we can almost explicitly present our algorithm for generating A = B □× C following Boltzmann's ideas. We will need just one last notion: the derivative class.
Definition
The derivative class of a given combinatorial class B, written B′, is the set of objects of B in which an atom has been substituted by a hole, by a reservation. In other words, one atom (of size 1 by definition) has been replaced by a distinguished atom, written ⊙, whose size is 0.
We will explain later how to compute this differentiated class. The proof just consists in checking that we can get all the desired objects, only these ones, and each one with the correct probability.
Nonetheless, we still have several points to address before being able to really implement this algorithm: we have to define the differentiated class B′, and explain how to draw according to the probability density δ_x.
Theoretical complexity
The free generation of objects (with no constraint on the size) works according to the following algorithm:
1. First, draw a random number according to δ_x
2. Then generate two sub-objects
3. Eventually re-label the final object
At this point, we first investigate the complexity of this algorithm related to the size of the generated object. Let us assume that the first step is in O(1). We will see later that this assumption is indeed correct, even if the hidden constant can be quite big. Moreover, we will suppose that the third step is also in O(1).
Then, according to the algorithm, to get an object of size n, we have to get two sub-objects of sizes k and n -k. From this, we check that the complexity of our algorithm is O(n): it's the same complexity as for the previously known Boltzmann samplers.
In a majority of real-world usages of random sampling, we just care about the shape, the form of the skeleton of the huge objects we get. The exact position of the labels is not as important as the structure of the objects we get.
In addition to this complexity of the free generator, one can estimate the complexity of the approximate-size generator. First, let us recall that approximate-size generation means that we want to get an object of size in
[(1 -ε)n, (1 + ε)n]
In the original paper [START_REF] Duchon | Boltzmann samplers for the random generation of combinatorial structures[END_REF], it is proved that, under some conditions on the analytical nature of the generating function, we only need a constant number of trials for approximate-size sampling. Here, these conditions are often true, and the generation stays linear.
Proposition 2.4. For any combinatorial structure whose generating function ∼ C (1 - x)^{-α} log^β (1 / (1 - x)), using a simple rejection method, the number of trials is constant for a given precision.
As a majority of "interesting" classes have this property, it means that we can quite often ensure a linear complexity even for approximate-size generation.
For example, in our particular case of alternating permutations, if ε = 10%, then the average number of trials is less than 7.
Example: Alternating permutations
Alternating permutations (also known as increasing proper binary trees) are defined by the specification T = Z + Z □× (T × T). From this one we can get
T(z) = z + ∫_0^z Z'(u) T^2(u) du, and therefore T(z) = tan(z) with ρ_T = π/2. To lighten notation, let us note A = Z □× T^2 and A(x) = tan(x) - x.
From there, one can write:
δ_x^A(t) = x Z'(tx) T^2(tx) / A(x) = x tan^2(tx) / (tan(x) - x)
Hence, a Boltzmann sampler ΓT(x), for 0 ≤ x < ρ_T = π/2, is the following:
Algorithm 2 Boltzmann sampler for T
Input: x ∈ R. Output: an object of T, meaning an alternating permutation. Require: 0 ≤ x < π/2.
1: Draw U uniformly over [0, 1]
2: if U ≤ x / T(x) = x / tan(x) then
3:   T ← Leaf ➀
4: else
5:   Draw α over [0, 1] according to δ_x^A
6:   T ← Node ⊙ (ΓT(αx), ΓT(αx))
7: end if
8: return T with right labels
One can compute the expected size of the output:
E_x^T(N) = x T'(x) / T(x) = x (1 + tan^2(x)) / tan(x) = 2x / sin(2x) ∼ 1 / (1 - x / (π/2))   as x → π/2
More precisely, one can compute the distribution of the sizes of the output. We have P_x^T(N = 2n) = 0 and a simple computation leads to
P_x^T(N = 2n+1) = T_{2n+1} x^{2n+1} / (T(x) (2n+1)!) ∼ (4 / (π tan x)) (x / (π/2))^{2n+1}   as n → ∞.
This probability asymptotically follows a geometric distribution of parameter x / (π/2). This behaviour really differs from the proper binary tree case.
Algorithmic issues
In order to be able to implement the previous algorithm on a computer, we first have to address a few practical points.
The B ′ combinatorial class
This class B′ intuitively corresponds to the objects of the B class in which a reservation has replaced an atom. One can check that the generating function of B′ is B′(z) = dB(z)/dz.
Proper definition and set of properties are presented in [START_REF] Bergeron | Combinatorial species and tree-like structures[END_REF].
To define it precisely, we have to introduce a new special atom, written ⊙ and such that |⊙| = 0. But be aware that ⊙ ≠ ε! This atom, simply speaking, is just a reservation for a future atom Z. It's just an empty place, but an existing and reserved one.
Then, one can recursively define our class B′ by:
ε′ = ∅
⊙′ = ∅
Z′ = ⊙
(A + B)′ = A′ + B′
(A × B)′ = A′ × B + A × B′
(Seq(A))′ = Seq(A) × A′ × Seq(A)
(Set(A))′ = A′ × Set(A)
(Cyc(A))′ = A′ × Seq(A)
(A □× B)′ = A′ × B
One can check that, by applying these rules recursively, we always obtain a specification isomorphic to our differentiated class in which no differentiated class remains. Hence, one can automatically compute a Boltzmann sampler for this isomorphic class, and so for the differentiated one. So, we are able to draw in linear time an object of this class, following Boltzmann's model.
This being said, we have to notice that the classes B and B ′ are quite close. This observation beeing done, it is really frustrating to have to unroll this whole rewriting system to be able to draw objects in the differentiated class. It might be possible to find a more efficient and satisfying way to draw into such classes.
Drawing according to δ x
The second issue is that we have to be able to draw a (real) variable according to an (almost arbitrary) distribution δ_x over [0, 1]. One can easily verify that, if x < ρ, then δ_x is a C^∞ function from [0, 1] to R+. More precisely, we know that ∀k ∈ N, ∀t ∈ [0, 1], ∂^k δ_x / ∂t^k (t) ≥ 0.
[α i-1 , α i ], . . . , [α N -1 , α N ]) with 0 = α 0 < α 1 < • • • < α i < • • • < α N = 1.
On each [α_{i-1}, α_i], let h_i ≥ δ_x be a continuous function. We choose the h_i simple enough to be able to compute and know all the values A_i = ∫_{α_{i-1}}^{α_i} h_i(u) du. These conditions on the h_i ensure that we will be able to draw a random variable according to these functions, and to correct the error induced by this approximation of δ_x.
In the following, let Area(h) = Σ_{i=1}^N A_i be the total area under all the h_i. We then use the following algorithm:

Algorithm 3 Draw a variable X according to a probability density δ_x
Input: δ_x : [0, 1] → R⁺ ; N ∈ N ; the α_i ∈ [0, 1] ; and the h_i : [α_{i-1}, α_i] → R⁺
Output: X ∈ [0, 1]
Require: δ_x is a probability density over [0, 1] ; α_0 = 0 and α_N = 1 ; and ∀i ∈ {1, ..., N}, h_i ≥ δ_x
Ensure: X ∼ δ_x
1: repeat
2:   Draw i ∈ {1, ..., N} such that P(i = k) = A_k / Area(h)
3:   Draw a random variable X according to the density h_i / A_i over [α_{i-1}, α_i]
4:   Draw a random variable Y according to the uniform law over [0, 1]
5: until Y h_i(X) ≤ δ_x(X)
6: return X

Theorem 3.1. This algorithm returns a random variable X according to the distribution δ_x, and the expected number of rounds to finish is Area(h). Indeed, if p = 1 / Area(h), then P(stop_k) = (1 - p)^(k-1) p and
E({Number of trials}) = Σ_k k P(stop_k) = 1/p = Area(h).
We can see here that the h_i functions have to be as close as possible to the density δ_x, in order to minimize the average number of trials needed to draw the variable X.
We tried two kinds of approximations for the h_i functions:
• piecewise constant functions: ∀i ∈ {1, ..., N}, ∀t ∈ [α_{i-1}, α_i], h_i(t) = δ_x(α_i)
• piecewise linear functions.
δ_x being non-decreasing, we are sure that in both cases: ∀i ∈ {1, ..., N}, ∀t ∈ [α_{i-1}, α_i], h_i(t) ≥ δ_x(t).
In practice, we can see that for our problem, piecewise constant functions appear to offer the better compromise between pre-computation time and computation time. As our distributions are "more and more increasing" over [0, 1] (i.e. all their derivatives are non-negative), we chose a geometric subdivision of the interval: α_i = 1 - 2^(-i) and α_N = 1 (see Figure 2). Experimentally, this kind of subdivision works quite well.
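As an illustration, the whole procedure (geometric subdivision, piecewise-constant majorants, Algorithm 3) can be sketched in a few lines of Python; this is our transcription, not the authors' code, and it assumes that the target density is non-decreasing on [0, 1].

import math
import random

def make_sampler(delta, N=20):
    # geometric subdivision alpha_i = 1 - 2^(-i), with alpha_N = 1
    alphas = [0.0] + [1.0 - 2.0 ** (-i) for i in range(1, N)] + [1.0]
    # piecewise-constant majorants h_i = delta(alpha_i) (right endpoint, valid since delta is non-decreasing)
    heights = [delta(alphas[i + 1]) for i in range(len(alphas) - 1)]
    areas = [h * (alphas[i + 1] - alphas[i]) for i, h in enumerate(heights)]
    total = sum(areas)                                # Area(h)

    def draw():
        while True:
            r, i = random.random() * total, 0         # pick interval i with probability A_i / Area(h)
            while r > areas[i]:
                r -= areas[i]
                i += 1
            X = random.uniform(alphas[i], alphas[i + 1])     # density h_i / A_i is uniform here
            if random.random() * heights[i] <= delta(X):     # accept with probability delta(X) / h_i(X)
                return X
    return draw

x = 1.2
delta = lambda t: x * math.tan(t * x) ** 2 / (math.tan(x) - x)
draw = make_sampler(delta)
print([round(draw(), 3) for _ in range(5)])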
Conclusion
The random generation of ordered structures is in general a very difficult problem. In this short paper, we have presented a way to efficiently generate some of these structures, using the Boltzmann model. As said before, it is possible with our algorithm for the box operator to generate objects of size up to 10^7 in reasonable time (at most about 100 seconds).
Moreover, we keep all the good properties induced by Boltzmann samplers: genericity of the algorithms, uniformity over objects of the same size, linear complexity in the size of the output as far as only the shape is concerned. The basic idea behind this extension is to slightly change the parameter during the recursive calls.
We think this idea can be extended to generate a lot of other structures and operators. For example, the shuffle operator, the Hadamard product, or almost any specification which can be described by a system of differential equations on the combinatorial classes.
Proof. By definition of δ_x^A, we can write that ∀t ∈ [0, 1], 0 ≤ δ_x^A(t), and:
∫_0^1 δ_x^A(t) dt = ∫_0^1 [x B′(tx) C(tx) / ∫_0^x B′(u) C(u) du] dt = [∫_0^x B′(z) C(z) dz] / [∫_0^x B′(u) C(u) du] = 1
Therefore, δ_x^A defines a correct probability density over [0, 1].
Theorem The algorithm described above for the combinatorial class A = B × C is a correct Boltzmann sampler.
Proof. We have to check that we can get each object in A with this algorithm, and only those ones ; and mainly that we get each object with the right probability.
The first point is easy to check, as the algorithm follows by construction the exact definition of what is a box product.
About the second point, we have to check that we get any object α ∈ A with probability P_x^A(α) = x^|α| / (A(x) |α|!). Indeed, we can recall that:
C(x) = Σ_{γ∈C} x^|γ| / |γ|!   and   B(x) = Σ_{β∈B} x^|β| / |β|!
From these equations:
B′(x) = Σ_{β∈B} x^(|β|-1) / (|β|-1)!
A(x) = ∫_0^x B′(u) C(u) du
     = ∫_0^x Σ_{β∈B} u^(|β|-1)/(|β|-1)! · Σ_{γ∈C} u^|γ|/|γ|! du
     = ∫_0^x Σ_{(β,γ)∈B×C} u^(|β|+|γ|-1) / ((|β|-1)! |γ|!) du
     = ∫_0^x Σ_{α=(β̃,γ̃)∈A} [(|β̃|-1)! |γ̃|! / (|α|-1)!] · u^(|α|-1) / ((|β̃|-1)! |γ̃|!) du
     = ∫_0^x Σ_{α∈A} u^(|α|-1) / (|α|-1)! du
A(x) = Σ_{α∈A} x^|α| / |α|!
Then we can write, assuming we have Boltzmann samplers for B′ and C, that for a given pair (β′, γ) ∈ B′ × C the probability is
P_x((β′, γ)) = [x^|β′| / (B′(x) |β′|!)] · [x^|γ| / (C(x) |γ|!)].
Hence, for a given α = (β̃, γ̃) ∈ A, we can write, for the algorithm described earlier for A, that:
P_x^A(α) = ∫_0^1 [(|β̃|-1)! |γ̃|! / (|α|-1)!] · P_{tx}((β′, γ)) · δ_x^A(t) dt
         = ∫_0^1 [(|β|-1)! |γ|! / (|α|-1)!] · [(tx)^|β′| / (B′(tx) |β′|!)] · [(tx)^|γ| / (C(tx) |γ|!)] · [x B′(tx) C(tx) / ∫_0^x B′(u) C(u) du] dt
         = [x^|α| / A(x)] · [1 / (|α|-1)!] · ∫_0^1 t^(|α|-1) dt
         = x^|α| / (A(x) |α|!)
We thus obtain a correct Boltzmann sampler for this combinatorial class.
Theorem. Algorithm 3 returns a random variable X according to the distribution δ_x, and the expected number of rounds to finish is Area(h).
Proof. It is quite simple to see that this algorithm implements the rejection method for the distribution δ_x under the curve defined by the h_i, and since h_i ≥ δ_x on each interval, the algorithm is correct.
We will now find the expected number of rounds before returning a value. Let us write stop_k for the event {the algorithm ends after exactly k rounds}, and A_i for the event {we chose the i-th interval}. First, we have to compute P(stop_1) = Σ_{i=1}^N P(stop_1 | A_i) P(A_i). But P(A_i) = A_i / Area(h) according to the algorithm. As for the conditional probability, we get:
P(stop_1 | A_i) = P(Y h_i(X) ≤ δ_x(X) | A_i)
= ∫_{α_{i-1}}^{α_i} P(Y h_i(X) ≤ δ_x(X) | X ∈ [μ, μ+dμ], A_i) P(X ∈ [μ, μ+dμ] | A_i)
= ∫_{α_{i-1}}^{α_i} P(Y h_i(X) ≤ δ_x(X) | X ∈ [μ, μ+dμ], A_i) (h_i(μ) / A_i) dμ
= ∫_{α_{i-1}}^{α_i} (δ_x(μ) / h_i(μ)) (h_i(μ) / A_i) dμ
= (1 / A_i) ∫_{α_{i-1}}^{α_i} δ_x(μ) dμ
Alternating permutations: experimental results
On a 1 GHz notebook with 2 GB of RAM, and a naive implementation in OCaml, we can reach sizes of about 10^7 in reasonable time (about 10 seconds for free generation: see Figure 4). Moreover, one can compute, for this specific class, the number of trials needed for approximate-size generation: for a given precision ε, only a small number of trials is needed on average (for instance, fewer than 7 when ε = 10%). Using this approach, we benefit from the advantages of the Boltzmann model, and can thus draw large objects in very little time. For example, the object displayed in Figure 3 can be computed in less than one millisecond on a standard computer.
Figure 1: Some classical constructors, their generating function and sampler.
Seq(B) : 1 / (1 - B(z)) : l := Geom(B(x)) ; return (ΓB(x), ..., ΓB(x)) l times
Set(B) : e^B(z) : l := Poisson(B(x)) ; return {ΓB(x), ..., ΓB(x)} l times
Cyc(B) : log(1 / (1 - B(z))) : l := Logarithmic(B(x)) ; return ⟨ΓB(x), ..., ΓB(x)⟩ l times
Algorithm 1 Boltzmann sampler ΓA for A = B × C (box product)
Input: One real number x
Output: An object of A
Require: 0 < x < ρ_A
1: Draw U in [0, 1] according to the probability density δ_x^A
2: Randomly draw an object b′ from the class B′ using ΓB′(Ux)
3: Randomly draw an object c from the class C using ΓC(Ux)
4: Let a = (b′, c), correctly labelled
5: return The object a, with the reservation ⊙ replaced by an atom Z

One should notice that the shape of this algorithm really looks like the one for the simple cartesian product B × C. In particular, the two recursive calls for B′ and C are independent and both use the same modified value of the parameter.
Theorem 2.3. The algorithm described above for the combinatorial class A = B × C is a correct Boltzmann sampler.
∀k ∈ N, ∀t ∈ [0, 1], ∂^k δ_x / ∂t^k (t) ≥ 0. In particular, the probability density is non-decreasing on its definition domain, and ∀t ∈ [0, 1], lim_{x→0} δ_x(t) = 1 and lim_{x→ρ} δ_x(t) = Dirac(t - ρ).
Figure 2: Shape of a typical δ_x distribution
Figure 3: Increasing proper binary tree of size 2033, drawn in less than 1 ms
From this point, P(stop_k) = (1 - p)^(k-1) p and E({Number of rounds}) = 1/p = Area(h).
Figure 4: Time for generating alternating permutations
Proof. Following the definition of the box product, we have
A_n = Σ_{k=1}^n (n-1 choose k-1) B_k C_{n-k}.
Indeed, we have to distribute only n - 1 labels, the least one being fixed in B. Among those, we have to choose only k - 1 for the first component, and n - k for the second one. We can then rewrite this equation, using properties of binomial coefficients and the fact that B_0 = 0, as
A_n / (n - 1)! = Σ_{k ≥ 1} [B_k / (k - 1)!] · [C_{n-k} / (n - k)!],
which is a Cauchy product. This last remark leads to the claimed formula.
Remark. Funnily, using this framework, the well-known integration by parts formula can be proven and read as: "in a pair of labelled objects, the least label is either on the first component or on the second one", which is absolutely trivial.
Proposition. If 0 < x < ρ_A, then δ_x^A(t) = x B′(tx) C(tx) / A(x) defines a correct probability density over [0, 1].
04111276 | en | [
"info.info-im",
"info.info-ti",
"sdv.mhep",
"sdv.mhep.csc"
] | 2024/03/04 16:41:24 | 2023 | https://inria.hal.science/hal-04111276/file/article.pdf | PhD Nicolas Cedilnik
email: [email protected]
PhD Jean-Marc Peyrat
Weighted tissue thickness
Keywords: imaging, thickness, infarct, fat
Measuring the thickness of a tissue can provide valuable clinical information; anatomical structures segmented on medical images can include sub-structures (inclusions) corresponding to a different biological tissue. This article presents a method, based on partial differential equations, to measure the thickness of one specific tissue in this particular configuration.
After describing the mathematical formulation of our weighted thickness definition, we show on synthetic geometries in one and two dimensions that it outputs the expected results. We then present three possible applications of our method on cardiac imaging data: measuring the muscular thickness of a ventricle with fat infiltration; measuring the thickness of an infarct scar; visualising the transmural extent of an infarct scar.
Introduction
Measuring the thickness of an anatomical structure is a common task for radiologists. More specifically, in the field of cardiology, it is often needed to measure the thickness of the myocardial muscle, as it conveys information about its health status, which has been shown to be related to its electrophysiological properties ([START_REF] Michael | Correlation between computer tomography-derived scar topography and critical ablation sites in postinfarction ventricular tachycardia[END_REF], [START_REF] Takigawa | Detailed comparison between the wall thickness and voltages in chronic myocardial infarction[END_REF]).
The visible myocardium segmented on medical images can have inclusions of non-muscle tissue such as fat infiltration, calcifications, or fibrosis. It can happen that we are interested in measuring the thickness of the actual muscle fibers, ignoring these inclusions (Fig. 1) [START_REF] Cedilnik | VT scan: Towards an efficient pipeline from computed tomography images to ventricular tachycardia ablation[END_REF]. In this article we describe a method to perform this task, and show that it can reciprocally be used to measure the thickness of the inclusion.
Methodology
Our method is an extension of the method described by Yezzi and Prince ( [START_REF] Yezzi | An eulerian PDE approach for computing tissue thickness[END_REF]).
Fig. 1. Illustration of the problem addressed by our method (R: myocardium; blood pool; inclusions). We want to measure the thickness of R, excluding the thickness of the inclusions, i.e. the length of the thickness line, excluding the dotted part. ∂_0R: inner boundary; ∂_1R: outer boundary.
Correspondence trajectories
Briefly, Yezzi and Prince define thickness at each point x of the tissue region R as the total arclength of a unique curve passing through x. This curve originates on the inner boundary of the tissue region, ∂_0R, and terminates on its outer boundary, ∂_1R (see Fig. 1).
Such curves can be obtained by solving the Laplace equation over R:
∆u = 0 (1)
with the Dirichlet boundary conditions:
u(∂ 0 R) = 0 and u(∂ 1 R) = 1 (2)
u defines a scalar field in R and is used to define curves within R with the tangent field T:
T = ∇u / ||∇u|| (3)
For didactic purposes, in Fig. 1 the curve is represented as a straight dotted line (thickness), but T actually defines curved trajectories for complex shapes of R.
Arclength computation
Yezzi and Prince ([START_REF] Yezzi | An eulerian PDE approach for computing tissue thickness[END_REF]) define the length functions L_0 and L_1, where L_0(x) gives the arclength of the correspondence trajectory between ∂_0R and x (reciprocally, L_1(x) between ∂_1R and x).
∇L_0 · T = 1, with L_0(∂_0R) = 0 (4)
-∇L_1 · T = 1, with L_1(∂_1R) = 0 (5)
Thickness at every point x is then defined by summing these length functions:
W(x) = L_0(x) + L_1(x) (6)
Weighted arclength
To account for the inclusions of tissue whose thickness we want to exclude, we propose to replace the (constant) right-hand term of eq. (4) and eq. (5) by a function f varying over the tissue region.
∇L_0 · T = f(x) (7)
-∇L_1 · T = f(x) (8)
We call f the arclength weight function; it has the following properties:
- If x is fully occupied by the tissue region we want to measure, f(x) = 1.
- If x is fully occupied by an inclusion we want to exclude from the thickness measure, f(x) = 0.
- f(x) can also take any value in the [0, 1] interval. This can be useful to account for partial volume effects by using a transfer function between the intensity (e.g., Hounsfield Units) of the medical image and f.
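For instance, such a transfer function could be as simple as the following sketch; the Hounsfield thresholds used here are purely illustrative assumptions of ours, not values given in the paper.

import numpy as np

def weight_from_hu(hu, inclusion_hu=-30.0, tissue_hu=30.0):
    # map CT intensities to the arclength weight f in [0, 1]:
    # f = 0 at/below the inclusion intensity, f = 1 at/above the tissue intensity,
    # linear ramp in between to soften partial volume effects (hypothetical thresholds)
    return np.clip((hu - inclusion_hu) / (tissue_hu - inclusion_hu), 0.0, 1.0)

print(weight_from_hu(np.array([-80.0, -30.0, 0.0, 50.0])))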
Ray-tracing as a possible alternative
Another approach to achieve comparable measurements is to use ray-tracing. In such a framework, one needs to define straight lines emanating from one surface and measure the length of the segment between the two surfaces ∂_0R and ∂_1R.
Typically, these lines would be normal to the surface of the considered anatomical structure at the point where they emanate. The measured distance (i.e., the tissue thickness) could be similarly weighted with the weight function f. However, our methodology presents several advantages over this approach:
- It operates directly on voxels and does not require defining a surface, something usually achieved by converting voxel data to triangular meshes. While there are several algorithms and tools to achieve this conversion, arbitrary choices must be made to control, among other things, the smoothness of the surface; and these choices, influencing the directions of the rays, can have drastic effects on the thickness values.
- The results with our method are thickness values for all voxels of the segmented structure, and not on a single surface. This can be leveraged for further analysis, or for parameterizing fast electrophysiological models based on rectangular-grid data ([4], [START_REF] Cedilnik | VT scan: Towards an efficient pipeline from computed tomography images to ventricular tachycardia ablation[END_REF]).
- There is no need to define out of which surface these trajectories have to be calculated. Since normals have different directions depending on whether they are defined from the inner or outer surface, ray-tracing would result in visually different thickness maps depending on the considered surface. With our method, and as a consequence of both the definition of the correspondence trajectories and the voxel nature of the output, one can see topologically similar thicknesses on the epicardial, endocardial, or even mid-wall surface.
3 Implementation
Algorithm
It has been shown ([START_REF] Yezzi | An eulerian PDE approach for computing tissue thickness[END_REF]) that computing eq. (4) and eq. (5) over R in 3D amounts to solving (equations (8) and (9) of [START_REF] Yezzi | An eulerian PDE approach for computing tissue thickness[END_REF]):
L_0[i, j, k] = ( f(x) + |T_x| L_0[i ∓ 1, j, k] + |T_y| L_0[i, j ∓ 1, k] + |T_z| L_0[i, j, k ∓ 1] ) / ( |T_x| + |T_y| + |T_z| )
L_1[i, j, k] = ( f(x) + |T_x| L_1[i ± 1, j, k] + |T_y| L_1[i, j ± 1, k] + |T_z| L_1[i, j, k ± 1] ) / ( |T_x| + |T_y| + |T_z| )
NB: in [6], f(x) is a constant equal to 1.
Numerical solving
Results can be obtained by the same methods described by Yezzi and Prince ([START_REF] Yezzi | An eulerian PDE approach for computing tissue thickness[END_REF]). Iterative approaches until convergence are possible, but a fast-marching-like algorithm is more efficient (computational resource-wise), especially in a single-threaded context, since it requires, in theory, a single pass over R. However, we want to note here that, unlike what [START_REF] Yezzi | An eulerian PDE approach for computing tissue thickness[END_REF] describe, our experiments showed that this fast-marching-like approach does not reach convergence in a single pass. We propose a combined iterative/fast-marching-like approach where the traversal order of the first pass is memorized for subsequent passes, reaching convergence more rapidly than a naive iterative approach (see the benchmark footnote below).
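To make the update formula concrete, here is a deliberately naive Python/NumPy sketch of the iterative scheme (our transcription, not the authors' implementation); it assumes the region R does not touch the grid border and that the tangent field T and the weight f are already available on the voxel grid.

import numpy as np

def solve_L(T, f, mask, inward=True, n_sweeps=50):
    # T: tangent field of shape (3, nx, ny, nz); f: arclength weight; mask: boolean region R
    # inward=True computes L0 (zero on the inner boundary), inward=False computes L1
    L = np.zeros(mask.shape)                  # voxels outside R act as zero boundary values
    sgn = 1 if inward else -1                 # L0 looks upstream along T, L1 downstream
    idx = np.argwhere(mask)
    for _ in range(n_sweeps):
        for i, j, k in idx:
            tx, ty, tz = T[:, i, j, k]
            num = f[i, j, k]
            num += abs(tx) * L[i - sgn * int(np.sign(tx)), j, k]
            num += abs(ty) * L[i, j - sgn * int(np.sign(ty)), k]
            num += abs(tz) * L[i, j, k - sgn * int(np.sign(tz))]
            L[i, j, k] = num / (abs(tx) + abs(ty) + abs(tz) + 1e-12)
    return L

# weighted thickness (eq. 6): W = solve_L(T, f, mask, True) + solve_L(T, f, mask, False)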
Results on toy geometries
One dimension. We conducted experiments in one dimension to verify that the results were those expected. We set the correspondence trajectories to straight horizontal lines and set a 4-element wide hole along these trajectories, where the weight function values ranged from 0 to 1 (Fig. 2).
¹ An informal benchmark with a real left ventricular wall geometry of 363268 voxels (voxel spacing = 0.8 mm), on an AMD Ryzen 9 3950X CPU, single-threaded, took 11.2 ± 0.2 seconds for the iterative approach vs 7.5 ± 0.2 seconds for the semi-ordered approach over 10 runs.
As expected, the resulting weighted thickness W(x) equals the number of elements n in R along the horizontal line when the weight function f(x) is set to 1 along the line, corresponding to the non-weighted thickness computation. When f(x_holes) = 0, W(x) = n - n_holes. For 0 < f(x_holes) < 1, n - n_holes < W(x) < n.
Applications
In this section we present possible applications of our method on clinical data.
Workflow
For all clinical applications, the preliminary step is a segmentation of the structure that we want to measure. For the myocardium, this is typically done by segmenting the blood pool inside the cardiac cavities to create an endocardial binary mask, and the outer layer of the myocardial wall to create an epicardial mask [START_REF] Komatsu | Regional myocardial wall thinning at multidetector computed tomography correlates to arrhythmogenic substrate in postinfarction ventricular tachycardia: Assessment of structural and electrical substrate[END_REF]. One then has to define the weight function f. The way it is defined varies depending on the application, as shown in the following sections. Our method can also be used to quantify the local thickness of an infarct scar.
To achieve this, after segmentation of the endocardial and epicardial masks, one has to segment the scar mask (how to obtain such segmentation is outside the scope of this article).
The weight function f is then defined such that f(x) = 0 in the myocardium and f(x) = 1 in the scar. This can be used to visually assess the thickness of the fibrosis, as shown in Fig. 5.
Scar transmurality
Instead of the absolute thickness of the scar, in millimeters, our method can also be used to visualise the local proportion of fibrosis in the myocardial wall, a ratio we call scar transmurality. A value of 1 would mean that an area of the myocardial wall is pure fibrosis; reciprocally, a value of 0 must be interpreted as the absence of fibrosis in this area, i.e. no ischemic scar.
To obtain such a measure, a possible approach is to:
1. Compute the non-weighted thickness of the wall, i.e., setting f(x) = 1 all over the myocardial wall.
2. Compute the weighted scar thickness, as described in the previous section.
3. Divide, voxel-wise, the scar thickness by the total wall thickness.
The transmural extent, i.e., the scar transmurality, is then defined for every voxel of the myocardial wall. See Fig. 6 for an example result.
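A minimal sketch of this ratio, assuming the two thickness maps have already been computed as NumPy arrays (hypothetical helper of ours, not code from the paper):

import numpy as np

def scar_transmurality(wall_thickness, scar_thickness, eps=1e-6):
    # voxel-wise ratio between weighted scar thickness and total wall thickness, clipped to [0, 1]
    ratio = scar_thickness / np.maximum(wall_thickness, eps)
    return np.clip(ratio, 0.0, 1.0)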
Conclusion
In this article, we defined a measure we named weighted thickness, suited to measuring the thickness of a tissue on medical images when it presents inclusions of a different tissue that we want to exclude from the measure. It does so by giving voxels different weights depending on a weight function that must be defined according to the considered application. Like other partial differential equation-based thickness measures, it outputs a scalar map of the same dimensions as the input segmentation, in which each pixel (or voxel) has a thickness value, coherent from the inner to the outer boundary of the measured structure.
The main limitation of our article is that we do not evaluate the clinical relevance of the weighted thickness measure. In particular, the transfer function described in 4.2 has been arbitrarily defined and would require tuning and validation for the implied application, i.e. defining the health status of the myocardium on CT images. Our future work will focus on such validation.
Fig. 2. Toy example in 1D. From top to bottom, each row has an inclusion with an increasing weight. From left to right: scalar field u (diffusion) from eq. (1) and tangent field T (eq. (3)); weight function f; weighted arclengths L0 and L1 (eq. (4), eq. (5)); and the resulting weighted thickness.
Fig. 3. Toy example in 2D. Refer to the legend of Fig. 2 for details.
Fig. 4. Myocardium-only thickness. From left to right: (top) CT image showing fat inclusions in the left myocardium (red arrows); (bottom) result of f(x); non-weighted thickness; weighted thickness showing more severe thinning. The thicknesses are shown both on a 2D slice (top) and projected on a surfacic mesh of the ventricular wall.
Fig. 6. Scar transmurality. From left to right: total wall thickness, scar thickness, scar transmurality. Top images are overlays of the thickness values over the original MR image; bottom images are projections of the values on a surfacic mesh.
00411146 | en | [
"info.info-pl",
"info.info-iu"
] | 2024/03/04 16:41:24 | 2009 | https://hal.science/hal-00411146/file/sipe6-bissyande.pdf | Tegawendé Bissyandé
email: [email protected]
Laurent Réveillère
email: [email protected]
David Bromberg
email: [email protected]
UbiGate: A Gateway to Transform Discovery Information into Presence Information
Keywords: UbiGate, SIP, Presence, Pervasive, SDP, Service Discovery
Pervasive computing involves various entities which need to coordinate tasks and share resources through different service discovery protocols. However, the multiplicity and the incompatibility of those protocols have made interconnectivity problematic. Moreover, most service discovery protocols require a strong participation of users to genuinely play their part. Consequently, service discovery in a pervasive environment has become a challenge that researchers as well as practicionners have tried to overcome through various approaches. Nevertheless, existing solutions mostly consist of designing new protocols which usually address specific application needs while participating in the increase of heterogeneity.
To address these problems, we present a new paradigm for service discovery involving the use of a gateway, called Ubi-Gate, and relying on SIP, a widespread signaling protocol. Centered around the notion of presence, UbiGate enables real time availability of service information while hiding the heterogeneity of underlying protocols. We have developped a prototype of UbiGate supporting service discovery protocols such as the protocol used in Bluetooth service discovery mechanism and a protocol enabling the detection mechanism of RFID. Preliminary results show that UbiGate enables new service discovery protocols, either IP or non-IP based, to be seamlessly supported with no significant overhead in discovery latency.
INTRODUCTION
Ubiquitous Computing, otherwise known as pervasive computing, refers to a computing paradigm where the user is constantly in interaction with computing devices [START_REF] Kindberg | System software for ubiquitous computing[END_REF]. As envisioned by Weiser, pervasive computing takes into account the natural human environment and allows the computers themselves to vanish into the background [START_REF] Weiser | The computer for the 21st century[END_REF]. In such environments, computers surrounding the user share communication resources and computing capacities packaged as services. The pervasive computing scheme therefore aims at enabling the use of those services in the most seamless and effortless possible way. Users collect information that describes attributes of available services using Service Discovery Protocols (SDPs). Currently, pervasive computing makes use of a wide range of SDPs [START_REF]Specification of the Bluetooth System Core version 1.0b: Part E, Service Discovery Protocol (SDP)[END_REF][START_REF] Guttman | Service Location Protocol, Version 2[END_REF][START_REF] Helal | Konark -a service discovery and delivery protocol for ad-hoc networks[END_REF][START_REF] Liang | Mobile service discovery protocol (msdp) for mobile ad-hoc networks[END_REF][START_REF] Marin-Perianu | Cluster-based service discovery for heterogeneous wireless sensor networks[END_REF][START_REF] Mian | A survey of service discovery protocols in multihop mobile ad hoc networks[END_REF][START_REF]Jini Architecture Specification version 2.0[END_REF][START_REF]Salutation Architecture Specification version 2.0c[END_REF][START_REF]UPnP Device Architecture version 1.0[END_REF]. Therefore, a user will be isolated if his device lacks the capacity to discover resources available in its vicinity because of protocol incompatibility.
Existing SDPs are either based on the Pull model or the Push model. In the Pull model, service requesters regularly send requests to discover services available in their environment. In the Push model, service providers automatically announce the services they provide by sending advertisements. However, neither model correctly address the constraints of pervasive computing. Indeed, the volatility of devices requires service requesters to reiterate their discovery requests to get up-to-date information when using the Pull Model. In the Push Model, advertisements sent by service providers are performed cyclicly. Hence, users must wait for the next advertisement to get accurate information. The dynamicity of pervasive environments suggests that users should be notified when the state of a service changes in order to provide realtime snapshots of the context in which they operate. Such a realtime service discovery can be compared to the notification of contact status in Instant Messaging (IM) clients such as Windows Live T M Messenger or Google Talk T M . In those IM clients, the user performs a realtime monitoring on the presence status of each member of its buddy list. This ability to access realtime information about a person's status, communications capabilities, and preferences as suggested by Rosenberg in [START_REF] Rosenberg | Presence: The best thing that ever happened to voice[END_REF] is relevant to virtually every means of communication.
So as to convey presence information dynamically, various instant messaging protocols are available, such as XMPP (also known as Jabber) [START_REF] Saint-Andre | Extensible Messaging and Presence Protocol (XMPP): Core. RFC 3920[END_REF], IRC [START_REF] Oikarinen | Internet Relay Chat Protocol[END_REF] and SIP/SIMPLE [START_REF] Lonnfors | Session initiation protocol (SIP) extension for partial notification of presence information[END_REF][START_REF] Roach | Session Initiation Protocol (SIP)-Specific Event Notification[END_REF][START_REF] Rosenberg | A presence event package for the session initiation protocol (SIP). RFC 3856[END_REF]. In particular, the Session Initiation Protocol, otherwise known as SIP, is widely adopted for its simplicity, portability and extensibility [START_REF] Jiang | Towards junking the PBX: Deploying IP telephony[END_REF][START_REF] Rosenberg | Presence: The best thing that ever happened to voice[END_REF].
Taking into consideration both the aforementioned challenges of Service Discovery in pervasive computing and the SIP widespread adoption, we introduce a new service discovery paradigm based on SIP.
This paper
This paper presents a new approach for service discovery based on the SIP protocol. It consists of exposing discovery information about services as presence information for users. To do so, we introduce the UbiGate gateway which is in charge of transforming discovering information into presence information. UbiGate handles all discovery tasks in the environment and delivers pertinent information to each user.
The contributions of this paper are as follows:
• We have proposed a new approach for service discovery involving the use of a gateway and relying on the widely used SIP protocol. Following our approach, we present UbiGate: a gateway to transform discovery information into presence information;
• We have developed dedicated modules for UbiGate to support RFID and Bluetooth technologies. Our preliminary experiments show that UbiGate does not introduce any significant performance penalty in service discovery while enabling support of new services to be easily added.
The rest of this paper is organized as follows. Section 2 introduces our approach and the architecture of the UbiGate gateway. Section 3 assesses our prototype implementation of UbiGate, demonstrating its benefits in a pervasive environment. In Section 4, we present other approaches that were proposed as attempts to unify the mechanisms of service discovery. Finally, Section 5 concludes and presents future work. The most important concern in service discovery within pervasive computing is the interconnectivity among communication entities. Yet, the variety of protocols, because of their incompatibility, has made it difficult for one device to keep track of the availability of existing services. The underlying challenges are to enable service discovery using a unique protocol, and to improve reactivity, considering the volatility of pervasive environments. Our approach consists of building a gateway in charge of transforming service discovery messages into presence information for end-users. This gateway, called UbiGate, exchanges information with users through the SIP protocol, while managing various native discovery protocols, as illustrated in Figure 1.
OUR APPROACH
UbiGate aims at unifying service discovery protocols so as to make them transparent for users. Besides, it is pertinent to suppose that the appearance of new protocols will facilitate the advertisement of more services that may interest users. Therefore, the infrastructure has been designed to easily welcome new protocols. In UbiGate, protocols are plugged as modules packaging their capabilities.
Practically, UbiGate enables service discovery through SIP compliant devices. Using the SIP protocol, users can address discovery requests to the gateway which will manage all tasks for information gathering. SIP is thus the pillar of our approach. So as to better understand how it sustains the infratructure, an overview of the protocol is presented.
SIP background
SIP is originally a signaling protocol for Voice over IP (VoIP) and third generation mobile phones. It is standardized by the IETF and adopted by the ITU. 1 This protocol enables creating, modifying and terminating a communication between parties. Communications include audio/video communications, games, and instant messaging.
The SIP protocol is based on a client-server model. A SIP message can be sent or received by a client. A sent message is said to be outgoing; otherwise, it is said to be incoming.
SIP is a text-based protocol similar to other well-known protocols such as HTTP2 and RTSP. 3 A SIP message begins with a line indicating whether the message is a request (including a protocol method name) or a response (including a return code). A sequence of required and optional headers follows. Finally, a SIP message includes a body containing other information relevant to the message.
Logically, SIP is composed of three main entities: a registrar server to allow users to record their current location, a proxy server in charge of dispatching SIP messages, and a user agent required on each communication device to perform all SIP-related actions.
To support mobility, a user is assigned a SIP URI (Uniform Resource Identifier), which is a symbolic address, analogous to an e-mail address. When a SIP proxy receives a message for a local URI, it asks the local registrar server to translate the URI into contact information for a specific user agent. A user must thus inform the registrar server of the user agent at which he would like to receive messages.
Following the success of SIP, the IETF has produced many specifications related to presence and instant messaging with SIP. This set of specifications is known as SIMPLE 4 and covers topics ranging from protocols for subscription and publication to presence document formats.
UbiGate architecture
The UbiGate gateway aims at meeting the challenges of service discovery in pervasive computing by leveraging the SIP protocol. The architecture of UbiGate is based upon the three major components illustrated in Figure 2. The main issue that is addressed by the UbiGate gateway is the reliability of the communication channel between UbiGate and users. Indeed, service requesters rely on Ubi-Gate to perform discovery tasks and regularly convey back to them appropriate information. So as to ensure a reliable and efficient access to information, service attributes are stored in a centralized registry managed by the gateway. Indeed, because of the dynamicity of pervasive environments, distributed storage systems are hard to deploy and maintain efficiently. Considering the capabilities of SIP, UbiGate deploys an event framework in charge of managing a SIP presence server. This Presence Manager handles the storage of services as entries, and matches requesters' subscriptions with those entries.
In order to discover services, UbiGate uses a service broker. This broker handles the native discovery protocols supported by the gateway. The Communication Manager (i.e. the service broker), is in charge of activating service discovery sessions and recovering the attributes of services that are available in the environment. This component cooperates with the previous one for service information storage.
The Gateway Manager enforces the coordination between the components. It therefore enables them to conjointly perform discovery and notification tasks. This component also provides a control interface for the administration of UbiGate, allowing users to easily add new features.
Furthermore, since our approach aims at providing an infrastructure that is as extensible as possible, UbiGate's supported discovery protocols are integrated through modules. For a given discovery protocol, the module packaging its capacities can be plugged into the gateway and removed as well, without impacting the behaviour of existing protocols. This design feature allows UbiGate to be easily integrated in the environment so as to limit the impact of changes for end-users. Typically, the gateway renders seamless the upgrades of discovery protocols as well as disruptions in environment settings. Indeed, service providers are not required to switch to or add new protocols in their service advertisement kernel. In point of fact, it is up to the gateway to adapt itself to the environment by integrating in its core new modules for supporting available discovery protocols.
Service Discovery using UbiGate
Service discovery through UbiGate can be split in two distinct activities: the discovery process activity which collects service information, and the notification activity during which information is delivered to service requesters.
UbiGate uses a Pull model for its discovery processes. Practically, all the native service discovery protocols managed by the gateway are regularly called upon to scan the environment so as to provide up-to-date information on available services. However, as far as service requesters are concerned, the discovery system is perceived as running a Push model that works in real time. Indeed, an advertisement is performed each time the context changes. This scheme allows UbiGate to unify service discovery mechanisms while improving the Push model. It is this notion of realtime notification that has encouraged the choice of SIP.
A user wishing to benefit from the capacities of UbiGate is required to register itself as a service requestor by subscribing to the presence of a service. The subscription sent to the gateway is performed by sending a SIP SUBSCRIBE request, including an URI5 that represents the targeted service. For the purpose of guaranteeing a uniform interaction with the gateway, we have introduced an ad-hoc format for encoding relevant subscription information into URIs. This format indicates the type of service, or/and the provider's name, and/or the preferences on the protocol through which the service is advertised. The service request format thus provided allows requesters to target a specific type of service on a given device while filtering the advertisement protocol. The latter option can be useful if the requester is sending requests on behalf of other entities that have strict constraints. As illustrated in the example of Figure 3, keywords such as Printer may be used so that communicating entities may understand the significance of the terms utilized.
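As an illustration of this ad-hoc URI scheme, the small Python helper below builds and parses such subscription URIs. The gateway host is taken from the example of Figure 3; the field order (service type, provider, protocol) and the helper names are assumptions of ours, not an exact specification from the paper.

def build_subscription_uri(service, provider=None, protocol=None,
                           gateway="ubigate.enseirb.fr"):
    # e.g. build_subscription_uri("Printer", protocol="Bluetooth")
    #   -> "sip:[email protected]"
    fields = [f for f in (service, provider, protocol) if f]
    return "sip:%s@%s" % (".".join(fields), gateway)

def parse_subscription_uri(uri):
    # split a subscription URI back into its dotted request fields and the gateway host
    user, _, host = uri[len("sip:"):].partition("@")
    return user.split("."), host

print(build_subscription_uri("Printer", protocol="Bluetooth"))
print(parse_subscription_uri("sip:[email protected]"))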
To discover the full range of available services in its vicinity, the client can send beforehand a SUBSCRIBE [START_REF] Roach | Session Initiation Protocol (SIP)-Specific Event Notification[END_REF] request using the joker UbiAny. The gateway will then inform the client of all service types available through a list of keywords. Using them, the user can then properly transmit its subscription to the gateway. Service attributes requested by a user are described in a standard PIDF 6 XML document attached to SIP NOTIFY [START_REF] Roach | Session Initiation Protocol (SIP)-Specific Event Notification[END_REF] messages. To enable standard, unmodified, SIP user agents such as Windows Live Messenger to correctly process such a PIDF document, we use the optional note tag recommended by the RFC [START_REF] Sugano | Presence information data format (PIDF)[END_REF] for additional information, as illustrated in Figure 4. Each of these service descriptions contains only the essential information that characterizes a service: the service type, the address of the entity offering the service, and the native protocol used by the provider to announce the service.
Thus, the compact description proposed in Figure 4 presents an OBEX Object Push service advertised by the Bluetooth Service Discovery Protocol, supported by a Bluetooth-enabled device whose MAC address is '0C:19:73:1F:1C'. This type of description is provided when requesters specify in their requests the service type and the protocol. The presence server then supplies the provider's address as the requested information.
However, it is noteworthy to mention that the flexibility of SIP can be exploited so as to deliver better descriptions of services. Indeed, besides the essential attributes listed previously, a user may be interested in knowing the name of the service provider, its state along with the possible functionalities it yields. To deliver such an extended description, we have designed a new Service Discovery Information Format that is an extension of PIDF. In this new format, instead of using the PIDF note tag, UbiGate fills a service tag with all attributes values. The new format thus allows a more enhanced description of services as illustrated in the example of Figure 5. The protocol tag in the Service Discovery Information Format contains a value that refers to the protocol used by the service provider to advertise the service. Thus, the value 'RFID' has been placed in the example of Figure 5 because an RFID tag has been attached to the printer Zeus, and it is the detection of this tag that has provided all the information. This extended description format has been designed to include all informations provided by Bluetooth Service Description Protocol as it was the experimental protocol used in UbiGate. Yet, the extensibility of the PIDF format guarantees future adaptations of the format to include any additional attribute proposed by any service discovery protocol. Moreover, the format proposed is flexible since the parser does not require all services to be completely detailed. Besides, note tags can be used when the description proposes more information that need to be conveyed to the requester.
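For concreteness, the following Python sketch assembles such an extended description with the standard xml.etree module. Only the service and protocol element names are explicitly mentioned in the text; the other tag names (type, provider, state) and the overall nesting are assumptions of ours.

import xml.etree.ElementTree as ET

def extended_service_description(service_type, provider, protocol, state="available"):
    # build a PIDF-like <service> element carrying the attributes discussed above
    service = ET.Element("service")
    ET.SubElement(service, "type").text = service_type
    ET.SubElement(service, "provider").text = provider
    ET.SubElement(service, "protocol").text = protocol
    ET.SubElement(service, "state").text = state
    return ET.tostring(service, encoding="unicode")

# example inspired by Figure 5: a printer named Zeus advertised through RFID
print(extended_service_description("Printer", "Zeus", "RFID"))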
Advanced Features. With UbiGate, a user can subscribe to any available services, without any concern for diversity of entities that provide those services, nor for the protocols they use for advertisement. However, usually, users require advanced discovery features based on service providers or more often on protocols used for service advertisement. Typically, if we consider a user that needs to discover, with his PDA, the services advertised by his desktop computer, it is important to allow him to subscribe to the computer rather than to all service types that UbiGate can find in the entire environment. Likewise, a user wishing to track all RFID tags that penetrate his environment should receive information related only to the protocol used for RFID. Thus, Ubi-Gate provides an optimisation of discovery requests. This optimisation consists of enabling a selection for the options mentionned above by including fields in subscription URIs.
ASSESSMENT
To assess our approach, we have developed a prototype implementation of UbiGate. This prototype includes about 10,500 lines of Java code, and relies on the SIP presence server 7 provided by NIST (National Institute for Standards and Technology).
For the purpose of presenting the contributions of Ubi-Gate, we have considered two case studies involving the Bluetooth technology and RFID which is increasingly adopted for tracking purpose.
First, we consider a building manager who needs to be informed in real time when items of great value cross specific gates of his building. These items have been associated with RFID tags. As an important constraint, the manager should be able to receive information on his desktop or laptop computer and any SIP compliant device. To enable this scenario, we have integrated in UbiGate support for RFID by developing a module, as described in Section 2.2. This module consists of about 200 lines of Java code. For our experiments, we used the ASPX RFID Kit (Icode 2) RW-310 with C-100 Converter.
In our second case study, we consider that a user with a Bluetooth-enabled device will likely scan the environment several times to discover available Bluetooth services. Since, inquiries in Bluetooth respect the Pull model, service discovery is no longer seamless nor effortless because it involves perpetual manipulation from users and/or permanent inquiries from devices. Using UbiGate, the user only has to subscribe to the presence of Bluetooth entities and their services. To do so, we have developed a Bluetooth module consisting of about 1,100 lines of Java code and based on the AvetanaBluetooth 8 stack.
As described in Section 2.3, creating well-formatted requests requires that users know their syntax, which can be a complex task. To overcome these limitations, we have developed a SIP User Agent that is compliant with UbiGate's new format for service description. It provides a powerful graphical interface simplifying the subscription steps. We have tested the presence server of UbiGate using SIP compliant standard clients such as SJPhone 9 and SIP Communicator 10 .
Within the same perspective, we have implemented an interface for UbiGate's administrators. They can use the interface to start pulling with a specific service discovery protocol even if there is no subscription yet. Thus, when requesters address their requests to UbiGate, the latter has already information to deliver. The interface also allows shutting down the gateway or disabling unused protocols.
Table 1 lists the sizes of the implementations of UbiGate's different components. It also includes the sizes of the two modules we developed. We now present preliminary results of a performance evaluation of our UbiGate implementation. For our experiments, the gateway has been deployed on an Intel Pentium 4 computer with a CPU frequency of 3.06 GHz and 1 MB of memory. We have also set up a native application using the Avetana stack directly. Due to specificities of the Bluetooth technology, only first inquiry sessions are relevant. Indeed, the service discovery process is then complete and the devices do not use any information from previous inquiries to accelerate information exchanges. The comparison in Table 2 shows that there is no significant overhead using UbiGate compared to a direct discovery process using the Avetana Bluetooth stack. Indeed, over the 31 seconds required for Bluetooth service discovery completion, UbiGate takes less than 1 second to reformat the discovered information and convey it through the network for display on the client side.
Table 2: Comparison of Bluetooth inquiry delays
                     Native         UbiGate
Delay, 1st inquiry   ≈ 31 seconds   ≈ 31.7 seconds
We have also deployed UbiGate on an OSGi platform in order to facilitate its deployment, migration, and the management of dependencies. For this, we have encapsulated UbiGate's components as OSGi bundles. The OSGi bundles for the modules enable dynamic upgrades of UbiGate, while presenting roughly the same code size as the non-OSGi version. Our OSGi-based implementation of UbiGate has been deployed and run successfully on the Knopflerfish 11 OSGi platform. The OSGi bundles as well as the complete Java code of the UbiGate project are available at the project web page 12.
Finally, it is noteworthy that the UbiGate infrastructure allows new service discovery protocols (such as Jini, UPnP, ...) to be easily added. Further, as outlined in Figure 6, UbiGate enables IP and non-IP applications to discover each other transparently. (11 Knopflerfish: www.knopflerfish.org ; 12 UbiGate: http://uuu.enseirb.fr/∼bissyand/UbiGate/)
RELATED WORK
Over the years many research groups and industries have focused on designing solutions to cope with the heterogeneity of service discovery protocols. Typically, providing SDP interoperability consists of enabling applications to switch their current SDPs on the fly according to their networked environment. This is made possible through the use of an intermediate representation of SDPs paradigms (i.e an intermediary protocol) in order to abstract incompatibilities among SDPs to exclusively consider their similarities [START_REF] Becker | Base: A microbroker-based middleware for pervasive computing[END_REF][START_REF] Bromberg | INDISS: Interoperable discovery system for networked services[END_REF][START_REF] Costa | A Reconfigurable Component-based Middleware for networked Embedded Systems[END_REF][START_REF] Grace | A reflective framework for discovery and interaction in heterogeneous mobile environments[END_REF][START_REF] Loureiro | A flexible middleware for service provision over heterogeneous pervasive networks[END_REF][START_REF] Raverdy | A multi-protocol approach to service discovery and access in pervasive environments[END_REF]. So far, two approaches have emerged depending on how applications are either bound or unbound to this intermediary protocol [START_REF] Bromberg | Interoperability of service discovery protocols: Transparent versus explicit approaches[END_REF]. In an explicit approach, applications need to be explicitly designed to use a specific discovery API that translates, if required, the intermediary protocol to the SDP currently used according to the networked environment [START_REF] Raverdy | A multi-protocol approach to service discovery and access in pervasive environments[END_REF]. In a transparent approach applications are unaware of the translation process [START_REF] Bromberg | INDISS: Interoperable discovery system for networked services[END_REF]. While the latter offers seamless interoperability to legacy applications, the former enables the extending existing SDPs with advanced features. Our approach combines the strengths of the current approaches to provide a new SDP paradigm based on the concept of presence in order to provide a realtime discovery as required by pervasive environments. Following the example of the Amigo service architecture [START_REF] Georgantas | The amigo service architecture for the open networked home environment[END_REF], UbiGate allows the integration of heterogeneous technologies by establishing interoperability of SDPs through different mechanisms involving the deployment of an intermediate node.
CONCLUSION
The multiplicity of discovery protocols, the capacities of pervasive computers, and the expectations of users, have contributed to the rise of new challenges for service discovery. To meet these challenges, we have proposed a new approach for service discovery involving the use of a gateway and relying on a widespread protocol, SIP.
Following our approach, we have designed the UbiGate gateway in charge of transforming discovery information into presence information. UbiGate can then process subscriptions from SIP compliant devices and manage discovery tasks using adequate native service discovery protocols. Centered around the notion of presence, UbiGate enables a realtime availability of service information while hiding the heterogeneity of underlying protocols.
The suitability of the approach for pervasive computing has been illustrated through two case studies involving the use of the Bluetooth service discovery protocol and a protocol for RFID. We have then developed dedicated modules for UbiGate, thus demonstrating the ease for adding support of new protocols.
After tackling the issues in service discovery in this paper, future work will involve service delivery using the SIP protocol. Taking advantage of the peer-to-peer model of SIP and the portability aspect of UbiGate, service delivery can be dealt with efficiently.
Figure 1: Interaction among pervasive computers, users and UbiGate
Figure 2: UbiGate architecture
sip:[email protected]
Figure 3 :
3 Figure 3: Example of subscription URI
Figure 4: Compact description of a service: <note> Bluetooth.OBEXObjectPush.0C:19:73:1F:1C </note>
Figure 5: Extract of the extended description of a service
Figure 6: UbiGate service discovery stack (example with Bluetooth SDP)
Table 1: Package size of UbiGate components and modules
Component / Module                     Package size
UbiGate - Gateway manager              4.8 Kb
UbiGate - Communication manager        25.1 Kb
UbiGate - SIP Presence Server          4.1 Mb
UbiGate - Total size                   4.13 Mb
Module - Bluetooth                     22.2 Kb
Module - RFID                          6.1 Kb
ITU: International Telecommunications Union.
HTTP: HyperText Transfer Protocol.
RTSP: Real Time Streaming Protocol.
SIMPLE: SIP for Instant Messaging and Presence Leveraging Extensions.
URI: Uniform Resource Identifier.
PIDF: Presence Information Data Format.
Acknowledgements
The authors would like to extend their gratitude to EN-SEIRB graduate students Nabila Ayadi, Jean Collas and Aurélie Goubault de Brugière who contributed major parts of the implementation of UbiGate's prototype. We further thank Dr. Julia Lawall and the anonymous reviewers for their useful comments. |
00411166 | en | [
"spi.nrj"
] | 2024/03/04 16:41:24 | 2009 | https://hal.science/hal-00411166/file/SMM19_Raulet.pdf | Marie-Ange Raulet
Benjamin Ducharne
New mathematical
come
New mathematical approach using fractional derivatives to take into account excess losses in magnetic materials
Marie-Ange Raulet, Benjamin Ducharne, Daniel Guyomar
Accurate prediction of the dynamical behaviour of magnetic circuits constitutes a fundamental problem for the design and optimization of electromagnetic devices. The representation of dynamical effects due to eddy currents with a macroscopic model [1] is well accepted and widely adopted by researchers and electrical engineers. On the other hand, macroscopic modelling of the microscopic dynamical effects due to wall motions, which induce the so-called excess losses, still attracts the interest of various researchers [2]-[6].
This paper proposes a new mathematical approach for the modelling of the dynamical microscopic effects. This approach is based on a fractional derivative formulation. This fractional term is used to describe the excess field contribution (1):
H_exc(t) = ρ · d^α B(t) / dt^α    (1)
(ρ denoting a constant coefficient and α the fractional order of derivation)
The new model is implemented and tested by considering a magnetic sample magnetized by a sine flux. An analytical study is carried out and gives results which are in a good agreement with the well known Bertotti's results related to the frequency relation of the excess energy [2].
Finally, comparisons between simulations and large number of experimental results performed on a SiFe sample magnetized by a varying frequency sine flux will be considered as validation of the new model. |
00411172 | en | [
"spi.nrj"
] | 2024/03/04 16:41:24 | 2009 | https://hal.science/hal-00411172/file/SMM19_Chailloux.pdf | Thibaut Chailloux
Laurent Morel
Fabien Sixdenier
Olivier Garrigues
Electromagnetic actuator to reduce vibration sources
à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Electromagnetic actuator to reduce vibration sources
Thibaut Chailloux * , L. Morel * , F. Sixdenier * , O. Garrigues * * Université de Lyon, F69622, France ; Université Lyon 1, F69622, France ; CNRS UMR5005 AMPERE, 43, Bd du 11 Novembre 1918, Villeurbanne, F-69622, France E-mail : [email protected] , [email protected] , [email protected] , [email protected]
In order to improve passenger comfort, a reduction of vibration sources in vehicles is being considered by the manufacturers. The vibrations can be compensated mainly by 4 ways : mechanical damping (passive system), hydraulic or pneumatic actuators, electromagnetic actuators [1], piezoelectric actuators.
As part of a study with Airbus, these solutions were compared: there is no universal solution, each should be used to compensate specific frequencies, amplitudes and forces, for a closed loop regulation.
In particular, Airbus wanted to compensate a vibration on an aircraft engine for which an electromagnetic actuator was recommended. (The specifications are : maximal force 6kN, frequency approximately 100Hz, movement amplitude 3µm, volume 10cm (length) x 10cm (width) x 20cm (height), temperature between -50 and +150°C).
Different topologies of electromagnetic actuators were compared and the chosen one was an electromagnet whose geometry has been optimized.
With regard to the 3D shape of the magnetic circuit, the choice of a soft magnetic composite was quite natural because of its electromagnetic field isotropy.
In order to simulate the global system and estimate the losses, we need an accurate model [2], able to describe dynamic hysteresis.
The aim of this study consists in finding a global behaviour model of the actuator that can give the temporal evolution of physical quantities and the different losses.
A first prototype that meets the main specifications has been built in the laboratory. The electromagnetic force, so as magnetic and electrical signals (voltage U, current I, magnetic flux !, magnetizing field H, magnetic induction B) generated by the actuator, and finally the actual losses, confirms that the soft magnetic composite is a good material candidate for this kind of actuators.
[1] Zupan et al., Actuator Classification and Selection - The Development, Advanced Engineering Materials, Vol. 4, No. 12, 2002.
[2] Sixdenier F., Raulet M.-A., Marion R., Goyet R., Clerc G., Allab F., Dynamical Models for Eddy Current in Ferromagnetic Cores Introduced in an FE-Tuned Magnetic Equivalent Circuit of an Electromagnetic Relay, IEEE Transactions on Magnetics, 44(6), 866-869, 2008.
Figure 1: General aspect of the actuator
03838806 | en | [
"spi.meca.msmeca"
] | 2024/03/04 16:41:24 | 2022 | https://hal.science/hal-03838806/file/article.pdf | Moës
M Le Cren
A Martin
P Massin
N Moës
A robust 3D crack growth method based on the eXtended Finite Element Method and the Fast Marching Method
Keywords: Level sets, Signed distance functions, Fast Marching Method, Crack propagation
Introduction
The level set method coupled with X-FEM [START_REF] Belytschko | Elastic crack growth in the finite elements with minimal remeshing[END_REF][START_REF] Moes | A finite element method for crack growth without remeshing[END_REF]) is very effective to simulate 2D [START_REF] Stolarska | Modelling crack growth by level sets in the extended finite element method[END_REF]) and 3D [START_REF] Moes | Non-planar 3D crack growth by the extended finite element and level sets -Part I: Mechanical model[END_REF][START_REF] Gravouil | Non-planar 3D crack growth by the extended finite element and level sets -Part II: Level set update[END_REF] crack growth. The initial crack is represented by two level set functions: the faces of the crack belong to the zero level set of a first level set function while the second level set function is defined in such a way the intersection of the zero level set of the two level set functions describes the crack front. The two level set functions are signed distance functions. The two level set functions are orthogonal in the sense that their gradients are orthogonal. The growth of the crack is discretized in increments. Linear elastic fracture mechanics is used to describe the displacement of a point of the crack front from its position at the beginning of the growth to its position at the end of the growth increment. At the end of each growth increment, the level set functions are updated to describe the new crack position.
The update of both level set functions is based on the level set method introduced by [START_REF] Osher | Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulations[END_REF]. The procedure proposed by [START_REF] Gravouil | Non-planar 3D crack growth by the extended finite element and level sets -Part II: Level set update[END_REF] is as follows. In a first step, called the extension step, the displacement field is first extended to a neighborhood of the zero level set. In a second step, called the update step, the Hamilton-Jacobi equation describing the evolution of each level set function is integrated. The intersection of the zero level sets of the two functions obtained after the update step describes the new position of the crack front. These functions are not expected to be true signed distance functions. The last step consists of two interlocked steps, called the reinitialization step and the orthogonalization step. The reinitialization step builds two new level set functions from the functions obtained at the end of the update step. The orthogonalization step ensures the two updated level set functions are orthogonal.
Different approaches to perform each of the four steps of the update of the two level set functions have been explored. [START_REF] Gravouil | Non-planar 3D crack growth by the extended finite element and level sets -Part II: Level set update[END_REF] proposed to integrate Hamilton-Jacobi equations to perform these four steps. [START_REF] Sukumar | Extended finite element method and fast marching method for threedimensional fatigue crack propagation[END_REF][START_REF] Sukumar | Threedimensional non-planar crack growth by a coupled extended finite element and fast marching method[END_REF] and [START_REF] Shi | Abaqus implementation of extended finite element method using a level set representation of three-dimensional fatigue crack growth and life predictions[END_REF] formulated the extension and the reinitialization steps in terms of eikonal equations and used the Fast Marching Method, introduced by [START_REF] Sethian | A marching level set method for monotonically advancing fronts[END_REF]. [START_REF] Prabel | Level set X-FEM non matching meshes: application to dynamic crack propagation in elastic-plastic media[END_REF] performed the four steps of the update of the two level set functions on a dedicated regular grid, in order to use an upwind scheme to integrate the Hamilton-Jacobi equations. [START_REF] Colombo | Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling[END_REF] introduced a geometric approach to perform the extension and update steps. [START_REF] Colombo | An implicit geometrical approach to level sets update for 3D non planar X-FEM crack propagation[END_REF] extended this geometric approach to also perform the reinitialization and orthogonalization steps, in order to obtain a more robust method.
The developments are implemented in the code aster finite element package, developed by EDF (1989-2021). Three methods are available to perform the reinitialization and orthogonalization steps. Nevertheless, none of these methods meets our need for a robust tool, able to perform the mixed-mode propagation of a crack in an industrial structure; the Brokenshire test (see [START_REF] Barr | Torsion fracture tests[END_REF]), for example, is still a challenge.
In this paper, we propose a new approach in which the geometric approach proposed by [START_REF] Colombo | An implicit geometrical approach to level sets update for 3D non planar X-FEM crack propagation[END_REF] is used for the orthogonalization step and the Fast Marching Method for the reinitialization step. Our implementation of this approach is designed for triangulated meshes. We then extend this method to all types of (linear) volume elements available in a standard finite element library.
Level set update to model the growth of a crack
The growth of the crack is discretized in growth increments. At the beginning of a growth increment k, the crack Γ^k is described by two level set functions: the normal level set function Φ_n^k and the tangential level set function Φ_t^k. The crack faces are described by the set of points:
Γ^k = {M, Φ_n^k(M) = 0} ∩ {M, Φ_t^k(M) < 0}.   (1)
The crack front Γ_0^k is discretized by linear segments (cf. Fig. 1). Let P_i be a point on the crack front. The local frame (T_i, N_i, B_i) is computed from the level set functions Φ_n^k and Φ_t^k:
B_i = ∇Φ_n^k(P_i) / ||∇Φ_n^k(P_i)||,
T_i = (∇Φ_t^k(P_i) × ∇Φ_n^k(P_i)) / ||∇Φ_t^k(P_i) × ∇Φ_n^k(P_i)||,
N_i = B_i × T_i.   (2)
The standard finite element interpolation of the level set functions is used to compute ∇Φ_n^k(P_i) and ∇Φ_t^k(P_i). The mechanical fields are computed by means of X-FEM. We typically use an energetic approach to compute the energy release rate G_i and the stress intensity factors K_I^i, K_II^i and K_III^i. G_i is then computed using the domain integral method [START_REF] Destuynder | Sur une Interprétation Mathématique de l'Intégrale de Rice en Théorie de la Rupture Fragile[END_REF][START_REF] Li | A comparison of methods for calculating energy release rates[END_REF] while the stress intensity factors are computed using interaction integrals [START_REF] Gosz | An interaction energy integral method for computation of mixed-mode stress intensity factors along non-planar crack fronts in three dimensions[END_REF]. We can also use the displacement jump extrapolation technique [START_REF] Chan | On the finite element method in linear fracture mechanics[END_REF] to compute K_I^i, K_II^i and K_III^i and apply Irwin's formula [START_REF] Irwin | Analysis of stresses and strains near the end of a crack traversing a plate[END_REF] to obtain G_i. These quantities are used to determine the crack growth size and crack growth direction.
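As an illustration, a minimal sketch of the local frame construction of eq. (2) from the interpolated gradients could read as follows (illustrative Python, not the code aster implementation):

```python
import numpy as np

def local_frame(grad_phi_n, grad_phi_t):
    """Local frame (T, N, B) of eq. (2) at a crack front point,
    from the gradients of the two level set functions at that point."""
    B = grad_phi_n / np.linalg.norm(grad_phi_n)
    T = np.cross(grad_phi_t, grad_phi_n)
    T = T / np.linalg.norm(T)
    N = np.cross(B, T)
    return T, N, B
```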
Following [START_REF] Gravouil | Non-planar 3D crack growth by the extended finite element and level sets -Part II: Level set update[END_REF] and [START_REF] Sukumar | Threedimensional non-planar crack growth by a coupled extended finite element and fast marching method[END_REF], we assume a fatigue crack growth law and we use a modified Paris' law [START_REF] Paris | A Critical Analysis of Crack Propagation Laws[END_REF], in which G plays the role of the stress intensity range ∆K. The maximal crack growth size during a growth increment ∆a max is an input parameter of the simulation. The value of ∆a max corresponds to the typical element size in the neighborhood of the crack. The crack growth size ∆a i associated to the point P i on the crack front is:
∆a_i = (G_i / G_max)^m ∆a_max,   (3)
where G max is the maximum of the energy release rate G, for all the points of the crack front, and C and m are the parameters of Paris' law.
The maximum hoop stress criterion is used to find the crack growth direction. The angle β of the crack growth direction with respect to the plane tangent to the crack is obtained by [START_REF] Erdogan | On the Crack Extension in Plates Under Plane Loading and Transverse Shear[END_REF]:
β = 2 tan⁻¹ [ (1/4) ( K_I/K_II − (K_II/|K_II|) √( (K_I/K_II)² + 8 ) ) ].   (4)
A planar crack growth corresponds to assuming β = 0. Finally, the displacement of the point P_i on the crack front is given by the vector:
∆a i = ∆a i (cos β i N i + sin β i B i ), (5)
where β i is the angle β evaluated at point P i . At the end of the growth increment, the point Q i , defined by:
OQ i = OP i + ∆a i , (6)
lies on the new crack front Γ_0^{k+1}. The problem is now to build new level set functions Φ_n^{k+1} and Φ_t^{k+1} so that the new crack front Γ_0^{k+1} can be described as the intersection of the zero level set of Φ_n^{k+1} and the zero level set of Φ_t^{k+1}.
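A minimal sketch of the advance of a single front point, combining eqs. (3), (4) and (5), is given below; the inputs are assumed to be the (smoothed) values of G and of the stress intensity factors at the point, and the names are illustrative only:

```python
import numpy as np

def front_point_advance(G, G_max, K_I, K_II, N, B, da_max, m):
    """Crack growth size (3), kink angle (4) and displacement vector (5)
    for one crack front point."""
    da = (G / G_max) ** m * da_max                      # eq. (3)
    if K_II == 0.0:
        beta = 0.0                                      # pure mode I: no kink
    else:
        r = K_I / K_II
        beta = 2.0 * np.arctan(0.25 * (r - np.sign(K_II) * np.sqrt(r ** 2 + 8.0)))  # eq. (4)
    return da * (np.cos(beta) * N + np.sin(beta) * B)   # eq. (5)
```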
PDE-based approaches: Simplex and Upwind Methods
We now expose how the extension, update, reinitialization and orthogonalization steps are implemented for Simplex and Upwind Methods. The general scheme follows the approach proposed by [START_REF] Colombo | Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling[END_REF].
Extension step
The extension step extends the displacement defined on the crack front to a neighbourhood of the crack front. This neighbourhood is the domain in which the level set functions will be updated during the update step. The simplest approach consists in updating the level set functions in the whole domain. One can also restrict the updated domain, in order to save computational time.
The first step of the extension consists in mapping each point of the domain to a point on the crack front, by projecting the point of the domain onto the crack front. This first step is always performed for all the nodes in the domain, because it enables the computation of the distance from each node of the mesh to the crack front. This distance can then be used to restrict the domain in which the level set functions will be updated to a torus surrounding the crack front, following the procedure described in [START_REF] Colombo | Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling[END_REF].
Let M be a node in the domain. The projection algorithm finds point P on the crack front Γ k 0 such that point M belongs to the plane (P, N, B). The algorithm computes point P and segment [P i P j ] to which P belongs. The displacement of point P reads:
∆a P = ∆a(cos βN + sin βB). (7)
The computation of crack growth size ∆a, angle β and local frame (T, N, B) at point P , from the data known at points P i and P j , is discussed in detail in [START_REF] Colombo | Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling[END_REF].
Displacement at point M is computed from displacement at point P . Displacement ∆a P n is associated to the displacement of the zero level set of Φ n and displacement ∆a P t is associated to the displacement of the zero level set of Φ t . Displacement at point M is decomposed as:
∆a M = ∆a M n + ∆a M t , (8)
where ∆a M n is associated to the displacement of the zero level set of Φ n and ∆a M t is associated to the displacement of the zero level set of Φ t . Displacement ∆a M n is zero where Φ k t is negative to ensure Γ k ⊂ Γ k+1 at the end of the growth increment. Displacement ∆a M n where Φ k t is positive is computed assuming ∆a n varies linearly from ∆a n = 0 where Φ k t = 0 (i.e. on the crack front Γ k 0 ) to ∆a n = ∆a sin β where Φ k t = ∆a cos β.
Update step
We use the geometrical approach proposed in [START_REF] Colombo | Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling[END_REF]. The updated level set functions at point M read:
Φ_n(M) = Φ_n^k(M) − ∆a_n^M · ∇Φ_n^k(M),
Φ_t(M) = Φ_t^k(M) − ∆a_t^M · ∇Φ_t^k(M).   (9)
Updated level set functions Φ n and Φ t are neither expected to be signed distance functions nor expected to be orthogonal.
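The update step of eq. (9) amounts, node by node, to subtracting the projection of the extension displacement onto the level set gradient; a minimal sketch, assuming nodal gradients are available, could read:

```python
import numpy as np

def update_level_sets(phi_n, phi_t, grad_phi_n, grad_phi_t, da_n, da_t):
    """Update step, eq. (9): phi_n and phi_t are nodal values (shape (n,)),
    the gradients and the extension displacements are nodal vectors (shape (n, 3))."""
    phi_n_up = phi_n - np.einsum('ij,ij->i', da_n, grad_phi_n)
    phi_t_up = phi_t - np.einsum('ij,ij->i', da_t, grad_phi_t)
    return phi_n_up, phi_t_up
```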
Reinitialization and orthogonalization steps
The reinitialization and orthogonalization steps are interlocked. Actually, in the case of a non-planar crack, it is not possible to fulfill the orthogonality and unit gradient conditions at the same time, except in the neighborhood of the crack front. We use the following procedure:
Reinitialization. The normal level set function is reinitialized as the steady-state solution of the following evolution problem:

∂Φ_n/∂τ = − (Φ_n/|Φ_n|) ( ||∇Φ_n|| − 1 ),   (10)
with initial data:
Φ_n(τ = 0) = Φ_n.   (11)
Let I n be the set of nodes belonging to cells which contain the zero level set of Φ n . Level set function Φ k+1 n is built so that the zero level set of Φ k+1 n coincides with the zero level set of Φ n . Thus, level set function Φ k+1 n at a given node of set I n is the distance from this node to the zero level set of Φ n , where Φ n is positive, and the opposite of the distance from this node to the zero level set of Φ n , where Φ n is negative. The computation of the distance from each node of I n to the zero level set of Φ n is based on a projection algorithm proposed in [START_REF] Colombo | Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling[END_REF].
Orthogonalization. A level set function Φ_t orthogonal to Φ_n^{k+1} is computed as the steady-state solution of the following evolution problem:
∂Φ_t/∂τ = − (Φ_n^{k+1}/|Φ_n^{k+1}|) ( ∇Φ_n^{k+1}/||∇Φ_n^{k+1}|| ) · ∇Φ_t,   (12)
with initial data:
Φ_t(τ = 0) = Φ_t.   (13)
We use the result of the orthogonalization step as the initial data for a final reinitialization of Φ_t (equations (14) and (15), analogous to (10) and (11)).

The difference between the Simplex Method and the Upwind Method is in the way the Hamilton-Jacobi equations (10), (12) and (14) are integrated. The Simplex Method is dedicated to unstructured meshes (triangles or tetrahedra) and implements the method proposed in [START_REF] Barth | Numerical schemes for the Hamilton-Jacobi and Level Set equations on triangulated domains[END_REF]. The Upwind Method is dedicated to regular grids (quadrangles or hexahedra) and implements the upwind scheme proposed in [START_REF] Prabel | Level set X-FEM non matching meshes: application to dynamic crack propagation in elastic-plastic media[END_REF]. Both methods use a one-step explicit time integration scheme (explicit Euler) to integrate the Hamilton-Jacobi equations. Therefore the time step must be less than a critical value in order to satisfy the CFL condition.
Geometric Method
We now expose the fully geometric method proposed by [START_REF] Colombo | An implicit geometrical approach to level sets update for 3D non planar X-FEM crack propagation[END_REF]. The extension step discussed in section 2.1.1 is first performed. Thus, point P on the crack front Γ k 0 is such that point M belongs to the plane (P, N, B) which is known. Displacement ∆a P and angle β at point P are also known.
Let Q be the point on the crack front Γ k+1 0 corresponding to point P on the crack front Γ k 0 . Since ∆a P is the displacement at point P , we have:
PQ = ∆a_P = ∆a (cos β N + sin β B).   (16)
Since β is the angle of the crack growth direction to the plane tangent to the crack, local frame (N Q , B Q ) at point Q is obtained by rotating local frame (N, B) by an angle β with respect to T. The level set values Φ k+1 n (M ) and Φ k+1 t (M ) form a cartesian coordinate system in plane (Q, N Q , B Q ) and we have:
QM = Φ_n^{k+1}(M) B_Q + Φ_t^{k+1}(M) N_Q.   (17)
The level set functions at the end of the growth increment are then computed as:
Φ_n^{k+1}(M) = QM · B_Q,   Φ_t^{k+1}(M) = QM · N_Q.   (18)
This method condenses the update, reinitialization and orthogonalization steps in one step. No property of the underlying mesh is exploited so that this method can handle all types of elements.
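A minimal sketch of the Geometric Method at a single node is given below; the rotation sign convention (N turned towards B by the angle β) is an assumption consistent with eq. (16), and the code is illustrative only:

```python
import numpy as np

def rotate_frame(N, B, beta):
    """Rotate (N, B) by beta around T to obtain (N_Q, B_Q)."""
    c, s = np.cos(beta), np.sin(beta)
    return c * N + s * B, -s * N + c * B        # N_Q, B_Q

def geometric_update(M, Q, N_Q, B_Q):
    """Eqs. (17)-(18): level set values at node M from the advanced front
    point Q and the rotated local frame (N_Q, B_Q)."""
    QM = np.asarray(M, float) - np.asarray(Q, float)
    return QM @ B_Q, QM @ N_Q                   # phi_n^{k+1}(M), phi_t^{k+1}(M)
```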
Table 1 gives a concise comparison of the methods to update the level set functions. How each method performs extension, update, reinitialization and orthogonalization steps is shown.
Numerical tests
A mode I-II crack growth
We propose to compare the results obtained using the Simplex Method, the Upwind Method and the Geometric Method on a simple case. We consider the cracked plate depicted on Fig. 2. The length of the plate is L = 8 m, its height is H = 18 m and its thickness is e = 1 m. The initial crack is inclined with respect to the loading symmetry plane (xOy) by an angle of 45°. The initial crack front is located at a quarter of the
length and at half the height. The initial crack length is a_0 = 2√2 m. The plate is submitted to a symmetric uniform traction t applied on the top face and the bottom face (see Fig. 2). Since the pre-crack is inclined with respect to the loading symmetry plane, a mode I-II crack growth is expected.

Table 2 Tabulated values of the energy release rate and the stress intensity factors.
G (J/m²)   K_I (MPa·√m)   K_II (MPa·√m)
45.9905    2.72750        1.38547
We consider two meshes. The first one is a rectangular grid with twenty-eight elements along the length, sixty elements along the height and four elements in the thickness. The second one is obtained by splitting each hexahedron of the first mesh into six tetrahedra. Both meshes share the same nodes, so that the Geometric Method gives the same results with both meshes. The Upwind Method is applied to the first mesh, made of hexahedra. The Simplex Method is applied to the second mesh, made of tetrahedra. The energy release rate and the stress intensity factors are smoothed to make them uniform along the crack front. This way, all points on the crack front travel the same distance. Each simulation consists in one growth increment and the distance travelled by each point of the crack front is ∆a_max = 0.4 m. The Paris' law exponent is m = 1.
In order to test only the level set update, we use tabulated values of the energy release rate and of the stress intensity factors (cf. Table 2). These values were obtained using the mesh made of tetrahedra, a Young's modulus equal to 205000 MPa, a Poisson's ratio equal to 0.3 and a load amplitude t = 1 MPa. The energy release rate and the stress intensity factors are computed using the energetic approach in a domain restricted to a torus surrounding the crack front of radius R S = 1.16 m, which corresponds approximately to four times the size of an element.
The domain in which the normal and tangential level set functions are updated is restricted to a torus surrounding the crack front. The radius of this torus is automatically computed by the software from a minimal radius given by the user. We chose a minimal radius R = R S +∆a max = 1.56 m. The radius of the torus computed by the software is 2.31 m.
In this test the problem does not depend on the x coordinate. Consequently, we expect the normal and tangential level set functions to be uniform along axis Ox.
Figure 3 shows the results we obtained using the Simplex Method, the Upwind Method and the Geometric Method to update the level set functions. We computed level sets of the normal and tangential level set functions and we represented the projection of these level sets onto the plane (yOz).
The projection onto the plane (yOz) of the level sets of the level set functions obtained using the Simplex Method (cf. Fig. 3(a)) and the Upwind Method (cf. Fig. 3(b)) does not result in curved lines but in surfaces. The level set functions obtained using the Simplex Method and the Upwind Method are thus not uniform along axis Ox. Conversely, the projection onto the plane (yOz) of the level sets of the level set functions obtained using the Geometric Method (cf. Fig. 3(c)) results in curved lines, so that the level set functions obtained using the Geometric Method are uniform along axis Ox.
The level sets computed from the normal level set function obtained using the Geometric Method are not parallel to each other. Actually, due to the presence of the bifurcation of the crack, level sets of the normal level set function computed using the Geometric Method are discontinuous, since the normal level set function is not updated where the initial tangential level set function is negative. The post-processing is not able to represent these discontinuities in the general case. However, it is able to represent the kink of the zero level set of the normal level set function. Taking into account this artifact, we conclude the level sets computed from the normal level set function obtained using the Geometric Method are parallel to one another. The level sets computed from the tangential level set function obtained using the Geometric Method are parallel to one another. Furthermore, the level sets of the normal level set function are orthogonal to the level sets of the tangential level set function, where the normal level set function has been updated. This analysis emphasizes the difficulty to obtain accurate results by integrating the Hamilton-Jacobi equations to perform the reinitialization and orthogonalization steps. The level set functions obtained using the Geometric Method are quite good. Nevertheless, this method faces other issues, as we shall see in the analysis of the Brokenshire test.
Brokenshire test
Brokenshire test is a torsion test first proposed by [START_REF] Barr | Torsion fracture tests[END_REF]. The initial work of Barr and Brokenshire consisted in an experimental analysis. The setup of Brokenshire test is depicted on Fig. 4.
Several authors investigated this test with various approaches. [START_REF] Jefferson | Three dimensional finite element simulations of fracture tests using the Craft concrete model[END_REF] used a plastic-damage-contact model for concrete. [START_REF] Su | Finite Element Modelling of Complex 3D Static and Dynamic Crack Propagation by Embedding Cohesive Elements in Abaqus[END_REF] used cohesive elements embedded between solid elements. Baydoun and Fries (2012) used X-FEM.

We consider an unstructured tetrahedral mesh. In order to accurately calculate the energy release rate and the stress intensity factors, the mesh is refined at each propagation step in a region surrounding the crack front. To do so we use HOMARD, an automatic mesh adaptation tool distributed along with code aster (Nicolas et al. 2016). The element size in the initial mesh is h_0 = 20 mm. HOMARD splits the edges such that the element size is divided by two in the refined region. We call HOMARD three times in a row, so that the element size is h = h_0/2³ = 2.5 mm in a torus surrounding the crack front of radius R_S = 5h = 12.5 mm. The stress intensity factors and the energy release rate are computed using the displacement jump extrapolation technique. We extract the displacement of points lying on the crack faces, within a distance 4h = 10 mm from the crack front. The maximal crack growth size during a growth increment is ∆a_max = 4h = 10 mm and the Paris' law exponent we arbitrarily chose is m = 1. The normal and tangential level set functions are updated in the whole domain.
The Simplex and Upwind Methods fail to converge during the first growth increment (the maximum number of iterations is fixed at five hundred). The Geometric Method is able to perform at least the first two growth increments. The zero level sets of the resulting normal and tangential level set functions are shown in Fig. 5.
After the second growth increment, oscillations of the zero level set of the tangential level set function are observed (cf. Fig. 5(b)). As a result, the crack front becomes discontinuous and the computation stops.
We conclude that none of the three approaches we presented so far is robust enough. We thus propose a new implementation of the Fast Marching Method and we expect it to be robust enough to perform the simulation until the total failure of the specimen is reached.

3 Fast Marching Method for crack growth

[START_REF] Sukumar | Extended finite element method and fast marching method for threedimensional fatigue crack propagation[END_REF][START_REF] Sukumar | Threedimensional non-planar crack growth by a coupled extended finite element and fast marching method[END_REF] proposed to couple the Fast Marching Method and X-FEM to simulate planar and non-planar crack growth. They used the Fast Marching Method to perform the extension and reinitialization steps. In this section, we propose a new approach in which only the reinitialization step is performed by means of the Fast Marching Method.
A level set function Φ satisfies the following eikonal equation:
||∇Φ|| = 1.   (19)
The Fast Marching Method proposes to solve this eikonal equation, assuming the values of Φ in the neighborhood of the zero level set of Φ are known. Let d = |Φ| be the distance to the zero level set of Φ. The idea of the method is to propagate the information outward from smaller values of d to larger values. To ensure a monotonic propagation of the information, we split the process in two steps:
1. The function d = Φ is computed in the domain {M : Φ(M) > 0} from known values of d in the neighborhood of the zero level set of Φ. We then take Φ = d.
2. The function d = -Φ is computed in the domain {M : Φ(M) < 0} from known values of d in the neighborhood of the zero level set of Φ. We then take Φ = -d.
Each step propagates the (positive) distance function d from smaller to larger values. The propagation of the information during the Fast Marching Method process is illustrated in Fig. 6.
In practice, the neighborhood of the zero level set of Φ refers to a set of nodes I computed as follows. We loop over the cells and identify the ones which contain the zero level set of Φ. If the sign of the level set function Φ is not the same at all the nodes of a given cell, this cell contains the zero level set of Φ and the nodes belonging to this cell are added to I.
We now expose our new approach:
1. Extension step. The extension step discussed in section 2.1.1 is performed. 2. Update step. The update step discussed in section 2.1.2 is performed. We recall the updated level set functions Φ n and Φ t obtained at the end of this step are neither expected to be signed distance functions nor expected to be orthogonal. 3. Reinitialization of the normal level set function. The normal level set function at the end of growth increment Φ k+1 n is computed as the solution of the eikonal equation:
||∇Φ_n|| = 1,   (20)
by means of the Fast Marching Method. We recall I n is the set of nodes belonging to cells which contain the zero level set of Φ n . Initial data consists in the known values of level set function Φ k+1 n at nodes of set I n . Level set function Φ k+1 n is built so that the zero level set of Φ k+1 n coincides with the zero level set of Φ n . Thus, level set function Φ k+1 n at a given node of set I n is the distance from this node to the zero level set of Φ n , where Φ n is positive, and the opposite of the distance from this node to the zero level set of Φ n , where Φ n is negative. Computation of the distance from each node of I n to the zero level set of Φ n is based on the projection algorithm proposed in [START_REF] Colombo | Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling[END_REF]. 4. Reinitialization of the tangential level set function.
The tangential level set function at the end of growth increment Φ_t^{k+1} is computed as the solution of the eikonal equation:

||∇Φ_t|| = 1.   (21)

During each growth increment, the normal level set function and the tangential level set function are reinitialized. Each time a level set function is reinitialized, the zero level set of this level set function is altered (an illustration of this phenomenon is given in [START_REF] Chopp | Some improvements of the fast marching method[END_REF]). This behavior does not depend on the method used to solve the reinitialization problem and may occur using the Fast Marching Method. The numerical tests reported in section 3.3 show that, despite the alteration of the zero level set of the level set functions induced by this phenomenon, our new approach gives acceptable results for our applications.
The Fast Marching Method travels across the nodes of the mesh and updates the level set value of each visited node. The way the Fast Marching Method selects the next node to be updated and the way this node is updated depend on the mesh structure. In the following sections, we expose an approach dedicated to unstructured grids, and we then extend this approach from tetrahedra to all types of elements.
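The new approach can be summarized by the following sketch of one growth increment, written with the building blocks passed in as callables; the names are placeholders for the steps listed above, not code aster routines:

```python
def propagate_one_increment(phi_n, phi_t, extension_step, update_step, fmm_reinit):
    """One growth increment of the proposed approach:
    - extension_step(phi_n, phi_t) -> (da_n, da_t), section 2.1.1;
    - update_step(phi, da) -> updated nodal values, eq. (9);
    - fmm_reinit(phi) -> reinitialized signed distance, eqs. (20) and (21)."""
    da_n, da_t = extension_step(phi_n, phi_t)   # 1. extension step
    phi_n_up = update_step(phi_n, da_n)         # 2. update step
    phi_t_up = update_step(phi_t, da_t)
    phi_n_new = fmm_reinit(phi_n_up)            # 3. reinitialization of phi_n
    phi_t_new = fmm_reinit(phi_t_up)            # 4. reinitialization of phi_t
    return phi_n_new, phi_t_new
```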
Fig. 7 Representation of the tetrahedron V_1 V_2 V_3 V_4.
Fast Marching Method applied to unstructured grids
We discuss in this section our implementation of the Fast Marching Method dedicated to unstructured grids, which consists in an extension to the tridimensional case of the method proposed by [START_REF] Kimmel | Computing geodesic paths on manifolds[END_REF]. The distance function d represents the distance to the zero level set of a given function Φ, which can be either the normal level set Φ k+1 n or the tangential level set Φ k+1 t . The computation is performed for a given set of nodes N , which can correspond to the domain where Φ is positive or the domain where Φ is negative.
Estimation of the distance function at a given node
We consider a non degenerate tetrahedron in an unstructured grid. The vertices of this tetrahedron are denoted V 1 , V 2 , V 3 and V 4 (see Fig. 7). The value of the distance function d at vertex V i is denoted d i . Distances
d_1 = d(V_1), d_2 = d(V_2) and d_3 = d(V_3) are known and distance d_4 = d(V_4) is unknown. The first order Taylor series expansion of distance function d around the vertex V_4, evaluated at point P, reads:

d(P) ≈ d(V_4) + V_4P · ∇d(V_4).   (22)
We assume that this approximation is exact for P = V 1 , for P = V 2 and P = V 3 . We obtain the following system of three equations:
d_1 = d_4 + V_4V_1 · ∇d(V_4),
d_2 = d_4 + V_4V_2 · ∇d(V_4),
d_3 = d_4 + V_4V_3 · ∇d(V_4).   (23)
Since the tetrahedron is not degenerate, (V_4V_1, V_4V_2, V_4V_3) is a basis of R³. Thus we can write:

∇d(V_4) = n_1 V_4V_1 + n_2 V_4V_2 + n_3 V_4V_3,   (24)

where n_1, n_2 and n_3 are the coordinates of ∇d(V_4) over (V_4V_1, V_4V_2, V_4V_3). System (23) can then be rewritten as follows:

(d_1, d_2, d_3)^T = (d_4, d_4, d_4)^T + G n,   (25)

where n = (n_1, n_2, n_3)^T and matrix G reads:

G = [ V_4V_1 · V_4V_1   V_4V_1 · V_4V_2   V_4V_1 · V_4V_3
      V_4V_2 · V_4V_1   V_4V_2 · V_4V_2   V_4V_2 · V_4V_3
      V_4V_3 · V_4V_1   V_4V_3 · V_4V_2   V_4V_3 · V_4V_3 ].
Since d is a distance function, the gradient of the distance at vertex V 4 is a unit vector. We thus have :
∇d(V 4 )•∇d(V 4 ) = 1. Matrix G represents the metric tensor in the basis (V 4 V 1 , V 4 V 2 , V 4 V 3 ).
Since the metric tensor represents the scalar product, we have:
n^T G n = ∇d(V_4) · ∇d(V_4) = 1.   (26)
Equations ( 25) and (26) represent a system of four equations involving four unknowns: distance d 4 at vertex V 4 and coordinates n 1 , n 2 and n 3 of the gradient of distance at vertex V 4 . We define two criteria to accept a solution of this system of equations:
1. the consistency condition d 4 ≥ max(d 1 , d 2 , d 3 ); 2. the monotonicity condition n 1 ≤ 0, n 2 ≤ 0 and n 3 ≤ 0.
The consistency condition states that, since we want the update to propagate the information outward from smaller values to larger values, we want d 4 to be larger than d 1 , d 2 and d 3 . The monotonicity condition states that -∇d(V 4 ) lies in the tetrahedron, i.e. -∇d(V 4 ) ∈ C, where C is the convex cone defined as follows :
C = {α 1 V 4 V 1 + α 2 V 4 V 2 + α 3 V 4 V 3 , α 1 ≥ 0, α 2 ≥ 0, α 3 ≥ 0}.
We suppose the solid angle at vertex V 4 is acute, so that we have:
V 4 V 1 • V 4 V 2 ≥ 0, V 4 V 1 • V 4 V 3 ≥ 0 and V 4 V 2 • V 4 V 3 ≥ 0.
As a consequence, if the monotonicity condition is fulfilled, each component of the vector G n is negative so that system (25) gives:
d_1 − d_4 ≤ 0,   d_2 − d_4 ≤ 0,   d_3 − d_4 ≤ 0.   (27)
The monotonicity condition then appears as a sufficient condition to fulfill the consistency condition. Nevertheless, we accept a solution of the system of equations if, and only if, these two conditions are fulfilled simultaneously.
The way to solve the system of equations depends on an a priori hypothesis on ∇d(V_4):
1. The tridimensional update corresponds to the case where -∇d(V 4 ) is supposed to lie in the interior of convex cone C.
2. The bidimensional update corresponds to the case where −∇d(V_4) is supposed to lie in a face of convex cone C.
3. The unidimensional update corresponds to the case where −∇d(V_4) is supposed to lie on an edge of convex cone C.
Tridimensional update In this case, -∇d(V 4 ) is supposed to lie in the interior of convex cone C. System (25) is rewritten as follows:
d − d_4 1 = G n,   (28)

where d = (d_1, d_2, d_3)^T and 1 = (1, 1, 1)^T. Since matrix G represents a metric tensor, G is symmetric and positive-definite. We deduce from (28):

G⁻¹ (d − d_4 1) = n,   (29)
(d − d_4 1)^T = n^T G.   (30)

We thus have:

(d − d_4 1)^T G⁻¹ (d − d_4 1) = n^T G n = 1.   (31)

Finally, d_4 is solution of the following quadratic equation:

a d_4² − 2 b d_4 + c = 0,   (32)

where:

a = 1^T G⁻¹ 1,   b = 1^T G⁻¹ d,   c = d^T G⁻¹ d − 1.   (33)
We assume the discriminant ∆ of (32) is non-negative, so that this equation admits two solutions δ_1 and δ_2, such that δ_1 ≥ δ_2. We consider only the solution d_4 = δ_1 to enforce the consistency of the update. Since d_4 is known, we can solve (28) to obtain coordinates n_1, n_2 and n_3. If distance d_4 fulfills the consistency condition and if coordinates n_1, n_2 and n_3 of the gradient of distance ∇d(V_4) fulfill the monotonicity condition, we accept distance d_4. Otherwise, we try the bidimensional update.
Bidimensional update In this case, -∇d(V 4 ) is supposed to lie in a face of convex cone C, i.e. the intersection of convex cone C and one of the faces
V 1 V 2 V 4 , V 2 V 3 V 4 or V 3 V 1 V 4 .
One of the coordinates n 1 , n 2 and n 3 is set to zero :
n 3 is set to zero if we consider face V 1 V 2 V 4 , n 1 is set to zero if we consider face V 2 V 3 V 4 and n 2 is set to zero if we consider face V 3 V 1 V 4 .
As a consequence, one of the equations of system (25) is removed. Without loss of generality, we consider face V 1 V 2 V 4 , so that n 3 is set to zero and the third equation of system (25) has to be removed. System ( 25) is rewritten as follows:
d' − d_4 1' = G' n',   (34)

where

G' = [ V_4V_1 · V_4V_1   V_4V_1 · V_4V_2
       V_4V_2 · V_4V_1   V_4V_2 · V_4V_2 ],

d' = (d_1, d_2)^T, 1' = (1, 1)^T and n' = (n_1, n_2)^T. We deduce from (34):

(d' − d_4 1')^T G'⁻¹ (d' − d_4 1') = n'^T G' n'.   (35)

Since n_3 = 0, equation (26) gives:

n'^T G' n' = n^T G n = 1.   (36)

Finally, d_4 is solution of the following quadratic equation:

a' d_4² − 2 b' d_4 + c' = 0,   (37)

where a', b' and c' are obtained by replacing 1, d, G with 1', d', G' in (33). We proceed similarly to the tridimensional update to determine distance d_4 for face V_1V_2V_4, but we also try to compute distance d_4 from faces V_2V_3V_4 and V_3V_1V_4 and we keep the smallest obtained value. If a value of distance d_4 cannot be computed from any face, we try the unidimensional update.
Unidimensional update In this case, -∇d(V 4 ) is supposed to lie on an edge of convex cone C, i.e. the intersection of convex cone C and one of the edges V 1 V 4 , V 2 V 4 or V 3 V 4 . Two of the coordinates n 1 , n 2 and n 3 are set to zero : n 2 and n 3 are set to zero if we consider edge V 1 V 4 , n 1 and n 3 are set to zero if we consider edge V 2 V 4 and n 1 and n 2 are set to zero if we consider edge V 3 V 4 . As a consequence, two of the equations of system (25) are removed.
System (25) is rewritten as follows:

d_1 − d_4 = (V_4V_1 · V_4V_1) n_1.   (38)
The gradient of distance ∇d(V 4 ) reads:
∇d(V_4) = n_1 V_4V_1.   (39)
Since ∇d(V_4) is a unit vector, the coordinate n_1 fulfilling the monotonicity condition is:

n_1 = − 1 / ||V_4V_1||.   (40)
Distance d_4 fulfilling (38) is then:

d_4 = d_1 + ||V_4V_1||.   (41)
A value of d_4 can be computed from edges V_4V_1, V_4V_2 and V_4V_3. We select the smallest distance from the three possible values:

d_4 = min { d_1 + ||V_4V_1||, d_2 + ||V_4V_2||, d_3 + ||V_4V_3|| }.   (42)
A pathological case for which the tridimensional update fails Fig. 8 illustrates a case for which the consistency condition is fulfilled but not the monotonicity condition. The coordinates of the vertices over the reference frame (O, e_x, e_y, e_z) are V_1(0, 1/2, 1), V_2(0, -1/2, 1), V_3(1, 0, 1) and V_4(-1/2, 0, 2). The distances associated with vertices V_1, V_2 and V_3 are respectively d_1 = 1, d_2 = 1 and d_3 = 1. The largest root of (32) in this case is d_4 = 2 > 1 = max(d_1, d_2, d_3), so that the consistency condition is fulfilled. The coordinates of ∇d(V_4) over the basis (V_4V_1, V_4V_2, V_4V_3) are n_1 = -3/4, n_2 = -3/4 and n_3 = 1/2. Since n_3 > 0, the monotonicity condition is not fulfilled. The tridimensional update thus fails. We now focus on face V_1V_2V_4. Fig. 9 illustrates this case. The largest root of (37) is d_4 = 1 + √5/2 > 1, so that the consistency condition is fulfilled. The coordinates of ∇d(V_4) over the basis (V_4V_1, V_4V_2, V_4V_3) are n_1 = -1/√5, n_2 = -1/√5 and n_3 = 0, so that n_1 ≤ 0, n_2 ≤ 0, n_3 ≤ 0 and the monotonicity condition is fulfilled. The bidimensional update is thus successful.
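The local update of section 3.1.1 can be condensed into a single routine, since the tridimensional, bidimensional and unidimensional cases all solve the quadratic (32)/(37) on a sub-simplex; the sketch below is illustrative (not the code aster implementation) and reproduces the value d_4 = 1 + √5/2 ≈ 2.118 of the pathological configuration of Figs. 8 and 9:

```python
import numpy as np
from itertools import combinations

def facet_update(V4, verts, dists):
    """Candidate distance at V4 from a sub-simplex (edge, face or tetrahedron)
    whose other vertices carry known distances. Returns (d4, accepted)."""
    E = np.array([np.asarray(v, float) - np.asarray(V4, float) for v in verts])
    d = np.asarray(dists, float)
    G = E @ E.T                                    # metric tensor, eqs. (25)/(34)/(38)
    ones = np.ones(len(verts))
    Ginv = np.linalg.inv(G)                        # assumes a non-degenerate sub-simplex
    a, b, c = ones @ Ginv @ ones, ones @ Ginv @ d, d @ Ginv @ d - 1.0
    disc = b * b - a * c
    if disc < 0.0:
        return np.inf, False
    d4 = (b + np.sqrt(disc)) / a                   # largest root of (32)/(37)
    n = Ginv @ (d - d4 * ones)                     # gradient coordinates, eq. (29)
    ok = d4 >= d.max() and np.all(n <= 1e-12)      # consistency + monotonicity
    return d4, bool(ok)

def local_update(V4, verts, dists):
    """Tridimensional update first, then the faces, then the edges."""
    for k in (3, 2, 1):
        candidates = [facet_update(V4, [verts[i] for i in idx], [dists[i] for i in idx])
                      for idx in combinations(range(len(verts)), k)]
        accepted = [d4 for d4, ok in candidates if ok]
        if accepted:
            return min(accepted)
    return np.inf

# Pathological configuration of Fig. 8: the tridimensional update is rejected
# (n_3 > 0) and face V1 V2 V4 gives the accepted value 1 + sqrt(5)/2.
V1, V2, V3, V4 = [0, .5, 1], [0, -.5, 1], [1, 0, 1], [-.5, 0, 2]
print(local_update(V4, [V1, V2, V3], [1.0, 1.0, 1.0]))   # ~ 2.1180
```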
Sweeping process
The initial data consists in the known values of the distance function d at nodes of a given set I. The set of nodes for which a candidate value of distance d is known is denoted K and the set of nodes with a minimal distance is denoted P. Algorithm 1 outlines the way the nodes are visited by the Fast Marching Method applied to an unstructured grid.
In Algorithm 1, D refers to the set of candidate values for distance d_j. There are three cases:
1. If distance d_j can be computed from the tetrahedron, D is a singleton.
2. If distance d_j can be computed from at least one of the faces, D contains from one to three candidate values.
3. If distance d_j must be computed from the edges, D contains three candidate values.

Traditional exposition of the Fast Marching Method involves three sets of nodes ([START_REF] Sethian | A marching level set method for monotonically advancing fronts[END_REF][START_REF] Sethian | Fast Marching Methods[END_REF]Kimmel and Sethian 1998;[START_REF] Sethian | Fast methods for the Eikonal and related Hamilton-Jacobi equations on unstructured meshes[END_REF][START_REF] Bronstein | Weighted distance maps computation on parametric threedimensional manifolds[END_REF]):
- The distance function at a node in the set of "accepted" nodes is known and frozen.
- The distance function at a node in the set of "tentative" nodes (the so-called "narrow band") is known but can be recomputed.
- The distance function at a node in the set of "distant" nodes is unknown.
Fig. 8 Illustration of a configuration for which distance d_4 cannot be computed using the tridimensional update. The monotonicity condition is not fulfilled, since -n = -∇d(V_4) does not lie in convex cone C.
Fig. 9 Illustration of the result of the bidimensional update for face V_1V_2V_4 in the configuration for which the tridimensional update fails. Distance d_4 can be computed because the monotonicity condition is fulfilled, since -n' = -∇d(V_4) lies in convex cone C.
With our notations, the set of "accepted" nodes is I ∪P, the set of "tentative" nodes is K\(I ∪ P) and the set of "distant" nodes is N \K. Our algorithm does not distinguish the update of the distance function at a "tentative" node and the computation of an initial distance function at a node going from the set of "distant" nodes to the set of "tentative" nodes. In the same way, our algorithm does not perform the initialization of the nodes in the set of "tentative" nodes in a distinct block. Actually, while the node with the smallest value for the distance in K\P belongs to I, the algorithm populates the initial set of "tentative" nodes. When the initial set of "tentative" nodes is fully populated, the node with the smallest value for the distance in K\P belongs to the set of "accepted" nodes.
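For reference, a heap-based sketch of the sweeping process is given below (the data structures and names are illustrative and differ from the code aster implementation); it reuses the local_update routine sketched in section 3.1.1:

```python
import heapq
import numpy as np

def fast_marching_sweep(nodes_to_compute, initial_values, tets_around, coords):
    """Sweeping process of Algorithm 1: initial_values maps the nodes of set I
    to their known distance, tets_around(j) yields the tetrahedra (4-tuples of
    node ids) adjacent to node j, coords[j] gives the coordinates of node j."""
    d = dict(initial_values)                      # candidate values (set K)
    frozen = set()                                # accepted nodes (I U P)
    heap = [(v, j) for j, v in d.items()]
    heapq.heapify(heap)
    while heap:
        dj, j = heapq.heappop(heap)
        if j in frozen or dj > d[j]:
            continue                              # stale heap entry
        frozen.add(j)                             # freeze the smallest tentative value
        for tet in tets_around(j):                # update the non-frozen neighbours
            for k in tet:
                if k in frozen or k not in nodes_to_compute:
                    continue
                known = [m for m in tet if m in frozen and m != k]
                cand = local_update(coords[k], [coords[m] for m in known],
                                    [d[m] for m in known])
                if cand < d.get(k, np.inf):
                    d[k] = cand
                    heapq.heappush(heap, (cand, k))
    return d
```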
3.2 Extension of the Fast Marching Method from tetrahedra to all types of elements.
The method we just exposed is designed for a tetrahedral volume mesh. The simplest way to handle any type of element is to build a tetrahedral mesh by splitting the element of the given mesh in a preprocessing step. This solution introduces a new step, a new tool and a new source of problems. We thus propose in this subsection a way to extend the method to arbitrary volume meshes.
In the algorithm, the mesh structure is used to iterate over the tetrahedra adjacent to the node to update. The idea is to build a set of tetrahedra from an arbitrary cell. The algorithm then consists in two nested loops: the outer loop iterates over the cells adjacent to the node to update, while the inner loop iterates over the tetrahedra.
We recall the method computes an updated value of the distance function at the given node, from a given tetrahedron adjacent to the node, by computing an approximation of the gradient of the distance function. We thus propose to iterate over all the tetrahedra we can build from the vertices of the cell, so as many approximated gradients of the distance function as possible are explored. Obviously, some of the tetrahedra built this way can not be used to compute an updated value of the distance function at the given node, since the value of the distance function at some vertices of the cell can be not known yet. Such tetrahedra are simply omitted, as will be explained in the following.
We consider three types of volume cells besides the tetrahedron: the pyramid, the pentahedron and the hexahedron. We now expose how we compute a set of tetrahedra from each of these cell types.
Pyramid Let {1, 2, 3, 4, 5} be the connectivity of the reference pyramid (cf. Fig. 10). It is possible to build the four following tetrahedra from the pyramid:
{1, 3, 4, 5}, {1, 2, 3, 5}, {1, 2, 4, 5}, {2, 3, 4, 5}. (43)
where {a, b, c, d} is the connectivity of the tetrahedron based on vertices a, b, c and d.
There are three different cases in which we can use the pyramid to update the value of the distance function at some vertex of the pyramid:
1. If the distance function is known at the four vertices of the base (i.e. vertices 1, 2, 3 and 4) and we want to update the value of the distance function at the apex (i.e. vertex 5), we iterate over the four tetrahedra of (43). Fig. 10(a) illustrates this case.
2. If the distance function is known at four nodes of the pyramid, including the apex, we iterate over only three tetrahedra. Fig. 10(b) illustrates the case of the update of the distance function at vertex 3 from known values of the distance function at vertices 1, 2, 4 and 5. In this case, we iterate over tetrahedra {2, 3, 4, 5}, {1, 3, 4, 5} and {1, 2, 3, 5}.
3. If the distance function is known at three vertices of the pyramid, we consider only one tetrahedron. Fig. 10(c) illustrates the case of the update of the distance function at vertex 1 or 4 from known values of the distance function at vertices 2, 3 and 5. If we want to update the distance function at vertex 1 we consider only tetrahedron {1, 2, 3, 5}, while if we want to update the distance function at vertex 4 we consider only tetrahedron {2, 3, 4, 5}.

Pentahedron Let {1, 2, 3, 4, 5, 6} be the connectivity of the reference pentahedron (cf. Fig. 11). We first remark we can build the six following pyramids from the pentahedron:
{1, 2, 3, 4, 5}, {1, 2, 3, 4, 6}, {1, 4, 5, 6, 2}, {1, 4, 5, 6, 3}, {2, 3, 5, 6, 4}, {2, 3, 5, 6, 1}.
(44)
From each of these pyramids, we can build four tetrahedra. We then obtain the twenty-four tetrahedra listed in Table 3. Each column in Table 3 refers to one of the six pyramids. In each column, the first row gives the connectivity of the pyramid, while the other rows give the connectivities of the four tetrahedra corresponding to that pyramid. Some of these twenty-four tetrahedra can be built from two pyramids. For example, we can build tetrahedron {1, 2, 4, 5} from both pyramids {1, 2, 3, 4, 5} and {1, 4, 5, 6, 2}. By removing the duplicates, we end up with the following list of twelve tetrahedra:

{1, 2, 4, 5}, {1, 2, 4, 6}, {1, 2, 5, 6}, {1, 2, 3, 5}, {1, 2, 3, 6}, {2, 4, 5, 6}, {2, 3, 4, 5}, {2, 3, 4, 6}, {1, 3, 5, 6}, {1, 3, 4, 5}, {1, 3, 4, 6}, {3, 4, 5, 6}.   (45)

Table 3 The tetrahedra obtained from the six pyramids one can build from the reference pentahedron. The duplicates are in red.

Hexahedron Let {1, 2, 3, 4, 5, 6, 7, 8} be the connectivity of the reference hexahedron. We remark we can build twelve pentahedra from the hexahedron, by splitting the faces two by two. Figure 12(a) illustrates a first way to build four pentahedra from the hexahedron, Fig. 12(b) illustrates a second one and Fig. 12(c) illustrates a third one. Since we showed a pentahedron generates twelve tetrahedra, one hundred forty-four (144) tetrahedra (including duplicates) can be built.
In practice, we obviously do not store the connectivities of all these tetrahedra. We implemented a function which returns the connectivities of the twelve tetrahedra one can build from a pentahedron. We call this function for each of the twelve pentahedra one can build from a hexahedron to dynamically build the connectivities of the one hundred forty-four tetrahedra.
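A minimal sketch of such a function is given below; it reproduces the lists (43) and (45) (the hexahedron is handled by calling it for each of its twelve pentahedra, whose exact splitting, shown in Fig. 12, is not reproduced here):

```python
def tets_from_pyramid(p):
    """The four tetrahedra of eq. (43) for a pyramid of connectivity p = (1, ..., 5)."""
    a, b, c, d, e = p
    return [(a, c, d, e), (a, b, c, e), (a, b, d, e), (b, c, d, e)]

def tets_from_pentahedron(p):
    """The twelve distinct tetrahedra of eq. (45), obtained from the six pyramids
    of eq. (44). Vertex ordering is irrelevant here, since only distances are computed."""
    a, b, c, d, e, f = p
    pyramids = [(a, b, c, d, e), (a, b, c, d, f), (a, d, e, f, b),
                (a, d, e, f, c), (b, c, e, f, d), (b, c, e, f, a)]
    tets = {tuple(sorted(t)) for pyr in pyramids for t in tets_from_pyramid(pyr)}
    return sorted(tets)

print(len(tets_from_pentahedron((1, 2, 3, 4, 5, 6))))   # -> 12
```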
Numerical tests
A mode I-II crack growth
We propose to apply the Fast Marching Method to the mode I-II crack growth problem we already investigated using the Simplex Method, the Upwind Method and the Geometric Method in section 2.3.1. The results are depicted on Fig. 13.
The results obtained using the mesh made of tetrahedra and the mesh made of hexahedra are almost identical, which validates the extension of the method from tetrahedra to other types of elements. The level sets computed from the normal level set function obtained using the Fast Marching Method are parallel to one another. We remark the level sets of the normal level set function computed using the Fast Marching Method are smooth where the level sets of the normal level set function computed using the Geometric Method are discontinuous. The level sets computed from the tangential level set function obtained using the Fast Marching Method are parallel to one another. Furthermore, the level sets of the normal level set function are orthogonal to the level sets of the tangential level set function, where the normal level set function has been updated.
We conclude that the Fast Marching Method is as robust as the Geometric Method, since both methods lead to similar results where the level sets of the normal level set function are continuous. However the Fast Marching Method has the advantage of building smoother level set functions.
Convergence study
Our new approach based on the Fast Marching Method involves approximations at each step. In order to evaluate a posteriori the convergence of our new approach (involving the four steps of the update of the two level set functions), we propose a convergence study. We consider the uniform planar (β = 0) growth of a circular planar crack. The structure occupies domain Ω = [-1, +1]×[-1, +1]×[-1/8, +1/8]. The radius of the initial crack is R = 1/2. The initial crack occupies domain Γ 0 = {M (x, y, z) ∈ Ω : x 2 + y 2 ≤ R 2 , z = 0}. Each simulation consists in three growth increments and the distance travelled by each point of the crack front during one growth increment is ∆a max = ∆R = 1/20. Paris' law exponent is m = 1. At the end of growth increment k, normal level set function Φ k n and tangential level set function Φ k t read:
Φ_n^k(M) = z,   (46)
Φ_t^k(M) = √(x² + y²) − (R + k∆R).   (47)
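For this test case, the reference solution and the error measure can be evaluated directly from eqs. (46)-(47); the sketch below assumes a nodal average for the discrete L¹ norm, the exact definition used in the study not being spelled out:

```python
import numpy as np

def exact_level_sets(coords, k, R=0.5, dR=0.05):
    """Analytical level sets (46)-(47) after k growth increments, evaluated at
    an array of nodal coordinates of shape (n_nodes, 3)."""
    x, y, z = coords.T
    return z, np.hypot(x, y) - (R + k * dR)

def l1_error(phi_num, phi_exact):
    """Discrete L1 error (nodal average, an assumption)."""
    return float(np.mean(np.abs(phi_num - phi_exact)))
```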
A tetrahedron for which the solid angle at each vertex is acute is called an acute tetrahedron. A tetrahedron which is not an acute tetrahedron is called an obtuse tetrahedron. A procedure to compute the solid angle at each vertex of a tetrahedron, based on the formula given by [START_REF] Writh | Relations between edge lengths, dihedral and solid angles in tetrahedra[END_REF], has been implemented. The characteristics of the unstructured meshes used in this convergence study are reported in Table 4. Each mesh used in the convergence study includes a small proportion of obtuse tetrahedra.
The convergence of the Fast Marching Method is ensured only if the triangulation is acute, i.e. each tetrahedron of the triangulation is acute [START_REF] Sethian | Ordered upwind methods for static hamilton-jacobi equations: Theory and algorithms[END_REF]. To handle general triangulations, one can either use a method to split the obtuse tetrahedra [START_REF] Bronstein | Weighted distance maps computation on parametric threedimensional manifolds[END_REF][START_REF] Fu | A fast iterative method for solving the eikonal equation on tetrahedral domains[END_REF] or the Ordered Upwind Methods proposed by [START_REF] Sethian | Ordered upwind methods for static hamilton-jacobi equations: Theory and algorithms[END_REF].
We want to determine if a specific treatment of the obtuse tetrahedra is needed for our applications. We thus perform two convergence studies. In the first one, each time we need to compute the absolute value of a level set function at a vertex, from a tetrahedron such that the solid angle at the vertex is obtuse, we apply the Geometric Method instead of the procedure discussed in 3.1.1 (cf. Fig. 14). In the second one, no specific treatment is applied to obtuse tetrahedra (cf. Fig. 15). With or without the treatment of obtuse tetrahedra, we obtain the same order of convergence for both level set functions. We conclude no specific treatment of obtuse tetrahedra is needed for our applications.
In our approach the Geometric Method is used to initialize the Fast Marching Method. As a consequence, we have to consider two convergence mechanisms: 1 - the convergence of the distance function at nodes used to initialize the Fast Marching Method and 2 - the convergence of the distance function at the other nodes of the domain. The first convergence mechanism characterizes the convergence of the Geometric Method while the second one characterizes the convergence of the Fast Marching Method. Since the approximation of the distance at a given node in the Fast Marching Method is based on a first order Taylor series expansion, we expect the error in terms of L¹ norm to converge at order 1, as reported by [START_REF] Fu | A fast iterative method for solving the eikonal equation on tetrahedral domains[END_REF]. Using the Geometric Method in the whole domain, the error in terms of L¹ norm converges at order 2 (cf. Fig. 16). For each growth step, the convergence order of the whole process is in between these two values (1 and 2), which seems consistent.

Fig. 16 Error in terms of L¹ norm for the tangential level set function as a function of the normalized mesh size, obtained using the Geometric Method in the whole domain (slopes: 2.07, 2.04 and 2.00 for growth steps 1, 2 and 3).
A three-point bending test
A cracked plate subjected to three-point bending is depicted in Fig. 17. The length of this beam, associated to direction x, is L = 260 mm. Its thickness, associated to direction z, is t = 10 mm and its width, associated to direction y, is W = 60 mm. The initial crack length, associated to direction y, is a = 20 mm. The initial crack is inclined with respect to plane (yOz) by an angle of 45°. We have AB = 2L_e = 240 mm. Line segments AA' and BB' correspond to the supports. The loading consists in a compressive force distribution applied on line segment CC'. This configuration has been investigated experimentally by [START_REF] Lazarus | Comparison of predictions by mode II or mode III criteria on crack front twisting in three or four point bending experiments[END_REF] and numerically by [START_REF] Citarella | Comparison of crack growth simulation by DBEM and FEM for SEN-specimen undergoing torsion or bending loading[END_REF]. It has already been used by [START_REF] Colombo | Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling[END_REF] and [START_REF] Geniaut | A simple method for crack growth in mixed mode with X-FEM[END_REF] to challenge crack growth methods available in code aster. Since the pre-crack is inclined with respect to the loading symmetry plane, the crack is subjected to a mixed mode loading condition and a twisted propagation is observed. We expose in this section the behaviour of the Fast Marching Method applied to this three-point bending test.
The displacement along direction y is prescribed to zero on both line segments AA' and BB'. The displacement along direction z is prescribed to zero at both points A and B. The displacement along direction x is prescribed to zero at point C. The resultant of the load applied on line segment CC' is F = 2.4 kN. The material is PMMA with Young's modulus E = 2800 MPa and Poisson's ratio ν = 0.38.
We consider an unstructured tetrahedral mesh. In order to accurately calculate the energy release rate and the stress intensity factors we use HOMARD to refine the mesh at each propagation step in a region surrounding the crack front. The element size in the initial mesh is h 0 = 2.5 mm. We call HOMARD five times in a row, so that the element size is h = h 0 /2 5 = 0.078125 mm in a torus surrounding the crack front of radius R S = 5h = 0.390625 mm. The energy release rate and the stress intensity factors are computed using the energetic approach in a domain restricted to this torus.
The maximal crack growth size during a growth increment is ∆a max = 4h = 0.3125 mm and the Paris' law exponent we arbitrarily chose is m = 1. The domain in which the normal and tangential level set functions are updated is restricted to a torus surrounding the crack front. We asked the radius of this torus to be greater than R = R S + ∆a max = 0.703125 mm.
At the end of the simulation, the total failure of the specimen has almost been reached (cf. Fig. 18). We compare the evolution of the crack front position during the simulation obtained using the Fast Marching Method with the evolution of the crack front position reported in [START_REF] Citarella | Comparison of crack growth simulation by DBEM and FEM for SEN-specimen undergoing torsion or bending loading[END_REF]. Fig. 19 shows the projection of the crack fronts on plane (yOz) while Fig. 20 shows the projection of the crack fronts on plane (xOz). Citarella and Buchholz extracted the crack fronts at the beginning of some growth increments. We extracted from our data the crack fronts corresponding to the ones reported by Citarella and Buchholz. Each curve is indexed by the growth increment to which it corresponds. The growth increments we extracted do not correspond to the ones reported by Citarella and Buchholz because they used a value of the maximal crack growth size during a growth increment ∆a_max = 1 mm, which corresponds to about three times the value we used. One also remarks Citarella and Buchholz used criteria to compute the crack growth direction and the growth size different from the ones we used. Nevertheless, the solution we obtained with the Fast Marching Method is, qualitatively, in good agreement with the one reported in [START_REF] Citarella | Comparison of crack growth simulation by DBEM and FEM for SEN-specimen undergoing torsion or bending loading[END_REF].
Brokenshire test
We expose in this section the behaviour of the Fast Marching Method applied to the Brokenshire test we introduced in 2.3.2. The zero level sets of the normal and tangential level set functions computed after the first two growth increments are depicted in Fig. 21.
Unlike the result we obtained using the Geometric Method, no oscillations are observed on the zero level set of the tangential level set function after the first two growth increments. The simulation went on after the second growth increment and the total failure of the specimen was reached. We conclude our implementation of the Fast Marching Method is more robust than the one of the Geometric Method.
The fracture surfaces observed experimentally and obtained by means of the Fast Marching Method are depicted in Fig. 22. They are similar. This qualitative comparison allows us to conclude our implementation of the Fast Marching Method is able to reproduce a complex crack path.

Fig. 22 Fracture surface observed experimentally (Barr and Brokenshire 1996) (a), and obtained by means of the Fast Marching Method (b).
Conclusion
We proposed in this paper a new crack growth method based on X-FEM and Fast Marching Method. We used a Fast Marching Method designed for tetrahedral volume meshes and extended to all types of linear elements (pyramids, pentahedra and hexahedra). Thus, our approach allows us to use the same mesh to solve the mechanical problem and to update the level set functions. The price to pay for this ease of use is an order one discretization scheme of the Eikonal equation used to update the level set functions. For their part, [START_REF] Sukumar | Extended finite element method and fast marching method for threedimensional fatigue crack propagation[END_REF][START_REF] Sukumar | Threedimensional non-planar crack growth by a coupled extended finite element and fast marching method[END_REF] chose to update the level set functions on an auxiliary rectangular grid to benefit from a second order scheme. We proposed a new implementation of the Fast Marching Method applied to tetrahedral volume meshes, which consists in an extension of the method proposed by [START_REF] Kimmel | Computing geodesic paths on manifolds[END_REF]. Our implementation is suitable for a finite element software. We also proposed a new presentation of the algorithm which is straightforward to implement.
We proposed several numerical examples to show the capabilities of our approach. We used a simple mode I-II crack growth to show that we are able to compute smooth level set functions in the presence of kinks. We performed a convergence study of the proposed approach on a crack growth problem for which an analytical solution, in terms of the level set functions, is known. We investigated the three-point bending test introduced by [START_REF] Lazarus | Comparison of predictions by mode II or mode III criteria on crack front twisting in three or four point bending experiments[END_REF] to demonstrate that the approach is able to handle non-planar cracks. Finally, this approach was able to simulate the Brokenshire test [START_REF] Barr | Torsion fracture tests[END_REF] until the total failure of the specimen, while all other methods available in code aster stopped after two growth increments.
We showed that the approach we propose is able to compute smooth level set functions. We now want to get rid of the robustness issues coming from accumulated errors in the position of the crack front during the crack growth. The position of the crack front is computed, at the end of each growth increment, from the energy release rate and the stress intensity factors. Consequently, our next objective is to make the computation of the energy release rate and the stress intensity factors more robust.
Fig. 1 Discretized crack front and local frame associated with each point on the crack front.
Fig. 2 Geometry and loading for the mode I-II crack growth.
Fig. 3 Representation of the level sets of the normal and tangential level set functions computed at the end of the crack growth using different implementations for the update step: (a) Simplex Method, (b) Upwind Method and (c) Geometric Method. The zero level sets of the two level set functions are shown in red. The update of the level set functions is restricted to a torus surrounding the crack front.
Fig. 4 Geometry and loading of the Brokenshire test (Su et al. 2010).
Fig. 5 Representation of the zero level set of the normal level set function (in grey) and of the zero level set of the tangential level set function (in purple) during the growth of the crack, obtained using the Geometric Method: (a) after one growth increment; (b) after two growth increments.
Fig. 6 Propagation of the information during the Fast Marching Method process applied to the computation of the signed distance function Φ to the interface Γ.
2. The function d = -Φ is computed in the domain {M : Φ(M ) < 0} from known values of d in the neighborhood of the zero level set of Φ. We then take Φ = -d.
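As an aside for readers who want to experiment with this two-pass construction, the following sketch builds an approximate signed distance function on a regular grid with SciPy's Euclidean distance transform; it only illustrates the sign convention Φ = -d on {Φ < 0} and is not the unstructured-mesh solver used in this work (the function name and grid setting are ours).

```python
import numpy as np
from scipy import ndimage

def signed_distance_on_grid(phi0, dx=1.0):
    """Two-pass construction of a signed distance function on a regular grid:
    an unsigned distance d is computed on each side of the zero level set of
    phi0, and the signed field is obtained as Phi = +d outside and Phi = -d
    inside (where phi0 < 0)."""
    inside = np.asarray(phi0) < 0
    d_out = ndimage.distance_transform_edt(~inside) * dx  # distance from outside points to the inside region
    d_in = ndimage.distance_transform_edt(inside) * dx    # distance from inside points to the outside region
    return d_out - d_in

# Example: recover an (approximate) signed distance to the unit circle from a
# level set function that is not a distance function.
x, y = np.meshgrid(np.linspace(-2.0, 2.0, 201), np.linspace(-2.0, 2.0, 201))
phi0 = x**2 + y**2 - 1.0
phi = signed_distance_on_grid(phi0, dx=4.0 / 200)
```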
1. If distance d_j can be computed from the tetrahedron, D is a singleton.
2. If distance d_j can be computed from at least one of the faces, D contains from one to three candidate values.
3. If distance d_j must be computed from the edges, D contains three candidate values.
Fig. 10 Splitting a pyramid into tetrahedra. Orange nodes represent known level set values and red nodes represent level set values to be determined.
Fig. 10(b) illustrates the case of the update of the distance function at vertex 3 from known values of the distance function at vertices 1, 2, 4 and 5. In this case, we iterate over tetrahedra {2, 3, 4, 5}, {1, 3, 4, 5} and {1, 2, 3, 5}.
3. If the distance function is known at three vertices of the pyramid, we consider only one tetrahedron. Fig. 10(c) illustrates the case of the update of the distance function at vertex 1 or 4 from known values of the distance function at vertices 2, 3 and 5. If we want to update the distance function at vertex 1, we consider only tetrahedron {1, 2, 3, 5}, while if we want to update the distance function at vertex 4, we consider only tetrahedron {2, 3, 4, 5}.
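A minimal sketch of this enumeration is given below; the vertex numbering of Fig. 10 (base vertices 1 to 4, apex 5) is assumed, and the set-based filtering is our own illustration, but the output reproduces the three cases listed above.

```python
from itertools import combinations

# Pyramid with base vertices 1, 2, 3, 4 and apex 5, as in Fig. 10. Every
# non-degenerate sub-tetrahedron must contain the apex, since the base is planar.
PYRAMID_SUBTETRA = [set(c) | {5} for c in combinations((1, 2, 3, 4), 3)]

def candidate_tetrahedra(target, known):
    """Sub-tetrahedra usable to update the vertex `target`: they must contain the
    target vertex, and all their other vertices must carry known values."""
    known = set(known)
    return [sorted(t) for t in PYRAMID_SUBTETRA
            if target in t and (t - {target}) <= known]

# Reproduces the cases discussed above (up to the ordering of the tetrahedra):
# candidate_tetrahedra(3, {1, 2, 4, 5}) -> [[1, 2, 3, 5], [1, 3, 4, 5], [2, 3, 4, 5]]
# candidate_tetrahedra(1, {2, 3, 5})    -> [[1, 2, 3, 5]]
# candidate_tetrahedra(4, {2, 3, 5})    -> [[2, 3, 4, 5]]
```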
Fig. 11 Splitting of a pentahedron into pyramids.
Fig. 12 Splitting of a hexahedron into pentahedra.
Fig. 13 Representation of the level sets of the normal and tangential level set functions computed at the end of the crack growth using the Fast Marching Method for the update step and applied to: (a) the mesh made of tetrahedra and (b) the mesh made of hexahedra. The zero level sets of the two level set functions are shown in red.
Fig. 14 Error in terms of L1 norm for the level set functions as a function of the normalized mesh size, obtained with the treatment of obtuse tetrahedra: normal level set function (top) and tangential level set function (bottom).
Fig. 15 Error in terms of L1 norm for the level set functions as a function of the normalized mesh size, obtained without the treatment of obtuse tetrahedra: normal level set function (top) and tangential level set function (bottom).
Fig. 17 3D cracked plate subjected to a three-point bending test (the initial crack is in red).
Fig. 18 Crack path obtained at the end of the simulation of the three-point bending test.
Fig. 19 Crack front plots, coordinate y as a function of the coordinate z: (a) results obtained using the Fast Marching Method, (b) results obtained by[START_REF] Citarella | Comparison of crack growth simulation by DBEM and FEM for SEN-specimen undergoing torsion or bending loading[END_REF]. The legend gives the growth increment corresponding to each position of the crack front.
Fig. 20 Crack front plots, coordinate x as a function of the coordinate z.
Fig. 21 Representation of the zero level set of the normal level set function (in grey) and the zero level set of the tangential level set function (in blue) during the growth of the crack, obtained using the Fast Marching Method: (a) after one growth increment; (b) after two growth increments.
Table 1 Comparison of the methods to update the level set functions studied in this article: Simplex Method, Upwind Method, Geometric Method and the new approach based on the Fast Marching Method.
- Extension step: projection of the nodes onto the crack front (identical for the Simplex, Upwind, Geometric and Fast Marching Methods).
- Update step: geometrical approach for the Simplex, Upwind and Fast Marching Methods; implicit in the fully geometric method for the Geometric Method.
- Reinitialization step (Φ_n): Simplex Method: integration of a time-dependent PDE, solver dedicated to triangulated meshes, boundary condition Φ_n = Φ_n^{k+1} at the nodes of I_n; Upwind Method: integration of a time-dependent PDE, solver dedicated to regular meshes, same boundary condition; Geometric Method: implicit in the fully geometric method; Fast Marching Method: integration of a stationary PDE, solver dedicated to triangulated meshes, same boundary condition.
- Orthogonalization step: Simplex Method: integration of a time-dependent PDE, solver dedicated to triangulated meshes, boundary condition Φ_t = Φ̃_t at the nodes of I_n; Upwind Method: integration of a time-dependent PDE, solver dedicated to regular meshes, same boundary condition; Geometric Method: implicit in the fully geometric method; Fast Marching Method: replaced by the computation of Φ_t^{k+1} at the nodes of I_t, using the fully geometric method.
- Reinitialization step (Φ_t): Simplex Method: integration of a time-dependent PDE, solver dedicated to triangulated meshes, boundary condition Φ_t = Φ_t^{k+1} at the nodes of I_t; Upwind Method: integration of a time-dependent PDE, solver dedicated to regular meshes, same boundary condition; Geometric Method: implicit in the fully geometric method; Fast Marching Method: integration of a stationary PDE, solver dedicated to triangulated meshes, same boundary condition.
1. Reinitialization of the normal level set function. The normal level set function at the end of the growth increment, Φ_n^{k+1}, is computed as the steady-state solution of an evolution problem with respect to a virtual time τ.

2. Orthogonalization step. We use the projection algorithm presented above to compute the value of the level set function Φ_t at the nodes of I_n. The tangential level set function Φ_t is first interpolated at the vertices of the triangulation of the zero level set of Φ_n. Let M be a node in I_n. The projection algorithm associates to M a triangle T and a point P_T in T. The value of the level set function Φ_t at M is computed as the linear interpolation of the tangential level set function values computed at the vertices of the triangle T, evaluated at point P_T.

3. Reinitialization of the tangential level set function. The tangential level set function at the end of the growth increment, Φ_t^{k+1}, is computed as the steady-state solution of the following evolution problem:

∂Φ_t/∂τ = -(Φ_t/|Φ_t|) (‖∇Φ_t‖ - 1),   (14)

with initial data:

Φ_t(τ = 0) = Φ̃_t.   (15)

Let I_t be the set of nodes belonging to cells which contain the zero level set of Φ̃_t. The level set function Φ_t^{k+1} is built so that the zero level set of Φ_t^{k+1} coincides with the zero level set of Φ̃_t. Thus, the level set function Φ_t^{k+1} at a given node of the set I_t is the distance from this node to the zero level set of Φ̃_t where Φ̃_t is positive, and the opposite of the distance from this node to the zero level set of Φ̃_t where Φ̃_t is negative. The computation of the distance from each node of I_t to the zero level set of Φ̃_t is based on the projection algorithm presented in the paragraph dedicated to the reinitialization of the normal level set function. This procedure ensures that the level set functions Φ_n^{k+1} and Φ_t^{k+1} are signed distance functions.

In our approach, the reinitialization of the tangential level set function is performed by means of the Fast Marching Method. We recall that I_t is the set of nodes belonging to cells which contain the zero level set of Φ̃_t. The initial data consists in the known values of the level set function Φ_t^{k+1} at the nodes of the set I_t. Since the level set functions Φ̃_t and Φ_n^{k+1} are not orthogonal, the level set function Φ_t^{k+1} at a given node of the set I_t cannot be deduced from the distance from this node to the zero level set of Φ̃_t. We thus apply the Geometric Method discussed in section 2.2 to compute the level set function Φ_t^{k+1} at the nodes of the set I_t. In this way, the reinitialization of the tangential level set function and the orthogonalization step are condensed into one step. Table 1 gives a concise comparison of our new approach based on the Fast Marching Method with respect to the three other methods to update the level set functions (Simplex Method, Upwind Method and Geometric Method).
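To make the time-dependent reinitialization of equations (14)-(15) concrete, the sketch below integrates a one-dimensional analogue to steady state with a smoothed sign function and a Godunov upwind gradient; the discretization choices are ours and this is not the stationary Fast Marching solver used in our approach.

```python
import numpy as np

def reinitialize_1d(phi, dx, n_iter=200, dtau=None):
    """Drive phi toward a signed distance function by pseudo-time iteration of
    d(phi)/d(tau) = -sign(phi) * (|d(phi)/dx| - 1), with a smoothed sign function
    and a Godunov upwind evaluation of the gradient magnitude."""
    phi = np.asarray(phi, dtype=float).copy()
    if dtau is None:
        dtau = 0.5 * dx                       # CFL-like restriction on the pseudo-time step
    s = phi / np.sqrt(phi**2 + dx**2)         # smoothed sign, regular at the interface
    for _ in range(n_iter):
        dm = np.diff(phi, prepend=phi[:1]) / dx   # backward differences
        dp = np.diff(phi, append=phi[-1:]) / dx   # forward differences
        a_p, a_m = np.maximum(dm, 0.0), np.minimum(dm, 0.0)
        b_p, b_m = np.maximum(dp, 0.0), np.minimum(dp, 0.0)
        grad = np.where(s > 0,
                        np.sqrt(np.maximum(a_p**2, b_m**2)),
                        np.sqrt(np.maximum(a_m**2, b_p**2)))
        phi -= dtau * s * (grad - 1.0)
    return phi

# Example: a badly scaled level set with its zero at x = 0.3 is turned into an
# approximate signed distance function.
x = np.linspace(-1.0, 1.0, 401)
phi = reinitialize_1d(5.0 * (x - 0.3) ** 3 + (x - 0.3), dx=x[1] - x[0])
```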
Table 4 Statistics of the unstructured meshes used in the convergence study.

| mesh size | normalized mesh size (h) | # nodes | # tetrahedra | # obtuse tetra. | obtuse tetra. (%) | max. angle (°) |
|---|---|---|---|---|---|---|
| 1.47E-1 | 1.00E+0 | 892 | 2893 | 20 | 0.69 | 104.19 |
| 7.92E-2 | 5.40E-1 | 4771 | 19520 | 30 | 0.15 | 115.89 |
| 4.10E-2 | 2.79E-1 | 30564 | 152574 | 119 | 0.08 | 120.95 |
| 2.09E-2 | 1.43E-1 | 212917 | 1183921 | 622 | 0.05 | 128.34 |
Acknowledgements We sincerely thank the two anonymous reviewers whose suggestions helped us to improve this manuscript.
Algorithm 1: Fast Marching Method on an unstructured grid

INITIALIZATION: K = I. P = ∅.

PROCESSING:
While K\P ≠ ∅
    Find the node i_0 with the smallest value for the distance in K\P.
    Add node i_0 to P.
    For each tetrahedron T adjacent to the node i_0
        For each node j of the tetrahedron T
            If j ∉ I and j ∉ P Then
                If all the other nodes adjacent to T are in K Then
                    The vertices of T are ...
                    If ... (32) is non-negative Then
                        Compute d_4 as the largest root of (32).
                        Compute the coordinates of ∇d(V_4) by solving (28).
                        If the consistency condition and the monotonicity condition are fulfilled Then
                            ...
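To illustrate the overall structure of Algorithm 1, the following self-contained sketch implements the accepted-set/trial-set loop with a binary heap; it propagates values along mesh edges only, which is a first-order, Dijkstra-like surrogate for the local tetrahedron solve of equations (28) and (32), and all names are ours.

```python
import heapq
import numpy as np

def fast_marching_edges(nodes, edges, known):
    """Simplified front propagation on an unstructured mesh.

    nodes : (n, 3) array of vertex coordinates.
    edges : iterable of (i, j) vertex index pairs (e.g. all edges of the tetrahedra).
    known : dict {vertex index: distance} describing the initialized band I.

    A node is updated only along mesh edges, whereas Algorithm 1 solves a local
    Eikonal problem on each tetrahedron."""
    n = len(nodes)
    neighbors = [[] for _ in range(n)]
    for i, j in edges:
        w = float(np.linalg.norm(np.asarray(nodes[i]) - np.asarray(nodes[j])))
        neighbors[i].append((j, w))
        neighbors[j].append((i, w))

    d = np.full(n, np.inf)
    accepted = np.zeros(n, dtype=bool)       # the set P of accepted nodes
    heap = []
    for i, di in known.items():              # initialization: K = I
        d[i] = di
        heapq.heappush(heap, (di, i))

    while heap:                              # loop while K\P is not empty
        di, i = heapq.heappop(heap)
        if accepted[i] or di > d[i]:
            continue
        accepted[i] = True                   # node with the smallest tentative value
        for j, w in neighbors[i]:
            if not accepted[j] and di + w < d[j]:
                d[j] = di + w                # tentative (trial) value for the neighbour
                heapq.heappush(heap, (d[j], j))
    return d
```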
01796415 | en | [
"shs",
"shs.gestion"
] | 2024/03/04 16:41:24 | 2018 | https://amu.hal.science/hal-01796415/file/BHATTI_cright_How_perceived_corporate_social_responsibility_affects_employee_cynicism.pdf | Carolina Serrano
Emmanuelle Reynaud
Zeeshan Ahmed Bhatti
How perceived corporate social responsibility affects employee cynicism: the mediating role of organizational trust
This study examines to what extent perceived corporate social responsibility (CSR) reduces employee cynicism, and whether trust plays a mediating role in the relationship between CSR and employee cynicism. Three distinct contributions beyond the existing literature are offered. First, the relationship between perceived CSR and employee cynicism is explored in greater detail than has previously been the case. Second, trust in the company leaders is positioned as a mediator of the relationship between CSR and employee cynicism. Third, we disaggregate the measure of CSR and explore the links between this and with employee cynicism. Our findings illustrate that the four distinct dimensions of CSR of Carroll (economic, legal, ethical, and discretionary) are indirectly linked to employee cynicism via organizational trust. In general terms, our findings will help company leaders to understand employees' counterproductive reactions to an organization, the importance of CSR for internal stakeholders, and the need to engage in trust recovery.
1-INTRODUCTION
In March 2016, a French company called Isobat Experts was forced to close because its entire staff went on sick leave. No epidemics had occurred; this was merely a reflection of the malaise that the company's employees were feeling due to their new working conditions, which they described as bullying.
As this example suggests, cynicism among employees has increased in recent years [START_REF] Mustain | Overcoming cynicism: William James and the metaphysics of engagement[END_REF] and has a significant impact on companies' performance [START_REF] Chiaburu | Antecedents and consequences of employee organizational cynicism: A meta-analysis[END_REF]. There are three dimensions to organizational cynicism [START_REF] Dean | Organizational Cynicism[END_REF]. The first is cognitive: Employees think that the firm lacks integrity. The second is affective: Employees develop negative feelings toward the firm. The third is behavioral: Employees publicly criticize the firm [START_REF] Dean | Organizational Cynicism[END_REF]. The behavioral dimension of cynicism may have a direct impact on the organization's performance. When employees engage in bitter and accusatory discourse, this damages the company's image and the atmosphere in the workplace, and it is frequently accompanied by a lack of personal investment on the employees' part [START_REF] Kanter | The Cynical Americans: Living and Working in an Age of Discontent and Disillusion[END_REF]. Surprisingly, although the academic literature on cynicism is growing, scholars have focused on the cognitive and affective dimensions of employee cynicism [START_REF] Ajzen | Nature and Operation of Attitudes[END_REF][START_REF] Özler | Research to Determine the Relationship Between Organizational Cynicism and Burnout Levels of Employees in Health Sector[END_REF] ve Atalay, 2011) and neglected the behavioral aspect-and yet it is key to understand this behavioral dimension if we are to help managers reduce such behavior.
In order for cynicism to be reduced, firms need to create a positive working environment. One pivotal tool in such efforts appears to be corporate social responsibility (CSR). CSR, the widely accepted concept that brings together economic, legal, ethical, and discretionary responsibility [START_REF] Carroll | A Three-Dimensional Conceptual Model of Corporate Social Performance[END_REF], has a significant positive effect on a variety of stakeholders [START_REF] Fonseca | Impact of Social Responsibility Programmes in Stakeholder Satisfaction: An Empirical Study of Portuguese Managers' Perceptions[END_REF][START_REF] Luo | Corporate Social Responsibility, Customer Satisfaction, and Market Value[END_REF][START_REF] Schuler | A Corporate Social Performance-Corporate Financial Performance Behavioral Model for Consumers[END_REF][START_REF] Aguinis | What We Know and Don't Know About Corporate Social Responsibility: A Review and Research Agenda[END_REF]. Some studies have examined the effectiveness of CSR activities in influencing internal stakeholders [START_REF] Rupp | Employee reactions to corporate social responsibility: An organizational justice framework[END_REF] or employees [START_REF] Morgeson | Extending corporate social responsibility research to the human resource management and organizational behavior domains: a look to the future[END_REF][START_REF] Aguilera | Putting the S back in corporate social responsibility: A multilevel theory of social change in organizations[END_REF] and shown that CSR can increase positive employee behavior through organizational commitment [START_REF] Brammer | The contribution of corporation social responsibility to organizational commitment[END_REF] or by reducing employee turnover intention [START_REF] Hansen | Corporate Social Responsibility and the Benefits of Employee Trust: A Cross-disciplinary Perspective[END_REF]. CSR-related efforts increase employees' positive behaviors. Though little scholarly attention has thus far been paid to whether CSR could potentially decrease negative or counterproductive employee behavior [START_REF] Aguinis | What We Know and Don't Know About Corporate Social Responsibility: A Review and Research Agenda[END_REF], it is not unreasonable to suggest that this may be the case. Furthermore, by corroborating this assumption, we could help managers to decrease deviant behaviors such as those stemming from employee cynicism.
Employees have become suspicious of their leaders and their ability to lead organizations properly [START_REF] Johnson | The effects of psychological contract breach and organizational cynicism: Not all social exchange violations are created equal[END_REF], and so the perception that CSR efforts are being undertaken is not always sufficient to secure a positive overall image of a given company in its employees' eyes. In the absence of trust, CSR efforts may not be perceived positively at all. For this reason, trust may be a key mediator in the relationship between perceptions of CSR and the reduction of cynical behaviors. Trust is defined as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor" (Mayer et al. 1995, p. 712). Employees who trust their company leaders will believe in their good intentions and perceive their CSR efforts positively. The converse also appears to be true: The PwC 17th Annual Global CEO Survey [START_REF] Preston | Business success beyond the short term: CEO perspectives on Sustainability[END_REF] argues that one of the main business pillars for reducing a trust deficit in leaders is a genuine attempt to target a socially centered purpose. In addition, we know that trust emerges in an environment that employees deem to be trustworthy [START_REF] Albrecht | Perceptions of Integrity, Competence and Trust in Senior Management as Determinants of Cynicism toward Change[END_REF]. We argue that a socially responsible strategy on the part of a corporation can create such an environment. This article seeks to investigate the impact of perceived CSR on the behavioral dimension of employee cynicism. This impact may be direct or indirect, and is also dependent upon a variable that has been shown to be an important outcome of CSR: trust [START_REF] Pivato | The impact of corporate social responsibility on consumer trust: the case of organic food[END_REF][START_REF] Hansen | Corporate Social Responsibility and the Benefits of Employee Trust: A Cross-disciplinary Perspective[END_REF].
2-LITERATURE
2.1-A SOCIAL EXCHANGE RELATIONSHIP BETWEEN EMPLOYEES AND THEIR ORGANIZATION
Social exchange theory is one of the main approaches to the employee-organization relationship and argues that employees' relationship with an organization is shaped by social exchange processes (Blau, 1964;Gouldner, 1960;[START_REF] Coyle-Shapiro | The employment relationship through the lens of social exchange[END_REF]. Individuals develop specific behaviors as an exchange strategy to pay back the support they receive from the organization. In their relationship with the organization and the leaders, if they do not feel that they are trustworthy, they may consider withdrawal or negative behaviors as an acceptable exchange currency and a means of reinstating equity in the social exchange [START_REF] Settoon | Social Exchange in Organizations: Perceived Organizational Support, Leader-Member Exchange, and Employee Reciprocity[END_REF].
When employees feel respected and perceive their firm to be attentive and honest, they will most likely feel obliged to act for the good of the firm and take care to not harm its interests [START_REF] Settoon | Social Exchange in Organizations: Perceived Organizational Support, Leader-Member Exchange, and Employee Reciprocity[END_REF][START_REF] Wayne | Perceived Organizational Support and Leader-Member Exchange: A Social Exchange Perspective[END_REF]. The firm, meanwhile, should feel an obligation to consider the well-being of its employees if it desires their trust and commitment [START_REF] Eisenberger | Reciprocation of Perceived Organizational Support[END_REF]. This type of link is particularly true in the case of CSR [START_REF] El Akremi | How Do Employees Perceive Corporate Responsibility? Development and Validation of a Multidimensional Corporate Stakeholder Responsibility Scale[END_REF][START_REF] Farooq | Employees response to corporate social responsibility: exploring the role of employees collectivist orientation[END_REF][START_REF] Glavas | Is the perception of 'goodness' good enough? Exploring the relationship between perceived corporate social responsibility and employee organizational identification[END_REF]. CSR (or a lack of it) precedes employees' behaviors. [START_REF] Rupp | Employee reactions to corporate social responsibility: An organizational justice framework[END_REF] show that in situations of organizational justice or injustice, employees react according to the principle of reciprocity: Reactions are positive if employees feel a sense of justice, whereas they are vengeful if their sentiment is one of injustice. Thus, CSR is conceptualized as an antecedent of employees' behaviors [START_REF] Rupp | Employee reactions to corporate social responsibility: An organizational justice framework[END_REF]. Discussing their celebrated model of CSR, [START_REF] Aguilera | Putting the S back in corporate social responsibility: A multilevel theory of social change in organizations[END_REF] state: "When organizational authorities are trustworthy, unbiased, and honest, employees feel pride and affiliation and behave in ways that are beneficial to the organization" (Aguilera et al., 2007, p 842). Employees react positively to positive CSR [START_REF] El Akremi | How Do Employees Perceive Corporate Responsibility? Development and Validation of a Multidimensional Corporate Stakeholder Responsibility Scale[END_REF], and so CSR fosters positive employee behaviors as the consequence of an exchange process (Osveh et al., 2015, p. 176). For employees, the impression that company leaders are engaging in CSR leads to a general perception of justice, where all stakeholders are treated fairly; in other words, "these CSR perceptions shape the employees' subsequent attitudes and behaviors toward their firm" (Aguilera et al. 2007, p. 840).
Extant theory clearly asserts that CSR increases positive social relationships and behaviors within organizations [START_REF] Aguilera | Putting the S back in corporate social responsibility: A multilevel theory of social change in organizations[END_REF]), but we do not yet know whether this link is direct or otherwise. Recent empirical research has confirmed that employees are attuned to their organization's actions, which they use to assess the character of the organizational leaders behind them [START_REF] Hansen | Ethical leadership. Assessing the value of a multifoci social exchange perspective[END_REF]. Social exchange is characterized by undetermined obligations and uncertainty about the future actions of both partners in the exchange. This relational uncertainty accords trust a key role in the process of establishing and developing social exchange [START_REF] Cropanzano | Social exchange theory: An interdisciplinary review[END_REF].
Negative social exchange situations will decrease trust and generate counterproductive employee behaviors at work [START_REF] Aryee | Trust as a mediator of the relationship between organizational justice and work outcomes: test of a social exchange model[END_REF][START_REF] Mayer | Trust in management and performance: Who minds the shop while the employees watch the boss?[END_REF]. It has been shown that "when individuals perceive an imbalance in the exchange and experience dissatisfaction, trust decreases" [START_REF] Fulmer | At What Level (and in Whom) We Trust: Trust Across Multiple Organizational Levels[END_REF] (p. 1175); as a result of this, people feel insecure and invest energy in self-protective behaviors and in continually making provision for the possibility of opportunism on the part of others [START_REF] Limerick | Managing the new organisation: A blueprint for networks and strategic alliances[END_REF]. In the absence of trust, "people are increasingly unwilling to take risks, demand greater protections against the possibility of betrayal and increasingly insist on costly sanctioning mechanisms to defend their interests" (Cummings, Bromiley, 1996, p. 3-4). Employees become less devoted to working efficiently and grow distrustful in their roles, engaging in counterproductive behavior as a way to minimize their vulnerability vis-à-vis the employer (Tschannen-Moran, Hoy, 2000). Conversely, a favorable social exchange will create trust and encourage the employee to develop favorable reactions towards the organization [START_REF] Flynn | Identity orientations and forms of social exchange in organizations[END_REF].
Certain case studies have highlighted the role played by trust as a major social exchange mechanism and an antecedent of positive behaviors. In the early 1980s, the executives of Harley Davidson took ownership of the company in order to save it from decline and put in place a new method of management based on trust. The result was astonishing: between 1981 and 1987, annual revenues per employee doubled and productivity rose by 50%. 1
2.2-CORPORATE SOCIAL RESPONSIBILITY AND EMPLOYEE CYNICISM
CSR, or a company's dedication to improving the well-being of society through non-profitable business practices and resource contributions [START_REF] Kotler | Corporate Social Responsibility: Doing the Most Good for Your Company and Your Cause[END_REF], conveys positive values and may therefore positively influence the general perception of company leaders in the eyes of the employees. This concept is framed around the question put forward by [START_REF] Bowen | Social Responsibilities of the businessman[END_REF]: "To what extent does the interest of business in the long run merge with the interests of society?" (p. 5), to which he responds that businesspeople are expected to take their businesses forward in such a way that their decisions and policies are congruent with societal norms and customs.
CSR has received increasing attention in the academic literature in recent years. [START_REF] Carroll | A Three-Dimensional Conceptual Model of Corporate Social Performance[END_REF] definition of CSR is one of the most comprehensive and widely accepted of past decades. According to [START_REF] Gond | Corporate Social Performance Disoriented: Saving the lost paradigm?[END_REF], "the model of CSP [Corporate Social Performance] by [START_REF] Carroll | A Three-Dimensional Conceptual Model of Corporate Social Performance[END_REF] appears as a real breakthrough because it purports to organize the coexistence of what were previously conceptualized as rival approaches to the same phenomenon (…) by incorporating economic responsibility as one level of corporate social responsibility, Carroll's model of CSP reconciles the debate between some economists' narrow view of social involvement [START_REF] Friedman | The social responsibility of business is to increase its profits[END_REF] and the advocates of a broader role for the firm" (p. 682).
Carroll's model describes the four responsibilities that a business is expected to fulfill, namely economic, legal, ethical, and discretionary.
1. Economic responsibility. Society anticipates that a business will successfully generate profit from its goods and services and maintain a competitive position. The aim of establishing businesses was to create economic benefits, as well as products and services, for both shareholders and society. The notion of social responsibility was later incorporated into the execution of business. Managers are liable to adopt those strategies that could generate profit in order to satisfy shareholders. Furthermore, managers are accountable for taking steps toward expanding the business. All the other responsibilities are considered as extensions of economic responsibility, because without it, they have no meaning.
2. Legal responsibility. Building upon economic responsibility, a society wants business to follow the rules and regulations of behavior in a community. A business is expected to discharge its economic responsibility by meeting the legal obligations of society. Managers are expected to develop policies that do not violate the rules of that society. In order to comply with the social contract between firm and society, managers are expected to execute business operations within the framework of the law as defined by the state authorities. Legal responsibilities embody the concept of justice in the organization's functioning. This responsibility is as fundamental to the existence of the business as economic responsibility.
3. Ethical responsibility. This type of responsibility denotes acts that are not enforced by the law but that comply with the ethical norms and customs of society. Although legal responsibility encompasses the notion of fairness in business operations, ethical responsibility takes into account those activities that are expected of a business but not enforced by the law. These cannot be described within a boundary, as they keep expanding alongside society's expectations. This type of responsibility considers the adoption of organizational activities that comply with the norms or concerns of society, provided that shareholders deem them to be in accordance with their moral privileges. Firms find it difficult to deal with these aspects, as ethical responsibilities are usually under public discussion and so not properly defined. It is important to realize that for a company, ethical conduct is a step beyond simply complying with the legal issues.
4. Discretionary responsibility. All those self-volunteered acts that are directed towards the improvement of society and have a strategic value rather than having been implemented due to legal or ethical concerns fall under this type of responsibility. Discretionary responsibility encompasses those activities that a business willingly performs in the interests of society, for instance providing funding for students to study, or helping mothers to work by providing a free day-care center. A business should undertake tasks to promote the well-being of society. Discretionary aspects differ from ethical ones in the sense that discretionary acts are not considered in a moral or ethical light. The public has a desire for businesses to endow funds or other assets for the betterment of society, but a business is not judged as unethical if it does not act accordingly.
CSR perceptions influence internal stakeholders' attitudes [START_REF] Folger | What is the relationship between justice and morality[END_REF]. As Hansen et al. ( 2011) have stated, "employees not only react to how they are treated by their organization, but also to how others (…) are treated (…). If an employee perceives that his or her organization behaves in a highly socially responsible manner-even toward those outside and apart from the organization, he or she will likely have positive attitudes about the company and work more productively on its behalf" (p. 31).
This might indicate that employees' perceptions of CSR could potentially reduce undesirable and counterproductive behaviors in the workplace, including employee cynicism. Employee cynicism has greatly increased in recent decades [START_REF] Mustain | Overcoming cynicism: William James and the metaphysics of engagement[END_REF], and modern-day organizational realities involve the presence of employee cynicism at work as a response to a violation of employees' expectations regarding social exchange [START_REF] Neves | Organizational Cynicism: Spillover Effects on Supervisor-Subordinate Relationships and Performance[END_REF], leading to a reduction of discretionary behaviors that go beyond the minimum required [START_REF] Neves | Organizational Cynicism: Spillover Effects on Supervisor-Subordinate Relationships and Performance[END_REF]. As illustrated by [START_REF] Motowidlo | A theory of individual differences in task and contextual performance[END_REF], extra-role or contextual behavior that goes beyond the minimum task-related required behavior is key to the performance of the organization, as it helps to maintain the positive social and psychological environment in which the 'technical core' must function. Therefore, reducing employee cynicism is key for organizations.
Conceptualizations of cynicism have moved beyond earlier trait- and emotion-based perspectives [START_REF] Cook | Proposed hostility and pharisaic-virtue scales for the MMPI[END_REF] to focus on the construct as an attitude comprising the three dimensions to which we refer above: cognitive, affective, and behavioral. One study (2006) examines the links between psychological hardiness and emotions experienced on the one hand and organizational cynicism on the other, while [START_REF] Andersson | Employee cynicism: an examination using a contract violation framework[END_REF] argues that self-esteem and the locus of control may contribute to employee cynicism. Although the literature on cynicism is growing, most studies focus on the general idea of organizational cynicism as an attitude [START_REF] Cutler | The cynical manager[END_REF][START_REF] Fleming | Working at a cynical distance: Implications for the power, subjectivity and resistance[END_REF], and on its cognitive dimension in particular (Johnson and O'Leary-Kelly, 2003).
However, in addition to the cognitive and affective dimensions of employee cynicism [START_REF] Dean | Organizational Cynicism[END_REF], the behavioral dimension is an important aspect and has its own relationships with organizational outcomes [START_REF] Neves | Organizational Cynicism: Spillover Effects on Supervisor-Subordinate Relationships and Performance[END_REF]. These authors define employee cynicism as a tendency to engage in disparaging and critical behavior toward the organization in a way that is consistent with the belief that it lacks integrity. This behavior stems from a feeling of hopelessness and pessimism that spreads as a malaise within groups and undermines work relationships (Kanter and Mirvis, 1989). Employees who are organizationally cynical may take a defensive stance, verbally opposing organizational action and mocking organizational initiatives publicly [START_REF] Dean | Organizational Cynicism[END_REF].
Management needs to intervene to reduce these behaviors at work, but since management is both the source and the target of cynicism, it must reestablish a climate of psychological security by indirect means [START_REF] Dean | Organizational Cynicism[END_REF][START_REF] O'leary | From paternalism to cynicism: Narratives of a newspaper company[END_REF]. Some scholars posit that certain organizational-level policies or actions can decrease employee cynicism, such as offering a supportive environment, demonstrating fairness, minimizing the violation of psychological contracts, and reducing organizational politics [START_REF] Chiaburu | Antecedents and consequences of employee organizational cynicism: A meta-analysis[END_REF]. CSR can also act as a policy demonstrating the organization's willingness to engage in socially responsible activity.
Research has shown that psychological contract violations increase cynicism [START_REF] Pugh | After the fall: Layoff victims' trust and cynicism in re-employment[END_REF][START_REF] Chrobot-Mason | Keeping the promise: Psychological contract violations for minority employees[END_REF]. Individuals develop cynicism towards businesses based on the extent to which they perceive them to exhibit benevolence toward their employees [START_REF] Bateman | Roger, me, and my attitude: film propaganda and cynicism toward corporate leadership[END_REF]. In light of this, CSR may be seen as a factor that can reduce behavioral employee cynicism. By engaging in CSR, company leaders may generate a more positive image of the social exchange with their employees and thus decrease their behavioral tendency toward cynicism. If employees see that their organization is genuinely addressing its social obligations, its honest image will be restored in their eyes and, as a reciprocal response, they will decrease their cynical behavior. According to [START_REF] Carroll | A Three-Dimensional Conceptual Model of Corporate Social Performance[END_REF], "to fully address the entire range of obligations business has to society, it must embody the economic, legal, ethical, and discretionary categories of business performance" (p. 499). If all four dimensions of CSR are perceived to be in place, this could act as a signal of honesty on the company's part. Each dimension of CSR may therefore have an impact on employees' behavioral cynicism.
Economic responsibility remains the most important dimension of CSR for big companies [START_REF] Konrad | Empirical Findings on Business-Society Relations in Europe[END_REF]. A company's primary goal is to generate a profit. If it does not do this, it can do nothing for its employees.
Compliance with the law may exert the same positive impact. The importance of the legal aspect of CSR has already been measured by Shum and Yan (2011), who compared the four components of Carroll's construct and found that respondents placed more emphasis on this aspect than any other. This was true regardless of age or occupation; only marketing employees placed a little more focus on economic aspects than legal ones. Although compliance with the law seems an obvious requirement, let us for a moment consider what opinion employees may have of a company that does not exhibit it. Laws have the function of informing about the right behaviors. They stand for the "right thing to do." For employees, compliance with the law may be perceived as a means of preventing unethical behavior. This is why respect for the law and the spirit of the law are a positive signal that could decrease employees' cynicism.
The two other dimensions of CSR are voluntary. According to [START_REF] Aguilera | Putting the S back in corporate social responsibility: A multilevel theory of social change in organizations[END_REF], it is these voluntary actions that can actually be considered CSR, and these are probably also the ones that really matter to employees. Employees value the discretionary dimension because it goes beyond compliance with the law and societal expectations. [START_REF] Adams | Codes of Ethics as signals for ethical behaviour[END_REF] showed that the ethical dimension is important to them also by revealing that employees of companies with codes of ethics perceive colleagues and superiors as more ethical and feel more supported than employees of companies without such codes.
Consequently, this study aims at answering the following research questions: Does behavioral employee cynicism reduce when employees perceive company leaders to be engaging in CSR efforts? And do the four dimensions of CSR play a role in decreasing cynical behavior at work? Accordingly, we posit the following hypotheses:
H1: Perceived CSR is negatively related to employee cynicism
H1a: Economic CSR is negatively related to employee cynicism
H1b: Legal CSR is negatively related to employee cynicism
H1c: Ethical CSR is negatively related to employee cynicism
H1d: Discretionary CSR is negatively related to employee cynicism
Aguilera et al. (2007) provide a theoretical model to explain why business organizations engage in CSR. One of the motives for this is relational: "Relational models show that justice conveys information about the quality of employees' relationships with management and that these relationships have a strong impact on employees' sense of identity and self-worth" (p. 842).
Trust generation, or an increase in trust, is one of the benefits of CSR [START_REF] Bustamante | CSR, Trust and the Employer Brand. CSR Trends[END_REF].
Many definitions of trust generally present it as involving positive expectations of trustworthiness, a willingness to accept vulnerability, or both [START_REF] Fulmer | At What Level (and in Whom) We Trust: Trust Across Multiple Organizational Levels[END_REF]. As reported by [START_REF] Rousseau | Not so different after all: a crossdiscipline view of trust[END_REF], there is a consensus that risk and interdependence are two necessary conditions for trust to emerge and develop. Trust is the expectation that another individual or group will make an effort of good faith to behave in accordance with commitments (both explicit and implicit), to be honest, and not to take excessive advantage of others even when the opportunity exists [START_REF] Cummings | The Occupational Trust Inventory (OTI): Development and validation[END_REF]1996).
Following [START_REF] Mayer | An integrative model of organizational trust[END_REF], we define organizational trust as employees' willingness to be vulnerable to the actions of their employer based on positive expectations about its intentions or behavior. In this definition, the organization is represented by both its top management and its procedures, norms, and decisions. [START_REF] Dirks | Trust in Leadership: Meta-Analytic Findings and Implications for Research and Practice[END_REF] consider top management and management in general as representatives of the organization. Since [START_REF] Levinson | Reciprocation: The relationship between man and organization[END_REF], it has been acknowledged that employees tend to personify the acts of their organization. Organizational acts and management practices thus become a sign of potential support for and interest in the employees [START_REF] Guerrero | Organizational trust and social exchange: what if taking good care of employees were profitable?[END_REF]. Employees who perceive the organizational leadership to be acting in its own best interests rather than in those of its employees will deem it to be less trustworthy due to a lack of benevolence [START_REF] Mayer | An integrative model of organizational trust[END_REF].
As suggested by Fulmer and Gelfand (2012), we clearly identify the trust referent (which refers to the target of the trust) and the trust level (which refers to the level of analysis of the study):
In this study, the trust referent is the organizational leadership as a whole [START_REF] Robinson | Violating the psychological contract: Not the exception but the norm[END_REF], while the trust level is the individual's degree of trust in that referent [START_REF] Fulmer | At What Level (and in Whom) We Trust: Trust Across Multiple Organizational Levels[END_REF].
It is difficult to conceive of an organization thriving without trust [START_REF] Kramer | Trust and distrust in organizations: Emerging perspectives, enduring questions[END_REF]. Without trust, employees struggle to function effectively and cope with their interdependence on each other in a less hierarchical organization [START_REF] Gilbert | An examination of organizational trust antecedents[END_REF]. Trust is likely to influence employee behavior by improving job satisfaction and creativity and reducing anxiety [START_REF] Cook | New work attitude measures of trust, organizational commitment and personal need non-fulfilment[END_REF]. It improves coordination between colleagues (McAllister, 1995), allows their interactions to be more honest and freer, and significantly reduces the withholding of information [START_REF] Zand | Trust and managerial problem solving[END_REF]. Conversely, employees who do not trust their organizational leadership will tend to be defensive-to adopt behaviors that still conform to the organizational rules but that minimize any risk in the relationship.
Because a company that is perceived as socially responsible is generally seen as being honest, perceived CSR-related efforts should exert an impact on organizational trust. Meanwhile, because trust increases the commitment of employees, it may decrease undesirable behavior such as employee cynicism. Researchers have reported that negative attitudes at work, including employee cynicism, are the result of poor social exchange.
By assuming that trust acts as a mediator between the exchange partners, our model not only observes the relationship between perceptions of CSR and the behavioral dimension of employee cynicism, but also provides an explanation of the psychological process underlying this relationship. This psychological process involves the reduction of negative behavior (employee cynicism) through a positive attitude (increased trust) as predicted by the organization's application of CSR policies.
Based on the above, we hypothesize that the perception of CSR may impact trust, which, in turn, will decrease cynicism:
H2: Organizational trust mediates the relationship between CSR and employee cynicism
H2a: Organizational trust mediates the relationship between economic CSR and employee cynicism
H2b: Organizational trust mediates the relationship between legal CSR and employee cynicism
H2c: Organizational trust mediates the relationship between ethical CSR and employee cynicism
H2d: Organizational trust mediates the relationship between discretional CSR and employee cynicism
2.4-PROPOSED MODEL
3-METHOD
3.1-PROCEDURE AND PARTICIPANTS
An online survey was sent to a sample of 620 in-company employees. The database used comprises all the personal and professional contacts of one of the authors. This author is in charge of an MBA programme at a major French university and most of the respondents (70%) are alumni with whom she periodically has contact. In order to obtain a high response rate, she sent a personal message to each of them explaining the importance of their response and that the responses would be anonymous. She also sent a reminder email to those who did not answer the first time. The method yielded a 65% response rate (N=403) and covered a wide range of occupations. Seventy percent of the participants were male and 30% female. Ninety percent of the participants were European, 8% American, and 2% Asian. The majority of participants (60%) identified themselves as supervisors. Finally, the participants' average age was 38 years and most of the respondents had worked for their company for between two and five years. There were no missing values in the data since the survey was administered online and respondents were prompted to complete any questions they had not answered. Cook's distance and leverage values were used to check for multivariate outliers, which resulted in the removal of 59 observations.
3.2-MEASURES
Items of all measures were scored on a five-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree).
All four dimensions of the CSR construct (economic, legal, ethical, and discretionary) were measured using the scales developed by Maignan, Ferrell, and Hult (1999) and adapted for the study. Studies report that this instrument has satisfactory psychometric properties, including its construct reliability, convergent validity, and discriminant validity [START_REF] Maignan | Measuring corporate citizenship in two countries: The case of the United States and France[END_REF]. The four items for the economic dimension of CSR (α=.70) are "My company has a procedure in place to respond to every customer complaint," "My company continually improves the quality of our products," "My company uses customer satisfaction as an indicator of its business performance," and "Top management establishes long-term strategies for our business." The four items for the legal dimension of CSR (α=.75) are "Managers of my company are informed about the relevant environmental laws," "All our products meet legal standards," "Managers of my organization try to comply with the law," and "My company seeks to comply with all laws regulating hiring and employee benefits." The four items for the ethical dimension of CSR (α=.77) are "My company has a comprehensive code of conduct," "Members of my organization follow professional standards," "Top managers monitor the potential negative impacts of our activities on our community," and "A confidential procedure is in place for employees to report any misconduct at work (such as stealing or sexual harassment)." The four items for the discretionary dimension of CSR (α=.81) are "My company encourages employees to join civic organizations that support our community," "My company gives adequate contributions to charities," "My company encourages partnerships with local business and schools," and "My company supports local sports and cultural activities."
Concerning organizational trust, responses were coded such that a high score would indicate a high degree of trust in one's employer. Trust was measured using the four items that focus on trust in the organization of the Organizational Trust Inventory's short form developed by [START_REF] Cummings | The Occupational Trust Inventory (OTI): Development and validation[END_REF] and adapted for the present study. The four items (α=.93) are "I globally trust my company," "I think that my company shows integrity," "I feel I can definitely trust my company," and "My company cares about the employees."
Employee cynicism was measured using the behavioral dimension (cynical criticism) of the scale developed by [START_REF] Brandes | Does organizational cynicism matter? Employee and supervisor perspectives on work outcomes[END_REF]. The behavioral component of employee cynicism relates to the practice of making harmful statements about the organization. The four items (α=.70) are all reverse-scored: "I do not complain about company matters," "I do not find faults with what the company is doing," "I do not make companyrelated problems bigger than they actually are," and "I focus on the positive aspects of the company rather than just focusing on the negative aspects."
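For illustration only, the snippet below shows how reverse-scored Likert items and scale reliabilities of the kind reported above are commonly computed; it is not the software used in this study and the function names are ours.

```python
import numpy as np

def reverse_score(items, scale_min=1, scale_max=5):
    """Reverse-code Likert items, e.g. so that higher scores indicate more cynicism."""
    return (scale_max + scale_min) - np.asarray(items, dtype=float)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_item_var / total_var)
```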
We controlled for company size and demographic variables such as the respondents' age and position, as these variables can function as potential individual antecedents to specific attitudes at work. After suppressing missing values and outliers, the sample comprised 366 respondents. To assess discriminant validity, we conducted confirmatory factor analysis (CFA) for all six variables. Following this, we employed the approach of Henseler, Ringle, and Sarstedt (2015), who argue that other techniques such as the Fornell-Larcker criterion and the assessment of cross-loadings are less sensitive methods and unable to detect a lack of discriminant validity successfully. They suggest a more rigorous approach based on the comparison of heterotrait-heteromethod and monotrait-heteromethod (HTMT) correlations. Hence, we applied the HTMT criterion to all constructs; all values are in the acceptable range, below the 0.85 threshold suggested by [START_REF] Henseler | A new criterion for assessing discriminant validity in variance-based structural equation modeling[END_REF]. The results show that the six-factor model fits the data well (X2 = 487.73, X2/DF = 2.06, RMSEA = .05, CFI = .94, TLI = .93 and SRMR = .05). CFI and TLI scores above 0.90 and SRMR and RMSEA scores below 0.07 are judged to confirm a good-fitting model [START_REF] Hair | Multivariate Data Analysis[END_REF].
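A minimal sketch of the HTMT computation is given below for readers who wish to reproduce this type of discriminant validity check; the analysis reported here was not necessarily produced with this code, and the function signature is an assumption of ours.

```python
import numpy as np

def htmt(data, blocks):
    """Heterotrait-monotrait ratio of correlations.

    data   : (n_obs, n_items) array of item scores.
    blocks : dict {construct name: list of column indices of its items}.
    Returns a dict {(construct_a, construct_b): HTMT value}; values below 0.85
    are taken as evidence of discriminant validity."""
    corr = np.abs(np.corrcoef(np.asarray(data, dtype=float), rowvar=False))

    def mean_within(idx):
        sub = corr[np.ix_(idx, idx)]
        off_diag = sub[~np.eye(len(idx), dtype=bool)]
        return off_diag.mean()                     # average monotrait-heteromethod correlation

    names = list(blocks)
    out = {}
    for a in range(len(names)):
        for b in range(a + 1, len(names)):
            ia, ib = blocks[names[a]], blocks[names[b]]
            hetero = corr[np.ix_(ia, ib)].mean()   # average heterotrait-heteromethod correlation
            out[(names[a], names[b])] = hetero / np.sqrt(mean_within(ia) * mean_within(ib))
    return out
```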
Both procedural and statistical methods were used to control for common method variance (CMV) [START_REF] Spector | Using self-report questions in OB research: A comment on the use of a controversial method[END_REF]. First, since the data were collected through an online survey, a question randomization option was used that showed questions to each respondent in a shuffled manner. Second, Harman's one-factor test was conducted with an unrotated factor solution. The test revealed an explained variance of 27.5%, well below the threshold of 50% suggested by [START_REF] Podsakoff | Common method biases in behavioral research: a critical review of the literature and recommended remedies[END_REF]. Harman's single factor was also run using CFA. According to [START_REF] Malhotra | Common method variance in IS Research: A comparison of alternative approaches and a reanalysis of past research[END_REF]p. 1867), "method biases are assumed to be substantial if the hypothesized model fits the data." Our single-factor model showed a poor data fit (GFI = .704; AGFI = .648; NFI= .639; IFI = .685; TLI = .652; RMR = .104 and RMSEA = .113), which confirms the nonexistence of CMV. Finally, we used a common latent factor (CLF) test and compared the standardized regression weights of all items for models with and without CLF. The differences in these regression weights were found to be very small (<0.200) which confirmed that CMV is not a major issue in our data (Gaskin, 2017). Factor scores were saved for each of the six variables and were further used in path analysis for mediation.
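A simple way to approximate Harman's one-factor check is to examine the share of variance captured by the first principal component of the item correlation matrix, as sketched below; this PCA-based shortcut is our illustration and differs from the unrotated factor solution reported above.

```python
import numpy as np

def harman_first_factor_share(data):
    """Share of total variance captured by the first principal component of the
    item correlation matrix; values above roughly 50% would point to a common
    method variance problem."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return eigvals.max() / eigvals.sum()
```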
Inspection of the variance inflation factor scores indicates that there are no instances of multicollinearity among any of the variables (the largest variance inflation factor is 2.2).
Table 2 notes: N=366; numbers on the diagonal are Cronbach's alpha; * p<0.05, ** p<0.01.
4-ANALYSIS AND RESULTS
Table 2 reports the means, standard deviations, and zero-order correlation coefficients for each of the variables in this study. Cronbach's alpha values are listed in the diagonal and range from .70 to .93.
Path analysis was run to test the direct and indirect effects of the CSR dimensions on employee cynicism via trust. First, a path analysis model with direct relationships between CSR and employee cynicism was run. The results showed that two of the CSR dimensions (economic [β = -.125; p = .034] and discretionary [β = -.167; p = .009] responsibility) had significant negative direct relationships with employee cynicism, while the relationships of the other two dimensions (legal [β = -.087; p = .164] and ethical [β = -.119; p = .056] responsibility) were found to be insignificant. Table 3 presents the results for hypotheses H1a to H1d and shows that H1a and H1d were supported, whereas H1b and H1c were not. Even though two of the direct paths (legal and ethical) were not found to be significant in our total effects model, further mediation analyses were deemed appropriate. Considering that a simple regression equation for the total effect of an independent variable on a dependent variable (X → Y) may yield a non-significant effect, e.g., due to suppression and confounding effects [START_REF] Mackinnon | Equivalence of the mediation, confounding, and suppression effect[END_REF], researchers are not required to establish a significant total effect before proceeding with tests of indirect effects [START_REF] Hayes | Beyond Baron and Kenny: Statistical Mediation Analysis in the New Millennium[END_REF].
Therefore, a mediation model was run in which trust was incorporated as the mediating variable.
To test the model, we used bootstrapping samples [START_REF] Preacher | Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models[END_REF]. When trust was included in the model, none of the CSR dimensions had a direct effect on employee cynicism, therefore we removed all the insignificant paths to reach an optimal model (shown in Fig. 1) with good fit indices.
Figure 1: Results of path analysis of study variables
Since two of the direct paths (economic and discretionary responsibilities) were found significant in the direct effects model, we compared the full mediation model to two alternative models. In the first alternative model (ALT1), a direct path from economic responsibility to cynicism was included, while in the second alternative model (ALT2) a direct path from discretionary responsibility to cynicism was included. The model fit indices of all compared models are presented in Table 4. The full mediation model fits the data well in comparison to the alternate models, as no significant difference in the alternate models was found using Chi-square difference tests. [START_REF] Hair | Multivariate Data Analysis[END_REF] suggest that, for a model with a difference of 1 degree of freedom, a Δχ2 of 3.84 or higher would be significant at the 0.05 level. Hence, the full-mediation model was retained with only the significant paths. The following fit indices were used to determine model adequacy and were all in the acceptable range: CMIN/df = 1.547; Tucker-Lewis Index = 0.986; comparative fit index = 0.996; root-mean-square error of approximation: 0.040; standardized root-mean-square residual: 0.0276. Based on the full mediation model, it can be concluded that all four CSR variables were found to have an indirect negative effect on employee cynicism mediated by employee trust, also referred to as an intervening variable 2 . More specifically, each indirect effect was significant as the lower-level and upper-level confidence intervals did not include zero. A summary of the results is shown in Table 5. H2, positing mediation, was tested with the bootstrapping resampling method. Following recommended procedures [START_REF] Preacher | Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models[END_REF]Hayes, 2008, Cheung, Lau, 2008), we used the bias-corrected bootstrapping method to create the resamples, using 95% confidence intervals. H2, proposing organizational trust as a mediator of the CSR-employee cynicism relationship, is supported for the four CSR dimensions. Thus, the negative relationship between CSR and employee cynicism is explained by employee trust in the organization. With regard to employee cynicism, the indirect effect of the economic dimension was -.055 (LL = -.09 to UL = -.01), that of the legal dimension was -.113 (LL = -.15 to UL = -.06), that of the ethical dimension was -.122 (LL = -.17 to UL = -.07), and that of the discretionary dimension was -.102 (LL = -.15 to UL = -.05). Overall, the explained variance for the dependent variable cynicism was found to be 20%.
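The logic of the bootstrapped indirect effects can be illustrated with the short resampling sketch below for a single mediator; it relies on ordinary least squares and a percentile interval rather than the bias-corrected interval and the full path model used in the study, and the function name is ours.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap of the indirect effect a*b in a simple X -> M -> Y
    mediation model, estimated by ordinary least squares. Returns the point
    estimate and a 95% confidence interval."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)

    def indirect(idx):
        xi, mi, yi = x[idx], m[idx], y[idx]
        a = np.polyfit(xi, mi, 1)[0]                          # M = a*X + const
        design = np.column_stack([xi, mi, np.ones(len(xi))])  # Y = c'*X + b*M + const
        b = np.linalg.lstsq(design, yi, rcond=None)[0][1]
        return a * b

    estimate = indirect(np.arange(n))
    boots = np.array([indirect(rng.integers(0, n, n)) for _ in range(n_boot)])
    lower, upper = np.percentile(boots, [2.5, 97.5])
    return estimate, (lower, upper)
```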
5-DISCUSSION
The objective of this paper was to test a model for how CSR influences employee cynicism via the mediating role of organizational trust. The study illustrates that perceived CSR activities decrease counterproductive behaviors such as employee cynicism through the trust they generate. In line with previous research, our study shows that employees' perceptions of their organization's CSR and their organizational leaders' commitment to it influence their behavior at work [START_REF] Folger | What is the relationship between justice and morality[END_REF][START_REF] Hansen | Corporate Social Responsibility and the Benefits of Employee Trust: A Cross-disciplinary Perspective[END_REF][START_REF] Rupp | Employee reactions to corporate social responsibility: An organizational justice framework[END_REF] and affect their level of cynicism in particular [START_REF] Evans | The Impact of Perceived Corporate Citizenship on Organizational Cynicism, OCB, and Employee Deviance[END_REF]. We demonstrate that:
1-Some dimensions of CSR are negatively related to employee cynicism. 2-Each of the four dimensions of CSR significantly impacts trust in the organization, which in turn reduces employee cynicism.
We identify the construct of employee cynicism as a negative behavior in organizations whereby employees tend to engage in disparaging and critical behavior consistent with the belief that the organization and its leaders lack integrity (Dean et al., 1998; [START_REF] Brandes | Locating Behavioral Cynicism at Work: Construct Issues and Performance Implications[END_REF]). The belief that the organization lacks integrity is central to this construct [START_REF] Andersson | Employee cynicism: an examination using a contract violation framework[END_REF]. Our results suggest that if employees perceive CSR policies to be applied consistently at the organizational level, criticism and witticisms from organizationally cynical employees will be reduced, as such policies help them believe that their organization exhibits integrity. Perceived CSR policies thus help prevent employees from seeing the organization's behavior as purely self-interested [START_REF] Andersson | Employee cynicism: an examination using a contract violation framework[END_REF][START_REF] Neves | Organizational Cynicism: Spillover Effects on Supervisor-Subordinate Relationships and Performance[END_REF].
The direct negative link between CSR perception and employee cynicism is validated for the economic and discretionary dimensions.
In our sample, the economic dimension has a significant negative impact on cynicism. This may be because the generation of profits by companies directly benefits employees, through incentive bonuses, and because their bargaining power increases with the company's profit margins. More satisfied employees are less likely to be cynical. This result seems to contradict Herzberg's (1987) motivation theory, according to which economic factors cannot satisfy employees. This may be due to the fact that our sample is French: because the unemployment rate in France is high (9%), having a well-paid job may indeed be enough of a motivating factor to decrease cynical behavior.
The legal dimension has no direct impact on cynicism. In our sample, the respondents do not value strict compliance with the law. Since they are well aware of practices such as tax optimization, they know that being legal does not always mean being ethical.
The impact of the ethical dimension exists but its significance is very weak, which demonstrates that employees' expectations of their organization and leaders go well beyond economic results and compliance with rules and procedures.
The discretionary dimension has a significant negative impact on cynicism. It shows that there is "no inherent contradiction between improving competitive context and making a sincere commitment to bettering society" (Porter, Kramer, 2002: 16). On seeing their company's commitment to the well-being of society, the employees in our sample decrease their potential counterproductive behaviors.
Another contribution of this paper is to emphasize the impact of perceived CSR through all four of its dimensions (economic, legal, ethical, and discretionary) on employees' behavior via trust. This is consistent with Carroll's (1979) primary aim, namely to reconcile businesses' responsibility to make a profit, with its other responsibilities (to be law-abiding, ethically oriented, and socially supportive). This empirical result is new to the literature: although a link between economic performance and CSR had been theoretically identified by many scholars [START_REF] Carroll | A Three-Dimensional Conceptual Model of Corporate Social Performance[END_REF][START_REF] Wartick | The Evolution of the Corporate Social Performance Model[END_REF], previous empirical findings reported a negative correlation between the economic dimension and all the other dimensions of responsibility [START_REF] Aupperle | Instrument Development and application in CSR[END_REF]). In our sample, employees feel a greater sense of trust in firms that are able to respond to their primary economic responsibility. Our empirical evidence may differ due to the current climate: since the economic crisis, respondents have required their organizations to perform well economically in order to sustain economic growth and employment while simultaneously taking care of the stakeholders weakened by the current crisis.
The impact of the legal dimension on organizational trust shows the extent to which individuals tend to assess what they consider to be fair and right in a subjective manner rather than by applying an objective principle of justice imposed by rules or institutions [START_REF] James | The social context of organizational justice: Cultural, intergroup, and structural effects on justice behaviors and perceptions[END_REF].
Compliance with procedures and rules can thus be viewed as a heuristic gauge that people use to evaluate the trustworthy nature of the company's responsibility and leadership.
The impact of the ethical dimension on organizational trust, which is the most significant of the four, demonstrates that employees infer information about their leaders' ethics based on what they can observe, such as perceptions about CSR activity [START_REF] Hansen | Corporate Social Responsibility, Ethical Leadership, and Trust Propensity: A Multi-Experience Model of Perceived Ethical Climate[END_REF]. Concerning the discretionary dimension, our study demonstrates that employees consider that a company supporting external stakeholders will potentially support internal ones [START_REF] Rupp | Employee reactions to corporate social responsibility: An organizational justice framework[END_REF], and philanthropic expenditure may lead stakeholders (including internal ones) to form more positive impressions of an organization and its leaders' integrity and trustworthiness [START_REF] Brammer | Corporate Reputation and Philanthropy: An Empirical Analysis[END_REF].
Combining the literatures on CSR and trust offers a deeper understanding of the social exchange mechanisms at work, as well as of the motivational processes that push employees to engage in counterproductive behaviors. Employees see CSR activities as a sign of the organizational leadership's intentions and use these to form an opinion about its trustworthiness.
If the company is perceived as making a positive contribution to society (Gond, Crane, 2010), employees will infer that its intentions are good and will therefore consider it to be trustworthy, as trust is a key attribute when determining employees' perceptions of top management's ethics [START_REF] Brown | Ethical leadership: A social learning perspective for construct development and testing[END_REF]. Their likelihood of being cynical will consequently decrease.
This result is consistent with several studies that find perceived measures of corporate citizenship to predict positive work-related attitudes such as organizational commitment [START_REF] Peterson | The relationship between perceptions of Corporate Citizenship and Organizational Commitment[END_REF]) and organizational citizenship behavior [START_REF] Hansen | Corporate Social Responsibility and the Benefits of Employee Trust: A Cross-disciplinary Perspective[END_REF], while reducing employee turnover intentions [START_REF] Hansen | Corporate Social Responsibility and the Benefits of Employee Trust: A Cross-disciplinary Perspective[END_REF].
The results of our empirical study also confirm those in previous research that establish a link between organizational trust and reactions to the organization and its leaders [START_REF] Aryee | Trust as a mediator of the relationship between organizational justice and work outcomes: test of a social exchange model[END_REF][START_REF] Dirks | Trust in Leadership: Meta-Analytic Findings and Implications for Research and Practice[END_REF][START_REF] Mayer | Trust in management and performance: Who minds the shop while the employees watch the boss?[END_REF]. The impact of perceived CSR on organizational trust shows that CSR has a clear corporate-level benefit. In other words, taking collective interests as a target can generate benefits at the corporate level. In particular, we have confirmed that organizational trust plays a fully-fledged mediating role between perceptions of CSR (in all four of its dimensions) and cynical behavior at work. Like former research that has shown that the implementation of a number of management practices such as employee participation practices, communication, empowerment, compensation, and justice is likely to encourage a sense of trust in the organizational leadership [START_REF] Miles | Participative Management : Quality Vs. Quantity[END_REF][START_REF] Nyhan | Increasing Affective Organizational Commitment in Public Organizations: The Key Role of Interpersonal Trust[END_REF], this paper shows that CSR boosts the employees' belief in their company's trustworthiness, which in turn increases trust. This leads them to be more open to the prospect of appearing vulnerable to the company and to its leaders. In addition, previous studies have shown that organizational trust acts as a partial mediator between employees' perceptions of CSR and work-related outcomes [START_REF] Hansen | Corporate Social Responsibility and the Benefits of Employee Trust: A Cross-disciplinary Perspective[END_REF]. Our findings show that all four dimensions of CSR (economic, legal, ethical, and discretionary) have an effect on employee cynicism and that this is fully mediated by organizational trust. The four dimensions of CSR seem to signal to the employees that the company and the leadership will treat internal stakeholders as fairly as they treat other external stakeholders, such as shareholders or local communities [START_REF] Rupp | Employee reactions to corporate social responsibility: An organizational justice framework[END_REF]. The mediation of trust is consistent with Hansen et al. (2011: 33), for whom "trust is the immediate or most proximate outcome of CSR activity."
These findings are in line with the growing body of research about the impact of CSR on consumer trust and the intention to buy [START_REF] Pivato | The impact of corporate social responsibility on consumer trust: the case of organic food[END_REF][START_REF] Castaldo | The missing link between corporate social responsibility and consumer trust: The case of fair trade products[END_REF][START_REF] Du | Corporate Social Responsibility and competitive advantage: Overcoming the Trust Barrier[END_REF][START_REF] Park | Corporate social responsibilities, consumer trust and corporate reputation: South Korean consumers' perspectives[END_REF][START_REF] Swaen | Impact of corporate social responsibility on consumer trust[END_REF]. The basic contention of this body of research is that the primary outcome of CSR activities is to create trust among consumers and that this trust influences consumers' buying intentions. We show that the same applies to employees: CSR directly influences trust, which in turn negatively influences cynicism.
To conclude, this study offers insights into the dynamics underlying internal value creation based on CSR. The results of the study show that leadership that visibly promotes and enacts CSR activities will impact employees' behavior towards the company and its leadership.
6-LIMITATIONS and FUTURE DIRECTIONS
The study has explored the relationship between employees' perceptions of CSR and their cynical behavior at work. Our research design involved a survey of a large sample of managers. This method may convey a respondent rationalization bias. Thus, future research should study whether employee cynicism can be reduced by an increased belief in corporate trustworthiness via CSR efforts using an experimental design that better captures respondents' actual behavior. Temporality [START_REF] Barbalet | Social Emotions: Confidence, Trust and Loyalty[END_REF] is another variable that may play a significant role in a social exchange relationship linking trust with other variables. While this research presents a social exchange context that assumes a temporal dimension, it overlooks evidence of the duration of the employee-organization relationship.
The role of perceived external prestige should also be investigated [START_REF] Herrbach | Exploring the role of perceived external prestige in managers' turnover intentions[END_REF]. CSR activities may influence employees' beliefs about how outsiders judge their organization's status and image, meaning that such an external source of information could also impact their trust in the organization and their cynical behavior at work. Should employees receive negative information from the external environment about their organization, they might react to this by developing cynicism in order to align their assessment of the organization with that of outsiders [START_REF] Frandsen | Organizational image, identification, and cynical distance: Prestigious professionals in a low-prestige organization[END_REF].
This being said, this study opens the door to a variety of organizational-level ways to reduce employee cynicism by pursuing different CSR activities in a genuine manner, which range from enhancing economic performance and complying with the law to developing an ethical posture and supporting the community, all of which increase organizational trust.
COMPLIANCE WITH ETHICAL STANDARDS
Authors' identifying information: The authors' identifying information is provided on the title page that is separate from the manuscript.
The belief that the organization lacks integrity (Andersson, 1996; Dean et al., 1998; Johnson and O'Leary-Kelly, 2003) and the perception of psychological contract violations (Pugh et al., 2003; Chrobot-Mason, 2003; Johnson and O'Leary-Kelly, 2003) are the main factors in employee cynicism.
As posited by [START_REF] Dean | Organizational Cynicism[END_REF], employee cynicism is an attitude composed of the three attitudinal dimensions of cognition, emotion, and conation. Following research by Johnson and O'Leary-Kelly (2003), [START_REF] Naus | Organizational Cynicism: extending the exit, voice, loyalty, and neglect model of employees' responses to adverse conditions in the workplace[END_REF], and [START_REF] Brandes | Locating Behavioral Cynicism at Work: Construct Issues and Performance Implications[END_REF], this study measured solely the conative dimension of the cynical attitude in order to focus exclusively on employee behavior. Future research should measure the overall attitude of cynicism, which would broaden our understanding of how perceived CSR affects employee reactions at work.

A number of lines of future research inquiry could complement this study. As we focused here on an organizational-level referent, future studies should explore the role of stakeholders in the employees' immediate environment, such as supervisors, colleagues, and peers [START_REF] Whitener | Managers as initiators of trust: An exchange relationship framework for understanding managerial trustworthy behavior[END_REF]. [START_REF] Lewicki | What is the role of trust in organizational justice? Handbook of organizational justice[END_REF] have shown, for instance, that trust in one's supervisor plays a central role in mediating the effects of interactional justice on attitudes and behaviors at work.
The perception of CSR is negatively related to employee cynicism.

CSR generates the belief among employees that the company has positive intentions and meets the more or less implicit demands of society. From employees' point of view, CSR conveys positive values and demonstrates a caring stance, thus generating the belief that the company is trustworthy. Prior research suggests that CSR perceptions impact a variety of employee attitudes and behaviors, including trust in organizational leadership [START_REF] Hansen | Corporate Social Responsibility and the Benefits of Employee Trust: A Cross-disciplinary Perspective[END_REF]. CSR becomes a key factor in establishing, maintaining, or improving a good relationship between company leaders and their employees ([START_REF] Persais | La RSE est-elle une question de convention?[END_REF]; McWilliams, Siegel, 2001). Meanwhile, Pivato et al. (2008, p. 3) identify trust as the "first result of a firm's CSR activities."
H1a: The economic dimension of CSR is negatively related to employee cynicism
H1b: The legal dimension of CSR is negatively related to employee cynicism
H1c: The ethical dimension of CSR is negatively related to employee cynicism
H1d: The discretional dimension of CSR is negatively related to employee cynicism
2.3-CORPORATE SOCIAL RESPONSIBILITY, ORGANIZATIONAL TRUST, AND EMPLOYEE CYNICISM
" Aguilera et al. (
[START_REF] Johnson | The effects of psychological contract breach and organizational cynicism: Not all social exchange violations are created equal[END_REF] examine employee cynicism as a reaction to social exchange violations in the workplace and find that cynicism stems from a breach in or violation of trust, meaning either reneging on specific promises made to the employees or flouting more general expectations within the framework of trust ([START_REF] Andersson | Employee cynicism: an examination using a contract violation framework[END_REF]; Andersson, Bateman, 1997; Johnson, O'Leary-Kelly, 2003).
Table 1 : Discriminant validity scores for study variables
Construct Economic Legal Ethical Discret Trust
Economic
Legal .250
Ethical .303 .376
Discretionary .264 .231 .317
Table 2 : Mean, standard deviation, correlation, and reliability for study variables
M SD 1 2 3 4 5 6 7 8 9
1. Size 5.16 1.41 -
2. Age 2.44 .75 .06 -
3. Position 2.57 1.47 .08 -.21 ** -
4. Economic 3.75 .74 .21 ** .13 * -.11 * (0,70)
5. Legal 4.09 .63 .12 * .13 * -.13 * .46 ** (0,75)
6. Ethical 3.73 .80 .29 ** .13 * -.09 .49 ** .60 ** (0,77)
7. Discret. 3.22 .89 .23 ** .09 -.06 .41 ** .40 ** .54 ** (0,81)
8. Trust 3.57 .85 .08 .10 -.14 ** .52 ** .60 ** .60 ** .49 ** (0,93)
9. Cynicism 2.42 .62 -.09 -.21 ** .09 -.32 ** -.33 ** -.34 ** -.30 ** -.47 ** (0,70)
Table 3 : Results of the direct effects model (hypothesis H1a to H1d)
Dependent variable
Cynicism
Independent variable Direct effect Upper Bound / Lower Bound P
Economic -.125 [-.011/-.230] .034
Legal -.087 [.036/-.225] .164
Ethical -.119 [.007/-.248] .056
Discretionary -.167 [-.044/-.278] .009
Table 4 : Goodness-of-Fit Indices for Alternative Structural Equation Models
Model χ 2 (df) Δ χ 2 (df) AGFI CFI NFI RMSEA SRMR
Full Mediation Model 6.186 (4) 0.969 0.996 0.990 0.040 0.0276
ALT1: Direct path from Economic to Cynicism 2.499 (3) 3.687 (1) 0.983 1.000 0.996 0.000 0.0149
ALT2: Direct path from Discretionary to Cynicism 2.508 (3) 3.678 (1) 0.836 1.000 0.996 0.000 0.0155
Note. The Δ χ 2 (df) was not found significant for either of the alternate models, therefore, full mediation model
was retained. Note. ALT = Alternative Model; AGFI= adjusted goodness-of-fit index; CFI = comparative fit index;
NFI = normed fit index; SRMR = standardized root-mean-square residual; RMSEA = root-mean-square error of
approximation
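As a worked illustration of the chi-square difference test reported above: comparing the full mediation model (χ² = 6.186, df = 4) with ALT1 (χ² = 2.499, df = 3) gives Δχ² = 6.186 − 2.499 = 3.687 with Δdf = 1, which falls below the 3.84 critical value at the 0.05 level, so freeing the direct path does not significantly improve fit and the more parsimonious full mediation model is retained; the same holds for ALT2 (Δχ² = 3.678).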
Table 5 : Results of the total mediation model via Trust (H2a to H2d)
Dependent variable
Cynicism
Independent variable Indirect effect (via trust) Upper Bound / Lower Bound
Economic -.055 [-.09/-.01]
Legal -.113 [-.15/-.06]
Ethical -.122 [-.17/-.07]
Discretionary -.102 [-.15/-.05]
These results concur with those of [START_REF] Bews | A Role for Business Ethics in Facilitating Trustworthiness[END_REF], [START_REF] Brown | Ethical leadership: A social learning perspective for construct development and testing[END_REF] and [START_REF] Xu | Ethical Leadership Behavior and Employee Justice Perceptions: the Mediating Role of Trust in Organization[END_REF], which suggest that genuine ethical managerial conduct enhances managers' trustworthiness, and with those of [START_REF] Mo | Linking Ethical Leadership to Employee burnout, Workplace deviance and Performance: testing the mediating roles of Trust in Leader and Surface Acting[END_REF], who found that the relationships between ethical leadership and employees' work outcome of deviant behavior were significantly mediated by trust in leaders.
http://www.uic.edu.hk/~kentsang/powerst/forbes-The%20Turnaround%20at%20Harley-Davidson.pdf
For discussion on mediating and intervening variables, see[START_REF] Hayes | Beyond Baron and Kenny: Statistical Mediation Analysis in the New Millennium[END_REF] and[START_REF] Mathieu | Clarifying conditions and decision points for mediational type inferences in Organizational Behavior[END_REF]
Conflict of interest: Author A declares that he/she has no conflict of interest. Author B declares that he/she has no conflict of interest. Author C declares that he/she has no conflict of interest. Author D declares that he/she has no conflict of interest.
Ethical approval: All procedures performed in this study that involved human participants were carried out in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent: Informed consent was obtained from all individual participants included in the study. |
03949322 | en | [
"sdv",
"sdv.gen",
"sdv.mhep"
] | 2024/03/04 16:41:24 | 2022 | https://amu.hal.science/hal-03949322/file/cvac191.pdf | Isabelle Plaisance
Panagiotis Chouvardas
Yuliangzi Sun
Mohamed Nemir
Parisa Aghagolzadeh
Sophie Shen
Jun Woo
Shim
Francesca Rochais
Rory Johnson
Nathan Palpant
Thierry Pedrazzini
email: [email protected]
A transposable element into the human long noncoding RNA CARMEN is a switch for cardiac precursor cell specification
Aims:
The major cardiac cell types composing the adult heart arise from common multipotent precursor cells. Cardiac lineage decisions are guided by extrinsic and cell-autonomous factors, including recently discovered long noncoding RNAs (lncRNAs). The human lncRNA CARMEN, which is known to dictate specification towards the cardiomyocyte (CM) and the smooth muscle cell (SMC) fates, generates a diversity of alternatively spliced isoforms.
Methods and Results:
The CARMEN locus can be manipulated to direct human primary cardiac precursor cells (CPCs) into specific cardiovascular fates. Investigating CARMEN isoform usage in differentiating CPCs represents therefore a unique opportunity to uncover isoform-specific function in lncRNAs. Here, we identify one CARMEN isoform, CARMEN-201, to be crucial for SMC commitment.
CARMEN-201 activity is encoded within an alternatively-spliced exon containing a MIRc short interspersed nuclear element. This element binds the transcriptional repressor REST (RE1 Silencing Transcription Factor), targets it to cardiogenic loci, including ISL1, IRX1, IRX5, and SFRP1, and thereby blocks the CM gene program. In turn, genes regulating SMC differentiation are induced.
Conclusions: These data show how a critical physiological switch is wired by alternative splicing and functional transposable elements in a long noncoding RNA. They further demonstrate the crucial importance of the lncRNA isoform CARMEN-201 in SMC specification during heart development.
INTRODUCTION
The mammalian heart is composed of several cell types that derive from mesodermal progenitor cells of the first and second heart field 1 . The distinct lineages arise from multipotent cardiovascular precursors [START_REF] Kattman | Multipotent flk-1+ cardiovascular progenitor cells give rise to the cardiomyocyte, endothelial, and vascular smooth muscle lineages[END_REF][START_REF] Moretti | Multipotent embryonic isl1+ progenitor cells lead to cardiac, smooth muscle, and endothelial cell diversification[END_REF][START_REF] Wu | Developmental origin of a bipotential myocardial and smooth muscle cell precursor in the mammalian heart[END_REF] . In this framework, very few studies have evaluated transcriptional regulation in primary human cardiac precursor cells (CPCs). We have previously derived clonal populations of CPCs from the fetal and the adult human heart [START_REF] Gonzales | Isolation of cardiovascular precursor cells from the human fetal heart[END_REF][START_REF] Plaisance | Cardiomyocyte lineage specification in adult human cardiac precursor cells via modulation of enhancer-associated long noncoding RNA expression[END_REF] . Their comparison allows for the dissection of the molecular mechanisms controlling cardiac specification and differentiation. Understanding the processes regulating cardiac cell programming and reprogramming provides also novel insights for the treatment of cardiovascular disease.
Next generation sequencing coupled to assessment of the epigenetic landscape has shown that mammalian genomes produce thousands of noncoding transcripts. Of these, long noncoding (lnc)RNAs represent the most heterogeneous and diverse class of RNA molecules. LncRNAs can be multiexonic, spliced, capped and polyadenylated 7 . Current estimates predict the existence of approximately 200,000 lncRNAs in human, of which very few have been fully characterized. LncRNAs are implicated in a variety of functions that define cell identity and behavior. They exert Cis- and Trans-regulatory functions, controlling chromatin remodeling and transcription. Cis-acting lncRNAs operate at a close vicinity to their site of transcription 8 . On the other hand, Trans-acting lncRNAs leave their site of transcription and exert functions at remote locations in the genome 9 . In this case, lncRNAs partner with proteins such as chromatin remodelers to modify the local chromatin environment at target locations. In the cardiovascular system, several lncRNAs have been identified as key players of cell differentiation and homoeostasis.
Braveheart, Fendrr and Meteor have been involved in mesoderm and cardiac differentiation [START_REF] Klattenhoff | Braveheart, a long noncoding RNA required for cardiovascular lineage commitment[END_REF][START_REF] Grote | The tissue-specific lncRNA Fendrr is an essential regulator of heart and body wall development in the mouse[END_REF][START_REF] Alexanian | A transcribed enhancer dictates mesendoderm specification in pluripotency[END_REF] . Myheart controls CM hypertrophy, and Wisper is a regulator of cardiac fibroblasts, critical for the development of fibrosis [START_REF] Han | A long noncoding RNA protects the heart from pathological hypertrophy[END_REF][START_REF] Micheletti | The long noncoding RNA Wisper controls cardiac fibrosis and remodeling[END_REF] . Then, SMILR and SENCR have been associated with SMC proliferation and differentiation [START_REF] Bell | Identification and initial functional characterization of a human vascular cell-enriched long noncoding RNA[END_REF][START_REF] Ballantyne | Smooth Muscle Enriched Long Noncoding RNA (SMILR) Regulates Cell Proliferation[END_REF] .
The way in which functions are encoded in lncRNAs' sequences remains enigmatic. It is thought that lncRNAs comprise modular assemblages of functional elements, composed of structural motifs that interact with proteins or other nucleic acids [START_REF] Kirk | Functional classification of long non-coding RNAs by k-mer content[END_REF] . Recent studies have implicated repetitive transposable elements as a source of functional lncRNA domains [START_REF] Carlevaro-Fita | Ancient exapted transposable elements promote nuclear enrichment of human long noncoding RNAs[END_REF][START_REF] Kapusta | Transposable elements are major contributors to the origin, diversification, and regulation of vertebrate long noncoding RNAs[END_REF] . In this context, lncRNA loci can produce a variety of transcripts through alternative splicing 7, 20 . It has been speculated that these alternative isoforms, by containing different combinations of exon sequences, thereby exert diverse functions. Some years ago, we identified CARMEN, a conserved lncRNA essential for cardiogenesis 21 . CARMEN has been previously referred to as MIR143HG because the locus hosts MIR-143 and MIR-145, two microRNAs (miRNAs) important for SMC differentiation [START_REF] Vacante | The function of miR-143, miR-145 and the MiR-143 host gene in cardiovascular development and disease[END_REF] . Nevertheless, the miRNA precursor represents only one CARMEN isoform among several others. All other isoforms are lncRNA splice variants, with no apparent coding potential, for which functions remain to be fully defined. We showed previously that three isoforms were involved in cardiac specification in human fetal CPCs 21 . Moreover, we reported that one isoform, referred to earlier as CARMEN-7 (formally CARMEN-201 or ENST00000505254), was differentially expressed in CPCs committing to the CM vs. the SMC lineage 6 . Interestingly, the CARMEN locus can be manipulated to direct CPCs into specific cardiovascular fates. Investigating CARMEN isoform usage in differentiating CPCs therefore represents a unique opportunity to uncover isoform-specific function in lncRNAs. Here, we demonstrate that CARMEN-201 controls specification into the SMC lineage in human CPCs, and that this function is encoded within an alternatively-spliced exon containing a MIRc short interspersed nuclear element.
METHODS
Methods are described in details in the Supplementary Information available online.
Human cardiac precursor cells (CPCs)
Fetal heart biopsies were collected at 5 weeks of gestation following abortion, and adult atrial appendages were obtained from patients undergoing cardiac surgery. CPCs were isolated by enzymatic digestion as previously described 5, 6 .
Human plasma samples
Plasma samples were collected from patients admitted to the Lausanne University Hospital with a diagnosis of myocardial infarction.
Study approval
The study was approved by the Lausanne University Hospital Ethics Committee and the Swiss Ethics Committee (Human cardiac precursor cells: Protocols 22/03 and 178/09; Human plasma from patients with myocardial infarction: Protocol PB_2018-00231 and 94/15), and was conducted according to the Declaration of Helsinki. Written informed consent was obtained from all patients included in the study.
GapmeR-mediated knockdown
CARMEN-201 silencing was obtained by adding CARMEN-201-specific GapmeRs targeting Exon 2 to
CPCs at a final concentration of 20 nM.
siRNA-mediated knockdown
siRNA transfection was performed using RNAiMax (ThermoFisher). CPCs were transfected with the indicated siRNA at a final concentration of 10 nM.
RNA extraction, RT-PCR and real time PCR
Total RNA from plasma and from cultured cells was extracted using the miRNeasy Serum/Plasma Advanced kit and the miRNeasy kit (Qiagen).
Absolute quantification of CARMEN isoform expression
pBluescript SK+ plasmids containing CARMEN-201, CARMEN-205 or CARMEN-217 cDNA were synthesized (GenScript, USA). Data were converted into transcript copies per cell, assuming 100% efficiency in conversion of RNA into cDNA.
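As an illustration of this conversion, the sketch below derives copy numbers for the plasmid standards and interpolates transcript copies per cell from the resulting standard curve; it is a generic absolute-quantification sketch under the stated 100% RT-efficiency assumption, not the authors' exact pipeline, and all variable names are hypothetical.

```python
# Minimal sketch of absolute quantification from a plasmid standard curve.
import numpy as np

AVOGADRO = 6.022e23
BP_MASS_G_PER_MOL = 650.0  # average molar mass of one double-stranded base pair

def plasmid_copies_per_ul(conc_ng_per_ul, plasmid_len_bp):
    grams_per_ul = conc_ng_per_ul * 1e-9
    return grams_per_ul * AVOGADRO / (plasmid_len_bp * BP_MASS_G_PER_MOL)

def copies_per_cell(cq_samples, cq_standards, copies_standards, cells_per_rt):
    # Fit Cq versus log10(copies) for the plasmid dilution series, then
    # interpolate the sample Cq values on that standard curve.
    slope, intercept = np.polyfit(np.log10(copies_standards), cq_standards, 1)
    copies = 10 ** ((np.asarray(cq_samples) - intercept) / slope)
    # Assumes 100% efficiency of RNA-to-cDNA conversion, as stated above.
    return copies / cells_per_rt
```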
Subcellular fractionation
Cells were harvested and lysed. The lysate was centrifuged at 3800g for 2 min. The supernatant represented the cytoplasmic fraction. The pellet was used to produce the nuclear fraction.
CRISPR/Cas9-mediated Exon 2 deletion
A CRISPR/Cas9-D10A Nickase-based strategy was used to delete Exon 2 in the CARMEN gene.
CRISPR-On activation
We used the CRISPR/dCas9-based Synergistic Activator Mediator (SAM) gain-of-function system to activate CARMEN isoform expression in fetal CPCs 23 .
RNA sequencing
Sequencing libraries were prepared according to Illumina RNA Seq library kit instructions. Libraries were sequenced with the Illumina HiSeq2000 (100bp; PE).
TRIAGE analysis
The Transcriptional Regulatory Inference Analysis from Gene Expression (TRIAGE) analysis, providing a means to identify cell type-specific regulatory genes has been described 24 .
Uniform Manifold Approximation and Projection (UMAP) density plots
We used the data provided by Asp et al., 2019 (accession number: European Genome-phenome Archive (EGA) EGAS00001003996). The UMAPs were generated using the Nebulosa package.
Lentiviral vectors
The SIN-cppt-CMV-EGFP-WHV plasmid was a kind gift of Dr.
Tandem mass spectrometry
For identifying CARMEN-201 protein partners, tryptic peptide mixtures were injected on an Ultimate RSLC 3000 nanoHPLC system (Dionex, Sunnyvale, CA, USA).
Western blotting
Proteins associated to biotinylated C-201 transcript were resolved by SDS-PAGE and electroblotted onto PVDF membranes (GE Healthcare).
RNA Immunoprecipitation (RIP)
The RIP experiment was conducted as described 14 .
RIP following by sequencing
RIP was performed using anti-REST IgG. REST-associated transcripts were purified using the RNeasy isolation kit (Qiagen). Sequencing libraries were prepared according to Illumina RNA Seq library kit instructions. Libraries were sequenced with the Illumina HiSeq2000 (100bp; PE).
Chromatin immunoprecipitation followed by real-time quantitative PCR (ChIP-qPCR)
ChIP-qPCR was performed using the Pierce magnetic ChIP Kit (Thermo Scientific) and the ChIPAb+ REST Kit (Millipore) according to the manufacturers' instructions.
Masson's trichrome staining
Paraffin tissue sections were also processed for Masson's trichrome staining and analyzed with a Zeiss Axioscan Z1 (Carl Zeiss).
Statistics
All data were collected from at least 3 independent experiments, performed at least in triplicate. Data throughout the paper are expressed as mean ± SEM. Statistical analysis: ANOVA with post-hoc Tukey tests.
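For illustration, a minimal sketch of this statistical workflow (one-way ANOVA followed by Tukey's HSD post-hoc comparisons) is given below; the analysis software actually used is not specified here, and the column names (value, group) are hypothetical.

```python
# Minimal sketch of the ANOVA / post-hoc Tukey workflow on replicate data.
# Assumes a DataFrame `df` with hypothetical columns: value (measurement)
# and group (experimental condition).
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def anova_with_tukey(df: pd.DataFrame, alpha: float = 0.05):
    groups = [g["value"].to_numpy() for _, g in df.groupby("group")]
    f_stat, p_value = f_oneway(*groups)                  # one-way ANOVA
    tukey = pairwise_tukeyhsd(df["value"], df["group"], alpha=alpha)
    return f_stat, p_value, tukey

# Example usage:
# f, p, tukey = anova_with_tukey(df)
# print(tukey.summary())  # pairwise comparisons with adjusted p-values
```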
RESULTS
CARMEN isoforms are differentially expressed in CPCs committing to the CM vs. the SMC fate.
To study the importance of CARMEN isoforms in cardiac differentiation, we took advantage of primary CPCs isolated from the human heart. We isolated CPCs from the fetal heart at 5 weeks of gestation, hereafter referred to as fetal CPCs, and from adult atrial appendages, hereafter referred to as adult CPCs (Fig. 1a-b; Supplemental Fig. S1b).
Absolute quantification confirmed C-201 represented the main isoform in adult CPCs but the least abundant in fetal CPCs (Fig. 1d), suggesting its involvement in SMC differentiation. All CARMEN isoforms but the miRNA precursor (C-215) were more abundant in the nucleus than the cytoplasm, a feature compatible with the postulated function of lncRNAs as regulators of gene expression (Fig. 1e; Supplemental Fig. S1c).
CARMEN-201 controls SMC specification via its second exon
We next evaluated the involvement of C-201 in SMC differentiation using a knockdown approach.
Antisense oligonucleotides (GapmeRs) were designed to target the C-201 second exon, uniquely present in this isoform (Fig. 1c). C-201 was downregulated in adult CPCs following GapmeR transfection (Supplemental Fig. S2a). The anti-C-201 GapmeRs affected no other isoforms, demonstrating the specificity of the approach. C-201 silencing had no effects on differentiation of fetal CPCs into CMs but completely blocked the capacity of adult CPCs to produce SMCs (Supplemental Fig. S2b and c). To explore the importance of the C-201 second exon in SMC specification, we produced adult CPCs lacking the exon using CRISPR/Cas9 deletion. Guide (g)RNAs were designed to remove the second exon without affecting any other exons (Supplemental Fig. S3a). C-201 Exon 2-deleted adult CPC clones were derived.
Importantly, the C-201 isoform was still expressed in adult CPCs lacking C-201 Exon 2. However, the transcript was reduced by the size of the second exon (Supplemental Fig. S3b). Endogenous C-201 expression was similarly induced in both wild-type and deleted adult CPC (201Ex2) clones during differentiation as detected by using a primer pair amplifying the 3' end of the transcript (Fig. 1f; primer pair P1). The deletion of the second exon was verified using two primer pairs spanning the exon (i.e. P2; P3).
Next, we evaluated the capacity of deleted adult CPCs to produce SMCs. Measurement of marker gene expression as well as immunostaining demonstrated that adult CPCs lacking C-201 Exon 2 lost their ability to differentiate into SMCs (Fig. 1f-h).

CARMEN-201 induction is sufficient to trigger a SMC gene program in undifferentiated fetal CPCs

To evaluate the capacity of C-201 to redirect fetal CPCs into the SMC lineage, we used a CRISPR-On approach. We targeted transcriptional activators (dCas9-VP64; MS2-p65-HSF1) 200 bp upstream of the C-201 transcriptional start site (TSS) via expression of a modified gRNA containing MS2 aptamers (SAM system 23 ). Endogenous C-201 expression was downregulated in fetal CPCs in the absence of gRNA but markedly increased when the gRNA was expressed (Fig. 2a). Compared to the large C-201 induction, the other CARMEN isoforms, as well as the two hosted miRNAs, were marginally activated (Fig. 2a; Supplemental Fig. S3c). We tested therefore the effects of C-201 manipulation on the fate of normally cardiogenic fetal CPCs. Strikingly, cells with forced C-201 transcription produced large amounts of SMCs, indicating that C-201 expression was sufficient to adopt a SMC fate (Fig. 2b-c).
Production of CMs was minimally affected, likely reflecting differentiation of untransfected fetal CPCs (Supplemental Fig. S3d-e). Globally, expression of C-201 was found to be necessary and sufficient for inducing SMC specification in CPCs.
The CARMEN-201 second exon contains a functional transposable element that drives SMC commitment
The results above prompted us to evaluate the role of the C-201 second exon in SMC specification. We looked at its primary structure (397 nucleotides) and detected a short interspersed nuclear element (SINE), which was identified as a 126 nucleotide-long Mammalian-wide Interspersed Repeat (MIR)c element. Of note, the exon is highly conserved in primates but not found in other species (Fig. 2d). Intriguingly, MIRc is part of a catalog of predicted Repeat Insertion Domains of LncRNAs (RIDLs) promoting nuclear localization 18 , a feature consistent with the pronounced nuclear enrichment of C-201 (Fig. 1e). The exon also contains a partial ALU sequence.
To study the possible role of the MIRc element in determining SMC specification, we produced two lentiviral vectors for overexpressing the entire Exon 2 in fetal CPCs (Supplemental Fig. S4a). The first version contained wild-type MIRc sequences (C-201 Ex2) whereas, in the second vector, the whole MIRc sequence was scrambled (C-201 mutEx2), thus maintaining length and sequence composition (Supplemental Fig. S4a). Endogenous C-201 expression was downregulated in differentiating fetal CPCs as expected (Fig. 2e; P1). However, significant wild-type and mutated C-201 Exon 2 expression was measured in the respective transduced groups as judged by using Exon 2-specific primers (P3). We next evaluated SMC and CM differentiation (Fig. 2f-g; Supplemental Fig. S4b-d). Non-transduced fetal CPCs differentiated into CMs. In sharp contrast, overexpression of wild-type C-201 Exon 2 forced fetal CPCs to adopt a SMC fate, as evidenced by expression of SMC markers and immunostaining. Importantly, when the MIRc element was mutated (C-201 mutEx2), SMC differentiation was not observed, supporting a critical role for this transposable element in the capacity of C-201 to direct CPCs into the SMC lineage.
Identification of upstream regulators of CM and SMC specification in human CPCs
To better understand the processes leading to a switch in cell identity, we profiled the CPC transcriptomes under different experimental conditions. Principal component analysis (PCA) was conducted to evaluate differentiation when C-201 Exon 2 was expressed and not expressed (Fig. 3a). Samples of differentiating fetal CPCs (None) revealed temporal changes (d0 / Expansion; d1; d7) characterizing CM specification. In contrast, fetal CPCs overexpressing wild-type C-201 Exon 2 deviated significantly from the original differentiation track. Importantly, CPC samples with overexpression of the MIRc-mutated C-201 Exon 2 were transcriptionally similar to untransfected samples, indicating again that the transposable element was necessary for SMC commitment. We next analyzed the transcriptomic data in detail (Fig. 3b; Supplemental Fig. S4e). Gene Ontology analysis of associated biological pathways (GO BP) using all identified TRIAGE candidates validated their functional roles in CM and SMC commitment (Supplemental Fig. S5a).
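For illustration, the following is a minimal sketch of the sample-level PCA step described above; it assumes a (samples × genes) matrix of normalized expression values and is not the authors' exact analysis code.

```python
# Minimal sketch of sample-level PCA for comparing differentiation trajectories.
# `expr` is a (samples x genes) array of normalized, log-transformed counts.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def sample_pca(expr: np.ndarray, n_components: int = 2):
    scaled = StandardScaler().fit_transform(expr)   # scale each gene
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(scaled)              # samples projected on PC1/PC2
    return coords, pca.explained_variance_ratio_
```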
To further investigate the association of CARMEN expression with blood vessel development in vivo, we used data reporting the comprehensive transcriptional analysis of the embryonic human heart at the single-cell level at different stages of gestation 28 . To understand how CARMEN expression was related to gene network changes, we performed a correlation analysis of CARMEN against all genes in all cells across all developmental time points (Fig. 3e). CARMEN abundance was associated preferentially with pericyte and SMC gene programs, and not with the expression of CM or endothelial cell markers. We next reanalyzed data generated at 6.5 weeks post-conception 28 , an important point in development corresponding to the formation of the cardiac vasculature. Newly generated Uniform Manifold Approximation and Projection (UMAP) density plots revealed lineage-specific markers, reflecting the cellular diversity of the embryonic human heart, including epicardial cells (WT1; TCF21; TBX18), pericytes (MCAM; CSPG4; ACTA2; PDGFRB), and SMCs (GATA6; TAGLN; CNN1; MYH11) (Fig. 3f). CARMEN was found expressed preferentially in these cells, substantiating an important role for this locus in pericyte and SMC specification from the epicardium during development of the human heart. In contrast, CMs and endothelial cells, marked by TBX5, GATA4, NKX2-5, MYH6 and PECAM, KDR, did not express CARMEN at this developmental stage (Supplemental Fig. S5b). Interestingly, ALDH1A2 and the COUP transcription factor NR2F2, critical for atrial identity, were also expressed in CARMEN-expressing cells.

The MIRc element is a binding module for the RE1 Silencing Transcription Factor

Many lncRNAs function through interacting with proteins. To identify C-201 protein partners, we performed a RNA pulldown assay. Biotinylated C-201 Exon 2 was used as a bait to purify C-201-associated proteins from adult CPC lysates. As control, we used a mutated C-201 Exon 2 lacking the MIRc element. Proteins were identified by mass spectrometry. Three proteins were detected as significantly associated to C-201 Exon 2, namely the transcriptional repressor RE1 Silencing Transcription Factor (REST; aka Neuron Restrictive Silencer Factor), the RNA methyltransferase NOP2/Sun RNA Methyltransferase 6 (NSUN6), and the Replication Protein A1 (RPA1), a protein implicated in stabilization of single-stranded DNA (Supplemental Fig. S6a; Supplemental Table S2). We first confirmed the association of each protein with C-201 by pulling down the full C-201 transcript and quantifying the amount of bound proteins by Western blotting (Fig. 4a-c; see full unedited gels in Supplementary Information). An antisense C-201 transcript was used as control. The results demonstrated the specific interaction of C-201 with REST, NSUN6 and RPA1 in proliferating and differentiating adult CPCs. We next performed a RNA immunoprecipitation assay (RIP) using antibodies directed against REST, NSUN6 and RPA1 respectively (Fig. 4d-f). Quantitative measurement of bound CARMEN isoforms confirmed the interaction of C-201 with REST, NSUN6 and RPA1 during CPC expansion and differentiation. No other CARMEN isoforms were found associated to REST (Supplemental Fig. S6b). In contrast, small amounts of C-205 were detected as bound to NSUN6, and C-217 appeared to interact with both NSUN6 and RPA1 (Supplemental Fig. S6c-d). Finally, to determine whether REST possessed intrinsic propensity to bind MIRc-containing transcripts, we performed a REST RIP coupled to RNA profiling. REST-bound transcripts were found significantly enriched in sequences containing a MIRc element as compared to transcripts not bound by REST with similar length distribution and orientation. Nevertheless, the enrichment is also higher when exploring repeat-containing genes in general, suggesting that global REST binding to RNA molecules could require additional transposable elements (Supplemental Fig. S6e; Supplemental Table S3).
To evaluate the role of REST in C-201-mediated CPC specification, we first used a knockdown approach. Fetal CPCs were transfected with control or REST siRNA, and induced to differentiate into SMC following C-201 Exon 2 overexpression (Fig. 4g-h). Under control conditions, C-201 Exon 2 expression forced CPCs to adopt a SMC specification. In sharp contrast, REST knockdown abolished the capacity of C-201-expressing CPCs to commit to the SMC lineage. To confirm these results, we tested the effects of REST silencing in adult CPCs spontaneously differentiating into SMC (Supplemental Fig. S6f).
Again, in the absence of REST, adult CPCs were unable to produce a SMC progeny. Interestingly, C-201 appeared downregulated in differentiating REST-deficient CPCs. This was also confirmed using RNA BaseScope in situ hybridization in adult CPCs (Supplemental Fig. S6g). The resetting of C-201 expression following REST knockdown therefore mimicked what was observed in differentiating fetal CPCs (Supplemental Fig. S1a-b). Moreover, the cellular distribution of the C-201 isoform was modified following REST silencing (Supplemental Fig. S6h). Significant cytoplasmic enrichment was evident in the absence of REST, in contrast to what was measured under basal conditions. This observation suggested therefore that REST, which carries a nuclear localization signal, might contribute to retaining C-201 in the nucleus via its capacity to bind the MIRc element in the second exon.
CARMEN-201 inhibits the CM fate via REST-mediated repression of cardiogenic transcription factor expression
The association of C-201 with REST suggested a mechanism involving the targeting of the repressor to important regulatory loci to control cell fate in differentiating CPCs. In addition, RPA1, a C-201 protein partner, has been implicated in RNA:DNA triple helix stabilization 29 , indicating that C-201 could interact with DNA sequences at target promoters. Thus, we took advantage of Triplex Domain Finder (TDF), an application developed to detect DNA binding domains (DBDs) in lncRNAs 30 . TDF identifies also the DNA regions bound by the selected lncRNAs, i.e. gene promoters containing binding sites for the lncRNA DBDs. Because REST had been associated with repression, we sought to identify C-201 target promoters within the list of downregulated genes following C-201 Exon 2 overexpression. Several DBDs were predicted in C-201, in particular in the sequences spanning the second exon (Fig. 5a).
As control, we performed a similar analysis for C-205 and C-217. The C-205 transcript was found to contain distinct DBDs (Fig. 5a) whereas C-217 was not predicted to contain significant DBDs (not shown).
In total, 447 gene promoters were identified as potentially bound by C-201, and 387 by C-205 respectively. Among those, 337 were uniquely associated with C-201. These genes were related to GO Biological Processes defining striated muscle contraction (Fig. 5b). In order to identify relevant targets of C-201/REST action, we crossed the list of C-201-bound genes as predicted by TDF with the list of TRIAGE candidates and the list of validated cardiac genes (Human Protein Atlas - ENSEMBL) (Fig. 5c; Supplemental Table S4). Hypergeometric tests explored the significance of the overlaps and revealed four primary candidates: IRX1, IRX5, ISL1 and SFRP1. ISL1 is a member of the LIM homeodomain family of transcription factors, crucial for the development of the SHF. The two Iroquois homeobox transcription factors IRX1 and IRX5 have been involved in developmental patterning in the embryonic heart. Finally, SFRP1 is a modulator of the WNT pathway that plays important roles in cardiac specification and differentiation. Importantly, single-cell analysis demonstrated that these factors were not expressed in CARMEN-expressing cells in the embryonic human heart at 6.5 weeks of gestation (Supplemental Fig. S5b). Importantly, all four genes contained REST binding sites as determined by chromatin immunoprecipitation followed by sequencing (ChIP-Seq) in a study interrogating REST binding in various cell types (Supplemental Fig. S7a; Gene Expression Omnibus: GSM803369; GSM1010735; GSM1010804) 31 . We evaluated therefore REST occupancy at the promoters of IRX1, IRX5, ISL1 and SFRP1 in differentiating fetal and adult CPCs by ChIP-quantitative real-time (q)PCR. As expected, REST occupied the promoters of the different candidate genes solely in adult CPCs expressing C-201, and not in fetal CPCs (Supplemental Fig. S7b). In these experiments, we used GAPDH as a negative control (REST occupancy in neither fetal nor adult CPCs) and SYN1 as a positive control (REST occupancy in both fetal and adult CPCs). Then, to formally demonstrate the dependence of REST targeting to IRX1, IRX5, ISL1 and SFRP1 on C-201 action, we performed an additional experiment in adult CPCs with or without C-201 silencing. Consistently, REST occupancy at candidate promoters in adult CPCs was blunted following GapmeR-mediated C-201 knockdown (Figure 5d).
To validate the relevance of ISL1, IRX1, IRX5 and SFRP1 in CPC specification, we first determined expression in fetal CPCs induced to differentiate into SMC following forced C-201 expression using CRISPR-On (Fig. 5e). Each candidate was downregulated after induction of C-201 expression. We then measured expression in adult CPC clones lacking C-201 exon 2 (201Ex2), i.e. not able to activate a SMC gene program. ISL1, IRX1, IRX5 and SFRP1 expression was restored in these cells during differentiation as compared to what was observed in wild-type cells (Fig. 5f). Furthermore, we evaluated expression after manipulating REST. We observed re-expression of the four factors in differentiating adult CPCs after REST knockdown (Supplemental Fig. S7c). Moreover, REST silencing also allowed re-expression of ISL1, IRX1, IRX5 and SFRP1 in fetal CPCs overexpressing C-201 Exon 2 (Fig. 5g).
Altogether, these findings supported the conclusion that the four candidates were controlled by C-201 via REST-mediated repression.
ISL1, IRX1, IRX5 and SFRP1 silencing promotes the SMC fate
We next proceeded to validate the importance of ISL1, IRX1, IRX5 and SFRP1 in controlling specification into the CM vs. the SMC lineage. To mimic REST-mediated repression, we used a siRNA approach to knock down each candidate in fetal CPCs normally committing to the CM lineage. We first evaluated the effects of individual factor silencing on the capacity of fetal CPCs to produce a functional progeny. Knocking down either IRX1, IRX5 or SFRP1 did not affect other candidate gene expression but ISL1 knockdown slightly decreased IRX1, IRX5 and SFRP1 levels (Supplemental Fig. S8a-d), suggesting ISL1 lies upstream of these factors in the cardiac regulatory network, in accordance with its role as a pioneer transcription factor in the developing heart 32 . We then investigated re-specification of fetal CPCs into the SMC fate following siRNA-mediated silencing of each factor individually or in combination. In differentiating fetal CPCs, knocking down either IRX1, IRX5, ISL1 or SFRP1 restored epicardial gene expression, i.e. WT1, TCF21 and TBX18, confirming that downregulation of these critical cardiogenic factors was a mandatory step in reprogramming CPCs into the smooth muscle fate (Fig. 6a-d). In addition, individual factor silencing also resulted in the re-expression of GATA6, HAND2, PDGFRA and TBX20 (Supplemental Fig. S8e-h), which are known to mark bipotential cardiac precursors giving rise to CMs and SMCs 33 . Nevertheless, manipulating each candidate separately had little impact on SMC gene expression (Supplemental Fig. S8i-l). In fact, ISL1 knockdown had even a negative effect on late SMC marker expression. This suggested that ISL1 operated at the onset of CPC specification but was also necessary for late-stage differentiation.
We tested therefore combinations of siRNAs targeting IRX1, IRX5 and SFRP1 (Fig. 6e). As a positive control for the activation of the SMC gene program, C-201 Exon 2-overexpressing fetal CPCs were included in the experiment. Each combination was associated with a large induction of epicardial and SMC gene expression as compared to individual knockdown, with maximal impact achieved when all three factors were downregulated simultaneously. This manipulation was as potent as C-201 Exon 2 overexpression in inducing SMC genes in normally cardiogenic CPCs. Accordingly, massive SMC differentiation was observed by immunostaining following IRX1, IRX5 and SFRP1 knockdown (Fig. 6f).
Interestingly, commitment occurred at the expense of the cardiogenic lineage (Fig. 6e and f) but had no impact on endothelial cell production (not shown). Our data indicated therefore that IRX1, IRX5 and SFRP1 downregulation was sufficient to redirect fetal CPCs into the epicardial and the SMC lineages.
Nevertheless, as mentioned above, ISL1 expression appeared necessary during SMC differentiation. To formally demonstrate this point, we performed an additional experiment in which the four factors were silenced together (Supplemental Fig. S8m). In this case, ISL1 silencing produced a slight negative effect on SMC marker expression induced by combined IRX1, IRX5 and SFRP1 knockdown, sustaining a role for ISL1 in the late stage of SMC differentiation.
Interestingly, manipulating IRX1, IRX5 or SFRP1 had a striking effect on CARMEN isoform expression (Fig. 6g). Indeed, while endogenous C-201 was downregulated during specification in differentiating fetal CPCs, its expression was reactivated after IRX5 and SFRP1 knockdown, and even more so when IRX5 and SFRP1 were silenced together, suggesting C-201 was negatively regulated by the two cardiogenic factors. Remarkably, C-217 expression demonstrated a mirror image, consistent with coordinated regulation of the two isoforms and suggesting a switch might operate during SMC specification. C-205 was not modulated under these different conditions.

CARMEN-201 expression is increased in response to myocardial infarction in humans

In an attempt to evaluate the relevance of our findings in disease, we queried the association of CARMEN with cardiovascular traits using CTG-VIEW (https://view.genoma.io). We identified important phenotypes related to cardiovascular conditions as strongly associated with CARMEN (Fig. 7a). This prompted us to investigate whether C-201 was differentially expressed in the damaged myocardium. We first used RNA BaseScope in situ hybridization to localize C-201 expression in the human heart. Samples were collected from explanted hearts of transplant patients, and expression was compared in CMs vs.
mural cells (Fig. 7b-e). C-201 was found uniquely expressed in mural cells of large coronary vessels.
Immunostaining for VIMENTIN (marking endothelial cells and fibroblasts) and smooth-muscle myosin heavy chain (SMMHC; marking SMCs) supported C-201 expression being associated primarily with SMCs (Fig. 7f; Supplemental Fig. S8n). C-201 expression seemed equally distributed in the ventricular and atrial vasculature. Next, we measured C-201 expression in the blood of patients experiencing acute coronary syndrome, with no prior history of cardiac disease. Plasma samples were obtained during angioplasty that took place less than 1 hour after myocardial infarction, and at 24 and 48 hours thereafter (Figure 7g).
Individuals were classified based on the presence or absence of ST elevation, namely STEMI and non-STEMI (NSTEMI). C-201 was not expressed immediately after infarction, suggesting that the transcript was not induced under basal conditions. However, the amounts of transcript dramatically increased after one and two days. Importantly, circulating C-201 concentrations were more elevated in STEMI vs. non-STEMI patients. Altogether, these data suggested that C-201 was expressed in large vessels of the heart and responded acutely to hemodynamic stress, with expression proportional to the severity of the disease.
DISCUSSION
In this study, we characterized for the first time the role of lncRNA isoforms in cell fate determination through a systematic examination of the human CARMEN locus. In primary human CPCs committing to the SMC lineage, the C-201 isoform associates with REST via its MIRc element, targets the repressor to important cardiogenic loci, namely IRX1, IRX5, SFRP1 and ISL1, represses their expression, and thereby blocks the CM gene program, allowing the SMC gene program to be induced.

Finally, C-201 could represent an interesting biomarker for assessing the extent of cardiovascular damage in various pathological situations.

Altogether, this work demonstrates how a biological switch is encoded in lncRNA sequence to regulate cardiovascular specification. We have linked two key phenomena, namely alternative splicing and the presence of deeply-conserved transposable elements. LncRNAs display far greater levels of alternative splicing, although it has not been clear whether this reflects relaxed constraint or regulated production of isoforms with distinct functions 25 . Here, we have shown an example where these two processes converge to produce functional transcript isoforms, and provided the first physiological role for a transposable element acting via a lncRNA during heart development.

FUNDING

This work was supported by grants from the Swiss National Science Foundation (T.P.; Grant No. CRSII5-1_173738 and 31003A_182322).
7 .of transcription 8 .
78 Current estimates predict the existence of approximately 200,000 lncRNAs in human, of which very few have been fully characterized. LncRNAs are implicated in a variety of functions that define cell identity and behavior. They exert Cis-and Trans-regulatory functions, controlling chromatin remodeling and transcription. Cis-acting lncRNAs operate at a close vicinity to their site On the other hand, Trans-acting lncRNAs leave their site of transcription and exert functions at remote locations in the genome 9
5 variety of transcripts through alternative splicing 7 , 20 . 21 .
572021 It has been speculated that these alternative isoforms, by containing different combinations of exon sequences, thereby exert diverse functions. Some years ago, we identified CARMEN, a conserved lncRNA essential for cardiogenesis CARMEN has been previously referred to as MIR143HG because the locus hosts MIR-143 and MIR-145, two microRNAs (miRNAs) important for SMC differentiation[START_REF] Vacante | The function of miR-143, miR-145 and the MiR-143 host gene in cardiovascular development and disease[END_REF]
Downloaded from https://academic.oup.com/cardiovascres/advance-article/doi/10.1093/cvr/cvac191/6941187 by Universitätsbibliothek Bern user on 21 December 2022 CVR-2022-0360 11 expression as well as immunostaining demonstrated that adult CPCs lacking C-201 Exon 2 lost their ability to differentiate into SMCs (Fig. 1f-h). CARMEN-201 induction is sufficient to trigger a SMC gene program in undifferentiated fetal CPCs To evaluate the capacity of C-201 to redirect fetal CPCs into the SMC lineage, we used a CRISPR-On approach. We targeted transcriptional activators (dCas9-VP64; MS2-p65-HSF1), 200 bp upstream of the C-201 transcriptional start site (TSS) via expression of a modified gRNA containing MS2 aptamers (SAM system; 23
sequence was scrambled (C-201 mutEx2), thus maintaining length and sequence composition (Supplemental Fig.S4a). Endogenous C-201 expression was downregulated in differentiating fetal CPCs as expected (Fig.2e; P1). However, significant wild-type and mutated C-201 Exon 2 expression was measured in the respective transduced groups as judged by using Exon 2-specific primers (P3). We next evaluated SMC and CM differentiation (Fig.2f-g; Supplemental Fig.S4b-d). Non-transduced fetal CPCs differentiated into CMs. In sharp contrast, overexpression of wild-type C-201 Exon 2 forced fetal CPCs to adopt a SMC fate, as evidenced by expression of SMC markers and immunostaining. Importantly, when the MIRc element was mutated (C-201 mutEx2), SMC differentiation was not observed, supporting a critical role for this transposable element in the capacity of C-201 to direct CPCs into the SMC lineage.
CPCs (None) revealed temporal changes (d0 / Expansion; d1; d7) characterizing CM specification. In contrast, fetal CPCs overexpressing wild-type C-201 Exon 2 deviated significantly from the original differentiation track. Importantly, CPC samples with overexpression of the MIRc-mutated C-201 Exon 2 were transcriptionally similar to untransfected samples, indicating again that the transposable element was necessary for SMC commitment.
14 (
14 corresponding to the formation of the cardiac vasculature. Newly generated Uniform Manifold Approximation and Projection (UMAP) density plots revealed lineage-specific markers, reflecting the cellular diversity of the embryonic human heart including epicardial cells (WT1; TCF21; TBX18), pericytes MCAM; CSPG4; ACTA2; PDGFRB), and SMCs (GATA6; TAGLN; CNN1; MYH11) (Fig.3f). CARMEN was found expressed preferentially in these cells, substantiating an important role for this locus in pericyte and SMC specification from the epicardium during development of the human heart. In contrast, CMs and endothelial cells, marked by TBX5; GATA4; NKX2-5; MYH6 and PECAM; KDR, did not expressed CARMEN at this developmental stage (Supplemental Fig.S5b). Interestingly, ALDH1A2 and the COUP transcription factor NR2F2, critical for atrial identity, were also expressed in CARMEN-expressing cells. The MIRc element is a binding module for the RE1 Silencing Transcription Factor Many lncRNAs function through interacting with proteins. To identify C-201 protein partners, we performed a RNA pulldown assay. Biotinylated C-201 Exon 2 was used as a bait to purify C-201associated proteins from adult CPC lysates. As control, we used a mutated C-201 Exon 2 lacking the MIRc element. Proteins were identified by mass spectrometry. Three proteins were detected as significantly associated to C-201 Exon 2, namely the transcriptional repressor RE1 Silencing Transcription Factor (REST; aka Neuron Restrictive Silencer Factor), the RNA methyltransferase NOP2/Sun RNA Methyltransferase 6 (NSUN6), and the Replication Protein A1 (RPA1), a protein implicated in stabilization of single-stranded DNA (Supplemental Fig. S6a; Supplemental TableS2). We first confirmed the association of each protein with C-201 by pulling down the full C-201 transcript and quantifying the amount of bound proteins by Western blotting (Fig.4a-c; see full unedited gels in Supplementary Information). An antisense C-201 transcript was used as control. The results demonstrated the specific interaction of C-201 with REST, NSUN6 and RPA1 in proliferating and differentiating adult CPCs. We next performed a RNA immunoprecipitation assay (RIP) using antibodies directed against REST, NSUN6 and RPA1 respectively (Fig.4d-f). Quantitative measurement of bound CARMEN isoforms confirmed the interaction of C-201 with REST, NSUN6 and RPA1 during CPC expansion and differentiation. No other CARMEN isoforms were found associated to REST (Supplemental Fig.S6b). In contrast, small amounts of C-205 were detected as bound to NSUN6, and C-217 appeared to interact with both NSUN6 and RPA1 (Supplemental Fig.S6c-d). Finally, to determine whether REST possessed intrinsic propensity to bind MIRc-containing transcripts, we performed a REST RIP coupled to RNA profiling. REST-bound transcripts were found significantly enriched in sequences containing a MIRc element as compared to transcripts not Downloaded from https://academic.oup.com/cardiovascres/advance-article/doi/10.1093/cvr/cvac191/6941187 by Universitätsbibliothek Bern user on 21 December 2022 CVR-2022-0360 15
29 ,
29 repressor to important regulatory loci to control cell fate in differentiating CPCs. In addition, RPA1, a C-201 protein partner, has been implicated in RNA:DNA triple helix stabilization indicating that C-201 could interact with DNA sequences at target promoters. Thus, we took advantage of Triplex Domain Finder (TDF), an application developed to detect DNA binding domains (DBDs) in lncRNAs 30 . TDF identifies also the DNA regions bound by the selected lncRNAs, i.e. gene promoters containing binding Downloaded from https://academic.oup.com/cardiovascres/advance-article/doi/10.1093/cvr/cvac191/6941187 by Universitätsbibliothek Bern user on 21 December 2022 CVR-2022-0360 16 sites for the lncRNA DBDs. Because REST had been associated with repression, we sought to identify C-201 target promoters within the list of downregulated genes following C-201 Exon 2 overexpression.
(GSM1010804) 31. We therefore evaluated REST occupancy at the promoters of IRX1, IRX5, ISL1 and SFRP1 in differentiating fetal and adult CPCs by ChIP-quantitative real-time (q)PCR. As expected, REST occupied the promoters of the different candidate genes solely in adult CPCs expressing C-201, and not in fetal CPCs (Supplemental Fig. S7b). In these experiments, we used GAPDH as a negative control (REST occupancy in neither fetal nor adult CPCs) and SYN1 as a positive control (REST occupancy in both fetal and adult CPCs). Then, to formally demonstrate the dependence of REST targeting to IRX1; IRX5; ISL1 and SFRP1 on C-201 action, we performed an additional experiment in adult CPCs with or without C-201. This suggested that ISL1 lies upstream of these factors in the cardiac regulatory network, in accordance with its role as a pioneer transcription factor in the developing heart 32. We then investigated re-specification of fetal CPCs into the SMC fate following siRNA-mediated silencing of each factor individually or in combination. In differentiating fetal CPCs, knocking down either IRX1, IRX5, ISL1 or SFRP1 restored epicardial gene expression, i.e. WT1, TCF21 and TBX18, confirming that downregulation of these critical cardiogenic factors was a mandatory step in reprogramming CPCs into the smooth muscle fate (Fig. 6a-d). In addition, individual factor silencing also resulted in the re-expression of GATA6, HAND2, PDGFRA and TBX20.
C-201 expression is increased in response to myocardial infarction in humans
Finally, C-201 could represent an interesting biomarker for assessing the extent of cardiovascular damage in various pathological situations 25. Altogether, this work demonstrates how a biological switch is encoded in lncRNA sequence to regulate cardiovascular specification. We have linked two key phenomena, namely alternative splicing and the presence of deeply-conserved transposable elements. LncRNAs display far greater levels of alternative splicing, although it has not been clear whether this reflects relaxed constraint or regulated production of isoforms with distinct functions. Here, we have shown an example where these two processes converge to produce functional transcript isoforms, and provided the first physiological role for a transposable element acting via a lncRNA during heart development.
FUNDING
This work was supported by grants from the Swiss National Science Foundation (T.P.; Grant No. CRSII5-1_173738 and 31003A_182322).
Figure 1. CARMEN-201 controls SMC specification via its second exon. (a-b) Representative images and quantification of ACTN2-positive TNNI-positive CMs and SMMHC-positive SMCs in differentiating fetal and adult CPC cultures. Scale bar: 50 µm. (c) Annotated CARMEN isoforms. The CARMEN-201 second exon is highlighted in red. (d) Absolute quantification of CARMEN-201 (C-201), CARMEN-205 (C-205) and CARMEN-217 (C-217) in differentiating fetal and adult CPCs. (e) Nuclear and cytoplasmic levels of CARMEN-201, CARMEN-205, CARMEN-217, ACTB, and NEAT. (f) Expression of CARMEN-201, CARMEN-205, CARMEN-217, SMC markers (MYH11; CNN1; TAGLN), and CM markers (MYH6; MYH7) in adult WT and 201ΔEx2 CPC clones lacking CARMEN-201 Exon 2. (g-h) Representative images and quantification of SMMHC-positive CNN1-positive TAGLN-positive SMCs in cultures of differentiating adult WT or 201ΔEx2 CPC clones. Scale bar: 50 µm. Data represent means ± SEM; *p < 0.05 as compared to fetal CPCs in expansion; § p < 0.05 compared to the indicated conditions (n=3-6). ANOVA with post-hoc Tukey. See also Supplemental Fig. S1, S2 and S3.
Figure 2. The CARMEN-201 Exon 2 contains a functional transposable element implicated in SMC specification. (a) Expression of CARMEN isoforms and (b) SMC markers (MYH11; CNN1; TAGLN; CALD1) in differentiating fetal cells either untransfected (None), transfected with the SAM system in the absence of gRNA (SAM) or with the SAM system with a gRNA targeting sequences upstream of the CARMEN TSS (SAM/gRNA). (c) Representative images and quantification of SMCs in cultures of differentiating fetal CPCs transfected as in (a). Scale bar: 50 µm. (d) Position of the MIRc transposable element in the CARMEN-201 second exon, and sequence conservation. (e) Expression of CARMEN-201 using either a primer pair specific for the endogenous transcript (P1) or the exogenous exon 2 (P3), and (f) SMC markers (MYH11; CNN1; TAGLN; CALD1) in differentiating fetal CPCs either not transduced (None), transduced with a lentiviral vector encoding CARMEN(C)-201 Ex2 or transduced with a lentiviral vector encoding a mutated C-201 Ex2 (C-201 mutEx2). (g) Representative images and quantification of SMMHC-positive CNN1-positive SMCs in cultures of differentiating fetal CPCs transfected as in (f). Scale bar: 50 µm.
RNA Pulldown and identification of CARMEN-201 protein partners
The pBluescript SK+ plasmids containing C-201 Ex2 or C-201 mutEx2 were used to synthesize biotinylated probes. Precleared lysate was incubated with either no probe, biotinylated C-201 Ex2 or biotinylated mutC-201 Ex2. Proteins were loaded on a 12% SDS-PAGE gel.
Nicole Deglon (University of Lausanne, Lausanne, Switzerland). EGFP sequences were replaced by either the wild-type CARMEN-201 Exon 2 or a mutated Exon 2 containing scrambled MIRc sequences.
specification (see Graphical Abstract). Conversely, in CPCs adopting a CM fate, C-201 is not expressed, allowing IRX1, IRX5, ISL1 and SFRP1 expression and the subsequent activation of the cardiogenic program. Importantly, the MIRc-containing exon in C-201 is found in primates but not in other mammals, suggesting that MIRc-mediated functions controlling commitment into the SMC lineage have been integrated in the CARMEN locus only recently in evolution. The CARMEN locus has been involved in cardiogenesis, implicating however other isoforms than C-201 20-22. Our data suggest that coordinate regulation of C-201 and C-217 expression takes place during specification, providing a plausible mechanism for controlling specification. CARMEN isoforms appear to share a single promoter. Yet, additional transcription start sites have been recently detected in the C-201 isoform, suggesting transcriptional regulation might control C-201 expression in differentiating SMCs 34.
Multipotent cardiovascular precursor cells expressing ISL1 give rise to both CMs and SMCs 2-4. In this context, our TRIAGE analysis identifies key cardiac TFs as regulators of fetal CPC differentiation. On the other hand, CPCs respecified into the SMC lineage after C-201 Exon 2 overexpression express a different gene program. Induction of GATA6, HAND2, PDGFRA and TBX20 in CPCs is a characteristic feature of cardiovascular intermediates capable of producing CMs and SMCs 33. We also show that commitment to the SMC lineage is characterized by the stepwise expression of markers of epicardium-derived cells (EPDCs) such as WT1, MEOX1, KRT19 and TBX18, and pericytes such as MCAM, CSPG4, ACTA2 and PDGFRB. During development, EPDCs establish the subepicardial mesenchyme, then migrate into the myocardium. These cells represent a known source of pericytes and SMCs for the forming coronary vasculature 35. In addition, genetic tracing experiments suggest that epicardial cells can also give rise to a myocardial progeny 36-38. Along the same line, a recent single-cell analysis identifies the juxta-cardiac field (JCF) contributing to both EPDCs and CMs 39. Trajectory analysis revealed a link between precursors from the JCF and the posterior SHF, supporting the postulated developmental origin of CARMEN-expressing CPCs. Of note, TRIAGE identifies IRX1 and IRX5 as important regulators of cardiogenesis. IRX1 is detected in the trabeculated and compact myocardium of the developing ventricular septum whereas IRX5 demonstrates a subendocardial to subepicardial gradient of expression 40. Consistently, our experiments show that knocking down IRX1 and IRX5 in fetal CPCs allows reexpression of epicardial and SMC markers, suggesting expression of the two factors is sufficient to maintain a cardiogenic identity in committed precursors. C-201 also targets SFRP1, a known WNT antagonist. Downregulation of the WNT pathway is an important step in establishing cardiac fates 41.
In specifying CPCs, C-201 acts via REST-mediated repression. A role for C-201 in blocking REST activity during SMC determination, for instance via sequestering REST, is unlikely since REST silencing abolishes SMC commitment in C-201-expressing CPCs. Consistently, REST acts as a transcriptional repressor in the developing heart, where it is thought to repress adult cardiac gene expression 42-44. Accordingly, blockade of REST in the heart leads to cardiac dysfunction 45. In addition, our results suggest that temporal REST expression during the development of the heart also reflects the role of REST in cell fate determination. REST binds C-201 but not C-205 and C-217, further substantiating the importance of the C-201/REST complex for SMC differentiation. C-201 appears to be also indirectly under control by REST: C-201 relocalizes into the cytoplasm upon REST silencing. Therefore, the nuclear enrichment of C-201 could depend in part on its binding to REST. In this vein, REST binds C-201 via the MIRc repeat in the second exon, which was recently demonstrated to be associated with transcript nuclear localization 18.
Then, C-201 associates with NSUN6 and RPA1. A recent study demonstrated a role for NSUN6 in methylating mRNAs and lncRNAs, such as MALAT1, NEAT1 and XIST 46. Mechanistically, lncRNA m5C modification could be involved in transcript structure and stability 47. C-201 also associates with RPA1. Importantly, RPA1 binds RNA with high affinity and promotes R-loop formation with homologous DNA 29. RNA-DNA hybrids initiate cellular processes regulating transcription and genome dynamics, two important determinants of cell specification 48, 49. Thus, RPA1 might contribute to effective targeting of REST at regulatory loci via its capacity to stabilize C-201/promoter association, a feature consistent with the predicted DNA binding domains in C-201.
CARMEN is expressed in adult tissues, particularly in the heart and the vasculature, reflecting expression in CMs and SMCs 50. An increasing body of evidence suggests CARMEN is associated with pathological states in the cardiovascular system 22. Relevant to the present work, CARMEN was recently demonstrated to regulate SMC differentiation and proliferation in atherosclerotic plaques 34. Unstable regions, in which high proliferation of dedifferentiated SMCs is observed, were characterized with decreased CARMEN levels. Consistently, SMCs adopting a synthetic phenotype characterized Carmen knockout mice. We have demonstrated previously that CARMEN is induced in the stressed mouse and human hearts 21. CARMEN isoforms were found differentially expressed depending on the cardiac pathology, exemplifying again the complexity of the regulation of the CARMEN locus. Here, we show that C-201 levels increase in the blood during the acute phase of myocardial infarction. The likely source of circulating C-201 is the damaged heart. However, we cannot rule out the possibility that hemodynamic stress also stimulates release from the peripheral vasculature. Nevertheless, assuming a cardiac origin for C-201, its expression in CPCs could be part of the healing process initiated following injury. In this scenario, CPCs expressing C-201 would be specified preferentially into the SMC lineage. Increased myocardial tissue perfusion has been reported in cell-based regenerative therapies for heart disease. However, clinical trials failed to demonstrate functional improvement. This can be expected if precursors are diverted from the cardiogenic lineage secondary to C-201 expression. Our data therefore propose a means to improve CM production via modulating C-201 expression in cardiac precursors.
ACKNOWLEDGEMENTS
We express our gratitude to Dr Gabriel Cuellar Partida, University of Queensland, Brisbane, Australia, for his help in analyzing the GWAS data. We are grateful to Dr Nicole Deglon and Dr Maria Del Rey, University of Lausanne Medical School, Switzerland, for the plasmids used to prepare lentiviral vectors. We thank the Genomic Technologies Facility, the Protein Analysis Facility, the Cellular Imaging Facility and the Mouse Pathology Facility at the University of Lausanne, Switzerland, for providing expertise in transcriptomics, proteomics, imaging and histology respectively.
AUTHORS CONTRIBUTION
T.P. conceived the project, designed experiments and wrote the paper. I.S. designed and performed all wet-lab experiments, with help from M.N., P.A. and F.A.. P.C. and R.J. conducted the bioinformatic analyses. Y.S., S.S., W.J.S. and N.P. performed the TRIAGE, GWAS and single-cell analyses. F.R. provided critical human material for RNA FISH experiment, and her expertise in heart development.
CONFLICT OF INTEREST
T.P. is co-founder of Haya Therapeutics, Epalinges, Switzerland
DATA AVAILABILITY
All transcriptomic data has been deposited to GEO with the identifier GSE199930. See also Supplemental Fig. S7. |
03517911 | en | [
"info.info-ia"
] | 2024/03/04 16:41:24 | 2022 | https://hal.science/hal-03517911/file/Journal_RE_and_EA_based_Software_Discovery_and_Reuse__ICSR_Extended___R2_.pdf | Abdelhadi Belfadel
email: [email protected]
Jannik Laval
email: [email protected]
Chantal Bonner Cherifi
email: [email protected]
Nejib Moalla
email: [email protected]
Architecture-based Software Discovery and Reuse
Keywords: Enterprise architecture, Capability profile, Requirements engineering, Service reuse, Software reuse
Introduction
Modernizing or designing a new business process by reusing the functionality of existing software can be of great benefit to organizations. Leveraging previous developments and considering internal company solutions enriched with external ones, can help facilitate the development of complex systems at controlled costs while maintaining delivery times.
When developing software, the first thing to do is to understand and describe in a precise way the problem that the software must solve. Requirements for a targeted system describe what the system should do, what services it should provide, and what quality or constraints it must have to make it attractive and acceptable to the owner [START_REF] Robertson | Mastering the requirements process: Getting requirements right (3rd Edition)[END_REF]. These requirements reflect the needs of customers for a system. The process of analyzing, eliciting, and checking these services and constraints is called requirements engineering [START_REF] Sommerville | Software engineering 9[END_REF].
To maximize the reuse opportunities for companies, a component view with a concise evaluation model of software components (which describes in detail the capabilities of software) provides an overview of existing solutions and facilitates the discovery, selection, and decision to reuse [START_REF] Belfadel | Towards software reuse through an enterprise architecture-based software capability profile[END_REF]. This, combined with an organization of the different artifacts resulting from this evaluation model (an artifact is a more granular architectural work product that describes an architecture from a specific viewpoint; examples include a class diagram, a server specification, or a list of architectural requirements), aligned with a requirements engineering approach, aims to reduce the complexity of searching for and selecting software components to reuse. In addition, by focusing on service-oriented solutions, many opportunities for reuse of functionality will arise, resulting in more efficient use of existing resources.
To reduce the complexity of the description of software components that results from the evaluation model and to present the required level of detail at each stage of its exploitation, enterprise architecture is of great value. Enterprise Architecture (EA) is the definition and representation of a high-level view of IT systems and enterprises' business processes. By considering an enterprise architecture-based approach, it is possible to organize the different artifacts in a way that enables an analysis of the reuse possibilities for an organization and ensures the feasibility of a targeted system or project. The insights or information provided by an enterprise architecture are needed, on the one hand, to determine, from a business perspective, the needs and priorities for change [START_REF] Gosselt | A maturity model based roadmap for implementing togaf[END_REF] and, on the other hand, to organize the various components and technical artifacts and assess how an organization can exploit them.
With reference to this context, this research extends work already published in [START_REF] Belfadel | Semantic software capability profile based on enterprise architecture for software reuse[END_REF]. The latter focused on the design of a software capability profile implemented as a semantic model to gather the description of the capabilities of existing solutions from several perspectives (organizational, business, technical and technological aspects). For the sake of readability, the model and contributions published in [START_REF] Belfadel | Semantic software capability profile based on enterprise architecture for software reuse[END_REF] are presented in the following sections, as well as the new contributions related to the exploitation of the capability profiles through the alignment of a requirements engineering approach and an enterprise architecture method. As discussed earlier, the objective of this research work is to leverage the model already published in [START_REF] Belfadel | Semantic software capability profile based on enterprise architecture for software reuse[END_REF] to address stakeholder requirements and efficiently reuse existing solutions, by investigating the highest functional and non-functional compatibility of existing software capability profiles with the stakeholders' desired features to be implemented. The expected result of this work is an exploitation process of the software capability profiles, based on the Architecture Development Method from TOGAF [START_REF] Hussain | The Open Group Architecture Framework TOGAF™ Version 9[END_REF], aligned with a requirements engineering approach implemented using the Volere specification [START_REF] Robertson | Mastering the requirements process: Getting requirements right (3rd Edition)[END_REF]. The research problem we address is how to align requirements and architecture artifacts in an engineering cycle so as to refine requirements and select the best candidate components to serve as building blocks in a new system.
To respond to this research problem, this paper is organized as follows: Section 2 focuses on the related work. We focus afterward on the principal building blocks of the proposed solution and present first the Enterprise Architecture Capability Profile (EACP) in section 3. Section 4 presents a concrete use case scenario on which this approach has been applied, and that serves as an example throughout the description of the exploitation process in section 5. Section 6 presents an implementation of the proposed approach. Section 7 discusses our work and finally a conclusion is drawn in Section 8.
Related Work
Requirements Engineering in Software Development Process and Service Reuse
Classical techniques for software solution specification are structured analysis and object-oriented techniques [START_REF] Roel | Requirements engineering: Problem analysis and solution specification[END_REF]. The view of requirement engineering as solution specification is taken by the IEEE 830 standard [START_REF] Ansi | Ieee. ieee guide to software requirements specifications[END_REF] and by other authors on requirements [START_REF] Robertson | Mastering the requirements process: Getting requirements right (3rd Edition)[END_REF] [START_REF] Ma Davis | Software requirements[END_REF]. In this view, and as mentioned by [START_REF] Roel | Requirements engineering: Problem analysis and solution specification[END_REF], a requirements specification consists of a specification of the context where the system operates, desired system functions, the semantic definition of these functions, and quality attributes of the functions.
Several research works and methods ( [START_REF] Syed Waqas | Process to enhance the quality of software requirement specification document[END_REF], [START_REF] Ahmad | Impact minimization of requirements change in software project through requirements classification[END_REF] or [START_REF] Abeer Abdulaziz Alsanad | A domain ontology for software requirements change management in global software development environment[END_REF]) exist in the literature to enhance software requirement specifications and for feature selection. [START_REF] Robertson | Mastering the requirements process: Getting requirements right (3rd Edition)[END_REF] propose Volere as a basis for a requirement specification. It is a result of many years of practice, consulting, and research in requirement engineering and business analysis. Volere provides template sections for each of the requirement types appropriate to today's software systems. [START_REF] Xu | Re2sep: A twophases pattern-based paradigm for software service engineering[END_REF] propose a paradigm for software service engineering to reuse services for developing new applications more rapidly with the aim to satisfy individualized customer requirements. The proposed approach uses service context as a mediating facility to match a service requirement with a service solution. The requirements are defined by the targeted business functionality, service performance, and value. However, no details about the service pattern description or repository, the requirement template nor implementation of the approach are proposed. [START_REF] Chen | A method for service-oriented personalized requirements analysis[END_REF] propose a method that allows users of services to express their requirements. The authors propose a meta-model for elements required in service consumption such as process, goal, or role. The proposed method helps to discover errors and conflicts during requirement refinement. [START_REF] Zachos | Discovering web services to specify more complete system requirements[END_REF] propose a service selection algorithm based on textual requirements expressed by the service consumer. The service selection is based on a discovery algorithm, that uses XQuery and WordNet and focuses on the disambiguation and completeness of the requirements and retrieving discovered services from the UDDI registry. There is additional ontology-based research work such as [START_REF] Verlaine | Towards conceptual foundations of requirements engineering for services[END_REF], where the authors took the CORE Ontology (for Core Ontology for Requirements) [START_REF] Ivan | A core ontology for requirements[END_REF] for requirement elicitation, and established a relationship with the concepts of Web Service Modeling Ontology (WSMO) [START_REF] Roman | Web service modeling ontology[END_REF].
Knowledge Management and Service Repositories for Service Reuse
Research on repositories for the effective and useful management and discovery of services for the service-oriented paradigm has recently gained significant momentum. Some specifications, such as UDDI [START_REF] Curbera | Unraveling the web services web: an introduction to soap, wsdl, and uddi[END_REF] or ebXML Registry [START_REF] Breininger | The ebxml registry repository version 3.0. 1[END_REF], have provided primary support to register, discover and integrate services. Due to the limited capabilities offered by existing registry specifications for service discovery, some research works in the literature aim at improving service repositories with ontology-based discovery facilities. Later, semantic models were proposed to enrich the service registry with semantic annotations, combined with matchmaking algorithms to match service capabilities.
Based on the systematic analysis of relevant research works regarding service discovery with consideration of our needs, Table 1 classifies the related service registry and discovery works published between 2002 and 2019 according to the following criteria:
C1) Organizational level: exploitation based on the identification of the stakeholders, business problems, goals, and objectives of the targeted project.
C2) Functional level: exploitation based on service interfaces, the business functions, and related inputs and outputs.
C3) Technical level: exploitation based on the identification of relevant technical requirements, interoperability requirements, and technology constraints.
C4) Technology level: exploitation based on the identification of the platforms and infrastructure.
C5) Non-functional properties (QoS, security...).
C6) Exploitation based on a requirements engineering process (it involves all the mentioned levels).
From Table 1, we notice that several research works considered the functional level and QoS for managing service repositories and matchmaking, but few of them considered the other levels, such as the organizational, technical, or technology level. We also notice that few research works considered the exploitation of the service registry in a software engineering cycle using a requirements engineering process to manage user requirements for service discovery and matchmaking. Software architecture helps to manage the complexity of software by providing an abstraction of the system. The requirements engineering process drives the architecture actions, whereas decisions made in the architectural phase can affect the achievement of initial requirements and thus change them. Both fundamental activities, requirements engineering and software architecting, must therefore be carried out during the engineering process and should evolve together to support the developer or architect in formalizing the requirements and architectural artifacts that enable software and service discovery and reuse. There is, however, no structured solution (as depicted in Table 1) on how to perform the co-development of requirements and architecture actions to select the suitable software or services to reuse for the development of new business software.
Service Repository and Discovery C1 C2 C3 C4 C5 C6 [START_REF] Paolucci | Semantic matching of web services capabilities[END_REF], [START_REF] Wu | Similarity-based web service matchmaking[END_REF], [START_REF] Sabou | Towards semantically enhanced web service repositories[END_REF], [START_REF] Yu | A semantically enhanced service repository for user-centric service discovery and management[END_REF], [START_REF] Haniewicz | Local controlled vocabulary for modern web service description[END_REF], [START_REF] Hog | Adaptable web service registry for publishing profile annotation description[END_REF], [START_REF] Yoo | Ontology based keyword dictionary server for semantic service discovery[END_REF], [START_REF] Jonas | A description and retrieval model for web services including extended semantic and commercial attributes[END_REF], [START_REF] Narock | A provenance-based approach to semantic web service description and discovery[END_REF], [START_REF] Moradyan | A query ontology to facilitate web service discovery[END_REF], [START_REF] Ben | Towards a semantic search engine for open source software[END_REF], [START_REF] Kavitha | Towards a novel conceptual framework for analyzing code clones to assist in software development and software reuse[END_REF] [START_REF] Goncharuk | A case study on pragmatic software reuse[END_REF] + +
[15] + + + [START_REF] Loskyll | Semantic service discovery and orchestration for manufacturing processes[END_REF] + [START_REF] Seba | Web service matchmaking by subgraph matching[END_REF], [START_REF] Aabhas V Paliwal | Semantics-based automated service discovery[END_REF], [START_REF] Xue | Restful web service matching based on wadl[END_REF] + [START_REF] Rathore | An arsm approach using pcb-qos classification for web services: a multi-perspective view[END_REF], [START_REF] Becha | Prioritizing consumer-centric nfps in service selection[END_REF], [START_REF] Amandeep | Software reuse analytics using integrated random forest and gradient boosting machine learning algorithm[END_REF] + (+) [START_REF] Ángel Rodríguez-García | Ontology-based annotation and retrieval of services in the cloud[END_REF], [START_REF] Kapitsaki | Annotating web service sections with combined classification[END_REF], [START_REF] Niranjan | Dynamic search and selection of web services[END_REF] + [START_REF] Li | An ontology-based process description and reasoning approach for service discovery[END_REF], [START_REF] Matsuda | Configuration of a production control system through cooperation of software units using their capability profiles in the cloud environment[END_REF] + (+) [START_REF] Alarcon | Rest web service description for graph-based service discovery[END_REF] + + [START_REF] Elshater | godiscovery: Web service discovery made efficient[END_REF] + + [START_REF] Boissel-Dallier | Mediation information system engineering based on hybrid service composition mechanism[END_REF] + (+) [START_REF] Khanfir | Quality and context awareness intention web service ontology[END_REF] + + + [START_REF] Sophea Chhun | Qos ontology for service selection and reuse[END_REF] + + (+) [START_REF] Purohit | Web service selection using semantic matching[END_REF] + [START_REF] Elgazzar | Daas: Cloud-based mobile web service discovery[END_REF], [START_REF] Zeshan | Ontology-based service discovery framework for dynamic environments[END_REF] + + + [START_REF] Mu | An ontology-based collaborative business service selection: contributing to automatic building of collaborative business process[END_REF] + + (+)
Table 1 Service repository and discovery for reuse
Scientific relevance and discussion
From the state of the art on service-oriented software reuse, we observe that ad-hoc methods are still used to identify the most suitable service-oriented software or artifacts to reuse, and a methodology or standardized process enabling this is still missing. Moreover, the descriptions, capabilities, or qualifications of this software lack a wider-view qualification that takes into consideration the business, operational, and technical views of the software and its related services [START_REF] Belfadel | Semantic software capability profile based on enterprise architecture for software reuse[END_REF]. In addition, no solution has been provided that combines requirements engineering with the impact of architecture on requirements when identifying the most suitable components and avoiding misevaluation during the selection phase. Therefore, enhancing the capability description of software and its related services at different levels of service description, along with its exploitation based on requirements engineering and architecting actions, is a major challenge. This analysis highlighted the need for an Enterprise Architecture-based methodology for describing and classifying different artifacts to be available as building blocks for reuse in future projects.
From the above analysis, we propose the following research directions: (i) improve the software and related service capability profile to bring the value-in-use of the qualified features to organizations interested in reuse; (ii) shape a mechanism to identify the most suitable software with specific features or functionalities, helping to overcome the use of ad-hoc methods; (iii) improve the reuse of service-oriented solutions by considering a process that includes requirements engineering and enterprise architecture for formalizing the requirements.
Qualification Model -Software Capability Profile
To bring out the value-in-use of existing software and facilitate its discovery and reuse, we present in this section the meta-model of the Software Capability Profile (see Figure 1) and its related EACP Ontology, which is presented in detail in [START_REF] Belfadel | Semantic software capability profile based on enterprise architecture for software reuse[END_REF] and depicted in Figure 2. The meta-model is mainly inspired by TOGAF [START_REF] Hussain | The Open Group Architecture Framework TOGAF™ Version 9[END_REF], ISO 16100 [START_REF]2009 industrial automation systems and integration -manufacturing software capability profiling for interoperability -part 1: Framework[END_REF], the Microsoft Application Architecture Guide [START_REF]Microsoft Patterns and Practices Team. Microsoft® Application Architecture Guide, 2nd Edition (Patterns and Practices)[END_REF] and ISO 25010 [START_REF]2011 systems and software engineering -systems and software quality requirements and evaluation (square) -system and software quality models[END_REF]. It aims to gather functional and non-functional specifications and the organizational impact of an organization's software, and it links the business services to their related physical components to offer a wider-view qualification and improve reuse when developing a new business application. The proposed meta-model is composed of six packages:
i) Organization package: composed of the organizational unit, with its related business goals and objectives that guided the development of existing software.
ii) Architecture Building Blocks package: this entity is constructed according to the life-cycle creation of Architecture Building Blocks (ABBs) based on the ADM method. An ABB describes the business problem for which this component was developed, its implementation specification, the standards used, and the stakeholders concerned. It provides other details such as the operational vision of the component, the definition of the business function of the ABB, its attributes and constraints, data and application interoperability requirements, and design-time quality attributes.
iii) Solution Building Blocks package: SBBs represent the physical equivalent of ABBs and describe the components exposed by software. The SBB is linked to the exposed service or API, for instance over the web in the case of a REST-based application. The latter is defined by the URI, the HTTP method needed to access the resource, the related parameters, and the serialization used in communication (for instance JSON). It defines run-time and transversal quality attributes, which may be updated according to the frequency defined for each attribute.
iv) Application package: describes the technical requirements of the service-oriented software in general with its exposed components (for instance REST services). It also describes the execution environments on which the application is running.
v) Business Process package: this package is used in the exploitation phase and represents the "to-be" business application to realize.
vi) Requirements package: this package is used in the exploitation phase and represents the requirements elicitation process, helping to guide the developer or the architect during the engineering life-cycle. The requirements are elicited in each phase of the ADM, going from the definition of project drivers to the definition of the use cases and requirements at different levels.
The resulting EACP profile instances are saved as ontology instances in a semantic repository called, in this context, the Enterprise Architecture Knowledge Repository (EAKR repository). Regarding the design effort of the EACP Ontology, we first identified from state-of-the-art solutions some existing ontologies to reuse, selecting those that cover some of our needs and that are well-defined, consistent, and reused in other projects. We selected the Basic Formal Ontology (BFO), which is a top-level ontology, and four domain ontologies, namely the Ontology Web Language for Services (OWL-S) [START_REF] Martin | Bringing semantics to web services: The owl-s approach[END_REF], The Open Group Architecture Framework ontology (TOGAF-Ontology) [START_REF] Gerber | Towards the formalisation of the togaf content metamodel using ontologies[END_REF], the BPMN 2.0 Ontology [START_REF] Rospocher | An ontology for the business process modelling notation[END_REF] and the Information Artifact Ontology (IAO) [START_REF] Ceusters | An information artifact ontology perspective on data collections and associated representational artifacts[END_REF]. Then, we managed the selected foundational and domain ontologies by integrating and extending the different ontologies in a coherent way into the targeted EACP ontology using the Protégé Ontology Editor. Finally, we evaluated the consistency of the resulting model using the FaCT++ reasoner. An extract of the ontology related to the EAKR repository is presented in Figure 2.
Fig. 2 Basic pattern of the main classes and relationships in the EACP ontology
The presentation of the qualification model and the design of its ontology, described above and published in [START_REF] Belfadel | Semantic software capability profile based on enterprise architecture for software reuse[END_REF], is outside the scope of this paper; however, the elements presented above are sufficient for understanding the exploitation phase that follows.
Use Case Scenario
This section presents the use case that serves as an example for the description of the exploitation process that follows. The use case scenario involves two companies: the first is specialized in plastic manufacturing, and the second in metal manufacturing. Both companies have significant expertise in engineering and transforming plastic and metal parts, respectively, using several technologies. Their key business issue is the difficulty of detecting the most appropriate business collaboration opportunities among common customer projects. Starting from a Computer-Aided Design (CAD) description of customer projects, the companies' consortium needs to quickly detect the set of projects that they are able to produce. The proposed use case aims to accelerate and maintain a collaboration channel in two complementary business domains. The objectives are to reduce project quotation costs and reduce the delay of customer quote treatment.
Fig. 3 Screenshots of the targeted business application
Currently, clients send product requests to one of the companies in the form of CAD/PDF files. The chosen company then decomposes all the project's features (e.g. parts, dimensions, type of surface, and type of raw material) to understand the customer's needs, verify the feasibility of the product, and determine the relevance of the business opportunity in terms of return on investment. After the decomposition, and if there is a need for subcontracting (especially for multi-physical and complex products), the two companies carry out a succession of negotiations, explore several ways to reach the client's requirements, and submit their best offer to the client. For this purpose, we derive from this use case all the requirements needed to discover existing technical services that allow reducing cost and development time. In what follows, we present how the developer goes through an architecture and requirements elicitation process to discover existing services and develop the needed business application, with an example of the inputs needed in each phase of the exploitation process. A screenshot of the developed prototype is presented in Figure 3.
Exploitation Process
To design a new business application reusing technical services from the EAKR, the architect or developer goes through a requirement elicitation process. This process, depicted in Figure 4, is structured in several phases, starting with the architectural vision, going through the business architecture, data, application, and technology architecture phases, and leading to the generation of an implemented BPMN which uses the qualified services if a match is confirmed.
Fig. 4 Exploitation Plan
In the following sub-sections, we describe the actions to realize in each phase, enriched with a concrete example from the use case scenario to strengthen the understanding. Templates in JSON format are also proposed to formalize the architecture artifacts and requirement specifications in each phase.
Phase A: Architecture vision
The objective of this phase is to develop a high-level vision of the business value to be delivered as a result of the project. This phase is mainly focused on gathering the business goals and related objectives of the targeted project.
Other architecture artifacts are used to structure the project drivers, such as the identification of stakeholders, the definition of the organizational model of the company, and KPIs enabling the evaluation of the targeted business application.
Based on the architectural artifacts of phase A (see Figure 5 for the exhaustive list -column ADM artifacts), we fetch requirement patterns (see Figure 5 for the exhaustive list from Volere specification -column Requirements) produced during previous projects to guide and support the developer for the upcoming requirement definition. If a template is found, it is presented to the developer. In addition to the architecture artifacts, the developer formalizes his requirements by defining the project drivers such as the business actors, the client, and the customer if applicable. These inputs help to consolidate the elicitation phase and redesign his requirements before going further in the process. All the artifacts once validated are saved in the EAKR to be reused as requirements templates in the future exploitation process.
Fig. 7 Phase B: alignment of architecture and requirements artifacts
Figure 6 illustrates an example from the use case with some proposed architecture artifacts and requirements for phase A. The main inputs are the definition of the business goals and objectives of the targeted system, along with the stakeholders, i.e. the people who have an interest in the targeted system and whose inputs are needed to build the product.
Phase B: Business Architecture
The objective of this phase is to develop the target business architecture that describes how the enterprise needs to operate to achieve the business goals previously defined and to respond to stakeholder concerns. This phase focuses on the business side and supports the developer or architect in preparing all required inputs, which are presented in Figure 7.
The most important architectural artifact in this phase is the high-level business scenario. This is designed using BPMN which is a standard language for business process modeling. This first high-level modeling is designed using ArchiMate 1 which is recognized as a standard for EA modeling by the Open Group [START_REF] Josey | An introduction to the archimate® 3.0 specification[END_REF] and that supports business process modeling. This high-level modeling helps to define the business-entity relationship to know which entities are needed for every business action or behavior.
This business model is enriched with other architectural artifacts (see Figure 7 for the exhaustive list -column ADM artifacts), such as the actor catalog updated with the related roles, and the definition of the architecture requirements specification, which forms a major component of an implementation contract and provides quantitative statements as required in the ADM Phase B outputs. It requires the definition of the implementation specifications to guide the development work, and of implementation standards in case the implementation should follow a specific standard. Figure 8 depicts the model of some architecture artifacts managed during this phase and shows an example of the proposed template for some of these architecture artifacts.
Once these elements are defined, a request is sent to the EAKR to fetch requirement templates of phase B produced during previous projects, to be reused. The aim is to guide the developer in defining the project constraints and the functional requirements, as depicted in Figure 7 and inspired by the Volere specification (column Requirement). The templates resulting from the request (if any) present the scope of previous projects sharing the same context, the existing business events connected to the actual business scenario, the use cases of the project or solution, and a set of functional requirements related to the selected use cases and their type, i.e. a service task component related to an SBB, or a user task activity if it is a user action. Figure 9 depicts the model of the different artifacts that may result from the EAKR and shows an example of the proposed template for some of these artifacts.
Fig. 9 Example of requirement artifacts managed during phase B and related model
These resulting templates are presented to the developer to guide him during phase B in the consolidation and refinement of his requirements. They help to offer support for defining and consolidating the use cases and the related functional specifications. The proposed template is inspired by the Atomic Requirement Template proposed by [START_REF] Robertson | Mastering the requirements process: Getting requirements right (3rd Edition)[END_REF] and depicted in Figure 10. Phase B ends with functional requirements that are well formalized, testable, and categorized as user or service tasks.
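To make the template concrete, the sketch below shows how such an atomic requirement could be represented in the Web Application's JavaScript code. The field names follow the spirit of the Volere snow card (description, rationale, fit criterion, type) but are illustrative assumptions, not the exact JSON schema used by the framework.

// Hypothetical shape of an atomic functional requirement, loosely following
// the Volere snow-card fields mentioned in the text. Field names are
// illustrative, not the exact template used by the EACP framework.
const atomicRequirement = {
  id: "FR-07",
  useCase: "Submit customer CAD project",
  type: "service",            // "service" task (maps to an SBB) or "user" task
  description: "Visualize the CAD file attached to a customer request",
  rationale: "Engineers must inspect part geometry before quoting",
  fitCriterion: "An uploaded STL file is rendered in the browser in under 5 s",
  originator: "Plastic-manufacturing company, quotation engineer",
  priority: "high"
};

// Minimal completeness check before the requirement is saved to the EAKR.
function isTestable(req) {
  return Boolean(req.description && req.fitCriterion && req.type);
}

console.log(isTestable(atomicRequirement)); // true

Such a check only verifies that a requirement carries a fit criterion, which is what makes it testable in the Volere sense; richer validation would be specific to the framework's actual schema.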
Phase C: Information Systems Architecture -Data Architecture
The objective of Phase C is to develop the targeted information system architecture. It involves a combination of data and application architecture. The data architecture phase enhances the definition of the relationship between data entities and the targeted business functions previously defined in phase B. We have already depicted in Figure 9 the models that enable linking the data entities associated with each business function. The action needed in this phase is then to define the properties of each business entity involved in the business functions using relevant data models, such as the Class Diagram of the Unified Modeling Language (UML). To this end, we propose templates based on the proposed models for the definition and formalization of the data entities involved in a business function. An example related to the use case is depicted in Figure 12. The left side of the figure depicts the definition of a class model, and the right side links an entity with its attributes to a specific business function where the entity is used.
Additional architecture requirements specifications are formalized as well, such as data interoperability or technology architecture constraints (see Figure 11 for the exhaustive list -column ADM artifacts). These constraints have the same description template as the requirements. The data interoperability requirement is needed to formalize specific needs for security policies, for example input validation, or for data format and serialization. During this phase, architecture artifacts and requirements are defined at the same time because we are reaching the low-level description of the business application to develop. Based on these inputs, we fetch and map in the EAKR the ABBs and business functions using the defined data entities and complying with the constraints if defined (see Figure 13-left column for an ABB template example). The related SBBs and their corresponding technical components are gathered to highlight potential integration problems and are presented in the proposed template depicted in Figure 13-right column. The related models of ABB, SBB and application package have already been described in section 3 and in [START_REF] Belfadel | Semantic software capability profile based on enterprise architecture for software reuse[END_REF]. The application architecture part of this phase deals with the application architecture artifacts and the corresponding requirements (see Figure 14 for the exhaustive list of the artifacts). The developer or architect is guided to define the technical requirements using the same template as for the atomic requirements (see Figure 15-left side for an example). Technology or infrastructure constraints and application interoperability requirements are also defined in this phase. Those constraints are added to the previous ones to fetch SBBs and related applications in the EAKR. The resulting SBBs and related applications (middle and right side of Figure 15) offer a first overview of existing applications and related technical services (in the case of service-oriented solutions) to reuse. These solutions fit the requirements and constraints from the functional, technical, and technological standpoints. At this requirement level, a first version of the targeted business application based on BPMN 2.0 is generated. The user and service tasks are generated, and a link between service tasks and a set of existing services is established based on the elements defined during previous phases (related to the selected SBBs). This first solution reflects the prototype to realize, aiming to resolve and meet the business need expressed during this elicitation process.
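As a rough illustration of this matching step, the following JavaScript sketch filters already-retrieved candidate ABBs against the required data entities and the interoperability and runtime constraints. The property names (dataEntities, interoperability, sbb.runtime) are assumptions made for the example; in the framework itself this matching is performed through queries over the EACP ontology in the EAKR.

// Illustrative filtering of candidate building blocks once they have been
// retrieved from the EAKR. Property names are assumptions for this sketch.
function matchBuildingBlocks(candidates, requiredEntities, constraints) {
  return candidates.filter((abb) => {
    // The ABB must cover every data entity used by the business function.
    const coversEntities = requiredEntities.every((e) =>
      abb.dataEntities.includes(e)
    );
    // Optional data interoperability constraint (e.g. JSON serialization).
    const interoperable =
      !constraints.serialization ||
      abb.interoperability.serialization === constraints.serialization;
    // Optional technology constraint on the related SBB's runtime.
    const runtimeOk =
      !constraints.runtime || abb.sbb.runtime === constraints.runtime;
    return coversEntities && interoperable && runtimeOk;
  });
}

const candidates = [
  {
    name: "CAD Viewer ABB",
    dataEntities: ["CADFile", "Project"],
    interoperability: { serialization: "JSON" },
    sbb: { name: "cad-viewer-service", runtime: "Node.js" }
  }
];

console.log(
  matchBuildingBlocks(candidates, ["CADFile"], { serialization: "JSON" })
);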
Phase D: Technology Architecture
The last phase, phase D, concerns the technology architecture artifacts (see Figure 16 for the exhaustive list of the artifacts). The objective of this last phase is to define the basis of the implementation work. As part of phase D, the developer or architect needs to consider what relevant resources are available in the EAKR repository to ensure that the target system will meet some or all of the requirements and constraints. It is important to recognize that in practice it will rarely be possible to find and reuse components that reach 100% coverage of all defined requirements and constraints. During the previous phase C, technical and technological constraints were formalized. The latter are considered during this phase D when matching the final SBBs, enriched with non-functional properties defining the Quality of Service needed from the existing services.
The model of a non-functional requirement is based on the atomic requirement model. An example of the definition of such a non-functional requirement is depicted in Figure 17. The resulting SBBs, if a match is confirmed, strongly reflect the defined requirements and constraints. This helps to implement the business process already produced during the last phase with the final SBBs and related services, together with their service endpoints, to support the business application. The resulting SBBs are ranked following the QoS ranking already defined by [START_REF] Benfenatki | Linked usdl extension for describing business services and users' requirements in a cloud context[END_REF], reflecting the non-functional specifications before selecting the final SBB and generating an implemented business process. The result of this ranking process is used during the generation of the implemented BPMN, where for each business function (only service tasks), we assign the first-ranked SBB to the related task (see Figure 18 for an example). In the next section, we present the implementation of a prototype of this exploitation process developed as a Web Application.
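The sketch below illustrates, under simplifying assumptions, how such a QoS-based ranking and assignment could be coded. The framework follows the ranking approach cited above; here a plain weighted score over two illustrative QoS attributes (average instance time and availability) is used instead, and all names are hypothetical.

// Simplified sketch of the QoS ranking step; attribute and task names are
// illustrative, not the framework's actual model.
function rankSbbs(sbbs, weights) {
  return sbbs
    .map((sbb) => ({
      ...sbb,
      score:
        weights.time * (1 / sbb.qos.avgInstanceTime) +
        weights.availability * sbb.qos.availability
    }))
    .sort((a, b) => b.score - a.score);
}

// Assign the best-ranked SBB to every service task of the generated process.
function assignSbbs(serviceTasks, candidatesByTask, weights) {
  return serviceTasks.map((task) => {
    const ranked = rankSbbs(candidatesByTask[task.id] || [], weights);
    return { ...task, sbb: ranked[0] ? ranked[0].name : null };
  });
}

const tasks = [{ id: "task_visualizeCad", name: "Visualize CAD File" }];
const candidates = {
  task_visualizeCad: [
    { name: "cad-viewer-v1", qos: { avgInstanceTime: 1.2, availability: 0.99 } },
    { name: "cad-viewer-v2", qos: { avgInstanceTime: 0.8, availability: 0.97 } }
  ]
};

console.log(assignSbbs(tasks, candidates, { time: 0.6, availability: 0.4 }));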
Framework Implementation
We implemented the exploitation process as a Web Application, as depicted in Figure 19. The source code is available in a GitHub repository 2. The video of this technical presentation is available here 3 and the video of the entire use case is available here 4. We selected AngularJS [START_REF] Green | AngularJS[END_REF] as a Web framework that enables the development of single-page applications following the MVC (Model-View-Controller) pattern for the front-end environment, and the NodeJS framework [START_REF] Cantelon | Node. js in action[END_REF], which is a popular platform for building server-side Web Applications written in JavaScript. Regarding the EAKR repository, we deployed the EACP Ontology along with examples of qualified open-source solutions from the vf-OS 5 and FITMAN 6 projects in Apache Jena Fuseki [START_REF] Jena | Fuseki: serving rdf data over http[END_REF] (see Figure 20).
In the following subsections, we describe how a developer or architect can interact with this application in each phase, what inputs are required (based on the use case presented earlier in this paper), how information is presented, and how validation occurs.
Note that in cases where no artifact with a perfect match was found in the repository, we used string similarity based on Dice's coefficient. Several open-source JavaScript packages exist; we selected the string-similarity package that is publicly available in a GitHub repository 7. We fixed the threshold at 90%, which can be modified to obtain more flexible results.
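A minimal sketch of this fallback lookup is given below, using the string-similarity package's Dice-coefficient matching with the 90% threshold mentioned above; the goal strings are illustrative examples taken from the use case.

// Sketch of the similarity-based fallback lookup using the open-source
// "string-similarity" package (Dice's coefficient).
const stringSimilarity = require("string-similarity");

const SIMILARITY_THRESHOLD = 0.9; // the 90% threshold mentioned above

function findSimilarTemplate(queryGoal, storedGoals) {
  const { bestMatch, bestMatchIndex } = stringSimilarity.findBestMatch(
    queryGoal,
    storedGoals
  );
  return bestMatch.rating >= SIMILARITY_THRESHOLD
    ? { goal: storedGoals[bestMatchIndex], rating: bestMatch.rating }
    : null; // no sufficiently similar template in the EAKR
}

const stored = [
  "Reduce project quotation costs",
  "Improve production line monitoring"
];

console.log(findSimilarTemplate("Reduce the project quotation cost", stored));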
Phase A: Architecture Vision
In this phase, one of the artifacts to provide concerns the business goals and associated objectives of the targeted project. The offered design possibility is to upload the inputs designed in ArchiMate using the motivation extension. Figure 19 depicts an example of the motivation diagram from the proposed use case. An export in XML format is needed to import it into the EACP Web Application, which parses it and retrieves the needed inputs for phase A. Figure 19 depicts phase A of the Web application. The first column is the architecture and requirements elicitation process that guides the developer in consolidating and validating the requirements during this phase. The second column displays the actual phase state and the progression rate of the process. The developer can also add other stakeholders not mentioned in the motivation diagram to be considered for the next actions.
Once the motivation diagram is uploaded to the framework, the XML source file is parsed to retrieve the defined business goals and objectives. Based on these inputs, a request is sent to the EAKR (see an example of a SPARQL request in Figure 20) to fetch existing requirement templates guiding the developer during this requirement elicitation phase. Since this is the first instance of the process, no template is expected. However, for illustrative reasons, we defined one template that shares the same business goal so as to have an example of a template to reuse for defining and consolidating the required requirements for this phase. As shown in Figure 19, the requirements needed are the definition of the business context, the client and the customer of the system (which is not applicable in this context), and the users that will interact with the targeted system. The retrieved templates are presented on the "EAKR Templates Requirement" side. The process progression column is updated, and the application then waits for the validation of the requirements to redirect the developer to phase B of the exploitation process.
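For illustration, the following sketch shows how such a template lookup could be issued against a Fuseki-hosted EAKR over the SPARQL HTTP protocol. The dataset name, the eacp: namespace and the property names are assumptions made for the example, not the framework's actual vocabulary; Node 18+ is assumed for the built-in fetch API.

// Hypothetical SPARQL lookup of requirement templates attached to a goal.
// Endpoint, prefix and property names are illustrative assumptions.
const FUSEKI_ENDPOINT = "http://localhost:3030/eakr/query";

async function fetchTemplatesForGoal(goalLabel) {
  const query = `
    PREFIX eacp: <http://example.org/eacp#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?template ?label WHERE {
      ?goal a eacp:BusinessGoal ;
            rdfs:label "${goalLabel}" .
      ?template eacp:addressesGoal ?goal ;
                rdfs:label ?label .
    }`;

  const response = await fetch(FUSEKI_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Accept: "application/sparql-results+json"
    },
    body: new URLSearchParams({ query })
  });
  const data = await response.json();
  return data.results.bindings.map((b) => b.label.value);
}

fetchTemplatesForGoal("Reduce project quotation costs").then(console.log);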
Phase B: Business Architecture
Based on the ArchiMate Business-Entity Relationship diagram, the developer uploads the designed diagram to the architecture artifacts user interface. The latter is parsed to retrieve the actors, the business processes, and the related data entities involved in each business process or use case. These inputs are considered during the requirement pattern search in the EAKR repository and retrieved using the process depicted in section 5.
The requirements specification of phase B deals with the functional requirements and constraints of the project. Based on the use case list and the related data entities, we fetch previous projects saved in the EAKR based on string similarity. These existing requirements help to offer support for defining and consolidating the use cases and functional specifications close to the actual context, the business events connected to the actual business scenario, and a set of functional requirements related to the selected use cases and their type (i.e. a service task component related to an SBB, or a user task activity if it is a user action). In the case of our business scenario, no template was found, but for illustrative reasons we initialized requirement templates that correspond to the actual business scenario to be reused.
Fig. 21 EACP Web Application -Phase B: Requirement Specification
In this phase, the developer can also enrich the functional specifications by adding constraints or architecture requirement specifications, such as the prescription of a specific implementation standard to be used when the functional requirements are later developed. In the proposed scenario, we link an implementation standard to the functional requirement "Visualize CAD file", as depicted in Figure 21.
Phase C: Data and Application Architecture
The Data Architecture phase refines the definition of the relationships between data entities and the targeted business functions previously defined in phase B. The next step is to define the properties of each business data entity involved in the business functions, using a relevant data model such as a Unified Modeling Language (UML) class diagram, which is serialized so that the entities and their attributes can be retrieved.
Based on these inputs, the application uses the defined data entities to map, in the EAKR, the business functions to the architecture building blocks that respect the interoperability and infrastructure constraints defined in section 5.3. This matching relates the defined functional requirements to the business functions described in the ABB model: the ABBs that satisfy the conditions are selected together with their related SBBs and corresponding applications. The objective is to highlight potential integration problems when a selected ABB carries a data interoperability constraint that differs from the constraint defined in this phase C. Regarding the Application Architecture phase, the developer is guided to define the technical requirements using the same template as for the functional requirements; technology or infrastructure constraints may also be defined. These constraints are added to the previous ones to select the SBBs, as described in section 5. For instance, in this use case scenario, we define a technical requirement related to the targeted business function "Visualize CAD File": we target an SBB that manages CAD objects with the "STL" file extension and that is based on the JavaScript library Three.js. The action "Fetch ABBs" then triggers the selection, in the EAKR, of the targeted ABBs and related SBBs that respect the defined technical requirement for each business function, along with the technology constraints when defined. The result of this action is depicted in Figure 22.
Fig. 22 EACP Web Application - Phase C: Requirement Specification
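The selection step just described can be pictured as a simple filter over the qualified building blocks; the sketch below is only a schematic view of that logic, with invented field names (businessFunction, interoperabilityConstraint, technology, sbbs) standing in for the actual EAKR model.

```javascript
// Sketch only: select ABBs (and their SBBs) that serve a business function
// and respect the interoperability/technology constraints of phase C.
function selectBuildingBlocks(abbs, businessFunction, constraints) {
  return abbs
    .filter((abb) => abb.businessFunction === businessFunction.name)
    .filter(
      (abb) =>
        !constraints.interoperability ||
        abb.interoperabilityConstraint === constraints.interoperability
    )
    .map((abb) => ({
      abb: abb.name,
      // Keep only the SBBs compatible with the requested technology, e.g. Three.js.
      sbbs: abb.sbbs.filter(
        (sbb) => !constraints.technology || sbb.technology === constraints.technology
      ),
    }))
    .filter((match) => match.sbbs.length > 0);
}
```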
At this stage, these inputs make it possible to download a first version of the targeted business application based on the BPMN 2.0 specification. From the functional requirements defined in phase B, which are composed of user and service tasks, we generate an XML template (see Figure 18).
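As a rough illustration of that generation step, the function below turns a list of functional requirements into a minimal BPMN 2.0 process skeleton; the requirement structure ({ id, name, type }) is an assumption, and a real template such as the one in Figure 18 would also carry sequence flows, lanes, and service bindings.

```javascript
// Sketch only: generate a minimal BPMN 2.0 XML skeleton from functional requirements.
// Each requirement is assumed to be { id, name, type } with type 'user' or 'service'.
function generateBpmnTemplate(processId, requirements) {
  const tasks = requirements
    .map((req) =>
      req.type === 'service'
        ? `    <bpmn:serviceTask id="${req.id}" name="${req.name}" />`
        : `    <bpmn:userTask id="${req.id}" name="${req.name}" />`
    )
    .join('\n');
  return `<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL"
                  targetNamespace="http://example.org/eacp">
  <bpmn:process id="${processId}" isExecutable="false">
    <bpmn:startEvent id="start" />
${tasks}
    <bpmn:endEvent id="end" />
  </bpmn:process>
</bpmn:definitions>`;
}

// Hypothetical usage:
// generateBpmnTemplate('cadVisualizationProcess', [
//   { id: 'uploadCad', name: 'Upload CAD file', type: 'user' },
//   { id: 'visualizeCad', name: 'Visualize CAD file', type: 'service' },
// ]);
```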
Phase D: Technology Architecture
During the previous phase, technical and technological constraints were formalized. They are taken into account when matching the final SBBs, which are enriched in this phase with non-functional properties defining the Quality of Service expected from the existing services. The purpose is to determine which relevant resources are available in the EAKR repository and to ensure that the target system will meet the requirements and constraints. In the proposed use case scenario, we define an example of QoS requirement, depicted in Figure 23 (NFR List): we set the average instance time metric as a non-functional requirement applicable to all the targeted technical services. After validation, SBBs are ranked against the defined QoS threshold values.
Fig. 23 EACP Web Application - Phase D: Requirement Specification
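The ranking step can be sketched as follows; the metric name (averageInstanceTimeMs) and the SBB record shape are assumptions used only to illustrate how a threshold on the average instance time could order the candidate SBBs.

```javascript
// Sketch only: rank candidate SBBs by a QoS metric and a threshold,
// here the average instance time of the underlying service.
function rankSbbsByQos(sbbs, thresholdMs) {
  return sbbs
    .filter((sbb) => sbb.qos && sbb.qos.averageInstanceTimeMs <= thresholdMs)
    .sort((a, b) => a.qos.averageInstanceTimeMs - b.qos.averageInstanceTimeMs);
}

// The first element of the returned array is the SBB retained for implementation,
// as described in the next paragraph.
```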
For each business function, we select the first SBB resulting from the ranking process as the building block to reuse for the implementation of the targeted business application. As a final result, we obtain the last version of the implemented BPMN, with the related service endpoints of the solution building blocks.
Discussion
In this work, we propose an Enterprise Architecture Capability Profile specifically designed for service-oriented software, enabling the qualification, discovery, reuse, and sustainability of new business application developments. We demonstrate how the proposed approach can assist developers or architects in the qualification process using the semantic Enterprise Architecture Knowledge Repository, based on a proposed meta-model inspired mainly by TOGAF and the ISO 16100 standard and formalized using semantic web techniques. This offers a broader qualification process that deals with the two perspectives of services: the business perspective, which captures the value-in-use of the qualified feature for an organization interested in reuse, and the technical side, along with the quality of service of the feature encapsulated by the software service. An exploitation methodology is defined to overcome the use of ad-hoc methods for identifying the most suitable components or artifacts to reuse. The proposed solution is designed on the alignment of architecting actions with a requirements engineering process; the two evolve together, helping to investigate the highest functional compatibility of the desired functionalities and their related constraints.
As discussed in [START_REF] Eeles | Capturing architectural requirements[END_REF], on some projects architectural requirements can be significantly more important than their domain-specific equivalents (for instance, if we are designing a business application with high availability as an implementation constraint, the "up-time" metric becomes highly important). The proposed exploitation methodology carries the validation of the requirements and drives both the design of the foundations (i.e., the architecture) and the requirement definition of the business application being built. This means that, at the least, we offer the necessary structure for defining and validating architectural artifacts and requirement specifications and, at best, we propose templates and artifacts of previous projects or qualified solutions for recycling and reuse to meet the business need.
Regarding the exploitation process, one may notice that at run time the process finds few results, because no previous project with its related requirements had yet been introduced and capitalized. The outcome also depends on the number of qualified solutions and related services available as architecture and solution building blocks in the EAKR repository. Continuous qualification is therefore needed, and must be performed frequently, to take full advantage of the proposed methodology.
Conclusion
In this work, we defined the Enterprise Architecture Capability Profile that describes the business, operational, and technical aspects of service-oriented software. It is designed on the basis of an Enterprise Architecture Framework (TOGAF) and the best practices related to the implementation of ISO 16100 standard concepts. An exploitation methodology of the designed capability profile is proposed, based on the alignment of a requirements engineering process with the Architecture Development Method from TOGAF. The two evolve together to investigate the highest functional and technical compatibility of the desired functionalities and related constraints, respond to end-user requirements, and efficiently reuse the qualified solutions. Finally, we provided an implementation with an industrial use case to demonstrate the effectiveness of this approach. The concepts presented in this research work have been implemented as open-source prototypes based on Node JS and Java platforms. These prototypes cover the entire exploitation process that leads to the targeted ready-to-use business application.
Fig. 1 Proposed meta-model
Fig. 5 Phase A: alignment of architecture and requirements artifacts
Fig. 8 Example of architecture artifacts managed during phase B and related model
Fig. 10 Phase B: requirements template selection
Fig. 11 Phase C - Data Architecture: alignment of architecture and requirement artifacts
Fig. 12 Data Architecture artifacts example for Phase C
Fig. 14 Phase C - Application Architecture: alignment of architecture and requirement artifacts
Fig. 16 Phase D: Alignment of Architecture and Requirements artifacts
Fig. 17 Phase D: Technology architecture artifacts template example
Fig. 18 Example of an implemented BPMN resulted from Phase D
Fig. 19 EACP Web Application - Phase A: Architecture Artifacts
https://github.com/AbdBelf/EacpFramework
https://bul.univ-lyon2.fr/index.php/s/xsAMwEoYIbRYbLh
https://bul.univ-lyon2.fr/index.php/s/EfoSLyZwkHYbT9t
www.vf-OS.eu
http://www.fiware4industry.com
https://www.npmjs.com/package/string-similarity
Acknowledgment
This paper presents work developed in the scope of the project vf-OS. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 723710. The content of this paper does not reflect the official opinion of the European Union. Responsibility for the information and views expressed in this paper lies entirely with the authors.
04046595 | en | [
"info.info-lo"
] | 2024/03/04 16:41:24 | 2010 | https://inria.hal.science/hal-04046595v2/file/leiden.pdf | Gilles Dowek
email: [email protected]
From proof theory to theories theory
In the last decades, several objects such as grammars, economic agents, laws of physics... have been defined as algorithms. In particular, after Brouwer, Heyting, and Kolmogorov, mathematical proofs have been defined as algorithms. In this paper, we show that mathematical theories can also be defined as algorithms and that this definition has some advantages over the usual definition of theories as sets of axioms.
The logic and the theories
When constructing a proof, we use rules that are specific to the objects the proof speaks about and others that are ontologically neutral. We call the rules of the first kind theoretical rules and those of the second logical rules.
The position of the border delimiting the logic from the theories is not always clear and, for instance, it can be discussed if the notion of set belongs to the realm of logic or to that of a specific theory. It seems that the dominant point of view at the end of the XIX th and at the beginning of the XX th century-Frege, Whitehead and Russell, ...-was that this notion of set, or of concept, was logical. But Russell's paradox seems to have finally ruined this point of view: as we must give up full comprehension, and either restrict comprehension to specific propositions, as in Zermelo set theory, or add a type system as in the Principia Mathematica, there are too many ways to do so-with or without the replacement axiom, with or without transfinite types-hence the notion of set is theoretical. We may speculate that, even if full comprehension had not been contradictory, this idea that the notion of set is ontologically neutral would have been finally given up, as even with full comprehension, there still would have been choices to be made for the set existence axioms: first because comprehension depends on the choice of a language, then because other set existence axioms, such as the axiom of choice may be added.
This understanding of the theoretical nature of the notion of set has led, at the end of the twenties, to a separation of set theory from predicate logic and to the constitution of predicate logic as an autonomous object. As a corollary, it has been assumed that any theory should be expressed as a set of axioms in predicate logic. As we shall see, and this will be the main thesis of this paper, this corollary is far from being necessary.
From a historical point of view
The constitution of predicate logic as an autonomous object, independent of any particular theory, and the simplicity of this formalism, compared to any particular theory such as geometry, arithmetic, or set theory, have led to the development of a branch of proof theory that focuses on predicate logic. A central result in this branch of proof theory is the cut elimination theorem: if a sequent Γ ⊢ A has a proof, it also has a cut-free proof, both in natural deduction and in sequent calculus.
Once we have a notion of proof of a sequent in predicate logic, we may define a notion of proof of a sequent in a theory-i.e. a set of axioms-T , and say that a proof of the sequent Γ ⊢ A in the theory T is a proof in predicate logic of a sequent Γ, T ′ ⊢ A, where T ′ is a finite subset of T . And from results about proofs in predicate logic, we may deduce results about proofs in arbitrary theories. For instance, a cut elimination theorem: if a sequent Γ ⊢ A has a proof in a theory T , it has a cut free proof in this same theory.
Yet, although such results do have some content, they often are disappointing: they do not say as much as we could expect. For instance, the cut free proofs of a sequent ⊢ A in predicate logic always end with an introduction rule, but this is not the case for the cut free proofs in a theory T. Thus, while a corollary of the cut elimination theorem for predicate logic is that there is no proof of the sequent ⊢ ⊥ in predicate logic, this corollary does not extend to arbitrary theories. And, indeed, there are contradictory theories.
Thus, this definition of a cut free proof of a sequent Γ ⊢ A in a theory T as a cut free proof of a sequent Γ, T ′ ⊢ A for some finite subset T ′ of T is not satisfactory, as the existence of such cut free proofs does not imply the consistency of the theory. In the same way, it does not imply the disjunction or the witness property for constructive proofs.
Thus, this approach to proof theory focused on predicate logic has been unable to handle theories in a satisfactory way. This may be a sign that the definition of the notion of theory as a set of axioms is too general: not much can be said about a too large class of objects and a more restrictive notion of theory might allow to say more about theories.
From a historical point of view, proof theory has sidestepped this problem in at least three ways. First, some results, such as cut elimination theorems, have been proven for specific theories, for instance arithmetic. Virtually any set of axioms yields a new branch of proof theory. Then, the separation between the logic and the theories has been criticized and extended logics have been considered. Typical examples are second-order logic, higher-order logic, Intuitionistic type theory [START_REF] Martin-Löf | Intuitionistic type theory[END_REF], the Calculus of constructions [START_REF] Coquand | The Calculus of constructions[END_REF], ... Finally, the idea that the proofs studied by proof theory are proofs of mathematical theorems, and thus require a theory, has been given up and proofs have been studied for their own sake. A typical example is linear logic [START_REF] Girard | Linear logic[END_REF].
The thesis we shall develop in this paper is that there is another possible way to go for proof theory: modify the notion of theory so that it can be properly handled, without focusing on a single theory, such as arithmetic, and without introducing logics beyond predicate logic, such as second-order logic. This way leads to a branch of proof theory that may be called the theory of theories.
Some technical problems with axioms
We have seen that the cut elimination theorem for predicate logic implies that there is no proof of the sequent ⊢ ⊥ in predicate logic, but that this result does not extend to an arbitrary theory. Let us now consider a more evolved example.
A consequence of the cut elimination theorem is that a bottom up proof search in sequent calculus is complete, even if it is restricted to cut free proofs. In the search for a proof of the sequent ⊢ ⊥, no rule of the cut free sequent calculus applies-except structural rules in some versions of the sequent calculus-and the search fails in finite time. In contrast, the search for a cut free proof of the sequent ⊢ ⊥ in the theory ∀x (P (x) ⇔ P (f (x))), i.e. for a cut free proof of the sequent ∀x (P (x) ⇔ P (f (x))) ⊢ ⊥ in predicate logic, generates an infinite search space. Thus, although the cut elimination theorem allows to reduce the size of the search space, by restricting the search to cut free proofs, it does not allow to reduce it enough so that the search for a proof of a contradiction in the theory ∀x (P (x) ⇔ P (f (x))) fails in finite time. This proof search method "does not know" [START_REF] Dowek | What do we know when we know that a theory is consistent? R. Nieuwenhuis[END_REF] that this theory is consistent and indeed the cut elimination theorem for predicate logic does not imply, per se, the consistency of any non-empty set of axioms.
In order to design a complete proof search method for this theory, that fails in finite time when searching for a proof of ⊢ ⊥, we may attempt to prove a cut elimination theorem, specialized for this theory, that implies its consistency, in the same way as the cut elimination theorem for arithmetic implies the consistency of arithmetic.
However, when attempting to follow this path, the first problem we face is not to prove this cut elimination theorem, it is to state it: to define a notion of cut specialized for this theory. Indeed, although there is an ad hoc notion of cut associated to the axioms of arithmetic-a sequence where a proposition is proven by induction and then used in a particular case-there is no known way to associate a notion of cut to an arbitrary set of axioms.
Among the tools used in proof theory, only one generalizes smoothly from predicate logic to arbitrary sets of axioms: the notion of model. Indeed, not only the definition of the notion of model extends from predicate logic to arbitrary sets of axioms, but the completeness theorem also does. In this respect, there is a clear contrast between the notion of model and that of cut. And the definition of the notion of theory as a set of axioms appears to be tailored for the notion of model and not for that of cut.
Two partial solutions
There are several cases where this problem of the definition of the notion of cut associated to a theory has been addressed and where partial solutions have been proposed. When enumerating these cases, we also have to include cases where a proof search method has been proven complete. Indeed, as shown by Olivier Hermant [START_REF] Hermant | Resolution modulo as the cut-free fragment of sequent calculus modulo[END_REF], the completeness of a proof search method is often not only a consequence of a cut elimination theorem, but it is also equivalent to such a theorem.
Let us start with a slightly artificial example: that of definitions. If we want to use a new proposition symbol P as an abbreviation for a proposition A ∧ B, we can either keep the same language and consider each occurrence of the symbol P as a notational variant for A ∧ B, or extend the language with the symbol P and add the axiom P ⇔ (A ∧ B). Obviously, the same propositions can be proven in both cases and from the cut elimination theorem for predicate logic, we should be able to derive the consistency of the empty theory, and hence that of the theory P ⇔ (A∧B). In the same way, from the cut elimination theorem for predicate logic, we should be able to derive the completeness of a proof search method that first uses the axiom P ⇔ (A∧B) to replace all the occurrences of P by A ∧ B in the proposition to be proven and then searches for a cut free proof of the obtained proposition.
A direct cut elimination proof for this theory can be given following an idea of Dag Prawitz [START_REF] Prawitz | Natural deduction. A Proof-Theoretical Study[END_REF]. In constructive natural deduction, we can replace the axiom P ⇔ (A ∧ B) by two theoretical deduction rules
Γ ⊢ A ∧ B
───────── fold
Γ ⊢ P

and

Γ ⊢ P
───────── unfold
Γ ⊢ A ∧ B
and extend the notion of cut in such a way that the sequence of a fold rule and an unfold rule is a cut. Cut elimination can be proven for this theory and, like in predicate logic, a cut free proof always ends with an introduction rule (considering the fold rule as an introduction rule and the unfold rule as an elimination rule). Thus, all the corollaries of cut elimination that are based on the fact that the last rule of a cut free proof is an introduction rule, such as consistency, the disjunction property, the witness property, ... generalize.
Of course, there is nothing specific to the proposition A∧B in this example and we can do the same thing with any theory of the form P 1 ⇔ A 1 , ..., P n ⇔ A n where P 1 , ..., P n are proposition symbols that do not occur in A 1 , ..., A n . This class of theories is quite small and it is fair to say that all these theories are trivial. Yet, we may notice that a notion of cut has been defined for a class of theories that contains more than just one theory. Besides this quite artificial example of definitions, this idea of using theoretical deduction rules has been investigated by Dag Prawitz [START_REF] Prawitz | Natural deduction. A Proof-Theoretical Study[END_REF], Marcel Crabbé [START_REF] Crabbé | Non-normalisation de la théorie de Zermelo[END_REF][START_REF] Crabbé | Stratification and cut-elimination[END_REF], Lars Hallnäs [START_REF] Hallnäs | On normalization of proofs in set theory[END_REF], Jan Ekman [START_REF] Ekman | Normal proofs in set theory[END_REF], Sara Negri and Jan von Plato [START_REF] Negri | Structural proof theory[END_REF], ... More precisely, the fold and unfold rules can be used each time we have a theory of the form ∀(P 1 ⇔ A 1 ), ..., ∀(P n ⇔ A n ) where P 1 , ..., P n are arbitrary atomic propositions. In particular, this can be applied successfully to set theory where, for instance, the axiom ∀x∀y (x ∈ P(y) ⇔ ∀z (z ∈ x ⇒ z ∈ y)) can be replaced by two rules
Γ ⊢ ∀z (z ∈ x ⇒ z ∈ y)
────────────────────── fold
Γ ⊢ x ∈ P(y)

and

Γ ⊢ x ∈ P(y)
────────────────────── unfold
Γ ⊢ ∀z (z ∈ x ⇒ z ∈ y)
A remark due to Marcel Crabbé [START_REF] Crabbé | Non-normalisation de la théorie de Zermelo[END_REF] is that there are theories for which cut elimination does not hold. For instance with the rules
Γ ⊢ P ⇒ Q
───────── fold
Γ ⊢ P

and

Γ ⊢ P
───────── unfold
Γ ⊢ P ⇒ Q
we have a proof of Q but no cut free proof of Q. Yet, whether or not cut elimination holds, the notion of cut can be defined and the cut elimination problem can be stated for all these theories.
A second example comes from an attempt to handle equality in automated theorem proving. It has been remarked for long that considering equality axioms such as ∀x∀y∀z (x + (y + z) = (x + y) + z)
together with the axioms of equality generates a very large search space, even when this search is restricted to cut free proofs. For instance, attempting to prove the proposition
P ((a + b) + ((c + d) + e)) ⇒ P (((a + (b + c)) + d) + e)
we may move brackets left and right in many ways, using the associativity axiom and the axioms of equality, before reaching a proposition that has the form A ⇒ A. An idea, that goes back to Max Newman [START_REF] Newman | On theories with a combinatorial definition of "equivalence[END_REF], but that has been fully developed by Donald Knuth and Peter Bendix [START_REF] Knuth | Simple word problems in universal algebras[END_REF], is to replace the associativity axiom by the rewrite rule
x + (y + z) -→ (x + y) + z

that allows to transform the proposition P ((a + b) + ((c + d) + e)) into P ((((a + b) + c) + d) + e) but not vice-versa. If the rewrite rules form a terminating and confluent system, then two terms are provably equal if and only if they have the same normal form. Normalizing the proposition above yields the provably equivalent proposition P ((((a + b) + c) + d) + e) ⇒ P ((((a + b) + c) + d) + e)
that is provable without the associativity axiom. Yet, even if the rewrite system is confluent and terminating, normalizing propositions at all times is not sufficient to get rid of the associativity axiom. Indeed, the proposition
∃x (P (a + x) ⇒ P ((a + b) + c))
is provable using the associativity axiom, but its normal form, i.e. the proposition itself, is not provable without the associativity axiom. In particular, the propositions P (a + x) and P ((a + b) + c) are not unifiable, i.e. there is no substitution of the variable x that makes these two propositions identical. The explanation is that, in the proof of this proposition, we must first substitute the term b + c for the variable x and then rewrite the proposition P (a + (b + c)) ⇒ P ((a + b) + c) to P ((a + b) + c) ⇒ P ((a + b) + c) before we reach a proposition of the form A ⇒ A. To design a complete proof search method, the unification algorithm must be replaced by an algorithm that searches for a substitution that does not make the two propositions equal, but let them have the same normal form. Such an algorithm is called an equational unification algorithm modulo associativity. And Gordon Plotkin [START_REF] Plotkin | Building-in equational theories[END_REF] has proven that when unification is replaced by equational unification modulo associativity, the associativity axiom can be dropped without jeopardizing the completeness of the proof search method.
This proof search method is much more efficient than the method that searches for a cut free proof in predicate logic using the associativity axiom and the axioms of equality. In particular, the search for a cut free proof of the sequent ⊢ ⊥ with equational unification fails in finite time, unlike the search of a cut free proof of the sequent ⊢ ⊥, using the associativity axiom and the axioms of equality.
When formulated, not in a proof search setting, but in a purely proof theoretical one, Plotkin's idea boils down to the idea that two propositions such as P (a + (b + c)) and P ((a + b) + c), which have the same normal form, should be identified. We have called Deduction modulo [START_REF] Dowek | Theorem proving modulo[END_REF][START_REF] Dowek | Proof normalization modulo[END_REF] this idea of reasoning modulo a congruence on propositions, i.e. an equivalence relation that is compatible with all the symbols of the language. Then, the need to replace unification by equational unification is simply a consequence of the fact that when unifying two propositions P and Q, we must find a substitution σ such that σP and σQ are identical modulo the congruence. The possibility to drop the associativity axiom is just a consequence of the fact that this axiom is equivalent, modulo associativity, to the proposition ∀x∀y∀z ((x + y) + z = (x + y) + z), which is a consequence of the axiom ∀x (x = x). Finally, the completeness of proof search modulo associativity is a consequence of the fact that, when terms are identified modulo associativity in predicate logic, cut elimination is preserved because cut elimination ignores the inner structure of atomic propositions. Thus, a cut in Deduction modulo is just a sequence formed with an introduction rule and an elimination rule, like in predicate logic, the only difference being that these rules are applied modulo a congruence.
This remark generalizes to all equational theories, i.e. to all the theories whose axioms have the form ∀ (t = u) and we get this way a cut elimination theorem for a class of theories that contains more than one theory and from this cut elimination theorem, we can rationally reconstruct the proof search methods for equational theories designed by Gordon Plotkin.
Deduction modulo
Deduction modulo is thus a tool that permits to associate a notion of cut to any equational theory.
But this notion of cut also subsumes that introduced with the fold and unfold rules. Indeed, if we start with a theory formed with the axioms ∀ (P 1 ⇔ A 1 ), ..., ∀ (P n ⇔ A n ), instead of replacing these axioms by theoretical deduction rules, we can replace them by rewrite rules P 1 -→ A 1 , ..., P n -→ A n that rewrite propositions directly-and not only terms that occur in propositions-and consider the congruence defined with these rules. We obtain this way a notion of cut that is equivalent to the notion of cut introduced with the fold and unfold rules: not all theories have the cut elimination property, but a theory has the cut elimination property in one case if and only if it has this property in the other. Although it is not difficult to prove, this result is not completely trivial because Deduction modulo allows some deep inferences-a rewrite rules can be used at any place in a proposition, while the fold and unfold rules allow shallow inferences only, i.e. rewriting at the root, and this is essential for the cut elimination to work with the fold and unfold rules.
By an abuse of language, when Deduction modulo a congruence has the cut elimination property, we sometimes say that the congruence itself has the cut elimination property.
Thus, Deduction modulo is framework where a theory is not defined as a set of axioms but as a set of rewrite rules and this framework unifies the notions of cut defined by Dag Prawitz and by Gordon Plotkin.
From axioms to algorithms
An important question about Deduction modulo is how strong the congruence can be. For instance, can we take a congruence such that A is congruent to ⊤ if A is a theorem of arithmetic, in which case each proof of each theorem is a proof of all theorems? This seems to be a bad idea, for at least two reasons. First, the decidability of proof checking requires that of the congruence. Then, for the cut free proofs to end with an introduction rule, the congruence must be non-confusing, i.e. when two non-atomic propositions are congruent, they must have the same head symbol and their sub-parts must be congruent. For instance, if both propositions are conjunctions, A ∧ B and A ′ ∧ B ′ , A must be congruent to A ′ and B to B ′ .
Both properties are fulfilled when the congruence is defined by a confluent and terminating rewrite system where the left hand side of each rule is either a term or an atomic proposition, such as P (0) -→ ∀x P (x). But, we sometimes consider cases where the congruence is defined by a rewrite system that is not terminating, for instance if it contains the rule x + y -→ y + x or the rule P -→ P ⇒ Q. Nevertheless, the congruence defined this way may still be decidable and non-confusing.
Cut elimination is a third property that may be required in the definition of the notion of theory.
As the rewrite rules define a decidable congruence, we may say that they define an algorithm. And indeed, we know that the rewrite rules that replace the axioms of the addition

0 + y -→ y
S(x) + y -→ S(x + y)

define an algorithm for addition. Thus, Deduction modulo is also a way to separate the deduction part from the computation part in proofs. This idea, formulated here in the general setting of predicate logic, had already been investigated in some specific systems such as Intuitionistic type theory [START_REF] Martin-Löf | Intuitionistic type theory[END_REF] and the Calculus of constructions [START_REF] Coquand | The Calculus of constructions[END_REF], where a decidable definitional equality is distinguished from the usual propositional equality. Also, the idea of transforming theorems into rewrite rules has been used by Robert S. Boyer and J. Strother Moore in the theorem prover ACL [START_REF] Boyer | A theorem prover for a computational logic[END_REF].
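To make this computational reading concrete, here is a small worked reduction using only the two rules above, showing that the term S(S(0)) + S(S(0)) normalizes to S(S(S(S(0)))), i.e. that 2 + 2 computes to 4; the intermediate steps are spelled out purely for illustration.

S(S(0)) + S(S(0))
-→ S(S(0) + S(S(0)))
-→ S(S(0 + S(S(0))))
-→ S(S(S(S(0))))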
But the novelty of Deduction modulo is that rewrite rules are used as a substitute for axioms and that provability with rewrite rules is proven to be equivalent to provability with axioms. Thus, the rewrite rules are used to express the theory, while the theory is not distinguished from the logic in Intuitionistic type theory, in the Calculus of constructions or in the Boyer-Moore logic.
Thus, it seems that Deduction modulo gives an original answer to the question "What is a theory ?" and this answer is "An algorithm".
This answer is part of a general trend, in the last decades, to answer "An algorithm" to various questions, such as "What is a grammar ?", "What is an economical agent ?", "What is a law of physics ?", "What is a proof ?", ... and this answer may be given to more questions in a near future, e.g. to the question "What is a cell ?"
It must be noticed, however, that the idea that a proof is an algorithm, the Brouwer-Heyting-Kolomogorov interpretation of proofs, may have somehow been an obstacle to our understanding that a theory also is an algorithm. The success of the algorithmic interpretation of proofs seems to have implicitly promoted the idea that this algorithmic interpretation of proofs was the link between the notion of algorithm and that of proof, making it more difficult to remark this other link.
But in the same way computer scientists know that a program expresses an algorithm and that the algorithm used to check that this program is correctly typed is another algorithm, it seems that proofs are algorithms and that their correctness criterion is parametrized by another algorithm: the theory.
Extended logics
This new answer to the question "What is a theory?" gives the possibility to define a general notion of cut for all the theories defined in this way and to prove general cut elimination results that apply to large classes of theories.
It gives the possibility to avoid two of the strategies mentioned above to side step the inability of proof theory of predicate logic to handle theories in a satisfactory way: introduce extended logics, such as second-order or higher-order logic, and focus on particular theories, such as arithmetic.
Second-order and higher-order logics have been studied thoughtfully in the second half of the XX th century. Higher-order logic has a special notion of model and a special completeness theorem [START_REF] Henkin | Completeness in the theory of types[END_REF], a special cut elimination theorem [START_REF] Girard | Une extension de l'interprétation de Gödel à l'analyse, et son application à l'élimination des coupures dans l'analyse et la théorie des types[END_REF], special proof search algorithms [START_REF] Andrews | Resolution in type theory[END_REF][START_REF] Huet | A mechanisation of type theory[END_REF], even a special notion of substitution and a special Skolem theorem [START_REF] Miller | A Compact representation of proofs[END_REF].
The history of the notion of higher-order substitution could in itself have led to Deduction modulo. Before the modern view that substituting the term λx (Q(f (x))) for the variable P in the proposition (P 0) yields the proposition (λx (Q(f (x))) 0), which is provably equivalent to Q(f (0)) using some conversion axioms, this substitution yielded directly the proposition Q(f (0)). Thus, normalization was included in the substitution operation.
Then, the substitution operation was simplified to an operation that is almost the substitution of predicate logic, and conversion axioms were introduced in the formulation of higher-order logic given by Alonzo Church [START_REF] Church | A formulation of the simple theory of types[END_REF]. Shortly after, the idea of incorporating these axioms in the unification algorithm was proposed by Peter Andrews [START_REF] Andrews | Resolution in type theory[END_REF] and this yielded higher-order unification [START_REF] Huet | A unification algorithm for typed λ-calculus[END_REF]. Although Andrews' idea seems to be independent from Plotkin's, higher-order unification is equational unification modulo beta-conversion.
Finally, in Deduction modulo, the conversion axioms are included in the congruence and unification is performed modulo this congruence.
The story is just slightly more complex because besides separating substitution from conversion, Church's formulation of higher-order logic introduces a symbol λ that binds a variable, thus the substitution operation in this logic must handle binders. In the same way, besides being equational, higher-order unification is also nominal [START_REF] Ch | Nominal unification[END_REF], i.e. it involves terms containing binders.
Yet, lambda bound variables can be eliminated from higher-order logic, for instance using combinators or de Bruijn indices and explicit substitutions and the last idiosyncrasy of higher-order logic, the absence of separation between terms and propositions can be avoided by introducing a "unary copula" ε transforming a term of type o into a propositions. This way, higher-order logic can be defined as a theory in Deduction modulo [START_REF] Dowek | HOL-lambda-sigma: an intentional firstorder expression of higher-order logic[END_REF], and in this case it is better to call it Simple type theory, to stress its theoretical nature. We avoid this way the need of a special notion of model, a special completeness theorem, a special cut elimination theorem [START_REF] Dowek | Proof normalization modulo[END_REF], special proof search algorithms [START_REF] Dowek | Theorem proving modulo[END_REF][START_REF] Dowek | Polarized resolution modulo[END_REF], a special notion of substitution and a special Skolem theorem [START_REF] Dowek | Skolemization in simple type theory: the logical and the theoretical points of view[END_REF]. Some formulations of second-order logic, such as the Functional second-order arithmetic of Jean-Louis Krivine and Michel Parigot [START_REF] Leivant | Reasoning about functional programs and complexity classes associated with type disciplines[END_REF][START_REF] Krivine | Programming with proofs[END_REF], even take more advantage of the notion of congruence when expressed as theories in Deduction modulo [START_REF] Dowek | Specifying programs with propositions and with congruences[END_REF].
All the theories expressed in Deduction modulo verify an extended subformula property, and so does higher-order logic. How is this compatible with the well-known failure of the sub-formula property for higher-order logic? First, we have to notice that, strictly speaking, the sub-formula property holds for propositional logic only. The notion of sub-formula has to be adjusted for the sub-formula property to hold for predicate logic: the set of subformulae of a proposition has to be defined as the smallest containing this proposition, that is closed by the sub-tree relation and also by substitution. This way, the proposition ∀x P (x) has an infinite number of sub-formulae as all the instances of the proposition P (x) are sub-formulae of the proposition ∀x P (x). In Deduction modulo, besides being closed by the sub-tree relation and substitution, this set must be closed by the congruence. Thus, if we have a rule P -→ Q ⇒ Q, the sub-formulae of the proposition P are P and Q. Using this definition, the set of sub-formulae of some propositions in higher-order logic contains all the propositions. This is what is usually called the failure of the sub-formula property for higher-order logic.
This ability of predicate logic to handle theories changes the organization of proof theory as a field of knowledge by giving back predicate logic a central place and reformulating in a wider setting the results specially developed for second-order and higher-order logic.
Particular theories
In the same way, the results developed for a particular theory, such as arithmetic are just consequences of general results, once these theories have been expressed as algorithms. And several theories have been expressed in this way, in particular arithmetic [START_REF] Dowek | Arithmetic as a theory modulo[END_REF][START_REF] Allali | Algorithmic equality in Heyting arithmetic modulo[END_REF] and some versions of set theory [START_REF] Dowek | Cut elimination for Zermelo's set theory[END_REF], although some other theories, such as geometry, have not been investigated yet.
The case of arithmetic is interesting as it shows how expressing a theory as an algorithm sheds a new light on this theory. There are two ways to express arithmetic in predicate logic, either, as Peano did, using a predicate N characterizing natural numbers, or not. That a natural number is either 0 or a successor is expressed in the first case by the proposition
∀x (N (x) ⇒ (x = 0 ∨ ∃y (N (y) ∧ x = S(y))))
and in the second by the proposition ∀x (x = 0 ∨ ∃y (x = S(y)))
This simpler formulation is usually preferred. Yet, the formulation of arithmetic in Deduction modulo does use the Peano predicate N . This formulation was originally considered as a first step, before we understood how to eliminate this predicate. Yet, it appears that this predicate cannot be eliminated. Indeed, the formulation of arithmetic without this predicate has a strange property: in the constructive case, it verifies the closed disjunction property but not the open one. That is, if a closed proposition of the form A ∨ B is provable, then either A or B is provable, but some open propositions of the form A∨B are provable, while neither A nor B is, e.g. x = 0∨∃y (x = S(y)). And any theory expressed in Deduction modulo that enjoys the cut elimination property verifies the full disjunction property. This means that not all sets of axioms can be transformed into a nonconfusing congruence that has the cut elimination property. And this is a strength, not a weakness, of Deduction modulo. Deduction modulo rules out some theories, e.g. contradictory theories, theories that do not verify the disjunction property in the constructive case, ... and the counterpart of this is that strong results can be proven about the theory that can be expressed in Deduction modulo, such as consistency or the disjunction property in the constructive case. The lack of strong results in the theory of theories when theories were defined as sets of axioms is a consequence of the fact that this notion of theory was too general, and not much can be said about a too large class of objects.
This remark raises a difficult but interesting question. What are the properties that a set of axioms must fulfill to be transformed into a non-confusing congruence having the cut elimination property? A recent unpublished result of Guillaume Burel [5] suggests that, in the classical case, up to skolemization, consistency is a sufficient condition. In the constructive case, the problem is open. As all theories expressed by a non-confusing congruence that has the cut elimination property are consistent, and enjoy the disjunction and the witness property, these conditions are necessary for a theory to be expressed as a non-confusing congruence that has the cut elimination property, and, for instance, the axiom P ∨ Q and the axiom ∃x P (x) cannot be expressed in this way. Whether these conditions are sufficient is open.
First chapter of a theory of theories
Deduction modulo has already lead to general cut elimination results [START_REF] Dowek | Proof normalization modulo[END_REF] that subsume cut elimination results for arithmetic and for higher-order logic. It also has lead to proof search algorithms that subsume both equational resolution and higher-order resolution [START_REF] Dowek | Theorem proving modulo[END_REF][START_REF] Dowek | Polarized resolution modulo[END_REF] and to the development of an extended notion of model, where two congruent propositions are interpreted by the same truth value, but not necessarily two provably equivalent propositions [START_REF] Dowek | Truth values algebras and proof normalization[END_REF]. It has also lead to a formulation of a lambda-calculus with dependent types called λΠ-modulo [START_REF] Cousineau | Embedding pure type systems in the lambda-Pi-calculus modulo[END_REF], that permits to express proofs in Deduction modulo as algorithms and that subsumes the Calculus of constructions and more generally all functional Pure type systems [START_REF] Barendregt | Lambda calculi with types[END_REF]. The important point with this calculus is that it completely de-correlates the issue of the functional interpretation of proofs from the problem of the choice of a theory. The functional interpretation of proofs is handled by the dependent types and the theory by the congruence. There is no special link between some typing disciplines-such as polymorphism-and some theories-such as the second-order comprehension scheme.
But, the most interesting about Deduction modulo is that it permits to formulate problems that could not be formulated when theories were defined as sets of axioms. In particular, we may seek for a characterization of the theories that have the cut elimination property or of those that have the proof normalization property. Olivier Hermant has given examples showing that cut elimination does not imply normalization [START_REF] Hermant | Méthodes sémantiques en Déduction modulo[END_REF]. Then, Denis Cousineau [START_REF] Cousineau | A completeness theorem for strong normalization in minimal Deduction modulo[END_REF] has given the first condition both sufficient and necessary for normalization. The fact that this condition is model theoretic suggests that the notion of normalization is also a model-theoretic notion.
This problem and the problem of the characterization of the theories that can be expressed in Deduction modulo are two among the main open problems in this area. Defining a nominal Deduction modulo, i.e. including function symbols that may bind variables is also an important one.
If we look at this theory of theories from outside, we see that some aspects that were minor in the proof theory of predicate logic become major aspects: the link between the notion of cut and that of model, the link between proof theory and automated theorem proving, and also many-sorted predicate logic, as, unlike axioms, rewrite rules are difficult to relativize, ... On the other hand, some topics that have been central in proof theory in the last decades become less central: the case of higher-order logic and the functional interpretation of proofs. These notions are not given up, but, like the notion of triangle in geometry, they just lose their central place by being included in a larger picture.
04111939 | en | [
"shs"
] | 2024/03/04 16:41:24 | 2018 | https://hal.science/hal-04111939/file/Parole%20d%27indigents.pdf | Paroles d'indigents : une autre histoire des workhouses à l'époque victorienne
Fabienne Moine
The history of paupers placed in institutions has generally been written at their expense. Pushed to the margins of society by the general system of poor relief, those who could no longer provide for themselves were also reduced to silence. This, at least, is what the traditional historiography of the workhouses seems to assert. In 1983, Anne Crowther pointed out that sources produced by inmates were almost non-existent. Giving paupers a voice would be entirely legitimate, in her view, but it would amount to no more than offering a "patchwork" of letters and recollections from the most literate of the poor 1. The assessment made by Jacques Carré some thirty years later is no more optimistic: "The memory of hundreds of thousands of paupers in the workhouse over the space of three centuries remains desperately silent" 2.
Admittedly, these sources are minute compared with the profusion of classic official sources such as inspection reports, parliamentary papers, pamphlets, tracts, parish registers and press articles. To these historical sources can be added artistic productions, novels or poems, which take the workhouse as their setting. Little Oliver Twist is no doubt the most famous Victorian hero to have gone through the painful experience of this institution, but Mrs Trollope, George Eliot or Thomas Hardy 3, along with a cohort of Victorian poets, also denounced its inhuman living conditions. All these sources make it possible to grasp the enormous impact the workhouse had on Victorian culture and imagination. It is under constant observation by the State, legislators, inspectors, staff, visitors, writers and the curious; it now seems important to know the inmates' own point of view on their condition.
The point is not to overestimate the place occupied by paupers' writings; but it seems that the workhouse, despite its carceral character, manages in some cases to stimulate the pauper's voice: on the one hand, the harsh, even abject, living conditions inside the establishment lead some paupers to take up the pen to express their disagreement, or even their hatred of the system. On the other hand, the workhouse, unlike the street, provides a framework (a roof, a daily rhythm, a form of material stability) that makes writing possible. Without the workhouse there would be no paupers' words, except those that the first sociologists and social reformers put into the mouths of the poor, as Henry Mayhew did in his exploration of the poor districts of London, "the undiscovered country of the poor" 4, during the 1860s. But once the pauper's speech has been truncated, rewritten and sometimes civilised in his London Labour and the London Poor, can one still determine with any precision the words that were actually spoken? Mayhew, whose investigative practice anticipates that of the investigative journalists of the late nineteenth century, could easily be accused of catering to the voyeuristic appetite of his readers. It is therefore perhaps not among the social explorers that the pauper's voice should be sought.
The scarcity of paupers' writings is explained by insurmountable material conditions, a lack of education, too few opportunities to write, little intellectual stimulation or encouragement, as well as the absence of a potential readership and of a market for this kind of writing. Identifying these documents is therefore no easy task 5. Yet once gathered, these sources prove particularly interesting since, unlike official documents, they reveal an unanticipated level of resistance and sometimes even an artistic ambition among the residents.
This study brings together several forms of documents written by paupers between the New Poor Law of 1834, which reformed poor relief, and the last decades of the nineteenth century, that is, at a time when living conditions in the workhouse were terrible, before the beneficial influence of the Visiting Ladies in the 1870s and the somewhat more favourable reforms and laws of the early twentieth century. In the letters of complaint, examined first in this research, the description of the institution seems at first sight consistent with actual conditions. But these documents do not only bear witness to the resident's deference and acceptance of oppression: in writing the history of their poverty, some paupers also let a few signs of rebellion against the institution show through. The autobiographies of former paupers, which form the subject of the second part, are more complete but less reliable, especially when several decades have elapsed between the experience of the workhouse and its written rendering. These autobiographies are testimonies about life on poor relief, but they also show the ways in which the former pauper seeks to make sense of his workhouse experience and to fight the feeling of loss of identity that strikes the inmate of a total institution 6. Thirdly, this study turns to the "hidden transcripts" 7 that James Scott attributes to the forms of defiance and resistance of those who can hardly make themselves heard on the public stage. These forms of protest are recorded both in the writings of inmates and at the heart of the hidden transcripts of the vagrants who frequented the workhouses by the day and for whom the institution was as much a totalitarian space as a place where they could build their social identity. The research ends with an exploration of the poetry produced in the workhouse. This extremely rare source is a precious testimony for understanding how the structure of the institution could bring creative forms out of obscurity.
Despite their small number, these sources shed light on grey areas in the history of the workhouses, where the light usually comes only from the institutional centre. For the pauper, writing no doubt represents a way of escaping this carceral universe. In the words of Jacques Carré in La Prison des pauvres (2016), the workhouse is a "machine for exclusion" 8. It is a "Bastille", the vernacular term under which it was very often identified, referring to the famous Parisian prison with its arbitrariness and its absurd workings. But may we not also consider that the pauper's taking up of the pen is meant to give an account of life within the community, with its specific social and cultural practices, of the bond tying the pauper to the group, and of the wish to recover in the collective a form of lost recognition or dignity? Besides giving a voice to the "voiceless", this study offers a less bleak vision of life in the workhouse, since these documents written by paupers show that they try to regain their dignity, to rebuild their individual and collective identity, to build networks of solidarity and, often, to reappropriate the space of constraint and turn it into a place where their speech is less limited or less formatted.
Les lettres de plainte, témoignages en pointillé
Cette étude s'appuie sur quelques dizaines de lettres préservées aux Archives Nationales, infime partie de la multitude de lettres de plainte sans doute reçues par le Bureau central de l'Assistance publique. En considérant les réponses transmises au greffier du bureau local des surveillants, on réalise que la pratique de la lettre de plainte est extrêmement courante. Aucune des lettres consultées ne remet en cause le système général de l'assistance publique ni l'idéologie qu'il véhicule. Elles mettent plutôt en évidence les défaillances constatées dans la prise en charge du quotidien, au niveau des besoins du pensionnaire. Les indigents se plaignent de leurs conditions de vie, principalement de la nourriture en quantité insuffisante, du manque de soins médicaux, de mauvais traitements et de la gestion indigne des infirmes. Ainsi un indigent de la Kidderminster Union Workhouse, W. Miles, se plaint, dans sa lettre du 10 janvier 1853, de la nourriture et des punitions infligées aux enfants et aux simples d'esprit9 . Eliza France, pensionnaire de la même institution, confirme les allégations de violence trois ans plus tard, le 3 décembre 185610 .
Les cibles privilégiées des attaques sont le médecin, les infirmières, plus rarement les surveillants en chef, matrone ou maître, pour lesquels l'indigent demande généralement une inspection. L'idéologie bienfaitrice du système n'est pas remise en cause puisque c'est vers ceux qui la construisent que se tournent naturellement les indigents. Ils ont foi dans l'institution qui prend généralement les requêtes en considération. Le représentant de la commission centrale va presque systématiquement faire une réponse qu'il transmet au greffier du Bureau des surveillants et dans laquelle il précise si une visite devra être effectuée ou non ou si le Bureau des surveillants doit se saisir de l'affaire. Les demandes sont scrupuleusement étudiées, même si les plaignants sont souvent déboutées. Les réponses systématiques aux lettres conduisent sans doute les indigents à avoir confiance dans l'administration qu'ils imaginent toute puissante et à avoir une attitude de défiance face au personnel qu'ils côtoient chaque jour.
Les codes de la lettre administrative sont respectés puisque la récrimination est toujours faite selon un cadre formel strict. Les lettres indiquent une grande déférence, proche de la servilité, de la part du demandeur. La narration des événements qui conduisent à la plainte est détaillée, même si elle souffre souvent de l'accumulation de détails et de répétitions ; enfin, l'indigent s'en remet totalement à la Commission bienveillante qui rend une justice impartiale. Il est indéniable que ces lettres, faites pour obtenir gain de cause, sont formatées et qu'une forme d'autocensure est sans doute pratiquée. Anne Brunon-Ernst souligne dans son ouvrage sur l'application des théories benthamiennes à l'assistance publique, Le Panoptique des pauvres, que l'écrit atténue la violence de l'oral. De plus, « [L]a parole du résident est entourée, réprimée et contrôlée au sein de l'institution »11 . Pour la contrôler, il faut passer de l'oral à l'écrit, codifié dans l'entreprise panoptique, « museler l'expression verbale. […] Le discours de l'assisté passe de l'articulé à l'imprimé et il perd en même temps la spontanéité, l'imprévisibilité et la violence du dit » 12 .
But this muzzling seems only partially successful. While the letters of complaint do not testify to any subversive intent, they sometimes step outside the framework laid down by the protocol of enunciation. Mary Potter, a resident of the Kidderminster Union Workhouse, often takes up her pen to assert her rights and declares herself ready to write to the minister if need be, should her letter of 7 January 1863 go unanswered! 13 One way of rising above the mere condition of petitioner consists, for some better-educated writers, in becoming spokespersons or public scribes, taking charge of a collective voice that would carry more weight. Petitions, drawn up anonymously to avoid reprisals, are addressed to the Commission, such as the letter of 5 April 1876 written by a group of elderly residents demanding a normal ration of tea 14. William Chance, a resident of the Newport Pagnell Poor Law Union Workhouse in 1845, understood that there is strength in numbers and that a collective request has a better chance of succeeding, even if in the end it did not (letter of 20 January 1845) 15.
Another way of obtaining redress is to position oneself as a specialist of the workhouse, that is, no longer quite as a victim but as an observer. This is the method of Thomas Hartley, a resident of the Kidderminster Union Workhouse for at least fifteen years, some of his letters dating from 1848 and others from 1863. He skilfully handles the administrative technolect to present his grievances as objective observations. He uses like an expert the categories employed to classify the poor, categories that serve to justify the ill-treatment suffered by elderly paupers, of whom he is one (letter of 17 April 1848). 16 He knows his rights and claims them, such as the right to spiritual "information" of decent quality (letter of 17 November 1863). 17
Through these few examples, the traditional image of the isolated, indolent or persecuted resident constructed by Victorian culture, notably through Charles Dickens's descriptions, is called into question. Paupers take part in their own administrative life by demanding that regular inspections be put in place. They are not merely the recipients of relief, since the small slices of life glimpsed in these letters indicate that there is a possibility of collective engagement, a desire to make things change and a wish to become the agent of one's own condition.
Inmate autobiographies: the meaning of suffering
The autobiographical narrative, which presents the author's childhood spent in the institution, is retrospective and largely fictionalised, filling the gaps left by residents' letters. The autobiography inscribes, after the fact, the time spent in the workhouse within the course of a whole life, giving meaning back to the months or years passed in the institution and reinforcing the identity-building of the resident, who is no longer just one pauper among dozens of others.
Several autobiographers, who as children were made to spend time on poor relief, devote a few chapters to their experience of the workhouse. The memories are scrupulously transcribed in great detail, a sign that those days had a traumatic effect on the child and that the time spent within its walls is a founding moment in the construction of the adult's identity. Even if these writings bear the stamp of subjectivity, they have the merit of existing, given the material and cultural difficulties the autobiographer had to overcome. But it is also through his subjectivity and his stylistic inventiveness that the author gives meaning to his time in the workhouse, elements left aside by historians.
While the childhood time spent in the workhouse is invariably and unequivocally described as a terrible period, the narration is always carried out with particular care, the aesthetic treatment of the institution fulfilling certain aims: to strengthen the autobiographer in his position as author; to show that the suffering has a meaning; and to reinforce the identity of the author, supposedly built up from childhood onwards thanks to the trauma of the workhouse.
The former inmate presents himself first of all as an author, having reached a level of respectability almost unimaginable after a terrible childhood. It is therefore natural for him to fictionalise the experience of the workhouse. The autobiographers often draw on the Dickensian novelistic model, the initiatory tale of young children who rise spiritually and materially despite a path strewn with obstacles 18. This influence is hardly surprising since, as Martyn Lyons points out in an article on working-class autobiographers, Dickens and the Bible are among their main sources and offer narrative models that are easy to copy 19.
It is precisely at the heart of chapter II of Oliver Twist, which describes the workhouse both as a prison and as a place where nonsense reigns, that the autobiographers find their inspiration. The autobiography of Charles Shaw (1832-1906) follows the pattern of the Dickensian novel and adopts the extended metaphors dear to the novelist. Shaw was born in Tunstall in 1832. He received a rudimentary but effective education at a Dame school 20 and then at Sunday school. In the 1840s he had to spend a few months at the Wolstanton & Burslem Union Workhouse, located at Chell in Staffordshire, when his father lost his job after taking part in a strike. He wrote down his memories in the form of anonymous articles in The Staffordshire Sentinel, between 1892 and 1893, then gathered them in his autobiography, When I Was a Child (1903), published under the pen name "An Old Potter". He was later to become a radical Methodist minister. In chapter XII he records his first impressions of the workhouse, in a perfectly Dickensian style:

None of us wanted to go, but we must go, and so we came to our big home for the time. The very vastness of it chilled us. Our reception was more chilling still. Everybody we saw and spoke to looked metallic, as if worked from within by a hidden machinery. Their voices were metallic, and sounded harsh and imperative. The younger ones huddled more closely to their parents, as if from fear of these stern officials. Doors were unlocked by keys belonging to bunches, and the sound of keys and locks and bars, and doors banging, froze the blood within us 21.

18 In little Oliver's case, he is described as a future criminal who will end up hanged ("That boy will be hung"). It is thus only logical that he should be placed in "solitary incarceration" and "flogged". See Charles Dickens, Oliver Twist [1837], London, Penguin, 2003, p. 15 and p. 18.
19 Martyn Lyons, "La culture littéraire des travailleurs : autobiographies ouvrières dans l'Europe du XIXe siècle", Annales. Histoire, Sciences Sociales 2001/4, p. 942.
The metaphor of metal makes it possible to dehumanise the institution completely, making it resemble a machine freed from all human control. The environment is saturated with metal, producing a synaesthetic experience frequently exploited in Dickensian prose. Moreover, in the manner of the English novelist, the workhouse is described as a prison and its inmates as criminals:
It was all so unusual and strange, and so unhomelike. We finally landed in a cellar, clean and bare, and as grim as I have since seen in prison cells. We were told this was the place where we should have to be washed and put on our workhouse attire. Nobody asked us if we were tired, or if we had had any breakfast. We might have committed some unnameable crime, or carried some dreaded infection22 .
Finally, Shaw does not forget the play on signifiers in the manner of Dickens, as when he comments on a scene of excessive punishment in chapter XIV:
Such was the Bastile sixty years ago. Such was one scene in Chell, and if you drop the 'C' in the word it only remains more truly descriptive of the place where such 'discipline' could take place 23 .
The second function of the stylisation of the workhouse experience is to underline that suffering gives coherence to existence. Ill-treatment is transcended so as to become, in retrospect, a useful stage in the initiatory narrative and in personal construction: the account of the humiliated self is necessary to the writing of the sublimated, heroic, edified self. The experience described becomes a necessary stage in the protagonist's teleological progression. It seems to contain the seeds of the autobiographer's future life. The narrative is built as a progress, an ascent from a poor and moreover unhappy childhood towards a form of respectability in which the autobiographer can take pride. Two examples testify to this rereading of the painful past in the light of the ideologies of the present. First that of Charles Shaw, whose commitment, at once radical and liberal, is visible in his analysis of the workhouse system. His memories lead him to quote and criticise the economic thought of Malthus on which the New Poor Law rests:
We were a part of Malthus's 'superfluous population,' and our existence only tended to increase the poverty from which we suffered. 'Benevolence,' he said, 'in a being so shortsighted as man, would lead to the grossest error, and soon transform the fair and cultivated soil of civilised society into a dreary scene of want and confusion 24 .
Shaw recalls the past and the suffering endured by passing them through the sieve of his economic reflections. He expresses his support for liberalism but also his rejection of its consequences:
The New Poor Law was wise enough, economically considered, but there could have been the economy without the brutality and harshness and humiliation pressed into the souls of old and young. Any system which makes young boys ashamed of their existence must be somewhat devilish in its evil ingenuity, and that was what Bastile 'discipline' did for me 25 .
His stance against protectionism allows him to denounce cruelty towards children:
[M]any well-to-do people lived pleasant lives in England before the Corn Laws were repealed, while little children, through long hours of labour and scant food, were passing through a very Tophet of agony, and their cries were only heard by such folks as Mrs Barrett Browning, Carlyle, and Lord Ashley 26 .
Children's suffering takes on meaning when it is denounced and fought by thinkers, writers and politicians 27, and when it is deconstructed by economic criticism. Shaw thus places his stay in the institution at the heart of his intellectual and critical approach. The second example concerns the explorer Henry Morton Stanley, a resident of the St Asaph Union Workhouse in 1847. He gives an account of this episode in his autobiography, published through the care of his wife in 1909, The Autobiography of Sir Henry Morton Stanley. Stanley rereads his stay in the light of his experience as a Christian and an explorer. His narrative resembles the spiritual autobiographies or conversion narratives that bear witness to the Christian's journey. For instance, he explains that in the workhouse he learned to love his enemies and to set aside his prejudices. He is proud to have received a Christian education there: "without this teaching I should have been little superior to the African savage. It has been the driving power for good, the arrestor of evil" 28. He makes several references to slavery, as when he writes that the poor are deprived of liberty and are nothing but slaves: "No Greek helot or dark slave ever underwent such discipline as the boys of St. Asaph" 29. These allusions to slavery have their source in the expeditions he led in Africa, a sign that he rewrites his story from the vantage point of his later life, showing that his Christian mission already lay latent in his past. After repeatedly experiencing physical suffering in the workhouse, notably during a rape reported by some historians, he goes over the wall, a symbolic separation between hell and paradise, and runs away with a friend: "We ran away with a boundless belief that beyond the walls lay the peopled South that was next to Heaven for happiness" 30.

24 Ibid., p. 97.
25 Ibid., p. 114.
26 Ibid., p. 98.
27 In "The Cry of the Children", Elizabeth Barrett Browning denounces the working conditions of children; Thomas Carlyle accuses in his writings the industrial revolution that leads to dehumanisation and suffering. As for Lord Ashley, he introduced the Ten Hours Act, which limited children's working hours.
The tactics of disguise
If the transition from the workhouse experience to the autobiographical narrative can be regarded as a rare victory over adversity, achieved by a tiny minority of residents, there were also many attempts at resistance by anonymous inmates. The art of concealment is one of these invisible attempts. For Michel de Certeau in L'Invention du quotidien, concealment is a success of the weaker party against the stronger, a tactic for temporarily escaping institutional power. Thus the inmates produce forms of resistance from within the system, by adjusting their practices to the framing structures 31. Erving Goffman showed that this adjustment, which he calls "secondary adjustment", is indispensable to survival in a total institution: "Secondary adjustments represent for the individual the means of departing from the role and the character that the institution quite naturally assigns to him" 32. The existence of a clandestine life and of micro-histories in the asylum can easily be transposed to the world of the workhouse: "A multitude of small, insignificant stories each betray, in their own way, an aspiration to freedom: whenever a society is formed, a clandestine life appears" 33.

This clandestine life is described in Down and Out in Paris and London (1933) by George Orwell and in Indoor Paupers, an autobiography written by an anonymous author who chose to write under the name "One of Them" in 1886. In these two texts, concealment and disguise are presented as strategies necessary for survival in the workhouse, but also as intrinsically bound up with the institution. The author of Indoor Paupers is thought to be John Rutherford, a resident from July to November 1885 at the Poplar Union Workhouse in east London 34. A relatively educated inmate, he is believed to have written his account during his short stay and to have published it with Chatto and Windus in January of the following year.

The concealment strategy of "One of Them" is expressed first of all in the ambiguity of the observer, at once inside and outside the system, inmate and social explorer. The author does not seek to extract himself from his milieu, as his pseudonym indicates, and yet he describes his fellow residents as did the observers who slipped incognito into London's pockets of poverty and fed the English public accounts that were half voyeuristic, half empathetic 35. The ambivalent position of "One of Them" corresponds to one of the forms of adaptation described by Goffman or de Certeau. Here the resident, who casts a synthetic and analytical eye on his surroundings, distances himself from the world of poverty and anonymity (the same ambiguous position Orwell occupies when he describes the "social type" of the beggar; see George Orwell, Down and Out). "One of Them" uses methods of anthropological and sociological classification on several occasions, for instance when he describes the profile of the women, or classifies unusual or deviant behaviour. "The workhouse abounds in curiosities of humanity" ("One of Them"), he announces at the opening of chapter X, devoted to the "Curious Paupers", before reviewing, in a highly organised manner, the different types of eccentric paupers. "One of Them" is no doubt a victim of the Victorian obsession with classification, but the categories he draws up are his own. They go further than those fixed by law and account for a great diversity of human beings who entered the institution for a wide variety of reasons. The categories are subdivided into free-thinking nihilists, bogus doctors and impostors, themselves divided into sub-categories: mathematicians, Latinists, heroes or officers 38.

31 Michel de Certeau, L'Invention du quotidien, Paris, Éditions Gallimard, 1990, p. 91.
32 Erving Goffman, op. cit., p. 245.
33 Ibid., p. 358.
34 This is, at least, Higginbotham's hypothesis.
35 James Greenwood was the first to publish the account of a night spent in the vagrants' ward of the Lambeth workhouse in 1866. He published his [...]
These groups are then accompanied by portraits of residents: Harry, the solitary inmate; the traumatised man; old Rannock, obsessed with money... Through the nuanced observations he presents as an expert, he rejects the moral and essentialist justifications of poverty. He calls on the staff to temper their view, in particular so as not to criminalise the pauper: "[The officers of a workhouse] regard all paupers as of quite an inferior order of beings" 39. By slipping into the skin of the social explorer and grounding his discourse in the sociological method, "One of Them" legitimises his point of view as a pauper, usually kept in a form of social invisibility.
"One of Themˮ présente aussi les nombreuses pratiques internes de dissimulation comme des formes de résistance à l'ordre établi. Non répertoriées dans les rapports des observateurs officiels, elles sont au centre de Indoor Paupers. Les indigents mettent en place des tactiques pour faire semblant, tricher, duper, cacher leur identité. Les normes s'établissent selon un modèle carnavalesque, en opposition avec celles du monde de la normativité qu'est l'institution. La workhouse n'est donc que littéralement « la maison du travail », puisque dans la pratique, c'est le faire semblant, le "skulkingˮ, qui domine. Savoir tromper la vigilance des surveillants est élevé au rang d'art :
The skulking -I soon found was common all over the place, most of the inmates vying with one another as to which of them should do the least possible amount of work, and with much success, I must confess. It would require a dozen officers, each with the eyes of an Argus, to watch these gentry and keep them up to even a decent seeming of industry 40 .
Duplicity is particularly visible in the cultural practices described by "One of Them". They take the opposite course to the few cultural practices legitimised by the institution, so as to deconstruct and mock them. The proselytising activity of the guardians is thus derided, for example by the Italian pauper who pretends to read the Bibles left out in order to win the institution's favour 41. Religious reading is quickly replaced by works of popular culture and cheap broadsheets recounting the colourful deeds of highwaymen, "the deeds of Claude Duval, Dick Turpin, and other desperadoes" 42. The inmates, and above all the vagrants, are characters who step on stage to perform their act and enjoy their brief hour of glory. "Stumpy Bluff" is a vagrant with an evocative name. He moves from workhouse to workhouse, to the great delight of his admirers, who eagerly await the latest stories of their "mentor" 43.

40 Ibid., p. 19.
41 Ibid., p. 35. George Orwell attacks the Salvation Army with the same virulence, saying that it "stinks of charity" (op. cit., p. 168).
42 "One of Them", op. cit., p. 36.
The vagrants deploy all their ingenuity to embody their roles perfectly, to set up scams and to blackmail their victims. The one known as the "moucher" (vagrant) is the king of disguise: "He will try, and try again, varying his trickery on each occasion until he hits on the dodge which carries him through. He will change his story and his appearance over and over - become a Proteus, in fact" 44. Those who stay in the casual wards, built in special quarters of the workhouses, represent only a sixth of the total number of vagrants. In exchange for supper and a single night in the ward, they must provide manual labour the following morning, which prevents them from looking for work, a real problem for itinerant craftsmen. The situation of the vagrants is thus even more absurd than that of the ordinary inmates. Some seek to escape the regulations, or at least to play with the codes imposed on them. Numerous techniques of concealment circulate among those who organise themselves almost as a guild and create their own official documents. These are the "hidden transcripts" that they produce "offstage, beyond the gaze of the powerful", in the words of James Scott 45. They first take the form of maps, such as the "cadger's map", included in The Slang Dictionary published in 1865 by John Camden Hotten. Vagrants exchange these maps in the night wards in order to share information about the stages of their wanderings and in particular about the relative quality of the workhouses. Places and directions are codified in the form of symbols (crosses, diamonds, circles, triangles, and so on) which Camden Hotten calls "hieroglyphs". Like Champollion, he seems to have deciphered these hieroglyphs, of which he gives a transcription: a rectangle, for instance, means that the place is "gammy", that is, dubious in vagrants' slang; a circle with a dot in the centre refers to a "flummuxed" or dangerous spot; a triangle warns the vagrant that the workhouse is "cooper'd", or overcrowded. The meaning of the cadger's map must escape the uninitiated. The symbolic codes are accompanied by linguistic codes that consolidate the existence of the group. Lionel Rose gave the slang shared by vagrants the name "trampoloquia" 46 and compiles a glossary of some 300 terms in the appendix to his volume Rogues and Vagabonds (Vagrant Underworld in Britain 1815-1985). One finds, for example, the term "lurk", meaning a scheme; "skilly", the equivalent of "gruel"; or again "prater", describing the man who pretends to evangelise alongside religious organisations. Orwell also records the liveliness of vagrants' slang in the 1930s in chapter XXXII of Down and Out. The language follows the winding path of the vagrant who excludes himself from the space of normativity: "The rule seems to be that words accepted as swear words have some magical character, which sets them apart and makes them useless for ordinary conversation" 47. Thus the words of the slang lose their usual signified and lead an autonomous existence. Here are a few terms chosen by Orwell to depict the categories of vagrants he met during his months of wandering: "A gagger - a beggar or street performer of any kind. A moocher - one who begs outright, without pretence of doing a trade.
A nobber - one who collects pennies for a beggar. A chanter - a street musician. A clodhopper - a street dancer..." 48.
In his inspection report of 1868, entitled Vagrancy Laws and Vagrants: a Lecture, John Lambert, a Poor Law inspector, had already noted the liveliness of the slang and its elusiveness: "I need scarcely say that the vagrant tongue is a living language, although it is much to be wished that it were a dead one, and that consequently it is undergoing continual change" 49. Following this comment he reproduces a vagrants' message, followed by the translation of this "elegant extract", in order to highlight the insidious character of a language that works without referents shared by all: I pulled down a fan and a roll of snow. I starred the glaze and snammed 16 ridge yacks. I took them to a swag chovey bloak, and got six finnips and a cooter for the yacks. A cross cove who had his regulars down; a fly grabbed him. I am afraid he will blow it.
He then offers a translation of the extract:
I stole from a shop door a waistcoat and a web of Irish linen. I broke the corner of a window and got 16 gold watches. I took them to a person who buys stolen property, who gave me six £5 notes and a sovereign for the watches 50.
This autonomous, living language acts like an organism that develops underground and comes to blacken the walls of public spaces, its slang terms later reappearing in the graffiti that cover the walls of the workhouses.
In the words of Guillaume Marche, graffiti are infrapolitical forms, containing the seeds of social mobilisation. Passwords, rallying cries or individual signatures meant to recall one's existence or resist anonymity, graffiti appear as a graphic forum that restores a form of agency to the vagrants 51. Anne Crowther likewise stresses the importance of graffiti for this category of the population: "Experienced vagrants had their own grapevine, and by graffiti and word of mouth they would inform their fellows of any changes or advantages in certain casual wards" 52. In his contribution to the report on vagrancy addressed to the president of the Poor Law Board in 1866, the inspector Andrew Doyle includes "notices", about a hundred graffiti left by residents on the walls of workhouses, to illustrate his remarks on the movements and social practices of vagrants. The graffiti indicate their presence in a given place while maintaining their anonymity behind a pseudonym that throws the institution off the scent.

50 Ibid., p. 30.
51 Guillaume Marche, "Expressivism and Resistance: Graffiti as an Infrapolitical Form of Protest against the War on Terror", Revue française d'études américaines, Infrapolitics and Mobilizations 131 (2012), p. 80.
52 Anne Crowther, op. cit., p. 250.
The pseudonym is composed of the vagrant's first name, to which are appended physical and moral qualities, trades or places. The notices underline the vagrant's capacity for concealment and his potential for self-mockery: "Saucy Harry and his moll will be at Chester to eat their Christmas dinner, when they hope Saucer and the fraternity will meet them at the union - 14th November 1865" 53. Vagrants often present themselves as sociopaths to express their rejection of the social order: "Beware of the Cheshire tramps, Spanish Jem, Kildare Jem, Dublin Dick, Navvy Jack, Dick Graven, the shrewd Cheshire tramps". On the walls, "Dick Turpin" 54 and "Robin Hood" 55 rub shoulders with "Hamlet, Prince of Denmark" 56 and "the Governor of Chester Castle" 57, for the noble and the ignoble live side by side in this carnivalesque world. In this society of inverted values, nothing prevents denouncing living conditions in the workhouse: "Boys look here, there therse [sic] Long Lank working at Warrington for two or three rags of clothes and taking the bread out of anothers [sic] mouth" 58, or again "This bloody hole is lousy" 59.
The various modes of concealment allow the paupers to reinvent their daily life and to express a form of resistance to confinement. Exploring social and cultural practices as perceived from the pauper's point of view shows that there is not a single social group under the umbrella term "pauper", but individuals with different practices and different lives, something the institution tends to erase.
Attempts at creation: dignity regained
Resistance does not always proceed by concealment but, on the contrary, by the visibility guaranteed by the artistic project. Literary creation takes on its full meaning in the milieu of the workhouse, a place where work occupies a central position but where, paradoxically, the work produced has no value. Thus, when inmates write poems, the favoured artistic form, and above all manage to have them published in the local press, or even gathered into small volumes, they challenge the very foundations of the new-regime workhouse. That said, poetic production has nothing subversive about it, unlike graffiti, for its function is more lyrical than political and it draws its authors out of underground anonymity to place them in the light.
Unlike poems about the workhouse, which abound, especially among the middle class, for whom indoor relief was perceived as physical and moral degradation, there are only a few examples of poems written from inside the workhouse, their rarity being correlated with the paupers' insufficient education and unfavourable material conditions. While the inmates' poems always stress the terrible living conditions, they also present a wider range of perceptions, between denunciation and acknowledgement of the institution's protective framework, than those written by middle-class poets.
The inmates' poems share many traits with those written by workers. In material terms, workhouse poets can publish their poems only thanks to the help of patrons, donors or publishers committed to promoting a popular voice, themselves often from the working class or the lower middle class 60. These poems are testimonies to social and cultural practices within the workhouse, because they provide precious information on the existence of a social life barely perceptible at the heart of parliamentary reports. Moreover, the publication of poems, in the local press or in the form of a collection, strengthens the social prestige of their authors at the local level. The latter generally present themselves as spokespersons for their community and play an important role in promoting the events that punctuate the life of the institution. The poems express a sense of pride in belonging to a community and restore a form of dignity to the members of the institution. But it is also precisely this glorified feeling of belonging to the community, or indeed to the class, that delegitimises working-class writing, since the hallmark of poetic genius is to be one of a kind.
The pauper must first overcome material difficulties. "One of Them" is not a poet, but he provides keys to understanding the material framework within which writing can emerge. He mentions the difficulty of obtaining paper and pencils, but insists above all on the question of insufficient time, which preoccupies him, since he reckons he can devote only nineteen hours a week to his project. He therefore resolves to write at night in his bed, resorting to various subterfuges to finish his volume, "produced 'in the dark', physically speaking" 61. The residents are aware of their weaknesses, but they know how to turn them into strengths, going so far as to claim that their stylistic clumsiness is what makes their poetic creation original.
The prefaces that generally precede works of demotic poetry give some indication of the reasons that drive people to write despite adversity. The poets make clear that their rhymes were born in and of the workhouse, thereby asking for the public's indulgence. For example, in his preface to Lays from the Poorhouse (1860), the Scot John Young justifies in the following terms the emergence of his poems from the Barnhill poorhouse:
Whatever slight abilities the Author of the following effusions may possess, have been cultivated within the walls of a Poorhouse -not the most favourable soil for the sweet flowers of poesy. Nevertheless, it was within the walls of a Poorhouse that the Author first strung his uncouth lyre, and running his ungainly fingers over its chords, drew forth their first notes […]62 .
Grace Dickinson, who entered the Halifax workhouse in 1861, wrote a few dozen poems, gathered and then published under the title Songs in the Night by the chaplain Thomas Snow after their author's death. In the preface and the biographical account accompanying the collection, he insists on the conditions in which they were written since, like Young's, they emerged from the shadow of the workhouse, cradle of artistic creation:
Many of them are literally 'songs in the night,' having been composed in the silent hours of darkness, or when that silence was disturbed by the moans of the suffering, the restlessness of the aged and superannuated, or the hard and audible breathing of the dying; and all of them have been produced in the night of affliction 63.

Finally, the financial difficulties attached to publication had to be settled. Dickinson's volume was financially supported by the Board of Guardians, and the proceeds of the sales were to go to the poet's two children, left orphaned in the workhouse 64.

Artistic practice, hard labour in such a setting, allows a few exceptional inmates to lift themselves momentarily out of their milieu by attaining the condition of artist, whose production this time takes on a positive and noble dimension. The question of production and productivity lies at the heart of the poems and constitutes their specificity. Unlike the other forms of writing, which denounce deprivation and idleness, poetry insists on the capacity to produce or to create, and therefore to lead a useful and fruitful existence. Two poems by Ann Candler, who spent some twenty years between 1783 and 1803 in the Ipswich workhouse and was able to leave it thanks to the proceeds raised by the publication of her poetry, present Candler's situation before and after her departure. In "Reflections on my own situation: Written in T-tt-ngst-ne House of Industry, Feb. 1802", she describes the emptiness of her life within the institution's walls:

Within these dreary walls confin'd,
A lone recluse, I live,
And, with the dregs of human kind,
A niggard alms receive.
Uncultivated, void of sense,
Unsocial, insincere,
Their rude behaviour gives offence,
Their language wounds the ear 65.

The privative prefixes are the grammatical signs of all her deprivations: of money, of education, of friendship, of freedom. The second poem, "On perusing the history of Jacob: After I had left T-tt-ngst-ne House of Industry", presents a diametrically opposed situation. Like Jacob, she has come through her trials, and a new life, peaceful and above all useful, now opens before her:

So may my eve of life be more serene,
More tranquil than the former part has been;
This cheering ray no threatening clouds o'er cast,
That may too much resemble what is past.
O! let me spend the short remains of life
In peace and quiet, far from noise and strife,
My conduct such as best becomes my age
And something useful still my time engage 66.

Thanks to their artistic potential, the poets are called upon by the institution to write occasional pieces, read out on special occasions. Their poetic activity thus allows them to play a social role and to produce a useful good. The Scot John Young, for example, celebrated the hundredth anniversary of Robert Burns's birth and marked the new year with two poems, "A Voice from the poor-house, on the centenary birthday of Burns, January 25, 1859" and "New Year's Address: delivered to the inmates of Barhill Poorhouse at their annual soirée, January 1, 1859". Since the institution provides a mission and a purpose, it is natural to write laudatory poems paying tribute to its benefactors. In "To the Chairman of the Board of Guardians", Dickinson thanks her patrons for their commitment to the neediest:

In widow'd destitution,
I to this place did flee;
But for such institution,
What would become of me?
What could I do in sickness?
I could not earn my bread,
Nor in my present weakness,
Unto the factory tread.

What would have become of the children and of women on their own without the help of the paternalist institution?

Oft with myself communing,
I think what ills would come,
Unless within this Union,
My children had a home.
Here females are protected
From insult and from shame;
The rules, so well directed,
Bring credit to your name 67.

Dickinson is entrusted with supporting bereaved inmates and paying tribute to the voiceless who die. It was with a poem on this subject that she had her first success within the workhouse. Published in the weekly religious magazine of the London Religious Tract Society, The Sunday at Home: A Family Magazine for Sabbath Reading 68, the poem brought her local renown and launched her modest career as spokeswoman for the paupers under the initials "D.G.". Its author's membership of the category of the workhouse poor is highlighted in the title, "Prayer for a child, Composed by a Widowed Mother, the Inmate of a Union Workhouse", reinforcing the image of respectable poverty upheld by the chaplain: "The saying that 'paupers are unthankful' has passed into a proverb, but in this case we can point to a most signal exception" 69. The poor now bear a name and even have a physical image, like Dickinson's two children, whose portrait is reproduced at the beginning of the collection. Dickinson gives the poor visibility and dignity, without forgetting to flatter the institution. She thus embodies the benevolent spirit of the workhouse that the chaplain and the guardians wish to put forward, making her a figure of the Halifax institution even in death: "But common as death is to the inmates of that room, the death of GRACE DICKINSON was felt by everyone of them to be an event of no common character" 70.

Thematic inventiveness is very limited, all the more so as Dickinson has few sources of inspiration around her. All her poems are devotional, inspired by the hymn books, whether prayers for the dead, praises, thanksgivings or paraphrases of Bible verses. Yet each little poem displays systematic formal variation, from the classic common measure, alternating iambic trimeters and tetrameters, to more innovative forms such as the sonnet of fourteen hexameters, "Jesus spake as never man did speak before". What Thomas Snow calls "something new in the manner of its production" 71 is perhaps this formal achievement in an environment wholly resistant to innovation.

Conclusion

The poems of Grace Dickinson, excluded as she was as a woman, a single mother, an invalid and a pauper, are of course an exception, but they show that the carceral dimension of the workhouse could sometimes also give rise to a form of artistic production. For Dickinson, the Halifax workhouse was probably the refuge, or the substitute for a home, of which Jacques Carré speaks:

For some paupers, in particular women and children, the institution could even represent a welcome refuge compared with the slum, with destitution, with despair. Unmarried mothers could give birth there, whereas they were banned from the charitable hospitals. Orphans and abandoned children were fed there, if not educated. If they survived, they were placed in apprenticeship. The small village workhouse, in the best of cases, could form a kind of ersatz family 72.

Of course, one may always wonder whether the arrangements that produce literary forms play an emancipating role or instead help to lock speakers into imposed roles and categories. That is often the fate of demotic writings, which stand between resistance and deference. It is perhaps in this game with authority that the elements of another history of the workhouses are to be found. Looking at history from below, and above all history written from below, may allow us to qualify the terms of the official texts and to give a little of the floor back to the poor.

63 Grace Dickinson, "Preface by Thomas Snow", Songs in the Night, London, Macintosh, 1863, pp. xi-xii.
64 Ibid., p. XVII.
65 Ann Candler, Poetical Attempts, by Ann Candler, a Suffolk Cottager, Ipswich, John Row, 1803, pp. 16-17.
66 Ibid., p. 68.
67 Ibid., p. 76.
68 The Sunday at Home, no. 442, vol. 9, 18 October 1862, p. 672.
69 Ibid., p. XIII.
70 Ibid., p. XIV.
71 Ibid., p. XI.
"A total institution may be defined as a place of residence and work where a large number of like-situated individuals, cut off from the wider society for an appreciable period of time, together lead an enclosed, formally administered round of life." See Erving Goffman, Asiles : Études sur la condition sociale des malades mentaux, Paris, Les éditions de Minuit, trans. Liliane and Claude Lainé, [1961] 1968, p. 41.
See James Scott, La Domination et les arts de la résistance, trans. Olivier Ruchet, Paris, Éditions Amsterdam, [1992] 2008.
Jacques Carré, op.cit., p. 267.
National Archives, Londres, Local Government Board and predecessors: Correspondence with Poor Law Unions and Other Local Authorities, MH 12/14020/287.
National Archives, Londres MH 12/14021/316.
Anne Brunon-Ernst, Le Panoptique des pauvres. Jeremy Bentham et la réforme de l'assistance en Angleterre, Paris, Presses de la Sorbonne Nouvelle, 2007, p.
12 Ibid.,
13 National Archives, Londres, MH 12/14023/159.
14 National Archives, Londres, MH 12/8487/44.
15 National Archives, Londres, MH 12/489/250.
National Archives, Londres, MH 12/14019/195.
National Archives, Londres, MH 12/14023/215.
Dame schools were private elementary schools run by women in their own homes. They functioned as childminding establishments, and the children, often the poorest, learned at best a few rudiments of arithmetic and grammar.
Charles Shaw, When I Was a Child [1903], Seaford, Island Press Seaford, 1977, pp. 96-97.
Ibid., p. 67.
Dorothy Stanley (dir.), The Autobiography of Sir Henry Morton Stanley, Boston et New York, The Riverside Press Cambridge, 1909, p. 27.
Ibid., p.12.
Ibid., p. 122.
Ibid., p. 43.
James Scott, op.cit., p. 49.
This is the case of the Scots George Gilfillan and D. H. Edwards, or of Ben Brierley, a Lancashire weaver turned publisher, who sought to promote a popular poetic voice.
"One of Themˮ, op. cit., p. 22.
John Young, « Preface », Lays from the Poorhouse, Glasgow, George Gallie, 1881, p. III.
Jacques Carré, op. cit., pp. 473-474. |
04112014 | en | [ "info" ] | 2024/03/04 16:41:24 | 2022 | https://hal.science/hal-04112014/file/Les%20M%C3%A9thodes%20Formelles%20pour%20les%20Micro-R%C3%A9seaux.pdf | Moez Krichen
email: [email protected]
Formal Methods for Microgrids
Introduction
The use of direct current (DC) microgrid systems has grown significantly in recent years owing to the need for efficient and sustainable energy systems (Shahgholian, "A brief review on microgrids: Operation, applications, modeling, and control"). The development of DC microgrids has revolutionised energy distribution and supports the growth of renewable energy sources such as solar photovoltaics and wind power. DC microgrid systems offer certain advantages over traditional alternating current (AC) energy systems, such as increased efficiency, reduced transmission losses and the ability to operate in remote areas (Espina, "Distributed control strategies for microgrids: An overview"). Designing DC microgrid systems is a difficult task because of the system's intrinsic complexity and variability. Traditional design techniques become ineffective, hence the need to incorporate formal techniques (Muhammad, "A review on microgrids' challenges & perspectives").
Despite their potential advantages, microgrids can also be subject to accidents and failures (60; 46). For example, in 2011 a microgrid at the University of California, San Diego, experienced a power outage caused by a software error, significantly disrupting campus operations. Similarly, in 2013 a microgrid in New Jersey failed during storm Sandy, leading to a prolonged power outage and significant economic losses.
Designing a DC microgrid system is a complicated process that must take into account weather conditions, geographical location, an appropriate communication infrastructure and the type of customers. Historically, simulation techniques have been used to test and verify microgrid operation. However, given the number of variables that can affect the reliability and efficiency of the microgrid, as well as its dynamic behaviour, researchers have turned to formal approaches to ensure proper microgrid operation (Wu, "Formal verification method considering electric vehicles and data centers participating in distribution network planning").
Formal methods offer a promising approach to addressing these challenges and minimising the risk of accidents and failures (55; 33). Formal methods are mathematical techniques that enable the verification and validation of complex systems (1; 36). They are particularly useful for designing and managing safety-critical systems such as microgrids. Formal methods allow engineers to model and mathematically analyse the behaviour of microgrids, to detect errors and inconsistencies, and to verify the correctness of the system's behaviour (19; 27).
The aim of this chapter is to explore the use of formal methods to optimise the performance, security and reliability of microgrids. More specifically, the chapter aims to:
- Provide an overview of microgrids and their importance in the context of sustainable energy generation.
- Discuss the advantages of using formal methods for microgrid optimisation and the challenges involved in managing complex energy systems.
- Describe how cloud, fog and IoT technologies, together with AI and machine learning, can be integrated with formal methods to improve microgrid performance.
- Examine the role of formal methods in addressing security concerns in blockchain-based microgrids.
- Identify the challenges and open problems in this field and recommend future research directions.
The remainder of the chapter is structured as follows. Section 2 provides a more detailed overview of microgrids. Section 3 gives an overview of formal methods. Section 4 discusses how cloud technologies can be integrated with formal methods to improve microgrid performance. Section 5 discusses how fog technologies can be combined with formal methods to improve microgrid performance. Section 6 describes the advantages of using IoT technologies and formal methods for microgrid optimisation. Section 7 examines how AI and machine learning can be integrated with formal methods to improve microgrid security and reliability. Section 8 discusses the advantages of using formal methods for microgrid security. Section 9 examines the role of formal methods in addressing security concerns in blockchain-based microgrids. Section 10 identifies the challenges and open problems in this field, such as the scalability and efficiency of formal methods for large-scale microgrid systems. Section 11 recommends future research directions on integrating formal methods with microgrid technologies. Finally, Section 12 concludes the chapter.
Microgrids
A microgrid is a small-scale power system that can operate independently or in parallel with the main grid. Microgrids are generally composed of a range of distributed energy resources, such as solar panels, wind turbines and energy storage systems, and are designed to provide reliable and resilient power to local communities. The concept of the microgrid is not new and has existed for decades, but recent technological advances and the growing demand for sustainable energy solutions have sparked renewed interest in microgrids.
One of the main advantages of microgrids is their ability to operate independently of the main grid. This independence means that microgrids can continue to supply power to local communities even in the event of a blackout or other disturbances on the main grid. This reliability and resilience make microgrids attractive for communities located in remote or isolated areas, or that are vulnerable to natural disasters.
However, microgrids also present certain risks and challenges. One of the main risks associated with microgrids is their vulnerability to cyber-attacks and other security threats. Since microgrids are often connected to the Internet or to other communication networks, they can potentially be targeted by hackers or other malicious actors. In addition, microgrids are subject to the same physical risks as any other power system, such as natural disasters or equipment failures.
To protect microgrids from these risks, a range of classical techniques have been developed. These include physical security measures, such as fencing and access controls, as well as cybersecurity measures, such as firewalls and intrusion detection systems. Moreover, microgrids can be designed with redundancy and backup systems to ensure that power can be restored quickly in the event of an outage or failure.
However, these classical techniques have their limits. Physical security measures can be costly and may not be effective against certain types of threats, such as insider attacks. Cybersecurity measures can also be complex and require specialised knowledge and expertise to be implemented effectively. Furthermore, these techniques may not be sufficient to protect against emerging threats, such as those posed by artificial intelligence or quantum computing.
To address these limits, new approaches are needed to protect microgrids. One promising approach is the use of formal methods, a set of mathematical and logical techniques used to design, specify and verify software and hardware systems (29; 44). Formal methods offer a rigorous and systematic approach to system design and analysis, which can help ensure the correctness and reliability of the system (21; 41).
Formal Methods
Formal methods are a set of mathematical and logical techniques used to design, specify and verify software and hardware systems (42; 30). Formally, a method is considered formal if it is based on a mathematical theory, has a well-defined syntax and semantics, and lends itself to rigorous analysis and verification (40; 37).
Using formal methods involves creating a mathematical model of the system being designed or analysed. This model is generally expressed in a formal language such as temporal logic or automata theory (43; 22). The model can then be analysed with formal methods such as theorem proving or model checking, which rely on mathematical algorithms to verify the correctness of the system (32; 23).
There are several types of formal methods (24) commonly used in the verification and validation of complex systems, including:
- Theorem proving: this involves using mathematical proofs to demonstrate the correctness of a system.
- Automata-based methods: these use automata theory to analyse the behaviour of a system. Automata-based methods are often used to verify concurrent or distributed systems, as well as systems with complex control logic.
- Process-algebra-based methods: these use a formal language to model the behaviour of a system as a set of interacting processes. Process algebras are often used to verify communication protocols or other distributed systems.
- Temporal-logic-based methods: these use logical formulas to describe the temporal behaviour of a system. Temporal logic is often used in model checking to verify systems with complex temporal properties.
- Proof assistants: this involves using interactive proof assistants to mechanically verify the correctness of a system. Proof assistants are often used to verify the correctness of software or hardware designs.
Each type of formal method has its own strengths and weaknesses, and the appropriate methods to use depend on the specific characteristics of the system to be verified. By using a combination of different formal methods, engineers can thoroughly analyse and verify the behaviour of a system, thereby minimising the risk of accidents and failures. A small illustration of what a temporal-logic property might look like for a microgrid is sketched below.
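As a purely illustrative sketch of the temporal-logic style mentioned above (the atomic propositions SoC, shed, gridFault and islanded, the threshold SoC_min and the five-second bound are assumptions introduced here, not properties taken from any particular microgrid), two requirements for a DC microgrid controller could be written as:

\mathbf{G}\,\big( \mathit{SoC} < \mathit{SoC}_{\min} \;\rightarrow\; \mathbf{X}\,\mathit{shed} \big)
\mathbf{G}\,\big( \mathit{gridFault} \;\rightarrow\; \mathbf{F}_{\le 5\,\mathrm{s}}\,\mathit{islanded} \big)

Here G means "at every instant", X "at the next control step" and F bounded by 5 s "within five seconds". The first formula is a safety requirement: whenever the battery state of charge falls below its reserve, non-critical load is shed at the next step. The second is a bounded-liveness requirement: after any fault on the main grid, the microgrid islands itself within five seconds. A model checker explores every reachable state of the controller model and either confirms such properties or returns a counterexample trace.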
To use formal methods to analyse a system, we start by creating a mathematical model of the system. This model is generally expressed in a formal language with well-defined syntax and semantics. The model can then be analysed using formal methods such as theorem proving, model checking or abstract interpretation to verify the correctness of the system.
The process of analysing a system with formal methods involves the following steps:
1. Formalisation: the first step is to formalise the system by creating a mathematical model that captures its behaviour. The model is generally expressed in a formal language with well-defined syntax and semantics.

2. Specification: once the formal model has been created, a specification is developed that describes the desired behaviour of the system. The specification is generally expressed in a formal language with well-defined syntax and semantics.

3. Verification: the next step is to check that the system satisfies the specification. This involves using formal methods such as theorem proving, model checking or abstract interpretation to analyse the formal model and verify that it meets the specification.
4. Correction: if the system does not meet the specification, the next step is to identify the cause of the failure and correct the formal model to resolve the problem. This may involve revising the model or the specification, or modifying the design of the system.
5. Implementation: Once the formal model and the specification have been verified, the next step is to implement the system in the chosen programming language. The implementation must be consistent with the formal model and the specification to guarantee that the implemented system behaves correctly.
6. Validation: After implementation, the system must be validated to ensure that it works correctly and meets the specification. This may involve validation tests, simulations, or other techniques to check the behaviour of the system in real operation.
7. Maintenance: Finally, the system must be maintained to ensure its long-term reliability and security. This may involve regular maintenance procedures, fixing bugs or errors, and updating the specification or the formal model to reflect changes in the system.
By using formal methods, engineers can design and develop software and hardware systems that are more reliable, safer, and more robust. However, these methods can be costly and require advanced technical expertise, so their use is often reserved for critical systems where safety and reliability are major concerns.
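The short sketch below illustrates how the specification, verification, and validation steps fit together on a deliberately simple example: a dispatch rule is specified as a predicate, checked exhaustively over a bounded input grid, and then re-used as a runtime assertion. The dispatch rule, power limits, and grid resolution are illustrative assumptions, not a prescribed method.

```python
import itertools

# Hypothetical dispatch rule for a small microgrid: split net demand between
# battery and grid import. The names and limits below are illustrative only.
BATTERY_MAX_KW = 5.0
GRID_MAX_KW = 10.0

def dispatch(demand_kw, solar_kw):
    """Decide how much power to draw from the battery and from the grid."""
    net = max(0.0, demand_kw - solar_kw)
    from_battery = min(net, BATTERY_MAX_KW)
    from_grid = min(net - from_battery, GRID_MAX_KW)
    return from_battery, from_grid

def specification(demand_kw, solar_kw, from_battery, from_grid):
    """Formal specification (step 2): limits respected and demand covered when feasible."""
    net = max(0.0, demand_kw - solar_kw)
    within_limits = 0.0 <= from_battery <= BATTERY_MAX_KW and 0.0 <= from_grid <= GRID_MAX_KW
    covered = abs(from_battery + from_grid - min(net, BATTERY_MAX_KW + GRID_MAX_KW)) < 1e-9
    return within_limits and covered

def verify(step_kw=0.5, max_kw=20.0):
    """Bounded exhaustive verification (step 3): check every input on a finite grid."""
    grid = [i * step_kw for i in range(int(max_kw / step_kw) + 1)]
    for demand, solar in itertools.product(grid, repeat=2):
        out = dispatch(demand, solar)
        if not specification(demand, solar, *out):
            return (demand, solar)      # counterexample
    return None

if __name__ == "__main__":
    cex = verify()
    print("specification holds on the grid" if cex is None else f"counterexample: {cex}")
    # Validation (step 6): the same specification can be asserted at run time.
    b, g = dispatch(7.3, 2.1)
    assert specification(7.3, 2.1, b, g)
```

Keeping one executable specification shared between offline checking and runtime validation is one way to ensure that the implemented system stays consistent with its formal model as it evolves.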
Optimising microgrids with cloud technologies: ensuring security and reliability through formal methods
The management and optimisation of microgrids increasingly rely on cloud technologies. Microgrids produce large amounts of data, notably energy production and consumption records, which can be stored, processed, and analysed using cloud computing and storage resources (Dabbaghjamanesh). Microgrid operators can thereby improve the efficiency of their system and save on energy costs (Wang).
However, the use of cloud technologies in microgrids also raises questions about security and reliability. For data to be stored and processed safely in the cloud, the underlying infrastructure must be impenetrable to attackers. This is where formal approaches are needed: when it comes to the security of software systems, and of cloud infrastructure in particular, little matches the mathematical rigour of formal methods. They can be used to guarantee the integrity and security of the microgrid's cloud infrastructure.
Formal methods can be applied to verify that the cloud infrastructure behaves correctly in every situation and to identify possible gaps or security flaws (Chen). In addition, formal approaches can be used to produce detailed requirements for cloud systems, which can then be used to guide the development process and ensure that the final result meets all expectations. By using formal methodologies, designers can ensure the reliability of the microgrid's cloud infrastructure, which helps avoid crises caused by power outages or machine failures (Muniasamy). Moreover, the cloud infrastructure used by the microgrid system can be secured with formal approaches: it can be checked for flaws that attackers could exploit, which helps prevent hacking and other forms of cybercrime.
In addition, formal methods provide a way to analyse and improve the efficiency of the cloud infrastructure, which helps optimise the performance of the microgrid system. Developers of the microgrid system can use formal methodologies to resolve performance problems and find opportunities for improvement in the underlying cloud infrastructure. Microgrid operators can thereby achieve greater energy savings and lower operating costs.
When formal approaches are combined with cloud computing, the resulting microgrid system is safer, more reliable, and more cost-effective. To minimise the risk of accidents and cyber-attacks and to maximise the efficiency of the microgrid system, formal approaches can be used to guarantee the security and reliability of the cloud infrastructure.
Improving microgrid performance with fog technologies and formal methods
Fog computing is a form of distributed computing that brings cloud computing to the edge of the network, where the devices and resources that generate and consume data are located. With fog technologies, microgrids can collect and analyse data in real time, which reduces latency and improves performance (Bernardes). For example, to better manage energy consumption and distribution in real time, fog devices can gather and analyse data from Internet of Things (IoT) devices and sensors deployed in the microgrid (Keskin).
However, the use of fog technology in microgrids raises questions about its security and reliability. The fog infrastructure that stores and processes data must be secure and free of vulnerabilities. This is where formal approaches are needed. The validity of software systems, including the fog infrastructure, can be verified with formal methods, which can be used to ensure that the microgrid's fog infrastructure is secure and reliable.
Formal approaches make it possible to prove correct operation in every scenario and to identify defects and vulnerabilities in the fog infrastructure. They can also be used to produce detailed requirements for fog systems, which can then guide and check the development process. Developers of the microgrid system can thus rely on the assurance that their fog infrastructure is free of errors thanks to formal verification methods (Benzadri). This can help avoid crises caused by power outages or faulty equipment (Marir).
In addition, the microgrid's fog infrastructure can be secured using formal approaches: the absence of defects that could be exploited by cyber-criminals can be verified formally, which helps prevent hacking and other forms of cybercrime.
Furthermore, formal approaches provide a way to analyse and improve the efficiency of the fog infrastructure, which helps optimise the performance of the microgrid system. Developers of the microgrid's fog infrastructure can use formal methodologies to find potential performance problems and areas for improvement. Microgrid operators can thereby benefit from greater energy savings and lower operating costs.
Used in tandem with fog technologies, formal approaches can make the microgrid more secure, more reliable, and more efficient. To minimise the risk of accidents and cyber-attacks and to maximise the efficiency of the microgrid, formal approaches can be used to guarantee the security and reliability of the fog infrastructure.
Ensuring the security and reliability of microgrids with IoT and formal methods
Microgrids increasingly rely on Internet of Things (IoT) technologies (52; 13; 18; 30). IoT devices can monitor and manage microgrid operations such as energy production, distribution, and storage. Sensors can monitor renewable generation from solar panels and wind turbines, allowing the microgrid to make adjustments that maximise efficiency (Lei).
The use of IoT technology in microgrids also raises questions about security and reliability. Safety problems or power outages can occur if the IoT devices used in the microgrid system malfunction or are exposed to cyber threats. In such cases, formal approaches are needed (Oliveira).
Formal methods applied to the IoT make it possible to prove correct operation in every situation and to identify errors and vulnerabilities. Rigorous specifications for IoT systems can be produced with formal methodologies and then used to guide the development process and guarantee that the final product meets every criterion. By using formal approaches, designers can ensure the integrity and accuracy of the microgrid's IoT devices, which helps avoid crises caused by power outages or machine failures (Hofer).
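As an illustration of how such a specification can be enforced in operation, the sketch below implements a small runtime monitor for a bounded-response requirement over an IoT telemetry stream. The property, field names, and response bound are invented for the example; a real deployment would derive them from the system's formal specification.

```python
# Illustrative runtime monitor for an IoT telemetry stream (invented field names).
# Informal requirement: "whenever solar production exceeds demand, the battery
# must be charging within at most 2 samples" — a simple bounded-response property.

RESPONSE_BOUND = 2  # maximum number of samples allowed before charging starts

def monitor(samples):
    """Check the bounded-response property over a finite trace of telemetry samples.

    Each sample is a dict with keys 'solar_kw', 'demand_kw' and 'battery_mode'.
    Returns the index of the first violation, or None if the trace satisfies it.
    """
    pending_since = None  # index where surplus started and charging is still awaited
    for i, s in enumerate(samples):
        surplus = s["solar_kw"] > s["demand_kw"]
        charging = s["battery_mode"] == "charging"
        if surplus and not charging and pending_since is None:
            pending_since = i
        if charging or not surplus:
            pending_since = None
        if pending_since is not None and i - pending_since >= RESPONSE_BOUND:
            return i  # obligation not met within the bound
    return None

if __name__ == "__main__":
    trace = [
        {"solar_kw": 4.0, "demand_kw": 5.0, "battery_mode": "idle"},
        {"solar_kw": 6.0, "demand_kw": 5.0, "battery_mode": "idle"},      # surplus starts
        {"solar_kw": 6.5, "demand_kw": 5.0, "battery_mode": "charging"},  # response in time
        {"solar_kw": 6.5, "demand_kw": 7.0, "battery_mode": "idle"},
    ]
    violation = monitor(trace)
    print("trace conforms" if violation is None else f"violation at sample {violation}")
```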
In addition, formal approaches can help optimise the performance of the microgrid system by offering a way to examine and improve the efficiency of IoT devices. Using formal methods, designers can inspect the microgrid's IoT devices to spot performance problems and optimisation opportunities. Microgrid operators can thereby achieve greater energy savings and lower operating costs (Webster).
The security of the IoT devices used in the microgrid infrastructure can also be guaranteed with formal approaches. Formal procedures can be used to ensure that the IoT devices are protected against any flaws that attackers might exploit, which helps prevent hacking and other forms of cybercrime.
When formal approaches are used together with IoT technologies, the resulting microgrid can be more secure, reliable, and efficient. By ensuring the security and reliability of IoT devices, formal approaches help optimise the performance of the microgrid system while avoiding potential safety problems and cyber threats.
AI and ML for microgrids: ensuring safety and reliability with formal methods
AI (artificial intelligence) and ML (machine learning) are increasingly used in microgrids (12; 2). With AI and ML, microgrids can optimise their energy consumption and reduce their carbon footprint. For example, AI can be used to monitor and predict energy demand and to control and manage energy storage systems, so that the microgrid can adjust its production according to real-time supply and demand data, reducing energy waste and increasing efficiency (Mohammadi). ML can be used for energy forecasting, anomaly detection, and predictive maintenance, allowing the microgrid to predict future energy demand and detect anomalies or system failures, thereby reducing the risk of power outages or equipment failures.
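A minimal sketch of the kind of anomaly detection mentioned above is given below, using a simple rolling z-score rule. The window length, threshold, and demand values are illustrative assumptions; production systems would typically use learned models rather than this fixed rule.

```python
from statistics import mean, pstdev

# Minimal sketch of demand-anomaly detection: flag a reading whose z-score with
# respect to the recent window exceeds a threshold (window and threshold invented).
WINDOW = 8
Z_THRESHOLD = 3.0

def detect_anomalies(readings_kw):
    """Return indices of readings that deviate strongly from the recent history."""
    anomalies = []
    for i in range(WINDOW, len(readings_kw)):
        window = readings_kw[i - WINDOW:i]
        mu, sigma = mean(window), pstdev(window)
        if sigma > 0 and abs(readings_kw[i] - mu) / sigma > Z_THRESHOLD:
            anomalies.append(i)
    return anomalies

if __name__ == "__main__":
    demand = [5.1, 5.0, 5.2, 4.9, 5.1, 5.0, 5.2, 5.1, 5.0, 12.4, 5.1, 5.0]
    print(detect_anomalies(demand))  # the spike at index 9 should be flagged
```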
However, the use of AI and ML in microgrids also raises concerns about their safety and reliability. If the algorithms used in the microgrid system are not correct and free of errors, they could lead to safety hazards or power outages.
Formal methods can be used to prove that AI and ML algorithms will behave correctly in all situations and to detect any potential errors or vulnerabilities (Krichen). They can also be used to generate rigorous specifications for AI and ML systems, which can guide the development process and guarantee that the final product satisfies all requirements (3). Using formal methods, developers can check that the AI and ML algorithms used in the microgrid system are correct and error-free (Ghalib), which helps prevent potential safety hazards such as power outages or equipment failures.
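The sketch below gives a flavour of what checking a property of a predictor can look like on a very small scale. It is not a neural-network verifier: a hand-written linear model with invented coefficients stands in for a learned predictor, and a monotonicity requirement is checked by brute force over a bounded, discretised input domain.

```python
import itertools

# Tiny hand-written "forecast" model with invented coefficients, standing in for a
# learned predictor. Property to verify: predicted demand never decreases when
# temperature rises, all else equal (a monotonicity requirement), checked by
# brute force over a discretised, bounded input domain.

def predict_demand(temperature_c, hour, occupancy):
    """Toy demand predictor (kW); coefficients are purely illustrative."""
    return 2.0 + 0.08 * temperature_c + 0.05 * hour + 1.5 * occupancy

def check_monotonic_in_temperature():
    temps = list(range(-10, 41))                 # -10..40 degrees C in 1-degree steps
    hours = range(24)
    occupancies = [0, 1]
    for hour, occ in itertools.product(hours, occupancies):
        for t_low, t_high in zip(temps, temps[1:]):
            if predict_demand(t_high, hour, occ) < predict_demand(t_low, hour, occ):
                return (t_low, t_high, hour, occ)   # counterexample
    return None

if __name__ == "__main__":
    cex = check_monotonic_in_temperature()
    print("monotonicity holds on the domain" if cex is None else f"violated at {cex}")
```

Dedicated verification tools replace this brute-force enumeration with symbolic or interval-based reasoning so that properties of genuinely learned models can be established over continuous input domains.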
In addition, formal methods can help optimise the performance of the microgrid system by providing a way to analyse and improve the efficiency of AI and ML algorithms (Gossen). Using formal methods, developers can identify potential performance problems or areas for improvement in the microgrid system (Huang), which can lead to greater energy savings and lower costs for microgrid operators. Overall, using formal methods in conjunction with AI and ML can lead to a safer, more reliable, and more efficient microgrid system.
In conclusion, AI and ML have great potential for optimising the energy consumption of microgrids, but their use also requires careful consideration of safety and reliability. Formal methods can help guarantee that the AI and ML algorithms used in the microgrid system are free of errors and satisfy all requirements, leading to a safer and more efficient microgrid.
Formal methods for security aspects
Microgrids are vulnerable to various security threats, such as cyber-attacks, physical attacks, and natural disasters (Nejabatkhah). These threats can have serious consequences for the stability and reliability of the microgrid, as well as for the safety of the people and equipment involved. Formal methods can play an important role in guaranteeing microgrid security by providing a rigorous, systematic approach for analysing and verifying the security properties of the system (27; 28).
Formal methods can be used to model and analyse the security properties of a microgrid, including identifying potential security vulnerabilities and developing countermeasures to address them. They can also be used to verify the correctness of the security protocols and algorithms used in the microgrid, and to ensure that the system is resilient to attacks and can recover quickly from a security breach.
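One simple form such an analysis can take is reachability checking over an attack graph: nodes represent attacker footholds, edges represent possible escalations, and the security property is that a critical asset cannot be reached from the public network. The graph, node names, and the "patched gateway" assumption in the sketch below are all invented for illustration.

```python
from collections import deque

# Illustrative attack-graph analysis: nodes are attacker footholds, edges are
# possible escalations. The property checked here is that the breaker control
# interface is unreachable from the public network once the VPN gateway is
# patched (graph, node names and the patching assumption are all invented).

EDGES = {
    "public_network": ["vpn_gateway", "vendor_laptop"],
    "vendor_laptop": ["historian_server"],
    "vpn_gateway": ["scada_workstation"],          # only exploitable if unpatched
    "historian_server": [],
    "scada_workstation": ["breaker_control"],
    "breaker_control": [],
}

def reachable(start, edges, removed=frozenset()):
    """Standard breadth-first reachability, ignoring escalations from 'removed' nodes."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if node in removed or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(nxt)
    return seen

if __name__ == "__main__":
    exposed = reachable("public_network", EDGES)
    hardened = reachable("public_network", EDGES, removed={"vpn_gateway"})
    print("breaker reachable (unpatched):", "breaker_control" in exposed)
    print("breaker reachable (gateway patched):", "breaker_control" in hardened)
```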
Adapting formal methods to the specific security needs of microgrids requires an understanding of the unique characteristics of these systems. Microgrids are typically composed of many interconnected components, such as power sources, loads, and monitoring and control systems. These components may be owned and operated by different entities, which can make it difficult to guarantee consistent security practices across the whole system. In addition, microgrids can be subject to rapid changes in energy supply and demand, which can make it difficult to model and analyse the system accurately.
To address these challenges, stakeholders in microgrid security can adapt existing formal methods to the specific needs of microgrids. This may involve developing new modelling techniques and tools that can capture the unique characteristics of microgrids, as well as new verification and validation methods that can be applied to these models.
In summary, formal methods can play an important role in guaranteeing the security of microgrids. Adapting them to microgrids' specific needs requires an understanding of the unique characteristics of these systems and the development of new techniques and tools to handle those characteristics. Moreover, taking into account the human factors involved in using formal methods for security is crucial to ensure that microgrid security needs are understood and addressed in a coordinated and consistent manner. By tackling these issues, stakeholders can ensure that microgrids are secure, reliable, and resilient in the face of growing security threats, thereby helping to guarantee the security and stability of the power grid as a whole.
Blockchain-based microgrids: the role of formal methods
Blockchain technology and smart contracts have emerged as promising tools for the design and operation of microgrids (26; 39; 17). Blockchain provides a decentralised, secure platform for recording transactions and managing distributed systems, while smart contracts enable the automation of transactions and the execution of complex business logic (56; 10).
However, the use of blockchain technology and smart contracts in microgrids presents a number of security and reliability challenges. These stem from the distributed, open nature of blockchain systems and from the complexity of the smart contracts that govern their behaviour. Formal methods can be used to address these challenges and help ensure the security and reliability of blockchain-based microgrids.
Formal methods can be used to model and analyse the behaviour of blockchain systems and smart contracts, and to verify that they satisfy a set of security and reliability properties (Murray). They can also be used to identify potential vulnerabilities in the system and to develop countermeasures that mitigate them (Brunese).
The use of formal methods for blockchain-based microgrids is still a relatively new research area, but it holds great promise for ensuring the security and reliability of these systems (Brunese). Formal methods can be used to verify the correctness of the smart contracts that govern the behaviour of microgrid components, guaranteeing that they work as intended and do not introduce vulnerabilities into the system. They can also be used to analyse the behaviour of the microgrid as a whole, identify potential vulnerabilities, and ensure that the system is resilient to attacks.
One way formal methods can improve the security and reliability of blockchain-based microgrids is through the development of formal contracts. Formal contracts are contracts expressed in a formal language, such as the Z notation or the Alloy language. They can be used to specify the behaviour of smart contracts and other microgrid components, and can be checked with formal methods to guarantee that they satisfy a set of security and reliability properties.
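To illustrate the idea of checking contract invariants, the sketch below models a simplified energy-credit ledger and exhaustively runs every bounded sequence of trades to confirm that two invariants hold: no balance goes negative and the total number of credits is conserved. It is a Python stand-in rather than a Z or Alloy specification, and the participants, rules, and bounds are invented for the example.

```python
import itertools

# Stand-in for a formally specified energy-trading contract (not Z or Alloy; a
# Python sketch with invented rules). Invariants: no participant balance goes
# negative, and the total number of energy credits is conserved by every trade.

PARTICIPANTS = ("prosumer_a", "prosumer_b", "operator")
INITIAL_BALANCES = {"prosumer_a": 3, "prosumer_b": 2, "operator": 5}

def apply_trade(balances, seller, buyer, amount):
    """Contract logic: transfer credits only if the seller can cover the amount."""
    if seller == buyer or amount <= 0 or balances[seller] < amount:
        return balances                      # trade rejected, state unchanged
    updated = dict(balances)
    updated[seller] -= amount
    updated[buyer] += amount
    return updated

def invariant(balances):
    """Formal invariants of the contract state."""
    non_negative = all(v >= 0 for v in balances.values())
    conserved = sum(balances.values()) == sum(INITIAL_BALANCES.values())
    return non_negative and conserved

def check_all_sequences(depth=3, amounts=(1, 2, 4)):
    """Exhaustively run every trade sequence up to 'depth' and check the invariants."""
    trades = [(s, b, a) for s in PARTICIPANTS for b in PARTICIPANTS for a in amounts]
    for sequence in itertools.product(trades, repeat=depth):
        balances = dict(INITIAL_BALANCES)
        for seller, buyer, amount in sequence:
            balances = apply_trade(balances, seller, buyer, amount)
            if not invariant(balances):
                return sequence              # counterexample sequence
    return None

if __name__ == "__main__":
    cex = check_all_sequences()
    print("invariants hold for all bounded sequences" if cex is None else f"violated by {cex}")
```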
Another way formal methods can improve the security and reliability of blockchain-based microgrids is through automated verification techniques. These techniques can be used to analyse the behaviour of smart contracts and other microgrid components, identify potential vulnerabilities, and ensure that the system works as intended. They can be integrated into the microgrid's development process, guaranteeing that the system is verified and validated at every stage of development.
In summary, integrating blockchain technology and smart contracts presents both opportunities and challenges for the design and operation of microgrids. Formal methods can help ensure the security and reliability of these systems by providing a rigorous, systematic approach for analysing and verifying their behaviour: they can model and analyse blockchain systems and smart contracts and verify that they satisfy a set of security and reliability properties. The development of formal contracts and automated verification techniques can further improve the security and reliability of blockchain-based microgrids.
Challenges and open questions
The use of formal methods in the design and analysis of microgrids is an emerging research area that holds great promise for improving the security, reliability, and efficiency of these systems. However, several challenges and open questions must be addressed before the full potential of formal methods in microgrid design and analysis can be realised. These challenges and open questions include:
- Lack of standardised models and specifications: There is no widely accepted standard for modelling and specifying microgrids. This makes it difficult to compare different microgrid designs and to apply formal methods consistently across different systems.
- Dynamic nature of microgrids: Microgrids can be subject to rapid changes in energy supply and demand, as well as fluctuations in weather and environmental conditions. These changes can make it difficult to develop accurate models and specifications for the system, and to apply formal methods to analyse the system's behaviour in real time.
- Model complexity: Microgrids can be composed of different types of energy sources and can be difficult to model accurately. The interaction between these different sources can be complex and hard to capture, which makes it difficult to apply formal methods to analyse the system's behaviour.
- Computational complexity: Formal methods often require significant computational resources to analyse even small systems, which can be a significant barrier to their adoption in microgrid design and analysis. This is particularly true for model checking, which can require large amounts of memory and processing power to analyse complex systems.
- Practical considerations: The cost of developing and verifying formal models for microgrids can be significant, especially for small-scale systems. This can limit the adoption of formal methods in microgrid design and analysis, particularly for systems with limited budgets or resources.
- Verification and validation: Verifying and validating formal models can be difficult, especially for complex, dynamic systems such as microgrids. Methods and tools are needed to guarantee that formal models accurately capture the system's behaviour and can be verified and validated quickly and efficiently.
- Human factors: Using formal methods in microgrid design and analysis requires collaboration between technical experts and stakeholders such as system operators and policy-makers. Methods and tools are needed to facilitate communication and collaboration between these groups so that formal models accurately capture the needs and requirements of the system.
- Real-world implementation: Implementing formal models in real-world microgrid systems can be difficult, especially for systems that have already been designed and deployed. Methods and tools are needed to integrate formal models into existing systems and workflows, and to ensure that the models continue to reflect the system's behaviour accurately over time.
By addressing these challenges and open questions, formal methods have the potential to play an important role in the design, operation, and maintenance of microgrids, helping to guarantee their security, reliability, and efficiency in the face of growing demand for sustainable, resilient power systems.
Future directions and recommendations
As the field of microgrid design and analysis continues to evolve, formal methods are poised to play an increasingly important role in guaranteeing the security, reliability, and efficiency of these systems. However, to realise the full potential of formal methods in microgrid design and analysis, there are several future directions and recommendations that stakeholders in this field should consider:
- Develop standardised models and specifications: Developing standardised models and specifications for microgrids is crucial for the adoption and application of formal methods in this field. Stakeholders should work together to develop and promote widely accepted standards for modelling and specifying microgrids, so that formal methods can be applied across different systems.
- Address computational complexity: The computational complexity of formal methods can be a significant barrier to their adoption in microgrid design and analysis. Stakeholders should focus on developing and promoting methods and tools that reduce the computational burden of formal methods, such as model-reduction techniques and efficient model-checking algorithms.
- Integrate formal methods into existing workflows: To be effective, formal methods must be integrated into existing microgrid design and analysis workflows and systems. Stakeholders should focus on developing and promoting methods and tools that make this integration easier, as well as ways of ensuring that models keep reflecting the system's behaviour accurately over time.
- Address human factors: The successful application of formal methods in microgrid design and analysis requires collaboration between technical experts and stakeholders such as system operators and policy-makers. Stakeholders should focus on developing and promoting methods and tools that facilitate communication and collaboration between these groups, so that formal models accurately capture the needs and requirements of the system.
- Validate and verify formal models: Verification and validation of formal models are crucial to guarantee that they accurately capture the system's behaviour. Stakeholders should focus on developing and promoting methods and tools that can verify and validate formal models efficiently, as well as ways of ensuring that the models remain accurate over time.
- Develop practical solutions: The cost of developing and verifying formal models can be significant, especially for small-scale systems. Stakeholders should focus on developing practical solutions that make formal methods more accessible and affordable for microgrids, such as open-source modelling and model-checking tools and training programmes for microgrid engineers and designers.
- Include safety and cybersecurity: Safety and cybersecurity are important concerns for microgrids, particularly in the context of connectivity to other networks and to the Internet. Stakeholders should focus on including safety and cybersecurity in formal models and specifications for microgrids, and on developing and promoting security-verification tools to guarantee that microgrids are protected against cyber-attacks and other threats.
Conclusion
In this chapter, we have explored the use of formal methods to optimise the performance, security, and reliability of microgrids. Microgrids are becoming increasingly popular as a means of providing reliable, sustainable energy, but managing their complexity can be difficult. Formal methods offer a promising approach to this challenge by enabling the verification and validation of complex systems.
We have discussed the various technologies that can be combined with formal methods to improve microgrid performance and security, including cloud, fog, and IoT technologies, as well as AI and ML. By combining these technologies with formal methods, microgrid operators can optimise energy generation and consumption, prevent power outages, and ensure the safe, reliable operation of their systems.
We have also discussed the role of formal methods in addressing security issues in blockchain-based microgrids. Formal methods can help guarantee the integrity and confidentiality of transactions in these systems and prevent attacks against the microgrid infrastructure.
Despite the promise of formal methods, several challenges and open questions remain. For example, more research is needed on the scalability and efficiency of formal methods for large-scale microgrid systems, and standardised formal methods and tools usable by microgrid operators and engineers still need to be developed.
In conclusion, using formal methods for microgrid optimisation is a promising approach to the challenges of managing complex energy systems. By combining formal methods with cloud, fog, and IoT technologies, as well as AI and ML, microgrid operators can guarantee the safe and reliable operation of their systems while optimising energy generation and consumption. Formal methods can also help address security issues in blockchain-based microgrids. However, further research is needed to tackle the remaining challenges and open questions and to develop standardised formal methods and tools. Overall, the use of formal methods is an important step towards sustainable and reliable microgrid systems.